Compare commits

...

57 Commits

Author SHA1 Message Date
terence tsao
398c616dfb Reduce val index cache 2025-07-15 16:24:28 -07:00
terence tsao
96370a359c fix(execution): skip genesis block retrieval when EIP-6110 is active 2025-07-14 15:04:28 -07:00
Kaloyan Tanev
56e8881bc1 Rework DV aggregation selection proofs (#15156)
* Rework DV selection proofs

* Add changelog

* Expect 1 call to DomainData in DV update duties test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-07-14 15:33:26 +00:00
james-prysm
78f8411ad2 move validator run slot ticker (#15479)
* moving the ticker from chain start to right before the main loop, and also after the wait-for-activation edge case

* fixing unit test

* adding in a unit test

* adding in comment based on potuz's feedback
2025-07-11 19:39:52 +00:00
james-prysm
83943b5dd8 validator client: removing need to call canonical head api (#15480)
* removing need to call canonical head api

* typo

* removing unneeded tests

* fixing unit tests
2025-07-10 20:25:17 +00:00
james-prysm
bc7e4f7751 switching enable to disable for duties (#15445) 2025-07-10 18:57:36 +00:00
Manu NALEPA
02f8726e46 VerifiedROBlobFromDisk and VerifiedRODataColumnFromDisk: Add tests. (#15484) 2025-07-10 16:11:12 +00:00
terence
16b567f6af Add log capitalization analyzer and apply changes (#15452)
* Add log capitalization analyzer and apply fixes across codebase

Implements a new nogo analyzer to enforce proper log message capitalization and applies the fixes to all affected log statements throughout the beacon chain, validator, and supporting components.

Co-Authored-By: Claude <noreply@anthropic.com>

* Radek's feedback

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-10 13:43:38 +00:00
Bastin
5c1d827335 Unify LC API 1/2 (bootstrap) (#15476)
* add versiotToForkEpoch map

* Unify LC API (bootstrap)
2025-07-10 13:07:46 +00:00
Bastin
68d7df0e4f add versiotToForkEpoch map (#15482) 2025-07-10 12:57:28 +00:00
kasey
288b33750d Isolate committee cache (#15478)
* always init service through NewService

* move head state cache to service struct

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-07-09 21:43:00 +00:00
terence
f4f48d6372 Optimize BuildBlobSidecars Merkle proof computation by pre-computing subtrees (#15473)
* Optimize BuildBlobSidecars Merkle proof computation by pre-computing subtrees

Co-Authored-By: Claude <noreply@anthropic.com>

* Add change log

* Fix change log

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-07-09 16:12:14 +00:00
james-prysm
f2d57f0b5f changes for safe validator shutdown and restarts on healthcheck (#15401)
* poc changes for safe validator shutdown

* simplifying health routine and adding safe shutdown after max restarts reached

* fixing health tests

* fixing tests

* changelog

* gofmt

* fixing runner

* improve how runner times out

* improvements to ux on logs

* linting

* adding in max healthcheck flag

* changelog

* Update james-prysm_safe-validator-shutdown.md

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/service.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/runner.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing some feedback from radek

* addressing some more feedback

* fixing name based on feedback

* fixing mistake on max health checks

* conflict accidentally checked in

* go 1.23 no longer requires you to stop the ticker

* Update flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* wip no unit test for recursive healthy host find

* rework healthcheck

* gaz

* fixing bugs and improving logs with new monitor

* removing health tracker, fixing runner tests, and adding placeholder for monitor tests

* fixing event stream check

* linting

* adding in health monitor tests

* gaz

* improving test

* removing some log.fatals

* forgot to remove comment

* missed fatal removal

* doppelganger should exit the node safely now

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek review

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/health_monitor.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/validator.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek feedback

* read up on more suggestions and made fixes to the channel

* suggested updates after more reading

* reverting some of this because it froze the validator after healthcheck failed

* fully reverting

* some improvements I found during testing

* Update cmd/validator/flags/flags.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* preston's feedback

* clarifications on changelog

* converted to using an event feed instead of my own channel publishing implementation, adding relevant logs

* preston log suggestion

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-09 15:39:06 +00:00
Radosław Kapka
7025e50a6c Make the multi-value slice permanent (#15414)
* Remove Old State Paths

* Changelog

* Gazelle

* Lint

* Fix State Tests

* fix tests, update native state code

* move ErrOutOfBounds error from consensus types to mvslice

* fix TestStreamEvents_OperationsEvents

* add missing gc to fuzz tests

* more test fixes

* build fix

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
2025-07-08 17:45:20 +00:00
Radosław Kapka
961ea05454 Allow SSZ requests for pending deposits, partial withdrawals and consolidations (#15474) 2025-07-08 17:45:05 +00:00
terence
da5525096c Move data col reconstruction log to after reconstruction / save complete (#15475)
* Move data col reconstruction / save log to after they are done

* Rename to reconstructionDuration

* Rename to reconstructionAndSaveDuration
2025-07-08 14:49:31 +00:00
terence
2d2507b907 Attest timely by default (#15410)
* Attest timely by default

* Fix deprecated flag naming
2025-07-04 18:28:48 +00:00
terence
a701f07f3a Increase mainnet default builder gas limit to 45M (#15455) 2025-07-04 15:58:32 +00:00
terence
f4bbe5ca40 Add batch verifier limit (#15467) 2025-07-04 15:57:33 +00:00
Manu NALEPA
4be8de2476 PeerDAS: Implement reconstruction of data column sidecars retrieved from the execution client. (#15469)
* `BestFinalized`: No functional change. Improve comments and reduce scope.

* PeerDAS execution: Implement engine method `GetBlobsV2` and `ReconstructDataColumnSidecars`.

* Fix James' comment.

* Fix James' comment.
2025-07-03 22:35:28 +00:00
james-prysm
fac509a3e6 duties v2 fix no assignment panic (#15466)
* adding fix

* preston's suggestion
2025-07-03 15:44:51 +00:00
Manu NALEPA
b1ac8209b2 PeerDAS: Implement reconstruct. (#15454)
* PeerDAS: Implement reconstruct.

* Fix Preston's comment.

* Fix Preston's comment.
2025-07-02 19:02:55 +00:00
Bastin
74c9586c66 refactor lc kv tests (#15465)
* refactor lc kv tests

* clean up

* Update lightclient_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-07-02 13:24:32 +00:00
Bastin
f0ad3dfaeb Refactor lc bootstrap tests (#15462)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* refactor lc bootstrap tests

* changelog

* sort imports

* refactor lc bootstrap tests

* changelog

* Implement the new Fulu Metadata. (#15440)

* clean up

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-07-02 13:23:39 +00:00
Bastin
2540196747 Put the initialization of LC Store behind the enable-light-client flag (#15464)
* put lc store behind flag

* lint

* remove extra line
2025-07-02 11:09:37 +00:00
Manu NALEPA
f133751cce --chain-config-file: Do not use mainnet boot nodes any more. (#15460) 2025-07-02 09:36:31 +00:00
Bastin
bddcc158e4 Fix LC versioning bug (#15400)
* fix versioning

* changelog

* fix blockchain tests

* fix linter issue

* fix spec tests

* fix default lc update version

* fix lc header version

* gzl

* clean up the code

* Update testing/spectest/shared/common/light_client/update_ranking.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* add fulu set up in update ranking

* pass att block to createDefaultLCUpdate

* address comments

* linter

* Update lightclient.go

* sort imports

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2025-07-01 21:40:18 +00:00
Manu NALEPA
bc7664321b Implement the new Fulu Metadata. (#15440) 2025-07-01 07:07:32 +00:00
Manu NALEPA
97f416b3a7 dataColumnSidecarByRootRPCHandler: Do not rely on map iteration any more, since it does not produce a reproducible output order. (#15441) 2025-06-26 13:07:19 +00:00
Manu NALEPA
1c1e0f38bb PeerDAS: Implement beacon API blob sidecar endpoint for Fulu (#15436)
* `TestGetBlob`: Refactor (no functional change).

* `GenerateTestFuluBlockWithSidecars`: Generate commitments consistent with blobs.

This is actually not needed for this PR, but it does not hurt.

* Beacon API: Implement blob sidecars endpoint for Fulu.

* Fix Radek's comment.

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix Radek's comment.

* Update beacon-chain/rpc/lookup/blocker.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-26 10:05:22 +00:00
Manu NALEPA
121914d0d7 Implement SendDataColumnSidecarsByRangeRequest and SendDataColumnSidecarsByRootRequest. (#15430)
* Implement `SendDataColumnSidecarsByRangeRequest` and `SendDataColumnSidecarsByRootRequest`.

* Fix James's comment.

* Fix James's comment.

* Fix James' comment.

* Fix marshaller.
2025-06-24 21:40:16 +00:00
Manu NALEPA
e8625cd89d Peerdas various (#15423)
* Topic mapping: Group `const` and `var`.

Cosmetic change, no functional change.

* `TopicFromMessage`: Do not assume any more that all Fulu-specific topics are V3 only.

* Proto: Remove unused `DataColumnIdentifier` and add new `StatusV2`.

`DataColumnIdentifier` was removed in the spec here: https://github.com/ethereum/consensus-specs/pull/4284. Eventually, we stopped using it in Prysm, but never removed the corresponding proto message.

The new `StatusV2` is introduced in the spec here: https://github.com/ethereum/consensus-specs/pull/4374

* `readChunkedDataColumnSideCar` ==> `readChunkedDataColumnSidecar`.

* `rpc_send_request.go`: Reorganize file (no functional change).

* `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests.
2025-06-23 14:47:02 +00:00
terence
667aaf1564 Remove invalid from logs for blob sidecars missing parent or out of range (#15428) 2025-06-22 21:55:44 +00:00
Bastin
e020907d2a Revert "SSZ support for submitAttestationsV2 (#15422)" (#15426)
This reverts commit 9927cea35a.
2025-06-20 21:21:16 +00:00
Bastin
9927cea35a SSZ support for submitAttestationsV2 (#15422)
* ssz support for submitAttestationsV2

* changelog

* lowercase errors

* save 1 line

* Update beacon-chain/rpc/eth/beacon/handlers_pool.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* add comment

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-20 17:21:48 +00:00
Manu NALEPA
d4233471d2 PeerDAS: Implement data columns by range RPC handler. (#15421)
* `dataColumnsRPCMinValidSlot`: Add new test case.

* peerDAS: Implement `dataColumnSidecarByRangeRPCHandler`.

* `rateLimitingAmount`: Define outside of the function.

* Fix James' comment.

* Fix James' comment.
2025-06-20 15:22:50 +00:00
james-prysm
d63ae69920 adding ssz for get block endpoint (#15390)
* adding get ssz

* adding some tests

* gaz

* adding ssz to e2e

* wip ssz

* adding in additional check on header type

* remove unused

* renaming json rest handler, and adding in usage of use ssz debug flag

* fixing unit tests

* fixing tests

* gaz

* radek feedback

* Update config/features/config.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update config/features/flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update config/features/flags.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/get_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing feedback

* missing import

* another missing import

* fixing tests

* gaz

* removing unused

* gaz

* more radek feedback

* fixing context

* adding in check for non-accepted content type

* reverting to not create more edge cases

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-20 14:27:09 +00:00
terence
b9fd32dfff Remove deposit count from sync new block log (#15420)
* Remove deposit count from sync new block log

* Fix tests
2025-06-19 23:49:17 +00:00
Manu NALEPA
559d02bf4d peerDAS: Implement dataColumnSidecarByRootRPCHandler. (#15405)
* `CreateTestVerifiedRoDataColumnSidecars`: Use consistent block root.

* peerDAS: Implement `dataColumnSidecarByRootRPCHandler`.

* Fix James' comment.

* Fix James' comment.
2025-06-19 12:12:45 +00:00
Preston Van Loon
62fec4d1f3 Replace context.Background with testing.TB.Context where possible (#15416)
* Replace context.Background with testing.TB.Context where possible

* Fix failing tests
2025-06-16 22:09:18 +00:00
Preston Van Loon
6a13ba9125 remove eth1voting tool (#15415) 2025-06-16 19:53:39 +00:00
terence
2a3876427f Fix blob peer count metric (#15412) 2025-06-16 14:27:55 +00:00
Potuz
f8b88db1a4 EIP 7917 (#15129)
* Add the new Fulu state with the new field

* fix the hasher for the fulu state

* Fix ToProto() and ToProtoUnsafe()

* Add the fields as shared

* Add epoch transition code

* short circuit the proposer cache to use the state

* Marshal the state JSON

* update spectests to 1.6.0-alpha.1

* Remove deneb and electra entries from blob schedule

This was cherry-picked from PR #15364
and edited to remove the minimal cases

* Fix minimal tests

* Increase deadline for processing blocks in spectests

* Preston's review

* review

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2025-06-12 20:40:34 +00:00
kasey
d12da8cbe0 remove usages of params from proto packages (#15403)
* remove usages of params from proto packages

* make it harder to mess up the order of request limit args

* remove errant edit (Terence review)

* fix missed updates after sig change

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-06-11 21:15:10 +00:00
terence
6087875da5 beacon api: Stream event for data column sidecar (#15387) 2025-06-10 22:33:52 +00:00
Radosław Kapka
214f4a76fb Reclaim memory manually in some tests that fuzz the beacon state (#15395) 2025-06-10 21:30:51 +00:00
Radosław Kapka
f5a9394c77 Reorganize processing of light client updates (#15383)
* Reorganize processing of light client updates

* changelog <3
2025-06-09 17:08:23 +00:00
Manu NALEPA
4095da8568 PeerDAS: Implement small, unrelated changes. (#15386)
* `verifyBlobCommitmentCount`: Print max allowed blob count in error message.

* `TestPersist`: Use `fieldparams.RootLength` instead of `32`.

* `TestDataColumnSidecarsByRootReq_Marshal`: Remove blank line.

* `ConvertPeerIDToNodeID`: Improve readability by using one line per field.

* `parseIndices`: Return `[]int` instead of `[]uint64`.

Rationale: These indices are used in
`func (m *SparseMerkleTrie) MerkleProof(index int) ([][]byte, error)`,
which requires `int` and not `uint64`.
This `MerkleProof` function is used in a lot of places in the codebase.
==> Changing the signature of `parseIndices` is simpler than changing the signature of `MerkleProof`.
2025-06-06 12:20:52 +00:00
Manu NALEPA
f1288a18ec PeerDAS: Implement DAS (#15367)
* PeerDAS: Implement DAS

* Fix Terence's comment.

* Fix Terence comment.

* Fix Terence's comment.

* Fix James' comment.

* Fix James' comment.

* Rename some variable/files with blobs.

* Fix James' comment.

* Fix James' comment.

* Fix James' comment.
2025-06-06 10:06:52 +00:00
Preston Van Loon
543ebe857e V6.0.4 changelog (#15385)
* Ran unclog to build release notes for v6.0.4

* Add changelog preamble

* Changelog fragment
2025-06-06 01:37:32 +00:00
james-prysm
e569df5ebc moving web flag to feature (#15382) 2025-06-05 19:18:54 +00:00
james-prysm
8c324cc491 validator client: adding in get duties v2 (#15380)
* adding in get duties v2

* gaz

* missed definition

* removing comment

* updating description
2025-06-05 15:49:57 +00:00
Bastin
265d84569c SSZ support for Attestation APIs Part 1 (#15377)
* ssz support for validator/GetAttestationData V1

* ssz support for validator/GetAggregateAttestation V2

* changelog

* remove duplicate header
2025-06-05 13:39:52 +00:00
Alleysira
79b064a6cc fix missing meta in resp of getPeers (#15371)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-06-04 17:55:01 +00:00
Preston Van Loon
182c18a7b2 Add regression test for PR 15369 (#15379)
* Update go.uber.org/mock to v0.5.2.

* Regression test for #15369

* Changelog fragment

* Use SetActiveTestCleanup

* Remove logrus log level change
2025-06-04 17:30:31 +00:00
james-prysm
8b9c161560 Get duties v2 ( gRPC) (#15273)
* adding in proto for getdutiesv2

* adding in GetDutiesV2

* updating changelog and mock

* fixing tests

* breaking up function into smaller functions for gocognit

* reverting some changes so we can break it into a separate PR for validator client only

* reverted too much adding mock change back in

* removing reserved based on preston's feedback

* adding duties tests back in

* Update beacon-chain/rpc/prysm/v1alpha1/validator/duties.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* updating based on preston's feedback

* updating build for new files

* maybe there's some flake with how the state is used, trying with it moved

* reverting config override

* reverting all changes to get duties v1 based on preston's feedback

* Update proto/prysm/v1alpha1/validator.proto

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing partial feedback

* adding small comment

* reorganizing function based on feedback

* removing unused field

* adding some more unit tests

* adding attribute for pubkeys to span

* preston's feedback

* fixing current to next

* probably safer to register current period now

* Update beacon-chain/core/helpers/beacon_committee.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* adding in preston's comment

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-06-04 15:05:57 +00:00
Manu NALEPA
4a4532f3ba Beacon node: Instantiate custody info. (#15378) 2025-06-04 13:08:40 +00:00
838 changed files with 20832 additions and 10731 deletions

View File

@@ -194,6 +194,7 @@ nogo(
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logcapitalization:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/nop:go_default_library",

View File

@@ -4,6 +4,49 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [v6.0.4](https://github.com/prysmaticlabs/prysm/compare/v6.0.3...v6.0.4) - 2025-06-05
This release has more work on PeerDAS and light client support. Additionally, we have a few bug fixes:
- Blob cache size now correctly set at startup.
- A fix for slashing protection history exports where the validator database was in a nested folder.
- Corrected behavior of the API call for state committees with an invalid request.
- `/bin/sh` is now symlinked to `/bin/bash` for Prysm docker images.
In the [Hoodi](https://github.com/eth-clients/hoodi) testnet, the default gas limit is raised to 60M gas.
### Added
- Add light client mainnet spec test. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15295)
- Add support for light client req/resp domain. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15281)
- Added /bin/sh symlink to docker images. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15294)
- Added Prysm build data to otel tracing spans. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15302)
- Add light client minimal spec test support for `update_ranking` tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15297)
- Add fulu operation and epoch processing spec tests. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15284)
- Updated e2e Beacon API evaluator to support more endpoints, including the ones introduced in Electra. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15304)
- Data column sidecars verification methods. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15232)
- Implement data column sidecars filesystem. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15257)
- Add blob schedule support from https://github.com/ethereum/consensus-specs/pull/4277. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15272)
- Add random forkchoice spec tests for Fulu. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15287)
- Add ability to download nightly test vectors. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15312)
- PeerDAS: Validation pipeline for data column sidecars received via gossip. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15310)
- PeerDAS: Implement P2P. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15347)
- PeerDAS: Implement the blockchain package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15350)
### Changed
- Update spec tests to v1.6.0-alpha.0. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15306)
- PeerDAS: Refactor the reconstruction pipeline. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- PeerDAS: `DataColumnStorage.Get` - Exit early when no columns are available. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15309)
- Default Hoodi testnet builder gas limit to 60M. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15361)
### Fixed
- Fix cyclical dependencies issue when using testing/util package. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15248)
- Set seen blob cache size correctly based on current slot time at start up. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15348)
- Fix `slashing-protection-history export` failing when `validator.db` is in a nested folder like `data/direct/`. (#14954). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15351)
- Made `/eth/v1/beacon/states/{state_id}/committees` endpoint return `400` when slot does not belong to the specified epoch, aligning with the Beacon API spec (#15355). [[PR]](https://github.com/prysmaticlabs/prysm/pull/15356)
- Removed eager validator context cancellation that was causing validator builder registrations to fail occasionally. [[PR]](https://github.com/prysmaticlabs/prysm/pull/15369)
## [v6.0.3](https://github.com/prysmaticlabs/prysm/compare/v6.0.2...v6.0.3) - 2025-05-21
This release has important bugfixes for users of the [Beacon API](https://ethereum.github.io/beacon-APIs/). These fixes include:

View File

@@ -1,7 +1,7 @@
workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_pkg",
@@ -16,8 +16,6 @@ load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "toolchains_protoc",
sha256 = "abb1540f8a9e045422730670ebb2f25b41fa56ca5a7cf795175a110a0a68f4ad",
@@ -255,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.0"
consensus_spec_version = "v1.6.0-alpha.1"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-W7oKvoM0nAkyitykRxAw6kmCvjYC01IqiNJy0AmCnMM=",
"minimal": "sha256-ig7/zxomjv6buBWMom4IxAJh3lFJ9+JnY44E7c8ZNP8=",
"mainnet": "sha256-mjx+MkXtPhCNv4c4knLYLIkvIdpF7WTjx/ElvGPQzSo=",
"general": "sha256-o4t9p3R+fQHF4KOykGmwlG3zDw5wUdVWprkzId8aIsk=",
"minimal": "sha256-sU7ToI8t3MR8x0vVjC8ERmAHZDWpEmnAC9FWIpHi5x4=",
"mainnet": "sha256-YKS4wngg0LgI9Upp4MYJ77aG+8+e/G4YeqEIlp06LZw=",
},
version = consensus_spec_version,
)
@@ -280,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-u0RkIZIeGttb3sInR31mO64aBSwxALqO5SYIPlqEvPo=",
integrity = "sha256-Nv4TEuEJPQIM4E6T9J0FOITsmappmXZjGtlhe1HEXnU=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -1,20 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"health.go",
"interfaces.go",
"mock.go",
],
importpath = "github.com/OffchainLabs/prysm/v6/api/client/beacon/health",
visibility = ["//visibility:public"],
deps = ["@org_uber_go_mock//gomock:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["health_test.go"],
embed = [":go_default_library"],
deps = ["@org_uber_go_mock//gomock:go_default_library"],
)

View File

@@ -1,58 +0,0 @@
package health
import (
"context"
"sync"
)
type NodeHealthTracker struct {
isHealthy *bool
healthChan chan bool
node Node
sync.RWMutex
}
func NewTracker(node Node) Tracker {
return &NodeHealthTracker{
node: node,
healthChan: make(chan bool, 1),
}
}
// HealthUpdates provides a read-only channel for health updates.
func (n *NodeHealthTracker) HealthUpdates() <-chan bool {
return n.healthChan
}
func (n *NodeHealthTracker) IsHealthy(_ context.Context) bool {
n.RLock()
defer n.RUnlock()
if n.isHealthy == nil {
return false
}
return *n.isHealthy
}
func (n *NodeHealthTracker) CheckHealth(ctx context.Context) bool {
n.Lock()
defer n.Unlock()
newStatus := n.node.IsHealthy(ctx)
if n.isHealthy == nil {
n.isHealthy = &newStatus
}
isStatusChanged := newStatus != *n.isHealthy
if isStatusChanged {
// Update the health status
n.isHealthy = &newStatus
// Send the new status to the health channel, potentially overwriting the existing value
select {
case <-n.healthChan:
n.healthChan <- newStatus
default:
n.healthChan <- newStatus
}
}
return newStatus
}

View File

@@ -1,111 +0,0 @@
package health
import (
"context"
"sync"
"testing"
"go.uber.org/mock/gomock"
)
func TestNodeHealth_IsHealthy(t *testing.T) {
tests := []struct {
name string
isHealthy bool
want bool
}{
{"initially healthy", true, true},
{"initially unhealthy", false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := &NodeHealthTracker{
isHealthy: &tt.isHealthy,
healthChan: make(chan bool, 1),
}
if got := n.IsHealthy(context.Background()); got != tt.want {
t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
}
})
}
}
func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
tests := []struct {
name string
initial bool // Initial health status
newStatus bool // Status to update to
shouldSend bool // Should a message be sent through the channel
}{
{"healthy to unhealthy", true, false, true},
{"unhealthy to healthy", false, true, true},
{"remain healthy", true, true, false},
{"remain unhealthy", false, false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := NewMockHealthClient(ctrl)
client.EXPECT().IsHealthy(gomock.Any()).Return(tt.newStatus)
n := &NodeHealthTracker{
isHealthy: &tt.initial,
node: client,
healthChan: make(chan bool, 1),
}
s := n.CheckHealth(context.Background())
// Check if health status was updated
if s != tt.newStatus {
t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
}
select {
case status := <-n.HealthUpdates():
if !tt.shouldSend {
t.Errorf("UpdateNodeHealth() unexpectedly sent status %v to HealthCh", status)
} else if status != tt.newStatus {
t.Errorf("UpdateNodeHealth() sent wrong status %v, want %v", status, tt.newStatus)
}
default:
if tt.shouldSend {
t.Error("UpdateNodeHealth() did not send any status to HealthCh when expected")
}
}
})
}
}
func TestNodeHealth_Concurrency(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := NewMockHealthClient(ctrl)
n := NewTracker(client)
var wg sync.WaitGroup
// Number of goroutines to spawn for both reading and writing
numGoroutines := 6
wg.Add(numGoroutines * 2) // for readers and writers
// Concurrently update health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
n.CheckHealth(context.Background())
client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
n.CheckHealth(context.Background())
}()
}
// Concurrently read health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
_ = n.IsHealthy(context.Background()) // Just read the value
}()
}
wg.Wait() // Wait for all goroutines to finish
}

View File

@@ -1,13 +0,0 @@
package health
import "context"
type Tracker interface {
HealthUpdates() <-chan bool
CheckHealth(ctx context.Context) bool
Node
}
type Node interface {
IsHealthy(ctx context.Context) bool
}

View File

@@ -1,58 +0,0 @@
package health
import (
"context"
"reflect"
"sync"
"go.uber.org/mock/gomock"
)
var (
_ = Node(&MockHealthClient{})
)
// MockHealthClient is a mock of HealthClient interface.
type MockHealthClient struct {
ctrl *gomock.Controller
recorder *MockHealthClientMockRecorder
sync.Mutex
}
// MockHealthClientMockRecorder is the mock recorder for MockHealthClient.
type MockHealthClientMockRecorder struct {
mock *MockHealthClient
}
// IsHealthy mocks base method.
func (m *MockHealthClient) IsHealthy(arg0 context.Context) bool {
m.Lock()
defer m.Unlock()
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, ok := ret[0].(bool)
if !ok {
return false
}
return ret0
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockHealthClient) EXPECT() *MockHealthClientMockRecorder {
return m.recorder
}
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockHealthClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
mr.mock.Lock()
defer mr.mock.Unlock()
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockHealthClient)(nil).IsHealthy), arg0)
}
// NewMockHealthClient creates a new mock instance.
func NewMockHealthClient(ctrl *gomock.Controller) *MockHealthClient {
mock := &MockHealthClient{ctrl: ctrl}
mock.recorder = &MockHealthClientMockRecorder{mock}
return mock
}

View File

@@ -72,7 +72,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("builder http request")
}).Info("Builder http request")
return nil
}
t := io.TeeReader(r.Body, b)
@@ -89,7 +89,7 @@ func (*requestLogger) observe(r *http.Request) (e error) {
log.WithFields(log.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("builder http request")
}).Info("Builder http request")
return nil
}

View File

@@ -2,7 +2,6 @@ package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
@@ -31,7 +30,7 @@ func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) {
}
func TestClient_Status(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
statusPath := "/eth/v1/builder/status"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -84,7 +83,7 @@ func TestClient_Status(t *testing.T) {
}
func TestClient_RegisterValidator(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]`
expectedPath := "/eth/v1/builder/validators"
t.Run("JSON success", func(t *testing.T) {
@@ -168,7 +167,7 @@ func TestClient_RegisterValidator(t *testing.T) {
}
func TestClient_GetHeader(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot primitives.Slot = 23
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
@@ -601,7 +600,7 @@ func TestClient_GetHeader(t *testing.T) {
}
func TestSubmitBlindedBlock(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
t.Run("bellatrix", func(t *testing.T) {
hc := &http.Client{
@@ -1559,7 +1558,7 @@ func TestRequestLogger(t *testing.T) {
c, err := NewClient("localhost:3500", wo)
require.NoError(t, err)
ctx := context.Background()
ctx := t.Context()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, getStatus, r.URL.Path)

View File

@@ -1,7 +1,6 @@
package event
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
@@ -30,7 +29,7 @@ func TestNewEventStream(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewEventStream(context.Background(), &http.Client{}, tt.host, tt.topics)
_, err := NewEventStream(t.Context(), &http.Client{}, tt.host, tt.topics)
if (err != nil) != tt.wantErr {
t.Errorf("NewEventStream() error = %v, wantErr %v", err, tt.wantErr)
}
@@ -56,7 +55,7 @@ func TestEventStream(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
stream, err := NewEventStream(context.Background(), http.DefaultClient, server.URL, topics)
stream, err := NewEventStream(t.Context(), http.DefaultClient, server.URL, topics)
require.NoError(t, err)
go stream.Subscribe(eventsChannel)

View File

@@ -1,7 +1,6 @@
package grpc
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -16,7 +15,7 @@ type customErrorData struct {
func TestAppendHeaders(t *testing.T) {
t.Run("one_header", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1"})
ctx := AppendHeaders(t.Context(), []string{"first=value1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -24,7 +23,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("multiple_headers", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second=value2"})
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second=value2"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
@@ -33,7 +32,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("one_empty_header", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1", ""})
ctx := AppendHeaders(t.Context(), []string{"first=value1", ""})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -42,7 +41,7 @@ func TestAppendHeaders(t *testing.T) {
t.Run("incorrect_header", func(t *testing.T) {
logHook := logTest.NewGlobal()
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second"})
ctx := AppendHeaders(t.Context(), []string{"first=value1", "second"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
@@ -51,7 +50,7 @@ func TestAppendHeaders(t *testing.T) {
})
t.Run("header_value_with_equal_sign", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value=1"})
ctx := AppendHeaders(t.Context(), []string{"first=value=1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")

View File

@@ -1,7 +1,6 @@
package httprest
import (
"context"
"flag"
"fmt"
"net"
@@ -34,7 +33,7 @@ func TestServer_StartStop(t *testing.T) {
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
g, err := New(t.Context(), opts...)
require.NoError(t, err)
g.Start()
@@ -62,7 +61,7 @@ func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
g, err := New(t.Context(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()

View File

@@ -923,7 +923,14 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
if err != nil {
return nil, err
}
srcLookahead, err := st.ProposerLookahead()
if err != nil {
return nil, err
}
lookahead := make([]string, len(srcLookahead))
for i, v := range srcLookahead {
lookahead[i] = fmt.Sprintf("%d", uint64(v))
}
return &BeaconStateFulu{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
@@ -962,5 +969,6 @@ func BeaconStateFuluFromConsensus(st beaconState.BeaconState) (*BeaconStateFulu,
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
ProposerLookahead: lookahead,
}, nil
}

View File

@@ -25,6 +25,13 @@ type BlockGossipEvent struct {
Block string `json:"block"`
}
type DataColumnGossipEvent struct {
Slot string `json:"slot"`
Index string `json:"index"`
BlockRoot string `json:"block_root"`
KzgCommitments []string `json:"kzg_commitments"`
}
type AggregatedAttEventSource struct {
Aggregate *Attestation `json:"aggregate"`
}

View File

@@ -33,8 +33,14 @@ type GetPeerResponse struct {
Data *Peer `json:"data"`
}
// Added Meta to align with beacon-api: https://ethereum.github.io/beacon-APIs/#/Node/getPeers
type Meta struct {
Count int `json:"count"`
}
type GetPeersResponse struct {
Data []*Peer `json:"data"`
Meta Meta `json:"meta"`
}
type Peer struct {

View File

@@ -219,4 +219,5 @@ type BeaconStateFulu struct {
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
ProposerLookahead []string `json:"proposer_lookahead"`
}

View File

@@ -15,7 +15,7 @@ import (
func TestDebounce_NoEvents(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -39,7 +39,7 @@ func TestDebounce_NoEvents(t *testing.T) {
func TestDebounce_CtxClosing(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
wg := &sync.WaitGroup{}
@@ -75,7 +75,7 @@ func TestDebounce_CtxClosing(t *testing.T) {
func TestDebounce_SingleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {
@@ -93,7 +93,7 @@ func TestDebounce_SingleHandlerInvocation(t *testing.T) {
func TestDebounce_MultipleHandlerInvocation(t *testing.T) {
eventsChan := make(chan interface{}, 100)
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
interval := time.Second
timesHandled := int32(0)
go async.Debounce(ctx, interval, eventsChan, func(event interface{}) {

View File

@@ -19,10 +19,10 @@ func RunEvery(ctx context.Context, period time.Duration, f func()) {
for {
select {
case <-ticker.C:
log.WithField("function", funcName).Trace("running")
log.WithField("function", funcName).Trace("Running")
f()
case <-ctx.Done():
log.WithField("function", funcName).Debug("context is closed, exiting")
log.WithField("function", funcName).Debug("Context is closed, exiting")
ticker.Stop()
return
}

View File

@@ -10,7 +10,7 @@ import (
)
func TestEveryRuns(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
i := int32(0)
async.RunEvery(ctx, 100*time.Millisecond, func() {

View File

@@ -1,12 +1,8 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -15,82 +11,71 @@ import (
)
func TestHeadSlot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
s := testServiceWithDB(t)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
wait := make(chan struct{})
go func() {
defer close(wait)
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
s.HeadSlot()
<-wait
}
func TestHeadRoot_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{root: [32]byte{'A'}},
}
s := testServiceWithDB(t)
s.head = &head{root: [32]byte{'A'}}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})
st, _ := util.DeterministicGenesisState(t, 1)
go func() {
defer close(wait)
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
_, err = s.HeadRoot(context.Background())
_, err = s.HeadRoot(t.Context())
require.NoError(t, err)
<-wait
}
func TestHeadBlock_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
wsb, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
head: &head{block: wsb},
}
s := testServiceWithDB(t)
s.head = &head{block: wsb}
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})
st, _ := util.DeterministicGenesisState(t, 1)
go func() {
defer close(wait)
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
_, err = s.HeadBlock(context.Background())
_, err = s.HeadBlock(t.Context())
require.NoError(t, err)
<-wait
}
func TestHeadState_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
}
s := testServiceWithDB(t)
beaconDB := s.cfg.BeaconDB
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
wait := make(chan struct{})
st, _ := util.DeterministicGenesisState(t, 1)
root := bytesutil.ToBytes32(bytesutil.PadTo([]byte{'s'}, 32))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(context.Background(), root))
require.NoError(t, beaconDB.SaveState(context.Background(), st, root))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(t.Context(), root))
require.NoError(t, beaconDB.SaveState(t.Context(), st, root))
go func() {
defer close(wait)
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
_, err = s.HeadState(context.Background())
_, err = s.HeadState(t.Context())
require.NoError(t, err)
<-wait
}

View File

@@ -6,7 +6,6 @@ import (
"time"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
@@ -84,7 +83,7 @@ func prepareForkchoiceState(
func TestHeadRoot_Nil(t *testing.T) {
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
headRoot, err := c.HeadRoot(context.Background())
headRoot, err := c.HeadRoot(t.Context())
require.NoError(t, err)
assert.DeepEqual(t, params.BeaconConfig().ZeroHash[:], headRoot, "Incorrect pre chain start value")
}
@@ -137,8 +136,8 @@ func TestFinalizedBlockHash(t *testing.T) {
}
func TestUnrealizedJustifiedBlockHash(t *testing.T) {
ctx := context.Background()
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ctx := t.Context()
service := testServiceWithDB(t)
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -153,7 +152,7 @@ func TestUnrealizedJustifiedBlockHash(t *testing.T) {
}
func TestHeadSlot_CanRetrieve(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
@@ -200,10 +199,10 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{block: wsb, state: s}
received, err := c.HeadBlock(context.Background())
received, err := c.HeadBlock(t.Context())
require.NoError(t, err)
pb, err := received.Proto()
require.NoError(t, err)
@@ -213,15 +212,16 @@ func TestHeadBlock_CanRetrieve(t *testing.T) {
func TestHeadState_CanRetrieve(t *testing.T) {
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Slot: 2, GenesisValidatorsRoot: params.BeaconConfig().ZeroHash[:]})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
headState, err := c.HeadState(context.Background())
headState, err := c.HeadState(t.Context())
require.NoError(t, err)
assert.DeepEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Incorrect head state received")
}
func TestGenesisTime_CanRetrieve(t *testing.T) {
c := &Service{genesisTime: time.Unix(999, 0)}
c := testServiceNoDB(t)
c.genesisTime = time.Unix(999, 0)
wanted := time.Unix(999, 0)
assert.Equal(t, wanted, c.GenesisTime(), "Did not get wanted genesis time")
}
@@ -230,7 +230,7 @@ func TestCurrentFork_CanRetrieve(t *testing.T) {
f := &ethpb.Fork{Epoch: 999}
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Fork: f})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
if !proto.Equal(c.CurrentFork(), f) {
t.Error("Received incorrect fork version")
@@ -242,7 +242,7 @@ func TestCurrentFork_NilHeadSTate(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
}
c := &Service{}
c := testServiceNoDB(t)
if !proto.Equal(c.CurrentFork(), f) {
t.Error("Received incorrect fork version")
}
@@ -250,7 +250,7 @@ func TestCurrentFork_NilHeadSTate(t *testing.T) {
func TestGenesisValidatorsRoot_CanRetrieve(t *testing.T) {
// Should not panic if head state is nil.
c := &Service{}
c := testServiceNoDB(t)
assert.Equal(t, [32]byte{}, c.GenesisValidatorsRoot(), "Did not get correct genesis validators root")
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{GenesisValidatorsRoot: []byte{'a'}})
@@ -269,7 +269,7 @@ func TestHeadETH1Data_CanRetrieve(t *testing.T) {
d := &ethpb.Eth1Data{DepositCount: 999}
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Eth1Data: d})
require.NoError(t, err)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
if !proto.Equal(c.HeadETH1Data(), d) {
t.Error("Received incorrect eth1 data")
@@ -277,7 +277,7 @@ func TestHeadETH1Data_CanRetrieve(t *testing.T) {
}
func TestIsCanonical_Ok(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
c := setupBeaconChain(t, beaconDB)
@@ -298,22 +298,22 @@ func TestIsCanonical_Ok(t *testing.T) {
func TestService_HeadValidatorsIndices(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{}
indices, err := c.HeadValidatorsIndices(context.Background(), 0)
indices, err := c.HeadValidatorsIndices(t.Context(), 0)
require.NoError(t, err)
require.Equal(t, 0, len(indices))
c.head = &head{state: s}
indices, err = c.HeadValidatorsIndices(context.Background(), 0)
indices, err = c.HeadValidatorsIndices(t.Context(), 0)
require.NoError(t, err)
require.Equal(t, 10, len(indices))
}
func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 1)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{}
root := c.HeadGenesisValidatorsRoot()
@@ -331,8 +331,8 @@ func TestService_HeadGenesisValidatorsRoot(t *testing.T) {
// ---------- D
func TestService_ChainHeads(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ctx := t.Context()
c := testServiceWithDB(t)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -366,7 +366,7 @@ func TestService_ChainHeads(t *testing.T) {
func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
_, e := c.HeadPublicKeyToValidatorIndex([fieldparams.BLSPubkeyLength]byte{})
@@ -381,7 +381,7 @@ func TestService_HeadPublicKeyToValidatorIndex(t *testing.T) {
}
func TestService_HeadPublicKeyToValidatorIndexNil(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
c.head = nil
idx, e := c.HeadPublicKeyToValidatorIndex([fieldparams.BLSPubkeyLength]byte{})
@@ -396,10 +396,10 @@ func TestService_HeadPublicKeyToValidatorIndexNil(t *testing.T) {
func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
s, _ := util.DeterministicGenesisState(t, 10)
c := &Service{}
c := testServiceNoDB(t)
c.head = &head{state: s}
p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
require.NoError(t, err)
v, err := s.ValidatorAtIndex(0)
@@ -409,15 +409,15 @@ func TestService_HeadValidatorIndexToPublicKey(t *testing.T) {
}
func TestService_HeadValidatorIndexToPublicKeyNil(t *testing.T) {
c := &Service{}
c := testServiceNoDB(t)
c.head = nil
p, err := c.HeadValidatorIndexToPublicKey(context.Background(), 0)
p, err := c.HeadValidatorIndexToPublicKey(t.Context(), 0)
require.NoError(t, err)
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
c.head = &head{state: nil}
p, err = c.HeadValidatorIndexToPublicKey(context.Background(), 0)
p, err = c.HeadValidatorIndexToPublicKey(t.Context(), 0)
require.NoError(t, err)
require.Equal(t, [fieldparams.BLSPubkeyLength]byte{}, p)
}
@@ -428,10 +428,11 @@ func TestService_IsOptimistic(t *testing.T) {
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
ctx := context.Background()
ctx := t.Context()
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
c := testServiceWithDB(t)
c.head = &head{root: [32]byte{'b'}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -450,22 +451,24 @@ func TestService_IsOptimistic(t *testing.T) {
require.Equal(t, true, opt)
// If head is nil, for some reason, an error should be returned rather than panic.
c = &Service{}
c = testServiceNoDB(t)
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {
ctx := context.Background()
c := &Service{genesisTime: time.Now()}
ctx := t.Context()
c := testServiceNoDB(t)
c.genesisTime = time.Now()
opt, err := c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, opt)
}
func TestService_IsOptimisticForRoot(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ctx := t.Context()
c := testServiceWithDB(t)
c.head = &head{root: [32]byte{'b'}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
@@ -481,28 +484,29 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
}
func TestService_IsOptimisticForRoot_DB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ctx := t.Context()
c := testServiceWithDB(t)
c.head = &head{root: [32]byte{'b'}}
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
util.SaveBlock(t, t.Context(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
@@ -524,48 +528,48 @@ func TestService_IsOptimisticForRoot_DB(t *testing.T) {
// Before the first finalized epoch, finalized root could be zeros.
validatedCheckpoint = &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, br))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: params.BeaconConfig().ZeroHash[:], Slot: 10}))
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err = c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
}
func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ctx := t.Context()
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
util.SaveBlock(t, t.Context(), beaconDB, b)
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: br[:], Slot: 10}))
optimisticBlock := util.NewBeaconBlock()
optimisticBlock.Block.Slot = 97
optimisticRoot, err := optimisticBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, optimisticBlock)
util.SaveBlock(t, t.Context(), beaconDB, optimisticBlock)
validatedBlock := util.NewBeaconBlock()
validatedBlock.Block.Slot = 9
validatedRoot, err := validatedBlock.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, validatedBlock)
util.SaveBlock(t, t.Context(), beaconDB, validatedBlock)
validatedCheckpoint := &ethpb.Checkpoint{Root: br[:]}
require.NoError(t, beaconDB.SaveLastValidatedCheckpoint(ctx, validatedCheckpoint))
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: optimisticRoot[:], Slot: 11}))
optimistic, err := c.IsOptimisticForRoot(ctx, optimisticRoot)
require.NoError(t, err)
require.Equal(t, true, optimistic)
require.NoError(t, beaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
require.NoError(t, beaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Root: validatedRoot[:], Slot: 9}))
validated, err := c.IsOptimisticForRoot(ctx, validatedRoot)
require.NoError(t, err)
require.Equal(t, true, validated)
@@ -573,15 +577,15 @@ func TestService_IsOptimisticForRoot_DB_non_canonical(t *testing.T) {
}
func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ctx := t.Context()
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
c.head = &head{root: params.BeaconConfig().ZeroHash}
b := util.NewBeaconBlock()
b.Block.Slot = 10
br, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, context.Background(), beaconDB, b)
util.SaveBlock(t, t.Context(), beaconDB, b)
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, [32]byte{}))
_, err = c.IsOptimisticForRoot(ctx, br)
assert.NoError(t, err)
@@ -593,9 +597,9 @@ func TestService_IsOptimisticForRoot_StateSummaryRecovered(t *testing.T) {
}
func TestService_IsFinalized(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
c := &Service{cfg: &config{BeaconDB: beaconDB, ForkChoiceStore: doublylinkedtree.New()}}
ctx := t.Context()
c := testServiceWithDB(t)
beaconDB := c.cfg.BeaconDB
r1 := [32]byte{'a'}
require.NoError(t, c.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{
Root: r1,
@@ -616,7 +620,7 @@ func TestService_IsFinalized(t *testing.T) {
func Test_hashForGenesisRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
ctx := t.Context()
c := setupBeaconChain(t, beaconDB)
st, _ := util.DeterministicGenesisStateElectra(t, 10)
require.NoError(t, c.cfg.BeaconDB.SaveGenesisData(ctx, st))

View File

@@ -2,7 +2,6 @@ package blockchain
import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
"github.com/OffchainLabs/prysm/v6/time"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
@@ -17,9 +16,6 @@ var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
// a higher number of fragmented indexes are reallocated to a new separate slice for
// that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)

View File

@@ -141,7 +141,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}
if err := s.saveHead(ctx, r, b, st); err != nil {
log.WithError(err).Error("could not save head after pruning invalid blocks")
log.WithError(err).Error("Could not save head after pruning invalid blocks")
}
log.WithFields(logrus.Fields{

View File

@@ -76,14 +76,14 @@ func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fc
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("could not compute payload attributes")
log.WithError(err).Error("Could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
log.WithError(err).Error("Could not update forkchoice with payload attributes for proposal")
}
}
@@ -99,7 +99,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
}
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("could not save head")
log.WithError(err).Error("Could not save head")
}
go s.firePayloadAttributesEvent(s.cfg.StateNotifier.StateFeed(), args.headBlock, args.headRoot, s.CurrentSlot()+1)
@@ -114,7 +114,7 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuCo
func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitives.Slot) bool {
headWeight, err := s.cfg.ForkChoiceStore.Weight(newHeadRoot)
if err != nil {
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("Could not determine node weight")
}
currentSlot := s.CurrentSlot()
if proposingSlot == currentSlot {
@@ -135,7 +135,7 @@ func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitiv
secs, err := slots.SecondsSinceSlotStart(currentSlot,
uint64(s.genesisTime.Unix()), uint64(time.Now().Unix()))
if err != nil {
log.WithError(err).Error("could not compute seconds since slot start")
log.WithError(err).Error("Could not compute seconds since slot start")
}
if secs >= doublylinkedtree.ProcessAttestationsThreshold {
log.WithFields(logrus.Fields{

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"context"
"testing"
"time"
@@ -36,29 +35,29 @@ func TestService_isNewHead(t *testing.T) {
func TestService_getHeadStateAndBlock(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
_, _, err := service.getStateAndBlock(context.Background(), [32]byte{})
_, _, err := service.getStateAndBlock(t.Context(), [32]byte{})
require.ErrorContains(t, "block does not exist", err)
blk, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(&ethpb.SignedBeaconBlock{Signature: []byte{1}}))
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(context.Background(), blk))
require.NoError(t, service.cfg.BeaconDB.SaveBlock(t.Context(), blk))
st, _ := util.DeterministicGenesisState(t, 1)
r, err := blk.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), st, r))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), st, r))
gotState, err := service.cfg.BeaconDB.State(context.Background(), r)
gotState, err := service.cfg.BeaconDB.State(t.Context(), r)
require.NoError(t, err)
require.DeepEqual(t, st.ToProto(), gotState.ToProto())
gotBlk, err := service.cfg.BeaconDB.Block(context.Background(), r)
gotBlk, err := service.cfg.BeaconDB.Block(t.Context(), r)
require.NoError(t, err)
require.DeepEqual(t, blk, gotBlk)
}
func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)

View File

@@ -98,7 +98,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
oldHeadRoot := bytesutil.ToBytes32(r)
isOptimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(newHeadRoot)
if err != nil {
log.WithError(err).Error("could not check if node is optimistically synced")
log.WithError(err).Error("Could not check if node is optimistically synced")
}
if headBlock.Block().ParentRoot() != oldHeadRoot {
// A chain re-org occurred, so we fire an event notifying the rest of the services.
@@ -111,11 +111,11 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
dep := math.Max(uint64(headSlot-forkSlot), uint64(newHeadSlot-forkSlot))
oldWeight, err := s.cfg.ForkChoiceStore.Weight(oldHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", oldHeadRoot)).Warn("Could not determine node weight")
}
newWeight, err := s.cfg.ForkChoiceStore.Weight(newHeadRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("Could not determine node weight")
}
log.WithFields(logrus.Fields{
"newSlot": fmt.Sprintf("%d", newHeadSlot),

View File

@@ -18,9 +18,6 @@ import (
"github.com/pkg/errors"
)
// Initialize the state cache for sync committees.
var syncCommitteeHeadStateCache = cache.NewSyncCommitteeHeadState()
// HeadSyncCommitteeFetcher is the interface that wraps the head sync committee related functions.
// The head sync committee functions return a caller's sync committee indices and public keys with respect to the current head state.
type HeadSyncCommitteeFetcher interface {
@@ -143,7 +140,7 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
defer mLock.Unlock()
// If a head state already exists for the requested slot, we don't need to process slots.
cachedState, err := syncCommitteeHeadStateCache.Get(slot)
cachedState, err := s.syncCommitteeHeadState.Get(slot)
switch {
case err == nil:
syncHeadStateHit.Inc()
@@ -166,7 +163,7 @@ func (s *Service) getSyncCommitteeHeadState(ctx context.Context, slot primitives
return nil, err
}
syncHeadStateMiss.Inc()
err = syncCommitteeHeadStateCache.Put(slot, headState)
err = s.syncCommitteeHeadState.Put(slot, headState)
return headState, err
default:
// In the event we encounter another error
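For readability, a condensed sketch of the get-or-compute flow this hunk migrates onto the per-service cache field; `headStateForSlot` stands in for the elided computation and is a hypothetical name, while the hit/miss counters and `cache.ErrNotFound` are the identifiers shown in this diff.

// Sketch only: try the per-service cache, fall back to computing the
// head state for the slot, then populate the cache on the way out.
cachedState, err := s.syncCommitteeHeadState.Get(slot)
switch {
case err == nil:
	syncHeadStateHit.Inc()
	return cachedState, nil
case errors.Is(err, cache.ErrNotFound):
	headState, err := s.headStateForSlot(ctx, slot) // hypothetical helper
	if err != nil {
		return nil, err
	}
	syncHeadStateMiss.Inc()
	return headState, s.syncCommitteeHeadState.Put(slot, headState)
default:
	return nil, err
}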

View File

@@ -1,12 +1,10 @@
package blockchain
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
dbTest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/require"
@@ -16,35 +14,35 @@ import (
func TestService_HeadSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Current period
slot := 2*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
a, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
a, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
require.NoError(t, err)
// Current period, where the slot is 2 slots before crossing EPOCHS_PER_SYNC_COMMITTEE_PERIOD
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 2
b, err := c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
b, err := c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
require.NoError(t, err)
require.DeepEqual(t, a, b)
// Next period, where the slot is 1 slot before crossing EPOCHS_PER_SYNC_COMMITTEE_PERIOD
slot = 3*uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) - 1
b, err = c.HeadSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
b, err = c.HeadSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
require.NoError(t, err)
require.DeepNotEqual(t, a, b)
}
func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates`.
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
indices, err := c.headCurrentSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
indices, err := c.headCurrentSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
require.NoError(t, err)
// NextSyncCommittee becomes CurrentSyncCommittee so it should be empty by default.
@@ -53,12 +51,12 @@ func TestService_headCurrentSyncCommitteeIndices(t *testing.T) {
func TestService_headNextSyncCommitteeIndices(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates`.
slot := uint64(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
indices, err := c.headNextSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
indices, err := c.headNextSyncCommitteeIndices(t.Context(), 0, primitives.Slot(slot))
require.NoError(t, err)
// NextSyncCommittee should be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
@@ -67,12 +65,12 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
// Process slot up to 2 * `EpochsPerSyncCommitteePeriod` so it can run `ProcessSyncCommitteeUpdates` twice.
slot := uint64(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)*uint64(params.BeaconConfig().SlotsPerEpoch) + 1
pubkeys, err := c.HeadSyncCommitteePubKeys(context.Background(), primitives.Slot(slot), 0)
pubkeys, err := c.HeadSyncCommitteePubKeys(t.Context(), primitives.Slot(slot), 0)
require.NoError(t, err)
// Any subcommittee should match the subcommittee size.
@@ -82,13 +80,13 @@ func TestService_HeadSyncCommitteePubKeys(t *testing.T) {
func TestService_HeadSyncCommitteeDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{cfg: &config{BeaconDB: dbTest.SetupDB(t)}}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommittee, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncCommitteeDomain(context.Background(), 0)
d, err := c.HeadSyncCommitteeDomain(t.Context(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)
@@ -96,13 +94,13 @@ func TestService_HeadSyncCommitteeDomain(t *testing.T) {
func TestService_HeadSyncContributionProofDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainContributionAndProof, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncContributionProofDomain(context.Background(), 0)
d, err := c.HeadSyncContributionProofDomain(t.Context(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)
@@ -110,30 +108,27 @@ func TestService_HeadSyncContributionProofDomain(t *testing.T) {
func TestService_HeadSyncSelectionProofDomain(t *testing.T) {
s, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().TargetCommitteeSize)
c := &Service{}
c := testServiceWithDB(t)
c.head = &head{state: s}
wanted, err := signing.Domain(s.Fork(), slots.ToEpoch(s.Slot()), params.BeaconConfig().DomainSyncCommitteeSelectionProof, s.GenesisValidatorsRoot())
require.NoError(t, err)
d, err := c.HeadSyncSelectionProofDomain(context.Background(), 0)
d, err := c.HeadSyncSelectionProofDomain(t.Context(), 0)
require.NoError(t, err)
require.DeepEqual(t, wanted, d)
}
func TestSyncCommitteeHeadStateCache_RoundTrip(t *testing.T) {
c := syncCommitteeHeadStateCache
t.Cleanup(func() {
syncCommitteeHeadStateCache = cache.NewSyncCommitteeHeadState()
})
s := testServiceNoDB(t)
beaconState, _ := util.DeterministicGenesisStateAltair(t, 100)
require.NoError(t, beaconState.SetSlot(100))
cachedState, err := c.Get(101)
cachedState, err := s.syncCommitteeHeadState.Get(101)
require.ErrorContains(t, cache.ErrNotFound.Error(), err)
require.Equal(t, nil, cachedState)
require.NoError(t, c.Put(101, beaconState))
cachedState, err = c.Get(101)
require.NoError(t, s.syncCommitteeHeadState.Put(101, beaconState))
cachedState, err = s.syncCommitteeHeadState.Get(101)
require.NoError(t, err)
require.DeepEqual(t, beaconState, cachedState)
}

View File

@@ -9,7 +9,6 @@ import (
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -34,17 +33,17 @@ func TestSaveHead_Same(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, service.saveHead(context.Background(), r, b, st))
require.NoError(t, service.saveHead(t.Context(), r, b, st))
assert.Equal(t, primitives.Slot(0), service.headSlot(), "Head did not stay the same")
assert.Equal(t, r, service.headRoot(), "Head did not stay the same")
}
func TestSaveHead_Different(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -61,7 +60,7 @@ func TestSaveHead_Different(t *testing.T) {
newHeadSignedBlock.Block.Slot = 1
newHeadBlock := newHeadSignedBlock.Block
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
@@ -74,13 +73,13 @@ func TestSaveHead_Different(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
cachedRoot, err := service.HeadRoot(context.Background())
cachedRoot, err := service.HeadRoot(t.Context())
require.NoError(t, err)
assert.DeepEqual(t, cachedRoot, newRoot[:], "Head did not change")
headBlock, err := service.headBlock()
@@ -92,12 +91,12 @@ func TestSaveHead_Different(t *testing.T) {
}
func TestSaveHead_Different_Reorg(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
hook := logTest.NewGlobal()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
@@ -120,7 +119,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
newHeadSignedBlock.Block.ParentRoot = reorgChainParent[:]
newHeadBlock := newHeadSignedBlock.Block
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
@@ -129,13 +128,13 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
assert.Equal(t, primitives.Slot(1), service.HeadSlot(), "Head did not change")
cachedRoot, err := service.HeadRoot(context.Background())
cachedRoot, err := service.HeadRoot(t.Context())
require.NoError(t, err)
if !bytes.Equal(cachedRoot, newRoot[:]) {
t.Error("Head did not change")
@@ -154,20 +153,15 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
func Test_notifyNewHeadEvent(t *testing.T) {
t.Run("genesis_state_root", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
srv := &Service{
cfg: &config{
StateNotifier: notifier,
ForkChoiceStore: doublylinkedtree.New(),
},
originBlockRoot: [32]byte{1},
}
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
srv := testServiceWithDB(t)
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
require.NoError(t, srv.notifyNewHeadEvent(context.Background(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -185,18 +179,14 @@ func Test_notifyNewHeadEvent(t *testing.T) {
})
t.Run("non_genesis_values", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
notifier := &mock.MockStateNotifier{RecordEvents: true}
genesisRoot := [32]byte{1}
srv := &Service{
cfg: &config{
StateNotifier: notifier,
ForkChoiceStore: doublylinkedtree.New(),
},
originBlockRoot: genesisRoot,
}
st, blk, err := prepareForkchoiceState(context.Background(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
srv := testServiceWithDB(t)
srv.originBlockRoot = genesisRoot
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(context.Background(), st, blk))
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
epoch1Start, err := slots.EpochStart(1)
require.NoError(t, err)
epoch2Start, err := slots.EpochStart(1)
@@ -205,7 +195,7 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
err = srv.notifyNewHeadEvent(context.Background(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -225,11 +215,11 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
func TestRetrieveHead_ReadOnly(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
oldBlock := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldBlock := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, util.NewBeaconBlock())
oldRoot, err := oldBlock.Block().HashTreeRoot()
require.NoError(t, err)
service.head = &head{
@@ -243,7 +233,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
wsb := util.SaveBlock(t, t.Context(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
@@ -256,9 +246,9 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
headState, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, headState.SetSlot(1))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(context.Background(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(context.Background(), headState, newRoot))
require.NoError(t, service.saveHead(context.Background(), newRoot, wsb, headState))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(t.Context(), &ethpb.StateSummary{Slot: 1, Root: newRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(t.Context(), headState, newRoot))
require.NoError(t, service.saveHead(t.Context(), newRoot, wsb, headState))
rOnlyState, err := service.HeadStateReadOnly(ctx)
require.NoError(t, err)
@@ -267,7 +257,7 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
}
func TestSaveOrphanedAtts(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -333,7 +323,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
}
func TestSaveOrphanedAttsElectra(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -404,7 +394,7 @@ func TestSaveOrphanedOps(t *testing.T) {
config.ShardCommitteePeriod = 0
params.OverrideBeaconConfig(config)
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -481,7 +471,7 @@ func TestSaveOrphanedOps(t *testing.T) {
}
func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.cfg.BLSToExecPool = blstoexec.NewPool()
@@ -539,7 +529,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
}
func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-10*int64(1)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
@@ -604,7 +594,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
}
func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
service.genesisTime = time.Now().Add(time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch+2)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
@@ -11,7 +10,7 @@ import (
)
func TestService_getBlock(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
s := setupBeaconChain(t, beaconDB)
b1 := util.NewBeaconBlock()
@@ -42,7 +41,7 @@ func TestService_getBlock(t *testing.T) {
}
func TestService_hasBlockInInitSyncOrDB(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
s := setupBeaconChain(t, beaconDB)
b1 := util.NewBeaconBlock()

View File

@@ -26,9 +26,6 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
if len(b.Body().Attestations()) > 0 {
log = log.WithField("attestations", len(b.Body().Attestations()))
}
if len(b.Body().Deposits()) > 0 {
log = log.WithField("deposits", len(b.Body().Deposits()))
}
if len(b.Body().AttesterSlashings()) > 0 {
log = log.WithField("attesterSlashings", len(b.Body().AttesterSlashings()))
}
@@ -111,7 +108,6 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
} else {
@@ -159,7 +155,9 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}
fields["blsToExecutionChanges"] = len(changes)
if len(changes) > 0 {
fields["blsToExecutionChanges"] = len(changes)
}
}
log.WithFields(fields).Debug("Synced new payload")
return nil

View File

@@ -53,7 +53,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 deposits=1 prefix=blockchain slot=0",
want: "\"Finished applying state transition\" attestations=1 prefix=blockchain slot=0",
},
{name: "has attester slashing",
b: func() interfaces.ReadOnlyBeaconBlock {
@@ -93,7 +93,7 @@ func Test_logStateTransitionData(t *testing.T) {
require.NoError(t, err)
return wb
},
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 deposits=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
want: "\"Finished applying state transition\" attestations=1 attesterSlashings=1 prefix=blockchain proposerSlashings=1 slot=0 voluntaryExits=1",
},
{name: "has payload",
b: func() interfaces.ReadOnlyBeaconBlock { return wrappedPayloadBlk },

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"context"
"testing"
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -9,23 +8,13 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestReportEpochMetrics_BadHeadState(t *testing.T) {
s, err := util.NewBeaconState()
require.NoError(t, err)
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.SetValidators(nil))
err = reportEpochMetrics(context.Background(), s, h)
require.ErrorContains(t, "failed to initialize precompute: state has nil validator slice", err)
}
func TestReportEpochMetrics_BadAttestation(t *testing.T) {
s, err := util.NewBeaconState()
require.NoError(t, err)
h, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 0}))
err = reportEpochMetrics(context.Background(), s, h)
err = reportEpochMetrics(t.Context(), s, h)
require.ErrorContains(t, "attestation with inclusion delay of 0", err)
}
@@ -36,6 +25,6 @@ func TestReportEpochMetrics_SlashedValidatorOutOfBound(t *testing.T) {
v.Slashed = true
require.NoError(t, h.UpdateValidatorAtIndex(0, v))
require.NoError(t, h.AppendCurrentEpochAttestations(&eth.PendingAttestation{InclusionDelay: 1, Data: util.HydrateAttestationData(&eth.AttestationData{})}))
err = reportEpochMetrics(context.Background(), h, h)
err = reportEpochMetrics(t.Context(), h, h)
require.ErrorContains(t, "slot 0 out of bounds", err)
}

View File

@@ -8,9 +8,10 @@ import (
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func testServiceOptsWithDB(t *testing.T) []Option {
func testServiceOptsWithDB(t testing.TB) []Option {
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
cs := startup.NewClockSynchronizer()
@@ -31,3 +32,15 @@ func testServiceOptsNoDB() []Option {
cs := startup.NewClockSynchronizer()
return []Option{WithClockSynchronizer(cs)}
}
func testServiceNoDB(t testing.TB) *Service {
s, err := NewService(t.Context(), testServiceOptsNoDB()...)
require.NoError(t, err)
return s
}
func testServiceWithDB(t testing.TB) *Service {
s, err := NewService(t.Context(), testServiceOptsWithDB(t)...)
require.NoError(t, err)
return s
}
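For reference, a minimal test in the style of the migrations above, exercising the new helpers end to end; the test name and asserted root value are illustrative, not taken from this diff.

// Illustrative only: build a fully wired service with a DB-backed config,
// override the head as the migrated tests do, and read it back.
func TestService_headRoot_Example(t *testing.T) {
	c := testServiceWithDB(t)
	c.head = &head{root: [32]byte{'b'}}
	require.Equal(t, [32]byte{'b'}, c.headRoot())
}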

View File

@@ -33,6 +33,14 @@ func WithMaxGoroutines(x int) Option {
}
}
// WithLCStore for light client store access.
func WithLCStore() Option {
return func(s *Service) error {
s.lcStore = lightclient.NewLightClientStore(s.cfg.BeaconDB)
return nil
}
}
// WithWeakSubjectivityCheckpoint for checkpoint sync.
func WithWeakSubjectivityCheckpoint(c *ethpb.Checkpoint) Option {
return func(s *Service) error {
@@ -258,3 +266,12 @@ func WithLightClientStore(lcs *lightclient.Store) Option {
return nil
}
}
// WithStartWaitingDataColumnSidecars sets a channel that the `areDataColumnsAvailable` function sends to
// when it starts waiting for additional data columns.
func WithStartWaitingDataColumnSidecars(c chan bool) Option {
return func(s *Service) error {
s.startWaitingDataColumnSidecars = c
return nil
}
}
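A hedged sketch of composing the two options added here with the existing test helper; the buffered channel size is an assumption for illustration, and the ordering matters because WithLCStore reads s.cfg.BeaconDB when it runs.

// Sketch: append the new options after the DB-backed defaults so the
// BeaconDB is configured before WithLCStore dereferences it.
opts := testServiceOptsWithDB(t)
opts = append(opts,
	WithLCStore(),
	WithStartWaitingDataColumnSidecars(make(chan bool, 1)), // buffered so the signal send cannot block
)
s, err := NewService(t.Context(), opts...)
require.NoError(t, err)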

View File

@@ -5,6 +5,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
@@ -88,7 +89,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
if err != nil {
return err
}
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
return err
}

View File

@@ -161,7 +161,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
func TestService_GetRecentPreState(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -183,7 +183,7 @@ func TestService_GetRecentPreState(t *testing.T) {
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()
ctx := t.Context()
s, err := util.NewBeaconState()
require.NoError(t, err)
@@ -353,21 +353,21 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
}
func TestAttEpoch_MatchPrevEpoch(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)}))
}
func TestAttEpoch_MatchCurrentEpoch(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
nowTime := uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
require.NoError(t, verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Epoch: 1}))
}
func TestAttEpoch_NotMatch(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
nowTime := 2 * uint64(params.BeaconConfig().SlotsPerEpoch) * params.BeaconConfig().SecondsPerSlot
err := verifyAttTargetEpoch(ctx, 0, nowTime, &ethpb.Checkpoint{Root: make([]byte, fieldparams.RootLength)})
@@ -375,7 +375,7 @@ func TestAttEpoch_NotMatch(t *testing.T) {
}
func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -385,7 +385,7 @@ func TestVerifyBeaconBlock_NoBlock(t *testing.T) {
}
func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
@@ -402,7 +402,7 @@ func TestVerifyBeaconBlock_futureBlock(t *testing.T) {
}
func TestVerifyBeaconBlock_OK(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)

View File

@@ -72,8 +72,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
defer s.saveLightClientUpdate(cfg)
defer s.saveLightClientBootstrap(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
@@ -331,7 +329,7 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
// The latest block header is from the previous epoch
r, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
// The proposer indices cache takes the target root for the previous
@@ -341,12 +339,12 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
err = helpers.UpdateCachedCheckpointToStateRoot(st, &forkchoicetypes.Checkpoint{Epoch: e, Root: target})
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
log.WithError(err).Error("Could not update proposer index state-root map")
}
return nil
}
@@ -564,7 +562,7 @@ func (s *Service) validateMergeTransitionBlock(ctx context.Context, stateVersion
// If there is not, it will call forkchoice updated with the correct payload attributes and then cache the payload ID.
func (s *Service) runLateBlockTasks() {
if err := s.waitForSync(); err != nil {
log.WithError(err).Error("failed to wait for initial sync")
log.WithError(err).Error("Failed to wait for initial sync")
return
}
@@ -639,7 +637,11 @@ func missingDataColumnIndices(bs *filesystem.DataColumnStorage, root [fieldparam
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the sidecar notifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signedBlock interfaces.ReadOnlySignedBeaconBlock) error {
func (s *Service) isDataAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
signedBlock interfaces.ReadOnlySignedBeaconBlock,
) error {
block := signedBlock.Block()
if block == nil {
return errors.New("invalid nil beacon block")
@@ -659,7 +661,11 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signedBloc
// areDataColumnsAvailable blocks until all data columns committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
func (s *Service) areDataColumnsAvailable(ctx context.Context, root [fieldparams.RootLength]byte, block interfaces.ReadOnlyBeaconBlock) error {
func (s *Service) areDataColumnsAvailable(
ctx context.Context,
root [fieldparams.RootLength]byte,
block interfaces.ReadOnlyBeaconBlock,
) error {
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
blockSlot, currentSlot := block.Slot(), s.CurrentSlot()
blockEpoch, currentEpoch := slots.ToEpoch(blockSlot), slots.ToEpoch(currentSlot)
@@ -726,6 +732,10 @@ func (s *Service) areDataColumnsAvailable(ctx context.Context, root [fieldparams
return nil
}
if s.startWaitingDataColumnSidecars != nil {
s.startWaitingDataColumnSidecars <- true
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(block.Slot()+1, s.genesisTime)
@@ -917,10 +927,10 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
// blocks.
lastState.CopyAllTries()
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
log.WithError(err).Debug("Could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
log.WithError(err).Error("Could not update epoch boundary caches")
}
// return early if we already started building a block for the current
// head root
@@ -934,7 +944,7 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if attribute.IsEmpty() {
headBlock, err := s.headBlock()
if err != nil {
log.WithError(err).WithField("head_root", headRoot).Error("unable to retrieve head block to fire payload attributes event")
log.WithError(err).WithField("head_root", headRoot).Error("Unable to retrieve head block to fire payload attributes event")
}
// notifyForkchoiceUpdate fires the payload attribute event. But in this case, we won't
// call notifyForkchoiceUpdate, so the event is fired here.

View File

@@ -68,11 +68,11 @@ func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcu
func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("Could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("Could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
@@ -131,6 +131,12 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
}
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientUpdate(cfg); err != nil {
log.WithError(err).Error("Failed to process light client update")
}
if err := s.processLightClientBootstrap(cfg); err != nil {
log.WithError(err).Error("Failed to process light client bootstrap")
}
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
@@ -139,38 +145,33 @@ func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
}
}
// saveLightClientUpdate saves the light client update for this block
// processLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when the feature flag is enabled.
func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
func (s *Service) processLightClientUpdate(cfg *postBlockProcessConfig) error {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
return
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
}
if attestedBlock == nil || attestedBlock.IsNil() {
log.Error("Saving light client update failed: Attested block is nil")
return
return errors.New("attested block is nil")
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
return
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
}
if attestedState == nil || attestedState.IsNil() {
log.Error("Saving light client update failed: Attested state is nil")
return
return errors.New("attested state is nil")
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
} else {
log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
log.Debugf("Skipping saving light client update because finalized block is nil for root %#x", finalizedRoot)
return nil
}
return
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(
@@ -183,57 +184,52 @@ func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
finalizedBlock,
)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not create light client update")
return
return errors.Wrapf(err, "could not create light client update")
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
return
return errors.Wrapf(err, "could not get current light client update")
}
if oldUpdate == nil {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
return errors.Wrapf(err, "could not save light client update")
}
return
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
return
return errors.Wrapf(err, "could not compare light client updates")
}
if isNewUpdateBetter {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
return errors.Wrapf(err, "could not save light client update")
}
} else {
log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
log.WithField("period", period).Debug("Saved new light client update")
return nil
}
log.WithField("period", period).Debug("New light client update is not better than the current one, skipping save")
return nil
}
// saveLightClientBootstrap saves a light client bootstrap for this block
// processLightClientBootstrap saves a light client bootstrap for this block
// when the feature flag is enabled.
func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
func (s *Service) processLightClientBootstrap(cfg *postBlockProcessConfig) error {
blockRoot := cfg.roblock.Root()
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
return
return errors.Wrapf(err, "could not create light client bootstrap")
}
err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
if err := s.lcStore.SaveLightClientBootstrap(cfg.ctx, blockRoot, bootstrap); err != nil {
return errors.Wrapf(err, "could not save light client bootstrap")
}
return nil
}
func (s *Service) processLightClientFinalityUpdate(
@@ -531,7 +527,7 @@ func (s *Service) updateFinalized(ctx context.Context, cp *ethpb.Checkpoint) err
// is meant to be asynchronous and run in the background rather than being
// tied to the execution of a block.
if err := s.cfg.StateGen.MigrateToCold(s.ctx, fRoot); err != nil {
log.WithError(err).Error("could not migrate to cold")
log.WithError(err).Error("Could not migrate to cold")
}
}()
return nil
@@ -620,7 +616,7 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
// Update deposit cache.
finalizedState, err := s.cfg.StateGen.StateByRoot(ctx, fRoot)
if err != nil {
log.WithError(err).Error("could not fetch finalized state")
log.WithError(err).Error("Could not fetch finalized state")
return
}
@@ -638,7 +634,7 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
// because the Eth1 follow distance makes such long-range reorgs extremely unlikely.
eth1DepositIndex, err := mathutil.Int(finalizedState.Eth1DepositIndex())
if err != nil {
log.WithError(err).Error("could not cast eth1 deposit index")
log.WithError(err).Error("Could not cast eth1 deposit index")
return
}
// The deposit index in the state is always the index of the next deposit
@@ -647,12 +643,12 @@ func (s *Service) insertFinalizedDepositsAndPrune(ctx context.Context, fRoot [32
finalizedEth1DepIdx := eth1DepositIndex - 1
if err = s.cfg.DepositCache.InsertFinalizedDeposits(ctx, int64(finalizedEth1DepIdx), common.Hash(finalizedState.Eth1Data().BlockHash),
0 /* Setting a zero value as we have no access to block height */); err != nil {
log.WithError(err).Error("could not insert finalized deposits")
log.WithError(err).Error("Could not insert finalized deposits")
return
}
// Deposit proofs are only used during state transition and can be safely removed to save space.
if err = s.cfg.DepositCache.PruneProofs(ctx, int64(finalizedEth1DepIdx)); err != nil {
log.WithError(err).Error("could not prune deposit proofs")
log.WithError(err).Error("Could not prune deposit proofs")
}
// Prune deposits which have already been finalized; the method below prunes all pending deposits (non-inclusive) up
// to the provided eth1 deposit index.

View File

@@ -51,7 +51,7 @@ import (
)
func Test_pruneAttsFromPool_Electra(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
logHook := logTest.NewGlobal()
params.SetupTestConfigCleanup(t)
@@ -59,11 +59,8 @@ func Test_pruneAttsFromPool_Electra(t *testing.T) {
cfg.TargetCommitteeSize = 8
params.OverrideBeaconConfig(cfg)
s := Service{
cfg: &config{
AttPool: kv.NewAttCaches(),
},
}
s := testServiceNoDB(t)
s.cfg.AttPool = kv.NewAttCaches()
data := &ethpb.AttestationData{
BeaconBlockRoot: make([]byte, 32),
@@ -241,7 +238,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -284,7 +281,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(fcp2))
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// 5 nodes from the block tree 1. B0 - B3 - B4 - B6 - B8
@@ -294,7 +291,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
wantedRoots := [][]byte{roots[0], roots[3], roots[4], roots[6], roots[8]}
for i, rt := range wantedRoots {
assert.Equal(t, true, service.cfg.ForkChoiceStore.HasNode(bytesutil.ToBytes32(rt)), fmt.Sprintf("Didn't save node: %d", i))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(context.Background(), bytesutil.ToBytes32(rt)))
assert.Equal(t, true, service.cfg.BeaconDB.HasBlock(t.Context(), bytesutil.ToBytes32(rt)))
}
}
@@ -340,7 +337,7 @@ func TestFillForkChoiceMissingBlocks_FilterFinalized(t *testing.T) {
// Set finalized epoch to 2.
require.NoError(t, service.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 2, Root: r64}))
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.NoError(t, err)
// There should be 1 node: block 65
@@ -373,7 +370,7 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
require.NoError(t, err)
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
t.Context(), wsb, beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
}
@@ -451,33 +448,34 @@ func blockTree1(t *testing.T, beaconDB db.Database, genesisRoot []byte) ([][]byt
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
wsb, err := consensusblocks.NewSignedBeaconBlock(beaconBlock)
require.NoError(t, err)
if err := beaconDB.SaveBlock(context.Background(), wsb); err != nil {
if err := beaconDB.SaveBlock(t.Context(), wsb); err != nil {
return nil, err
}
if err := beaconDB.SaveState(context.Background(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
if err := beaconDB.SaveState(t.Context(), st.Copy(), bytesutil.ToBytes32(beaconBlock.Block.ParentRoot)); err != nil {
return nil, errors.Wrap(err, "could not save state")
}
}
if err := beaconDB.SaveState(context.Background(), st.Copy(), r1); err != nil {
if err := beaconDB.SaveState(t.Context(), st.Copy(), r1); err != nil {
return nil, err
}
if err := beaconDB.SaveState(context.Background(), st.Copy(), r7); err != nil {
if err := beaconDB.SaveState(t.Context(), st.Copy(), r7); err != nil {
return nil, err
}
if err := beaconDB.SaveState(context.Background(), st.Copy(), r8); err != nil {
if err := beaconDB.SaveState(t.Context(), st.Copy(), r8); err != nil {
return nil, err
}
return [][]byte{r0[:], r1[:], nil, r3[:], r4[:], r5[:], r6[:], r7[:], r8[:]}, nil
}
func TestCurrentSlot_HandlesOverflow(t *testing.T) {
svc := Service{genesisTime: prysmTime.Now().Add(1 * time.Hour)}
svc := testServiceNoDB(t)
svc.genesisTime = prysmTime.Now().Add(1 * time.Hour)
slot := svc.CurrentSlot()
require.Equal(t, primitives.Slot(0), slot, "Unexpected slot")
}
func TestAncestorByDB_CtxErr(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -510,18 +508,18 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
}
// Slots 100 to 200 are skip slots. Requesting root at 150 will yield root at 100. The last physical block.
r, err := service.Ancestor(context.Background(), r200[:], 150)
r, err := service.Ancestor(t.Context(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
}
// Slots 1 to 100 are skip slots. Requesting root at 50 will yield root at 1. The last physical block.
r, err = service.Ancestor(context.Background(), r200[:], 50)
r, err = service.Ancestor(t.Context(), r200[:], 50)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r1 {
t.Error("Did not get correct root")
@@ -529,7 +527,7 @@ func TestAncestor_HandleSkipSlot(t *testing.T) {
}
func TestAncestor_CanUseForkchoice(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -557,12 +555,12 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(t.Context(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
}
r, err := service.Ancestor(context.Background(), r200[:], 150)
r, err := service.Ancestor(t.Context(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
@@ -594,14 +592,14 @@ func TestAncestor_CanUseDB(t *testing.T) {
beaconBlock := util.NewBeaconBlock()
beaconBlock.Block.Slot = b.Block.Slot
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
util.SaveBlock(t, t.Context(), beaconDB, beaconBlock)
}
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(t.Context(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
r, err := service.Ancestor(context.Background(), r200[:], 150)
r, err := service.Ancestor(t.Context(), r200[:], 150)
require.NoError(t, err)
if bytesutil.ToBytes32(r) != r100 {
t.Error("Did not get correct root")
@@ -609,7 +607,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
}
func TestEnsureRootNotZeroHashes(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -623,7 +621,7 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
}
func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
opts := testServiceOptsNoDB()
service, err := NewService(ctx, opts...)
require.NoError(t, err)
@@ -922,7 +920,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: r[:]}))
@@ -935,7 +933,7 @@ func TestRemoveBlockAttestationsInPool(t *testing.T) {
require.NoError(t, service.cfg.AttPool.SaveAggregatedAttestations(atts))
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
service.pruneAttsFromPool(context.Background(), nil /* state not needed pre-Electra */, wsb)
service.pruneAttsFromPool(t.Context(), nil /* state not needed pre-Electra */, wsb)
require.LogsDoNotContain(t, logHook, "Could not prune attestations")
require.Equal(t, 0, service.cfg.AttPool.AggregatedAttestationCount())
}
@@ -2667,7 +2665,7 @@ func TestRollbackBlock_ContextDeadline(t *testing.T) {
require.Equal(t, true, hasState)
// Set deadlined context when processing the block
cancCtx, canc := context.WithCancel(context.Background())
cancCtx, canc := context.WithCancel(t.Context())
canc()
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
@@ -2706,7 +2704,7 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
return r
}
func TestSaveLightClientUpdate(t *testing.T) {
func TestProcessLightClientUpdate(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
@@ -2747,7 +2745,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2796,13 +2794,13 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2848,7 +2846,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -2863,7 +2861,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -2906,7 +2904,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -2954,13 +2952,13 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3006,7 +3004,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3021,7 +3019,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3064,7 +3062,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
@@ -3112,13 +3110,13 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3164,7 +3162,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(l.AttestedBlock)
require.NoError(t, err)
scb := make([]byte, 64)
@@ -3179,7 +3177,7 @@ func TestSaveLightClientUpdate(t *testing.T) {
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
require.NoError(t, s.processLightClientUpdate(cfg))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
@@ -3192,120 +3190,51 @@ func TestSaveLightClientUpdate(t *testing.T) {
reset()
}
func TestSaveLightClientBootstrap(t *testing.T) {
func TestProcessLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
defer reset()
s, tr := minimalTestService(t)
s, tr := minimalTestService(t, WithLCStore())
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Altair)
for testVersion := version.Altair; testVersion <= version.Electra; testVersion++ {
t.Run(version.String(testVersion), func(t *testing.T) {
l := util.NewTestLightClient(t, testVersion)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().VersionToForkEpochMap()[testVersion])*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
require.NoError(t, s.processLightClientBootstrap(cfg))
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
// Check that the light client bootstrap is saved
b, err := s.lcStore.LightClientBootstrap(ctx, currentBlockRoot)
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Altair)
})
t.Run("Capella", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Capella)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Capella)
})
t.Run("Deneb", func(t *testing.T) {
l := util.NewTestLightClient(t, version.Deneb)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Deneb)
})
reset()
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), testVersion)
})
}
}
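The rewritten bootstrap test above replaces one hand-rolled subtest per fork with a loop keyed on params.BeaconConfig().VersionToForkEpochMap()[testVersion]. The helper itself is not shown in this diff; a plausible shape (assumed here, including the package placement and import paths) is a plain version-to-fork-epoch lookup:

package params

import (
	"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
	"github.com/OffchainLabs/prysm/v6/runtime/version"
)

// VersionToForkEpochMap is a hypothetical reconstruction of the helper used
// by the test above; the real mapping lives on BeaconChainConfig.
func (b *BeaconChainConfig) VersionToForkEpochMap() map[int]primitives.Epoch {
	return map[int]primitives.Epoch{
		version.Phase0:    0,
		version.Altair:    b.AltairForkEpoch,
		version.Bellatrix: b.BellatrixForkEpoch,
		version.Capella:   b.CapellaForkEpoch,
		version.Deneb:     b.DenebForkEpoch,
		version.Electra:   b.ElectraForkEpoch,
	}
}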
type testIsAvailableParams struct {
@@ -3315,7 +3244,7 @@ type testIsAvailableParams struct {
}
func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.Context, context.CancelFunc, *Service, [fieldparams.RootLength]byte, interfaces.SignedBeaconBlock) {
ctx, cancel := context.WithCancel(context.Background())
ctx, cancel := context.WithCancel(t.Context())
dataColumnStorage := filesystem.NewEphemeralDataColumnStorage(t)
options := append(params.options, WithDataColumnStorage(dataColumnStorage))
@@ -3332,17 +3261,27 @@ func testIsAvailableSetup(t *testing.T, params testIsAvailableParams) (context.C
signedBeaconBlock, err := util.GenerateFullBlockFulu(genesisState, secretKeys, conf, 10 /*block slot*/)
require.NoError(t, err)
root, err := signedBeaconBlock.Block.HashTreeRoot()
block := signedBeaconBlock.Block
bodyRoot, err := block.Body.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParams, 0, len(params.columnsToSave))
root, err := block.HashTreeRoot()
require.NoError(t, err)
dataColumnsParams := make([]util.DataColumnParam, 0, len(params.columnsToSave))
for _, i := range params.columnsToSave {
dataColumnParam := util.DataColumnParams{ColumnIndex: i}
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: block.Slot,
ProposerIndex: block.ProposerIndex,
ParentRoot: block.ParentRoot,
StateRoot: block.StateRoot,
BodyRoot: bodyRoot[:],
}
dataColumnsParams = append(dataColumnsParams, dataColumnParam)
}
dataColumnParamsByBlockRoot := util.DataColumnsParamsByRoot{root: dataColumnsParams}
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParamsByBlockRoot)
_, verifiedRODataColumns := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnsParams)
err = dataColumnStorage.Save(verifiedRODataColumns)
require.NoError(t, err)
@@ -3402,38 +3341,47 @@ func TestIsDataAvailable(t *testing.T) {
})
t.Run("Fulu - some initially missing data columns (no reconstruction)", func(t *testing.T) {
startWaiting := make(chan bool)
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: []uint64{1, 17, 19, 75, 102, 117, 119}, // 119 is not needed, 42 and 87 are missing
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
var wrongRoot [fieldparams.RootLength]byte
copy(wrongRoot[:], root[:])
wrongRoot[0]++ // change the root to simulate a wrong root
_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(
t,
[]util.DataColumnParam{
{Index: 42, Slot: slot + 1}, // Needed index, but not for this slot.
})
_, verifiedSidecarsWrongRoot := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{wrongRoot: {
{ColumnIndex: 42}, // needed
}})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, []util.DataColumnParam{
{Index: 87, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
{Index: 1, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Not needed index
{Index: 42, Slot: slot, ProposerIndex: proposerIndex, ParentRoot: parentRoot[:], StateRoot: stateRoot[:], BodyRoot: bodyRoot[:]}, // Needed index
})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: {
{ColumnIndex: 87}, // needed
{ColumnIndex: 1}, // not needed
{ColumnIndex: 42}, // needed
}})
go func() {
<-startWaiting
time.AfterFunc(10*time.Millisecond, func() {
err := service.dataColumnStorage.Save(verifiedSidecarsWrongRoot)
require.NoError(t, err)
err = service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
})
}()
err := service.isDataAvailable(ctx, root, signed)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
@@ -3442,6 +3390,9 @@ func TestIsDataAvailable(t *testing.T) {
missingColumns = uint64(2)
cgc = 128
)
startWaiting := make(chan bool)
var custodyInfo peerdas.CustodyInfo
custodyInfo.TargetGroupCount.SetValidatorsCustodyRequirement(cgc)
custodyInfo.ToAdvertiseGroupCount.Set(cgc)
@@ -3454,41 +3405,61 @@ func TestIsDataAvailable(t *testing.T) {
}
testParams := testIsAvailableParams{
options: []Option{WithCustodyInfo(&custodyInfo)},
options: []Option{WithCustodyInfo(&custodyInfo), WithStartWaitingDataColumnSidecars(startWaiting)},
columnsToSave: indices,
blobKzgCommitmentsCount: 3,
}
ctx, _, service, root, signed := testIsAvailableSetup(t, testParams)
block := signed.Block()
slot := block.Slot()
proposerIndex := block.ProposerIndex()
parentRoot := block.ParentRoot()
stateRoot := block.StateRoot()
bodyRoot, err := block.Body().HashTreeRoot()
require.NoError(t, err)
dataColumnParams := make([]util.DataColumnParams, 0, missingColumns)
dataColumnParams := make([]util.DataColumnParam, 0, missingColumns)
for i := minimumColumnsCountToReconstruct - missingColumns; i < minimumColumnsCountToReconstruct; i++ {
dataColumnParam := util.DataColumnParams{ColumnIndex: i}
dataColumnParam := util.DataColumnParam{
Index: i,
Slot: slot,
ProposerIndex: proposerIndex,
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
}
dataColumnParams = append(dataColumnParams, dataColumnParam)
}
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, util.DataColumnsParamsByRoot{root: dataColumnParams})
_, verifiedSidecars := util.CreateTestVerifiedRoDataColumnSidecars(t, dataColumnParams)
go func() {
<-startWaiting
time.AfterFunc(10*time.Millisecond, func() {
err := service.dataColumnStorage.Save(verifiedSidecars)
require.NoError(t, err)
})
}()
err := service.isDataAvailable(ctx, root, signed)
err = service.isDataAvailable(ctx, root, signed)
require.NoError(t, err)
})
t.Run("Fulu - some columns are definitively missing", func(t *testing.T) {
startWaiting := make(chan bool)
params := testIsAvailableParams{
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{})},
options: []Option{WithCustodyInfo(&peerdas.CustodyInfo{}), WithStartWaitingDataColumnSidecars(startWaiting)},
blobKzgCommitmentsCount: 3,
}
ctx, cancel, service, root, signed := testIsAvailableSetup(t, params)
time.AfterFunc(10*time.Millisecond, func() {
go func() {
<-startWaiting
cancel()
})
}()
err := service.isDataAvailable(ctx, root, signed)
require.NotNil(t, err)
@@ -3605,7 +3576,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Altair
expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella
@@ -3614,7 +3585,7 @@ func TestProcessLightClientOptimisticUpdate(t *testing.T) {
expectedVersion = version.Deneb
case 5:
forkEpoch = uint64(params.BeaconConfig().ElectraForkEpoch)
expectedVersion = version.Deneb
expectedVersion = version.Electra
default:
t.Errorf("Unsupported fork version %s", version.String(testVersion))
}
@@ -3759,7 +3730,7 @@ func TestProcessLightClientFinalityUpdate(t *testing.T) {
expectedVersion = version.Altair
case 2:
forkEpoch = uint64(params.BeaconConfig().BellatrixForkEpoch)
expectedVersion = version.Altair
expectedVersion = version.Bellatrix
case 3:
forkEpoch = uint64(params.BeaconConfig().CapellaForkEpoch)
expectedVersion = version.Capella

@@ -69,7 +69,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
go func() {
_, err := s.clockWaiter.WaitForClock(s.ctx)
if err != nil {
log.WithError(err).Error("spawnProcessAttestationsRoutine failed to receive genesis data")
log.WithError(err).Error("Failed to receive genesis data")
return
}
if s.genesisTime.IsZero() {
@@ -103,7 +103,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
} else {
s.cfg.ForkChoiceStore.Lock()
if err := s.cfg.ForkChoiceStore.NewSlot(s.ctx, slotInterval.Slot); err != nil {
log.WithError(err).Error("could not process new slot")
log.WithError(err).Error("Could not process new slot")
}
s.cfg.ForkChoiceStore.Unlock()
@@ -144,7 +144,7 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
log.WithField("newHeadRoot", fmt.Sprintf("%#x", newHeadRoot)).Debug("Head changed due to attestations")
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("could not get head block")
log.WithError(err).Error("Could not get head block")
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
@@ -161,7 +161,7 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice")
log.WithError(err).Error("Could not update forkchoice")
}
}

@@ -1,7 +1,6 @@
package blockchain
import (
"context"
"testing"
"time"
@@ -32,7 +31,7 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
service.genesisTime = time.Now()
e := primitives.Epoch(slots.MaxSlotBuffer/uint64(params.BeaconConfig().SlotsPerEpoch) + 1)
_, err := service.AttestationTargetState(context.Background(), &ethpb.Checkpoint{Epoch: e})
_, err := service.AttestationTargetState(t.Context(), &ethpb.Checkpoint{Epoch: e})
require.ErrorContains(t, "exceeds max allowed value relative to the local clock", err)
}
@@ -56,11 +55,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
a.Data.Target.Root = []byte{'c'}
r33Root := r33.Root()
a.Data.BeaconBlockRoot = r33Root[:]
require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(context.Background(), a))
require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(t.Context(), a))
r32Root := r32.Root()
a.Data.Target.Root = r32Root[:]
err = service.VerifyLmdFfgConsistency(context.Background(), a)
err = service.VerifyLmdFfgConsistency(t.Context(), a)
require.NoError(t, err, "Could not verify LMD and FFG votes to be consistent")
}

@@ -183,7 +183,7 @@ func (s *Service) updateCheckpoints(
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
log.WithError(err).Error("Could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, cp.j); err != nil {

@@ -1,7 +1,6 @@
package blockchain
import (
"context"
"sync"
"testing"
"time"
@@ -28,7 +27,7 @@ import (
)
func TestService_ReceiveBlock(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
genesis, keys := util.DeterministicGenesisState(t, 64)
copiedGen := genesis.Copy()
@@ -189,7 +188,7 @@ func TestHandleDA(t *testing.T) {
require.NoError(t, err)
s, _ := minimalTestService(t)
elapsed, err := s.handleDA(context.Background(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
elapsed, err := s.handleDA(t.Context(), signedBeaconBlock, [fieldparams.RootLength]byte{}, nil)
require.NoError(t, err)
require.Equal(t, true, elapsed > 0, "Elapsed time should be greater than 0")
}
@@ -228,7 +227,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
}
func TestService_ReceiveBlockBatch(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
genesis, keys := util.DeterministicGenesisState(t, 64)
genFullBlock := func(t *testing.T, conf *util.BlockGenConfig, slot primitives.Slot) *ethpb.SignedBeaconBlock {
@@ -293,23 +292,23 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
func TestService_HasBlock(t *testing.T) {
s, _ := minimalTestService(t)
r := [32]byte{'a'}
if s.HasBlock(context.Background(), r) {
if s.HasBlock(t.Context(), r) {
t.Error("Should not have block")
}
wsb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
require.NoError(t, s.saveInitSyncBlock(context.Background(), r, wsb))
if !s.HasBlock(context.Background(), r) {
require.NoError(t, s.saveInitSyncBlock(t.Context(), r, wsb))
if !s.HasBlock(t.Context(), r) {
t.Error("Should have block")
}
b := util.NewBeaconBlock()
b.Block.Slot = 1
util.SaveBlock(t, context.Background(), s.cfg.BeaconDB, b)
util.SaveBlock(t, t.Context(), s.cfg.BeaconDB, b)
r, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, true, s.HasBlock(context.Background(), r))
require.Equal(t, true, s.HasBlock(t.Context(), r))
s.blockBeingSynced.set(r)
require.Equal(t, false, s.HasBlock(context.Background(), r))
require.Equal(t, false, s.HasBlock(t.Context(), r))
}
func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
@@ -318,7 +317,7 @@ func TestCheckSaveHotStateDB_Enabling(t *testing.T) {
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsContain(t, hook, "Entering mode to save hot states in DB")
}
@@ -329,10 +328,10 @@ func TestCheckSaveHotStateDB_Disabling(t *testing.T) {
st := params.BeaconConfig().SlotsPerEpoch.Mul(uint64(epochsSinceFinalitySaveHotStateDB))
s.genesisTime = time.Now().Add(time.Duration(-1*int64(st)*int64(params.BeaconConfig().SecondsPerSlot)) * time.Second)
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsContain(t, hook, "Exiting mode to save hot states in DB")
}
@@ -341,7 +340,7 @@ func TestCheckSaveHotStateDB_Overflow(t *testing.T) {
s, _ := minimalTestService(t)
s.genesisTime = time.Now()
require.NoError(t, s.checkSaveHotStateDB(context.Background()))
require.NoError(t, s.checkSaveHotStateDB(t.Context()))
assert.LogsDoNotContain(t, hook, "Entering mode to save hot states in DB")
}
@@ -456,7 +455,7 @@ func Test_executePostFinalizationTasks(t *testing.T) {
headState, err := util.NewBeaconStateElectra()
require.NoError(t, err)
finalizedStRoot, err := headState.HashTreeRoot(context.Background())
finalizedStRoot, err := headState.HashTreeRoot(t.Context())
require.NoError(t, err)
genesis := util.NewBeaconBlock()

@@ -47,27 +47,29 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
dataColumnStorage *filesystem.DataColumnStorage
slasherEnabled bool
lcStore *lightClient.Store
startWaitingDataColumnSidecars chan bool // for testing purposes only
syncCommitteeHeadState *cache.SyncCommitteeHeadStateCache
}
// config options for the service.
@@ -179,14 +181,15 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
seenIndex: make(map[[32]byte][]bool),
}
srv := &Service{
ctx: ctx,
cancel: cancel,
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
ctx: ctx,
cancel: cancel,
boundaryRoots: [][32]byte{},
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
syncCommitteeHeadState: cache.NewSyncCommitteeHeadState(),
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -370,7 +373,7 @@ func (s *Service) startFromExecutionChain() error {
if e.Type == statefeed.ChainStarted {
data, ok := e.Data.(*statefeed.ChainStartedData)
if !ok {
log.Error("event data is not type *statefeed.ChainStartedData")
log.Error("Event data is not type *statefeed.ChainStartedData")
return
}
log.WithField("startTime", data.StartTime).Debug("Received chain start event")
@@ -407,7 +410,7 @@ func (s *Service) onExecutionChainStart(ctx context.Context, genesisTime time.Ti
vr := bytesutil.ToBytes32(initializedState.GenesisValidatorsRoot())
if err := s.clockSetter.SetClock(startup.NewClock(genesisTime, vr)); err != nil {
log.WithError(err).Fatal("failed to initialize blockchain service from execution start event")
log.WithError(err).Fatal("Failed to initialize blockchain service from execution start event")
}
}
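The new startWaitingDataColumnSidecars field on Service is populated via the WithStartWaitingDataColumnSidecars option used in the availability tests above. Given the package's existing functional-option convention (each opt is applied as opt(srv) and may return an error), a sketch of that option could look like the following; the exact body is an assumption:

// Sketch only: test hook letting a test observe when the service begins
// waiting for data column sidecars.
func WithStartWaitingDataColumnSidecars(ch chan bool) Option {
	return func(s *Service) error {
		s.startWaitingDataColumnSidecars = ch
		return nil
	}
}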

@@ -1,11 +1,9 @@
package blockchain
import (
"context"
"io"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
@@ -18,15 +16,12 @@ func init() {
}
func TestChainService_SaveHead_DataRace(t *testing.T) {
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
s := testServiceWithDB(t)
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, err)
go func() {
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}()
require.NoError(t, s.saveHead(context.Background(), [32]byte{}, b, st))
require.NoError(t, s.saveHead(t.Context(), [32]byte{}, b, st))
}

@@ -42,7 +42,7 @@ import (
)
func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
ctx := context.Background()
ctx := t.Context()
var web3Service *execution.Service
var err error
srv, endpoint, err := mockExecution.SetupRPCServer()
@@ -115,7 +115,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
func TestChainStartStop_Initialized(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -152,7 +152,7 @@ func TestChainStartStop_Initialized(t *testing.T) {
func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -184,7 +184,7 @@ func TestChainStartStop_GenesisZeroHashes(t *testing.T) {
func TestChainService_InitializeBeaconChain(t *testing.T) {
helpers.ClearCache()
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
ctx := t.Context()
bc := setupBeaconChain(t, beaconDB)
var err error
@@ -226,7 +226,7 @@ func TestChainService_InitializeBeaconChain(t *testing.T) {
}
func TestChainService_CorrectGenesisRoots(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
chainService := setupBeaconChain(t, beaconDB)
@@ -295,7 +295,7 @@ func TestChainService_InitializeChainInfo(t *testing.T) {
require.NoError(t, err)
assert.DeepSSZEqual(t, headState.ToProtoUnsafe(), s.ToProtoUnsafe(), "Head state incorrect")
assert.Equal(t, c.HeadSlot(), headBlock.Block.Slot, "Head slot incorrect")
r, err := c.HeadRoot(context.Background())
r, err := c.HeadRoot(t.Context())
require.NoError(t, err)
if !bytes.Equal(headRoot[:], r) {
t.Error("head slot incorrect")
@@ -345,12 +345,8 @@ func TestChainService_InitializeChainInfo_SetHeadAtGenesis(t *testing.T) {
}
func TestChainService_SaveHeadNoDB(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fc := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, fc), ForkChoiceStore: fc},
}
ctx := t.Context()
s := testServiceWithDB(t)
blk := util.NewBeaconBlock()
blk.Block.Slot = 1
r, err := blk.HashTreeRoot()
@@ -370,11 +366,8 @@ func TestChainService_SaveHeadNoDB(t *testing.T) {
}
func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
ctx := t.Context()
s := testServiceWithDB(t)
b := util.NewBeaconBlock()
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
@@ -391,32 +384,26 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
beaconDB := testDB.SetupDB(t)
s := &Service{
cfg: &config{BeaconDB: beaconDB, StateGen: stategen.New(beaconDB, doublylinkedtree.New())},
ctx: ctx,
cancel: cancel,
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
}
s := testServiceWithDB(t)
s.initSyncBlocks = make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock)
bb := util.NewBeaconBlock()
r, err := bb.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
require.NoError(t, s.saveInitSyncBlock(ctx, r, wsb))
require.NoError(t, s.saveInitSyncBlock(s.ctx, r, wsb))
require.NoError(t, s.Stop())
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(ctx, r))
require.Equal(t, true, s.cfg.BeaconDB.HasBlock(s.ctx, r))
}
func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
mgs := &MockClockSetter{}
service.clockSetter = mgs
gt := time.Now()
service.onExecutionChainStart(context.Background(), gt)
service.onExecutionChainStart(t.Context(), gt)
gs, err := beaconDB.GenesisState(ctx)
require.NoError(t, err)
require.NotEqual(t, nil, gs)
@@ -428,11 +415,8 @@ func TestProcessChainStartTime_ReceivedFeed(t *testing.T) {
}
func BenchmarkHasBlockDB(b *testing.B) {
beaconDB := testDB.SetupDB(b)
ctx := context.Background()
s := &Service{
cfg: &config{BeaconDB: beaconDB},
}
ctx := b.Context()
s := testServiceWithDB(b)
blk := util.NewBeaconBlock()
wsb, err := consensusblocks.NewSignedBeaconBlock(blk)
require.NoError(b, err)
@@ -447,11 +431,8 @@ func BenchmarkHasBlockDB(b *testing.B) {
}
func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
ctx := context.Background()
beaconDB := testDB.SetupDB(b)
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
ctx := b.Context()
s := testServiceWithDB(b)
blk := util.NewBeaconBlock()
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)

@@ -38,7 +38,7 @@ func (s *Service) startupHeadRoot() [32]byte {
if headStr == "head" {
root, err := s.cfg.BeaconDB.HeadBlockRoot()
if err != nil {
log.WithError(err).Error("could not get head block root, starting with finalized block as head")
log.WithError(err).Error("Could not get head block root, starting with finalized block as head")
return fRoot
}
log.Infof("Using Head root of %#x", root)
@@ -46,7 +46,7 @@ func (s *Service) startupHeadRoot() [32]byte {
}
root, err := bytesutil.DecodeHexWithLength(headStr, 32)
if err != nil {
log.WithError(err).Error("could not parse head root, starting with finalized block as head")
log.WithError(err).Error("Could not parse head root, starting with finalized block as head")
return fRoot
}
return [32]byte(root)
@@ -64,16 +64,16 @@ func (s *Service) setupForkchoiceTree(st state.BeaconState) error {
}
blk, err := s.cfg.BeaconDB.Block(s.ctx, headRoot)
if err != nil {
log.WithError(err).Error("could not get head block, starting with finalized block as head")
log.WithError(err).Error("Could not get head block, starting with finalized block as head")
return nil
}
if slots.ToEpoch(blk.Block().Slot()) < cp.Epoch {
log.WithField("headRoot", fmt.Sprintf("%#x", headRoot)).Error("head block is older than finalized block, starting with finalized block as head")
log.WithField("headRoot", fmt.Sprintf("%#x", headRoot)).Error("Head block is older than finalized block, starting with finalized block as head")
return nil
}
chain, err := s.buildForkchoiceChain(s.ctx, blk)
if err != nil {
log.WithError(err).Error("could not build forkchoice chain, starting with finalized block as head")
log.WithError(err).Error("Could not build forkchoice chain, starting with finalized block as head")
return nil
}
s.cfg.ForkChoiceStore.Lock()

@@ -32,7 +32,7 @@ func Test_startupHeadRoot(t *testing.T) {
})
defer resetCfg()
require.Equal(t, service.startupHeadRoot(), gr)
require.LogsContain(t, hook, "could not get head block root, starting with finalized block as head")
require.LogsContain(t, hook, "Could not get head block root, starting with finalized block as head")
})
st, _ := util.DeterministicGenesisState(t, 64)

@@ -108,7 +108,7 @@ type testServiceRequirements struct {
}
func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceRequirements) {
ctx := context.Background()
ctx := t.Context()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
sg := stategen.New(beaconDB, fcs)
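Many tests in this diff also swap hand-built Service literals for testServiceWithDB/testServiceNoDB helpers, so every Service is constructed through NewService and gets fully initialized fields (caches, forkchoice, sync committee head state). The helpers themselves are not shown; a hypothetical reconstruction consistent with their call sites (option names assumed from Prysm's public options) might be:

// testServiceWithDB is a guessed reconstruction; the real helper lives in
// this package's test utilities. testing.TB carries Context() as of Go 1.24.
func testServiceWithDB(t testing.TB) *Service {
	beaconDB := testDB.SetupDB(t)
	fcs := doublylinkedtree.New()
	s, err := NewService(t.Context(),
		WithDatabase(beaconDB),
		WithStateGen(stategen.New(beaconDB, fcs)),
		WithForkChoiceStore(fcs),
	)
	require.NoError(t, err)
	return s
}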

@@ -555,11 +555,11 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, slot, bytesutil.ToBytes32(s.Root), [32]byte{}, [32]byte{}, ojc, ojc)
if err != nil {
logrus.WithError(err).Error("could not update head")
logrus.WithError(err).Error("Could not update head")
}
err = s.ForkChoiceStore.InsertNode(ctx, st, root)
if err != nil {
logrus.WithError(err).Error("could not insert node to forkchoice")
logrus.WithError(err).Error("Could not insert node to forkchoice")
}
}

@@ -1,11 +1,8 @@
package blockchain
import (
"context"
"testing"
testDB "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/types"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -18,11 +15,8 @@ import (
)
func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
beaconDB := testDB.SetupDB(t)
b := util.NewBeaconBlock()
b.Block.Slot = 1792480
util.SaveBlock(t, context.Background(), beaconDB, b)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
@@ -69,17 +63,17 @@ func TestService_VerifyWeakSubjectivityRoot(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := testServiceWithDB(t)
beaconDB := s.cfg.BeaconDB
util.SaveBlock(t, t.Context(), beaconDB, b)
wv, err := NewWeakSubjectivityVerifier(tt.checkpt, beaconDB)
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, err)
fcs := doublylinkedtree.New()
s := &Service{
cfg: &config{BeaconDB: beaconDB, WeakSubjectivityCheckpt: tt.checkpt, ForkChoiceStore: fcs},
wsVerifier: wv,
}
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
s.cfg.WeakSubjectivityCheckpt = tt.checkpt
s.wsVerifier = wv
require.Equal(t, !tt.disabled, wv.enabled)
require.NoError(t, s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: tt.finalizedEpoch}))
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
err = s.wsVerifier.VerifyWeakSubjectivity(context.Background(), cp.Epoch)
err = s.wsVerifier.VerifyWeakSubjectivity(t.Context(), cp.Epoch)
if tt.wantErr == nil {
require.NoError(t, err)
} else {

@@ -1,7 +1,6 @@
package builder
import (
"context"
"testing"
"time"
@@ -15,19 +14,19 @@ import (
)
func Test_NewServiceWithBuilder(t *testing.T) {
s, err := NewService(context.Background(), WithBuilderClient(&buildertesting.MockClient{}))
s, err := NewService(t.Context(), WithBuilderClient(&buildertesting.MockClient{}))
require.NoError(t, err)
assert.Equal(t, true, s.Configured())
}
func Test_NewServiceWithoutBuilder(t *testing.T) {
s, err := NewService(context.Background())
s, err := NewService(t.Context())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())
}
func Test_RegisterValidator(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
db := dbtesting.SetupDB(t)
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
@@ -40,7 +39,7 @@ func Test_RegisterValidator(t *testing.T) {
}
func Test_RegisterValidator_WithCache(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
headFetcher := &blockchainTesting.ChainService{}
builder := buildertesting.NewClient()
s, err := NewService(ctx, WithRegistrationCache(), WithHeadFetcher(headFetcher), WithBuilderClient(&builder))
@@ -55,16 +54,16 @@ func Test_RegisterValidator_WithCache(t *testing.T) {
}
func Test_BuilderMethodsWithouClient(t *testing.T) {
s, err := NewService(context.Background())
s, err := NewService(t.Context())
require.NoError(t, err)
assert.Equal(t, false, s.Configured())
_, err = s.GetHeader(context.Background(), 0, [32]byte{}, [48]byte{})
_, err = s.GetHeader(t.Context(), 0, [32]byte{}, [48]byte{})
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
_, _, err = s.SubmitBlindedBlock(context.Background(), nil)
_, _, err = s.SubmitBlindedBlock(t.Context(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
err = s.RegisterValidator(context.Background(), nil)
err = s.RegisterValidator(t.Context(), nil)
assert.ErrorContains(t, ErrNoBuilder.Error(), err)
}

@@ -3,7 +3,6 @@
package cache
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/testing/assert"
@@ -30,8 +29,8 @@ func TestCommitteeCache_FuzzCommitteesByEpoch(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
_, err := cache.Committee(context.Background(), 0, c.Seed, 0)
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
_, err := cache.Committee(t.Context(), 0, c.Seed, 0)
require.NoError(t, err)
}
@@ -45,9 +44,9 @@ func TestCommitteeCache_FuzzActiveIndices(t *testing.T) {
for i := 0; i < 100000; i++ {
fuzzer.Fuzz(c)
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), c))
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), c))
indices, err := cache.ActiveIndices(context.Background(), c.Seed)
indices, err := cache.ActiveIndices(t.Context(), c.Seed)
require.NoError(t, err)
assert.DeepEqual(t, c.SortedIndices, indices)
}

@@ -44,15 +44,15 @@ func TestCommitteeCache_CommitteesByEpoch(t *testing.T) {
slot := params.BeaconConfig().SlotsPerEpoch
committeeIndex := primitives.CommitteeIndex(1)
indices, err := cache.Committee(context.Background(), slot, item.Seed, committeeIndex)
indices, err := cache.Committee(t.Context(), slot, item.Seed, committeeIndex)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
wantedIndex := primitives.CommitteeIndex(0)
indices, err = cache.Committee(context.Background(), slot, item.Seed, wantedIndex)
indices, err = cache.Committee(t.Context(), slot, item.Seed, wantedIndex)
require.NoError(t, err)
start, end := startEndIndices(item, uint64(wantedIndex))
@@ -63,15 +63,15 @@ func TestCommitteeCache_ActiveIndices(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
indices, err := cache.ActiveIndices(context.Background(), item.Seed)
indices, err := cache.ActiveIndices(t.Context(), item.Seed)
require.NoError(t, err)
if indices != nil {
t.Error("Expected committee not to exist in empty cache")
}
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
indices, err = cache.ActiveIndices(context.Background(), item.Seed)
indices, err = cache.ActiveIndices(t.Context(), item.Seed)
require.NoError(t, err)
assert.DeepEqual(t, item.SortedIndices, indices)
}
@@ -80,13 +80,13 @@ func TestCommitteeCache_ActiveCount(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, len(item.SortedIndices), count)
}
@@ -100,7 +100,7 @@ func TestCommitteeCache_CanRotate(t *testing.T) {
for i := start; i < end; i++ {
s := []byte(strconv.Itoa(i))
item := &Committees{Seed: bytesutil.ToBytes32(s)}
require.NoError(t, cache.AddCommitteeShuffledList(context.Background(), item))
require.NoError(t, cache.AddCommitteeShuffledList(t.Context(), item))
}
k := cache.CommitteeCache.Keys()
@@ -130,7 +130,7 @@ func TestCommitteeCacheOutOfRange(t *testing.T) {
assert.NoError(t, err)
_ = cache.CommitteeCache.Add(key, comms)
_, err = cache.Committee(context.Background(), 0, seed, math.MaxUint64) // Overflow!
_, err = cache.Committee(t.Context(), 0, seed, math.MaxUint64) // Overflow!
require.NotNil(t, err, "Did not fail as expected")
}
@@ -138,15 +138,15 @@ func TestCommitteeCache_DoesNothingWhenCancelledContext(t *testing.T) {
cache := NewCommitteesCache()
item := &Committees{Seed: [32]byte{'A'}, SortedIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6}}
count, err := cache.ActiveIndicesCount(context.Background(), item.Seed)
count, err := cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count, "Expected active count not to exist in empty cache")
cancelled, cancel := context.WithCancel(context.Background())
cancelled, cancel := context.WithCancel(t.Context())
cancel()
require.ErrorIs(t, cache.AddCommitteeShuffledList(cancelled, item), context.Canceled)
count, err = cache.ActiveIndicesCount(context.Background(), item.Seed)
count, err = cache.ActiveIndicesCount(t.Context(), item.Seed)
require.NoError(t, err)
assert.Equal(t, 0, count)
}

@@ -2,7 +2,6 @@ package depositsnapshot
import (
"bytes"
"context"
"fmt"
"math/big"
"testing"
@@ -55,7 +54,7 @@ func TestAllDeposits_ReturnsAllDeposits(t *testing.T) {
}
dc.deposits = deposits
d := dc.AllDeposits(context.Background(), nil)
d := dc.AllDeposits(t.Context(), nil)
assert.Equal(t, len(deposits), len(d))
}
@@ -95,7 +94,7 @@ func TestAllDeposits_FiltersDepositUpToAndIncludingBlockNumber(t *testing.T) {
}
dc.deposits = deposits
d := dc.AllDeposits(context.Background(), big.NewInt(11))
d := dc.AllDeposits(t.Context(), big.NewInt(11))
assert.Equal(t, 5, len(d))
}
@@ -127,7 +126,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(13))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(13))
assert.Equal(t, 4, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -143,7 +142,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -169,7 +168,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 2, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -185,7 +184,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(7))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(7))
assert.Equal(t, 0, int(n))
require.DeepEqual(t, params.BeaconConfig().ZeroHash, root)
})
@@ -201,7 +200,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
DepositRoot: wantedRoot,
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(10))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(10))
assert.Equal(t, 1, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -237,7 +236,7 @@ func TestDepositsNumberAndRootAtHeight(t *testing.T) {
Deposit: &ethpb.Deposit{},
},
}
n, root := dc.DepositsNumberAndRootAtHeight(context.Background(), big.NewInt(9))
n, root := dc.DepositsNumberAndRootAtHeight(t.Context(), big.NewInt(9))
assert.Equal(t, 3, int(n))
require.DeepEqual(t, wantedRoot, root[:])
})
@@ -288,10 +287,10 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
},
},
}
dc.InsertDepositContainers(context.Background(), ctrs)
dc.InsertDepositContainers(t.Context(), ctrs)
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, blkNum := dc.DepositByPubkey(context.Background(), pk1)
dep, blkNum := dc.DepositByPubkey(t.Context(), pk1)
if dep == nil || !bytes.Equal(dep.Data.PublicKey, pk1) {
t.Error("Returned wrong deposit")
@@ -303,7 +302,7 @@ func TestDepositByPubkey_ReturnsFirstMatchingDeposit(t *testing.T) {
func TestInsertDepositContainers_NotNil(t *testing.T) {
dc, err := New()
require.NoError(t, err)
dc.InsertDepositContainers(context.Background(), nil)
dc.InsertDepositContainers(t.Context(), nil)
assert.DeepEqual(t, []*ethpb.DepositContainer{}, dc.deposits)
}
@@ -359,10 +358,10 @@ func TestFinalizedDeposits_DepositsCachedCorrectly(t *testing.T) {
err = dc.finalizedDeposits.depositTree.pushLeaf(root)
require.NoError(t, err)
}
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -425,15 +424,15 @@ func TestFinalizedDeposits_UtilizesPreviouslyCachedDeposits(t *testing.T) {
err = dc.finalizedDeposits.Deposits().Insert(root[:], 0)
require.NoError(t, err)
}
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
dc.deposits = append(dc.deposits, []*ethpb.DepositContainer{newFinalizedDeposit}...)
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
require.Equal(t, int64(1), cachedDeposits.MerkleTrieIndex())
@@ -459,10 +458,10 @@ func TestFinalizedDeposits_HandleZeroDeposits(t *testing.T) {
dc, err := New()
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(-1), cachedDeposits.MerkleTrieIndex())
@@ -509,10 +508,10 @@ func TestFinalizedDeposits_HandleSmallerThanExpectedDeposits(t *testing.T) {
}
dc.deposits = finalizedDeposits
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(2), cachedDeposits.MerkleTrieIndex())
@@ -592,14 +591,14 @@ func TestFinalizedDeposits_HandleLowerEth1DepositIndex(t *testing.T) {
}
dc.deposits = finalizedDeposits
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)
// Reinsert finalized deposits with a lower index.
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
cachedDeposits, err := dc.FinalizedDeposits(context.Background())
cachedDeposits, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
require.NotNil(t, cachedDeposits, "Deposits not cached")
assert.Equal(t, int64(5), cachedDeposits.MerkleTrieIndex())
@@ -670,10 +669,10 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits(t *testing.T) {
Index: 3,
DepositRoot: rootCreator('D'),
})
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), 1, nil)
deps := dc.NonFinalizedDeposits(t.Context(), 1, nil)
assert.Equal(t, 2, len(deps))
}
@@ -681,7 +680,7 @@ func TestNonFinalizedDeposits_ReturnsAllNonFinalizedDeposits_Nil(t *testing.T) {
dc, err := New()
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), 0, nil)
deps := dc.NonFinalizedDeposits(t.Context(), 0, nil)
assert.Equal(t, 0, len(deps))
}
@@ -740,10 +739,10 @@ func TestNonFinalizedDeposits_ReturnsNonFinalizedDepositsUpToBlockNumber(t *test
Index: 3,
DepositRoot: rootCreator('D'),
})
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), 1, big.NewInt(10))
deps := dc.NonFinalizedDeposits(t.Context(), 1, big.NewInt(10))
assert.Equal(t, 1, len(deps))
}
@@ -785,21 +784,21 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
assert.NoError(t, err)
// Perform this in a nonsensical ordering
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 2, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 3, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 3, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 4, [32]byte{}, 0)
require.NoError(t, err)
// Mimic finalized deposit trie fetch.
fd, err := dc.FinalizedDeposits(context.Background())
fd, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
insertIndex := fd.MerkleTrieIndex() + 1
for _, dep := range deps {
@@ -810,24 +809,24 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
}
insertIndex++
}
err = dc.InsertFinalizedDeposits(context.Background(), 5, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 5, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 6, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 6, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 9, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 9, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 12, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 12, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 15, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 15, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 14, [32]byte{}, 0)
err = dc.InsertFinalizedDeposits(t.Context(), 14, [32]byte{}, 0)
require.NoError(t, err)
fd, err = dc.FinalizedDeposits(context.Background())
fd, err = dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps = dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)
deps = dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), nil)
insertIndex = fd.MerkleTrieIndex() + 1
for _, dep := range dc.deposits {
@@ -888,9 +887,9 @@ func TestMin(t *testing.T) {
}
dc.deposits = finalizedDeposits
fd, err := dc.FinalizedDeposits(context.Background())
fd, err := dc.FinalizedDeposits(t.Context())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), big.NewInt(16))
deps := dc.NonFinalizedDeposits(t.Context(), fd.MerkleTrieIndex(), big.NewInt(16))
insertIndex := fd.MerkleTrieIndex() + 1
for _, dep := range deps {
depHash, err := dep.Data.HashTreeRoot()
@@ -908,28 +907,28 @@ func TestDepositMap_WorksCorrectly(t *testing.T) {
require.NoError(t, err)
pk0 := bytesutil.PadTo([]byte("pk0"), 48)
dep, _ := dc.DepositByPubkey(context.Background(), pk0)
dep, _ := dc.DepositByPubkey(t.Context(), pk0)
var nilDep *ethpb.Deposit
assert.DeepEqual(t, nilDep, dep)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 1000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 0, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 0, [32]byte{}))
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
dep = &ethpb.Deposit{Proof: makeDepositProof(), Data: &ethpb.Deposit_Data{PublicKey: pk0, Amount: 10000}}
assert.NoError(t, dc.InsertDeposit(context.Background(), dep, 1000, 1, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), dep, 1000, 1, [32]byte{}))
// Make sure we have the same deposit returned over here.
dep, _ = dc.DepositByPubkey(context.Background(), pk0)
dep, _ = dc.DepositByPubkey(t.Context(), pk0)
assert.NotEqual(t, nilDep, dep)
assert.Equal(t, uint64(1000), dep.Data.Amount)
// Make sure another key doesn't work.
pk1 := bytesutil.PadTo([]byte("pk1"), 48)
dep, _ = dc.DepositByPubkey(context.Background(), pk1)
dep, _ = dc.DepositByPubkey(t.Context(), pk1)
assert.DeepEqual(t, nilDep, dep)
}
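
The change running through these test files swaps context.Background() for t.Context(), which was added in Go 1.24: the returned context is canceled just before the test's Cleanup functions run, so context-aware work started by a test cannot silently outlive it. A minimal sketch of the pattern, using a stand-in function rather than the deposit cache API:

package example_test

import (
	"context"
	"testing"
)

// fetch stands in for any context-aware call under test.
func fetch(ctx context.Context) error {
	return ctx.Err()
}

func TestUsesTestContext(t *testing.T) {
	// t.Context() (Go 1.24+) is scoped to this test: it is canceled
	// right before Cleanup functions run, so goroutines holding it
	// cannot leak past the test's lifetime.
	if err := fetch(t.Context()); err != nil {
		t.Fatalf("fetch failed: %v", err)
	}
}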

View File

@@ -1,7 +1,6 @@
package depositsnapshot
import (
"context"
"math/big"
"testing"
@@ -13,14 +12,14 @@ var _ PendingDepositsFetcher = (*Cache)(nil)
func TestInsertPendingDeposit_OK(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(context.Background(), &ethpb.Deposit{}, 111, 100, [32]byte{})
dc.InsertPendingDeposit(t.Context(), &ethpb.Deposit{}, 111, 100, [32]byte{})
assert.Equal(t, 1, len(dc.pendingDeposits), "deposit not inserted")
}
func TestInsertPendingDeposit_ignoresNilDeposit(t *testing.T) {
dc := Cache{}
dc.InsertPendingDeposit(context.Background(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
dc.InsertPendingDeposit(t.Context(), nil /*deposit*/, 0 /*blockNum*/, 0, [32]byte{})
assert.Equal(t, 0, len(dc.pendingDeposits))
}
@@ -34,13 +33,13 @@ func TestPendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 6, Deposit: &ethpb.Deposit{Proof: [][]byte{[]byte("c")}}},
}
deposits := dc.PendingDeposits(context.Background(), big.NewInt(4))
deposits := dc.PendingDeposits(t.Context(), big.NewInt(4))
expected := []*ethpb.Deposit{
{Proof: [][]byte{[]byte("A")}},
{Proof: [][]byte{[]byte("B")}},
}
assert.DeepSSZEqual(t, expected, deposits)
all := dc.PendingDeposits(context.Background(), nil)
all := dc.PendingDeposits(t.Context(), nil)
assert.Equal(t, len(dc.pendingDeposits), len(all), "PendingDeposits(ctx, nil) did not return all deposits")
}

View File

@@ -1,7 +1,6 @@
package depositsnapshot
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
@@ -22,7 +21,7 @@ func TestPrunePendingDeposits_ZeroMerkleIndex(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 0)
dc.PrunePendingDeposits(t.Context(), 0)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 2, Index: 2},
{Eth1BlockHeight: 4, Index: 4},
@@ -46,7 +45,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 6)
dc.PrunePendingDeposits(t.Context(), 6)
expected := []*ethpb.DepositContainer{
{Eth1BlockHeight: 6, Index: 6},
{Eth1BlockHeight: 8, Index: 8},
@@ -65,7 +64,7 @@ func TestPrunePendingDeposits_OK(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PrunePendingDeposits(context.Background(), 10)
dc.PrunePendingDeposits(t.Context(), 10)
expected = []*ethpb.DepositContainer{
{Eth1BlockHeight: 10, Index: 10},
{Eth1BlockHeight: 12, Index: 12},
@@ -86,7 +85,7 @@ func TestPruneAllPendingDeposits(t *testing.T) {
{Eth1BlockHeight: 12, Index: 12},
}
dc.PruneAllPendingDeposits(context.Background())
dc.PruneAllPendingDeposits(t.Context())
expected := []*ethpb.DepositContainer{}
assert.DeepEqual(t, expected, dc.pendingDeposits)
@@ -128,10 +127,10 @@ func TestPruneProofs_Ok(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 1))
require.NoError(t, dc.PruneProofs(t.Context(), 1))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -173,10 +172,10 @@ func TestPruneProofs_SomeAlreadyPruned(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 2))
require.NoError(t, dc.PruneProofs(t.Context(), 2))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[2].Deposit.Proof)
}
@@ -217,10 +216,10 @@ func TestPruneProofs_PruneAllWhenDepositIndexTooBig(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 99))
require.NoError(t, dc.PruneProofs(t.Context(), 99))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -264,10 +263,10 @@ func TestPruneProofs_CorrectlyHandleLastIndex(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
require.NoError(t, dc.PruneProofs(context.Background(), 4))
require.NoError(t, dc.PruneProofs(t.Context(), 4))
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
@@ -311,10 +310,10 @@ func TestPruneAllProofs(t *testing.T) {
}
for _, ins := range deposits {
assert.NoError(t, dc.InsertDeposit(context.Background(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
assert.NoError(t, dc.InsertDeposit(t.Context(), ins.deposit, ins.blkNum, ins.index, [32]byte{}))
}
dc.PruneAllProofs(context.Background())
dc.PruneAllProofs(t.Context())
assert.DeepEqual(t, [][]byte(nil), dc.deposits[0].Deposit.Proof)
assert.DeepEqual(t, [][]byte(nil), dc.deposits[1].Deposit.Proof)
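
The PruneProofs tests above pin down the cache's trimming behavior as these fixtures suggest it: pruning at index N nils out the Merkle proof of every cached deposit up to and including N, clamps when N exceeds the cache length, and appears to leave later proofs alone, reclaiming memory once proofs have been consumed. A sketch of that invariant, assuming a cache dc already populated with deposits 0..3 as in the fixtures above:

require.NoError(t, dc.PruneProofs(t.Context(), 1))
for i, ctr := range dc.deposits {
	if i <= 1 {
		// Proofs at and below the pruned index are dropped.
		assert.DeepEqual(t, [][]byte(nil), ctr.Deposit.Proof)
	} else {
		// Later deposits keep their proofs (inferred, not shown in the hunks).
		require.NotNil(t, ctr.Deposit.Proof)
	}
}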

View File

@@ -1,7 +1,6 @@
package cache
import (
"context"
"testing"
"time"
@@ -24,7 +23,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
cache.UpdateIndexToRegisteredMap(t.Context(), m)
reg, err := cache.RegistrationByIndex(validatorIndex)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))
@@ -38,7 +37,7 @@ func TestRegistrationCache(t *testing.T) {
Timestamp: uint64(time.Now().Unix()),
Pubkey: pubkey,
}
cache.UpdateIndexToRegisteredMap(context.Background(), m)
cache.UpdateIndexToRegisteredMap(t.Context(), m)
reg, err := cache.RegistrationByIndex(validatorIndex2)
require.NoError(t, err)
require.Equal(t, string(reg.Pubkey), string(pubkey))

View File

@@ -1,7 +1,6 @@
package cache_test
import (
"context"
"sync"
"testing"
@@ -14,7 +13,7 @@ import (
)
func TestSkipSlotCache_RoundTrip(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
c := cache.NewSkipSlotCache()
r := [32]byte{'a'}
@@ -38,7 +37,7 @@ func TestSkipSlotCache_RoundTrip(t *testing.T) {
}
func TestSkipSlotCache_DisabledAndEnabled(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
c := cache.NewSkipSlotCache()
r := [32]byte{'a'}

View File

@@ -178,7 +178,7 @@ func (s *SyncCommitteeCache) UpdatePositionsInCommittee(syncCommitteeBoundaryRoo
s.lock.Lock()
defer s.lock.Unlock()
if clearCount != s.cleared.Load() {
log.Warn("cache rotated during async committee update operation - abandoning cache update")
log.Warn("Cache rotated during async committee update operation - abandoning cache update")
return nil
}
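
The one-word diff above applies a repository-wide convention: log messages start with an upper-case letter. An illustrative check for the rule (a hypothetical helper, not the linter's actual implementation):

package logcheck // hypothetical, for illustration only

import "unicode"

// startsCapitalized reports whether a log message begins with an
// upper-case letter, which is the convention being enforced here.
func startsCapitalized(msg string) bool {
	for _, r := range msg {
		return unicode.IsUpper(r)
	}
	return false
}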

View File

@@ -81,6 +81,7 @@ go_test(
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"fmt"
"testing"
@@ -19,9 +18,10 @@ import (
"github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1/attestation"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/go-bitfield"
)
@@ -50,7 +50,7 @@ func TestProcessAttestations_InclusionDelayFailure(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -81,7 +81,7 @@ func TestProcessAttestations_NeitherCurrentNorPrevEpoch(t *testing.T) {
)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -110,13 +110,13 @@ func TestProcessAttestations_CurrentEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to current justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.CurrentEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -151,14 +151,14 @@ func TestProcessAttestations_PrevEpochFFGDataMismatches(t *testing.T) {
want := "source check point not equal to previous justified checkpoint"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
b.Block.Body.Attestations[0].Data.Source.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Target.Epoch = time.PrevEpoch(beaconState)
b.Block.Body.Attestations[0].Data.Source.Root = []byte{}
wsb, err = blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, want, err)
}
@@ -190,7 +190,7 @@ func TestProcessAttestations_InvalidAggregationBitsLength(t *testing.T) {
expected := "failed to verify aggregation bitfield: wanted participants bitfield length 3, got: 4"
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.ErrorContains(t, expected, err)
}
@@ -214,7 +214,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -235,7 +235,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.NoError(t, err)
})
t.Run("post-Electra", func(t *testing.T) {
@@ -260,7 +260,7 @@ func TestProcessAttestations_OK(t *testing.T) {
cfc.Root = mockRoot[:]
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, 0)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, 0)
require.NoError(t, err)
attestingIndices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -281,7 +281,7 @@ func TestProcessAttestations_OK(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = altair.ProcessAttestationsNoVerifySignature(context.Background(), beaconState, wsb.Block())
_, err = altair.ProcessAttestationsNoVerifySignature(t.Context(), beaconState, wsb.Block())
require.NoError(t, err)
})
}
@@ -313,13 +313,13 @@ func TestProcessAttestationNoVerify_SourceTargetHead(t *testing.T) {
b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
beaconState, err = altair.ProcessAttestationNoVerifySignature(context.Background(), beaconState, att, b)
beaconState, err = altair.ProcessAttestationNoVerifySignature(t.Context(), beaconState, att, b)
require.NoError(t, err)
p, err := beaconState.CurrentEpochParticipation()
require.NoError(t, err)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(att, committee)
require.NoError(t, err)
@@ -458,7 +458,7 @@ func TestValidatorFlag_Add_ExceedsLength(t *testing.T) {
}
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
st := &ethpb.BeaconStateAltair{}
b := &ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{}}
for i := 0; i < 10000; i++ {
@@ -474,10 +474,11 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := altair.ProcessAttestationsNoVerifySignature(context.Background(), s, wsb.Block())
r, err := altair.ProcessAttestationsNoVerifySignature(t.Context(), s, wsb.Block())
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, s, b)
}
fuzz.FreeMemory(i)
}
}
@@ -555,10 +556,10 @@ func TestSetParticipationAndRewardProposer(t *testing.T) {
b, err := helpers.TotalActiveBalance(beaconState)
require.NoError(t, err)
st, err := altair.SetParticipationAndRewardProposer(context.Background(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
st, err := altair.SetParticipationAndRewardProposer(t.Context(), beaconState, test.epoch, test.indices, test.participatedFlags, b)
require.NoError(t, err)
i, err := helpers.BeaconProposerIndex(context.Background(), st)
i, err := helpers.BeaconProposerIndex(t.Context(), st)
require.NoError(t, err)
b, err = beaconState.BalanceAtIndex(i)
require.NoError(t, err)
@@ -661,8 +662,8 @@ func TestRewardProposer(t *testing.T) {
{rewardNumerator: 1000000000000, want: 34234377253},
}
for _, test := range tests {
require.NoError(t, altair.RewardProposer(context.Background(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
require.NoError(t, altair.RewardProposer(t.Context(), beaconState, test.rewardNumerator))
i, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
b, err := beaconState.BalanceAtIndex(i)
require.NoError(t, err)
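
Two related changes run through the fuzz tests in this file: the github.com/google/gofuzz import is re-aliased from fuzz to gofuzz so the short name no longer shadows the repository's own testing/fuzz package, and fuzz.FreeMemory(i) is called each iteration so a 10,000-round loop does not accumulate heap. A sketch of the combined pattern; freeMemory below is a stand-in for testing/fuzz.FreeMemory, whose real stride and mechanism may differ:

package altair_sketch_test // illustrative, not the Prysm test file

import (
	"runtime/debug"
	"testing"

	gofuzz "github.com/google/gofuzz" // aliased so "fuzz" stays free
)

// freeMemory mimics periodically returning heap to the OS during long
// fuzz loops; the stride of 1000 is an assumption.
func freeMemory(iteration int) {
	if iteration%1000 == 0 {
		debug.FreeOSMemory()
	}
}

func TestFuzzSketch(t *testing.T) {
	fuzzer := gofuzz.NewWithSeed(0)
	var payload struct{ A, B uint64 }
	for i := 0; i < 10000; i++ {
		fuzzer.Fuzz(&payload) // fill payload with seeded pseudo-random data
		freeMemory(i)
	}
}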

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"math"
"testing"
@@ -26,7 +25,7 @@ import (
func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -34,7 +33,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -55,7 +54,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
}
var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
beaconState, reward, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
assert.Equal(t, uint64(72192), reward)
@@ -77,7 +76,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
require.Equal(t, true, balances[indices[0]] > balances[nonSyncIndex])
// Proposer should be more profitable than rest of the sync committee
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
require.Equal(t, true, balances[proposerIndex] > balances[indices[0]])
@@ -102,7 +101,7 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -110,7 +109,7 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -130,14 +129,14 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.ErrorContains(t, "invalid sync committee signature", err)
}
func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -145,7 +144,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xAA
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)
@@ -167,7 +166,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
}
@@ -175,7 +174,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
committeeKeys := committee.Pubkeys
committeeKeys[1] = committeeKeys[0]
@@ -192,7 +191,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
SyncCommitteeBits: syncBits,
}
require.NoError(t, beaconState.UpdateBalancesAtIndex(idx, 0))
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
require.Equal(t, 511, len(votedKeys))
require.DeepEqual(t, committeeKeys[0], votedKeys[0].Marshal())
@@ -203,7 +202,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
beaconState, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -215,7 +214,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
SyncCommitteeBits: syncBits,
}
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(t.Context(), beaconState, syncAggregate)
require.NoError(t, err)
votedMap := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, key := range votedKeys {
@@ -228,7 +227,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
committeeKeys := currentSyncCommittee.Pubkeys
balances := st.Balances()
proposerIndex, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIndex, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
for i := 0; i < len(syncBits); i++ {
@@ -254,7 +253,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
func Test_VerifySyncCommitteeSig(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, beaconState.SetSlot(1))
committee, err := altair.NextSyncCommittee(context.Background(), beaconState)
committee, err := altair.NextSyncCommittee(t.Context(), beaconState)
require.NoError(t, err)
require.NoError(t, beaconState.SetCurrentSyncCommittee(committee))
@@ -262,7 +261,7 @@ func Test_VerifySyncCommitteeSig(t *testing.T) {
for i := range syncBits {
syncBits[i] = 0xff
}
indices, err := altair.NextSyncCommitteeIndices(context.Background(), beaconState)
indices, err := altair.NextSyncCommitteeIndices(t.Context(), beaconState)
require.NoError(t, err)
ps := slots.PrevSlot(beaconState.Slot())
pbr, err := helpers.BlockRootAtSlot(beaconState, ps)

View File

@@ -1,21 +1,21 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
)
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
@@ -27,14 +27,15 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -45,14 +46,15 @@ func TestFuzzProcessPreGenesisDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
@@ -63,11 +65,12 @@ func TestFuzzProcessPreGenesisDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
@@ -80,11 +83,12 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateAltair{}
deposit := &ethpb.Deposit{}
@@ -97,5 +101,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -42,7 +41,7 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
require.NoError(t, err, "Expected block deposits to process correctly")
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
@@ -70,7 +69,7 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0]})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0]})
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
@@ -127,7 +126,7 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{deposit})
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
@@ -256,7 +255,7 @@ func TestPreGenesisDeposits_SkipInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessPreGenesisDeposits(context.Background(), beaconState, dep)
newState, err := altair.ProcessPreGenesisDeposits(t.Context(), beaconState, dep)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
_, ok := newState.ValidatorIndexByPubkey(bytesutil.ToBytes48(dep[0].Data.PublicKey))
@@ -370,6 +369,6 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
_, err = altair.ProcessDeposits(t.Context(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}
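
Two consensus behaviors are pinned by the deposit tests above: several same-block deposits sharing one new pubkey register a single new validator (two total in the fixture), and a deposit repeating an existing pubkey only credits its balance rather than re-registering it. Condensed from the hunks (the dep and deposit fixtures come from each test's setup, elided here):

// Same-block deposits sharing a pubkey add exactly one validator.
newState, err := altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
require.NoError(t, err)
require.Equal(t, 2, len(newState.Validators()))

// A repeat deposit tops up the existing validator's balance.
newState, err = altair.ProcessDeposits(t.Context(), beaconState, []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, uint64(1000+50), newState.Balances()[1])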

View File

@@ -1,7 +1,6 @@
package altair
import (
"context"
"math"
"testing"
@@ -32,7 +31,7 @@ func TestInitializeEpochValidators_Ok(t *testing.T) {
InactivityScores: []uint64{0, 1, 2, 3},
})
require.NoError(t, err)
v, b, err := InitializePrecomputeValidators(context.Background(), s)
v, b, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
assert.DeepEqual(t, &precompute.Validator{
IsSlashed: true,
@@ -74,7 +73,7 @@ func TestInitializeEpochValidators_Overflow(t *testing.T) {
InactivityScores: []uint64{0, 1},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
require.ErrorContains(t, "could not read every validator: addition overflows", err)
}
@@ -84,16 +83,16 @@ func TestInitializeEpochValidators_BadState(t *testing.T) {
InactivityScores: []uint64{},
})
require.NoError(t, err)
_, _, err = InitializePrecomputeValidators(context.Background(), s)
_, _, err = InitializePrecomputeValidators(t.Context(), s)
require.ErrorContains(t, "num of validators is different than num of inactivity scores", err)
}
func TestProcessEpochParticipation(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: true,
@@ -169,9 +168,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
InactivityScores: []uint64{0, 0, 0},
})
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), st)
validators, balance, err := InitializePrecomputeValidators(t.Context(), st)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), st, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), st, balance, validators)
require.NoError(t, err)
require.DeepEqual(t, &precompute.Validator{
IsActiveCurrentEpoch: false,
@@ -209,9 +208,9 @@ func TestProcessEpochParticipation_InactiveValidator(t *testing.T) {
func TestAttestationsDelta(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -247,9 +246,9 @@ func TestAttestationsDelta(t *testing.T) {
func TestAttestationsDeltaBellatrix(t *testing.T) {
s, err := testStateBellatrix()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
deltas, err := AttestationsDelta(s, balance, validators)
require.NoError(t, err)
@@ -285,9 +284,9 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
require.NoError(t, err)
@@ -324,9 +323,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_InactivityLeak(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
sCopy := s.Copy()
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -352,11 +351,11 @@ func TestProcessInactivityScores_CanProcessInactivityLeak(t *testing.T) {
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().MinEpochsToInactivityPenalty+2)))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -373,11 +372,11 @@ func TestProcessInactivityScores_GenesisEpoch(t *testing.T) {
defaultScore := uint64(10)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
require.NoError(t, s.SetSlot(params.BeaconConfig().GenesisSlot))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -392,11 +391,11 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
@@ -410,9 +409,9 @@ func TestProcessInactivityScores_CanProcessNonInactivityLeak(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
validators, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
require.NoError(t, s.SetSlot(0))
s, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, validators)
@@ -429,9 +428,9 @@ func TestProcessRewardsAndPenaltiesPrecompute_GenesisEpoch(t *testing.T) {
func TestProcessRewardsAndPenaltiesPrecompute_BadState(t *testing.T) {
s, err := testState()
require.NoError(t, err)
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
_, balance, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
_, balance, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
_, err = ProcessRewardsAndPenaltiesPrecompute(s, balance, []*precompute.Validator{})
require.ErrorContains(t, "validator registries not the same length as state's validator registries", err)
@@ -442,7 +441,7 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
require.NoError(t, err)
defaultScore := uint64(5)
require.NoError(t, s.SetInactivityScores([]uint64{defaultScore, defaultScore, defaultScore, defaultScore}))
validators, balance, err := InitializePrecomputeValidators(context.Background(), s)
validators, balance, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
// v0 is eligible (not active previous epoch, slashed and not withdrawable)
@@ -463,9 +462,9 @@ func TestProcessInactivityScores_NonEligibleValidator(t *testing.T) {
// v3 is eligible (active previous epoch)
validators[3].IsActivePrevEpoch = true
validators, _, err = ProcessEpochParticipation(context.Background(), s, balance, validators)
validators, _, err = ProcessEpochParticipation(t.Context(), s, balance, validators)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(context.Background(), s, validators)
s, _, err = ProcessInactivityScores(t.Context(), s, validators)
require.NoError(t, err)
inactivityScores, err := s.InactivityScores()
require.NoError(t, err)
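
The tests above walk the Altair epoch-processing precompute pipeline in its usual order: InitializePrecomputeValidators builds per-validator records plus the total active balance, ProcessEpochParticipation folds participation flags into those records, and the result feeds ProcessRewardsAndPenaltiesPrecompute and ProcessInactivityScores. Condensed from the tests themselves (testState is the file's own fixture helper):

s, err := testState()
require.NoError(t, err)
vals, bal, err := InitializePrecomputeValidators(t.Context(), s)
require.NoError(t, err)
vals, bal, err = ProcessEpochParticipation(t.Context(), s, bal, vals)
require.NoError(t, err)
// Rewards/penalties and inactivity scores both consume the precompute.
s, err = ProcessRewardsAndPenaltiesPrecompute(s, bal, vals)
require.NoError(t, err)
s, _, err = ProcessInactivityScores(t.Context(), s, vals)
require.NoError(t, err)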

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"fmt"
"math"
"testing"
@@ -29,7 +28,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
BodyRoot: bytesutil.PadTo([]byte{'c'}, 32),
}
require.NoError(t, s.SetLatestBlockHeader(h))
postState, err := altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err := altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
current, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -38,7 +37,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, current, next)
require.NoError(t, s.SetSlot(params.BeaconConfig().SlotsPerEpoch))
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
c, err := postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -48,7 +47,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
require.DeepEqual(t, next, n)
require.NoError(t, s.SetSlot(primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch-1))
postState, err = altair.ProcessSyncCommitteeUpdates(context.Background(), s)
postState, err = altair.ProcessSyncCommitteeUpdates(t.Context(), s)
require.NoError(t, err)
c, err = postState.CurrentSyncCommittee()
require.NoError(t, err)
@@ -61,7 +60,7 @@ func TestProcessSyncCommitteeUpdates_CanRotate(t *testing.T) {
// Test boundary condition.
slot := params.BeaconConfig().SlotsPerEpoch * primitives.Slot(time.CurrentEpoch(s)+params.BeaconConfig().EpochsPerSyncCommitteePeriod)
require.NoError(t, s.SetSlot(slot))
boundaryCommittee, err := altair.NextSyncCommittee(context.Background(), s)
boundaryCommittee, err := altair.NextSyncCommittee(t.Context(), s)
require.NoError(t, err)
require.DeepNotEqual(t, boundaryCommittee, n)
}

View File

@@ -31,7 +31,7 @@ func Test_BaseReward(t *testing.T) {
valIdx: 2,
st: genState(1),
want: 0,
errString: "validator index 2 does not exist",
errString: "index 2 out of bounds",
},
{
name: "active balance is 32eth",
@@ -89,7 +89,7 @@ func Test_BaseRewardWithTotalBalance(t *testing.T) {
valIdx: 2,
activeBalance: 1,
want: 0,
errString: "validator index 2 does not exist",
errString: "index 2 out of bounds",
},
{
name: "active balance is 1",

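The expected error text in the BaseReward tables changes because an out-of-range validator index now surfaces the state accessor's generic bounds error rather than a bespoke message; since the tables only match a substring, only the fixture strings move. Illustratively, assuming the (state, index) call shape the table fields imply:

// valIdx 2 points past a two-validator registry, so the state lookup
// fails with the bounds error the fixture now expects.
_, err := BaseReward(st, 2)
require.ErrorContains(t, "index 2 out of bounds", err)
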
View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"testing"
"time"
@@ -97,7 +96,7 @@ func TestSyncCommitteeIndices_CanGet(t *testing.T) {
t.Run(version.String(v), func(t *testing.T) {
helpers.ClearCache()
st := getState(t, tt.args.validatorCount, v)
got, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got, err := altair.NextSyncCommitteeIndices(t.Context(), st)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -129,18 +128,18 @@ func TestSyncCommitteeIndices_DifferentPeriods(t *testing.T) {
}
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
got1, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got1, err := altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
got2, err := altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err := altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch*primitives.Slot(2*params.BeaconConfig().EpochsPerSyncCommitteePeriod)))
got2, err = altair.NextSyncCommitteeIndices(context.Background(), st)
got2, err = altair.NextSyncCommitteeIndices(t.Context(), st)
require.NoError(t, err)
require.DeepNotEqual(t, got1, got2)
}
@@ -206,7 +205,7 @@ func TestSyncCommittee_CanGet(t *testing.T) {
if !tt.wantErr {
require.NoError(t, tt.args.state.SetSlot(primitives.Slot(tt.args.epoch)*params.BeaconConfig().SlotsPerEpoch))
}
got, err := altair.NextSyncCommittee(context.Background(), tt.args.state)
got, err := altair.NextSyncCommittee(t.Context(), tt.args.state)
if tt.wantErr {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -270,7 +269,7 @@ func TestValidateNilSyncContribution(t *testing.T) {
func TestSyncSubCommitteePubkeys_CanGet(t *testing.T) {
helpers.ClearCache()
st := getState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
com, err := altair.NextSyncCommittee(context.Background(), st)
com, err := altair.NextSyncCommittee(t.Context(), st)
require.NoError(t, err)
sub, err := altair.SyncSubCommitteePubkeys(com, 0)
require.NoError(t, err)

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -13,7 +12,7 @@ import (
func TestProcessEpoch_CanProcess(t *testing.T) {
st, _ := util.DeterministicGenesisStateAltair(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(t.Context(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
@@ -45,7 +44,7 @@ func TestProcessEpoch_CanProcess(t *testing.T) {
func TestProcessEpoch_CanProcessBellatrix(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
err := altair.ProcessEpoch(context.Background(), st)
err := altair.ProcessEpoch(t.Context(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

View File

@@ -1,7 +1,6 @@
package altair_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
@@ -17,7 +16,7 @@ import (
)
func TestTranslateParticipation(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
s, _ := util.DeterministicGenesisStateAltair(t, 64)
require.NoError(t, s.SetSlot(s.Slot()+params.BeaconConfig().MinAttestationInclusionDelay))
@@ -73,7 +72,7 @@ func TestTranslateParticipation(t *testing.T) {
func TestUpgradeToAltair(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, params.BeaconConfig().MaxValidatorsPerCommittee)
preForkState := st.Copy()
aState, err := altair.UpgradeToAltair(context.Background(), st)
aState, err := altair.UpgradeToAltair(t.Context(), st)
require.NoError(t, err)
require.Equal(t, preForkState.GenesisTime(), aState.GenesisTime())

View File

@@ -105,6 +105,7 @@ go_test(
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

View File

@@ -178,7 +178,7 @@ func VerifyAttestationNoVerifySignature(
}
}
return attestation.IsValidAttestationIndices(ctx, indexedAtt)
return attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot)
}
// ProcessAttestationNoVerifySignature processes the attestation without verifying the attestation signature. This
@@ -243,7 +243,7 @@ func VerifyIndexedAttestation(ctx context.Context, beaconState state.ReadOnlyBea
ctx, span := trace.StartSpan(ctx, "core.VerifyIndexedAttestation")
defer span.End()
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, indexedAtt, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
return err
}
domain, err := signing.Domain(

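attestation.IsValidAttestationIndices now takes the two spec limits explicitly instead of reading them inside the helper, making the bounds visible at every call site. The new call shape, copied from the hunks above:

cfg := params.BeaconConfig()
if err := attestation.IsValidAttestationIndices(
	ctx, indexedAtt,
	cfg.MaxValidatorsPerCommittee, // spec limit MAX_VALIDATORS_PER_COMMITTEE
	cfg.MaxCommitteesPerSlot,      // spec limit MAX_COMMITTEES_PER_SLOT
); err != nil {
	return err
}
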
View File

@@ -41,7 +41,7 @@ func TestProcessAttestationNoVerifySignature_BeaconFuzzIssue78(t *testing.T) {
t.Fatal(err)
}
ctx := context.Background()
ctx := t.Context()
_, err = blocks.ProcessAttestationNoVerifySignature(ctx, st, att)
require.ErrorContains(t, "committee index 1 >= committee count 1", err)
}

View File

@@ -42,7 +42,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
require.NoError(t, beaconState.SetCurrentJustifiedCheckpoint(cfc))
require.NoError(t, beaconState.AppendCurrentEpochAttestations(&ethpb.PendingAttestation{}))
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices1, err := attestation.AttestingIndices(att1, committee)
require.NoError(t, err)
@@ -64,7 +64,7 @@ func TestProcessAggregatedAttestation_OverlappingBits(t *testing.T) {
AggregationBits: aggBits2,
}
committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
committee, err = helpers.BeaconCommitteeFromState(t.Context(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
require.NoError(t, err)
attestingIndices2, err := attestation.AttestingIndices(att2, committee)
require.NoError(t, err)
@@ -137,7 +137,7 @@ func TestProcessAttestationsNoVerify_OlderThanSlotsPerEpoch(t *testing.T) {
},
AggregationBits: aggBits,
}
ctx := context.Background()
ctx := t.Context()
t.Run("attestation older than slots per epoch", func(t *testing.T) {
beaconState, _ := util.DeterministicGenesisState(t, 100)
@@ -360,9 +360,9 @@ func TestConvertToIndexed_OK(t *testing.T) {
Signature: att.Signature,
}
committee, err := helpers.BeaconCommitteeFromState(context.Background(), state, att.Data.Slot, att.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), state, att.Data.Slot, att.Data.CommitteeIndex)
require.NoError(t, err)
ia, err := attestation.ConvertToIndexed(context.Background(), att, committee)
ia, err := attestation.ConvertToIndexed(t.Context(), att, committee)
require.NoError(t, err)
assert.DeepEqual(t, wanted, ia, "Convert attestation to indexed attestation didn't result as wanted")
}
@@ -448,7 +448,7 @@ func TestVerifyIndexedAttestation_OK(t *testing.T) {
tt.attestation.Signature = marshalledSig
err = blocks.VerifyIndexedAttestation(context.Background(), state, tt.attestation)
err = blocks.VerifyIndexedAttestation(t.Context(), state, tt.attestation)
assert.NoError(t, err, "Failed to verify indexed attestation")
}
}
@@ -471,7 +471,7 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
want := "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE"
st, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
err = blocks.VerifyIndexedAttestation(context.Background(), st, indexedAtt1)
err = blocks.VerifyIndexedAttestation(t.Context(), st, indexedAtt1)
assert.ErrorContains(t, want, err)
}
@@ -493,7 +493,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}
want := "nil or missing indexed attestation data"
_, err := blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
_, err := blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
assert.ErrorContains(t, want, err)
atts = []ethpb.Att{}
@@ -514,7 +514,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
}
want = "expected non-empty attesting indices"
_, err = blocks.AttestationSignatureBatch(context.Background(), beaconState, atts)
_, err = blocks.AttestationSignatureBatch(t.Context(), beaconState, atts)
assert.ErrorContains(t, want, err)
}
@@ -542,7 +542,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
}))
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
@@ -561,7 +561,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1*params.BeaconConfig().SlotsPerEpoch+1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
@@ -583,7 +583,7 @@ func TestVerifyAttestations_HandlesPlannedFork(t *testing.T) {
}
func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)
@@ -602,7 +602,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
att1 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm1))),
@@ -621,7 +621,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
att2 := util.HydrateAttestation(&ethpb.Attestation{
AggregationBits: bitfield.NewBitlist(uint64(len(comm2))),
@@ -651,7 +651,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
require.NoError(t, st.SetSlot(5))
require.NoError(t, st.SetValidators(validators))
comm1, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 0 /*committeeIndex*/)
comm1, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 0 /*committeeIndex*/)
require.NoError(t, err)
commBits1 := primitives.NewAttestationCommitteeBits()
commBits1.SetBitAt(0, true)
@@ -673,7 +673,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
att1.Signature = bls.AggregateSignatures(sigs).Marshal()
comm2, err := helpers.BeaconCommitteeFromState(context.Background(), st, 1 /*slot*/, 1 /*committeeIndex*/)
comm2, err := helpers.BeaconCommitteeFromState(t.Context(), st, 1 /*slot*/, 1 /*committeeIndex*/)
require.NoError(t, err)
commBits2 := primitives.NewAttestationCommitteeBits()
commBits2.SetBitAt(1, true)
@@ -702,7 +702,7 @@ func TestRetrieveAttestationSignatureSet_VerifiesMultipleAttestations(t *testing
}
func TestRetrieveAttestationSignatureSet_AcrossFork(t *testing.T) {
ctx := context.Background()
ctx := t.Context()
numOfValidators := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(4))
validators := make([]*ethpb.Validator, numOfValidators)
_, keys, err := util.DeterministicDepositsAndKeys(numOfValidators)

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -63,7 +62,7 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "attestations are not slashable", err)
}
@@ -102,7 +101,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -244,7 +243,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(context.Background(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()

View File

@@ -1,7 +1,6 @@
package blocks
import (
"context"
"testing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
@@ -11,13 +10,14 @@ import (
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
)
func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
ctx := context.Background()
fuzzer := gofuzz.NewWithSeed(0)
ctx := t.Context()
state := &ethpb.BeaconState{}
att := &ethpb.Attestation{}
@@ -28,11 +28,12 @@ func TestFuzzProcessAttestationNoVerify_10000(t *testing.T) {
require.NoError(t, err)
_, err = ProcessAttestationNoVerifySignature(ctx, s, att)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessBlockHeader_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
block := &ethpb.SignedBeaconBlock{}
@@ -47,13 +48,14 @@ func TestFuzzProcessBlockHeader_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = ProcessBlockHeader(context.Background(), s, wsb)
_, err = ProcessBlockHeader(t.Context(), s, wsb)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
var ba []byte
var pubkey [fieldparams.BLSPubkeyLength]byte
var sig [96]byte
@@ -77,14 +79,14 @@ func TestFuzzverifyDepositDataSigningRoot_10000(_ *testing.T) {
}
func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
e := &ethpb.Eth1Data{}
state, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{})
require.NoError(t, err)
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
s, err := ProcessEth1DataInBlock(context.Background(), state, e)
s, err := ProcessEth1DataInBlock(t.Context(), state, e)
if err != nil && s != nil {
t.Fatalf("state should be nil on err. found: %v on error: %v for state: %v and eth1data: %v", s, err, state, e)
}
@@ -92,7 +94,7 @@ func TestFuzzProcessEth1DataInBlock_10000(t *testing.T) {
}
func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
eth1data := &ethpb.Eth1Data{}
eth1data2 := &ethpb.Eth1Data{}
@@ -105,7 +107,7 @@ func TestFuzzareEth1DataEqual_10000(_ *testing.T) {
}
func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
eth1data := &ethpb.Eth1Data{}
var stateVotes []*ethpb.Eth1Data
for i := 0; i < 100000; i++ {
@@ -117,12 +119,13 @@ func TestFuzzEth1DataHasEnoughSupport_10000(t *testing.T) {
require.NoError(t, err)
_, err = Eth1DataHasEnoughSupport(s, eth1data)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
block := &ethpb.BeaconBlock{}
@@ -131,13 +134,14 @@ func TestFuzzProcessBlockHeaderNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(block)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
_, err = ProcessBlockHeaderNoVerify(context.Background(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
_, err = ProcessBlockHeaderNoVerify(t.Context(), s, block.Slot, block.ProposerIndex, block.ParentRoot, []byte{})
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessRandao_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
@@ -151,15 +155,16 @@ func TestFuzzProcessRandao_10000(t *testing.T) {
}
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := ProcessRandao(context.Background(), s, wsb)
r, err := ProcessRandao(t.Context(), s, wsb)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
blockBody := &ethpb.BeaconBlockBody{}
@@ -172,14 +177,15 @@ func TestFuzzProcessRandaoNoVerify_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, blockBody)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
p := &ethpb.ProposerSlashing{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(p)
@@ -189,11 +195,12 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
proposerSlashing := &ethpb.ProposerSlashing{}
for i := 0; i < 10000; i++ {
@@ -203,14 +210,15 @@ func TestFuzzVerifyProposerSlashing_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyProposerSlashing(s, proposerSlashing)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
a := &ethpb.AttesterSlashing{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(a)
@@ -220,14 +228,15 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
attesterSlashing := &ethpb.AttesterSlashing{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(attesterSlashing)
@@ -235,11 +244,12 @@ func TestFuzzVerifyAttesterSlashing_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyAttesterSlashing(ctx, s, attesterSlashing)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
attestationData := &ethpb.AttestationData{}
attestationData2 := &ethpb.AttestationData{}
@@ -251,7 +261,7 @@ func TestFuzzIsSlashableAttestationData_10000(_ *testing.T) {
}
func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
attesterSlashing := &ethpb.AttesterSlashing{}
for i := 0; i < 10000; i++ {
@@ -261,10 +271,10 @@ func TestFuzzslashableAttesterIndices_10000(_ *testing.T) {
}
func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
b := &ethpb.SignedBeaconBlock{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(b)
@@ -279,14 +289,15 @@ func TestFuzzProcessAttestationsNoVerify_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, b)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
idxAttestation := &ethpb.IndexedAttestation{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(idxAttestation)
@@ -294,11 +305,12 @@ func TestFuzzVerifyIndexedAttestationn_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyIndexedAttestation(ctx, s, idxAttestation)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzverifyDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
deposit := &ethpb.Deposit{}
for i := 0; i < 10000; i++ {
@@ -308,14 +320,15 @@ func TestFuzzverifyDeposit_10000(t *testing.T) {
require.NoError(t, err)
err = VerifyDeposit(s, deposit)
_ = err
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
fuzzer.Fuzz(e)
@@ -325,11 +338,12 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconState{}
e := &ethpb.SignedVoluntaryExit{}
for i := 0; i < 10000; i++ {
@@ -337,15 +351,16 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(context.Background(), s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzVerifyExit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
ve := &ethpb.SignedVoluntaryExit{}
rawVal := &ethpb.Validator{}
fork := &ethpb.Fork{}
@@ -369,5 +384,6 @@ func TestFuzzVerifyExit_10000(t *testing.T) {
_ = err
err = VerifyExitAndSignature(val, s, ve)
_ = err
fuzz.FreeMemory(i)
}
}
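
Aside, not part of this diff: this file renames the google/gofuzz import to gofuzz so the bare name fuzz can refer to Prysm's testing/fuzz package, and every 10000-iteration loop gains a fuzz.FreeMemory(i) call. A condensed sketch of the resulting pattern, assuming only what the hunks show (FreeMemory's behavior is left to testing/fuzz):

package blocks

import (
	"testing"

	"github.com/OffchainLabs/prysm/v6/testing/fuzz" // provides FreeMemory, per this diff
	gofuzz "github.com/google/gofuzz"               // aliased so "fuzz" stays free
)

func TestFuzzPatternSketch(t *testing.T) {
	fuzzer := gofuzz.NewWithSeed(0) // deterministic input generation
	var v uint64
	for i := 0; i < 10000; i++ {
		fuzzer.Fuzz(&v)
		fuzz.FreeMemory(i) // assumed to bound memory growth across long fuzz loops
	}
}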

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -95,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(context.Background(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -46,7 +45,7 @@ func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -69,7 +68,7 @@ func TestBatchVerifyDepositsSignatures_InvalidSignature(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
verified, err := blocks.BatchVerifyDepositsSignatures(t.Context(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}
@@ -164,7 +163,7 @@ func TestBatchVerifyPendingDepositsSignatures_Ok(t *testing.T) {
sig2 := sk2.Sign(sr2[:])
pendingDeposit2.Signature = sig2.Marshal()
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
require.NoError(t, err)
require.Equal(t, true, verified)
}
@@ -175,7 +174,7 @@ func TestBatchVerifyPendingDepositsSignatures_InvalidSignature(t *testing.T) {
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
}
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit})
verified, err := blocks.BatchVerifyPendingDepositsSignatures(t.Context(), []*ethpb.PendingDeposit{pendingDeposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"fmt"
"testing"
@@ -177,7 +176,7 @@ func TestProcessEth1Data_SetsCorrectly(t *testing.T) {
period := uint64(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().EpochsPerEth1VotingPeriod)))
for i := uint64(0); i < period; i++ {
processedState, err := blocks.ProcessEth1DataInBlock(context.Background(), beaconState, b.Block.Body.Eth1Data)
processedState, err := blocks.ProcessEth1DataInBlock(t.Context(), beaconState, b.Block.Body.Eth1Data)
require.NoError(t, err)
require.Equal(t, true, processedState.Version() == version.Phase0)
}

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
@@ -47,7 +46,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -77,7 +76,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
assert.ErrorContains(t, want, err)
}
@@ -125,7 +124,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(context.Background(), state, b.Block.Body.VoluntaryExits)
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"io"
"testing"
@@ -51,7 +50,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.ProposerIndex = pID
@@ -61,7 +60,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -70,7 +69,7 @@ func TestProcessBlockHeader_ImproperBlockSlot(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
assert.ErrorContains(t, "block.Slot 10 must be greater than state.LatestBlockHeader.Slot 10", err)
}
@@ -85,7 +84,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
lbhdr, err := beaconState.LatestBlockHeader().HashTreeRoot()
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
block := util.NewBeaconBlock()
@@ -98,7 +97,7 @@ func TestProcessBlockHeader_WrongProposerSig(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(context.Background(), beaconState, wsb)
_, err = blocks.ProcessBlockHeader(t.Context(), beaconState, wsb)
want := "signature did not verify"
assert.ErrorContains(t, want, err)
}
@@ -142,7 +141,7 @@ func TestProcessBlockHeader_DifferentSlots(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
want := "is different than block slot"
assert.ErrorContains(t, want, err)
}
@@ -172,7 +171,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
blockSig, err := signing.ComputeDomainAndSign(state, currentEpoch, &sszBytes, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
validators[5896].PublicKey = priv.PublicKey().Marshal()
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -183,7 +182,7 @@ func TestProcessBlockHeader_PreviousBlockRootNotSignedRoot(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
want := "does not match"
assert.ErrorContains(t, want, err)
}
@@ -216,7 +215,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
require.NoError(t, err)
validators[12683].PublicKey = priv.PublicKey().Marshal()
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -227,7 +226,7 @@ func TestProcessBlockHeader_SlashedProposer(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
_, err = blocks.ProcessBlockHeader(context.Background(), state, wsb)
_, err = blocks.ProcessBlockHeader(t.Context(), state, wsb)
want := "was previously slashed"
assert.ErrorContains(t, want, err)
}
@@ -257,7 +256,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.ProposerIndex = pID
@@ -269,7 +268,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
bodyRoot, err := block.Block.Body.HashTreeRoot()
require.NoError(t, err, "Failed to hash block bytes got")
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()
@@ -278,7 +277,7 @@ func TestProcessBlockHeader_OK(t *testing.T) {
wsb, err := consensusblocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
newState, err := blocks.ProcessBlockHeader(context.Background(), state, wsb)
newState, err := blocks.ProcessBlockHeader(t.Context(), state, wsb)
require.NoError(t, err, "Failed to process block header got")
var zeroHash [32]byte
nsh := newState.LatestBlockHeader()
@@ -318,7 +317,7 @@ func TestBlockSignatureSet_OK(t *testing.T) {
currentEpoch := time.CurrentEpoch(state)
priv, err := bls.RandKey()
require.NoError(t, err)
pID, err := helpers.BeaconProposerIndex(context.Background(), state)
pID, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
block := util.NewBeaconBlock()
block.Block.Slot = 10
@@ -327,7 +326,7 @@ func TestBlockSignatureSet_OK(t *testing.T) {
block.Block.ParentRoot = latestBlockSignedRoot[:]
block.Signature, err = signing.ComputeDomainAndSign(state, currentEpoch, block.Block, params.BeaconConfig().DomainBeaconProposer, priv)
require.NoError(t, err)
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), state)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), state)
require.NoError(t, err)
validators[proposerIdx].Slashed = false
validators[proposerIdx].PublicKey = priv.PublicKey().Marshal()

View File

@@ -229,13 +229,16 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
if body.Version() < version.Deneb {
return nil
}
kzgs, err := body.BlobKzgCommitments()
if err != nil {
return err
}
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(kzgs) > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
commitmentCount, maxBlobsPerBlock := len(kzgs), params.BeaconConfig().MaxBlobsPerBlock(slot)
if commitmentCount > maxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d", commitmentCount, maxBlobsPerBlock)
}
return nil
}
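
Aside, not part of this diff: the reworked check reports both numbers, so a Deneb block carrying one commitment over the cap now fails with, e.g., "too many kzg commitments in block: actual count 7 - max allowed 6" instead of the bare count. The comparison, isolated into a self-contained helper:

package blobs

import "fmt"

// checkCommitmentCount isolates the comparison from verifyBlobCommitmentCount:
// the error names both the actual count and the per-slot maximum.
func checkCommitmentCount(commitmentCount, maxBlobsPerBlock int) error {
	if commitmentCount > maxBlobsPerBlock {
		return fmt.Errorf("too many kzg commitments in block: actual count %d - max allowed %d",
			commitmentCount, maxBlobsPerBlock)
	}
	return nil
}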

View File

@@ -926,8 +926,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1)}}
maxCommitmentsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, maxCommitmentsPerBlock+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", params.BeaconConfig().MaxBlobsPerBlock(rb.Slot())+1), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: actual count %d - max allowed %d", maxCommitmentsPerBlock+1, maxCommitmentsPerBlock), blocks.VerifyBlobCommitmentCount(rb.Slot(), rb.Body()))
}

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"fmt"
"testing"
@@ -51,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -84,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -134,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(context.Background(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
assert.ErrorContains(t, want, err)
}
@@ -173,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -221,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -269,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -317,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(context.Background(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"context"
"encoding/binary"
"testing"
@@ -21,7 +20,7 @@ import (
func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
// We fetch the proposer's index as that is whom the RANDAO will be verified against.
proposerIdx, err := helpers.BeaconProposerIndex(context.Background(), beaconState)
proposerIdx, err := helpers.BeaconProposerIndex(t.Context(), beaconState)
require.NoError(t, err)
epoch := primitives.Epoch(0)
buf := make([]byte, 32)
@@ -42,7 +41,7 @@ func TestProcessRandao_IncorrectProposerFailsVerification(t *testing.T) {
want := "block randao: signature did not verify"
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
_, err = blocks.ProcessRandao(context.Background(), beaconState, wsb)
_, err = blocks.ProcessRandao(t.Context(), beaconState, wsb)
assert.ErrorContains(t, want, err)
}
@@ -62,7 +61,7 @@ func TestProcessRandao_SignatureVerifiesAndUpdatesLatestStateMixes(t *testing.T)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
newState, err := blocks.ProcessRandao(
context.Background(),
t.Context(),
beaconState,
wsb,
)
@@ -85,7 +84,7 @@ func TestRandaoSignatureSet_OK(t *testing.T) {
},
}
set, err := blocks.RandaoSignatureBatch(context.Background(), beaconState, block.Body.RandaoReveal)
set, err := blocks.RandaoSignatureBatch(t.Context(), beaconState, block.Body.RandaoReveal)
require.NoError(t, err)
verified, err := set.Verify()
require.NoError(t, err)

View File

@@ -200,7 +200,7 @@ func createAttestationSignatureBatch(
if err != nil {
return nil, err
}
if err := attestation.IsValidAttestationIndices(ctx, ia); err != nil {
if err := attestation.IsValidAttestationIndices(ctx, ia, params.BeaconConfig().MaxValidatorsPerCommittee, params.BeaconConfig().MaxCommitteesPerSlot); err != nil {
return nil, err
}
indices := ia.GetAttestingIndices()

View File

@@ -85,6 +85,7 @@ go_test(
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",

View File

@@ -194,11 +194,11 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if IsValidSwitchToCompoundingRequest(st, cr) {
srcIdx, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(cr.SourcePubkey))
if !ok {
log.Error("failed to find source validator index")
log.Error("Failed to find source validator index")
continue
}
if err := SwitchToCompoundingValidator(st, srcIdx); err != nil {
log.WithError(err).Error("failed to switch to compounding validator")
log.WithError(err).Error("Failed to switch to compounding validator")
}
continue
}
@@ -280,7 +280,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("failed to fetch pending balance to withdraw")
log.WithError(err).Error("Failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
@@ -290,7 +290,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
log.WithError(err).Error("failed to compute consolidation epoch")
log.WithError(err).Error("Failed to compute consolidation epoch")
continue
}
srcV.ExitEpoch = exitEpoch
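
Aside, not part of this diff: the only change in these hunks is capitalizing the first letter of each log message ("failed ..." → "Failed ..."). A hypothetical helper, not Prysm code, stating the rule being enforced:

package logstyle

import "unicode"

// startsCapitalized reports whether a log message begins with an uppercase
// rune, the convention these hunks apply to log.Error and
// log.WithError(...).Error calls.
func startsCapitalized(msg string) bool {
	for _, r := range msg {
		return unicode.IsUpper(r) // inspect only the first rune
	}
	return false // empty messages fail the check
}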

View File

@@ -1,21 +1,21 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
)
func TestFuzzProcessDeposits_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposits := make([]*ethpb.Deposit, 100)
ctx := context.Background()
ctx := t.Context()
for i := 0; i < 10000; i++ {
fuzzer.Fuzz(state)
for i := range deposits {
@@ -27,11 +27,12 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
fuzz.FreeMemory(i)
}
}
func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
state := &ethpb.BeaconStateElectra{}
deposit := &ethpb.Deposit{}
@@ -44,5 +45,6 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
fuzz.FreeMemory(i)
}
}

View File

@@ -319,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
})
@@ -335,7 +335,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
deps := []*eth.PendingDeposit{validDep, validDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
require.Equal(t, params.BeaconConfig().MinActivationBalance*2, st.Balances()[0])
@@ -354,7 +354,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
invalidSigDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidSigDep.Signature = make([]byte, 96)
deps := []*eth.PendingDeposit{validDep, invalidSigDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
require.NoError(t, electra.BatchProcessNewPendingDeposits(t.Context(), st, deps))
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
require.Equal(t, 2*params.BeaconConfig().MinActivationBalance, st.Balances()[0])
@@ -368,12 +368,12 @@ func TestProcessDepositRequests(t *testing.T) {
require.NoError(t, st.SetDepositRequestsStartIndex(1))
t.Run("empty requests continues", func(t *testing.T) {
newSt, err := electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{})
newSt, err := electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{})
require.NoError(t, err)
require.DeepEqual(t, newSt, st)
})
t.Run("nil request errors", func(t *testing.T) {
_, err = electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{nil})
_, err = electra.ProcessDepositRequests(t.Context(), st, []*enginev1.DepositRequest{nil})
require.ErrorContains(t, "nil deposit request", err)
})
@@ -406,7 +406,7 @@ func TestProcessDepositRequests(t *testing.T) {
Signature: sig.Marshal(),
},
}
st, err = electra.ProcessDepositRequests(context.Background(), st, requests)
st, err = electra.ProcessDepositRequests(t.Context(), st, requests)
require.NoError(t, err)
pbd, err := st.PendingDeposits()
@@ -437,7 +437,7 @@ func TestProcessDeposit_Electra_Simple(t *testing.T) {
},
})
require.NoError(t, err)
pdSt, err := electra.ProcessDeposits(context.Background(), st, deps)
pdSt, err := electra.ProcessDeposits(t.Context(), st, deps)
require.NoError(t, err)
pbd, err := pdSt.PendingDeposits()
require.NoError(t, err)
@@ -592,7 +592,7 @@ func TestApplyPendingDeposit_TopUp(t *testing.T) {
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
require.NoError(t, st.SetValidators(validators))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
@@ -608,7 +608,7 @@ func TestApplyPendingDeposit_UnknownKey(t *testing.T) {
wc[31] = byte(0)
dep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
// activates new validator
require.Equal(t, 1, len(st.Validators()))
b, err := st.BalanceAtIndex(0)
@@ -630,7 +630,7 @@ func TestApplyPendingDeposit_InvalidSignature(t *testing.T) {
Amount: 100,
}
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
require.NoError(t, electra.ApplyPendingDeposit(t.Context(), st, dep))
// no validator added
require.Equal(t, 0, len(st.Validators()))
// no topup either

View File

@@ -1,7 +1,6 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -54,7 +53,7 @@ func TestProcessOperationsWithNilRequests(t *testing.T) {
require.NoError(t, st.SetSlot(1))
_, err = electra.ProcessOperations(context.Background(), st, b.Block())
_, err = electra.ProcessOperations(t.Context(), st, b.Block())
require.ErrorContains(t, tc.errMsg, err)
})
}

View File

@@ -1,7 +1,6 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -78,7 +77,7 @@ func TestProcessEpoch_CanProcessElectra(t *testing.T) {
TargetIndex: 1,
},
}))
err := electra.ProcessEpoch(context.Background(), st)
err := electra.ProcessEpoch(t.Context(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")

View File

@@ -1,7 +1,6 @@
package electra_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
@@ -290,7 +289,7 @@ func TestProcessWithdrawRequests(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := electra.ProcessWithdrawalRequests(context.Background(), tt.args.st, tt.args.wrs)
got, err := electra.ProcessWithdrawalRequests(t.Context(), tt.args.st, tt.args.wrs)
if (err != nil) != tt.wantErr {
t.Errorf("ProcessWithdrawalRequests() error = %v, wantErr %v", err, tt.wantErr)
return

View File

@@ -48,6 +48,7 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/fuzz:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_google_go_cmp//cmp:go_default_library",

View File

@@ -5,12 +5,13 @@ import (
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/fuzz"
"github.com/OffchainLabs/prysm/v6/testing/require"
fuzz "github.com/google/gofuzz"
gofuzz "github.com/google/gofuzz"
)
func TestFuzzFinalUpdates_10000(t *testing.T) {
fuzzer := fuzz.NewWithSeed(0)
fuzzer := gofuzz.NewWithSeed(0)
base := &ethpb.BeaconState{}
for i := 0; i < 10000; i++ {
@@ -19,5 +20,6 @@ func TestFuzzFinalUpdates_10000(t *testing.T) {
require.NoError(t, err)
_, err = ProcessFinalUpdates(s)
_ = err
fuzz.FreeMemory(i)
}
}

View File

@@ -1,7 +1,6 @@
package epoch_test
import (
"context"
"fmt"
"math"
"testing"
@@ -170,7 +169,7 @@ func TestProcessRegistryUpdates_NoRotation(t *testing.T) {
}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch, "Could not update registry %d", i)
@@ -194,7 +193,7 @@ func TestProcessRegistryUpdates_EligibleToActivate(t *testing.T) {
beaconState, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
currentEpoch := time.CurrentEpoch(beaconState)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
if uint64(i) < limit && validator.ActivationEpoch != helpers.ActivationExitEpoch(currentEpoch) {
@@ -229,7 +228,7 @@ func TestProcessRegistryUpdates_EligibleToActivate_Cancun(t *testing.T) {
beaconState, err := state_native.InitializeFromProtoDeneb(base)
require.NoError(t, err)
currentEpoch := time.CurrentEpoch(beaconState)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
// Note: In Deneb, only validators indices before `MaxPerEpochActivationChurnLimit` should be activated.
@@ -257,7 +256,7 @@ func TestProcessRegistryUpdates_ActivationCompletes(t *testing.T) {
}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, params.BeaconConfig().MaxSeedLookahead, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -281,7 +280,7 @@ func TestProcessRegistryUpdates_ValidatorsEjected(t *testing.T) {
}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, params.BeaconConfig().MaxSeedLookahead+1, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -306,7 +305,7 @@ func TestProcessRegistryUpdates_CanExits(t *testing.T) {
}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, err := epoch.ProcessRegistryUpdates(context.Background(), beaconState)
newState, err := epoch.ProcessRegistryUpdates(t.Context(), beaconState)
require.NoError(t, err)
for i, validator := range newState.Validators() {
assert.Equal(t, exitEpoch, validator.ExitEpoch, "Could not update registry %d, unexpected exit slot", i)
@@ -386,7 +385,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
name: "before capella can process and get historical root",
st: func() state.BeaconState {
st, _ := util.DeterministicGenesisState(t, 1)
st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
st, err := transition.ProcessSlots(t.Context(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
require.NoError(t, err)
return st
},
@@ -410,7 +409,7 @@ func TestProcessHistoricalDataUpdate(t *testing.T) {
name: "after capella can process and get historical summary",
st: func() state.BeaconState {
st, _ := util.DeterministicGenesisStateCapella(t, 1)
st, err := transition.ProcessSlots(context.Background(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
st, err := transition.ProcessSlots(t.Context(), st, params.BeaconConfig().SlotsPerHistoricalRoot-1)
require.NoError(t, err)
return st
},

View File

@@ -1,7 +1,6 @@
package precompute_test
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
@@ -206,10 +205,10 @@ func TestProcessAttestations(t *testing.T) {
for i := 0; i < len(pVals); i++ {
pVals[i] = &precompute.Validator{CurrentEpochEffectiveBalance: 100}
}
pVals, _, err = precompute.ProcessAttestations(context.Background(), beaconState, pVals, &precompute.Balance{})
pVals, _, err = precompute.ProcessAttestations(t.Context(), beaconState, pVals, &precompute.Balance{})
require.NoError(t, err)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
committee, err := helpers.BeaconCommitteeFromState(t.Context(), beaconState, att1.Data.Slot, att1.Data.CommitteeIndex)
require.NoError(t, err)
indices, err := attestation.AttestingIndices(att1, committee)
require.NoError(t, err)
@@ -218,7 +217,7 @@ func TestProcessAttestations(t *testing.T) {
t.Error("Not a prev epoch attester")
}
}
committee, err = helpers.BeaconCommitteeFromState(context.Background(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
committee, err = helpers.BeaconCommitteeFromState(t.Context(), beaconState, att2.Data.Slot, att2.Data.CommitteeIndex)
require.NoError(t, err)
indices, err = attestation.AttestingIndices(att2, committee)
require.NoError(t, err)

Some files were not shown because too many files have changed in this diff.