Compare commits

...

121 Commits

Author SHA1 Message Date
james-prysm
b1b3cd11f5 Merge branch 'develop' into beacon-proposer-settings 2024-02-12 13:47:04 -06:00
Preston Van Loon
c4c6b47d9b validator/exit: Print other testnet beaconcha.in URLs (#13610) 2024-02-12 19:36:58 +00:00
Manu NALEPA
06a5548424 Slasher: Fixes double votes false positive and attester slashings duplication. (#13596)
* `Test_processAttestations`: Remove duplicated tests.

* Sort indexed attestations by data root.

* `processAttestations`: Don't return duplicate slashings anymore.

Fix https://github.com/prysmaticlabs/prysm/issues/13592.

* `AttesterDoubleVote`: Rename fields.

* Detect double votes in different batches.

In order to do that:
1. Each attestation of the batch is tested against the other attestations of the batch.
2. Each attestation of the batch is tested against the content of the database.
3. Attestations are saved into the database.
A minimal sketch of this flow follows this commit entry.

Fixes https://github.com/prysmaticlabs/prysm/issues/13590.
2024-02-12 17:35:22 +00:00
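The batch-then-database double-vote flow described in the commit above can be summarized with a small, self-contained sketch. This is not Prysm's actual code: the types (attRecord, recordDB) and the in-memory map standing in for the slasher database are hypothetical; only the order of operations (batch vs. batch, batch vs. database, then save) follows the description.

```go
package main

import "fmt"

// attRecord is a simplified attestation record: which validator voted, for
// which target epoch, and the root of the attestation data that was signed.
type attRecord struct {
	ValidatorIndex uint64
	TargetEpoch    uint64
	DataRoot       [32]byte
}

type voteKey struct {
	ValidatorIndex uint64
	TargetEpoch    uint64
}

// recordDB stands in for the slasher database: one saved vote per (validator, target).
type recordDB map[voteKey]attRecord

// processBatch returns double votes found in a batch by checking the batch
// against itself (step 1) and against previously saved votes (step 2), then
// saving the new votes (step 3).
func processBatch(db recordDB, batch []attRecord) [][2]attRecord {
	var slashable [][2]attRecord
	seen := map[voteKey]attRecord{}
	for _, att := range batch {
		key := voteKey{att.ValidatorIndex, att.TargetEpoch}
		// Step 1: compare against the other attestations of the same batch.
		if prev, ok := seen[key]; ok && prev.DataRoot != att.DataRoot {
			slashable = append(slashable, [2]attRecord{prev, att})
			continue
		}
		// Step 2: compare against the content of the database (earlier batches).
		if prev, ok := db[key]; ok && prev.DataRoot != att.DataRoot {
			slashable = append(slashable, [2]attRecord{prev, att})
			continue
		}
		seen[key] = att
	}
	// Step 3: save the batch so later batches are checked against it.
	for k, v := range seen {
		if _, ok := db[k]; !ok {
			db[k] = v
		}
	}
	return slashable
}

func main() {
	db := recordDB{}
	batch1 := []attRecord{{ValidatorIndex: 1, TargetEpoch: 4, DataRoot: [32]byte{1}}}
	batch2 := []attRecord{{ValidatorIndex: 1, TargetEpoch: 4, DataRoot: [32]byte{2}}}
	fmt.Println(len(processBatch(db, batch1))) // 0: nothing conflicts yet
	fmt.Println(len(processBatch(db, batch2))) // 1: same validator and target, different data root
}
```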
terence
5e74c798d4 Use Gwei for value in get header log (#13608) 2024-02-12 15:14:58 +00:00
Nishant Das
7b955c94ec Reduce Lookahead Steps Parameter (#13599)
* reduce lookahead steps

* test
2024-02-12 14:39:42 +00:00
terence
256a05bfd5 Remove deprecated flags: optional engine and registration (#13606) 2024-02-10 04:59:00 +00:00
Preston Van Loon
03068ba781 db: clear blobs when using --clear-db or --force-clear-db (#13605)
* Call Close() as part of ClearDB

* Add method to clear blob storage

* Clear blob storage when clearing DB
2024-02-10 03:52:30 +00:00
Nishant Das
5df8b83a05 Validate Range Availability (#13587)
* fix it

* check range avail

* add test cases

* add checks

* kasey's review

* gaz
2024-02-09 23:41:08 +00:00
terence
db653b8863 Remove deprecated build block parallel flag (#13539) 2024-02-09 22:13:51 +00:00
Manu NALEPA
af203efa0c Slasher: Refactor and add tests (#13589)
* `helpers.go`: Improve naming consistency.

* `detect_attestations.go`: Improve readability.

* `receive.go`: Add `attsQueueSize` in log message.

* `checkSlashableAttestations`: Improve logging.

`avgBatchProcessingTime` is no longer displayed if no batch is processed.

* `loadChunks`: Use explicit `chunkKind` and `chunkIndices`.

* `getChunk`: Use specific `chunkIndex` and `chunkKind`.

* `validatorIndicesInChunk` -> `validatorIndexesInChunk`.

* `epochUpdateForValidator`: Use explicit arguments.

* `getChunk`: Change order of arguments.

* `latestEpochWrittenForValidator`: Use `ok` parameter.

So the default value is no longer treated as the absence of a value.

* `applyAttestationForValidator`: Use explicit arguments.

* `updateSpans`: Use explicit arguments.

* `saveUpdatedChunks`: Use explicit arguments.

* `checkSurrounds`: Use explicit arguments.

We see here that, previously, in `checkSlashableAttestations`,
`checkSurrounds` was called with the default value of `slashertypes`: `MinSpan`.

Now we set it explicitly to `MinSpan`, which may expose a bug.

* `epochUpdateForValidator`: Put the argument modified by the function first.

* `applyAttestationForValidator`: Put the mutated argument `chunksByChunkIdx` first.

* `applyAttestationForValidator`: Rename variables.

* `Test_processQueuedAttestations`: Fix test.

Two tests were actually exactly the same.

* `updateSpans`: Keep happy path in the outer scope.

Even if in this case the "happy" path means slashing.

* `checkSurrounds`: Rename variable.

* `getChunk`: Avoid side effects.

It adds a few lines for callers, but it no longer modifies its
arguments and it does what it says: it gets a chunk.

* `CheckSlashable`: Flatten.

* `detect_attestations_test.go`: Simplify.

* `CheckSlashable`: Add error log in case of missing attestation.

* `processQueuedAttestations`: Extract a sub function.

So testing will be easier.

* `processAttesterSlashings` and `processProposerSlashings`: Improve.

* `processAttesterSlashings`: Return processed slashings.

* `createAttestationWrapper`: Rename variables.

* `signingRoot` ==> `headerRoot` or `dataRoot`.

Before this commit, there were two types of `signing root` floating around:
- The first one is a real signing root, aka a hash tree root computed from an object root and
a domain. This real signing root is the object ready to be signed.
- The second one is a "false" signing root, which is actually just the hash tree root of an object. This object is either the `Data` field of an attestation or the `Header` field of a block.

Having two different objects with the same name `signing root` is quite confusing.
This commit renames the wrongly named `signing root` objects. A minimal illustration of the distinction follows this commit entry.

* `createAttestationWrapper` => `createAttestationWrapperEmptySig`.

So it's clear for the user that the created attestation wrapper has an empty signature.

* Implement `createAttestationWrapper`.

* `processAttestations`: Return processed attester slashings.

* Test `processAttestations` instead of `processQueuedAttestations`.

By testing `processAttestations` instead of `processQueuedAttestations`, we get rid of a lot of test fixtures, including the 200 ms sleep.

The whole testing duration is shorter.

* `Test_processAttestations`: Allow multiple steps.

* `Test_processAttestations`: Add double steps tests.

Some new failing tests are commented with a corresponding github issue.

* `NextChunkStartEpoch`: Fix function comment.

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* `chunks.go`: Avoid templating log messages.

* `checkSlashableAttestations`: Simplify duration computation.

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-02-09 21:02:18 +00:00
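The `signingRoot` ==> `headerRoot`/`dataRoot` item above distinguishes a real signing root (an object root combined with a domain) from a plain hash tree root. Below is a minimal, hypothetical illustration, not Prysm's SSZ code: `htr` stands in for SSZ hash_tree_root, and `signingRoot` only approximates the spec's compute_signing_root, which hash-tree-roots a SigningData container of object root and domain.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// htr is a placeholder for SSZ hash_tree_root of an object such as an
// attestation's Data field or a block's Header field.
func htr(object []byte) [32]byte {
	return sha256.Sum256(object)
}

// signingRoot approximates compute_signing_root(object, domain): the real
// function hash-tree-roots a SigningData container holding the object root
// and the domain; here we simply hash their concatenation for illustration.
func signingRoot(objectRoot, domain [32]byte) [32]byte {
	return sha256.Sum256(append(objectRoot[:], domain[:]...))
}

func main() {
	attestationData := []byte("attestation data bytes")
	domain := [32]byte{0x01} // e.g. an attester domain for some fork and epoch

	dataRoot := htr(attestationData)          // what the renamed code calls a dataRoot
	signRoot := signingRoot(dataRoot, domain) // the object that actually gets signed

	fmt.Printf("data root:    %x\n", dataRoot)
	fmt.Printf("signing root: %x\n", signRoot)
}
```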
james-prysm
fa1df7ee53 adding unit tests and more logs 2024-02-09 14:43:16 -06:00
terence
5582c558c6 Remove deprecated aggregate parallel flag (#13538) 2024-02-09 20:18:36 +00:00
terence
573d9739ea Remove disable vectorized htr (#13537) 2024-02-09 18:28:13 +00:00
terence
bb18fa3f71 Remove deprecated late reorg flag (#13536) 2024-02-09 17:15:08 +00:00
Aditya Asgaonkar
6a605e6b6d Fork choice filter changes (#13464)
* implement confirmation rule prerequisite - f.c. filter changes

* update tests

* update WORKSPACE for spec v1.4.0-beta.6

* run bazel gazelle

* Fix consensus_spec sha256

* drift also forkchoice time when drifting the service on tests

* update minimal kzg_commitment_inclusion_proof_depth.size

* fix mock engine client

* remove unnecessary helper & revert test changes

* revert change of proof size in minimal preset

* fix tests

* fix loader test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2024-02-09 17:01:44 +00:00
Nishant Das
a0787e2379 Handle Syncing Execution Client (#13597)
* fix it

* manu's review

* fix failing tests
2024-02-09 09:07:48 +00:00
Nishant Das
621bda068d Suppress Unwanted P2P Errors (#13598)
* suppress unwanted errors

* gaz

* mod
2024-02-09 05:13:57 +00:00
james-prysm
48f0bef9bb Merge branch 'develop' into beacon-proposer-settings 2024-02-08 15:14:22 -06:00
Radosław Kapka
91504eb95a Improve vc logs (#13573)
* duties

* atts

* revert some changes

* revert timeTillDuty

* Manu's review

* Revert "Auxiliary commit to revert individual files from 6806ca9fbe18101f58ccb40fe191c61c183735a8"

This reverts commit 0820c870d2627950179b0edf7ce62ee4fa4a03a3.

* remove trash

* more review

* making Manu happy

* test fixes
2024-02-08 18:24:03 +00:00
james-prysm
3920cddb18 Merge branch 'develop' into beacon-proposer-settings 2024-02-08 10:31:33 -06:00
Sammy Rosso
5afb1255fe Add /eth/v1/beacon/deposit_snapshot endpoint (#13514)
* Add endpoint

* Uncomment in InitializeRoutes

* Add test

* Add 404

* Add more checks

* Test improvements

* Ssz

* Add ssz tags

* Add DepositSnapshot to bazel

* Fix tests

* Fix max size

* Resolve conflicts

* Revert untouched code

* Fix test + review

* Lint

* Oops

* Preston + Radek' review

* Only return 3 finalized roots

* Change to deposit contract depth

* Radek' review

* Gaz

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-02-07 22:53:08 +00:00
Manu NALEPA
9d6160e112 Slasher: Remove unused RPC. (#13594) 2024-02-07 21:11:58 +00:00
james-prysm
1383546999 Beacon API: get blob fix retention cases (#13585)
* fixing the handling for certain cases

* fixing tests

* Update beacon-chain/rpc/eth/blob/handlers_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* update comment based on review

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-02-06 22:31:17 +00:00
Justin Traglia
01116f7f82 Fix a few minor nits in protobuf definitions (#13512) 2024-02-06 21:17:32 +00:00
Thabokani
692ebd313f Fix typos in doc (#13583)
Signed-off-by: Thabokani <149070269+Thabokani@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-06 10:18:21 +00:00
Nishant Das
6fa656c1ee Add Sync Checker (#13580)
* fix it

* add it in

* typo

* fix tests

* fix tests

* export and add test

* preston's review
2024-02-06 02:34:30 +00:00
james-prysm
f102689c2c Merge branch 'develop' into beacon-proposer-settings 2024-02-05 10:25:03 -06:00
Dhruv Bodani
55a29a4670 Implement beacon committee selections (#13503)
* implement beacon committee selections

* fix build

* fix lint

* fix lint

* Update beacon-chain/rpc/eth/shared/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* move beacon committee selection structs to validator module

* fix bazel build files

* add support for POST and GET endpoints for get state validators query

* add a handler to return error from beacon node

* move beacon committee selection to validator top-level module

* fix bazel

* re-arrange fields to fix lint

* fix TestServer_InitializeRoutes

* fix build and lint

* fix build and lint

* fix TestSubmitAggregateAndProof_Distributed

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-02-05 15:43:51 +00:00
james-prysm
fd132103fd Merge branch 'develop' into beacon-proposer-settings 2024-02-05 09:08:44 -06:00
Potuz
e2e7e84a96 Get the right head state when proposing a failed reorg (#13579)
* Get the right head state when proposing a failed reorg

* add unit test

* split logic
2024-02-05 13:40:35 +00:00
terence
91b0a93df7 Enhance EL block height log (#13582) 2024-02-05 01:52:01 +00:00
Preston Van Loon
8839015312 docker: Add coreutils to docker images (#13564)
* Add coreutils to docker images

* add coreutils dependencies

* Add a prysmaticlabs.com/uploads backup of the deb files

* Run gazelle and fix issues

* Remove broken tar, change http_archive deps to debian_archive, remove http mirrors in favor of snapshot

* Add comments about which deps are required by other deps
2024-02-03 19:21:21 +00:00
terence
61ab4bf7ca Rename block by range request log (#13561) 2024-02-03 19:20:04 +00:00
Radosław Kapka
e3ce1bde45 Move API structs to api module (#13577) 2024-02-03 11:57:01 +00:00
Nishant Das
9d1189b222 Do Not Cache For Non Active Public Keys (#13581)
* fix it

* clean up
2024-02-03 05:19:54 +00:00
KeienWang
74f5452a64 Fix typo in [beacon-chain/cache/depositsnapshot/deposit_cache_test.go]: Corrected a spelling error. (#13532)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-03 05:14:32 +00:00
Nishant Das
ea1204d3c7 Fix Slashing Gossip Checks (#13574)
* fix it

* add for proposals too
2024-02-02 23:13:22 +00:00
Radosław Kapka
d9ac69752b Return consensus block value in Wei (#13575)
* Return consensus block value in Wei

* Return consensus block value in Wei

* review
2024-02-02 18:17:40 +00:00
terence
52af63f25a Revise blob sidecar not found log (#13571)
* Update blob sidecar not found log

* Use fields
2024-02-01 20:48:59 +00:00
james-prysm
2dad245bc8 handle slice out of range (#13568)
* handle slice out of range

* adding some tests
2024-02-01 16:59:40 +00:00
Potuz
9a9990605c Update Gohashtree to v0.0.4-beta (#13569)
* Update Gohashtree to v0.0.4-beta

* go mod tidy

* go mod tidy

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-02-01 15:42:56 +00:00
james-prysm
2cddb5ca86 fixing jwt auth checks (#13565) 2024-02-01 15:13:52 +00:00
Nishant Das
73ce28c356 make it the default (#13556) 2024-01-31 10:27:26 +00:00
Manu NALEPA
7a294e861e Beacon node slasher improvement (#13549)
* Slasher: Ensure all goroutines are stopped before running `Stop` actions.

Fixes #13550.
In tests, `exitChan` is now useless since waitgroups are used to wait
for all goroutines to be stopped.

* `slasher.go`: Add comments and rename some variables. - NFC

* `detect_blocks.go`: Improve. - NFC

- Rename some variables.
- Add comments.
- Use second element of `range` when possible.

* `chunks.go`: Remove `_` receivers. - NFC

* `validateAttestationIntegrity`: Improve documentation. - NFC

* `filterAttestations`: Avoid `else` and rename variable. - NFC

* `slasher.go`: Fix and add comments.

* `SaveAttestationRecordsForValidators`: Remove unused code.

* `LastEpochWrittenForValidators`: Name variables consistently. - NFC

Avoid mixing `indice(s)` and `index(es)`.

* `SaveLastEpochsWrittenForValidators`: Name variables consistently. - NFC

* `CheckAttesterDoubleVotes`: Rename variables and add comments. - NFC

* `schema.go`: Add comments. - NFC

* `processQueuedAttestations`: Add comments. - NFC

* `checkDoubleVotes`: Rename variable. - NFC

* `Test_processQueuedAttestations`: Ensure there is no error log.

* `shouldNotBeSlashable` => `shouldBeSlashable`

* `Test_processQueuedAttestations`: Add 2 test cases:
- Same target with different signing roots
- Same target with same signing roots

* `checkDoubleVotesOnDisk` ==> `checkDoubleVotes`.

Before this commit, `checkDoubleVotes` did two tasks:
- Checking if there are any slashable double votes in the input
  list of attestations with respect to each other.
- Checking if there are any slashable double votes in the input
  list of attestations with respect to our database.

However, `checkDoubleVotes` is called only in
`checkSlashableAttestations`.

And `checkSlashableAttestations` is called only in:
- `processQueuedAttestations`, and in
- `IsSlashableAttestation`

Study of case `processQueuedAttestations`:
---------------------------------------------
In `processQueuedAttestations`, `checkSlashableAttestations`
is ALWAYS called after
`Database.SaveAttestationRecordsForValidators`.

It means that, when calling `checkSlashableAttestations`,
`validAtts` are ALREADY stored in the DB.

Each attestation of `validAtts` will be checked twice:
- Against the other attestations of `validAtts` (the portion of
  deleted code)
- Against the content of the database.

One of those two checks is redundant.
==> We can remove the check against other attestations in `validAtts`.

Study of case `Database.SaveAttestationRecordsForValidators`:
----------------------------------------------------------------
In `Database.SaveAttestationRecordsForValidators`,
`checkSlashableAttestations` is ALWAYS called with a list of
attestations containing only ONE attestation.

This single attestation will be checked twice:
- Against itself, and an attestation cannot conflict with itself.
- Against the content of the database.

==> We can remove the check against other attestations in `validAtts`.

=========================

In both cases, we showed that we can remove the check of attestations
against the content of `validAtts`, and the corresponding test
`Test_checkDoubleVotes_SlashableInputAttestations`.

* `Test_processQueuedBlocks_DetectsDoubleProposals`: Wrap proposals.

So we can add new proposals later.

* Fix slasher multiple proposals false negative.

If a first batch of blocks is sent with:
- validator 1 - slot 4 - signing root 1
- validator 1 - slot 5 - signing root 1

Then, if a second batch of blocks is sent with:
- validator 1 - slot 4 - signing root 2

Because we have two blocks proposed by the same validator (1) and for
the same slot (4), but with two different signing roots (1 and 2), the
validator 1 should be slashed.

This was not the case before this commit.
A new test case has been added to check this. A minimal sketch of this cross-batch check follows this commit entry.

Fixes #13551

* `params.go`: Change comments. - NFC

* `CheckSlashable`: Keep the happy path without indentation.

* `detectAllAttesterSlashings` => `checkSurrounds`.

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* `CheckAttesterDoubleVotes`: Keep happy path without indentation.

Well, even if, in our case, the "happy path" means slashing.

* 'SaveAttestationRecordsForValidators': Save the first attestation.

In case of multiple votes, arbitrarily save the first attestation.
Saving the first one in particular has no functional impact,
since in any case all attestations will be tested against
the content of the database. So all but the first one will be
detected as slashable.

However, saving the first one rather than another lets us avoid
modifying the end-to-end tests, since they expect the first one
to be saved in the database.

* Rename `min` => `minimum`.

So it does not conflict with the new `min` built-in function.

* `couldNotSaveSlashableAtt` ==> `couldNotCheckSlashableAtt`

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-31 09:49:14 +00:00
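The cross-batch double-proposal case described in the commit above (same validator and slot, different signing roots, arriving in different batches) can be sketched as follows. The types proposal and proposerDB are hypothetical stand-ins, not Prysm's slasher database API; only the check-then-save order mirrors the description.

```go
package main

import "fmt"

// proposal is a simplified block-proposal record.
type proposal struct {
	ValidatorIndex uint64
	Slot           uint64
	SigningRoot    [32]byte
}

type proposalKey struct {
	ValidatorIndex uint64
	Slot           uint64
}

// proposerDB stands in for the slasher database: one saved proposal per (validator, slot).
type proposerDB map[proposalKey]proposal

// processProposals returns slashable pairs: a new proposal conflicts when a
// proposal for the same validator and slot is already saved with a different
// signing root, regardless of which batch it arrived in.
func processProposals(db proposerDB, batch []proposal) [][2]proposal {
	var slashings [][2]proposal
	for _, p := range batch {
		key := proposalKey{p.ValidatorIndex, p.Slot}
		if prev, ok := db[key]; ok {
			if prev.SigningRoot != p.SigningRoot {
				slashings = append(slashings, [2]proposal{prev, p})
			}
			continue // keep the first saved proposal
		}
		db[key] = p
	}
	return slashings
}

func main() {
	db := proposerDB{}
	// First batch: validator 1, slots 4 and 5, signing root 1.
	processProposals(db, []proposal{
		{ValidatorIndex: 1, Slot: 4, SigningRoot: [32]byte{1}},
		{ValidatorIndex: 1, Slot: 5, SigningRoot: [32]byte{1}},
	})
	// Second batch: validator 1, slot 4, signing root 2 -> should be slashable.
	s := processProposals(db, []proposal{{ValidatorIndex: 1, Slot: 4, SigningRoot: [32]byte{2}}})
	fmt.Println("slashable pairs:", len(s)) // 1
}
```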
james-prysm
258123341e add a log and update size for promptui (#13542) 2024-01-30 17:19:31 +00:00
Preston Van Loon
224b136737 Revert "set limit to multiple of burst for goerli" (#13552)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-30 06:10:12 +00:00
Nishant Das
3ed4866eec Makes Our New Deposit Trie The Default (#13555)
* make 4881 the default

* fix failed build
2024-01-30 05:15:52 +00:00
kasey
373c853d17 set limit to multiple of burst for goerli (#13544)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-27 22:12:08 +00:00
james-prysm
b71312f575 update the comments of auto generated proto 2024-01-26 14:49:01 -06:00
james-prysm
4540cd5cdc moving proposer settings outside of keymanager.proto 2024-01-26 14:47:12 -06:00
james-prysm
d33f1f2d98 fixing typo 2024-01-26 14:12:10 -06:00
james-prysm
3dc45c2041 adding flags to main.go 2024-01-26 14:10:47 -06:00
james-prysm
f7a04ab66d Merge branch 'develop' into beacon-proposer-settings 2024-01-26 14:06:13 -06:00
james-prysm
01eb14a0f0 poc on proposer settings for updating tracked cache 2024-01-26 13:58:14 -06:00
terence
23b0718b5f Add metric for data availability wait time (#13534)
* Add metric for data availability wait time

* Kasey's feedback

* Kasey's feedback
2024-01-26 18:17:25 +00:00
terence
3a9854145c Correct metrics from ns to ms (#13540) 2024-01-26 17:43:30 +00:00
Radosław Kapka
1b70d2b566 Fetch unaggregated atts in GetAggregateAttestation (#13533) 2024-01-26 17:08:58 +00:00
Nishant Das
59b310a221 make it the same (#13531) 2024-01-26 05:35:27 +00:00
Nishant Das
22b6d1751d Enable Backfill in E2E (#13524)
* enable backfill for devmode

* enable backfill

* gaz

* move to its own package

* fix panic

* fix bug

* gaz

* kasey's review
2024-01-26 04:37:41 +00:00
Potuz
9c13d47f4c fix off by one (#13529) 2024-01-26 00:05:56 +00:00
Justin Traglia
835dce5f6e Enable wastedassign linter & fix findings (#13507)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-25 17:07:48 +00:00
james-prysm
c4c28e4825 fixing small typo in error messages (#13525) 2024-01-25 04:56:17 +00:00
Radosław Kapka
c996109b3a Return payload value in Wei from /eth/v3/validator/blocks (#13497)
* Add value in Wei to execution payload

* simplify how payload is returned

* test fix

* fix issues

* review

* fix block handlers
2024-01-24 20:58:35 +00:00
terence
e397f8a2bd Skip origin root when cleaning dirty state (#13521)
* Skip origin root when cleaning dirty state

* Clean up
2024-01-24 17:22:50 +00:00
Radosław Kapka
6438060733 Clear cache everywhere in tests of core helpers (#13509) 2024-01-24 16:11:43 +00:00
Nishant Das
a2892b1ed5 clean up validate beacon block (#13517) 2024-01-24 05:48:15 +00:00
Nishant Das
f4ab2ca79f lower it (#13516) 2024-01-24 01:28:36 +00:00
kasey
dbcf5c29cd moving some blob rpc validation close to peer read (#13511)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 22:54:16 +00:00
james-prysm
c9fe53bc32 Blob API: make errors more generic (#13513)
* make api response more generic

* gaz
2024-01-23 20:07:46 +00:00
terence
8522febd88 Add Holesky Deneb Epoch (#13506)
* Add Holesky Deneb Epoch

* Fix fork version

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix config

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-23 19:29:17 +00:00
james-prysm
75a28310c2 fixing route to match specs (#13510) 2024-01-23 18:04:03 +00:00
kasey
1df173e701 Block backfilling (#12968)
* backfill service

* fix bug where origin state is never unlocked

* support mvslice states

* use renamed interface

* refactor db code to skip block cache for backfill

* lint

* add test for verifier.verify

* enable service in service init test

* cancellation cleanup

* adding nil checks to configset juggling

* assume blocks are available by default

As long as we're sure the AvailableBlocker is initialized correctly
during node startup, defaulting to assuming we aren't in a checkpoint
sync simplifies things greatly for tests.

* block saving path refactor and bugfix

* fix fillback test

* fix BackfillStatus init tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 07:54:30 +00:00
terence
3187a05a76 Align aggregated att gossip validations (#13490)
* Align aggregated att gossip validations

* Feedback on reusing existing methods

* Nishant's feedback

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 04:37:06 +00:00
Justin Traglia
4e24102237 Fix minor issue in blsToExecChange validator (#13498)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 03:26:57 +00:00
james-prysm
8dd5e96b29 re-enabling jwt on keymanager API (#13492)
* re-enabling jwt on keymanager API

* adding tests

* Update validator/rpc/intercepter.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* handling error in test

* remove debugging logs

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-22 22:16:10 +00:00
james-prysm
4afb379f8d cleanup duties naming (#13451)
* updating some naming to reflect changes to duties

* fixing unit tests

* fixing more tests
2024-01-22 16:58:25 +00:00
Nishant Das
5a2453ac9c Add Debug State Transition Method (#13495)
* add it

* lint
2024-01-22 14:46:20 +00:00
Nishant Das
e610d2a5de fix it (#13496) 2024-01-22 14:26:14 +00:00
Preston Van Loon
233aaf2f9e e2e: Fix multiclient lighthouse flag removal (#13494) 2024-01-21 21:11:11 +00:00
Nishant Das
a49bdcaa1f fix it (#13493) 2024-01-20 16:15:38 +00:00
Gaki
bdd7b2caa9 chore: typo fix (#13461)
* messsage

* cancellation
2024-01-20 01:07:17 +00:00
terence
8de0e3804b Update Sepolia Deneb fork epoch (#13491) 2024-01-19 18:47:07 +00:00
Ying Quan Tan
bfb648067b Re-enable Slasher E2E Test (#13420)
* re-enable e2e slashing test #12415

* refactored slashing evaluator

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-19 04:44:27 +00:00
terence
852db1f3eb Remove debug setting highest slot log (#13488) 2024-01-19 04:25:15 +00:00
Nishant Das
5d3663ef8d update lighthouse and tests (#13470) 2024-01-19 03:46:36 +00:00
Radosław Kapka
a608630727 Add Inactivity field ro attestation rewards (#13382) 2024-01-18 18:51:35 +00:00
Mario Vega
37739b4193 fix blobsidecar json tag for commitment inclusion proof (#13475)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-18 17:43:43 +00:00
james-prysm
4d2067dbae bugfix: ssz post-requests should check content type not accept (#13482)
* updating post requests that accept ssz to check content type instead of accept header

* radek's review comments to make things more clear
2024-01-18 17:41:31 +00:00
Nishant Das
fc05e306dd Allow Pcli to Run State Transitions Easily (#13484)
* add all this in

* gaz

* add flag
2024-01-18 14:44:06 +00:00
Radosław Kapka
204de13c86 REST VC: Subscribe to Beacon API events (#13453)
* Revert "Revert "REST VC: Subscribe to Beacon API events  (#13354)" (#13428)"

This reverts commit 8d092a1113.

* change logic

* review

* test fix

* fix critical error

* merge flag check

* change error msg

* return on errors

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-18 14:27:41 +00:00
terence
f3ef1b64d6 Enhance block by root log (#13472) 2024-01-18 13:43:10 +00:00
terence
c3dbfa66d0 Change blob latency metrics to ms (#13481) 2024-01-17 23:28:42 +00:00
terence
93aba997f4 Move checking of attribute empty earlier (#13465) 2024-01-17 18:42:56 +00:00
Potuz
79bb7efbf8 Check init sync before getting payload attributes (#13479)
* Check init sync before getting payload attributes

This PR adds a helper to forkchoice to return the delay of the latest
imported block. It also adds a helper with a heuristic to check whether the
node is in init sync. If the highest imported node was imported with
a delay of less than an epoch, then the node is considered in regular
sync. If, on the other hand, the highest imported node is additionally
more than two epochs old, then the node is considered in init sync. A
minimal sketch of this heuristic follows this commit entry.

The helper to check this only uses forkchoice and therefore requires a
read lock. There are four paths that call this:

1) During regular block processing, we defer a function to send the
   second FCU call with attributes. This function may not be called at
all if we are not regularly syncing
2) During regular block processing, we check the payload attributes in the
   path `postBlockProces->getFCUArgs->computePayloadAttributes` if we are
   syncing a late block. In this case forkchoice is already locked, and we add
   a call in `getFCUArgs` to return early if not regularly syncing
3) During handling of late blocks on `lateBlockTasks` we simply return
   early if not in regular sync (This is the biggest change as it takes
a longer FC lock for lateBlockTasks)
4) On Attestation processing, in UpdateHead, we are already locked so we
   just add a check to not update head on this path if not regularly
syncing.

* fix build

* Fix mocks
2024-01-17 15:39:28 +00:00
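Below is a minimal sketch of the sync heuristic described in the commit above, under assumed mainnet timing constants (12-second slots, 32-slot epochs). headBlockInfo is a hypothetical stand-in for what forkchoice reports about the highest imported block; it is not the helper added by the PR.

```go
package main

import (
	"fmt"
	"time"
)

const (
	secondsPerSlot = 12
	slotsPerEpoch  = 32
	epochDuration  = secondsPerSlot * slotsPerEpoch * time.Second
)

// headBlockInfo is a hypothetical stand-in for what forkchoice would report
// about the highest imported block: when its slot started and when the block
// was actually imported.
type headBlockInfo struct {
	SlotStart  time.Time
	ImportedAt time.Time
}

// inRegularSync: the highest imported block was imported with a delay of less
// than one epoch.
func inRegularSync(h headBlockInfo) bool {
	return h.ImportedAt.Sub(h.SlotStart) < epochDuration
}

// inInitSync: not in regular sync and, in addition, the highest imported block
// is more than two epochs old.
func inInitSync(h headBlockInfo, now time.Time) bool {
	return !inRegularSync(h) && now.Sub(h.SlotStart) > 2*epochDuration
}

func main() {
	now := time.Now()
	recent := headBlockInfo{SlotStart: now.Add(-4 * time.Second), ImportedAt: now}
	stale := headBlockInfo{SlotStart: now.Add(-3 * epochDuration), ImportedAt: now}
	fmt.Println(inRegularSync(recent), inInitSync(recent, now)) // true false
	fmt.Println(inRegularSync(stale), inInitSync(stale, now))   // false true
}
```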
terence
87b53db3b4 Capitalize Aggregated Unaggregated Attestations Log (#13473) 2024-01-17 13:30:31 +00:00
terence
fe431b9201 Use correct HistoricalRoots (#13477) 2024-01-17 08:14:32 +00:00
james-prysm
790a09f9b1 Improve wait for activation (#13448)
* removing timeout on wait for activation, instead switched to an event driven approach

* fixing unit tests

* linting

* simplifying return

* adding sleep for the remaining slot to avoid cpu spikes

* removing ifstatement on log

* removing ifstatement on log

* improving switch statement

* removing the loop entirely

* fixing unit test

* fixing manu's reported issue with deletion of json file

* missed change around writefile at path

* gofmt

* fixing deepsource issue with reading file

* trying to clean file to avoid deepsource issue

* still getting error trying a different approach

* fixing stream loop

* fixing unit test

* Update validator/keymanager/local/keymanager.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-01-16 17:04:54 +00:00
Manu NALEPA
46387a903a getLegacyDatabaseLocation: Change message. (#13471)
* `getLegacyDatabaseLocation`: Change message.

* Update validator/node/node.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-16 11:29:36 +00:00
Nishant Das
6a65e07684 Add Spans to Core Validator Methods (#13467)
* add traces

* gaz
2024-01-16 07:52:46 +00:00
Potuz
abef94d7ad do not check optimistic status if cached attestation (#13462)
* do not check optimistic status if cached attestation

* Gazelle

* Gazelle again

* fix nil panics

* more nil checks

* more nil checks

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-15 18:50:33 +00:00
Manu NALEPA
99a8d0bac6 Validator client - Improve readability - NO FUNCTIONAL CHANGE (#13468)
* Improve `NewServiceRegistry` documentation.

* Improve `README.md`.

* Improve readability of `registerValidatorService`.

* Move `log` in `main.go`.

Since `log` is only used in `main.go`.

* Clean Tos.

* `DefaultDataDir`: Use `switch` instead of `if/elif`.

* `ReadPassword`: Remove unused receiver.

* `validator/main.go`: Clean.

* `WarnIfPlatformNotSupported`: Add Mac OSX ARM64.

* `runner.go`: Use idiomatic `err` handling.

* `waitForChainStart`: Avoid `chainStartResponse` mutation.

* `WaitForChainStart`: Reduce cognitive complexity.

* Logs: `powchain` ==> `execution`.
2024-01-15 14:46:54 +00:00
Preston Van Loon
b585ff77f5 Fix port logging in bootnode (#13457) 2024-01-15 04:38:22 +00:00
Nishant Das
1ff5a43385 Add the Ability to Defragment the Beacon State (#13444)
* Defragment head state

* change log level

* change it to be more efficient

* add flag

* add tests and clean up

* fix it

* gosimple

* Update container/multi-value-slice/multi_value_slice.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* unlock it

* remove from fc lock

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2024-01-13 05:44:02 +00:00
dependabot[bot]
0cfbddc980 Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4 (#13445)
* Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4

Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.39.3 to 0.39.4.
- [Release notes](https://github.com/quic-go/quic-go/releases)
- [Changelog](https://github.com/quic-go/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/quic-go/quic-go/compare/v0.39.3...v0.39.4)

---
updated-dependencies:
- dependency-name: github.com/quic-go/quic-go
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Ran gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-12 18:12:19 +00:00
Manu NALEPA
22a484c45e Fixes issues when running validator client with the --web flag and a non-existing validator.db file AND/OR prysm-wallet-v2 directory. (#13460)
* `getLegacyDatabaseLocation`: Add tests.

* `getLegacyDatabaseLocation`: Handle `c.wallet == nil`.

* `saveAuthToken`: Create parent directory if needed.
2024-01-12 15:53:27 +00:00
terence
6ddafe1159 Delete invalid blob at block processing (#13456)
* Delete invalid blob at block processing

* Fix test
2024-01-12 08:09:45 +00:00
qinlz2
b8c5af665f [3/5] light client events (#13225)
* add http streaming light client events

* expose ForkChoiceStore

* return error in insertFinalizedDeposits

* send light client updates

* Revert "return error in insertFinalizedDeposits"

This reverts commit f7068663b8c8b3a3bf45950d5258011a5e4d803e.

* fix: lint

* fix: patch the wrong error response

* refactor: rename the JSON structs

* fix: LC finalized stream return correct format

* fix: LC op stream return correct JSON format

* fix: omit nil JSON fields

* chore: gazelle

* fix: make update by range return list directly based on spec

* chore: remove unnecessary json annotations

* chore: adjust comments

* feat: introduce EnableLightClientEvents feature flag

* feat: use enable-lightclient-events flag

* chore: more logging details

* chore: fix rebase errors

* chore: adjust data structure to save mem

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* refactor: rename config EnableLightClient

* refactor: rename feature flag

* refactor: move helper functions to helper pkg

* test: fix broken unit tests

---------

Co-authored-by: Nicolás Pernas Maradei <nicolas@polymerlabs.org>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-11 18:38:59 +00:00
Radosław Kapka
2875ce6ee1 Use a single rest handler (#13446) 2024-01-11 16:03:35 +00:00
Manu NALEPA
a883ae2a76 BN: Move --db-backup-output-dir as a deprecated flag. (#13450) 2024-01-11 14:11:36 +00:00
Preston Van Loon
3a2b486bde Bazel 7.0.0 (#13321) 2024-01-10 15:34:11 +00:00
terence
283e09569d Remove old blob types (#13438)
* Remove old types

* Gen

* Remove old types

* Gen

* Fix lint

* Rm unused key

* Kasey's comment
2024-01-10 09:38:06 +00:00
Preston Van Loon
69723b4a77 Update go to 1.21.6 (#13440) 2024-01-10 09:37:40 +00:00
psr
4fe6834ba5 http endpoint cleanup (#13432)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-09 23:48:43 +00:00
Preston Van Loon
98e3f2b80f sort static analyzers, add more, fix violations (#13441) 2024-01-09 23:29:36 +00:00
Enrico Del Fante
2aef7a3ec5 Update teku's bootnode (#13437) 2024-01-09 22:28:41 +00:00
Brandon Liu
c41a54be9d fix metric for exited validator (#13379)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 22:15:53 +00:00
Justin Traglia
7e65378f63 Check sidecar index in BlobSidecarsByRoot response (#13180)
* Check sidecar index in BlobSidecarsByRoot response

* Remove unnecessary MaxBlobsPerBlock check
2024-01-09 22:14:56 +00:00
Justin Traglia
cf606e3766 Only process blocks which haven't been processed (#13442)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-09 22:14:03 +00:00
Justin Traglia
703cfc5819 Initialize exec payload fields and enforce order (#13372)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 21:49:35 +00:00
GoodDaisy
c6ebe157a6 Fix typos (#13435) 2024-01-09 21:03:36 +00:00
Preston Van Loon
a3cc81a048 Add nil check for head in IsOptimistic (#13439) 2024-01-09 19:40:26 +00:00
596 changed files with 17566 additions and 10168 deletions

View File

@@ -1 +1 @@
6.4.0
7.0.0

View File

@@ -10,7 +10,7 @@
# Prysm specific remote-cache properties.
build:remote-cache --remote_download_minimal
build:remote-cache --experimental_remote_build_event_upload=minimal
build:remote-cache --remote_build_event_upload=minimal
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
# Does not work with rules_oci. See https://github.com/bazel-contrib/rules_oci/issues/292
#build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092

View File

@@ -80,7 +80,6 @@ linters:
- thelper
- unparam
- varnamelen
- wastedassign
- wrapcheck
- wsl

View File

@@ -194,33 +194,6 @@ nogo(
config = ":nogo_config_with_excludes",
visibility = ["//visibility:public"],
deps = [
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
@@ -236,6 +209,53 @@ nogo(
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
"@org_golang_x_tools//go/analysis/passes/appends:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
# cgocall disabled
#"@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/framepointer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpmux:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ifaceassert:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/reflectvaluecompare:go_default_library",
# shadow disabled
#"@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sigchanyzer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/slog:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sortslice:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stringintconv:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/testinggoroutine:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/timeformat:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedresult:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedwrite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/usesgenerics:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],

View File

@@ -55,7 +55,7 @@ bazel build //beacon-chain --config=release
## Adding / updating dependencies
1. Add your dependency as you would with go modules. I.e. `go get ...`
1. Run `gazelle update-repos -from_file=go.mod` to update the bazel managed dependencies.
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
Example:

6
MODULE.bazel Normal file
View File

@@ -0,0 +1,6 @@
###############################################################################
# Bazel now uses Bzlmod by default to manage external dependencies.
# Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel.
#
# For more details, please check https://github.com/bazelbuild/bazel/issues/18958
###############################################################################

1245
MODULE.bazel.lock generated Normal file

File diff suppressed because it is too large

View File

@@ -106,6 +106,13 @@ load("@rules_distroless//distroless:dependencies.bzl", "rules_distroless_depende
rules_distroless_dependencies()
http_archive(
name = "distroless",
integrity = "sha256-Cf00kUp1NyXA3LzbdyYy4Kda27wbkB8+A9MliTxq4jE=",
strip_prefix = "distroless-9dc924b9fe812eec2fa0061824dcad39eb09d0d6",
url = "https://github.com/GoogleContainerTools/distroless/archive/9dc924b9fe812eec2fa0061824dcad39eb09d0d6.tar.gz", # 2024-01-24
)
load("@aspect_bazel_lib//lib:repositories.bzl", "aspect_bazel_lib_dependencies", "aspect_bazel_lib_register_toolchains")
aspect_bazel_lib_dependencies()
@@ -144,6 +151,10 @@ http_archive(
],
)
load("//:distroless_deps.bzl", "distroless_deps")
distroless_deps()
# Override default import in rules_go with special patch until
# https://github.com/gogo/protobuf/pull/582 is merged.
git_repository(
@@ -182,7 +193,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.21.5",
go_version = "1.21.6",
nogo = "@//:nogo",
)
@@ -223,7 +234,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0-beta.5"
consensus_spec_version = "v1.4.0-beta.6"
bls_test_version = "v0.1.1"
@@ -239,7 +250,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9017ffff84d64a7c4c9e6ff9f421f9479f71d3b463b738f54e02158dbb4f50f0",
sha256 = "7dc467d7be97525c88a1d3683665c1354cc86297fd62009e7cf5000905b25652",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -255,7 +266,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "f08711682553fe7c9362f1400ed8c56b2fa9576df08581fcad4c508ba8ad4788",
sha256 = "e163011254b6ce100205fb779ba660faedc9bc9f7bb4408c25746a7aa5e8d8bc",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -271,7 +282,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "7ea3189e3879f2ac62467cbf2945c00b6c94d30cdefb2d645c630b1018c50e10",
sha256 = "b73c81b6386053a2141f6f43b457489668621c7013f740ed93edf9ac0e34f091",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -286,7 +297,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "4119992a2efc79e5cb2bdc07ed08c0b1fa32332cbd0d88e6467f34938df97026",
sha256 = "47726c527512d03ef3e706a8e7f8d5db6a5f2153351db0470dab780f6a87c4dd",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -349,17 +360,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9f66d8d5644982d3d0d2e3d2b9ebe77a5f96638a5d7fcd715599c32818195cb3",
strip_prefix = "holesky-ea39b9006210848e13f28d92e12a30548cecd41d",
url = "https://github.com/eth-clients/holesky/archive/ea39b9006210848e13f28d92e12a30548cecd41d.tar.gz", # 2023-09-21
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
)
http_archive(
name = "com_google_protobuf",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
],
)

View File

@@ -12,11 +12,8 @@ go_library(
deps = [
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/config:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/prysm/beacon:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -17,10 +17,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/config"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
apibeacon "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/beacon"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/network/forks"
@@ -150,8 +147,8 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
if err != nil {
return nil, errors.Wrapf(err, "error requesting fork by state id = %s", stateId)
}
fr := &shared.Fork{}
dataWrapper := &struct{ Data *shared.Fork }{Data: fr}
fr := &structs.Fork{}
dataWrapper := &struct{ Data *structs.Fork }{Data: fr}
err = json.Unmarshal(body, dataWrapper)
if err != nil {
return nil, errors.Wrap(err, "error decoding json response in GetFork")
@@ -179,12 +176,12 @@ func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, er
}
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*config.GetSpecResponse, error) {
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
fsr := &config.GetSpecResponse{}
fsr := &structs.GetSpecResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
@@ -259,7 +256,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
if err != nil {
return nil, err
}
v := &apibeacon.GetWeakSubjectivityResponse{}
v := &structs.GetWeakSubjectivityResponse{}
err = json.Unmarshal(body, v)
if err != nil {
return nil, err
@@ -285,7 +282,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
// SubmitChangeBLStoExecution calls a beacon API endpoint to set the withdrawal addresses based on the given signed messages.
// If the API responds with something other than OK there will be failure messages associated to the corresponding request message.
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shared.SignedBLSToExecutionChange) error {
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*structs.SignedBLSToExecutionChange) error {
u := c.BaseURL().ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
body, err := json.Marshal(request)
if err != nil {
@@ -324,12 +321,12 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shar
// GetBLStoExecutionChanges gets all the set withdrawal messages in the node's operation pool.
// Returns a struct representation of json response.
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExecutionChangesPoolResponse, error) {
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToExecutionChangesPoolResponse, error) {
body, err := c.Get(ctx, changeBLStoExecutionPath)
if err != nil {
return nil, err
}
poolResponse := &beacon.BLSToExecutionChangesPoolResponse{}
poolResponse := &structs.BLSToExecutionChangesPoolResponse{}
err = json.Unmarshal(body, poolResponse)
if err != nil {
return nil, err
@@ -338,7 +335,7 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExe
}
type forkScheduleResponse struct {
Data []shared.Fork
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {

View File

@@ -11,7 +11,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -40,7 +40,7 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -165,7 +165,7 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --
@@ -249,7 +249,7 @@ func (b builderBidDeneb) HashTreeRootWith(hh *ssz.Hasher) error {
// Header --
func (b builderBidDeneb) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"math/big"
"net"
"net/http"
"net/url"
@@ -13,7 +14,7 @@ import (
"text/template"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -266,9 +267,9 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
tracing.AnnotateError(span, err)
return err
}
vs := make([]*shared.SignedValidatorRegistration, len(svr))
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = shared.SignedValidatorRegistrationFromConsensus(svr[i])
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)
if err != nil {
@@ -293,7 +294,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockBellatrix to json marshalable type")
}
@@ -330,7 +331,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockCapella to json marshalable type")
}
@@ -357,7 +358,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadCapella(p, 0)
payload, err := blocks.WrappedExecutionPayloadCapella(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
@@ -367,7 +368,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockDeneb to json marshalable type")
}
@@ -394,7 +395,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadDeneb(p, 0)
payload, err := blocks.WrappedExecutionPayloadDeneb(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}

View File

@@ -13,7 +13,7 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -376,7 +376,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
var req shared.SignedBlindedBeaconBlockDeneb
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
block, err := req.ToConsensus()

View File

@@ -13,7 +13,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/math"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
@@ -38,7 +38,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
},
Signature: make([]byte, 96),
}
a := shared.SignedValidatorRegistrationFromConsensus(svr)
a := structs.SignedValidatorRegistrationFromConsensus(svr)
je, err := json.Marshal(a)
require.NoError(t, err)
// decode with a struct w/ plain strings so we can check the string encoding of the hex fields
@@ -55,7 +55,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Message.Pubkey)
t.Run("roundtrip", func(t *testing.T) {
b := &shared.SignedValidatorRegistration{}
b := &structs.SignedValidatorRegistration{}
if err := json.Unmarshal(je, b); err != nil {
require.NoError(t, err)
}
@@ -1718,7 +1718,7 @@ func TestUint256UnmarshalTooBig(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
b, err := structs.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
@@ -1748,7 +1748,7 @@ func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyCapella(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block-capella.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
b, err := structs.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),

View File

@@ -1,3 +1,7 @@
package api
const WebUrlPrefix = "/v2/validator/"
const (
WebUrlPrefix = "/v2/validator/"
WebApiUrlPrefix = "/api/v2/validator/"
KeymanagerApiPrefix = "/eth/v1"
)
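The new URL prefix constants above are joined with endpoint suffixes when request paths are built. A minimal sketch of such a composition, assuming a hypothetical "/keystores" suffix for the keymanager API (the suffix is illustrative and not part of this diff):
// Hedged sketch: composing a request path from the prefixes defined above
// (package api). The "/keystores" suffix is an assumption for illustration only.
keystoresPath := api.KeymanagerApiPrefix + "/keystores" // -> "/eth/v1/keystores"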

View File

@@ -7,4 +7,6 @@ const (
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)

View File

@@ -0,0 +1,50 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"block.go",
"conversions.go",
"conversions_block.go",
"conversions_state.go",
"endpoints_beacon.go",
"endpoints_blob.go",
"endpoints_builder.go",
"endpoints_config.go",
"endpoints_debug.go",
"endpoints_events.go",
"endpoints_lightclient.go",
"endpoints_node.go",
"endpoints_rewards.go",
"endpoints_validator.go",
"other.go",
"state.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/server/structs",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["conversions_test.go"],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,4 +1,4 @@
package shared
package structs
type SignedBeaconBlock struct {
Message *BeaconBlock `json:"message"`
@@ -325,11 +325,11 @@ type ExecutionPayloadDeneb struct {
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
@@ -345,9 +345,9 @@ type ExecutionPayloadHeaderDeneb struct {
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"fmt"
@@ -23,7 +23,7 @@ var errNilValue = errors.New("nil value")
func ValidatorFromConsensus(v *eth.Validator) *Validator {
return &Validator{
PublicKey: hexutil.Encode(v.PublicKey),
Pubkey: hexutil.Encode(v.PublicKey),
WithdrawalCredentials: hexutil.Encode(v.WithdrawalCredentials),
EffectiveBalance: fmt.Sprintf("%d", v.EffectiveBalance),
Slashed: v.Slashed,
@@ -1074,3 +1074,17 @@ func sszBytesToUint256String(b []byte) (string, error) {
}
return bi.String(), nil
}
func DepositSnapshotFromConsensus(ds *eth.DepositSnapshot) *DepositSnapshot {
finalized := make([]string, 0, len(ds.Finalized))
for _, f := range ds.Finalized {
finalized = append(finalized, hexutil.Encode(f))
}
return &DepositSnapshot{
Finalized: finalized,
DepositRoot: hexutil.Encode(ds.DepositRoot),
DepositCount: fmt.Sprintf("%d", ds.DepositCount),
ExecutionBlockHash: hexutil.Encode(ds.ExecutionHash),
ExecutionBlockHeight: fmt.Sprintf("%d", ds.ExecutionDepth),
}
}

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"fmt"
@@ -559,7 +559,7 @@ func (b *SignedBlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericSignedBeaco
Block: bl,
Signature: sig,
}
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericBeaconBlock, error) {
@@ -567,7 +567,7 @@ func (b *BlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericBeaconBlock, erro
if err != nil {
return nil, err
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockBellatrix) ToConsensus() (*eth.BlindedBeaconBlockBellatrix, error) {
@@ -1016,7 +1016,7 @@ func (b *SignedBlindedBeaconBlockCapella) ToGeneric() (*eth.GenericSignedBeaconB
Block: bl,
Signature: sig,
}
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockCapella) ToGeneric() (*eth.GenericBeaconBlock, error) {
@@ -1024,7 +1024,7 @@ func (b *BlindedBeaconBlockCapella) ToGeneric() (*eth.GenericBeaconBlock, error)
if err != nil {
return nil, err
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockCapella) ToConsensus() (*eth.BlindedBeaconBlockCapella, error) {
@@ -2333,10 +2333,10 @@ func ExecutionPayloadHeaderDenebFromConsensus(payload *enginev1.ExecutionPayload
Timestamp: fmt.Sprintf("%d", payload.Timestamp),
ExtraData: hexutil.Encode(payload.ExtraData),
BaseFeePerGas: baseFeePerGas,
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
BlockHash: hexutil.Encode(payload.BlockHash),
TransactionsRoot: hexutil.Encode(payload.TransactionsRoot),
WithdrawalsRoot: hexutil.Encode(payload.WithdrawalsRoot),
BlobGasUsed: fmt.Sprintf("%d", payload.BlobGasUsed),
ExcessBlobGas: fmt.Sprintf("%d", payload.ExcessBlobGas),
}, nil
}

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"errors"

View File

@@ -0,0 +1,26 @@
package structs
import (
"testing"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestDepositSnapshotFromConsensus(t *testing.T) {
ds := &eth.DepositSnapshot{
Finalized: [][]byte{{0xde, 0xad, 0xbe, 0xef}, {0xca, 0xfe, 0xba, 0xbe}},
DepositRoot: []byte{0xab, 0xcd},
DepositCount: 12345,
ExecutionHash: []byte{0x12, 0x34},
ExecutionDepth: 67890,
}
res := DepositSnapshotFromConsensus(ds)
require.NotNil(t, res)
require.DeepEqual(t, []string{"0xdeadbeef", "0xcafebabe"}, res.Finalized)
require.Equal(t, "0xabcd", res.DepositRoot)
require.Equal(t, "12345", res.DepositCount)
require.Equal(t, "0x1234", res.ExecutionBlockHash)
require.Equal(t, "67890", res.ExecutionBlockHeight)
}

View File

@@ -1,9 +1,7 @@
package beacon
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type BlockRootResponse struct {
@@ -17,31 +15,31 @@ type BlockRoot struct {
}
type GetCommitteesResponse struct {
Data []*shared.Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type ListAttestationsResponse struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type SubmitAttestationsRequest struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type ListVoluntaryExitsResponse struct {
Data []*shared.SignedVoluntaryExit `json:"data"`
Data []*SignedVoluntaryExit `json:"data"`
}
type SubmitSyncCommitteeSignaturesRequest struct {
Data []*shared.SyncCommitteeMessage `json:"data"`
Data []*SyncCommitteeMessage `json:"data"`
}
type GetStateForkResponse struct {
Data *shared.Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetFinalityCheckpointsResponse struct {
@@ -51,9 +49,9 @@ type GetFinalityCheckpointsResponse struct {
}
type FinalityCheckpoints struct {
PreviousJustified *shared.Checkpoint `json:"previous_justified"`
CurrentJustified *shared.Checkpoint `json:"current_justified"`
Finalized *shared.Checkpoint `json:"finalized"`
PreviousJustified *Checkpoint `json:"previous_justified"`
CurrentJustified *Checkpoint `json:"current_justified"`
Finalized *Checkpoint `json:"finalized"`
}
type GetGenesisResponse struct {
@@ -67,15 +65,15 @@ type Genesis struct {
}
type GetBlockHeadersResponse struct {
Data []*shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetBlockHeaderResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *SignedBeaconBlockHeaderContainer `json:"data"`
}
type GetValidatorsRequest struct {
@@ -108,17 +106,6 @@ type ValidatorContainer struct {
Validator *Validator `json:"validator"`
}
type Validator struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`
ActivationEligibilityEpoch string `json:"activation_eligibility_epoch"`
ActivationEpoch string `json:"activation_epoch"`
ExitEpoch string `json:"exit_epoch"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type ValidatorBalance struct {
Index string `json:"index"`
Balance string `json:"balance"`
@@ -141,9 +128,9 @@ type SignedBlock struct {
}
type GetBlockAttestationsResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*shared.Attestation `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Attestation `json:"data"`
}
type GetStateRootResponse struct {
@@ -178,13 +165,34 @@ type SyncCommitteeValidators struct {
}
type BLSToExecutionChangesPoolResponse struct {
Data []*shared.SignedBLSToExecutionChange `json:"data"`
Data []*SignedBLSToExecutionChange `json:"data"`
}
type GetAttesterSlashingsResponse struct {
Data []*shared.AttesterSlashing `json:"data"`
Data []*AttesterSlashing `json:"data"`
}
type GetProposerSlashingsResponse struct {
Data []*shared.ProposerSlashing `json:"data"`
Data []*ProposerSlashing `json:"data"`
}
type GetWeakSubjectivityResponse struct {
Data *WeakSubjectivityData `json:"data"`
}
type WeakSubjectivityData struct {
WsCheckpoint *Checkpoint `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}
type GetDepositSnapshotResponse struct {
Data *DepositSnapshot `json:"data"`
}
type DepositSnapshot struct {
Finalized []string `json:"finalized"`
DepositRoot string `json:"deposit_root"`
DepositCount string `json:"deposit_count"`
ExecutionBlockHash string `json:"execution_block_hash"`
ExecutionBlockHeight string `json:"execution_block_height"`
}

View File

@@ -0,0 +1,14 @@
package structs
type SidecarsResponse struct {
Data []*Sidecar `json:"data"`
}
type Sidecar struct {
Index string `json:"index"`
Blob string `json:"blob"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitment string `json:"kzg_commitment"`
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}

View File

@@ -1,4 +1,4 @@
package builder
package structs
type ExpectedWithdrawalsResponse struct {
Data []*ExpectedWithdrawal `json:"data"`

View File

@@ -1,6 +1,4 @@
package config
import "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
package structs
type GetDepositContractResponse struct {
Data *DepositContractData `json:"data"`
@@ -12,7 +10,7 @@ type DepositContractData struct {
}
type GetForkScheduleResponse struct {
Data []*shared.Fork `json:"data"`
Data []*Fork `json:"data"`
}
type GetSpecResponse struct {

View File

@@ -1,9 +1,7 @@
package debug
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type GetBeaconStateV2Response struct {
@@ -24,18 +22,18 @@ type ForkChoiceHead struct {
}
type GetForkChoiceDumpResponse struct {
JustifiedCheckpoint *shared.Checkpoint `json:"justified_checkpoint"`
FinalizedCheckpoint *shared.Checkpoint `json:"finalized_checkpoint"`
JustifiedCheckpoint *Checkpoint `json:"justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
ForkChoiceNodes []*ForkChoiceNode `json:"fork_choice_nodes"`
ExtraData *ForkChoiceDumpExtraData `json:"extra_data"`
}
type ForkChoiceDumpExtraData struct {
UnrealizedJustifiedCheckpoint *shared.Checkpoint `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *shared.Checkpoint `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root"`
HeadRoot string `json:"head_root"`
UnrealizedJustifiedCheckpoint *Checkpoint `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *Checkpoint `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root"`
HeadRoot string `json:"head_root"`
}
type ForkChoiceNode struct {

View File

@@ -1,9 +1,7 @@
package events
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type HeadEvent struct {
@@ -23,13 +21,13 @@ type BlockEvent struct {
}
type AggregatedAttEventSource struct {
Aggregate *shared.Attestation `json:"aggregate"`
Aggregate *Attestation `json:"aggregate"`
}
type UnaggregatedAttEventSource struct {
AggregationBits string `json:"aggregation_bits"`
Data *shared.AttestationData `json:"data"`
Signature string `json:"signature"`
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type FinalizedCheckpointEvent struct {
@@ -71,18 +69,18 @@ type PayloadAttributesV1 struct {
}
type PayloadAttributesV2 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*shared.Withdrawal `json:"withdrawals"`
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type PayloadAttributesV3 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*shared.Withdrawal `json:"withdrawals"`
ParentBeaconBlockRoot string `json:"parent_beacon_block_root"`
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
ParentBeaconBlockRoot string `json:"parent_beacon_block_root"`
}
type BlobSidecarEvent struct {
@@ -92,3 +90,27 @@ type BlobSidecarEvent struct {
KzgCommitment string `json:"kzg_commitment"`
VersionedHash string `json:"versioned_hash"`
}
type LightClientFinalityUpdateEvent struct {
Version string `json:"version"`
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientFinalityUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}

View File

@@ -0,0 +1,31 @@
package structs
type LightClientBootstrapResponse struct {
Version string `json:"version"`
Data *LightClientBootstrap `json:"data"`
}
type LightClientBootstrap struct {
Header *BeaconBlockHeader `json:"header"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
CurrentSyncCommitteeBranch []string `json:"current_sync_committee_branch"`
}
type LightClientUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientUpdateWithVersion struct {
Version string `json:"version"`
Data *LightClientUpdate `json:"data"`
}
type LightClientUpdatesByRangeResponse struct {
Updates []*LightClientUpdateWithVersion `json:"updates"`
}

View File

@@ -1,4 +1,4 @@
package node
package structs
type SyncStatusResponse struct {
Data *SyncStatusResponseData `json:"data"`
@@ -63,3 +63,11 @@ type GetVersionResponse struct {
type Version struct {
Version string `json:"version"`
}
type AddrRequest struct {
Addr string `json:"addr"`
}
type PeersResponse struct {
Peers []*Peer `json:"peers"`
}

View File

@@ -1,4 +1,4 @@
package rewards
package structs
type BlockRewardsResponse struct {
Data *BlockRewards `json:"data"`
@@ -31,6 +31,7 @@ type IdealAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
Inactivity string `json:"inactivity"`
}
type TotalAttestationReward struct {
@@ -38,7 +39,7 @@ type TotalAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
InclusionDelay string `json:"inclusion_delay"`
Inactivity string `json:"inactivity"`
}
type SyncCommitteeRewardsResponse struct {

View File

@@ -1,37 +1,37 @@
package validator
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
type AggregateAttestationResponse struct {
Data *shared.Attestation `json:"data"`
Data *Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {
Data []*shared.SignedContributionAndProof `json:"data"`
Data []*SignedContributionAndProof `json:"data"`
}
type SubmitAggregateAndProofsRequest struct {
Data []*shared.SignedAggregateAttestationAndProof `json:"data"`
Data []*SignedAggregateAttestationAndProof `json:"data"`
}
type SubmitSyncCommitteeSubscriptionsRequest struct {
Data []*shared.SyncCommitteeSubscription `json:"data"`
Data []*SyncCommitteeSubscription `json:"data"`
}
type SubmitBeaconCommitteeSubscriptionsRequest struct {
Data []*shared.BeaconCommitteeSubscription `json:"data"`
Data []*BeaconCommitteeSubscription `json:"data"`
}
type GetAttestationDataResponse struct {
Data *shared.AttestationData `json:"data"`
Data *AttestationData `json:"data"`
}
type ProduceSyncCommitteeContributionResponse struct {
Data *shared.SyncCommitteeContribution `json:"data"`
Data *SyncCommitteeContribution `json:"data"`
}
type GetAttesterDutiesResponse struct {
@@ -90,3 +90,31 @@ type Liveness struct {
Index string `json:"index"`
IsLive bool `json:"is_live"`
}
type GetValidatorCountResponse struct {
ExecutionOptimistic string `json:"execution_optimistic"`
Finalized string `json:"finalized"`
Data []*ValidatorCount `json:"data"`
}
type ValidatorCount struct {
Status string `json:"status"`
Count string `json:"count"`
}
type GetValidatorPerformanceRequest struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
Indices []primitives.ValidatorIndex `json:"indices,omitempty"`
}
type GetValidatorPerformanceResponse struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
CorrectlyVotedSource []bool `json:"correctly_voted_source,omitempty"`
CorrectlyVotedTarget []bool `json:"correctly_voted_target,omitempty"`
CorrectlyVotedHead []bool `json:"correctly_voted_head,omitempty"`
CurrentEffectiveBalances []uint64 `json:"current_effective_balances,omitempty"`
BalancesBeforeEpochTransition []uint64 `json:"balances_before_epoch_transition,omitempty"`
BalancesAfterEpochTransition []uint64 `json:"balances_after_epoch_transition,omitempty"`
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}

View File

@@ -1,7 +1,7 @@
package shared
package structs
type Validator struct {
PublicKey string `json:"pubkey"`
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`

View File

@@ -1,4 +1,4 @@
package shared
package structs
type BeaconState struct {
GenesisTime string `json:"genesis_time"`

View File

@@ -6,6 +6,7 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"defragment.go",
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",

View File

@@ -6,7 +6,10 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -18,7 +21,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -334,12 +336,21 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index primiti
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() f.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return false, nil
}
s.headLock.RLock()
if s.head == nil {
s.headLock.RUnlock()
return false, ErrNilHead
}
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
@@ -545,3 +556,10 @@ func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync queries the initial sync service to
// determine if the node is in regular sync or is still
// syncing to the head of the chain.
func (s *Service) inRegularSync() bool {
return s.cfg.SyncChecker.Synced()
}

View File

@@ -429,6 +429,11 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
// If head is nil, for some reason, an error should be returned rather than panic.
c = &Service{}
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {

View File

@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/time"
)
var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "head_state_defragmentation_milliseconds",
Help: "Milliseconds it takes to defragment the head state",
})
// This method defragments our state so that any field with a high number of
// fragmented indices is reallocated to a new, separate slice for that field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)
stateDefragmentationTime.Observe(float64(elapsedTime.Milliseconds()))
}

View File

@@ -28,6 +28,8 @@ var (
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")

View File

@@ -387,9 +387,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// This is an irreparable condition, as it would mean a justified or finalized block has become invalid.
return err
}
// No op if the sidecar does not exist.
if err := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err != nil {
return err
if err := s.blobStorage.Remove(root); err != nil {
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
}
return nil

View File

@@ -151,8 +151,10 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
}})
@@ -494,8 +496,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -597,8 +601,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},

View File

@@ -11,7 +11,6 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//consensus-types/blocks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
@@ -25,7 +24,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",

View File

@@ -1,37 +1,10 @@
package kzg
import (
"fmt"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// IsDataAvailable checks that
// - all blobs in the block are available
// - Expected KZG commitments match the number of blobs in the block
// - That the number of proofs match the number of blobs
// - That the proofs are verified against the KZG commitments
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.DeprecatedBlobSidecar) error {
if len(commitments) != len(sidecars) {
return fmt.Errorf("could not check data availability, expected %d commitments, obtained %d",
len(commitments), len(sidecars))
}
if len(commitments) == 0 {
return nil
}
blobs := make([]GoKZG.Blob, len(commitments))
proofs := make([]GoKZG.KZGProof, len(commitments))
cmts := make([]GoKZG.KZGCommitment, len(commitments))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
cmts[i] = bytesToCommitment(commitments[i])
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {

View File

@@ -8,7 +8,7 @@ import (
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/sirupsen/logrus"
)
@@ -58,10 +58,9 @@ func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZG
return commitment, proof, err
}
func TestIsDataAvailable(t *testing.T) {
sidecars := make([]*ethpb.DeprecatedBlobSidecar, 0)
commitments := make([][]byte, 0)
require.NoError(t, IsDataAvailable(commitments, sidecars))
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
}
func TestBytesToAny(t *testing.T) {

View File

@@ -182,6 +182,10 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",
})
processAttsElapsedTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "process_attestations_milliseconds",

View File

@@ -198,3 +198,10 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
return nil
}
}
func WithSyncChecker(checker Checker) Option {
return func(s *Service) error {
s.cfg.SyncChecker = checker
return nil
}
}

View File

@@ -6,6 +6,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
@@ -29,7 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -66,7 +67,10 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
startTime := time.Now()
fcuArgs := &fcuConfig{}
defer s.handleSecondFCUCall(cfg, fcuArgs)
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.signed.Block())
@@ -102,6 +106,7 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
return nil
}
@@ -320,7 +325,10 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
// The proposer indices cache takes the target root for the previous
// epoch as key
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e-1)
if e > 0 {
e = e - 1
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
return nil
@@ -577,10 +585,15 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if s.CurrentSlot() == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -595,18 +608,22 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := transition.UpdateNextSlotCache(ctx, lastRoot, lastState); err != nil {
log.WithError(err).Debug("could not update next slot state cache")
}
// handleEpochBoundary requires a forkchoice lock to obtain the target root.
s.cfg.ForkChoiceStore.RLock()
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
s.cfg.ForkChoiceStore.RUnlock()
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
if has {
return
}
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
@@ -617,18 +634,12 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if fcuArgs.attributes.IsEmpty() {
return
}
s.cfg.ForkChoiceStore.RLock()
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
s.cfg.ForkChoiceStore.RUnlock()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}

View File

@@ -7,21 +7,24 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -34,6 +37,9 @@ func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) er
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.signed.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
return nil
@@ -106,6 +112,128 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// sendLightClientFeeds sends the light client feeds when the feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.signed, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.signed, finalized, cfg.postState)
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
}
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
finalizedBlock,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing
// state transition. This function is called on late blocks while still locked,
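The supermajority gate in tryPublishLightClientFinalityUpdate above only publishes a finality update once sync-committee participation clears two thirds. A worked sketch of that threshold, assuming mainnet's SYNC_COMMITTEE_SIZE of 512 (the values are illustrative):
// Hedged sketch of the participation*3 >= SyncCommitteeSize*2 check used above.
// With a committee size of 512, the smallest passing participation is 342.
const syncCommitteeSize uint64 = 512
participation := uint64(342)
hasSupermajority := participation*3 >= syncCommitteeSize*2 // true; 341 set bits would fail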

View File

@@ -911,7 +911,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state older than Bellatrix, nil payload",
stateVersion: 1,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state older than Bellatrix, empty payload",
@@ -923,8 +922,10 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
{
@@ -938,7 +939,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, nil payload",
stateVersion: 2,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state is Bellatrix, empty payload",
@@ -968,6 +968,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
@@ -2045,7 +2046,9 @@ func TestFillMissingBlockPayloadId_PrepareAllPayloads(t *testing.T) {
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(s *Service, slot, delay int64) {
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) + delay
s.SetGenesisTime(time.Unix(time.Now().Unix()-offset, 0))
newTime := time.Unix(time.Now().Unix()-offset, 0)
s.SetGenesisTime(newTime)
s.cfg.ForkChoiceStore.SetGenesisTime(uint64(newTime.Unix()))
}
func TestMissingIndices(t *testing.T) {

View File

@@ -150,7 +150,9 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
headBlock: headBlock,
proposingSlot: proposingSlot,
}
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}

View File

@@ -122,6 +122,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
// Defragment the state before continuing block processing.
s.defragmentState(postState)
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()

View File

@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
@@ -37,34 +39,35 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.
@@ -90,6 +93,13 @@ type config struct {
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
}
// Checker is an interface used to determine if a node is in initial sync
// or regular sync.
type Checker interface {
Synced() bool
}
var ErrMissingClockSetter = errors.New("blockchain Service initialized without a startup.ClockSetter")
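The Checker interface above requires only a Synced() bool method, so a very small adapter can be supplied through the new WithSyncChecker option. A minimal sketch of one such implementation; the type name is an assumption for illustration, and in the test setup later in this compare mock.MockChecker plays this role:
// Hedged sketch: a trivial Checker implementation with a fixed sync status.
// The name syncStatusChecker is illustrative and not part of this diff.
type syncStatusChecker struct {
	synced bool
}

func (c syncStatusChecker) Synced() bool { return c.synced }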

View File

@@ -6,10 +6,12 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/async/event"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
@@ -116,6 +118,8 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -180,6 +180,14 @@ func (mon *MockOperationNotifier) OperationFeed() *event.Feed {
return mon.feed
}
// MockChecker is a mock sync checker.
type MockChecker struct{}
// Synced returns true.
func (_ MockChecker) Synced() bool {
return true
}
// ReceiveBlockInitialSync mocks ReceiveBlockInitialSync method in chain service.
func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte) error {
if s.State == nil {

View File

@@ -2,6 +2,7 @@ package testing
import (
"context"
"math/big"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client/builder"
@@ -54,13 +55,13 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Capella:
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella, 0)
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap capella payload")
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Deneb:
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb, 0)
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap deneb payload")
}

View File

@@ -796,7 +796,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
require.NoError(t, err)
// Mimick finalized deposit trie fetch.
// Mimic finalized deposit trie fetch.
fd, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)

View File

@@ -115,6 +115,7 @@ func (p *ProposerIndicesCache) IndicesFromCheckpoint(c forkchoicetypes.Checkpoin
root, ok := p.rootMap[c]
p.Unlock()
if !ok {
ProposerIndicesCacheMiss.Inc()
return emptyIndices, ok
}
return p.ProposerIndices(c.Epoch+1, root)

View File

@@ -37,70 +37,69 @@ func TestProposerCache_Set(t *testing.T) {
func TestProposerCache_CheckpointAndPrune(t *testing.T) {
cache := NewProposerIndicesCache()
indices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
root := [32]byte{'a'}
cpRoot := [32]byte{'b'}
copy(indices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
for i := 1; i < 10; i++ {
root := [32]byte{byte(i)}
cache.Set(primitives.Epoch(i), root, indices)
cpRoot := [32]byte{byte(i - 1)}
cache.SetCheckpoint(forkchoicetypes.Checkpoint{Epoch: primitives.Epoch(i - 1), Root: cpRoot}, root)
}
received, ok := cache.ProposerIndices(1, root)
received, ok := cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(4, root)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(9, root)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
cache.Prune(5)
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
received, ok = cache.ProposerIndices(1, root)
received, ok = cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(4, root)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(9, root)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: [32]byte{0}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
}

View File

@@ -15,11 +15,12 @@ import (
// AttDelta contains rewards and penalties for a single attestation.
type AttDelta struct {
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
HeadReward uint64
SourceReward uint64
SourcePenalty uint64
TargetReward uint64
TargetPenalty uint64
InactivityPenalty uint64
}
// InitializePrecomputeValidators precomputes each validator's attested balances and the total sum of all validators' attested balances for the epoch.
@@ -251,7 +252,7 @@ func ProcessRewardsAndPenaltiesPrecompute(
if err != nil {
return nil, err
}
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty)
balances[i] = helpers.DecreaseBalanceWithVal(balances[i], delta.SourcePenalty+delta.TargetPenalty+delta.InactivityPenalty)
vals[i].AfterEpochTransitionBalance = balances[i]
}
@@ -351,7 +352,7 @@ func attestationDelta(
if err != nil {
return &AttDelta{}, err
}
attDelta.TargetPenalty += n / inactivityDenominator
attDelta.InactivityPenalty = n / inactivityDenominator
}
return attDelta, nil
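With the split above, the inactivity-leak penalty now lives in its own InactivityPenalty field instead of being folded into TargetPenalty, so any caller totaling per-validator deltas has to add the new field, as the updated tests in the next file do. A minimal sketch, where d stands for an *AttDelta value (the names are illustrative):
// Hedged sketch: totaling one validator's rewards and penalties after the
// InactivityPenalty split. d is an *AttDelta; variable names are illustrative.
totalReward := d.HeadReward + d.SourceReward + d.TargetReward
totalPenalty := d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty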

View File

@@ -220,7 +220,7 @@ func TestAttestationsDelta(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -258,7 +258,7 @@ func TestAttestationsDeltaBellatrix(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
// Reward amount should increase as validator index increases due to setup.
@@ -306,7 +306,7 @@ func TestProcessRewardsAndPenaltiesPrecompute_Ok(t *testing.T) {
penalties := make([]uint64, len(deltas))
for i, d := range deltas {
rewards[i] = d.HeadReward + d.SourceReward + d.TargetReward
penalties[i] = d.SourcePenalty + d.TargetPenalty
penalties[i] = d.SourcePenalty + d.TargetPenalty + d.InactivityPenalty
}
for i := range rewards {
wanted[i] += rewards[i]

View File

@@ -105,6 +105,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -136,6 +137,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -168,6 +170,7 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"math/big"
"testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
@@ -609,7 +610,7 @@ func Test_ProcessPayloadCapella(t *testing.T) {
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
payload.PrevRandao = random
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload, 0)
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload, big.NewInt(0))
require.NoError(t, err)
_, err = blocks.ProcessPayload(st, wrapped)
require.NoError(t, err)
@@ -853,10 +854,10 @@ func emptyPayloadHeader() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
})
}
@@ -868,12 +869,12 @@ func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
}, 0)
}, big.NewInt(0))
}
func emptyPayload() *enginev1.ExecutionPayload {
@@ -884,10 +885,10 @@ func emptyPayload() *enginev1.ExecutionPayload {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}
}
@@ -899,10 +900,10 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
}
}

View File

@@ -1,6 +1,7 @@
package blocks_test
import (
"math/big"
"math/rand"
"testing"
@@ -642,7 +643,10 @@ func TestProcessBlindWithdrawals(t *testing.T) {
require.NoError(t, err)
wdRoot, err := ssz.WithdrawalSliceRoot(test.Args.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]}, 0)
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(
&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]},
big.NewInt(0),
)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {
@@ -1060,7 +1064,7 @@ func TestProcessWithdrawals(t *testing.T) {
}
st, err := prepareValidators(spb, test.Args)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals}, 0)
p, err := consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals}, big.NewInt(0))
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {

View File

@@ -84,6 +84,7 @@ func TestUpgradeToCapella(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,

View File

@@ -57,6 +57,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
if err != nil {
return nil, err
}
historicalRoots, err := state.HistoricalRoots()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
@@ -70,7 +74,7 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
HistoricalRoots: historicalRoots,
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
@@ -101,10 +105,10 @@ func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
ExcessBlobGas: 0,
BlobGasUsed: 0,
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
ExcessBlobGas: 0,
BlobGasUsed: 0,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,

View File

@@ -14,6 +14,7 @@ import (
func TestUpgradeToDeneb(t *testing.T) {
st, _ := util.DeterministicGenesisStateCapella(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetHistoricalRoots([][]byte{{1}}))
preForkState := st.Copy()
mSt, err := deneb.UpgradeToDeneb(st)
require.NoError(t, err)
@@ -46,6 +47,12 @@ func TestUpgradeToDeneb(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, make([]uint64, numValidators), s)
hr1, err := preForkState.HistoricalRoots()
require.NoError(t, err)
hr2, err := mSt.HistoricalRoots()
require.NoError(t, err)
require.DeepEqual(t, hr1, hr2)
f := mSt.Fork()
require.DeepSSZEqual(t, &ethpb.Fork{
PreviousVersion: st.Fork().CurrentVersion,
@@ -85,6 +92,7 @@ func TestUpgradeToDeneb(t *testing.T) {
GasLimit: prevHeader.GasLimit(),
GasUsed: prevHeader.GasUsed(),
Timestamp: prevHeader.Timestamp(),
ExtraData: prevHeader.ExtraData(),
BaseFeePerGas: prevHeader.BaseFeePerGas(),
BlockHash: prevHeader.BlockHash(),
TransactionsRoot: txRoot,

View File

@@ -79,6 +79,7 @@ func UpgradeToBellatrix(state state.BeaconState) (state.BeaconState, error) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -79,6 +79,7 @@ func TestUpgradeToBellatrix(t *testing.T) {
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -27,6 +27,10 @@ const (
NewHead
// MissedSlot is sent when we need to notify users that a slot was missed.
MissedSlot
// LightClientFinalityUpdate is sent when a new light client finality update is available.
LightClientFinalityUpdate
// LightClientOptimisticUpdate is sent when a new light client optimistic update is available.
LightClientOptimisticUpdate
)
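A hedged sketch of how a subscriber might handle the two new event types, assuming the state feed subscription pattern used elsewhere in Prysm; the notifier variable and the feed/statefeed package aliases are illustrative and not part of this diff:

	ch := make(chan *feed.Event, 1)
	sub := notifier.StateFeed().Subscribe(ch)
	defer sub.Unsubscribe()
	for ev := range ch {
		switch ev.Type {
		case statefeed.LightClientFinalityUpdate:
			// ev.Data carries the finality update payload.
		case statefeed.LightClientOptimisticUpdate:
			// ev.Data carries the optimistic update payload.
		}
	}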
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -50,7 +50,6 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"main_test.go",
"randao_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",

View File

@@ -20,6 +20,8 @@ import (
func TestAttestation_IsAggregator(t *testing.T) {
t.Run("aggregator", func(t *testing.T) {
helpers.ClearCache()
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, 0, 0)
require.NoError(t, err)
@@ -30,6 +32,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
})
t.Run("not aggregator", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
beaconState, privKeys := util.DeterministicGenesisState(t, 2048)
@@ -44,6 +48,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
}
func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -204,6 +210,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
err := helpers.ValidateAttestationTime(tt.args.attSlot, tt.args.genesisTime,
params.BeaconConfig().MaximumGossipClockDisparityDuration())
if tt.wantedErr != "" {
@@ -216,6 +224,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
func TestVerifyCheckpointEpoch_Ok(t *testing.T) {
helpers.ClearCache()
// Genesis was 6 epochs ago exactly.
offset := params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot * 6)
genesis := time.Now().Add(-1 * time.Second * time.Duration(offset))
@@ -285,6 +295,8 @@ func TestValidateNilAttestation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateNilAttestation(tt.attestation))
} else {
@@ -326,6 +338,8 @@ func TestValidateSlotTargetEpoch(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateSlotTargetEpoch(tt.attestation.Data))
} else {

View File

@@ -379,7 +379,7 @@ func UpdateCachedCheckpointToStateRoot(state state.ReadOnlyBeaconState, cp *fork
if cp.Epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
slot, err := slots.EpochEnd(cp.Epoch - 1)
slot, err := slots.EpochEnd(cp.Epoch)
if err != nil {
return err
}

View File

@@ -21,6 +21,8 @@ import (
)
func TestComputeCommittee_WithoutCache(t *testing.T) {
ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -71,6 +73,8 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
}
func TestComputeCommittee_RegressionTest(t *testing.T) {
ClearCache()
indices := []primitives.ValidatorIndex{1, 3, 8, 16, 18, 19, 20, 23, 30, 35, 43, 46, 47, 54, 56, 58, 69, 70, 71, 83, 84, 85, 91, 96, 100, 103, 105, 106, 112, 121, 127, 128, 129, 140, 142, 144, 146, 147, 149, 152, 153, 154, 157, 160, 173, 175, 180, 182, 188, 189, 191, 194, 201, 204, 217, 221, 226, 228, 230, 231, 239, 241, 249, 250, 255}
seed := [32]byte{68, 110, 161, 250, 98, 230, 161, 172, 227, 226, 99, 11, 138, 124, 201, 134, 38, 197, 0, 120, 6, 165, 122, 34, 19, 216, 43, 226, 210, 114, 165, 183}
index := uint64(215)
@@ -80,6 +84,8 @@ func TestComputeCommittee_RegressionTest(t *testing.T) {
}
func TestVerifyBitfieldLength_OK(t *testing.T) {
ClearCache()
bf := bitfield.Bitlist{0xFF, 0x01}
committeeSize := uint64(8)
assert.NoError(t, VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
@@ -91,7 +97,7 @@ func TestVerifyBitfieldLength_OK(t *testing.T) {
func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
ClearCache()
defer ClearCache()
epoch := primitives.Epoch(1)
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 0, // Epoch 0.
@@ -103,7 +109,7 @@ func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
var activationEpoch primitives.Epoch
@@ -190,10 +196,10 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
ClearCache()
validatorIndexToCommittee, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, slots.ToEpoch(tt.slot))
require.NoError(t, err, "Failed to determine CommitteeAssignments")
cac := validatorIndexToCommittee[tt.index]
@@ -209,6 +215,8 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -239,6 +247,8 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -259,7 +269,7 @@ func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *t
func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
ClearCache()
defer ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -380,9 +390,9 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.stateSlot))
err := VerifyAttestationBitfieldLengths(context.Background(), state, tt.attestation)
if tt.verificationFailure {
@@ -395,7 +405,7 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)
@@ -425,7 +435,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)

View File

@@ -60,6 +60,8 @@ func TestBlockRootAtSlot_CorrectBlockRoot(t *testing.T) {
}
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
helpers.ClearCache()
s.Slot = tt.stateSlot
state, err := state_native.InitializeFromProtoPhase0(s)
require.NoError(t, err)
@@ -110,6 +112,8 @@ func TestBlockRootAtSlot_OutOfBounds(t *testing.T) {
},
}
for _, tt := range tests {
helpers.ClearCache()
state.Slot = tt.stateSlot
s, err := state_native.InitializeFromProtoPhase0(state)
require.NoError(t, err)

View File

@@ -1,13 +0,0 @@
package helpers
import (
"os"
"testing"
)
// run ClearCache before each test to prevent cross-test side effects
func TestMain(m *testing.M) {
ClearCache()
code := m.Run()
os.Exit(code)
}

View File

@@ -40,6 +40,8 @@ func TestRandaoMix_OK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -74,6 +76,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -88,6 +92,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
}
func TestGenerateSeed_OK(t *testing.T) {
ClearCache()
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
intInBytes := make([]byte, 32)

View File

@@ -14,6 +14,8 @@ import (
)
func TestTotalBalance_OK(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{
{EffectiveBalance: 27 * 1e9}, {EffectiveBalance: 28 * 1e9},
{EffectiveBalance: 32 * 1e9}, {EffectiveBalance: 40 * 1e9},
@@ -27,6 +29,8 @@ func TestTotalBalance_OK(t *testing.T) {
}
func TestTotalBalance_ReturnsEffectiveBalanceIncrement(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{}})
require.NoError(t, err)
@@ -47,6 +51,8 @@ func TestGetBalance_OK(t *testing.T) {
{i: 2, b: []uint64{0, 0, 0}},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Balances: test.b})
require.NoError(t, err)
assert.Equal(t, test.b[test.i], state.Balances()[test.i], "Incorrect Validator balance")
@@ -62,6 +68,8 @@ func TestTotalActiveBalance(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -75,8 +83,6 @@ func TestTotalActiveBalance(t *testing.T) {
}
func TestTotalActiveBal_ReturnMin(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
}{
@@ -85,6 +91,8 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: 1, ExitEpoch: 1})
@@ -98,8 +106,6 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
}
func TestTotalActiveBalance_WithCache(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
wantCount int
@@ -109,6 +115,8 @@ func TestTotalActiveBalance_WithCache(t *testing.T) {
{vCount: 10000, wantCount: 10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -133,6 +141,8 @@ func TestIncreaseBalance_OK(t *testing.T) {
{i: 2, b: []uint64{27 * 1e9, 28 * 1e9, 32 * 1e9}, nb: 33 * 1e9, eb: 65 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},
@@ -157,6 +167,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
{i: 3, b: []uint64{27 * 1e9, 28 * 1e9, 1, 28 * 1e9}, nb: 28 * 1e9, eb: 0},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 3}},
@@ -169,6 +181,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
}
func TestFinalityDelay(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -199,6 +213,8 @@ func TestFinalityDelay(t *testing.T) {
}
func TestIsInInactivityLeak(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -269,6 +285,8 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
{i: 2, b: []uint64{math.MaxUint64, math.MaxUint64, math.MaxUint64}, nb: 33 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},

View File

@@ -13,6 +13,8 @@ import (
)
func TestShuffleList_InvalidValidatorCount(t *testing.T) {
ClearCache()
maxShuffleListSize = 20
list := make([]primitives.ValidatorIndex, 21)
if _, err := ShuffleList(list, [32]byte{123, 125}); err == nil {
@@ -23,6 +25,8 @@ func TestShuffleList_InvalidValidatorCount(t *testing.T) {
}
func TestShuffleList_OK(t *testing.T) {
ClearCache()
var list1 []primitives.ValidatorIndex
seed1 := [32]byte{1, 128, 12}
seed2 := [32]byte{2, 128, 12}
@@ -47,6 +51,8 @@ func TestShuffleList_OK(t *testing.T) {
}
func TestSplitIndices_OK(t *testing.T) {
ClearCache()
var l []uint64
numValidators := uint64(64000)
for i := uint64(0); i < numValidators; i++ {
@@ -61,6 +67,8 @@ func TestSplitIndices_OK(t *testing.T) {
}
func TestShuffleList_Vs_ShuffleIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(1000)
seed := [32]byte{123, 42}
@@ -125,6 +133,8 @@ func BenchmarkShuffleList(b *testing.B) {
}
func TestShuffledIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(399)
for i := primitives.ValidatorIndex(0); uint64(i) < listSize; i++ {
@@ -147,6 +157,8 @@ func TestShuffledIndex(t *testing.T) {
}
func TestSplitIndicesAndOffset_OK(t *testing.T) {
ClearCache()
var l []uint64
validators := uint64(64000)
for i := uint64(0); i < validators; i++ {

View File

@@ -18,7 +18,7 @@ import (
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -49,7 +49,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -77,7 +77,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -105,7 +105,7 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -135,6 +135,8 @@ func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -161,6 +163,8 @@ func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -188,7 +192,7 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -219,7 +223,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -260,7 +264,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -288,7 +292,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -318,6 +322,8 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
}
func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -345,7 +351,7 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -372,6 +378,8 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 1,
})
@@ -388,6 +396,8 @@ func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch - 1,
LatestBlockHeader: &ethpb.BeaconBlockHeader{StateRoot: params.BeaconConfig().ZeroHash[:]},
@@ -399,7 +409,7 @@ func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),

View File

@@ -179,8 +179,6 @@ func TestIsSlashableValidator_OK(t *testing.T) {
func TestBeaconProposerIndex_OK(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -224,9 +222,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
},
}
defer ClearCache()
for _, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.slot))
result, err := BeaconProposerIndex(context.Background(), state)
require.NoError(t, err, "Failed to get shard and committees at slot")
@@ -235,9 +233,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
}
func TestBeaconProposerIndex_BadState(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -268,6 +266,8 @@ func TestBeaconProposerIndex_BadState(t *testing.T) {
}
func TestComputeProposerIndex_Compatibility(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -309,12 +309,16 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
}
func TestDelayedActivationExitEpoch_OK(t *testing.T) {
ClearCache()
epoch := primitives.Epoch(9999)
wanted := epoch + 1 + params.BeaconConfig().MaxSeedLookahead
assert.Equal(t, wanted, ActivationExitEpoch(epoch))
}
func TestActiveValidatorCount_Genesis(t *testing.T) {
ClearCache()
c := 1000
validators := make([]*ethpb.Validator, c)
for i := 0; i < len(validators); i++ {
@@ -348,7 +352,6 @@ func TestChurnLimit_OK(t *testing.T) {
{validatorCount: 1000000, wantedChurn: 15 /* validatorCount/churnLimitQuotient */},
{validatorCount: 2000000, wantedChurn: 30 /* validatorCount/churnLimitQuotient */},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -382,9 +385,6 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
{1000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
{2000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -417,7 +417,7 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
// Test basic functionality of ActiveValidatorIndices without caching. This test will need to be
// rewritten when releasing some cache flag.
func TestActiveValidatorIndices(t *testing.T) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
//farFutureEpoch := params.BeaconConfig().FarFutureEpoch
type args struct {
state *ethpb.BeaconState
epoch primitives.Epoch
@@ -428,7 +428,7 @@ func TestActiveValidatorIndices(t *testing.T) {
want []primitives.ValidatorIndex
wantedErr string
}{
{
/*{
name: "all_active_epoch_10",
args: args{
state: &ethpb.BeaconState{
@@ -559,7 +559,7 @@ func TestActiveValidatorIndices(t *testing.T) {
epoch: 10,
},
want: []primitives.ValidatorIndex{0, 2, 3},
},
},*/
{
name: "impossible_zero_validators", // Regression test for issue #13051
args: args{
@@ -569,22 +569,21 @@ func TestActiveValidatorIndices(t *testing.T) {
},
epoch: 10,
},
wantedErr: "no active validator indices",
wantedErr: "state has nil validator slice",
},
}
defer ClearCache()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.args.state)
require.NoError(t, err)
require.NoError(t, s.SetValidators(tt.args.state.Validators))
got, err := ActiveValidatorIndices(context.Background(), s, tt.args.epoch)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
return
}
assert.DeepEqual(t, tt.want, got, "ActiveValidatorIndices()")
ClearCache()
})
}
}
@@ -685,6 +684,8 @@ func TestComputeProposerIndex(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
bState := &ethpb.BeaconState{Validators: tt.args.validators}
stTrie, err := state_native.InitializeFromProtoUnsafePhase0(bState)
require.NoError(t, err)
@@ -717,6 +718,8 @@ func TestIsEligibleForActivationQueue(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
assert.Equal(t, tt.want, IsEligibleForActivationQueue(tt.validator), "IsEligibleForActivationQueue()")
})
}
@@ -744,6 +747,8 @@ func TestIsIsEligibleForActivation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
assert.Equal(t, tt.want, IsEligibleForActivation(s, tt.validator), "IsEligibleForActivation()")
@@ -782,6 +787,8 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
ClearCache()
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
@@ -805,6 +812,8 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
}
func TestProposerIndexFromCheckpoint(t *testing.T) {
ClearCache()
e := primitives.Epoch(2)
r := [32]byte{'a'}
root := [32]byte{'b'}

View File

@@ -202,3 +202,14 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// The value is defined at:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
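As a worked check of the formula above, assume the mainnet config values MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 and CHURN_LIMIT_QUOTIENT = 65536 (these specific numbers come from the spec's mainnet preset, not from this diff):

	// 65536 / 2 = 32768; 256 + 32768 = 33024 epochs, roughly 5 months of history.
	minEpochs := primitives.Epoch(256) + primitives.Epoch(65536/2)
	_ = minEpochs // 33024

This matches the expected value asserted in the new TestMinEpochsForBlockRequests test below.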

View File

@@ -48,6 +48,7 @@ func TestWeakSubjectivity_ComputeWeakSubjectivityPeriod(t *testing.T) {
t.Run(fmt.Sprintf("valCount: %d, avgBalance: %d", tt.valCount, tt.avgBalance), func(t *testing.T) {
// Reset committee cache - as we need to recalculate active validator set for each test.
helpers.ClearCache()
got, err := helpers.ComputeWeakSubjectivityPeriod(context.Background(), genState(t, tt.valCount, tt.avgBalance), params.BeaconConfig())
require.NoError(t, err)
assert.Equal(t, tt.want, got, "valCount: %v, avgBalance: %v", tt.valCount, tt.avgBalance)
@@ -177,6 +178,8 @@ func TestWeakSubjectivity_IsWithinWeakSubjectivityPeriod(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
sr, _, e := tt.genWsCheckpoint()
got, err := helpers.IsWithinWeakSubjectivityPeriod(context.Background(), tt.epoch, tt.genWsState(), sr, e, params.BeaconConfig())
if tt.wantedErr != "" {
@@ -247,6 +250,8 @@ func TestWeakSubjectivity_ParseWeakSubjectivityInputString(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
wsCheckpt, err := helpers.ParseWeakSubjectivityInputString(tt.input)
if tt.wantedErr != "" {
require.ErrorContains(t, tt.wantedErr, err)
@@ -281,3 +286,21 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period
// found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case
// event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}

View File

@@ -245,10 +245,10 @@ func createFullBellatrixBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
},
},
},
@@ -284,11 +284,11 @@ func createFullCapellaBlockWithOperations(t *testing.T) (state.BeaconState,
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: bytesutil.PadTo([]byte{1, 2, 3, 4}, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
Withdrawals: make([]*enginev1.Withdrawal, 0),
ExtraData: make([]byte, 0),
},
},
},

View File

@@ -209,6 +209,7 @@ func OptimizedGenesisBeaconStateBellatrix(genesisTime uint64, preState state.Bea
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
Transactions: make([][]byte, 0),
@@ -269,6 +270,7 @@ func EmptyGenesisStateBellatrix() (state.BeaconState, error) {
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),

View File

@@ -103,7 +103,7 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
return scs, nil
}
// safeCommitemntArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// safeCommitmentArray is a fixed size array of commitment byte slices. This is helpful for avoiding
// gratuitous bounds checks.
type safeCommitmentArray [fieldparams.MaxBlobsPerBlock][]byte

View File

@@ -22,9 +22,6 @@ var ErrNotFoundOriginBlockRoot = kv.ErrNotFoundOriginBlockRoot
// ErrNotFoundBackfillBlockRoot wraps ErrNotFound for an error specific to the backfill block root.
var ErrNotFoundBackfillBlockRoot = kv.ErrNotFoundBackfillBlockRoot
// ErrNotFoundGenesisBlockRoot means no genesis block root was found, indicating the db was not initialized with genesis
var ErrNotFoundGenesisBlockRoot = kv.ErrNotFoundGenesisBlockRoot
// IsNotFound allows callers to treat errors from a flat-file database, where the file record is missing,
// as equivalent to db.ErrNotFound.
func IsNotFound(err error) bool {

View File

@@ -156,7 +156,7 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
}
partialMoved = true
blobsWrittenCounter.Inc()
blobSaveLatency.Observe(time.Since(startTime).Seconds())
blobSaveLatency.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
@@ -180,11 +180,17 @@ func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, er
return blocks.VerifiedROBlob{}, err
}
defer func() {
blobFetchLatency.Observe(time.Since(startTime).Seconds())
blobFetchLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
return verification.BlobSidecarNoop(ro)
}
// Remove removes all blobs for a given root.
func (bs *BlobStorage) Remove(root [32]byte) error {
rootDir := blobNamer{root: root}.dir()
return bs.fs.RemoveAll(rootDir)
}
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.
@@ -222,6 +228,20 @@ func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]boo
return mask, nil
}
// Clear deletes all blob data on the filesystem, removing every directory under the blob storage root.
func (bs *BlobStorage) Clear() error {
dirs, err := listDir(bs.fs, ".")
if err != nil {
return err
}
for _, dir := range dirs {
if err := bs.fs.RemoveAll(dir); err != nil {
return err
}
}
return nil
}
type blobNamer struct {
root [32]byte
index uint64

View File

@@ -73,6 +73,34 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
t.Run("round trip write, read and delete", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
err := bs.Save(testSidecars[0])
require.NoError(t, err)
expected := testSidecars[0]
actual, err := bs.Get(expected.BlockRoot(), expected.Index)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
require.NoError(t, bs.Remove(expected.BlockRoot()))
_, err = bs.Get(expected.BlockRoot(), expected.Index)
require.ErrorContains(t, "file does not exist", err)
})
t.Run("clear", func(t *testing.T) {
blob := testSidecars[0]
b := NewEphemeralBlobStorage(t)
require.NoError(t, b.Save(blob))
res, err := b.Get(blob.BlockRoot(), blob.Index)
require.NoError(t, err)
require.NotNil(t, res)
require.NoError(t, b.Clear())
// After clearing, the blob should not exist in the db.
_, err = b.Get(blob.BlockRoot(), blob.Index)
require.ErrorIs(t, err, os.ErrNotExist)
})
}
// pollUntil polls a condition function until it returns true or a timeout is reached.

View File

@@ -6,15 +6,15 @@ import (
)
var (
blobBuckets = []float64{0.00003, 0.00005, 0.00007, 0.00009, 0.00011, 0.00013, 0.00015}
blobBuckets = []float64{3, 5, 7, 9, 11, 13}
blobSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_save_latency",
Help: "Latency of BlobSidecar storage save operations in seconds",
Help: "Latency of BlobSidecar storage save operations in milliseconds",
Buckets: blobBuckets,
})
blobFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "blob_storage_get_latency",
Help: "Latency of BlobSidecar storage get operations in seconds",
Help: "Latency of BlobSidecar storage get operations in milliseconds",
Buckets: blobBuckets,
})
blobsPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{

View File

@@ -13,9 +13,11 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/backup:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
],

View File

@@ -11,9 +11,11 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
slashertypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/monitoring/backup"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -55,12 +57,9 @@ type ReadOnlyDatabase interface {
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// Blob operations.
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
BlobSidecarsBySlot(ctx context.Context, slot primitives.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
}
// NoHeadAccessDatabase defines a struct without access to chain head data.
@@ -71,6 +70,7 @@ type NoHeadAccessDatabase interface {
DeleteBlock(ctx context.Context, root [32]byte) error
SaveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) error
SaveBlocks(ctx context.Context, blocks []interfaces.ReadOnlySignedBeaconBlock) error
SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error
SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error
// State related methods.
SaveState(ctx context.Context, state state.ReadOnlyBeaconState, blockRoot [32]byte) error
@@ -93,9 +93,6 @@ type NoHeadAccessDatabase interface {
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, addrs []common.Address) error
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error
// Blob operations.
DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
}
@@ -112,9 +109,10 @@ type HeadAccessDatabase interface {
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// initialization method needed for origin checkpoint sync
// Support for checkpoint sync and backfill.
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
}
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.

View File

@@ -4,8 +4,8 @@ go_library(
name = "go_default_library",
srcs = [
"archived_point.go",
"backfill.go",
"backup.go",
"blob.go",
"blocks.go",
"checkpoint.go",
"deposit_contract.go",
@@ -39,7 +39,6 @@ go_library(
"//beacon-chain/state/genesis:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
@@ -50,6 +49,7 @@ go_library(
"//io/file:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
@@ -75,8 +75,8 @@ go_test(
name = "go_default_test",
srcs = [
"archived_point_test.go",
"backfill_test.go",
"backup_test.go",
"blob_test.go",
"blocks_test.go",
"checkpoint_test.go",
"deposit_contract_test.go",
@@ -110,11 +110,11 @@ go_test(
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//testing/assert:go_default_library",
"//testing/assertions:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -0,0 +1,44 @@
package kv
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
// SaveBackfillStatus encodes the given BackfillStatus protobuf struct and writes it to a single key in the db.
// This value is used by the backfill service to keep track of the range of blocks that need to be synced. It is also used by the
// code that serves blocks or regenerates states to keep track of what range of blocks are available.
func (s *Store) SaveBackfillStatus(ctx context.Context, bf *dbval.BackfillStatus) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillStatus")
defer span.End()
bfb, err := proto.Marshal(bf)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillStatusKey, bfb)
})
}
// BackfillStatus retrieves the most recently saved version of the BackfillStatus protobuf struct.
// This is used to persist information about backfill status across restarts.
func (s *Store) BackfillStatus(ctx context.Context) (*dbval.BackfillStatus, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillStatus")
defer span.End()
bf := &dbval.BackfillStatus{}
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
bs := bucket.Get(backfillStatusKey)
if len(bs) == 0 {
return errors.Wrap(ErrNotFound, "BackfillStatus not found")
}
return proto.Unmarshal(bs, bf)
})
return bf, err
}

View File

@@ -0,0 +1,35 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"google.golang.org/protobuf/proto"
)
func TestBackfillRoundtrip(t *testing.T) {
db := setupDB(t)
b := &dbval.BackfillStatus{}
b.LowSlot = 23
b.LowRoot = bytesutil.PadTo([]byte("low"), 32)
b.LowParentRoot = bytesutil.PadTo([]byte("parent"), 32)
m, err := proto.Marshal(b)
require.NoError(t, err)
ub := &dbval.BackfillStatus{}
require.NoError(t, proto.Unmarshal(m, ub))
require.Equal(t, b.LowSlot, ub.LowSlot)
require.DeepEqual(t, b.LowRoot, ub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, ub.LowParentRoot)
ctx := context.Background()
require.NoError(t, db.SaveBackfillStatus(ctx, b))
dbub, err := db.BackfillStatus(ctx)
require.NoError(t, err)
require.Equal(t, b.LowSlot, dbub.LowSlot)
require.DeepEqual(t, b.LowRoot, dbub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, dbub.LowParentRoot)
}

View File

@@ -1,320 +0,0 @@
package kv
import (
"bytes"
"context"
"sort"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
)
var (
errBlobSlotMismatch = errors.New("sidecar slot mismatch")
errBlobParentMismatch = errors.New("sidecar parent root mismatch")
errBlobRootMismatch = errors.New("sidecar root mismatch")
errBlobProposerMismatch = errors.New("sidecar proposer index mismatch")
errBlobSidecarLimit = errors.New("sidecar exceeds maximum number of blobs")
errEmptySidecar = errors.New("nil or empty blob sidecars")
errNewerBlobExists = errors.New("Will not overwrite newer blobs in db")
)
// A blob rotating key is represented as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
type blobRotatingKey []byte
// BufferPrefix returns the first 8 bytes of the rotating key.
// This represents bytes(slot_to_rotating_buffer(blob.slot)) in the rotating key.
func (rk blobRotatingKey) BufferPrefix() []byte {
return rk[0:8]
}
// Slot returns the information from the key.
func (rk blobRotatingKey) Slot() types.Slot {
slotBytes := rk[8:16]
return bytesutil.BytesToSlotBigEndian(slotBytes)
}
// BlockRoot returns the block root information from the key.
func (rk blobRotatingKey) BlockRoot() []byte {
return rk[16:]
}
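Because this file is deleted in this change, the key layout above is only of historical interest, but a short worked illustration may help when reviewing the removal. It uses a hypothetical slot and root and assumes the 131072-slot retention window mentioned later in this file:

	slot := types.Slot(200000)
	rotated := slot % 131072 // slot_to_rotating_buffer(200000) = 68928
	root := bytesutil.PadTo([]byte("example-root"), 32)
	key := append(bytesutil.SlotToBytesBigEndian(rotated), bytesutil.SlotToBytesBigEndian(slot)...)
	key = append(key, root...)
	// BufferPrefix() returns key[0:8], Slot() decodes key[8:16], BlockRoot() returns key[16:48].
	_ = blobRotatingKey(key)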
// SaveBlobSidecar saves the blobs for a given epoch in the sidecar bucket. When we receive a blob:
//
// 1. Convert slot using a modulo operator to [0, maxSlots] where maxSlots = MAX_EPOCHS_TO_PERSIST_BLOBS*SLOTS_PER_EPOCH
//
// 2. Compute key for blob as bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
//
// 3. Begin the save algorithm: If the incoming blob has a slot bigger than the saved slot at the spot
// in the rotating keys buffer, we overwrite all elements for that slot. Otherwise, we merge the blob with an existing one.
// Trying to replace a newer blob with an older one is an error.
func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.DeprecatedBlobSidecar) error {
if len(scs) == 0 {
return errEmptySidecar
}
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlobSidecar")
defer span.End()
first := scs[0]
newKey := s.blobSidecarKey(first)
prefix := newKey.BufferPrefix()
var prune []blobRotatingKey
return s.db.Update(func(tx *bolt.Tx) error {
var existing []byte
sc := &ethpb.DeprecatedBlobSidecars{}
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, v := c.Seek(prefix); bytes.HasPrefix(k, prefix); k, v = c.Next() {
key := blobRotatingKey(k)
ks := key.Slot()
if ks < first.Slot {
// Mark older blobs at the same position of the ring buffer for deletion.
prune = append(prune, key)
continue
}
if ks > first.Slot {
// We shouldn't be overwriting newer blobs with older blobs. Something is wrong.
return errNewerBlobExists
}
// The slot isn't older or newer, so it must be equal.
// If the roots match, then we want to merge the new sidecars with the existing data.
if bytes.Equal(first.BlockRoot, key.BlockRoot()) {
existing = v
if err := decode(ctx, v, sc); err != nil {
return err
}
}
// If the slot is equal but the roots don't match, leave the existing key alone and allow the sidecar
// to be written to the new key with the same prefix. In this case sc will be empty, so it will just
// contain the incoming sidecars when we write it.
}
sc.Sidecars = append(sc.Sidecars, scs...)
sortSidecars(sc.Sidecars)
var err error
sc.Sidecars, err = validUniqueSidecars(sc.Sidecars)
if err != nil {
return err
}
encoded, err := encode(ctx, sc)
if err != nil {
return err
}
// don't write if the merged result is the same as before
if len(existing) == len(encoded) && bytes.Equal(existing, encoded) {
return nil
}
// Only prune if we're actually going through with the update.
for _, k := range prune {
if err := bkt.Delete(k); err != nil {
// note: attempting to delete a key that does not exist should not return an error.
log.WithError(err).Warnf("Could not delete blob key %#x.", k)
}
}
return bkt.Put(newKey, encoded)
})
}
// validUniqueSidecars ensures that all sidecars have the same slot, parent root, block root, and proposer index, and
// there are no more than MAX_BLOBS_PER_BLOCK sidecars.
func validUniqueSidecars(scs []*ethpb.DeprecatedBlobSidecar) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(scs) == 0 {
return nil, errEmptySidecar
}
// If there's only 1 sidecar, we've got nothing to compare.
if len(scs) == 1 {
return scs, nil
}
prev := scs[0]
didx := 1
for i := 1; i < len(scs); i++ {
sc := scs[i]
if sc.Slot != prev.Slot {
return nil, errors.Wrapf(errBlobSlotMismatch, "%d != %d", sc.Slot, prev.Slot)
}
if !bytes.Equal(sc.BlockParentRoot, prev.BlockParentRoot) {
return nil, errors.Wrapf(errBlobParentMismatch, "%x != %x", sc.BlockParentRoot, prev.BlockParentRoot)
}
if !bytes.Equal(sc.BlockRoot, prev.BlockRoot) {
return nil, errors.Wrapf(errBlobRootMismatch, "%x != %x", sc.BlockRoot, prev.BlockRoot)
}
if sc.ProposerIndex != prev.ProposerIndex {
return nil, errors.Wrapf(errBlobProposerMismatch, "%d != %d", sc.ProposerIndex, prev.ProposerIndex)
}
// skip duplicate
if sc.Index == prev.Index {
continue
}
if didx != i {
scs[didx] = scs[i]
}
prev = scs[i]
didx += 1
}
if didx > fieldparams.MaxBlobsPerBlock {
return nil, errors.Wrapf(errBlobSidecarLimit, "%d > %d", didx, fieldparams.MaxBlobsPerBlock)
}
return scs[0:didx], nil
}
// sortSidecars sorts the sidecars by their index.
func sortSidecars(scs []*ethpb.DeprecatedBlobSidecar) {
sort.Slice(scs, func(i, j int) bool {
return scs[i].Index < scs[j].Index
})
}
// BlobSidecarsByRoot retrieves the blobs for the given beacon block root.
// If the `indices` argument is omitted, all blobs for the root will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blobs a node will keep before rotating it out.
func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsByRoot")
defer span.End()
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast. Moreover, a thin caching layer can be added.
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.HasSuffix(k, root[:]) {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
func filterForIndices(sc *ethpb.DeprecatedBlobSidecars, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
if len(indices) == 0 {
return sc.Sidecars, nil
}
// This loop assumes that the BlobSidecars value stores the complete set of blobs for a block
// in ascending order from eg 0..3, without gaps. This allows us to assume the indices argument
// maps 1:1 with indices in the BlobSidecars storage object.
maxIdx := uint64(len(sc.Sidecars)) - 1
sidecars := make([]*ethpb.DeprecatedBlobSidecar, len(indices))
for i, idx := range indices {
if idx > maxIdx {
return nil, errors.Wrapf(ErrNotFound, "BlobSidecars missing index: index %d", idx)
}
sidecars[i] = sc.Sidecars[idx]
}
return sidecars, nil
}
// BlobSidecarsBySlot retrieves BlobSidecars for the given slot.
// If the `indices` argument is omitted, all blobs for the slot will be returned.
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blob entries a node will keep before rotating them out.
func (s *Store) BlobSidecarsBySlot(ctx context.Context, slot types.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsBySlot")
defer span.End()
var enc []byte
sk := s.slotKey(slot)
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
// Bucket size is bounded and bolt cursors are fast; a thin caching layer could be added later if needed.
for k, v := c.Seek(sk); bytes.HasPrefix(k, sk); k, v = c.Next() {
slotInKey := bytesutil.BytesToSlotBigEndian(k[8:16])
if slotInKey == slot {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
sc := &ethpb.DeprecatedBlobSidecars{}
if err := decode(ctx, enc, sc); err != nil {
return nil, err
}
return filterForIndices(sc, indices...)
}
// DeleteBlobSidecars deletes all blob sidecars stored under the given beacon block root.
func (s *Store) DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.DeleteBlobSidecar")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
c := bkt.Cursor()
for k, _ := c.First(); k != nil; k, _ = c.Next() {
if bytes.HasSuffix(k, beaconBlockRoot[:]) {
if err := bkt.Delete(k); err != nil {
return err
}
}
}
return nil
})
}
// We define a blob sidecar key as: bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
// where slot_to_rotating_buffer(slot) = slot % MAX_SLOTS_TO_PERSIST_BLOBS.
func (s *Store) blobSidecarKey(blob *ethpb.DeprecatedBlobSidecar) blobRotatingKey {
key := s.slotKey(blob.Slot)
key = append(key, bytesutil.SlotToBytesBigEndian(blob.Slot)...)
key = append(key, blob.BlockRoot...)
return key
}
func (s *Store) slotKey(slot types.Slot) []byte {
return bytesutil.SlotToBytesBigEndian(slot.ModSlot(s.blobRetentionSlots()))
}
func (s *Store) blobRetentionSlots() types.Slot {
return types.Slot(s.blobRetentionEpochs.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
}
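The rotating key is therefore 8 bytes of buffer prefix (slot modulo the retention window), 8 bytes of the slot itself, and the 32-byte block root. The tests further down read these parts back via BufferPrefix(), Slot() and BlockRoot(); the accessors below are a sketch of what that method set could look like for this layout, inferred from the tests rather than taken from the actual blobRotatingKey implementation:
// Hypothetical accessors for the prefix ++ slot ++ root layout described above.
func (k blobRotatingKey) BufferPrefix() []byte { return k[0:8] }

func (k blobRotatingKey) Slot() types.Slot {
    return bytesutil.BytesToSlotBigEndian(k[8:16])
}

func (k blobRotatingKey) BlockRoot() []byte { return k[16:] }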
var errBlobRetentionEpochMismatch = errors.New("epochs for blobs request value in DB does not match runtime config")
func (s *Store) checkEpochsForBlobSidecarsRequestBucket(db *bolt.DB) error {
uRetentionEpochs := uint64(s.blobRetentionEpochs)
if err := db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket(chainMetadataBucket)
v := b.Get(blobRetentionEpochsKey)
if v == nil {
if err := b.Put(blobRetentionEpochsKey, bytesutil.Uint64ToBytesBigEndian(uRetentionEpochs)); err != nil {
return err
}
return nil
}
e := bytesutil.BytesToUint64BigEndian(v)
if e != uRetentionEpochs {
return errors.Wrapf(errBlobRetentionEpochMismatch, "db=%d, config=%d", e, uRetentionEpochs)
}
return nil
}); err != nil {
return err
}
return nil
}
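This check matters because the entire rotating keyspace is derived from the retention window: slotKey is the slot modulo the retention expressed in slots, so changing the configured retention silently re-maps where every slot's blobs are looked up and pruned. A standalone illustration of the collision arithmetic (constants and names here are hypothetical, chosen only to show the modulus behavior):
package main

import "fmt"

// rotatingPrefix mirrors slot_to_rotating_buffer: the buffer position a slot maps to.
func rotatingPrefix(slot, retentionSlots uint64) uint64 {
    return slot % retentionSlots
}

func main() {
    const oldRetention = uint64(4096 * 32) // e.g. 4096 epochs of 32 slots
    const newRetention = uint64(8192 * 32)
    slot := uint64(5)
    // Under a fixed retention, a slot and the slot one full window later share a prefix,
    // which is what lets a new write prune the stale entry occupying that position.
    fmt.Println(rotatingPrefix(slot, oldRetention) == rotatingPrefix(slot+oldRetention, oldRetention)) // true
    // After a retention change, keys written under the old modulus no longer line up with
    // where lookups and pruning would now compute them.
    fmt.Println(rotatingPrefix(slot+oldRetention, oldRetention) == rotatingPrefix(slot+oldRetention, newRetention)) // false
}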


@@ -1,532 +0,0 @@
package kv
import (
"context"
"crypto/rand"
"fmt"
"testing"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assertions"
"github.com/prysmaticlabs/prysm/v4/testing/require"
bolt "go.etcd.io/bbolt"
)
func equalBlobSlices(expect []*ethpb.DeprecatedBlobSidecar, got []*ethpb.DeprecatedBlobSidecar) error {
if len(expect) != len(got) {
return fmt.Errorf("mismatched lengths, expect=%d, got=%d", len(expect), len(got))
}
for i := 0; i < len(expect); i++ {
es := expect[i]
gs := got[i]
var e string
assertions.DeepEqual(assertions.SprintfAssertionLoggerFn(&e), es, gs)
if e != "" {
return errors.New(e)
}
}
return nil
}
func TestStore_BlobSidecars(t *testing.T) {
ctx := context.Background()
t.Run("empty", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 0)
require.ErrorContains(t, "nil or empty blob sidecars", db.SaveBlobSidecar(ctx, scs))
})
t.Run("empty by root", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsByRoot(ctx, [32]byte{})
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("empty by slot", func(t *testing.T) {
db := setupDB(t)
got, err := db.BlobSidecarsBySlot(ctx, 1)
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by root (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root (max), per batch", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by root, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by root", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot), uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("save and retrieve by slot (one)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, 1)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, 1, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot (max)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve by slot, max and individually", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save and retrieve valid subset by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, 0, 3)
require.NoError(t, err)
require.NoError(t, equalBlobSlices(expect, got))
require.Equal(t, uint64(0), got[0].Index)
require.Equal(t, uint64(3), got[1].Index)
})
t.Run("error for invalid index when retrieving by slot", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot, uint64(len(scs)))
require.ErrorIs(t, err, ErrNotFound)
require.Equal(t, 0, len(got))
})
t.Run("delete works", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
require.NoError(t, db.DeleteBlobSidecars(ctx, bytesutil.ToBytes32(scs[0].BlockRoot)))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.ErrorIs(t, ErrNotFound, err)
require.Equal(t, 0, len(got))
})
t.Run("saving blob different times", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for i := 0; i < fieldparams.MaxBlobsPerBlock; i++ {
scs[i].Slot = primitives.Slot(i)
scs[i].BlockRoot = bytesutil.PadTo([]byte{byte(i)}, 32)
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{scs[i]}))
br := bytesutil.ToBytes32(scs[i].BlockRoot)
saved, err := db.BlobSidecarsByRoot(ctx, br)
require.NoError(t, err)
require.NoError(t, equalBlobSlices([]*ethpb.DeprecatedBlobSidecar{scs[i]}, saved))
}
})
t.Run("saving a new blob for rotation (batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
newScs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range newScs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, newScs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(newScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(newScs, got))
})
t.Run("save multiple blobs after new rotation (individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (batch then individually)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
oldBlockRoot := scs[0].BlockRoot
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(oldBlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save multiple blobs after new rotation (individually then batch)", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
scs = generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
newRetentionSlot := primitives.Slot(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest.Mul(uint64(params.BeaconConfig().SlotsPerEpoch)))
for _, sc := range scs {
sc.Slot = sc.Slot + newRetentionSlot
}
require.NoError(t, db.SaveBlobSidecar(ctx, scs))
_, err = db.BlobSidecarsBySlot(ctx, 100)
require.ErrorIs(t, ErrNotFound, err)
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
})
t.Run("save equivocating blobs", func(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
eScs := generateEquivocatingBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
for i, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{eScs[i]}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(scs, got))
got, err = db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(eScs[0].BlockRoot))
require.NoError(t, err)
require.NoError(t, equalBlobSlices(eScs, got))
})
}
func generateBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateBlobSidecar(t, i)
}
return blobSidecars
}
func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'a'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 101,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func generateEquivocatingBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateEquivocatingBlobSidecar(t, i)
}
return blobSidecars
}
func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
kzgCommitment := make([]byte, 48)
_, err = rand.Read(kzgCommitment)
require.NoError(t, err)
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'c'}, 32),
Index: index,
Slot: 100,
BlockParentRoot: bytesutil.PadTo([]byte{'b'}, 32),
ProposerIndex: 102,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
}
}
func Test_validUniqueSidecars_validation(t *testing.T) {
tests := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
err error
}{
{name: "empty", scs: []*ethpb.DeprecatedBlobSidecar{}, err: errEmptySidecar},
{name: "too many sidecars", scs: generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock+1), err: errBlobSidecarLimit},
{name: "invalid slot", scs: []*ethpb.DeprecatedBlobSidecar{{Slot: 1}, {Slot: 2}}, err: errBlobSlotMismatch},
{name: "invalid proposer index", scs: []*ethpb.DeprecatedBlobSidecar{{ProposerIndex: 1}, {ProposerIndex: 2}}, err: errBlobProposerMismatch},
{name: "invalid root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockRoot: []byte{1}}, {BlockRoot: []byte{2}}}, err: errBlobRootMismatch},
{name: "invalid parent root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockParentRoot: []byte{1}}, {BlockParentRoot: []byte{2}}}, err: errBlobParentMismatch},
{name: "happy path", scs: []*ethpb.DeprecatedBlobSidecar{{Index: 0}, {Index: 1}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := validUniqueSidecars(tt.scs)
if tt.err != nil {
require.ErrorIs(t, err, tt.err)
} else {
require.NoError(t, err)
}
})
}
}
func Test_validUniqueSidecars_dedup(t *testing.T) {
cases := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
expected []*ethpb.DeprecatedBlobSidecar
err error
}{
{
name: "duplicate sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "single sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
},
{
name: "multiple duplicates",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "ok number after de-dupe, > 6 before",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "max unique, no dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
},
{
name: "too many unique",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
{
name: "too many unique with dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}, {Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
u, err := validUniqueSidecars(c.scs)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
}
require.Equal(t, len(c.expected), len(u))
})
}
}
func TestStore_sortSidecars(t *testing.T) {
scs := []*ethpb.DeprecatedBlobSidecar{
{Index: 6},
{Index: 4},
{Index: 2},
{Index: 1},
{Index: 3},
{Index: 5},
{},
}
sortSidecars(scs)
for i := 0; i < len(scs)-1; i++ {
require.Equal(t, uint64(i), scs[i].Index)
}
}
func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
s := setupDB(b)
ctx := context.Background()
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'a'}, 32), Slot: 0},
}))
err := s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blobsBucket)
for i := 1; i < 131071; i++ {
r := make([]byte, 32)
_, err := rand.Read(r)
require.NoError(b, err)
scs := []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: r, Slot: primitives.Slot(i)},
}
k := s.blobSidecarKey(scs[0])
encodedBlobSidecar, err := encode(ctx, &ethpb.DeprecatedBlobSidecars{Sidecars: scs})
require.NoError(b, err)
require.NoError(b, bkt.Put(k, encodedBlobSidecar))
}
return nil
})
require.NoError(b, err)
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'b'}, 32), Slot: 131071},
}))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err := s.BlobSidecarsByRoot(ctx, [32]byte{'b'})
require.NoError(b, err)
}
}
func Test_checkEpochsForBlobSidecarsRequestBucket(t *testing.T) {
s := setupDB(t)
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First write
require.NoError(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db)) // First check
s.blobRetentionEpochs += 1
require.ErrorIs(t, s.checkEpochsForBlobSidecarsRequestBucket(s.db), errBlobRetentionEpochMismatch)
}
func TestBlobRotatingKey(t *testing.T) {
s := setupDB(t)
k := s.blobSidecarKey(&ethpb.DeprecatedBlobSidecar{
Slot: 1,
BlockRoot: []byte{2},
})
require.Equal(t, types.Slot(1), k.Slot())
require.DeepEqual(t, []byte{2}, k.BlockRoot())
require.DeepEqual(t, s.slotKey(types.Slot(1)), k.BufferPrefix())
}


@@ -70,25 +70,6 @@ func (s *Store) OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
return root, err
}
// BackfillBlockRoot keeps track of the highest block available before the OriginCheckpointBlockRoot
func (s *Store) BackfillBlockRoot(ctx context.Context) ([32]byte, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillBlockRoot")
defer span.End()
var root [32]byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
rootSlice := bkt.Get(backfillBlockRootKey)
if len(rootSlice) == 0 {
return ErrNotFoundBackfillBlockRoot
}
root = bytesutil.ToBytes32(rootSlice)
return nil
})
return root, err
}
// HeadBlock returns the latest canonical block in the Ethereum Beacon Chain.
func (s *Store) HeadBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HeadBlock")
@@ -292,55 +273,95 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlocks")
defer span.End()
// Performing marshaling, hashing, and indexing outside the bolt transaction
// to minimize the time we hold the DB lock.
blockRoots := make([][]byte, len(blks))
encodedBlocks := make([][]byte, len(blks))
indicesForBlocks := make([]map[string][]byte, len(blks))
for i, blk := range blks {
blockRoot, err := blk.Block().HashTreeRoot()
robs := make([]blocks.ROBlock, len(blks))
for i := range blks {
rb, err := blocks.NewROBlock(blks[i])
if err != nil {
return err
return errors.Wrapf(err, "failed to make an ROBlock for a block in SaveBlocks")
}
enc, err := s.marshalBlock(ctx, blk)
if err != nil {
return err
}
blockRoots[i] = blockRoot[:]
encodedBlocks[i] = enc
indicesByBucket := createBlockIndicesFromBlock(ctx, blk.Block())
indicesForBlocks[i] = indicesByBucket
robs[i] = rb
}
saveBlinded, err := s.shouldSaveBlinded(ctx)
return s.SaveROBlocks(ctx, robs, true)
}
type blockBatchEntry struct {
root []byte
block interfaces.ReadOnlySignedBeaconBlock
enc []byte
updated bool
indices map[string][]byte
}
func prepareBlockBatch(blks []blocks.ROBlock, shouldBlind bool) ([]blockBatchEntry, error) {
batch := make([]blockBatchEntry, len(blks))
for i := range blks {
batch[i].root, batch[i].block = blks[i].RootSlice(), blks[i].ReadOnlySignedBeaconBlock
batch[i].indices = blockIndices(batch[i].block.Block().Slot(), batch[i].block.Block().ParentRoot())
if shouldBlind {
blinded, err := batch[i].block.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return nil, errors.Wrapf(err, "could not convert block to blinded format for root %#x", batch[i].root)
}
// Blocks from forks without a blinded variant (pre-Bellatrix) return ErrUnsupportedVersion; use the full block already in the batch entry.
} else {
batch[i].block = blinded
}
}
enc, err := encodeBlock(batch[i].block)
if err != nil {
return nil, errors.Wrapf(err, "failed to encode block for root %#x", batch[i].root)
}
batch[i].enc = enc
}
return batch, nil
}
func (s *Store) SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error {
shouldBlind, err := s.shouldSaveBlinded(ctx)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
// Precompute expensive values outside the db transaction.
batch, err := prepareBlockBatch(blks, shouldBlind)
if err != nil {
return errors.Wrap(err, "failed to encode all blocks in batch for saving to the db")
}
err = s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for i, blk := range blks {
if existingBlock := bkt.Get(blockRoots[i]); existingBlock != nil {
for i := range batch {
if exists := bkt.Get(batch[i].root); exists != nil {
continue
}
if err := updateValueForIndices(ctx, indicesForBlocks[i], blockRoots[i], tx); err != nil {
return errors.Wrap(err, "could not update DB indices")
if err := bkt.Put(batch[i].root, batch[i].enc); err != nil {
return errors.Wrapf(err, "could write block to db with root %#x", batch[i].root)
}
if saveBlinded {
blindedBlock, err := blk.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return err
}
} else {
blk = blindedBlock
}
}
s.blockCache.Set(string(blockRoots[i]), blk, int64(len(encodedBlocks[i])))
if err := bkt.Put(blockRoots[i], encodedBlocks[i]); err != nil {
return err
if err := updateValueForIndices(ctx, batch[i].indices, batch[i].root, tx); err != nil {
return errors.Wrapf(err, "could not update DB indices for root %#x", batch[i].root)
}
batch[i].updated = true
}
return nil
})
if !cache {
return err
}
for i := range batch {
if batch[i].updated {
s.blockCache.Set(string(batch[i].root), batch[i].block, int64(len(batch[i].enc)))
}
}
return err
}
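SaveROBlocks is the new batch entry point: encoding and index computation happen in prepareBlockBatch before the bolt transaction, already-stored roots are skipped inside it, and the block cache is only warmed afterwards, for entries that were actually written and only when the cache flag is set. A hedged caller sketch; saveVerifiedBatch is hypothetical and simply mirrors how SaveBlocks above wraps blocks into ROBlocks:
// saveVerifiedBatch persists a batch of blocks without warming the block cache,
// which a bulk writer (e.g. backfill-style code) may prefer.
func saveVerifiedBatch(ctx context.Context, s *Store, blks []interfaces.ReadOnlySignedBeaconBlock) error {
    robs := make([]blocks.ROBlock, len(blks))
    for i := range blks {
        rb, err := blocks.NewROBlock(blks[i]) // computes and carries the block root
        if err != nil {
            return err
        }
        robs[i] = rb
    }
    return s.SaveROBlocks(ctx, robs, false) // cache=false: skip s.blockCache
}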
// blockIndices takes a block's slot and parent root and returns
// a map of bolt DB index buckets to the corresponding index key for that block, such as
// (block slot indices bucket -> big-endian slot bytes).
func blockIndices(slot primitives.Slot, parentRoot [32]byte) map[string][]byte {
return map[string][]byte{
string(blockSlotIndicesBucket): bytesutil.SlotToBytesBigEndian(slot),
string(blockParentRootIndicesBucket): parentRoot[:],
}
}
// SaveHeadBlockRoot to the db.
@@ -417,17 +438,6 @@ func (s *Store) SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32
})
}
// SaveBackfillBlockRoot is used to keep track of the most recently backfilled block root when
// the node was initialized via checkpoint sync.
func (s *Store) SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillBlockRoot")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillBlockRootKey, blockRoot[:])
})
}
// HighestRootsBelowSlot returns roots from the database slot index from the highest slot below the input slot.
// The slot value at the beginning of the return list is the slot where the roots were found. This is helpful so that
// calling code can make decisions based on the slot without resolving the blocks to discover their slot (for instance
@@ -726,31 +736,6 @@ func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot primitives.Slot) ([
return [][32]byte{}, nil
}
// createBlockIndicesFromBlock takes in a beacon block and returns
// a map of bolt DB index buckets corresponding to each particular key for indices for
// data, such as (shard indices bucket -> shard 5).
func createBlockIndicesFromBlock(ctx context.Context, block interfaces.ReadOnlyBeaconBlock) map[string][]byte {
_, span := trace.StartSpan(ctx, "BeaconDB.createBlockIndicesFromBlock")
defer span.End()
indicesByBucket := make(map[string][]byte)
// Every index has a unique bucket for fast, binary-search
// range scans for filtering across keys.
buckets := [][]byte{
blockSlotIndicesBucket,
}
indices := [][]byte{
bytesutil.SlotToBytesBigEndian(block.Slot()),
}
buckets = append(buckets, blockParentRootIndicesBucket)
parentRoot := block.ParentRoot()
indices = append(indices, parentRoot[:])
for i := 0; i < len(buckets); i++ {
indicesByBucket[string(buckets[i])] = indices[i]
}
return indicesByBucket
}
// createBlockFiltersFromIndices takes in filter criteria and returns
// a map with a single key-value pair: "block-parent-root-indices” -> parentRoot (array of bytes).
//
@@ -838,74 +823,44 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
return blocks.NewSignedBeaconBlock(rawBlock)
}
func (s *Store) marshalBlock(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
shouldBlind, err := s.shouldSaveBlinded(ctx)
func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
key, err := keyForBlock(blk)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "could not determine version encoding key for block")
}
if shouldBlind {
return marshalBlockBlinded(ctx, blk)
enc, err := blk.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal block")
}
return marshalBlockFull(ctx, blk)
dbfmt := make([]byte, len(key)+len(enc))
if len(key) > 0 {
copy(dbfmt, key)
}
copy(dbfmt[len(key):], enc)
return snappy.Encode(nil, dbfmt), nil
}
// Encodes a full beacon block to the DB with its associated key.
func marshalBlockFull(
_ context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
var encodedBlock []byte
var err error
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaKey, encodedBlock...)), nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixKey, encodedBlock...)), nil
case version.Altair:
return snappy.Encode(nil, append(altairKey, encodedBlock...)), nil
case version.Phase0:
return snappy.Encode(nil, encodedBlock), nil
default:
return nil, errors.New("unknown block version")
}
}
// Encodes a blinded beacon block with its associated key.
// If the block does not support blinding, we then encode it as a full
// block with its associated key by calling marshalBlockFull.
func marshalBlockBlinded(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
blindedBlock, err := blk.ToBlinded()
if err != nil {
switch {
case errors.Is(err, blocks.ErrUnsupportedVersion):
return marshalBlockFull(ctx, blk)
default:
return nil, errors.Wrap(err, "could not convert block to blinded format")
if blk.IsBlinded() {
return denebBlindKey, nil
}
}
encodedBlock, err := blindedBlock.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded block")
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebBlindKey, encodedBlock...)), nil
return denebKey, nil
case version.Capella:
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return capellaBlindKey, nil
}
return capellaKey, nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return bellatrixBlindKey, nil
}
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return nil, nil
default:
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}


@@ -126,23 +126,6 @@ var blockTests = []struct {
},
}
func TestStore_SaveBackfillBlockRoot(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
_, err := db.BackfillBlockRoot(ctx)
require.ErrorIs(t, err, ErrNotFoundBackfillBlockRoot)
var expected [32]byte
copy(expected[:], []byte{0x23})
err = db.SaveBackfillBlockRoot(ctx, expected)
require.NoError(t, err)
actual, err := db.BackfillBlockRoot(ctx)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
func TestStore_SaveBlock_NoDuplicates(t *testing.T) {
BlockCacheSize = 1
slot := primitives.Slot(20)


@@ -21,3 +21,8 @@ var ErrNotFoundBackfillBlockRoot = errors.Wrap(ErrNotFound, "BackfillBlockRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")
var errEmptyBlockSlice = errors.New("[]blocks.ROBlock is empty")
var errIncorrectBlockParent = errors.New("unexpected missing or forked blocks in a []ROBlock")
var errFinalizedChildNotFound = errors.New("unable to find finalized root descending from backfill batch")
var errNotConnectedToFinalized = errors.New("unable to finalize backfill blocks, finalized parent_root does not match")

Some files were not shown because too many files have changed in this diff.