Compare commits

...

331 Commits

Author SHA1 Message Date
Thabokani
692ebd313f Fix typos in doc (#13583)
Signed-off-by: Thabokani <149070269+Thabokani@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-06 10:18:21 +00:00
Nishant Das
6fa656c1ee Add Sync Checker (#13580)
* fix it

* add it in

* typo

* fix tests

* fix tests

* export and add test

* preston's review
2024-02-06 02:34:30 +00:00
Dhruv Bodani
55a29a4670 Implement beacon committee selections (#13503)
* implement beacon committee selections

* fix build

* fix lint

* fix lint

* Update beacon-chain/rpc/eth/shared/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* move beacon committee selection structs to validator module

* fix bazel build files

* add support for POST and GET endpoints for get state validators query

* add a handler to return error from beacon node

* move beacon committee selection to validator top-level module

* fix bazel

* re-arrange fields to fix lint

* fix TestServer_InitializeRoutes

* fix build and lint

* fix build and lint

* fix TestSubmitAggregateAndProof_Distributed

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-02-05 15:43:51 +00:00
Potuz
e2e7e84a96 Get the right head state when proposing a failed reorg (#13579)
* Get the right head state when proposing a failed reorg

* add unit test

* split logic
2024-02-05 13:40:35 +00:00
terence
91b0a93df7 Enhance EL block height log (#13582) 2024-02-05 01:52:01 +00:00
Preston Van Loon
8839015312 docker: Add coreutils to docker images (#13564)
* Add coreutils to docker images

* add coreutils dependencies

* Add a prysmaticlabs.com/uploads backup of the deb files

* Run gazelle and fix issues

* Remove broken tar, change http_archive deps to debian_archive, remove http mirrors in favor of snapshot

* Add comments about which deps are required by other deps
2024-02-03 19:21:21 +00:00
terence
61ab4bf7ca Rename block by range request log (#13561) 2024-02-03 19:20:04 +00:00
Radosław Kapka
e3ce1bde45 Move API structs to api module (#13577) 2024-02-03 11:57:01 +00:00
Nishant Das
9d1189b222 Do Not Cache For Non Active Public Keys (#13581)
* fix it

* clean up
2024-02-03 05:19:54 +00:00
KeienWang
74f5452a64 Fix typo in [beacon-chain/cache/depositsnapshot/deposit_cache_test.go]: Corrected a spelling error. (#13532)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-03 05:14:32 +00:00
Nishant Das
ea1204d3c7 Fix Slashing Gossip Checks (#13574)
* fix it

* add for proposals too
2024-02-02 23:13:22 +00:00
Radosław Kapka
d9ac69752b Return consensus block value in Wei (#13575)
* Return consensus block value in Wei

* Return consensus block value in Wei

* review
2024-02-02 18:17:40 +00:00
terence
52af63f25a Revise blob sidecar not found log (#13571)
* Update blob sidecar not found log

* Use fields
2024-02-01 20:48:59 +00:00
james-prysm
2dad245bc8 handle slice out of range (#13568)
* handle slice out of range

* adding some tests
2024-02-01 16:59:40 +00:00
Potuz
9a9990605c Update Gohashtree to v0.0.4-beta (#13569)
* Update Gohashtree to v0.0.4-beta

* go mod tidy

* go mod tidy

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-02-01 15:42:56 +00:00
james-prysm
2cddb5ca86 fixing jwt auth checks (#13565) 2024-02-01 15:13:52 +00:00
Nishant Das
73ce28c356 make it the default (#13556) 2024-01-31 10:27:26 +00:00
Manu NALEPA
7a294e861e Beacon node slasher improvement (#13549)
* Slasher: Ensure all goroutines are stopped before running `Stop` actions.

Fixes #13550.
In tests, `exitChan` is now unnecessary since wait groups are used to wait
for all goroutines to be stopped.

* `slasher.go`: Add comments and rename some variables. - NFC

* `detect_blocks.go`: Improve. - NFC

- Rename some variables.
- Add comments.
- Use second element of `range` when possible.

* `chunks.go`: Remove `_` receivers. - NFC

* `validateAttestationIntegrity`: Improve documentation. - NFC

* `filterAttestations`: Avoid `else` and rename variable. - NFC

* `slasher.go`: Fix and add comments.

* `SaveAttestationRecordsForValidators`: Remove unused code.

* `LastEpochWrittenForValidators`: Name variables consistently. - NFC

Avoid mixing `indice(s)` and `index(es)`.

* `SaveLastEpochsWrittenForValidators`: Name variables consistently. - NFC

* `CheckAttesterDoubleVotes`: Rename variables and add comments. - NFC

* `schema.go`: Add comments. - NFC

* `processQueuedAttestations`: Add comments. - NFC

* `checkDoubleVotes`: Rename variable. - NFC

* `Test_processQueuedAttestations`: Ensure there is no error log.

* `shouldNotBeSlashable` => `shouldBeSlashable`

* `Test_processQueuedAttestations`: Add 2 test cases:
- Same target with different signing roots
- Same target with same signing roots

* `checkDoubleVotesOnDisk` ==> `checkDoubleVotes`.

Before this commit, `checkDoubleVotes` did two tasks:
- Checking if there are any slashable double votes in the input
  list of attestations with respect to each other.
- Checking if there are any slashable double votes in the input
  list of attestations with respect to our database.

However, `checkDoubleVotes` is called only in
`checkSlashableAttestations`.

And `checkSlashableAttestations` is called only in:
- `processQueuedAttestations`, and in
- `IsSlashableAttestation`

Study of case `processQueuedAttestations`:
---------------------------------------------
In `processQueuedAttestations`, `checkSlashableAttestations`
is ALWAYS called after
`Database.SaveAttestationRecordsForValidators`.

It means that, when calling `checkSlashableAttestations`,
`validAtts` are ALREADY stored in the DB.

Each attestation of `validAtts` will be checked twice:
- Against the other attestations of `validAtts` (the portion of
  deleted code)
- Against the content of the database.

One of those two checks is redundant.
==> We can remove the check against other attestations in `validAtts`.

Study of case `Database.SaveAttestationRecordsForValidators`:
----------------------------------------------------------------
In `Database.SaveAttestationRecordsForValidators`,
`checkSlashableAttestations` is ALWAYS called with a list of
attestations containing only ONE attestation.

This single attestation will be checked twice:
- Against itself, and an attestation cannot conflict with itself.
- Against the content of the database.

==> We can remove the check against other attestations in `validAtts`.

=========================

In both cases, we showed that we can remove the check of attestations
against the content of `validAtts`, and the corresponding test
`Test_checkDoubleVotes_SlashableInputAttestations`.

* `Test_processQueuedBlocks_DetectsDoubleProposals`: Wrap proposals.

So we can add new proposals later.

* Fix slasher multiple proposals false negative.

If a first batch of blocks is sent with:
- validator 1 - slot 4 - signing root 1
- validator 1 - slot 5 - signing root 1

Then, if a second batch of blocks is sent with:
- validator 1 - slot 4 - signing root 2

Because we have two blocks proposed by the same validator (1) and for
the same slot (4), but with two different signing roots (1 and 2), the
validator 1 should be slashed.

This is not the case before this commit.
A new test case has been added as well to check this.

Fixes #13551

* `params.go`: Change comments. - NFC

* `CheckSlashable`: Keep the happy path without indentation.

* `detectAllAttesterSlashings` => `checkSurrounds`.

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* `CheckAttesterDoubleVotes`: Keep happy path without indentation.

Well, even if, in our case, "happy path" means slashing.

* 'SaveAttestationRecordsForValidators': Save the first attestation.

In case of multiple votes, arbitrarily save the first attestation.
Saving the first one in particular has no functional impact,
since in any case all attestations will be tested against
the content of the database. So all but the first one will be
detected as slashable.

However, saving the first one rather than another one lets us avoid
modifying the end-to-end tests, since they expect the first one
to be saved in the database.

* Rename `min` => `minimum`.

Not to conflict with the new `min` built-in function.

* `couldNotSaveSlashableAtt` ==> `couldNotCheckSlashableAtt`

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-31 09:49:14 +00:00
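The false-negative fix described in the slasher commit above (7a294e861e) boils down to checking each incoming proposal against previously recorded proposals for the same validator and slot, not only against the other proposals in the current batch. Below is a minimal, self-contained Go sketch of that idea; the `proposal` type, the `detectDoubleProposals` helper, and the in-memory `seen` map are hypothetical stand-ins for Prysm's slasher types and database, not the actual implementation.

```go
package main

import "fmt"

// proposal is a simplified stand-in for a signed block header record;
// the real slasher stores these records in its database.
type proposal struct {
	ValidatorIndex uint64
	Slot           uint64
	SigningRoot    [32]byte
}

// slotKey identifies "same validator, same slot".
type slotKey struct {
	ValidatorIndex uint64
	Slot           uint64
}

// detectDoubleProposals flags proposals that conflict with an earlier record
// for the same (validator, slot) but a different signing root. The seen map
// plays the role of the slasher database, so conflicts are caught across
// batches, not only within a single batch.
func detectDoubleProposals(seen map[slotKey][32]byte, batch []proposal) []proposal {
	var slashable []proposal
	for _, p := range batch {
		k := slotKey{ValidatorIndex: p.ValidatorIndex, Slot: p.Slot}
		if prev, ok := seen[k]; ok {
			if prev != p.SigningRoot {
				slashable = append(slashable, p)
			}
			continue
		}
		seen[k] = p.SigningRoot
	}
	return slashable
}

func main() {
	seen := map[slotKey][32]byte{}
	// First batch: validator 1 proposes at slots 4 and 5 with signing root 1.
	detectDoubleProposals(seen, []proposal{
		{ValidatorIndex: 1, Slot: 4, SigningRoot: [32]byte{1}},
		{ValidatorIndex: 1, Slot: 5, SigningRoot: [32]byte{1}},
	})
	// Second batch: same validator, same slot 4, different signing root 2.
	out := detectDoubleProposals(seen, []proposal{
		{ValidatorIndex: 1, Slot: 4, SigningRoot: [32]byte{2}},
	})
	fmt.Printf("slashable proposals: %d\n", len(out)) // prints 1
}
```

With the first batch recorded, the conflicting slot-4 proposal in the second batch is flagged, mirroring the new test case mentioned in the commit.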
james-prysm
258123341e add a log and update size for promptui (#13542) 2024-01-30 17:19:31 +00:00
Preston Van Loon
224b136737 Revert "set limit to multiple of burst for goerli" (#13552)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-30 06:10:12 +00:00
Nishant Das
3ed4866eec Makes Our New Deposit Trie The Default (#13555)
* make 4881 the default

* fix failed build
2024-01-30 05:15:52 +00:00
kasey
373c853d17 set limit to multiple of burst for goerli (#13544)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-27 22:12:08 +00:00
terence
23b0718b5f Add metric for data availability wait time (#13534)
* Add metric for data availability wait time

* Kasey's feedback

* Kasey's feedback
2024-01-26 18:17:25 +00:00
terence
3a9854145c Correct metrics from ns to ms (#13540) 2024-01-26 17:43:30 +00:00
Radosław Kapka
1b70d2b566 Fetch unaggregated atts in GetAggregateAttestation (#13533) 2024-01-26 17:08:58 +00:00
Nishant Das
59b310a221 make it the same (#13531) 2024-01-26 05:35:27 +00:00
Nishant Das
22b6d1751d Enable Backfill in E2E (#13524)
* enable backfill for devmode

* enable backfill

* gaz

* move to its own package

* fix panic

* fix bug

* gaz

* kasey's review
2024-01-26 04:37:41 +00:00
Potuz
9c13d47f4c fix off by one (#13529) 2024-01-26 00:05:56 +00:00
Justin Traglia
835dce5f6e Enable wastedassign linter & fix findings (#13507)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-25 17:07:48 +00:00
james-prysm
c4c28e4825 fixing small typo in error messages (#13525) 2024-01-25 04:56:17 +00:00
Radosław Kapka
c996109b3a Return payload value in Wei from /eth/v3/validator/blocks (#13497)
* Add value in Wei to execution payload

* simplify how payload is returned

* test fix

* fix issues

* review

* fix block handlers
2024-01-24 20:58:35 +00:00
terence
e397f8a2bd Skip origin root when cleaning dirty state (#13521)
* Skip origin root when cleaning dirty state

* Clean up
2024-01-24 17:22:50 +00:00
Radosław Kapka
6438060733 Clear cache everywhere in tests of core helpers (#13509) 2024-01-24 16:11:43 +00:00
Nishant Das
a2892b1ed5 clean up validate beacon block (#13517) 2024-01-24 05:48:15 +00:00
Nishant Das
f4ab2ca79f lower it (#13516) 2024-01-24 01:28:36 +00:00
kasey
dbcf5c29cd moving some blob rpc validation close to peer read (#13511)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 22:54:16 +00:00
james-prysm
c9fe53bc32 Blob API: make errors more generic (#13513)
* make api response more generic

* gaz
2024-01-23 20:07:46 +00:00
terence
8522febd88 Add Holesky Deneb Epoch (#13506)
* Add Holesky Deneb Epoch

* Fix fork version

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix config

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-23 19:29:17 +00:00
james-prysm
75a28310c2 fixing route to match specs (#13510) 2024-01-23 18:04:03 +00:00
kasey
1df173e701 Block backfilling (#12968)
* backfill service

* fix bug where origin state is never unlocked

* support mvslice states

* use renamed interface

* refactor db code to skip block cache for backfill

* lint

* add test for verifier.verify

* enable service in service init test

* cancellation cleanup

* adding nil checks to configset juggling

* assume blocks are available by default

As long as we're sure the AvailableBlocker is initialized correctly
during node startup, defaulting to assuming we aren't in a checkpoint
sync simplifies things greatly for tests.

* block saving path refactor and bugfix

* fix fillback test

* fix BackfillStatus init tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 07:54:30 +00:00
terence
3187a05a76 Align aggregated att gossip validations (#13490)
* Align aggregated att gossip validations

* Feedback on reusing existing methods

* Nishant's feedback

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 04:37:06 +00:00
Justin Traglia
4e24102237 Fix minor issue in blsToExecChange validator (#13498)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 03:26:57 +00:00
james-prysm
8dd5e96b29 re-enabling jwt on keymanager API (#13492)
* re-enabling jwt on keymanager API

* adding tests

* Update validator/rpc/intercepter.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* handling error in test

* remove debugging logs

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-22 22:16:10 +00:00
james-prysm
4afb379f8d cleanup duties naming (#13451)
* updating some naming to reflect changes to duties

* fixing unit tests

* fixing more tests
2024-01-22 16:58:25 +00:00
Nishant Das
5a2453ac9c Add Debug State Transition Method (#13495)
* add it

* lint
2024-01-22 14:46:20 +00:00
Nishant Das
e610d2a5de fix it (#13496) 2024-01-22 14:26:14 +00:00
Preston Van Loon
233aaf2f9e e2e: Fix multiclient lighthouse flag removal (#13494) 2024-01-21 21:11:11 +00:00
Nishant Das
a49bdcaa1f fix it (#13493) 2024-01-20 16:15:38 +00:00
Gaki
bdd7b2caa9 chore: typo fix (#13461)
* messsage

* cancellation
2024-01-20 01:07:17 +00:00
terence
8de0e3804b Update Sepolia Deneb fork epoch (#13491) 2024-01-19 18:47:07 +00:00
Ying Quan Tan
bfb648067b Re-enable Slasher E2E Test (#13420)
* re-enable e2e slashing test #12415

* refactored slashing evaluator

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-19 04:44:27 +00:00
terence
852db1f3eb Remove debug setting highest slot log (#13488) 2024-01-19 04:25:15 +00:00
Nishant Das
5d3663ef8d update lighthouse and tests (#13470) 2024-01-19 03:46:36 +00:00
Radosław Kapka
a608630727 Add Inactivity field to attestation rewards (#13382) 2024-01-18 18:51:35 +00:00
Mario Vega
37739b4193 fix blobsidecar json tag for commitment inclusion proof (#13475)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-18 17:43:43 +00:00
james-prysm
4d2067dbae bugfix: ssz post-requests should check content type not accept (#13482)
* updating post requests that accept ssz to check content type instead of accept header

* radek's review comments to make things more clear
2024-01-18 17:41:31 +00:00
Nishant Das
fc05e306dd Allow Pcli to Run State Transitions Easily (#13484)
* add all this in

* gaz

* add flag
2024-01-18 14:44:06 +00:00
Radosław Kapka
204de13c86 REST VC: Subscribe to Beacon API events (#13453)
* Revert "Revert "REST VC: Subscribe to Beacon API events  (#13354)" (#13428)"

This reverts commit 8d092a1113.

* change logic

* review

* test fix

* fix critical error

* merge flag check

* change error msg

* return on errors

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-18 14:27:41 +00:00
terence
f3ef1b64d6 Enhance block by root log (#13472) 2024-01-18 13:43:10 +00:00
terence
c3dbfa66d0 Change blob latency metrics to ms (#13481) 2024-01-17 23:28:42 +00:00
terence
93aba997f4 Move checking of attribute empty earlier (#13465) 2024-01-17 18:42:56 +00:00
Potuz
79bb7efbf8 Check init sync before getting payload attributes (#13479)
* Check init sync before getting payload attributes

This PR adds a helper to forkchoice to return the delay of the latest
imported block. It also adds a helper with a heuristic to check if the
node is in init sync. If the highest imported node was imported with
a delay of less than an epoch, then the node is considered in regular
sync. If, on the other hand, the highest imported node is additionally
more than two epochs old, then the node is considered in init sync.

The helper to check this only uses forkchoice and therefore requires a
read lock. There are four paths that call this:

1) During regular block processing, we defer a function to send the
   second FCU call with attributes. This function may not be called at
   all if we are not regularly syncing.
2) During regular block processing, we check the payload attributes in
   the path `postBlockProcess->getFCUArgs->computePayloadAttributes`
   when we are syncing a late block. In this case forkchoice is already
   locked, and we add a call in `getFCUArgs` to return early if not
   regularly syncing.
3) During handling of late blocks in `lateBlockTasks` we simply return
   early if not in regular sync (this is the biggest change, as it takes
   a longer FC lock for lateBlockTasks).
4) On attestation processing, in UpdateHead, we are already locked, so we
   just add a check to not update head on this path if not regularly
   syncing.

* fix build

* Fix mocks
2024-01-17 15:39:28 +00:00
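As a rough illustration of the heuristic described in commit 79bb7efbf8 above, here is a hedged Go sketch. The constants (12-second slots, 32-slot epochs) and the `isRegularSync`/`isInitSync` helper names are assumptions for illustration only; Prysm's actual helpers live in the forkchoice package and take their inputs from forkchoice state.

```go
package main

import (
	"fmt"
	"time"
)

// Assumed mainnet-style constants: 12-second slots, 32-slot epochs.
const (
	secondsPerSlot = 12
	slotsPerEpoch  = 32
)

// isRegularSync: the highest imported block arrived with less than an epoch of delay.
func isRegularSync(headDelay time.Duration) bool {
	return headDelay < secondsPerSlot*slotsPerEpoch*time.Second
}

// isInitSync: not regularly syncing, and the highest imported block's slot is
// more than two epochs in the past.
func isInitSync(headDelay, headAge time.Duration) bool {
	return !isRegularSync(headDelay) && headAge > 2*secondsPerSlot*slotsPerEpoch*time.Second
}

func main() {
	// Example: head arrived 10 minutes after its slot started and is 3 epochs old.
	delay := 10 * time.Minute
	age := 3 * secondsPerSlot * slotsPerEpoch * time.Second
	fmt.Println(isRegularSync(delay), isInitSync(delay, age)) // false true
}
```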
terence
87b53db3b4 Capitalize Aggregated Unaggregated Attestations Log (#13473) 2024-01-17 13:30:31 +00:00
terence
fe431b9201 Use correct HistoricalRoots (#13477) 2024-01-17 08:14:32 +00:00
james-prysm
790a09f9b1 Improve wait for activation (#13448)
* removing timeout on wait for activation, instead switched to an event driven approach

* fixing unit tests

* linting

* simplifying return

* adding sleep for the remaining slot to avoid cpu spikes

* removing if statement on log

* removing if statement on log

* improving switch statement

* removing the loop entirely

* fixing unit test

* fixing manu's reported issue with deletion of json file

* missed change around writefile at path

* gofmt

* fixing deepsource issue with reading file

* trying to clean file to avoid deepsource issue

* still getting error trying a different approach

* fixing stream loop

* fixing unit test

* Update validator/keymanager/local/keymanager.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-01-16 17:04:54 +00:00
Manu NALEPA
46387a903a getLegacyDatabaseLocation: Change message. (#13471)
* `getLegacyDatabaseLocation`: Change message.

* Update validator/node/node.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-16 11:29:36 +00:00
Nishant Das
6a65e07684 Add Spans to Core Validator Methods (#13467)
* add traces

* gaz
2024-01-16 07:52:46 +00:00
Potuz
abef94d7ad do not check optimistic status if cached attestation (#13462)
* do not check optimistic status if cached attestation

* Gazelle

* Gazelle again

* fix nil panics

* more nil checks

* more nil checks

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-15 18:50:33 +00:00
Manu NALEPA
99a8d0bac6 Validator client - Improve readability - NO FUNCTIONAL CHANGE (#13468)
* Improve `NewServiceRegistry` documentation.

* Improve `README.md`.

* Improve readability of `registerValidatorService`.

* Move `log` in `main.go`.

Since `log` is only used in `main.go`.

* Clean Tos.

* `DefaultDataDir`: Use `switch` instead of `if/elif`.

* `ReadPassword`: Remove unused receiver.

* `validator/main.go`: Clean.

* `WarnIfPlatformNotSupported`: Add Mac OSX ARM64.

* `runner.go`: Use idiomatic `err` handling.

* `waitForChainStart`: Avoid `chainStartResponse` mutation.

* `WaitForChainStart`: Reduce cognitive complexity.

* Logs: `powchain` ==> `execution`.
2024-01-15 14:46:54 +00:00
Preston Van Loon
b585ff77f5 Fix port logging in bootnode (#13457) 2024-01-15 04:38:22 +00:00
Nishant Das
1ff5a43385 Add the Ability to Defragment the Beacon State (#13444)
* Defragment head state

* change log level

* change it to be more efficient

* add flag

* add tests and clean up

* fix it

* gosimple

* Update container/multi-value-slice/multi_value_slice.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's review

* unlock it

* remove from fc lock

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2024-01-13 05:44:02 +00:00
dependabot[bot]
0cfbddc980 Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4 (#13445)
* Bump github.com/quic-go/quic-go from 0.39.3 to 0.39.4

Bumps [github.com/quic-go/quic-go](https://github.com/quic-go/quic-go) from 0.39.3 to 0.39.4.
- [Release notes](https://github.com/quic-go/quic-go/releases)
- [Changelog](https://github.com/quic-go/quic-go/blob/master/Changelog.md)
- [Commits](https://github.com/quic-go/quic-go/compare/v0.39.3...v0.39.4)

---
updated-dependencies:
- dependency-name: github.com/quic-go/quic-go
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* Ran gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-12 18:12:19 +00:00
Manu NALEPA
22a484c45e Fixes issues when running validator client with the --web flag and non existing validator.db file AND/OR prysm-wallet-v2 directory. (#13460)
* `getLegacyDatabaseLocation`: Add tests.

* `getLegacyDatabaseLocation`: Handle `c.wallet == nil`.

* `saveAuthToken`: Create parent directory if needed.
2024-01-12 15:53:27 +00:00
terence
6ddafe1159 Delete invalid blob at block processing (#13456)
* Delete invalid blob at block processing

* Fix test
2024-01-12 08:09:45 +00:00
qinlz2
b8c5af665f [3/5] light client events (#13225)
* add http streaming light client events

* expose ForkChoiceStore

* return error in insertFinalizedDeposits

* send light client updates

* Revert "return error in insertFinalizedDeposits"

This reverts commit f7068663b8c8b3a3bf45950d5258011a5e4d803e.

* fix: lint

* fix: patch the wrong error response

* refactor: rename the JSON structs

* fix: LC finalized stream return correct format

* fix: LC op stream return correct JSON format

* fix: omit nil JSON fields

* chore: gazzle

* fix: make update by range return list directly based on spec

* chore: remove unnecessary json annotations

* chore: adjust comments

* feat: introduce EnableLightClientEvents feature flag

* feat: use enable-lightclient-events flag

* chore: more logging details

* chore: fix rebase errors

* chore: adjust data structure to save mem

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* refactor: rename config EnableLightClient

* refactor: rename feature flag

* refactor: move helper functions to helper pkg

* test: fix broken unit tests

---------

Co-authored-by: Nicolás Pernas Maradei <nicolas@polymerlabs.org>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-11 18:38:59 +00:00
Radosław Kapka
2875ce6ee1 Use a single rest handler (#13446) 2024-01-11 16:03:35 +00:00
Manu NALEPA
a883ae2a76 BN: Move --db-backup-output-dir as a deprecated flag. (#13450) 2024-01-11 14:11:36 +00:00
Preston Van Loon
3a2b486bde Bazel 7.0.0 (#13321) 2024-01-10 15:34:11 +00:00
terence
283e09569d Remove old blob types (#13438)
* Remove old types

* Gen

* Remove old types

* Gen

* Fix lint

* Rm unused key

* Kasey's comment
2024-01-10 09:38:06 +00:00
Preston Van Loon
69723b4a77 Update go to 1.21.6 (#13440) 2024-01-10 09:37:40 +00:00
psr
4fe6834ba5 http endpoint cleanup (#13432)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-09 23:48:43 +00:00
Preston Van Loon
98e3f2b80f sort static analyzers, add more, fix violations (#13441) 2024-01-09 23:29:36 +00:00
Enrico Del Fante
2aef7a3ec5 Update teku's bootnode (#13437) 2024-01-09 22:28:41 +00:00
Brandon Liu
c41a54be9d fix metric for exited validator (#13379)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 22:15:53 +00:00
Justin Traglia
7e65378f63 Check sidecar index in BlobSidecarsByRoot response (#13180)
* Check sidecar index in BlobSidecarsByRoot response

* Remove unnecessary MaxBlobsPerBlock check
2024-01-09 22:14:56 +00:00
Justin Traglia
cf606e3766 Only process blocks which haven't been processed (#13442)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-09 22:14:03 +00:00
Justin Traglia
703cfc5819 Initialize exec payload fields and enforce order (#13372)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-09 21:49:35 +00:00
GoodDaisy
c6ebe157a6 Fix typos (#13435) 2024-01-09 21:03:36 +00:00
Preston Van Loon
a3cc81a048 Add nil check for head in IsOptimistic (#13439) 2024-01-09 19:40:26 +00:00
Nishant Das
75bbeb61cc Add Detailed Multi Value Metrics (#13429)
* add it

* pingo

* gaz

* remove pingo

* fix for old forks
2024-01-09 05:16:03 +00:00
kasey
5cea6bebb8 minimize syscalls in pruning routine (#13425)
* minimize syscalls in pruning routine

* terence feedback

* Update beacon-chain/db/filesystem/pruner.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* pr feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>
2024-01-08 22:31:16 +00:00
Potuz
28596d669b Use proposer index cache for blob verification (#13423)
* Use proposer index cache for blob verification

* add unit test

* Fix test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-07 03:24:07 +00:00
kasey
0e043d55b4 VerifiedROBlobs in initial-sync (#13351)
* Use VerifiedROBlobs in initial-sync

* Update beacon-chain/das/cache.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* Apply suggestions from code review

comment fixes

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* fix lint error from gh web ui

* deepsource fixes

* more deepsource

* fix init wiring

* mark blobless blocks verified in batch mode

* move sig check after parent checks

* validate block commitment length at start of da check

* remove vestigial locking

* rm more copy-locksta

* rm old comment

* fail the entire batch if any sidecar fails

* lint

* skip redundant checks, fix len check

* assume sig and proposer checks passed for block

* inherits most checks from processed block

* Assume block processing handles most checks

* lint

* cleanup unused call and gaz

* more detailed logging for e2e

* fix bad refactor breaking non-finalized init-sync

* self-review cleanup

* gaz

* Update beacon-chain/verification/blob.go

Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>

* terence and justin feedback

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Justin Traglia <95511699+jtraglia@users.noreply.github.com>
2024-01-06 23:47:09 +00:00
Radosław Kapka
8d092a1113 Revert "REST VC: Subscribe to Beacon API events (#13354)" (#13428)
This reverts commit e68b2821c1.
2024-01-06 21:36:42 +00:00
kasey
073c4edc5f use ROForkchoice in blob verifier (#13426)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-06 19:39:03 +00:00
terence
d055db1c31 Unlock forkchoice store if attribute is empty (#13427)
* Unlock forkchoice store if attribute is empty

* Better version
2024-01-06 07:32:56 +00:00
Nishant Das
a974627258 Make Aggregating In Parallel The Permanent Default (#13407)
* make it the permanent default

* gaz
2024-01-06 07:29:06 +00:00
Potuz
67dccc5e43 Break out several helpers from postBlockProcess (#13419)
* Break out several helpers from `postBlockProcess`

In addition fix a bug found by @terencechain where we should use a slot
context instead of the parent context in the second FCU call.

* Remove calls for tracked proposer

getPayloadAttribute already takes care of this
Also compute correctly the time into voting window

* call with attributes only when incoming block is canonical

* check for empty payload instead of only nil

* add unit tests

* move log for non-canonical block

* return early if the incoming block does not change head

* Pass fcuArgs as arguments

* lint
2024-01-06 02:29:07 +00:00
terence
ff06e08274 Prune dangling blob (#13424)
* Prune dangling blob

* Fix test

* Kasey's feedback

* Preston's feedback

* Use warning, fix test
2024-01-05 22:29:57 +00:00
james-prysm
d3d25e3ae5 proposer and attester slashing sse (#13414)
* wip

* adding in event notifiers for slashing events

* fixing tests
2024-01-05 15:27:50 +00:00
Nishant Das
929e9ddf4c enable it (#13421) 2024-01-05 05:23:22 +00:00
Nishant Das
7c0e79d432 Make New Engine Methods The Permanent Default (#13406)
* make them the default

* gaz

* fix tests
2024-01-05 04:38:04 +00:00
terence
3c1c0b3c00 Update blob pruning log (#13417) 2024-01-04 18:02:19 +00:00
james-prysm
d439e6da74 adding builder boost factor to get block v3 (#13409)
* adding builder boost factor to functions

* gaz

* fixing linting

* fixing unit tests

* gaz

* addressing review comments

* fixing tests

* addressing review feedback

* gaz

* changing log based on review
2024-01-04 17:25:18 +00:00
Radosław Kapka
e68b2821c1 REST VC: Subscribe to Beacon API events (#13354)
* Initial code for head event streaming

* handle events and error

* keepalive event

* tests

* generate new mock

* remove single case select

* cleanup

* explain eventByteLimit

* use 2 channels in test

* review

* more review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-01-04 17:14:45 +00:00
Potuz
cfef8f4676 Don't hardcode 4 seconds in forkchoice (#13416) 2024-01-04 16:49:16 +00:00
terence
9709412511 Use Afero Walk for Pruning Blob (#13410)
* Use Afero walk

* Return err

* Use wrap

* More err to the end

* Fix loop
2024-01-04 16:41:00 +00:00
james-prysm
7781eb60f4 Add rpc trigger for blob sidecar event (#13411)
* adding missed rpc trigger for blob sidecar event

* fixing unit tests

* moving event feed to after receive blob call to prioritize db
2024-01-04 14:22:24 +00:00
Potuz
396b8bf970 Simplify fcu 4 (#13403)
* send two FCU when proposing

* compute voting window at runtime
2024-01-04 13:43:57 +00:00
Nishant Das
d5107942a1 update it (#13415) 2024-01-04 11:22:23 +00:00
terence
bd4a520013 Initialize blob storage without pruning (#13412) 2024-01-04 05:56:38 +00:00
Sammy Rosso
a0ff1351a0 Fix batch pruning errors (#13355)
* Add compareAndSwap

* Update lastPrunedEpoch before prune

* Fix and test

* Remove debug log

* Kasey's review

* Fix tests

* Address Kasey's comments

* Fix prune before slot

* Rename

* Fix bad test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-01-03 20:52:07 +00:00
Nishant Das
7e6fd5fd8b Make Reorging Of Late Blocks The Permanent Default (#13405)
* make it the permanent default

* gaz

* fix merge conflicts
2024-01-03 14:46:58 +00:00
Nishant Das
d984210baa fix it (#13404) 2024-01-03 13:54:18 +00:00
Potuz
31c72672d7 Remove the getPayloadAttribute call from updateForkchoiceWithExecution (#13402)
* Remove the getPayloadAttribute call from updateForkchoiceWithExecution

* Move log
2024-01-03 12:43:40 +00:00
Potuz
8c1e180dd1 Simplify fcu 2 (#13400)
* change getPayloadAttribute signature

* Unify different FCU arguments
2024-01-02 22:45:55 +00:00
Manu NALEPA
886d76fe7c Refactor validator client help. (#13401)
* Define `cli.App` without mutation.

No functional change.

* `usage.go`:  Clean `appHelpTemplate`.

No functional change is added.
Modifications consist in adding prefix/suffix `-` to improve readability of
the template without adding new lines in template inference.

We now see some inconsistencies of the template:
- `if .App.Version` is around the `AUTHOR` section.
- `if .App.Copyright` is around both `COPYRIGHT` and `VERSION` sections.
- `if len .App.Authors` is around nothing.

* `usage.go`: Surround version and author correctly.

* `usage.go`: `AUTHOR` ==> `AUTHORS`

* `usage.go`: `GLOBAL` --> `global`.

* `--grpc-max-msg-size`: Remove double default.

* VC: Standardize help message.

- Flags help begin with a capital letter and end with a period.
- If a flag help begins with a verb, it is conjugated.
- Experimental, danger, etc. mentions are placed between parentheses.

* VC help message: Wrap too long lines.
2024-01-02 18:02:28 +00:00
Potuz
a602acf492 Remove getPayloadAttributes from FCU call (#13399) 2024-01-02 17:37:18 +00:00
terence
1b6547de6a Add Goerli Deneb Fork Epoch (#13390)
* Add deneb fork epoch

* Fix test
2024-01-02 15:31:57 +00:00
Nishant Das
88685bb3bd Fix Up Builder Evaluator (#13395)
* fix it up

* fix evaluator

* fix evaluator again

* fix it

* gaz
2024-01-02 10:40:26 +00:00
Nishant Das
2319b7d4bd increase params (#13398) 2024-01-02 10:19:59 +00:00
Manu NALEPA
82b2840d68 --validatorS-registration-batch-size (add s) (#13396) 2024-01-02 09:52:14 +00:00
Manu NALEPA
cf221d0f4c Validator client: Always use the --datadir value. (#13392)
Fix https://github.com/prysmaticlabs/prysm/issues/13391
2024-01-02 09:24:24 +00:00
Preston Van Loon
0956e3a657 Update libp2p/go-libp2p-asn-util to v0.4.1 (#13370)
* Update go-libp2p-asn-util to v0.4.1

* fix go mod

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
2024-01-02 07:30:50 +00:00
terence
351ed1c511 Check kzg commitment count from builder (#13394) 2024-01-02 06:50:23 +00:00
Potuz
9809f5ac77 Simplify fcu 1 (#13387)
* Remove unsafe proposer indices cache

* Simplify FCU #1

This PR starts the process of gradually simplifying FCU
It removes the responsibility of getting the state and block from this
function and informing if head has changed. It is only called when the
imported block has actually become head.

* Add a call to FCU in edge cases
2023-12-30 12:20:20 +00:00
Potuz
cff5e2b5fe Remove unsafe proposer indices cache (#13385) 2023-12-30 12:20:02 +00:00
terence
dd15f9e0cc Rewrite ProposeBlock endpoint (#13380)
* Init

* Tests

* Init

* Tests

* Radek's feedback

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* More Radek's feedback

* Potuz feedback

* Use inline copy

* Fix conflict

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-29 23:32:58 +00:00
terence
1c9ded4684 Remove blind field from block type (#13389)
* Init

* Init

* Fix tests
2023-12-29 21:28:19 +00:00
Potuz
d4cc6fcf4a update shuffling caches before calling FCU on epoch boundaries (#13383)
* update shuffling caches before calling FCU on epoch boundaries

* Terence's review
2023-12-28 15:19:09 +00:00
Radosław Kapka
49c16f1a71 Return SignedBeaconBlock from ReadOnlySignedBeaconBlock.Copy (#13386) 2023-12-28 08:55:45 +00:00
terence
e70b606e78 Replace validator count with validator indices in update fee recipient log (#13384)
* Add validator count to updated fee recipient address log

* Add validator count to updated fee recipient address log

* Replace
2023-12-27 16:46:15 +00:00
Potuz
0e8b37c317 Log value of local payload when proposing (#13381) 2023-12-27 14:43:32 +00:00
Potuz
e80db9554d Use advanced epoch cache when preparing proposals (#13377) 2023-12-27 12:42:51 +00:00
Radosław Kapka
d0bf03e863 Simplify error handling for JsonRestHandler (#13369)
* Simplify error handling for `JsonRestHandler`

* POST

* reduce complexity

* review feedback

* uncomment route

* fix rest of tests
2023-12-22 22:39:20 +00:00
Potuz
b7e0819f00 refactor Payload Id caches (#12987)
* init

- getLocalPayload does not use the proposer ID from the cache but takes
  it from the block

- Fixed tests in blockchain package
- Fixed tests in the RPC package
- Fixed spectests

EpochProposers takes 256 bytes whose copying could be avoided, but it
is not clear that this optimization is worth it.

assginmentStatus can be optimized to use the cached version from the
TrackedValidatorsCache

We shouldn't cache the proposer duties when calling getDuties, but
rather when we update the epoch boundary

* track validators on prepare proposers

* more rpc tests

* more rpc tests

* initialize grpc caches

* Add back fcu log

Also fix two existing bugs wrong parent hash on pre Capella and wrong
blockhashes on altair

* use beacon default fee recipient if there is none in the vc

* fix validator test

* radek's review

* always push proposer settings even if no flag is specified in the VC

* Only register with the builder if the VC flag is set

Great find by @terencechain

* add regression test

* Radek's review

* change signature of registration builder
2023-12-22 18:47:51 +00:00
Radosław Kapka
7d64104003 block publishing (#13376) 2023-12-22 18:15:00 +00:00
Nishant Das
b1e8a9ea3d fix it with regression (#13375) 2023-12-22 12:33:23 +00:00
Radosław Kapka
cc1028ca3c Use deneb key for deneb state in saveStatesEfficientInternal (#13374)
* Use deneb key for deneb state in saveStatesEfficientInternal

* move reset out of inner loop
2023-12-21 18:14:04 +00:00
Nishant Das
233f4d99a2 Update Libp2p To v0.32.1 and Go to v1.21.5 (#13304)
* update libp2p

* fix tests

* fix tests

* fix build

* update to go v1.21

* workflow

* workflow again

* update ci

* update golangci

* disable quic
2023-12-21 16:09:54 +00:00
terence
a068f3877e Use block value correctly when proposing a block (#13368)
* Use block value correctly

* Fix the function
2023-12-21 03:00:34 +00:00
james-prysm
856907d760 Small encoding fixes on logs and http error code change (#13345)
* fixing some bad encodings

* changing http error to align with other clients

* fixing unit test
2023-12-20 18:18:55 +00:00
Sammy Rosso
c6801df05a Fix total pruned metric + add to logging (#13367) 2023-12-19 16:15:01 +00:00
SQL TRIGGER
bc7b15b04e typo fix (#13357) 2023-12-19 16:03:40 +00:00
Nishant Das
eb713d1177 Refactor Network Config Into Main Config (#13364)
* change parameters to main config

* add more changes

* change to accepted format

* fix changes in config

* gaz

* fix test

* fix test again
2023-12-19 14:59:30 +00:00
Preston Van Loon
844b2c6602 Add error wrapping to blob initialization errors (#13366) 2023-12-19 14:55:26 +00:00
Potuz
9efaa832cd use different keys for the proposer indices cache (#13272)
* use different keys for the proposer indices cache

* Add a way to get the proposer indices from a checkpoint

* fix fuzzing tests

* use htr instead of body root

* move comment
2023-12-19 13:14:55 +00:00
Radosław Kapka
e9d26c61d7 Do not skip mev boost in v3 block production endpoint (#13365) 2023-12-19 12:46:17 +00:00
Sammy Rosso
374d77f437 Blob filesystem metrics (#13316)
* Add metrics

* Replace counter with gauge

* Preston's comments

* Remove hardcoded number

* Count blob files

* Fix count order

* Fixes

* Cleanup

* Add blob bucket

* Update beacon-chain/node/node.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Rename

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-12-18 18:24:07 +00:00
Justin Traglia
1f6d1d1852 For golangci-lint, enable all by default (#13353)
* For golangci-lint, enable all by default

* Use latest golangci-lint here too

* Use v1.55.2 instead of latest

* Remove usestdlibvars from list

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2023-12-18 18:20:55 +00:00
terence
0eff83cb9d Use a cache of one entry to build attestation (#13300)
* Use a cache of one entry to build attestation

* Gazelle

* Enforce on RPC side

* Rm unused var

* Potuz feedback, dont use pointer

* Fix tests

* Init fetcher

* Add in-progress

* Add back missing lock

* Potuz feedback

* Update beacon-chain/rpc/prysm/v1alpha1/validator/attester_test.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2023-12-18 16:12:43 +00:00
Justin Traglia
ffe2f6b732 Enable mirror linter and fix findings (#13342)
* Enable mirror linter and fix findings

* Use latest version of golangci-lint

* Use v1.55.2 instead of latest

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-18 12:16:26 +00:00
terence
d57bca97a5 Check builder header kzg commitment (#13358) 2023-12-18 06:14:59 +00:00
Nishant Das
b45a6664be Enable Deneb For E2E Scenario Tests (#13317)
* fix all cases

* update web3signer

* current progress

* fix it finally

* push it back to capella

* remove hard-coded forks

* fix failing tests

* gaz

* fix dumb bug

* fix bad test setup

* change back
2023-12-16 11:37:44 +00:00
Justin Traglia
c56abfb840 Enable usestdlibvars linter and fix findings (#13339)
Co-authored-by: terence <terence@prysmaticlabs.com>
2023-12-15 19:21:54 +00:00
Preston Van Loon
d70f477b1e Fix docker image version strings in CI (#13356) 2023-12-15 19:15:51 +00:00
Preston Van Loon
db096488b0 fixing sa4006 (#13350) 2023-12-15 16:49:27 +00:00
Radosław Kapka
344e68b81b Use SkipMevBoost properly during block production (#13352)
* fix bugs

* tests

* name fix
2023-12-15 16:14:42 +00:00
Justin Traglia
1962cca69e Fix error string generation for missing commitments (#13338) 2023-12-15 04:03:45 +00:00
Justin Traglia
4a374435c0 Enable errname linter and fix findings (#13341) 2023-12-15 03:26:48 +00:00
David Theodore
0fde4a22e1 reordered blob validation (#13347) 2023-12-15 02:46:12 +00:00
terence
62ecc0d177 Add more color to sending blob by range req log (#13349) 2023-12-15 02:43:16 +00:00
Justin Traglia
97dfec84f6 Handle potential error from newBlockRangeBatcher (#13344) 2023-12-15 02:28:07 +00:00
terence
53bc96844e Move pruning log to after retention check (#13348) 2023-12-15 00:49:29 +00:00
terence
ddcf0c18dc Exclude DA wait time from chain processing time (#13335)
* Exclude DA wait time from chain processing time

* Rename
2023-12-14 22:46:48 +00:00
james-prysm
45a2746d0e Builder API: Fix max field check on toProto function (#13334)
* fixing field param used in ToProto function

* fixing test to pass

* making blobs empty in test
2023-12-14 03:03:00 +00:00
Preston Van Loon
09f3df309d Remove rules_docker, make multiarch images canonical (#13324)
* Remove rules_docker

* Update base image
2023-12-13 23:31:58 +00:00
Potuz
96df81d5c5 Hook to slot stream instead of block stream on the VC (#13327)
* Hook to slot stream instead of block stream on the VC

* Implement StreamSlots in the BN

* mock update

* fix tests

* don't return from stream

* Terence's review

* deepsource second complain

---------

Co-authored-by: rkapka <rkapka@wp.pl>
2023-12-13 23:13:56 +00:00
terence
c47c52152b Enhance Pruning Logs (#13331)
* Log pruning info

* Added start log

* Log pruning info

* Added start log
2023-12-13 19:24:47 +00:00
james-prysm
4cbe144a6c CLI: fixing account import ux bugs (#13328)
* fixing account import checking wallet twice, and adding sub folder search with a depth of 2

* removing unneeded check

* fixing unit test

* adding reset cache to fix potential flake

* improving test based on feedback
2023-12-13 17:11:32 +00:00
Justin Traglia
52b9b65adb Add sanity checks for bundle from builder (#13319)
* Add sanity checks for bundle from builder

* Add more checks to BlobsBundle.ToProto()

* Fix minor typo

* Fix tests & add new ones

* Add tests for ToProto

* Add "not" to error message

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-12-13 15:54:00 +00:00
Radosław Kapka
ea59b1ec71 Increase buffer of events channel (#13329) 2023-12-13 15:37:45 +00:00
Radosław Kapka
175c484c44 Uncomment e2e flakiness (#13326) 2023-12-13 12:50:13 +00:00
Nishant Das
8aaab86987 fix it (#13325) 2023-12-13 11:01:01 +00:00
Preston Van Loon
381116a3e8 Fix missing testnet versions. Issue #13288 (#13323) 2023-12-12 21:44:14 +00:00
Sammy Rosso
3d61fd0436 Blob filesystem add pruning during blob write (#13275)
* Add prune during write

* Fix merge errors

* Add test

* Add test timeout

* Gaz

* Check prune at midpoint

* Fix slot number

* More checks
2023-12-12 21:27:15 +00:00
james-prysm
b19d24c581 Remove signed block requirement from no-verify functions (#13314)
* removing fake wrappers

* fixing conficts and missed tests

* fixing more conflicts

* addressing missed unit test

* fixing nogo error

* fixing more unit tests

* fixing more tests
2023-12-12 20:18:40 +00:00
Radosław Kapka
8387088a52 Handle HTTP 404 Not Found in SubmitAggregateAndProof (#13320)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-12-12 19:46:19 +00:00
Potuz
ce7452c97a update spectests to 1.4.0-beta.5 (#13318)
* update spectests to 1.4.0-beta.5

* add spec config
2023-12-12 18:27:48 +00:00
james-prysm
5e56b5fdd7 Beacon APIs: re enabling blob events (#13315)
* re enabling blob events

* terence's comments

* Update beacon-chain/rpc/eth/events/events_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-12 15:58:11 +00:00
Nishant Das
bfaba378f6 activate deneb (#13311) 2023-12-12 08:04:55 +00:00
Sammy Rosso
3bd116db16 Blob filesystem add pruning at startup (#13253)
* Add Save blob and tests

* Remove locks

* Remove test cleanup

* Fix go mod

* Cleanup

* Add checksum

* Add file hashing to fileutil

* Move test

* Check data when exists

* Add one more test

* Rename

* Gaz

* Add packaged level comment

* Fix block proposals in the REST validator client (#13116)

* Fix block proposals in the REST validator client

* fix graffiti test

* return empty graffiti

* fallback to old endpoints

* logs

* handle 404

* everything passes

* review from James

* log undecoded value

* test fixes and additions

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* fix head slot in log (#13139)

* zig: Update zig to recent main branch commit (#13142)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Fix Pending Queue Deadline Bug (#13145)

* rearrange deadline

* naming

* Add pruning

* Gaz

* Gaz

* Update pruning

* Cleanup

* Making a mess

* Benchmarking

* Forgot to add the file + fixes

* Fixes

* Pruning from DB fixed

* Add prune by file data

* Fix pruning

* Prune fixes

* Cleanup db blockRoot filter

* Handle file close error

* Fix deletion

* Change read at + remove retentionEpich from bs

* Gaz

* Separate logic + add detailed comments

* Add tests

* Add retention slot when creating blobStorage

* Fix tests

* Gaz

* Fix testonly import

* Add pruning at startup

* Add nil check

* Fix merge errors

* Fix test

* Fix test

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-12-11 17:08:52 +00:00
terence
7d2ddaee43 Test improvement TestValidateVoluntaryExit_ValidExit (#13313) 2023-12-11 08:01:56 +00:00
terence
122a7782ff Initialize blob storage for initial sync service (#13312) 2023-12-11 07:52:07 +00:00
terence
9b1b6f9be6 Use verified blob for gossip checks (#13294)
* Use blob verifier for gossip rules

* Fixing tests

* Fix lint

* Mocks

* Trying Kasey's rec

* mock verifier init workaround

* Add more tests

* Reset deneb epoch for exit test

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-12-11 00:37:45 +00:00
Delweng
0eb08a4f96 beacon-chain/rpc: use BalanceAtIndex instead of Balances to reduce memory copy (#13279)
* beacon-chain/rpc: use BalanceAtIndex instead of Balances

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/rpc: stream use BalanceAtIndex is sufficient

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/rpc: fix commit review

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/rpc: http2 -> httputil

Signed-off-by: jsvisa <delweng@gmail.com>

---------

Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-10 00:04:45 +00:00
Radosław Kapka
bdfa06ed65 Simplify post-evaluation in Beacon API evaluator (#13309)
* Simplify post-evaluation in Beacon API evaluator

* revert slottime

* remove unused error
2023-12-09 07:54:31 +00:00
Manu NALEPA
a94f2b93e3 filterAndCacheActiveKeys: Stop filtering out exiting validators (#13305)
* `filterAndCacheActiveKeys`: Add test cases

- Validator is in unknown status (to be filtered out)
- Validator is in pending status,
   with activation period > current period (to be filtered out)
- Validator is in pending status,
   with activation period == current period (to be kept)

* `filterAndCacheActiveKeys`: Keep exiting keys

Initially:
-------
If a validator is in the exiting state (i.e. status == EXITING, not ACTIVE),
it will be filtered out by the `filterAndCacheActiveKeys` function.
The validator won't be registered with the beacon node.

If this exiting validator has to propose a block:
- the block will be proposed using local block building only.
- the fee recipient will be the one set in the beacon node.

(Additionally, if the Prysm beacon node is run without any
fee recipient defined at the beacon node level, the fee recipient
will default to the `0x00000...` burn address.)

This commit modifies the `filterAndCacheActiveKeys` function
so that it no longer filters out exiting validators.
2023-12-09 07:53:08 +00:00
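A minimal sketch of the filtering behaviour described in commit a94f2b93e3 above, assuming a simplified status enum and a hypothetical `filterActiveKeys` helper; Prysm's real `filterAndCacheActiveKeys` also keeps pending validators whose activation period matches the current period and caches the result.

```go
package main

import "fmt"

// ValidatorStatus is a simplified stand-in for the beacon API status enum.
type ValidatorStatus int

const (
	Unknown ValidatorStatus = iota
	Pending
	Active
	Exiting
	Exited
)

// filterActiveKeys keeps keys that are active OR exiting, so exiting
// validators are still registered with the beacon node, and drops the rest.
func filterActiveKeys(statuses map[string]ValidatorStatus) []string {
	var kept []string
	for pubkey, status := range statuses {
		if status == Active || status == Exiting {
			kept = append(kept, pubkey)
		}
	}
	return kept
}

func main() {
	statuses := map[string]ValidatorStatus{
		"0xaaa": Active,
		"0xbbb": Exiting, // kept after this change; previously filtered out
		"0xccc": Exited,
		"0xddd": Unknown,
	}
	fmt.Println(len(filterActiveKeys(statuses))) // prints 2
}
```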
Radosław Kapka
4c47756aed HTTP endpoints cleanup (#13251)
* remove validation package

* structs cleanup

* merge with apimiddleware removal

* more validation and Bls capitalization

* builder test fix

* use strconv for uint->str conversions

* use DecodeHexWithLength

* use exact param names

* rename http package to httputil

* change conversions to fmt.Sprintf

* handle query params and route variables

* spans and receiver name

* split structs, move bytes helper

* missing ok check

* fix reference to indexed failure

* errors fixup

* add godoc to helper

* fix BLS casing and chainhead ref

* review

* fix import in tests

* gzl
2023-12-08 20:37:20 +00:00
Nishant Das
440841d565 only run it in the middle of an epoch (#13303) 2023-12-08 15:14:01 +00:00
Preston Van Loon
ff99616833 Fix staticcheck violations (#13301)
* Fix violations of sa2002

* Fix violations of sa4005

* Fix violations of sa4010

* Fix violations for sa4023

* Comment on commented static checks
2023-12-08 13:07:52 +00:00
Preston Van Loon
f537a98fcd Add staticchecks to bazel builds (#13298)
* Update staticcheck to latest

* Add static checks while ignoring for third party / external stuff

* Added a hack to keep go mod happy.

* disable SA2002

* Pin go mod tidy checker image to golang:1.20-alpine
2023-12-08 05:42:55 +00:00
Radosław Kapka
cee38660c7 Gracefully handle unknown validator index in the REST VC (#13296)
* Gracefully handle unknown validator index in the REST VC

* add apostrophes
2023-12-08 04:30:50 +00:00
james-prysm
481d77bfde APIs: reusing grpc cors middleware for rest (#13284)
* reusing grpc cors middleware for rest

* addressing radek's comments

* Update api/server/middleware.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* fixing to recommended name

* fixing naming

* fixing rename on test

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-08 04:24:18 +00:00
Nishant Das
590317553c Support New Subnet Backbone (#13179)
* add in changes

* fix it up

* fix test

* gaz

* lint

* add back

* fix tests

* fix it

* fix tests

* add lib

* fix it
2023-12-08 04:07:48 +00:00
terence
68b7d1009e Update README.md (#13302) 2023-12-08 04:07:10 +00:00
james-prysm
b5b8825cc8 Beacon API: fix get blob returns 500 instead of empty (#13297)
* fix blob api, should return empty if no indices were found

* fixing small bug with slice
2023-12-07 22:33:26 +00:00
Justin Traglia
382b8b23c2 Ensure partial blob is deleted if there's an error (#13292)
* Ensure partial blob is deleted if there's an error

* Add debug log if file is removed
2023-12-07 20:52:16 +00:00
kasey
40a3ebab91 initialize sig cache for verification.Initializer (#13295)
* initialize sig cache for verification.Initializer

* gaz

* lint

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-12-07 20:14:01 +00:00
james-prysm
83af9a5694 Beacon API: update Deneb endpoints after removing blob signing (#13235)
* making needed changes to beacon API based on removal of blobsidecar from block contents

* fixing tests and reverting some changes to be addressed later

* fixing generated code from protos

* gaz

* fixing get blob handler and adding blob storage to the blob service

* updating unit tests

* WIP

* wip tests

* got tests passing but needs cleanup

* removing gomod and gosum changes

* fixing more tests

* fixing more tests

* fixing more tests

* gaz

* moving some proto types around

* removing unneeded unit test

* fixing proposer paths

* adding more tests

* fixing more tests

* improving more unit tests

* updating one blob only unit test

* changing arguments of buildBlobSidecar

* reverting a change based on feedback

* terence's review items

* fixing test based on new develop changes

* radek's comments

* addressed more comments from radek

* adding in blobs to test data

* fixing casing in test

* removing extra line

* fixing issue from bad merge

* Update beacon-chain/rpc/eth/beacon/handlers_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/beacon/handlers_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update beacon-chain/rpc/eth/blob/handlers.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* moving core getblob business logic to blocker based on radek's comment

* fixing mock blocker

* gaz

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-07 17:37:11 +00:00
Nishant Das
6a45323ab7 only run metrics for canonical blocks (#13289) 2023-12-07 11:03:23 +00:00
kasey
4008ea736f Verify roblobs (#13245)
* scaffolding for verification package

* WIP blob verification methods

* lock wrapper for safer forkchoice sharing

* more solid cache and verification designs; adding tests

* more test coverage, adding missing cache files

* clearer func name

* remove forkchoice borrower (it's in another PR)

* revert temporary interface experiment

* lint

* nishant feedback

* add comments with spec text to all verifications

* some comments on public methods

* invert confusing verification name

* deep source

* remove cache from ProposerCache + gaz

* more consistently early return on error paths

* messed up the test with the wrong config value

* terence naming feedback

* tests on BeginsAt

* lint

* deep source...

* name errors after failure, not expectation

* deep sooource

* check len()==0 instead of nil so empty lists work

* update test for EIP-7044

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-12-07 02:36:25 +00:00
Justin Traglia
4e4fb9ad52 Split blob pruning into two funcs (#13285) 2023-12-06 23:39:02 +00:00
kasey
737e0e0d3a Use functional options for --blob-retention-epochs (#13283)
* blob retention period functional opts

* missed unstaged change

* missed other init after cleardb

* fix ineffassign

* fix dup import

* config failsafe for tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2023-12-06 20:20:34 +00:00
Potuz
604c82626f Allow requests for old target roots (#13281) 2023-12-06 17:11:34 +00:00
Nishant Das
e1a3852f08 push up the defaults (#13278) 2023-12-06 16:46:46 +00:00
Preston Van Loon
a40cc40edf CI: Add merge queue events trigger for github workflows (#13282) 2023-12-06 16:13:13 +00:00
james-prysm
0e3c1d42f6 Beacon API: routes unit test (#13276)
* refactoring to add a routes unit test for rest handlers

* gaz

* updating names for functions

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-06 15:17:13 +00:00
Radosław Kapka
f41e603e5a Simplify Beacon API evaluator (#13265)
* Simplify Beacon API evaluator

* remove comment

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-06 14:54:33 +00:00
Radosław Kapka
28c3330375 Don't fetch duties for unknown keys (#13269)
* Don't fetch duties for unknown keys

* test
2023-12-06 14:18:45 +00:00
Nishant Das
cb465086e8 Fix Optimistic Sync Evaluator (#13262)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-06 13:41:47 +00:00
Nishant Das
cdd7958739 Fix Domain Data Caching (#13263)
* fix domain data caching

* fix locking of domain data

* preston's review
2023-12-06 12:04:19 +00:00
Nishant Das
97a522827b Bump Up Gossip Queue Size (#13277) 2023-12-06 09:49:26 +00:00
Preston Van Loon
b84b795f23 Relax file permissions check on existing directories (#13274) 2023-12-05 19:39:45 -06:00
Sammy Rosso
f40b8583f7 Blob filesystem: prune blobs (#13147)
* Add Save blob and tests

* Remove locks

* Remove test cleanup

* Fix go mod

* Cleanup

* Add checksum

* Add file hashing to fileutil

* Move test

* Check data when exists

* Add one more test

* Rename

* Gaz

* Add packaged level comment

* Fix block proposals in the REST validator client (#13116)

* Fix block proposals in the REST validator client

* fix graffiti test

* return empty graffiti

* fallback to old endpoints

* logs

* handle 404

* everything passes

* review from James

* log undecoded value

* test fixes and additions

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* fix head slot in log (#13139)

* zig: Update zig to recent main branch commit (#13142)

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* Fix Pending Queue Deadline Bug (#13145)

* rearrange deadline

* naming

* Add pruning

* Gaz

* Gaz

* Update pruning

* Cleanup

* Making a mess

* Benchmarking

* Forgot to add the file + fixes

* Fixes

* Pruning from DB fixed

* Add prune by file data

* Fix pruning

* Prune fixes

* Cleanup db blockRoot filter

* Handle file close error

* Fix deletion

* Change read at + remove retentionEpoch from bs

* Gaz

* Separate logic + add detailed comments

* Add tests

* Add retention slot when creating blobStorage

* Fix tests

* Gaz

* Fix testonly import

* Fix linter errors

* Fix retentionSlot calculation

* Move + use MaxEpochsToPersistBlobs

* Remove unused ctx

* Prestons suggestion

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Rename

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-12-05 21:07:34 +00:00
Preston Van Loon
a6b6a938de blobstorage: Improve mkdirall error (#13271) 2023-12-05 19:57:08 +00:00
Brandon Liu
c78d698d89 Add --jwt-id flag (#13218)
* add jwt-id flag

* optimize unit test for jwt-id

* Add jwt-id to help text

* gofmt

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-12-05 19:02:25 +00:00
Nishant Das
705e98e3c3 no need to hash it again (#13261) 2023-12-05 07:59:01 -03:00
kasey
ce2344301c forkchoice.Getter wrapper with locking wrappers (#13244)
* forkchoice.Getter wrapper with locking wrappers

* comments

* lint

* only expose fast fc getters

* potuz feedback re rlock

* update mocks for new fc method

* appease deepsource

* add missing exported func comment

* yeet errors to make the linter happy

* even more devious _discard

* rm TargetRoot

* derp

* handle nil error in _discard

* deep source

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-04 21:01:39 +00:00
Manu NALEPA
1112e01c06 Make Prysm VC compatible with the version v5.3.0 of the slashing protections interchange tests. (#13232)
* `TestStore_GenesisValidatorsRoot_ReadAndWrite`: Make all test cases independent.

In a test with multiple test cases, each test case should be independent.
(aka: removing test case `A` should not impact test case `B`)

* `SaveGenesisValidatorsRoot`: Allow overwriting the genesis validators root if the root is the same.

* `ProposalHistoryForSlot`: Add `signingRootExists`

Currently, it is not possible with `ProposalHistoryForSlot` to know if a
proposal is stored with a `0x00000....` signing root or with an empty
signing root. Both cases result in `proposalExists == true` and
`signingRoot == 0x00000`.

This commit adds a new return boolean: `signingRootExists`.

If a proposal has been saved with a `0x00000...` signing root, then:
- `proposalExists` is set to `true`, and
- `signingRootExists` is set to `true`, and
- `signingRoot` is set to `0x00000...`

If a proposal has been saved with an empty signing root, then:
- `proposalExists` is set to `true`, and
- `signingRootExists` is set to `false`, and
- (`signingRoot` is set to `0x00000...`)
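
A minimal sketch of how a caller can now tell these three cases apart (hypothetical helper and argument names, not the actual Prysm store API):

```go
package main

import "fmt"

// describeProposal is an illustrative helper, not Prysm code: it shows how the
// combination of proposalExists, signingRootExists and signingRoot described
// above lets a caller distinguish the three cases.
func describeProposal(proposalExists, signingRootExists bool, signingRoot [32]byte) string {
	switch {
	case !proposalExists:
		return "no proposal recorded for this slot"
	case !signingRootExists:
		return "proposal recorded with an empty (unknown) signing root"
	case signingRoot == ([32]byte{}):
		return "proposal recorded with a real all-zero signing root"
	default:
		return "proposal recorded with a concrete signing root"
	}
}

func main() {
	// e.g. a proposal imported from an interchange file without a signing root:
	fmt.Println(describeProposal(true, false, [32]byte{}))
}
```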

* `ImportStandardProtectionJSON`: When importing the EIP-3076 Slashing Protection Interchange Format, no longer filter out slashable keys.
Note: Those keys are still saved into the black-listed public keys list.

There are two reasons not to do so:
- The EIP-3076 test cases do not know about Prysm's internal black-listed public keys list.
  Tests will expect, without looking into this internal black-listed public keys list,
  to deny a further signature. If we filter these keys from the DB (even if we keep them
  in the black-listed keys list), then some tests will fail.
- If we import an interchange file containing slashable keys and we filter them, then,
  if we re-export the DB, those slashing offences won't appear in the exported interchange
  file.

* `transformSignedBlocks`: Store a 0-len byte slice

When importing an EIP-3076 interchange file, and when no
signing root is specified in the file, we currently store a
`0x00000.....` signing root.

In such a case, instead of storing `0x00000...`, this commit stores
a 0-len byte slice, so we can differentiate a real `0x000.....` signing
root from no signing root at all.

* `slashableProposalCheck`: Manage lack of signing root

Currently, `slashableProposalCheck` does not really differentiate
between a `0x0000.....` signing root and a missing signing root.

(Signing roots can be missing when importing an EIP-3076 interchange
file.)

This commit makes `slashableProposalCheck` differentiate between a `0x0000....`
signing root and a missing signing root.

* `AttestationRecord.SigningRoot`: ==> `[]byte`

When importing attestations from EIP-3076 interchange format,
the signing root of an attestation may be missing.

Currently, Prysm considers any missing attestation signing root as
`0x000...`.
However, this may conflict with signing roots which really are equal to
`0x000...`.

This commit transforms `AttestationRecord.SigningRoot` from `[32]byte` to
`[]byte`, and changes the minimal set of functions needed to support this
new type.

* `CheckSlashableAttestation`: Empty signing root

Regarding signing roots, two attestations are slashable if:
- both signing roots are defined and differ, or
- one attestation exists, but without a signing root
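
As a sketch of that rule only (not the actual `CheckSlashableAttestation` implementation), the signing-root comparison can be thought of as:

```go
package main

import (
	"bytes"
	"fmt"
)

// signingRootsConflict sketches the rule above: an existing record with no
// signing root is always treated as a conflict, otherwise two defined signing
// roots conflict only when they differ. Illustrative only, not Prysm code.
func signingRootsConflict(existing, incoming []byte) bool {
	if len(existing) == 0 {
		// The existing attestation was stored without a signing root (e.g.
		// imported from an EIP-3076 file), so we cannot prove the new one matches.
		return true
	}
	return !bytes.Equal(existing, incoming)
}

func main() {
	fmt.Println(signingRootsConflict(nil, []byte{0xaa}))          // true: no recorded signing root
	fmt.Println(signingRootsConflict([]byte{0xaa}, []byte{0xaa})) // false: same signing root
	fmt.Println(signingRootsConflict([]byte{0xaa}, []byte{0xbb})) // true: defined and different
}
```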

* `filterSlashablePubKeysFromAttestations`: Err sort

Regarding `CheckSlashableAttestation`, we consider that:
- If slashable == NotSlashable and err != nil, then CheckSlashableAttestation
failed.
- If slashable != NotSlashable, then err contains the reason why the attestation
is slashable.

* `setupEIP3076SpecTests`: Update to `v5.3.0`

This commit:
- Updates the version of EIP-3076 tests to `v.5.2.1`.
- Sets up one anti-slashing DB per test case, instead of per step.

* `ImportStandardProtectionJSON`: Reduce cyclomatic complexity

* `AttestationHistoryForPubKey`: copy signing root

BoltDB documentation specifies:
| Byte slices returned from Bolt are only valid during a transaction.
| Once the transaction has been committed or rolled back then the memory
| they point to can be reused by a new page or can be unmapped
| from virtual memory and you'll see an unexpected fault address panic
| when accessing it.
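
A minimal sketch of the copy-on-read pattern that note calls for, using `go.etcd.io/bbolt` (bucket and key names are illustrative, not Prysm's actual schema):

```go
package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

// readSigningRoot copies the value out of the bolt-managed page before
// returning it, because bytes returned by bolt are only valid inside the
// transaction. Illustrative sketch, not the actual Prysm DB code.
func readSigningRoot(db *bolt.DB, key []byte) ([]byte, error) {
	var out []byte
	err := db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket([]byte("attestation-signing-roots"))
		if bkt == nil {
			return nil
		}
		if v := bkt.Get(key); v != nil {
			out = append([]byte(nil), v...) // copy out of the mmap'd page
		}
		return nil
	})
	return out, err
}

func main() {
	db, err := bolt.Open("example.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	root, err := readSigningRoot(db, []byte("pubkey|target"))
	fmt.Println(root, err)
}
```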
2023-12-04 17:10:32 +00:00
terence
243bcb03ce Fix FFG LMD Consistency Check (Option 2) (#13258)
* Fix FFG LMD Consistency Check with TargetRootForSlot

* Add test, removed  implementation

* convert to epoch and fix self target

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2023-12-04 15:31:18 +00:00
Radosław Kapka
a0ca4a67b0 Remove API Middleware (#13243)
* remove api/gateway/apimiddleware

* fix errors in api/gateway

* remove beacon-chain/rpc/apimiddleware

* fix errors in api/client/beacon

* fix errors in validator/client/beacon-api

* fix errors in beacon-chain/node

* fix errors in validator/node

* fix errors in cmd/prysmctl/validator

* fix errors in testing/endtoend

* fix all other code

* remove comment

* fix tests

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-04 11:55:21 +00:00
Preston Van Loon
b68a4e12aa Update bazel and other CI improvements (#13246)
* Update bazel to 6.4.0, review flags

* Remove problematic/slow targets

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-01 22:20:54 +00:00
kasey
c010601f3b Initialize cancellable root context in main.go (#13252)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-01 21:16:06 +00:00
james-prysm
394bd1786a HTTP validator API: beacon and account endpoints (#13191)
* fixing squashing changes, migrates beacon , account, and auth endpoints on validator client

* adding accounts endpoints

* fixing tests and query endpoints

* adding auth endpoint and fixing unit tests

* removing unused files and updating node file to skip gRPC

* ineffectual assignment fix

* rolling back a change to fix e2e

* fixing issues with ui

* updating with webui version 2.0.5

* updating package name flag in readme

* removing restore assets functions

* adding nomemcopy flag to see if vulnerability scan passes

* making data non-compressed to avoid copy vulnerability

* Update beacon-chain/rpc/eth/shared/structs_validator.go

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>

* updating site_data, and skipping static analysis on file

* adding back deprecation comment notice

* updating workflows to ignore generated

* addressing radek comments

* missed a conversion

---------

Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-12-01 20:40:09 +00:00
Potuz
461af4baa6 Add test helpers to produce commitments and proofs (#13242)
* Add test helpers to produce commitments and proofs

* go mod tidy

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-12-01 19:51:19 +00:00
Sammy Rosso
7a70305935 Blob filesystem: delete blobs (#13233)
* Add deletion

* Gaz

* Return on removal

* Test cleanup

* Simplify blob deletion

* Add test case to prove that deleting a root that doesn't exist will not return an error

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-12-01 19:03:47 +00:00
Potuz
83ce7e3607 Verify lmd without ancestor (#13250) 2023-12-01 17:31:27 +00:00
Potuz
cf8e554981 track target in forkchoice (#13249) 2023-12-01 16:30:34 +00:00
Nishant Das
59aa978223 Optimize Multivalue Slice For Trie Recomputation (#13238)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-12-01 12:01:28 +01:00
Radosław Kapka
e4a5711c8f Redesign of Beacon API evaluator (#13229)
* redesign

* ssz

* small fixes

* capitalize json and ssz

* rename and split files

* clearer names and comments

* bazel fix

* one more simplification
2023-11-30 16:53:51 +00:00
Nishant Das
d8b38cf230 Drop Transaction Count for Transaction Generator (#13228)
* reduce

* comment
2023-11-30 10:55:18 +00:00
Nishant Das
ca36634de6 Improve Gossipsub Rejection Metric (#13236)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-30 10:16:26 +00:00
Nishant Das
1c35b66132 Add Gossipsub Queue Flag (#13237)
* add it

* remove var

* fix tests

* terence's comments

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-30 14:22:59 +08:00
terence
56c1f9aab5 Update Prysm Proposer end points for Builder API (#13240) 2023-11-29 13:07:57 -08:00
Radosław Kapka
5ecb4d62a9 REST VC: Use POST to fetch validators (#13239) 2023-11-29 18:53:26 +01:00
james-prysm
bc107a61e3 builder API: remove blinded blob sidecar (#13202) 2023-11-29 06:28:37 -08:00
Nishant Das
9f41375550 remove subscriber checker (#13234) 2023-11-29 10:11:10 +08:00
Radosław Kapka
6a638bd148 HTTP handler for Beacon API events (#13207)
* in progress

* implementation done

* bzl

* fixes

* tests in progress

* tests

* go mod tidy

* Update beacon-chain/rpc/eth/events/events.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* fix config test

* fix unreachable code issue

* remove proto service dir

* test fix

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-11-28 23:20:02 +00:00
Nishant Das
52f1b3f958 add in changes (#13226) 2023-11-28 22:12:13 +08:00
Nishant Das
80526a1899 Verify Block Signatures On Insertion Into Pending Queue (#13183)
* add check for bad signatures via gossip

* edge case handled
2023-11-28 03:13:59 +00:00
Manu NALEPA
da2212f6cc Allow validators registration batching on Builder API /eth/v1/builder/validators (#13178)
* builder `NewClient`: Simplify + fix some typos.

* Validator client: Implement `validator-registration-batch-size` option

* Address Potuz comments

* Address Potuz's comments

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-28 00:23:48 +00:00
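
A generic sketch of the batching idea behind the new `validator-registration-batch-size` option in the commit above (the chunking helper below is illustrative, not the validator client's actual code):

```go
package main

import "fmt"

// chunk splits items into consecutive batches of at most size elements. It
// mirrors the general idea of submitting validator registrations to
// /eth/v1/builder/validators in batches rather than all at once.
func chunk[T any](items []T, size int) [][]T {
	if size <= 0 {
		return [][]T{items}
	}
	var out [][]T
	for start := 0; start < len(items); start += size {
		end := start + size
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[start:end])
	}
	return out
}

func main() {
	regs := []string{"v1", "v2", "v3", "v4", "v5"}
	for i, batch := range chunk(regs, 2) {
		fmt.Println("batch", i, batch) // each batch would be one POST to the builder API
	}
}
```
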
terence
7cc05401ca Update proposer RPC to new blob sidecar format (#13189) 2023-11-27 15:44:52 -08:00
Radosław Kapka
cd8d499198 Move weak subjectivity endpoint to HTTP (#13220)
* Move weak subjectivity endpoint to HTTP

* remove server test file

* remove deprecation
2023-11-27 14:44:26 +00:00
Radosław Kapka
2fbda536b0 Fix handling POST requests in the REST VC (#13215)
* Fix handling POST requests in the REST VC

* tests for decodeResp
2023-11-25 23:25:14 +00:00
Nishant Das
0498e0a4d5 Fix Blob Storage Path (#13222)
* fix the path

* gaz
2023-11-25 01:57:22 +00:00
Radosław Kapka
098d6a3c0b Handle non-JSON responses from Beacon API (#13213)
* Run Beacon API evaluator at slot 3

* Revert "Auxiliary commit to revert individual files from f80b444688ed1acb267ee8bf00ba602d1f890cc7"

This reverts commit 0d3d7a4113533ac0516efe12d09cc3b9d78793f1.
2023-11-24 23:30:21 +00:00
Potuz
67d0b26a21 Accept block when error is only in logging (#13223)
* Accept block when error is only in logging

* linter shutup

* ignore nilerr on the linter
2023-11-24 19:00:53 +00:00
terence
6c85587d14 Update broadcast method to use BlobSidecar instead of SignedBlobSidecar (#13221)
* Update broadcast method to use BlobSidecar instead of SignedBlobSidecar

* Fix test
2023-11-24 07:18:00 +00:00
terence
6daa72634d Fix forkchoice pkg's comments grammar (#13217) 2023-11-22 17:27:42 -08:00
Potuz
07ee42660a lock RecentBlockSlot (#13212)
* lock RecentBlockSlot

* Kasey's fix
2023-11-22 16:58:00 -03:00
hzysvilla
4b5db8003b Comment typo (#13209)
* Update config.go

* Update flags.go

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-11-22 18:54:19 +00:00
Radosław Kapka
4b3c511a26 POST version of GetValidators and GetValidatorBalances (#13199)
* POST versions of GetValidators and GetValidatorBalances

* post statuses

* balances test

* group params

* test error cases
2023-11-22 17:30:52 +00:00
terence
8902ad3a20 Implement Slot-Dependent Caching for Blobs Bundle (#13205) 2023-11-22 07:23:50 -08:00
kasey
1123df7432 Verified roblobs (#13190)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-21 18:44:38 +00:00
Radosław Kapka
a7edec9b98 Better error handling in REST VC (#13203) 2023-11-21 17:42:55 +01:00
Nicolás Pernas Maradei
10ccf1840f [2/5] light client http api (#12984)
Co-authored-by: Lizhang <lizhang@polymerlabs.org>
2023-11-21 13:26:39 +01:00
terence
d035be29cd Optimize ReplayBlocks for Zero Diff (#13198)
* Stategen: replay block return early when zero diff

* Fix test setup
2023-11-17 18:19:05 +00:00
Preston Van Loon
bba8dd6f1e bazel: Run buildifier, general cleanup (#13193)
* Run buildifier

* Other BUILD.bazel cleanup, rm swagger stuff.

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-16 18:41:37 +00:00
terence
8f5ae760ee Add concurrency test for getting attestation state (#13196)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-16 17:52:59 +00:00
terence
5ba91a5216 Add construct_generic_block_test to build file (#13195)
* Add construct_generic_block_test test to build file

* Use the right require library

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-15 20:33:56 +00:00
james-prysm
4c381938e1 HTTP validator API: wallet endpoints (#13171)
* converting wallet calls to pure http

* fixing proto and gaz

* adding routes and fixing test

* fixing error handling

* fixing protos after conflict with develop

* adding deprecation notice

* fixing route test

* review feedback

* addressing more comments

* updating comment to be more clear

* fixing web_api proto
2023-11-15 19:40:14 +00:00
james-prysm
d4726f2866 HTTP Validator API: slashing protection import and export (#13165)
* adding migration for import and export slashing protection

* Update validator/rpc/handle_slashing.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handle_slashing.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handle_slashing.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handle_slashing.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing comments

* fixing unit test errors after view comments

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-11-15 17:35:22 +00:00
terence
4e3419e870 Enhance Validation for Block by Root RPC Requests (#13184)
* blk-by-root-check-root

* Account for gap

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-14 17:13:01 +00:00
terence
ac06362baf Add a helper for max request block (#13173)
* Add a helper for max request block

* Add test

* Use deneb fork epoch from config

* Fix comment
2023-11-14 05:50:51 +00:00
Radosław Kapka
28aa11c976 Config HTTP endpoints (#13168)
* Config HTTP endpoints

* error on unsupported type

* type assertion
2023-11-13 23:38:23 +00:00
Radosław Kapka
798d5ec585 Remove default value of circuit breaker flags (#13186)
* Update default value of `max-builder-epoch-missed-slots`

* remove the default value

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-13 18:55:55 +00:00
Radosław Kapka
9b97f3fd92 Return 404 from eth/v1/beacon/headers when there are no blocks (#13185)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-13 18:06:26 +00:00
Radosław Kapka
0946b5853f Pool slashings HTTP endpoints (#13148)
* Pool slashings HTTP endpoints

* e2e fix

* commit

* remove pb files

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-13 17:22:39 +00:00
Nishant Das
1530d17977 Fix Withdrawals (#13181)
* fix withdrawals

* disable it
2023-11-09 13:50:57 +00:00
Potuz
e46f9c5631 KZG Commitment inclusion proof verifier (#13174)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-08 19:04:21 +00:00
Preston Van Loon
3097601530 pgo: Enable pgo behind release flag (#13158)
* Revert a54e61ecb0

* Configure the use of pgo profiles behind the release config flag (--config=release)
2023-11-08 13:33:26 +00:00
Nishant Das
4a515c36e6 Deneb E2E (#13040)
* save changes

* add dep

* add changes

* add latest changes

* push changes

* hack it for mainnet

* fix deps

* update it

* add changes

* fix e2e

* revert it

* gaz

* remove log

* preston's review

* clear up

* add more logs

* fix nonce gaps

* make it better

* fix blobs

* set value

* add support for deneb scenario paths

* update to fix scenario

* go mod

* clean up

* fix up

* reduce cog complexity

* lint

* remove

* go sec

* Update testing/endtoend/evaluators/fork.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update proto/ssz_proto_library.bzl

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fix

* radek's review

* make it atomic

* gaz

* add deneb case

* remove deneb activation

* change e2e yaml

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-11-08 12:24:23 +00:00
Potuz
afaeff9d4c Merkle Proofs of KZG commitments (#13159)
* Merkle Proofs of KZG commitments

* fix mock

* Implement Merkle proof spectests

* Check Proof construction in spectests

* fix Merkle proof generator

* Add unit test

* add ssz package unit tests

* add benchmark

* fix typo in comment

* ProposerSlashing was repeated

* Terence's review

* move to consensus_blocks

* use existing error
2023-11-06 08:49:35 -03:00
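
For context, a generic sha256-based sketch of what a Merkle inclusion-proof check of this kind does (the helpers added in the PR above live in Prysm's ssz/consensus-blocks packages and differ in detail):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verifyMerkleProof hashes leaf up through branch, pairing left/right according
// to the leaf's position within the bottom layer, and compares the result to
// root. Generic sketch only, not the exact helper added in the PR.
func verifyMerkleProof(root, leaf [32]byte, index uint64, branch [][32]byte) bool {
	node := leaf
	for _, sibling := range branch {
		if index%2 == 1 {
			node = sha256.Sum256(append(sibling[:], node[:]...))
		} else {
			node = sha256.Sum256(append(node[:], sibling[:]...))
		}
		index /= 2
	}
	return bytes.Equal(node[:], root[:])
}

func main() {
	// Two-leaf tree: root = H(leafA || leafB).
	leafA := sha256.Sum256([]byte("a"))
	leafB := sha256.Sum256([]byte("b"))
	root := sha256.Sum256(append(leafA[:], leafB[:]...))
	fmt.Println(verifyMerkleProof(root, leafA, 0, [][32]byte{leafB})) // true
	fmt.Println(verifyMerkleProof(root, leafB, 1, [][32]byte{leafA})) // true
}
```
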
terence
f663f605d2 Add blob getters (#13170)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-04 14:35:56 +00:00
Terence
12f7143c4f Validator client: remove blob signing (#13169) 2023-11-03 12:10:15 -07:00
Radosław Kapka
1f250f7e89 Validator HTTP endpoints (#13167)
* HTTP validator endpoints

* Sammy's review

* capitalize errors

* test fix

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-03 16:59:04 +00:00
Sammy Rosso
0f65e51d1e Blob filesystem: Save Blobs (#13129)
* Add Save blob and tests

* Remove locks

* Remove test cleanup

* Fix go mod

* Cleanup

* Add checksum

* Add file hashing to fileutil

* Move test

* Check data when exists

* Add one more test

* Rename

* Gaz

* Add packaged level comment

* Save full sidecar + reviews

* Use path builder in test

* Use other BlobSidecar

* Cleanup

* Fix gosec

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-03 16:24:30 +00:00
Radosław Kapka
d1dd8471a3 Debug HTTP endpoints (#13164)
* Debug HTTP endpoints

* register endpoints

* tests

* small fixes

* config test fix
2023-11-03 15:33:46 +00:00
Terence
7a6487b746 Remove pending blobs queue (#13166) 2023-11-03 07:07:43 -07:00
Potuz
daa6d2e741 Implement Merkle proof spectests (#13146)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-11-02 23:08:16 +00:00
Raul Jordan
8a743a6430 Update Terms of Service (#13163)
* tos updates

* fixes
2023-11-02 17:11:11 +00:00
james-prysm
c0fb16a96f HTTP validator API: health endpoints (#13149)
* updating health endpoints

* updating tests

* updating tests

* moving where the header is written and adding allow origin header

* removing header

* Update validator/rpc/handlers_health.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handlers_health.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handlers_health.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's comments

* Update handlers_health.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* adding the correct errors to handle error

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-11-02 15:51:21 +00:00
Terence
57eda1de63 Add RO blob sidecar (#13144) 2023-11-01 10:03:49 -07:00
Preston Van Loon
a54e61ecb0 pgo: remove default pprof profile (#13150) 2023-10-31 21:43:41 +00:00
james-prysm
27b4e32e1c HTTP Validator API: /eth/v1/keystores (#13113)
* WIP

* fixing tests

* fixing bazel

* fixing api client

* fixing tests

* fixing more tests and bazel

* fixing trace and more bazel issues

* fixing router path function definitions

* fixing more tests and deep source issues

* adding delete test

* if a route is provided, reregister before the catch all on the middleware.

* fixing linting

* fixing deepsource complaint

* gaz

* more deepsource issues

* fixing missed err check

* changing how routes are registered

* radek reviews

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* fixing unit test after sammy's review

* adding radek's comments

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2023-10-31 16:33:54 +00:00
Nishant Das
b56bf00682 Fix Pending Queue Deadline Bug (#13145)
* rearrange deadline

* naming
2023-10-31 06:40:41 +00:00
Preston Van Loon
b24b60dbd8 zig: Update zig to recent main branch commit (#13142)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-30 23:48:45 +00:00
Potuz
dc9d34b41b fix head slot in log (#13139) 2023-10-30 16:56:20 -03:00
Radosław Kapka
2ef0b3526d Fix block proposals in the REST validator client (#13116)
* Fix block proposals in the REST validator client

* fix graffiti test

* return empty graffiti

* fallback to old endpoints

* logs

* handle 404

* everything passes

* review from James

* log undecoded value

* test fixes and additions

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-30 18:15:45 +00:00
Sammy Rosso
047613069e Rename Blob retention epoch flag (#13124)
* Rename flag and add alias

* Update cmd/beacon-chain/flags/base.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix sentence

* Fix TestConfigureBlobRetentionEpoch

* Fix silly mistake

* Reviews

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-10-30 17:35:31 +00:00
Justin Traglia
159a5dd69d Check that blobs count is correct when unblinding (#13118)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-30 16:49:32 +01:00
hyunchel
470ea6d717 Remove no-op cancel func (#13069)
This cancel function is currently a no-op due to the blank identifier.
One might argue that the cancel func should be restored from no-op by
replacing the blank identifier with the proper variable. When the parent
context is cancelled, however, all the functions down the call tree with
the context will be notified of the cancellation anyway. Removing the
cancel function would not change any outcome under the current
implementation.
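
A small illustration of the standard-library behavior this reasoning relies on: cancelling a parent context closes the `Done` channel of every derived context, so a discarded child cancel func does not change what downstream code observes.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	parent, cancelParent := context.WithCancel(context.Background())

	// Derive a child context. Its cancel func is kept only to satisfy go vet;
	// the point is that cancelling the parent is enough to stop work running
	// under the child.
	child, cancelChild := context.WithCancel(parent)
	defer cancelChild()

	go func() {
		<-child.Done()
		fmt.Println("child observed cancellation:", child.Err())
	}()

	cancelParent()                    // cancelling the parent propagates to the child
	time.Sleep(50 * time.Millisecond) // give the goroutine a moment to print
}
```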

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-30 14:43:25 +00:00
Radosław Kapka
b441f20e6a Remove /node/peers/{peer_id} from Beacon API evaluator (#13138) 2023-10-30 14:21:07 +00:00
Potuz
2ea5bff9c0 Log when sending FCU with payload attributes (#13137)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-30 12:30:20 +00:00
terencechain
c2433ff854 Update spectest and changed minimal preset for field elements (#13090)
* update trusted setup

* update dependencies

* Update workspace

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-30 11:41:58 +00:00
Preston Van Loon
82640b3d88 Enable profile guided optimization for beacon-chain (#13035)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-30 05:52:36 +00:00
Radosław Kapka
f925aded66 Allow unknown fields in Beacon API responses (#13131) 2023-10-27 16:47:50 +00:00
james-prysm
10a89fef13 DEPRECATION: Remove exchange transition configuration call (#13127)
* wip removing call to execution client for transition configuration

* updating bazel and execution engine proto

* removing more spots where the call was added

* removing unused metric
2023-10-27 15:43:00 +00:00
Nishant Das
56c65b8527 Return Error Gracefully When Removing 4881 Flag (#13096)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-27 13:24:43 +00:00
Radosław Kapka
022ee17af9 Better Beacon API evaluator part 1 (#13084)
* Better Beacon API evaluator part 1

* rename package

* more endpoints

* rename package back

* more endpoints

* small improvements

* remove the need for `params`

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-10-27 11:57:49 +00:00
terencechain
203dc5f63b Check blob index duplication for blob notifier (#13123)
* Check blob index duplication for blob notifier

* Better locks and test

* Better locks and test

* Kasey's feedback

* Fix init
2023-10-27 03:26:34 +00:00
Stefan
6f941b8138 fix segmentation fault when Capella fork epoch is MaxUint64 (#13126)
* fix segmentation fault when Capella fork epoch is MaxUint64

fix segmentation

fix segmentation

* Update cmd/prysmctl/testnet/generate_genesis.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update cmd/prysmctl/testnet/generate_genesis.go

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

---------

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2023-10-27 00:46:08 +00:00
dependabot[bot]
ac412259eb Bump google.golang.org/grpc from 1.53.0 to 1.56.3 (#13119)
* Bump google.golang.org/grpc from 1.53.0 to 1.56.3

Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.53.0 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.53.0...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* gazelle

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-26 16:54:57 +00:00
Radosław Kapka
3d78a52980 Fill state attestations (#13121) 2023-10-26 15:35:01 +00:00
Preston Van Loon
5de8ec4600 Update go to 1.20.10 (#13120) 2023-10-26 02:31:12 +00:00
Preston Van Loon
7e88eefc60 Add zero length check on indices during NextSyncCommitteeIndices (#13117)
* Add zero length check on indices during NextSyncCommitteeIndices computation. Fixes #13051

* Move the error further up the stack

* Fix TestSubmitAggregateAndProof_IsAggregatorAndNoAtts

* Delete TestServer_ListAssignments_NoResults. That is an impossible scenario that now returns an error
2023-10-25 21:42:17 +00:00
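
An illustrative sketch of why the guard in the commit above matters (not the actual `NextSyncCommitteeIndices` code): modulo-based selection over an empty index slice would otherwise panic with a division by zero.

```go
package main

import (
	"errors"
	"fmt"
)

// pickCandidate returns an error instead of panicking when there are no
// active indices; with len(indices) == 0 the modulo below would divide by
// zero. Illustrative only, not Prysm's implementation.
func pickCandidate(indices []uint64, randomValue uint64) (uint64, error) {
	if len(indices) == 0 {
		return 0, errors.New("no active validator indices")
	}
	return indices[randomValue%uint64(len(indices))], nil
}

func main() {
	if _, err := pickCandidate(nil, 42); err != nil {
		fmt.Println("guarded:", err)
	}
	idx, _ := pickCandidate([]uint64{10, 11, 12}, 42)
	fmt.Println("picked index:", idx)
}
```
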
terencechain
cabf3476e7 Add context deadline for pending queue's receive block (#13114)
* Add context deadline for pending queue's receive block

* Use timeout
2023-10-25 19:40:17 +00:00
Radosław Kapka
5a01eecc50 HTTP state endpoints (#13099)
* slowly plowing through

* implementation ready

* wrong epoch particip

* fix epoch participation

* tests

* fix e2e

* error handling in tests

* review from James

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-10-25 18:12:58 +00:00
terencechain
b608c9f711 Log blob's kzg commitment at sync (#13111)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-25 16:27:19 +00:00
Justin Traglia
671bf00c98 Fix bug in Beacon API getBlobs (#13100)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-10-25 03:33:59 +00:00
terencechain
cbf6a2752d Reject Blob Sidecar Incorrect Index (#13094)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-25 01:57:54 +00:00
Nishant Das
642458f037 Fix Pending Queue Expiration Bug (#13104)
* fix bug

* make test better

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-24 20:28:57 +00:00
james-prysm
2a067d5d03 HTTP Validator API: /eth/v1/validator/{pubkey}/feerecipient (#13085)
* migrating fee recipient endpoints to pure http implementation

* fixing linting

* fixing type name

* fixing after merging develop

* fixing linting and tests

* Update validator/rpc/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-24 16:55:45 +00:00
terencechain
a2f60364ae Check return and request lengths for blob sidecar by root (#13106) 2023-10-24 15:02:44 +00:00
Justin Traglia
45f68fa8d5 Replace MAX_BLOB_EPOCHS usages with more accurate terms (#13098)
Co-authored-by: terencechain <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-10-24 03:28:50 +00:00
terencechain
f55708b995 Fix blob sidecar subnet check (#13102)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-23 23:08:25 +00:00
Delweng
00826e8858 beacon-chain/blockchain: fix some datarace in go test (#13036)
* beacon-chain/blockchain: mockBeaconNode with mutex

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-node/forkchoice: bool -> atomic.Bool

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/blockchain: datarace in concurrent postBlock

Signed-off-by: jsvisa <delweng@gmail.com>

* Revert "beacon-node/forkchoice: bool -> atomic.Bool"

This reverts commit 4aad095b0f.

Signed-off-by: jsvisa <delweng@gmail.com>

---------

Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-23 21:55:47 +00:00
terencechain
76fec1799e Replace Empty Slice Literals with Nil Slices (#13093)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-23 16:36:11 +00:00
james-prysm
9c938d354d HTTP Validator API: /eth/v1/validator/{pubkey}/gas_limit (#13082)
* WIP

* more WIP

* fixing unit tests

* gaz

* gofmt

* adding routes

* adding tests for validator routes

* adding in missed comment

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* adding log and removing unneeded type

* fixing casing on tests

* adding more tests

* Update server.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* addressing radek's comments

* handling error

* fixing naming on validator struct for mock

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-23 15:49:28 +00:00
terencechain
83932d8e05 Refactor Error String Formatting According to Go Best Practices (#13092)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-23 07:57:25 +00:00
Nishant Das
beebb56c8e Fix Builder Testing For Multiclient Runs (#13091) 2023-10-23 06:33:46 +00:00
Potuz
0920fb1f61 Return early from ReceiveBlock if already synced (#13089)
* Return early from ReceiveBlock if already synced

* Fix bad setup test
2023-10-22 18:31:50 -03:00
Delweng
29f8880638 beacon-node/rpc: fix go test datarace (#13018)
* beacon-chain/p2p: use atomic.Bool instead of bool

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/p2p,rpc: read mock.BroadcastMessages with lock

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/p2p,rpc: read attestation with lock

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/rpc: fix typo

Signed-off-by: jsvisa <delweng@gmail.com>

* beacon-chain/p2p: typo

Signed-off-by: jsvisa <delweng@gmail.com>

---------

Signed-off-by: jsvisa <delweng@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-22 15:12:55 +00:00
Nishant Das
f91efafe24 Fix Multivalue Slice Deadlock (#13087)
* fix deadlock

* gofmt

* lint
2023-10-21 17:08:52 +00:00
terencechain
9387a36b66 Refactor Exported Names to Follow Golang Best Practices (#13075)
* Fix exported names that start with a package name

* A few more renames

* Fix exported names that start with a package name

* A few more renames

* Radek's feedback

* Fix conflict

* fix keymanager test

* Fix comments

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-20 16:45:33 +00:00
Potuz
65ce27292c sync only up to previous epoch on phase 1 (#13083)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-10-21 00:05:14 +08:00
terencechain
823f8ee3a2 Fix redundant type conversion (#13076)
* Fix redundant type conversion

* Revert generated changes

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-20 15:07:10 +00:00
vuittont60
88e1b9edb3 docs: fix typo (#13023)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-20 14:55:16 +00:00
Nishant Das
c7e28908f5 Add Clarification To Sync Committee Cache (#13067)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-10-20 13:57:07 +00:00
james-prysm
7143fe80bc HTTP VALIDATOR API: remote keymanager api /eth/v1/remotekeys (#13059)
* WIP migrating keymanager api changes

* gaz

* fixing more tests

* fixing unit tests

* fixing deepsource

* fixing visibility of package

* fixing more package visibility issues

* gaz

* fixing test

* moving routes to proper location

* removing whitespace for linting

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's comments

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2023-10-19 16:17:42 +00:00
1211 changed files with 62920 additions and 63039 deletions

View File

@@ -27,6 +27,7 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
# Release flags
build:release --compilation_mode=opt
build:release --stamp
build:release --define pgo_enabled=1
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --copt=-g

View File

@@ -1 +1 @@
6.3.2
7.0.0

View File

@@ -9,8 +9,8 @@
#build:remote-cache --strategy=Genrule=standalone
# Prysm specific remote-cache properties.
#build:remote-cache --disk_cache=
build:remote-cache --remote_download_toplevel
build:remote-cache --remote_download_minimal
build:remote-cache --remote_build_event_upload=minimal
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
# Does not work with rules_oci. See https://github.com/bazel-contrib/rules_oci/issues/292
#build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092
@@ -29,7 +29,10 @@ build --experimental_use_hermetic_linux_sandbox
# Import workspace options.
import %workspace%/.bazelrc
startup --host_jvm_args=-Xmx4g --host_jvm_args=-Xms2g
# Enable blake3 once it is supported in remote cache. See: https://github.com/buchgr/bazel-remote/issues/710
# startup --digest_function=blake3
startup --host_jvm_args=-Xmx8g --host_jvm_args=-Xms4g
build --experimental_strict_action_env
build --sandbox_tmpfs_path=/tmp
build --verbose_failures
@@ -39,6 +42,7 @@ build --curses=no --color=no
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5
build --build_runfile_links=false # Only build runfile symlink forest when required by local action, test, or run command.
# Disabled race detection due to unstable test results under constrained environment build kite
# build --features=race

View File

@@ -1,4 +1,4 @@
FROM golang:alpine
FROM golang:1.21-alpine
COPY entrypoint.sh /entrypoint.sh

View File

@@ -16,7 +16,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.5'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
@@ -36,7 +36,7 @@ jobs:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.20'
go-version: '1.21.5'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}

View File

@@ -5,6 +5,8 @@ on:
branches: [ master ]
pull_request:
branches: [ '*' ]
merge_group:
types: [checks_requested]
jobs:
formatting:
@@ -26,15 +28,15 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.20
- name: Set up Go 1.21
uses: actions/setup-go@v3
with:
go-version: '1.20'
go-version: '1.21.5'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.15.0
gosec -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
gosec -exclude-generated -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
name: Lint
@@ -43,16 +45,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v2
- name: Set up Go 1.20
- name: Set up Go 1.21
uses: actions/setup-go@v3
with:
go-version: '1.20'
go-version: '1.21.5'
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: v1.52.2
version: v1.55.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
@@ -62,7 +64,7 @@ jobs:
- name: Set up Go 1.x
uses: actions/setup-go@v2
with:
go-version: '1.20'
go-version: '1.21.5'
id: go
- name: Check out code into the Go module directory

View File

@@ -6,21 +6,82 @@ run:
- proto
- tools/analyzers
timeout: 10m
go: '1.19'
go: '1.21.5'
linters:
disable-all: true
enable:
- gofmt
- goimports
- unused
- errcheck
- gosimple
- gocognit
- dupword
- nilerr
- whitespace
- misspell
enable-all: true
disable:
# Deprecated linters:
- deadcode
- exhaustivestruct
- golint
- govet
- ifshort
- interfacer
- maligned
- nosnakecase
- scopelint
- structcheck
- varcheck
# Disabled for now:
- asasalint
- bodyclose
- containedctx
- contextcheck
- cyclop
- depguard
- dogsled
- dupl
- durationcheck
- errorlint
- exhaustive
- exhaustruct
- forbidigo
- forcetypeassert
- funlen
- gci
- gochecknoglobals
- gochecknoinits
- goconst
- gocritic
- gocyclo
- godot
- godox
- goerr113
- gofumpt
- gomnd
- gomoddirectives
- inamedparam
- interfacebloat
- ireturn
- lll
- maintidx
- makezero
- musttag
- nakedret
- nestif
- nilnil
- nlreturn
- noctx
- nolintlint
- nonamedreturns
- nosprintfhostport
- perfsprint
- prealloc
- predeclared
- promlinter
- protogetter
- revive
- staticcheck
- stylecheck
- tagalign
- tagliatelle
- thelper
- unparam
- varnamelen
- wrapcheck
- wsl
linters-settings:
gocognit:

View File

@@ -4,6 +4,7 @@ load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@bazel_skylib//rules:common_settings.bzl", "string_setting")
load("@prysm//tools/nogo_config:def.bzl", "nogo_config_exclude")
prefix = "github.com/prysmaticlabs/prysm"
@@ -82,38 +83,117 @@ workspace_binary(
cmd = "@com_github_golang_lint//golint",
)
STATICCHECK_ANALYZERS = [
# Enabled static checks. See https://staticcheck.dev/docs/checks/
# Please. keep this list sorted. Don't be a bad person by inserting stuff randomly.
"sa1000",
"sa1001",
"sa1002",
"sa1003",
"sa1004",
"sa1005",
"sa1006",
"sa1007",
"sa1008",
"sa1010",
"sa1011",
"sa1012",
"sa1013",
"sa1014",
"sa1015",
"sa1016",
"sa1017",
"sa1018",
# "sa1019", # TODO: Fix all uses of deprecated things.
"sa1020",
"sa1021",
"sa1023",
"sa1024",
"sa1025",
"sa1026",
"sa1027",
"sa1028",
"sa1029",
"sa1030",
"sa2000",
"sa2001",
"sa2002",
"sa2003",
"sa3000",
"sa3001",
"sa4000",
"sa4001",
"sa4003",
"sa4004",
"sa4005",
"sa4006",
"sa4008",
"sa4009",
"sa4010",
"sa4011",
"sa4012",
"sa4013",
"sa4014",
"sa4015",
"sa4016",
"sa4017",
"sa4018",
"sa4019",
"sa4020",
"sa4021",
"sa4022",
"sa4023",
"sa4024",
"sa4025",
"sa4026",
"sa4027",
"sa4028",
"sa4029",
"sa4030",
"sa4031",
"sa4032",
"sa5000",
"sa5001",
"sa5002",
"sa5003",
"sa5004",
"sa5005",
"sa5007",
"sa5008",
"sa5009",
"sa5010",
"sa5011",
"sa5012",
"sa6000",
"sa6001",
"sa6002",
"sa6003",
"sa6005",
"sa6006",
"sa9001",
"sa9002",
#"sa9003", # Doesn't build. See https://github.com/dominikh/go-tools/pull/1483
"sa9004",
"sa9005",
"sa9006",
"sa9007",
"sa9008",
]
nogo_config_exclude(
name = "nogo_config_with_excludes",
checks = [sa.upper() for sa in STATICCHECK_ANALYZERS],
exclude_files = [
"external/.*",
],
input = "nogo_config.json",
)
nogo(
name = "nogo",
config = "nogo_config.json",
config = ":nogo_config_with_excludes",
visibility = ["//visibility:public"],
deps = [
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
# "@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
@@ -129,6 +209,53 @@ nogo(
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
"@org_golang_x_tools//go/analysis/passes/appends:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
# cgocall disabled
#"@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/framepointer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpmux:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ifaceassert:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/reflectvaluecompare:go_default_library",
# shadow disabled
#"@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sigchanyzer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/slog:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sortslice:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stringintconv:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/testinggoroutine:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/timeformat:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedresult:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedwrite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/usesgenerics:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],
@@ -136,7 +263,7 @@ nogo(
"@org_golang_x_tools//go/analysis/passes/composite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
],
}),
}) + ["@co_honnef_go_tools//staticcheck/%s:go_default_library" % c for c in STATICCHECK_ANALYZERS],
)
config_setting(
@@ -144,6 +271,11 @@ config_setting(
values = {"define": "coverage_enabled=1"},
)
config_setting(
name = "pgo_enabled",
values = {"define": "pgo_enabled=1"},
)
common_files = {
"//:LICENSE.md": "LICENSE.md",
"//:README.md": "README.md",

View File

@@ -55,7 +55,7 @@ bazel build //beacon-chain --config=release
## Adding / updating dependencies
1. Add your dependency as you would with go modules. I.e. `go get ...`
1. Run `gazelle update-repos -from_file=go.mod` to update the bazel managed dependencies.
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
Example:

6
MODULE.bazel Normal file
View File

@@ -0,0 +1,6 @@
###############################################################################
# Bazel now uses Bzlmod by default to manage external dependencies.
# Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel.
#
# For more details, please check https://github.com/bazelbuild/bazel/issues/18958
###############################################################################

1245
MODULE.bazel.lock generated Normal file

File diff suppressed because it is too large

View File

@@ -7,7 +7,7 @@
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/eth2/) specification, developed by [Prysmatic Labs](https://prysmaticlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
### Getting Started

View File

@@ -1,45 +1,53 @@
## Terms of Use
# Terms of Use
Effective as of November 2, 2023
Effective as of Oct 14, 2020
By downloading, accessing or using the Prysm implementation (“Prysm”), you (referenced herein as “you” or the “user”) certify that you have read and agreed to the terms and conditions below (the “Terms”) which form a binding contract between you and Offchain Labs, Inc. (as successor in interest to Prysmatic Labs LLC) (referenced herein as “Offchain Labs”, “we” or “us”). If you do not agree to the Terms, do not download or use Prysm. Additionally, the Terms of Use available at https://arbitrum.io/tos (or any successor site, the “OCL Terms of Use”) are hereby incorporated by reference into these Terms. In the event of any conflict between provisions set forth herein and those set forth in the OCL Terms of Use, the provisions set forth herein shall control.
By downloading, accessing or using the Prysm implementation (“Prysm”), you (referenced herein as “you” or the “user”) certify that you have read and agreed to the terms and conditions below (the “Terms”) which form a binding contract between you and Prysmatic Labs (referenced herein as “we” or “us”). If you do not agree to the Terms, do not download or use Prysm.
## About Prysm
### About Prysm
Prysm is a client implementation for Ethereum consensus protocol for a proof-of-stake blockchain. To participate in the network, a user must send ETH from the Eth1.0 chain into a validator deposit contract, which will queue in the user as a validator in the system. Validators participate in proposing and voting on blocks in the protocol, and the network applies rewards/penalties based on their behavior. A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the official documentation portal, however, we do not warrant the accuracy, completeness or usefulness of this documentation. Any reliance you place on such information is strictly at your own risk.
Prysm is a client implementation for the Ethereum blockchain's consensus protocol. To participate in the network, a user must send ETH from the Ethereum mainnet blockchain to a validator deposit smart contract on Ethereum mainnet. Validators participate in proposing and voting on blocks in the protocol, and the network applies rewards/penalties based on their behavior. A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the official documentation portal, however, we do not warrant the accuracy, completeness or usefulness of this documentation. Any reliance you place on such information is strictly at your own risk.
### Licensing Terms
Prysm is a fully open-source software program licensed pursuant to the GNU General Public License v3.0.
## Licensing Terms
Prysm is an open-source software program licensed pursuant to the GNU General Public License v3.0.
The Offchain Labs name, the term “Prysm” and all related names, logos, product and service names, designs and slogans are trademarks of Offchain Labs or its affiliates and/or licensors. You must not use such marks without our prior written permission.
PLEASE READ THESE TERMS CAREFULLY, AS THE OCL TERMS OF USE INCORPORATED BY REFERENCE HEREIN CONTAIN AN AGREEMENT TO ARBITRATE AND OTHER IMPORTANT INFORMATION REGARDING YOUR LEGAL RIGHTS, REMEDIES, AND OBLIGATIONS. THE AGREEMENT TO ARBITRATE REQUIRES (WITH LIMITED EXCEPTION) THAT YOU SUBMIT CLAIMS YOU HAVE AGAINST US TO BINDING AND FINAL ARBITRATION, AND FURTHER (1) YOU WILL ONLY BE PERMITTED TO PURSUE CLAIMS AGAINST OFFCHAIN LABS ON AN INDIVIDUAL BASIS, NOT AS A PLAINTIFF OR CLASS MEMBER IN ANY CLASS OR REPRESENTATIVE ACTION OR PROCEEDING, (2) YOU WILL ONLY BE PERMITTED TO SEEK RELIEF (INCLUDING MONETARY, INJUNCTIVE, AND DECLARATORY RELIEF) ON AN INDIVIDUAL BASIS, AND (3) YOU MAY NOT BE ABLE TO HAVE ANY CLAIMS YOU HAVE AGAINST US RESOLVED BY A JURY OR IN A COURT OF LAW.
The Prysmatic Labs name, the term “Prysm” and all related names, logos, product and service names, designs and slogans are trademarks of Prysmatic Labs or its affiliates and/or licensors. You must not use such marks without our prior written permission.
## Risks of Operating Prysm
### Risks of Operating Prysm
The use of Prysm and acting as a validator on the Ethereum network can lead to loss of money. Ethereum is still an experimental system and ETH remains a risky investment. You alone are responsible for your actions on Prysm including the security of your ETH and meeting any applicable minimum system requirements.
The use of Prysm and acting as a validator on the Ethereum network can lead to loss of money, tokens and value. Ethereum is still an experimental system and ETH remains a risky investment. You alone are responsible for your actions on Prysm, including the security of your ETH and meeting any applicable minimum system requirements.
Use of Prysm and the ability to receive rewards or penalties may be affected at any time by mistakes made by the user or other users, software problems such as bugs, errors, incorrectly constructed transactions, unsafe cryptographic libraries or malware affecting the network, technical failures in the hardware of a user, security problems experienced by a user and/or actions or inactions of third parties and/or events experienced by third parties, among other risks. We cannot and do not guarantee that any user of Prysm will make money, that the Prysm network will operate in accordance with the documentation or that transactions will be effective or secure.
We make no claims that Prysm is appropriate or permitted for use in any specific jurisdiction. Access to Prysm may not be legal by certain persons or in certain jurisdictions or countries. If you access Prysm, you do so on your own initiative and are responsible for compliance with local laws.
YOU ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY RISKS ASSOCIATED WITH YOUR USE OF PRYSM, AND CANNOT BE HELD LIABLE FOR ANY RESULTING LOSSES THAT YOU EXPERIENCE WHILE ACCESSING OR USING PRYSM.
Some Internet plans will charge an additional amount for any excess upload bandwidth used that isn't included in the plan and may terminate your connection without warning because of overuse. We advise that you check whether your Internet connection is subjected to such limitations and monitor your bandwidth use so that you can stop Prysm before you reach your upload limit.
BY ACCESSING AND USING PRYSM, YOU REPRESENT AND WARRANT THAT YOU UNDERSTAND THE INHERENT RISKS ASSOCIATED WITH USING CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS, AND THAT YOU HAVE A WORKING KNOWLEDGE OF THE USAGE AND INTRICACIES OF DIGITAL ASSETS, SUCH AS THOSE FOLLOWING THE ETHEREUM TOKEN STANDARD (ERC-20). YOU FURTHER UNDERSTAND THAT THE MARKETS FOR DIGITAL ASSETS ARE HIGHLY VOLATILE DUE TO VARIOUS FACTORS, INCLUDING ADOPTION, SPECULATION, TECHNOLOGY, SECURITY, AND REGULATION. YOU ACKNOWLEDGE AND ACCEPT THAT THE COST AND SPEED OF TRANSACTING WITH CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS SUCH AS ETHEREUM ARE VARIABLE AND MAY INCREASE DRAMATICALLY AT ANY TIME. YOU UNDERSTAND THAT ANYONE CAN CREATE A TOKEN, INCLUDING FAKE VERSIONS OF EXISTING TOKENS AND TOKENS THAT FALSELY CLAIM TO REPRESENT PROJECTS, AND ACKNOWLEDGE AND ACCEPT THE RISK THAT YOU MAY MISTAKENLY INTERACT WITH THOSE OR OTHER TOKENS. YOU FURTHER ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY OF THE VARIABLES OR RISKS DESCRIBED IN THESE TERMS. YOU UNDERSTAND AND AGREE TO ASSUME FULL RESPONSIBILITY FOR ALL OF THE RISKS OF ACCESSING AND USING PRYSM. YOU ARE SOLELY RESPONSIBLE FOR YOUR WALLETS, FOR SAFEGUARDING THE ASSOCIATED PRIVATE KEY AND FOR ANY ACTIVITY THAT OCCURS USING YOUR WALLET. WITHOUT LIMITING THE FOREGOING, YOU ALSO UNDERSTAND THAT THERE MAY BE TAX AND REGULATORY RISKS RELATED TO USING PRYSM. IT IS YOUR SOLE RESPONSIBILITY TO DETERMINE WHETHER, AND TO WHAT EXTENT, ANY TAXES APPLY TO ANY TRANSACTIONS YOU CONDUCT IN CONNECTION WITH YOUR USE OF PRYSM, AND TO WITHHOLD, COLLECT, REPORT AND REMIT THE CORRECT AMOUNTS OF TAXES TO THE APPROPRIATE TAX AUTHORITIES. DIGITAL ASSETS, BLOCKCHAIN TECHNOLOGY, AND ANY RELATED SOFTWARE AND SERVICES ARE ALSO SUBJECT TO LEGAL AND REGULATORY UNCERTAINTY IN THE UNITED STATES AND OTHER JURISDICTIONS. YOU UNDERSTAND THAT LEGISLATIVE AND REGULATORY CHANGES OR ACTIONS MAY ADVERSELY AFFECT THE USAGE, TRANSFERABILITY, TRANSACTABILITY AND ACCESSIBILITY RELATED TO PRYSM.
### Warranty Disclaimer
PRYSM IS PROVIDED ON AN “AS-IS” BASIS AND MAY INCLUDE ERRORS, OMISSIONS, OR OTHER INACCURACIES. PRYSMATIC LABS AND ITS CONTRIBUTORS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT PRYSM FOR ANY PURPOSE, AND HEREBY EXPRESSLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OR ANY OTHER IMPLIED WARRANTY UNDER THE UNIFORM COMPUTER INFORMATION TRANSACTIONS ACT AS ENACTED BY ANY STATE. WE ALSO MAKE NO REPRESENTATIONS OR WARRANTIES THAT PRYSM WILL OPERATE ERROR-FREE, UNINTERRUPTED, OR IN A MANNER THAT WILL MEET YOUR REQUIREMENTS AND/OR NEEDS. THEREFORE, YOU ASSUME THE ENTIRE RISK REGARDING THE QUALITY AND/OR PERFORMANCE OF PRYSM AND ANY TRANSACTIONS ENTERED INTO THEREON.
We make no claims that Prysm is appropriate or permitted for use in any specific jurisdiction. Access to Prysm may not be legal by certain persons or in certain jurisdictions or countries. If you access Prysm, you do so on your own initiative and are responsible for compliance with all Applicable Law (as defined below), including, without limitation, for the avoidance of doubt, local laws.
### Limitation of Liability
In no event will Prysmatic Labs or any of its contributors be liable, whether in contract, warranty, tort (including negligence, whether active, passive or imputed), product liability, strict liability or other theory, breach of statutory duty or otherwise arising out of, or in connection with, your use of Prysm, for any direct, indirect, incidental, special or consequential damages (including any loss of profits or data, business interruption or other pecuniary loss, or damage, loss or other compromise of data, in each case whether direct, indirect, incidental, special or consequential) arising out of the use of Prysm, even if we or other users have been advised of the possibility of such damages. The foregoing limitations and disclaimers shall apply to the maximum extent permitted by applicable law, even if any remedy fails of its essential purpose. You acknowledge and agree that the limitations of liability afforded us hereunder constitute a material and actual inducement and condition to entering into these Terms, and are reasonable, fair and equitable in scope to protect our legitimate interests in light of the fact that we are not receiving consideration from you for providing Prysm.
Some Internet plans will charge additional amounts for bandwidth or any excess upload bandwidth used that isn't included in the plan and may terminate your connection without warning because of overuse. We advise that you check whether your Internet connection is subjected to any such limitations and monitor your bandwidth use and upload volumes.
### Indemnification
To the maximum extent permitted by law, you will defend, indemnify and hold Prysmatic Labs and its contributors harmless from and against any and all claims, actions, suits, investigations, or proceedings by any third party (including any party or purported party to or beneficiary or purported beneficiary of any transaction on Prysm), as well as any and all losses, liabilities,
damages, costs, and expenses (including reasonable attorneys' fees) arising out of, accruing from, or in any way related to (i) your breach of the terms of this Agreement, (ii) any transaction, or the failure to occur of any transaction on Prysm, and (iii) your negligence, fraud, or willful misconduct.
## Warranty Disclaimer
### Compliance with Laws and Tax Obligations
Your use of Prysm is subject to all applicable laws of any governmental authority, including, without limitation, federal, state and foreign securities laws, tax laws, tariff and trade laws, ordinances, judgments, decrees, injunctions, writs and orders or like actions of any governmental authority and rules, regulations, orders, interpretations, licenses, and permits of any federal,
regional, state, county, municipal or other governmental authority and you agree to comply with all such laws in your use of Prysm. The users of Prysm are solely responsible for determining what, if any, taxes apply to their ETH transactions. The owners of, or contributors to, Prysm are not responsible for determining the taxes that apply to ETH transactions.
PRYSM IS PROVIDED ON AN “AS-IS” BASIS AND MAY INCLUDE ERRORS, OMISSIONS, OR OTHER INACCURACIES. WITHOUT LIMITING ANYTHING SET FORTH ELSEWHERE IN THESE TERMS, OFFCHAIN LABS AND ITS CONTRIBUTORS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT PRYSM FOR ANY PURPOSE, AND HEREBY EXPRESSLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OR ANY OTHER IMPLIED WARRANTY UNDER THE UNIFORM COMPUTER INFORMATION TRANSACTIONS ACT AS ENACTED BY ANY STATE OR OTHER GOVERNMENTAL AUTHORITY. WE ALSO MAKE NO REPRESENTATIONS OR WARRANTIES THAT PRYSM WILL OPERATE ERROR-FREE, UNINTERRUPTED, OR IN A MANNER THAT WILL MEET YOUR REQUIREMENTS AND/OR NEEDS. THEREFORE, YOU ASSUME THE ENTIRE RISK REGARDING THE QUALITY AND/OR PERFORMANCE OF PRYSM AND ANY TRANSACTIONS ENTERED INTO THEREON.
### Miscellaneous
These Terms will be construed and enforced in accordance with the laws of the state of Illinois as applied to agreements entered into and completely performed in Illinois. You agree to the personal jurisdiction by and venue in Illinois and waive any objection to such jurisdiction or venue.
## Limitation of Liability
We reserve the right to revise these Terms, and your rights and obligations are at all times subject to the then-current Terms provided on Prysm. Your continued use of Prysm constitutes acceptance of such revised Terms.
IN NO EVENT WILL OFFCHAIN LABS OR ANY OF ITS AFFILIATES OR ITS OR ANY SUCH AFFILIATES' DIRECTORS, OFFICERS, EMPLOYEES, AGENTS, OR REPRESENTATIVES OR ANY CONTRIBUTORS (COLLECTIVELY, THE “OCL PARTIES”) BE LIABLE, WHETHER IN CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE, WHETHER ACTIVE, PASSIVE OR IMPUTED), PRODUCT LIABILITY, STRICT LIABILITY OR OTHER THEORY, BREACH OF STATUTORY DUTY OR OTHERWISE ARISING OUT OF, OR IN CONNECTION WITH, YOUR USE OF PRYSM, FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES (INCLUDING ANY LOSS OF PROFITS OR DATA, BUSINESS INTERRUPTION OR OTHER PECUNIARY LOSS, OR DAMAGE, LOSS OR OTHER COMPROMISE OF DATA, IN EACH CASE WHETHER DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL) ARISING OUT OF THE USE OF PRYSM, EVEN IF WE OR OTHER USERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. The foregoing limitations and disclaimers shall apply to the maximum extent permitted by Applicable Law, even if any remedy fails of its essential purpose. You acknowledge and agree that the limitations of liability afforded us hereunder constitute a material and actual inducement and condition to entering into these Terms, and are reasonable, fair and equitable in scope to protect our legitimate interests in light of the fact that we are not receiving consideration from you for providing Prysm.
These Terms constitute the entire agreement between you and Prysmatic Labs regarding use of Prysm and will supersede all prior agreements, whether written or oral. No usage of trade or other regular practice or method of dealing between the parties will be used to modify, interpret, supplement, or alter the terms of these Terms.
## Indemnification
To the maximum extent permitted by Applicable Law, you will defend, indemnify and hold each OCL Party harmless from and against any and all claims, actions, suits, investigations, or proceedings by any third party (including any party or purported party to or beneficiary or purported beneficiary of any transaction or other activity on Prysm), as well as any and all losses, liabilities, damages, costs, and expenses (including reasonable attorneys' fees and costs) arising out of, accruing from, or in any way related to (i) your breach of the terms of this Agreement, (ii) any transaction, or the failure to occur of any transaction on Prysm, and (iii) your negligence, fraud, or willful misconduct.
## Compliance with Laws
Your use of Prysm is subject to all applicable laws of any governmental authority, including, without limitation, federal, state and foreign securities laws, tax laws, tariff and trade laws, ordinances, judgments, decrees, injunctions, writs and orders or like actions of any governmental authority and rules, regulations, orders, interpretations, licenses, and permits of any federal, regional, state, county, municipal or other governmental authority (collectively, “Applicable Law”) and you agree to comply with all such Applicable Law in your use of Prysm. The users of Prysm are solely responsible for determining what, if any, taxes apply to their ETH transactions. The owners of, or contributors to, Prysm are not responsible for determining the taxes that apply to ETH transactions.
## Miscellaneous
These Terms will be governed by the laws of the State of Delaware without regard to its conflict of law provisions. With respect to any disputes or claims not subject to arbitration, as set forth in the OCL Terms of Use, you and Offchain Labs submit to the personal and exclusive jurisdiction of the state and federal courts located within New York, New York and waive any objection to such jurisdiction and venue. The failure of Offchain Labs to exercise or enforce any right or provision of these Terms will not constitute a waiver of such right or provision.
We reserve the right to revise these Terms, and your rights and obligations are at all times subject to the then-current Terms provided on Prysm. Your use of Prysm following any such revision to these Terms constitutes acceptance of such revised Terms.
These Terms constitute the entire agreement between you and Offchain Labs regarding use of Prysm and will supersede all prior agreements, whether written or oral. No usage of trade or other regular practice or method of dealing between the parties will be used to modify, interpret, supplement, or alter the terms of these Terms.
If any portion of these Terms is held invalid or unenforceable, such invalidity or unenforceability will not affect the other provisions of these Terms, which will remain in full force and effect, and the invalid or unenforceable portion will be given effect to the greatest extent possible. The failure of a party to require performance of any provision will not affect that party's right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself.
If any portion of these Terms is held invalid or unenforceable, such invalidity or unenforceability will not affect the other provisions of these Terms, which will remain in full force and effect, and the invalid or unenforceable portion will be given effect to the greatest extent possible. The failure of a party to require performance of any provision will not affect that party's right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself.

WORKSPACE
View File

@@ -27,7 +27,23 @@ http_archive(
load("@hermetic_cc_toolchain//toolchain:defs.bzl", zig_toolchains = "toolchains")
zig_toolchains()
# Temporarily use a nightly build until 0.12.0 is released.
# See: https://github.com/prysmaticlabs/prysm/issues/13130
zig_toolchains(
host_platform_sha256 = {
"linux-aarch64": "45afb8e32adde825165f4f293fcea9ecea503f7f9ec0e9bf4435afe70e67fb70",
"linux-x86_64": "f136c6a8a0f6adcb057d73615fbcd6f88281b3593f7008d5f7ed514ff925c02e",
"macos-aarch64": "05d995853c05243151deff47b60bdc2674f1e794a939eaeca0f42312da031cee",
"macos-x86_64": "721754ba5a50f31e8a1f0e1a74cace26f8246576878ac4a8591b0ee7b6db1fc1",
"windows-x86_64": "93f5248b2ea8c5ee8175e15b1384e133edc1cd49870b3ea259062a2e04164343",
},
url_formats = [
"https://ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
"https://mirror.bazel.build/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
"https://prysmaticlabs.com/mirror/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
],
version = "0.12.0-dev.1349+fa022d1ec",
)
# Register zig sdk toolchains with support for Ubuntu 20.04 (Focal Fossa) which has an EOL date of April, 2025.
# For ubuntu glibc support, see https://launchpad.net/ubuntu/+source/glibc
@@ -80,11 +96,29 @@ http_archive(
)
http_archive(
name = "io_bazel_rules_docker",
sha256 = "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
name = "rules_distroless",
sha256 = "e64f06e452cd153aeab81f752ccf4642955b3af319e64f7bc7a7c9252f76b10e",
strip_prefix = "rules_distroless-f5e678217b57ce3ad2f1c0204bd4e9d416255773",
url = "https://github.com/GoogleContainerTools/rules_distroless/archive/f5e678217b57ce3ad2f1c0204bd4e9d416255773.tar.gz",
)
load("@rules_distroless//distroless:dependencies.bzl", "rules_distroless_dependencies")
rules_distroless_dependencies()
http_archive(
name = "distroless",
integrity = "sha256-Cf00kUp1NyXA3LzbdyYy4Kda27wbkB8+A9MliTxq4jE=",
strip_prefix = "distroless-9dc924b9fe812eec2fa0061824dcad39eb09d0d6",
url = "https://github.com/GoogleContainerTools/distroless/archive/9dc924b9fe812eec2fa0061824dcad39eb09d0d6.tar.gz", # 2024-01-24
)
load("@aspect_bazel_lib//lib:repositories.bzl", "aspect_bazel_lib_dependencies", "aspect_bazel_lib_register_toolchains")
aspect_bazel_lib_dependencies()
aspect_bazel_lib_register_toolchains()
http_archive(
name = "rules_oci",
sha256 = "c71c25ed333a4909d2dd77e0b16c39e9912525a98c7fa85144282be8d04ef54c",
@@ -110,13 +144,17 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "91585017debb61982f7054c9688857a2ad1fd823fc3f9cb05048b0025c47d023",
sha256 = "d6ab6b57e48c09523e93050f13698f708428cfd5e619252e369d377af6597707",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.42.0/rules_go-v0.42.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.42.0/rules_go-v0.42.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.43.0/rules_go-v0.43.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.43.0/rules_go-v0.43.0.zip",
],
)
load("//:distroless_deps.bzl", "distroless_deps")
distroless_deps()
# Override default import in rules_go with special patch until
# https://github.com/gogo/protobuf/pull/582 is merged.
git_repository(
@@ -132,67 +170,16 @@ git_repository(
# gazelle args: -go_prefix github.com/gogo/protobuf -proto legacy
)
load(
"@io_bazel_rules_docker//repositories:repositories.bzl",
container_repositories = "repositories",
)
container_repositories()
load(
"@io_bazel_rules_docker//container:container.bzl",
"container_pull",
)
# Pulled gcr.io/distroless/cc-debian11:latest on 2022-02-23
container_pull(
name = "cc_image_base_amd64",
digest = "sha256:2a0daf90a7deb78465bfca3ef2eee6e91ce0a5706059f05d79d799a51d339523",
registry = "gcr.io",
repository = "distroless/cc-debian11",
)
# Pulled gcr.io/distroless/cc-debian11:debug on 2022-02-23
container_pull(
name = "cc_debug_image_base_amd64",
digest = "sha256:7bd596f5f200588f13a69c268eea6ce428b222b67cd7428d6a7fef95e75c052a",
registry = "gcr.io",
repository = "distroless/cc-debian11",
)
# Pulled from gcr.io/distroless/base-debian11:latest on 2022-02-23
container_pull(
name = "go_image_base_amd64",
digest = "sha256:34e682800774ecbd0954b1663d90238505f1ba5543692dbc75feef7dd4839e90",
registry = "gcr.io",
repository = "distroless/base-debian11",
)
# Pulled from gcr.io/distroless/base-debian11:debug on 2022-02-23
container_pull(
name = "go_debug_image_base_amd64",
digest = "sha256:0f503c6bfd207793bc416f20a35bf6b75d769a903c48f180ad73f60f7b60d7bd",
registry = "gcr.io",
repository = "distroless/base-debian11",
)
container_pull(
name = "alpine_cc_linux_amd64",
digest = "sha256:752aa0c9a88461ffc50c5267bb7497ef03a303e38b2c8f7f2ded9bebe5f1f00e",
registry = "index.docker.io",
repository = "pinglamb/alpine-glibc",
)
load("@rules_oci//oci:pull.bzl", "oci_pull")
# A multi-arch base image
oci_pull(
name = "linux_debian11_multiarch_base", # Debian bullseye
digest = "sha256:9b8e0854865dcaf49470b4ec305df45957020fbcf17b71eeb50ffd3bc5bf885d", # 2023-05-17
digest = "sha256:b82f113425c5b5c714151aaacd8039bc141821cdcd3c65202d42bdf9c43ae60b", # 2023-12-12
image = "gcr.io/distroless/cc-debian11",
platforms = [
"linux/amd64",
"linux/arm64",
"linux/arm64/v8",
],
reproducible = True,
)
@@ -206,7 +193,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.20.9",
go_version = "1.21.6",
nogo = "@//:nogo",
)
@@ -228,8 +215,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "91434d5fd5e1c6eb7b0174fed2afe25e09bddf00e1e4c431db931b2cee4e7773",
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
sha256 = "516d551cfb3e50e4ac2f42db0992f4ceb573a7cb1616d727a725c8161485329f",
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/refs/tags/v5.3.0.tar.gz",
)
http_archive(
@@ -247,9 +234,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_test_version = "v1.4.0-beta.2-hotfix"
consensus_spec_version = "v1.4.0-beta.2"
consensus_spec_version = "v1.4.0-beta.5"
bls_test_version = "v0.1.1"
@@ -265,8 +250,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "99770a001189f66204a4ef79161c8002bcbbcbd8236f1c6479bd5b83a3c68d42",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_test_version,
sha256 = "9017ffff84d64a7c4c9e6ff9f421f9479f71d3b463b738f54e02158dbb4f50f0",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -281,8 +266,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "56763f6492ee137108271007d62feef60d8e3f1698e53dee4bc4b07e55f7326b",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_test_version,
sha256 = "f08711682553fe7c9362f1400ed8c56b2fa9576df08581fcad4c508ba8ad4788",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -297,8 +282,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "bc1cac1a991cdc7426efea14385dcf215df85ed3f0572b824ad6a1d7ca0c89ad",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_test_version,
sha256 = "7ea3189e3879f2ac62467cbf2945c00b6c94d30cdefb2d645c630b1018c50e10",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -312,7 +297,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "c5898001aaab2a5bb38a39ff9d17a52f1f9befcc26e63752cbf556040f0c884e",
sha256 = "4119992a2efc79e5cb2bdc07ed08c0b1fa32332cbd0d88e6467f34938df97026",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -348,6 +333,22 @@ filegroup(
url = "https://github.com/eth-clients/eth2-networks/archive/7b4897888cebef23801540236f73123e21774954.tar.gz",
)
http_archive(
name = "goerli_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"prater/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
sha256 = "43fc0f55ddff7b511713e2de07aa22846a67432df997296fb4fc09cd8ed1dcdb",
strip_prefix = "goerli-6522ac6684693740cd4ddcc2a0662e03702aa4a1",
url = "https://github.com/eth-clients/goerli/archive/6522ac6684693740cd4ddcc2a0662e03702aa4a1.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """
@@ -359,17 +360,17 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9f66d8d5644982d3d0d2e3d2b9ebe77a5f96638a5d7fcd715599c32818195cb3",
strip_prefix = "holesky-ea39b9006210848e13f28d92e12a30548cecd41d",
url = "https://github.com/eth-clients/holesky/archive/ea39b9006210848e13f28d92e12a30548cecd41d.tar.gz", # 2023-09-21
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
)
http_archive(
name = "com_google_protobuf",
sha256 = "4e176116949be52b0408dfd24f8925d1eb674a781ae242a75296b17a1c721395",
strip_prefix = "protobuf-23.3",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",
strip_prefix = "protobuf-25.1",
urls = [
"https://github.com/protocolbuffers/protobuf/archive/v23.3.tar.gz",
"https://github.com/protocolbuffers/protobuf/archive/v25.1.tar.gz",
],
)
@@ -403,24 +404,6 @@ load("@prysm//testing/endtoend:deps.bzl", "e2e_deps")
e2e_deps()
load(
"@io_bazel_rules_docker//go:image.bzl",
_go_image_repos = "repositories",
)
# Golang images
# This is using gcr.io/distroless/base
_go_image_repos()
# CC images
# This is using gcr.io/distroless/base
load(
"@io_bazel_rules_docker//cc:image.bzl",
_cc_image_repos = "repositories",
)
_cc_image_repos()
load("@com_github_atlassian_bazel_tools//gometalinter:deps.bzl", "gometalinter_dependencies")
gometalinter_dependencies()

View File

@@ -2,7 +2,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["headers.go"],
srcs = [
"constants.go",
"headers.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api",
visibility = ["//visibility:public"],
)

View File

@@ -11,10 +11,9 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/apimiddleware:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -22,7 +21,6 @@ go_library(
"//encoding/ssz/detect:go_default_library",
"//io/file:go_default_library",
"//network/forks:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",

View File

@@ -13,17 +13,14 @@ import (
"strconv"
"text/template"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/network/forks"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
)
@@ -32,7 +29,7 @@ const (
getSignedBlockPath = "/eth/v2/beacon/blocks"
getBlockRootPath = "/eth/v1/beacon/blocks/{{.Id}}/root"
getForkForStatePath = "/eth/v1/beacon/states/{{.Id}}/fork"
getWeakSubjectivityPath = "/eth/v1/beacon/weak_subjectivity"
getWeakSubjectivityPath = "/prysm/v1/beacon/weak_subjectivity"
getForkSchedulePath = "/eth/v1/config/fork_schedule"
getConfigSpecPath = "/eth/v1/config/spec"
getStatePath = "/eth/v2/debug/beacon/states"
@@ -150,8 +147,8 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
if err != nil {
return nil, errors.Wrapf(err, "error requesting fork by state id = %s", stateId)
}
fr := &shared.Fork{}
dataWrapper := &struct{ Data *shared.Fork }{Data: fr}
fr := &structs.Fork{}
dataWrapper := &struct{ Data *structs.Fork }{Data: fr}
err = json.Unmarshal(body, dataWrapper)
if err != nil {
return nil, errors.Wrap(err, "error decoding json response in GetFork")
@@ -179,12 +176,12 @@ func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, er
}
// GetConfigSpec retrieves the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*v1.SpecResponse, error) {
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
fsr := &v1.SpecResponse{}
fsr := &structs.GetSpecResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
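For readers following the hunk above: the switch from v1.SpecResponse to structs.GetSpecResponse keeps the same request-and-decode pattern against the /eth/v1/config/spec path defined earlier in this file. Below is a minimal, self-contained sketch of that pattern; the specResponse type, the SECONDS_PER_SLOT lookup, and the localhost URL are illustrative assumptions, not Prysm's actual structs or configuration.

// Illustrative sketch only. specResponse mirrors the {"data": ...} wrapper used by
// the decode helpers in this file; it is not Prysm's structs.GetSpecResponse.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type specResponse struct {
	Data map[string]interface{} `json:"data"`
}

// fetchSpec GETs the beacon node's config spec and decodes the JSON wrapper.
func fetchSpec(baseURL string) (*specResponse, error) {
	resp, err := http.Get(baseURL + "/eth/v1/config/spec")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	out := &specResponse{}
	if err := json.Unmarshal(body, out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	spec, err := fetchSpec("http://localhost:3500") // hypothetical local beacon node
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Println("SECONDS_PER_SLOT:", spec.Data["SECONDS_PER_SLOT"])
}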
@@ -259,16 +256,16 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
if err != nil {
return nil, err
}
v := &apimiddleware.WeakSubjectivityResponse{}
v := &structs.GetWeakSubjectivityResponse{}
err = json.Unmarshal(body, v)
if err != nil {
return nil, err
}
epoch, err := strconv.ParseUint(v.Data.Checkpoint.Epoch, 10, 64)
epoch, err := strconv.ParseUint(v.Data.WsCheckpoint.Epoch, 10, 64)
if err != nil {
return nil, err
}
blockRoot, err := hexutil.Decode(v.Data.Checkpoint.Root)
blockRoot, err := hexutil.Decode(v.Data.WsCheckpoint.Root)
if err != nil {
return nil, err
}
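The hunk above renames the checkpoint field (Checkpoint becomes WsCheckpoint) but keeps the same parsing: the epoch arrives as a decimal string and the root as 0x-prefixed hex. The sketch below shows that parsing using only the standard library; the JSON field names are hypothetical and the hex handling stands in for go-ethereum's hexutil.Decode used in the real code.

// Illustrative sketch: parse a checkpoint's epoch (decimal string) and root (hex string).
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
	"strconv"
	"strings"
)

type checkpoint struct {
	Epoch string `json:"epoch"`
	Root  string `json:"root"`
}

func parseCheckpoint(raw []byte) (uint64, []byte, error) {
	cp := &checkpoint{}
	if err := json.Unmarshal(raw, cp); err != nil {
		return 0, nil, err
	}
	epoch, err := strconv.ParseUint(cp.Epoch, 10, 64) // epoch is serialized as a decimal string
	if err != nil {
		return 0, nil, err
	}
	root, err := hex.DecodeString(strings.TrimPrefix(cp.Root, "0x")) // root is 0x-prefixed hex
	if err != nil {
		return 0, nil, err
	}
	return epoch, root, nil
}

func main() {
	epoch, root, err := parseCheckpoint([]byte(`{"epoch":"231424","root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}`))
	fmt.Println(epoch, len(root), err) // expect: 231424 32 <nil>
}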
@@ -285,7 +282,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
// SubmitChangeBLStoExecution calls a beacon API endpoint to set the withdrawal addresses based on the given signed messages.
// If the API responds with something other than OK, there will be failure messages associated with the corresponding request message.
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shared.SignedBLSToExecutionChange) error {
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*structs.SignedBLSToExecutionChange) error {
u := c.BaseURL().ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
body, err := json.Marshal(request)
if err != nil {
@@ -306,7 +303,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shar
if resp.StatusCode != http.StatusOK {
decoder := json.NewDecoder(resp.Body)
decoder.DisallowUnknownFields()
errorJson := &apimiddleware.IndexedVerificationFailureErrorJson{}
errorJson := &server.IndexedVerificationFailureError{}
if err := decoder.Decode(errorJson); err != nil {
return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
}
@@ -324,12 +321,12 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shar
// GetBLStoExecutionChanges gets all the set withdrawal messages in the node's operation pool.
// Returns a struct representation of json response.
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExecutionChangesPoolResponse, error) {
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToExecutionChangesPoolResponse, error) {
body, err := c.Get(ctx, changeBLStoExecutionPath)
if err != nil {
return nil, err
}
poolResponse := &beacon.BLSToExecutionChangesPoolResponse{}
poolResponse := &structs.BLSToExecutionChangesPoolResponse{}
err = json.Unmarshal(body, poolResponse)
if err != nil {
return nil, err
@@ -338,7 +335,7 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExe
}
type forkScheduleResponse struct {
Data []shared.Fork
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {

View File

@@ -11,7 +11,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -20,8 +20,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
@@ -42,7 +40,7 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -6,7 +6,6 @@ import (
consensus_types "github.com/prysmaticlabs/prysm/v4/consensus-types"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
)
@@ -22,7 +21,7 @@ type SignedBid interface {
// Bid is an interface describing the method set of a builder bid.
type Bid interface {
Header() (interfaces.ExecutionData, error)
BlindedBlobsBundle() (*enginev1.BlindedBlobsBundle, error)
BlobKzgCommitments() ([][]byte, error)
Value() []byte
Pubkey() []byte
Version() int
@@ -115,9 +114,9 @@ func (b builderBid) Header() (interfaces.ExecutionData, error) {
return blocks.WrappedExecutionPayloadHeader(b.p.Header)
}
// BlindedBlobsBundle --
func (b builderBid) BlindedBlobsBundle() (*enginev1.BlindedBlobsBundle, error) {
return nil, errors.New("blinded blobs bundle not available before Deneb")
// BlobKzgCommitments --
func (b builderBid) BlobKzgCommitments() ([][]byte, error) {
return [][]byte{}, errors.New("blob kzg commitments not available before Deneb")
}
// Version --
@@ -166,12 +165,12 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlindedBlobsBundle --
func (b builderBidCapella) BlindedBlobsBundle() (*enginev1.BlindedBlobsBundle, error) {
return nil, errors.New("blinded blobs bundle not available before Deneb")
// BlobKzgCommitments --
func (b builderBidCapella) BlobKzgCommitments() ([][]byte, error) {
return [][]byte{}, errors.New("blob kzg commitments not available before Deneb")
}
// Version --
@@ -250,12 +249,12 @@ func (b builderBidDeneb) HashTreeRootWith(hh *ssz.Hasher) error {
// Header --
func (b builderBidDeneb) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlindedBlobsBundle --
func (b builderBidDeneb) BlindedBlobsBundle() (*enginev1.BlindedBlobsBundle, error) {
return b.p.BlindedBlobsBundle, nil
// BlobKzgCommitments --
func (b builderBidDeneb) BlobKzgCommitments() ([][]byte, error) {
return b.p.BlobKzgCommitments, nil
}
type signedBuilderBidDeneb struct {
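With BlindedBlobsBundle replaced by BlobKzgCommitments, callers now receive a plain [][]byte. The sketch below shows the kind of sanity check the updated tests perform (48-byte commitments, bounded count). The max-blobs constant here is an assumption inferred from the "too many blob commitments: 7" assertion in the test further down, not a value read from Prysm's config package.

// Illustrative only: validate commitments returned by a Bid.BlobKzgCommitments-style accessor.
package main

import "fmt"

const (
	bytesPerCommitment = 48 // KZG commitments are 48-byte compressed BLS12-381 G1 points
	maxBlobsPerBlock   = 6  // assumed Deneb mainnet preset at the time of this change
)

func validateCommitments(commitments [][]byte) error {
	if len(commitments) > maxBlobsPerBlock {
		return fmt.Errorf("too many blob commitments: %d", len(commitments))
	}
	for i, c := range commitments {
		if len(c) != bytesPerCommitment {
			return fmt.Errorf("commitment %d has length %d, want %d", i, len(c), bytesPerCommitment)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateCommitments([][]byte{make([]byte, 48)})) // <nil>
	fmt.Println(validateCommitments(make([][]byte, 7)))          // too many blob commitments: 7
}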

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"math/big"
"net"
"net/http"
"net/url"
@@ -13,14 +14,12 @@ import (
"text/template"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/network/authorization"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -89,7 +88,7 @@ type BuilderClient interface {
NodeURL() string
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock, blobs []*ethpb.SignedBlindedBlobSidecar) (interfaces.ExecutionData, *v1.BlobsBundle, error)
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
Status(ctx context.Context) error
}
@@ -104,8 +103,7 @@ type Client struct {
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
endpoint := covertEndPoint(host)
u, err := urlForHost(endpoint.Url)
u, err := urlForHost(host)
if err != nil {
return nil, err
}
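The hunk above drops the network.Endpoint wrapper and hands the host string straight to urlForHost, which (per the next hunk) accepts either a full URL or a bare host:port and assumes plain http for the latter. The following is a standard-library sketch of that fallback, hedged as an approximation rather than Prysm's exact function.

// Illustrative sketch of the "URL or host:port" fallback described above.
package main

import (
	"fmt"
	"net"
	"net/url"
)

func urlForHost(h string) (*url.URL, error) {
	// First try to parse the value as a full URL (being permissive).
	if u, err := url.Parse(h); err == nil && u.Host != "" {
		return u, nil
	}
	// Otherwise treat it as host:port and assume an http endpoint.
	host, port, err := net.SplitHostPort(h)
	if err != nil {
		return nil, fmt.Errorf("%q is neither a URL nor host:port", h)
	}
	return &url.URL{Scheme: "http", Host: net.JoinHostPort(host, port)}, nil
}

func main() {
	for _, h := range []string{"http://localhost:8550", "localhost:8550"} {
		u, err := urlForHost(h)
		fmt.Println(h, "->", u, err)
	}
}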
@@ -121,8 +119,7 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
u, err := url.Parse(h)
if err == nil && u.Host != "" {
if u, err := url.Parse(h); err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
@@ -140,7 +137,7 @@ func (c *Client) NodeURL() string {
type reqOption func(*http.Request)
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder/types.go.
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, err error) {
ctx, span := trace.StartSpan(ctx, "builder.client.do")
defer func() {
@@ -270,13 +267,9 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
tracing.AnnotateError(span, err)
return err
}
vs := make([]*shared.SignedValidatorRegistration, len(svr))
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
svrJson, err := shared.SignedValidatorRegistrationFromConsensus(svr[i])
if err != nil {
return errors.Wrap(err, fmt.Sprintf("failed to encode to SignedValidatorRegistration at index %d", i))
}
vs[i] = svrJson
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)
if err != nil {
@@ -291,7 +284,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock, blobs []*ethpb.SignedBlindedBlobSidecar) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if !sb.IsBlinded() {
return nil, nil, errNotBlinded
}
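As the comment above notes, SubmitBlindedBlock now takes only the signed blinded block; blob sidecars are no longer posted alongside it. The general request shape is a JSON POST with the fork name carried in an Eth-Consensus-Version header, which the updated Deneb test later asserts. A hedged sketch follows; the /eth/v1/builder/blinded_blocks path and the generic payload type are assumptions standing in for the test's postBlindedBeaconBlockPath constant and Prysm's typed structs.

// Illustrative only: the general shape of posting a signed blinded block to a builder.
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
)

func postBlindedBlock(ctx context.Context, baseURL, version string, block any) error {
	body, err := json.Marshal(block)
	if err != nil {
		return err
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		baseURL+"/eth/v1/builder/blinded_blocks", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Eth-Consensus-Version", version) // e.g. "deneb"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d", resp.StatusCode)
	}
	return nil
}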
@@ -301,7 +294,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockBellatrix to json marshalable type")
}
@@ -338,7 +331,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockCapella to json marshalable type")
}
@@ -365,7 +358,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadCapella(p, 0)
payload, err := blocks.WrappedExecutionPayloadCapella(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
@@ -375,9 +368,9 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockContentsDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockAndBlobsDeneb{SignedBlindedBlock: psb, SignedBlindedBlobSidecars: blobs})
b, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockContentsDeneb to json marshalable type")
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockDeneb to json marshalable type")
}
body, err := json.Marshal(b)
if err != nil {
@@ -402,7 +395,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadDeneb(p, 0)
payload, err := blocks.WrappedExecutionPayloadDeneb(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
@@ -431,38 +424,29 @@ func non200Err(response *http.Response) error {
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case 204:
case http.StatusNoContent:
log.WithError(ErrNoContent).Debug(msg)
return ErrNoContent
case 400:
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
case http.StatusBadRequest:
log.WithError(ErrBadRequest).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrBadRequest, errMessage.Message)
case 404:
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
case http.StatusNotFound:
log.WithError(ErrNotFound).Debug(msg)
return errors.Wrap(ErrNotFound, errMessage.Message)
case 500:
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotFound, errMessage.Message)
case http.StatusInternalServerError:
log.WithError(ErrNotOK).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotOK, errMessage.Message)
default:
log.WithError(ErrNotOK).Debug(msg)
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))
}
}
func covertEndPoint(ep string) network.Endpoint {
return network.Endpoint{
Url: ep,
Auth: network.AuthorizationData{ // Auth is not used for builder.
Method: authorization.None,
Value: "",
}}
}

View File

@@ -12,10 +12,8 @@ import (
"strconv"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -270,28 +268,20 @@ func TestClient_GetHeader(t *testing.T) {
bidValue := bytesutil.ReverseByteOrder(bid.Value())
require.DeepEqual(t, bidValue, value.Bytes())
require.DeepEqual(t, big.NewInt(0).SetBytes(bidValue), value.Int)
bundle, err := bid.BlindedBlobsBundle()
kcgCommitments, err := bid.BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, len(bundle.BlobRoots) <= fieldparams.MaxBlobsPerBlock && len(bundle.BlobRoots) > 0, true)
for i := range bundle.BlobRoots {
require.Equal(t, len(bundle.BlobRoots[i]) == fieldparams.RootLength, true)
}
require.Equal(t, len(bundle.KzgCommitments) > 0, true)
for i := range bundle.KzgCommitments {
require.Equal(t, len(bundle.KzgCommitments[i]) == 48, true)
}
require.Equal(t, len(bundle.Proofs) > 0, true)
for i := range bundle.Proofs {
require.Equal(t, len(bundle.Proofs[i]) == 48, true)
require.Equal(t, len(kcgCommitments) > 0, true)
for i := range kcgCommitments {
require.Equal(t, len(kcgCommitments[i]) == 48, true)
}
})
t.Run("deneb, no bundle", func(t *testing.T) {
t.Run("deneb, too many kzg commitments", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseDenebNoBundle)),
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseDenebTooManyBlobs)),
Request: r.Clone(ctx),
}, nil
}),
@@ -300,27 +290,9 @@ func TestClient_GetHeader(t *testing.T) {
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
h, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.NoError(t, err)
expectedWithdrawalsRoot := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
bid, err := h.Message()
require.NoError(t, err)
bidHeader, err := bid.Header()
require.NoError(t, err)
withdrawalsRoot, err := bidHeader.WithdrawalsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedWithdrawalsRoot, withdrawalsRoot))
value, err := stringToUint256("652312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
require.Equal(t, fmt.Sprintf("%#x", value.SSZBytes()), fmt.Sprintf("%#x", bid.Value()))
bidValue := bytesutil.ReverseByteOrder(bid.Value())
require.DeepEqual(t, bidValue, value.Bytes())
require.DeepEqual(t, big.NewInt(0).SetBytes(bidValue), value.Int)
bundle, err := bid.BlindedBlobsBundle()
require.NoError(t, err)
require.Equal(t, (*v1.BlindedBlobsBundle)(nil), bundle)
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorContains(t, "could not extract proto message from header: too many blob commitments: 7", err)
})
t.Run("unsupported version", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
@@ -362,7 +334,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
ep, _, err := c.SubmitBlindedBlock(ctx, sbbb, nil)
ep, _, err := c.SubmitBlindedBlock(ctx, sbbb)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"), ep.ParentHash()))
bfpg, err := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
@@ -388,7 +360,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
}
sbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockCapella(t))
require.NoError(t, err)
ep, _, err := c.SubmitBlindedBlock(ctx, sbb, nil)
ep, _, err := c.SubmitBlindedBlock(ctx, sbb)
require.NoError(t, err)
withdrawals, err := ep.Withdrawals()
require.NoError(t, err)
@@ -399,18 +371,17 @@ func TestSubmitBlindedBlock(t *testing.T) {
assert.Equal(t, uint64(1), withdrawals[0].Amount)
})
t.Run("deneb", func(t *testing.T) {
test := testSignedBlindedBeaconBlockAndBlobsDeneb(t)
test := testSignedBlindedBeaconBlockDeneb(t)
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
var req shared.SignedBlindedBeaconBlockContentsDeneb
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
block, err := req.SignedBlindedBlock.ToConsensus()
block, err := req.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, block, test.SignedBlindedBlock)
require.DeepEqual(t, block, test)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadDeneb)),
@@ -423,10 +394,10 @@ func TestSubmitBlindedBlock(t *testing.T) {
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbb, err := blocks.NewSignedBeaconBlock(test.SignedBlindedBlock)
sbb, err := blocks.NewSignedBeaconBlock(test)
require.NoError(t, err)
ep, blobBundle, err := c.SubmitBlindedBlock(ctx, sbb, test.SignedBlindedBlobSidecars)
ep, blobBundle, err := c.SubmitBlindedBlock(ctx, sbb)
require.NoError(t, err)
withdrawals, err := ep.Withdrawals()
require.NoError(t, err)
@@ -436,9 +407,6 @@ func TestSubmitBlindedBlock(t *testing.T) {
assert.DeepEqual(t, ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943"), withdrawals[0].Address)
assert.Equal(t, uint64(1), withdrawals[0].Amount)
require.NotNil(t, blobBundle)
require.Equal(t, hexutil.Encode(blobBundle.Blobs[0]), hexutil.Encode(make([]byte, fieldparams.BlobLength)))
require.Equal(t, hexutil.Encode(blobBundle.KzgCommitments[0]), "0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f")
require.Equal(t, hexutil.Encode(blobBundle.Proofs[0]), "0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a")
})
t.Run("mismatched versions, expected bellatrix got capella", func(t *testing.T) {
hc := &http.Client{
@@ -457,13 +425,13 @@ func TestSubmitBlindedBlock(t *testing.T) {
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
_, _, err = c.SubmitBlindedBlock(ctx, sbbb, nil)
_, _, err = c.SubmitBlindedBlock(ctx, sbbb)
require.ErrorContains(t, "not a bellatrix payload", err)
})
t.Run("not blinded", func(t *testing.T) {
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{}}})
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{ExecutionPayload: &v1.ExecutionPayload{}}}})
require.NoError(t, err)
_, _, err = (&Client{}).SubmitBlindedBlock(ctx, sbb, nil)
_, _, err = (&Client{}).SubmitBlindedBlock(ctx, sbb)
require.ErrorIs(t, err, errNotBlinded)
})
}
@@ -753,91 +721,70 @@ func testSignedBlindedBeaconBlockCapella(t *testing.T) *eth.SignedBlindedBeaconB
}
}
func testSignedBlindedBeaconBlockAndBlobsDeneb(t *testing.T) *eth.SignedBlindedBeaconBlockAndBlobsDeneb {
basebytes, err := shared.Uint256ToSSZBytes("14074904626401341155369551180448584754667373453244490859944217516317499064576")
func testSignedBlindedBeaconBlockDeneb(t *testing.T) *eth.SignedBlindedBeaconBlockDeneb {
basebytes, err := bytesutil.Uint256ToSSZBytes("14074904626401341155369551180448584754667373453244490859944217516317499064576")
if err != nil {
log.Error(err)
}
return &eth.SignedBlindedBeaconBlockAndBlobsDeneb{
SignedBlindedBlock: &eth.SignedBlindedBeaconBlockDeneb{
Message: &eth.BlindedBeaconBlockDeneb{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyDeneb{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
return &eth.SignedBlindedBeaconBlockDeneb{
Message: &eth.BlindedBeaconBlockDeneb{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyDeneb{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
@@ -854,68 +801,72 @@ func testSignedBlindedBeaconBlockAndBlobsDeneb(t *testing.T) *eth.SignedBlindedB
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 96),
SyncCommitteeBits: bitfield.Bitvector512(ezDecode(t, "0x6451e9f951ebf05edc01de67e593484b672877054f055903ff0df1a1a945cf30ca26bb4d4b154f94a1bc776bcf5d0efb3603e1f9b8ee2499ccdcfe2a18cef458")),
},
ExecutionPayloadHeader: &v1.ExecutionPayloadHeaderDeneb{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: basebytes,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
WithdrawalsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlobGasUsed: 1,
ExcessBlobGas: 2,
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
SignedBlindedBlobSidecars: []*eth.SignedBlindedBlobSidecar{
{
Message: &eth.BlindedBlobSidecar{
BlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Index: 0,
Slot: 1,
BlockParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ProposerIndex: 1,
BlobRoot: ezDecode(t, "0x24564723180fcb3d994104538d351c8dcbde12d541676bb736cf678018ca4739"),
KzgCommitment: ezDecode(t, "0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f"),
KzgProof: ezDecode(t, "0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a"),
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 96),
SyncCommitteeBits: ezDecode(t, "0x6451e9f951ebf05edc01de67e593484b672877054f055903ff0df1a1a945cf30ca26bb4d4b154f94a1bc776bcf5d0efb3603e1f9b8ee2499ccdcfe2a18cef458"),
},
ExecutionPayloadHeader: &v1.ExecutionPayloadHeaderDeneb{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: basebytes,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
WithdrawalsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlobGasUsed: 1,
ExcessBlobGas: 2,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}


@@ -41,7 +41,7 @@ func (m MockClient) RegisterValidator(_ context.Context, svr []*ethpb.SignedVali
}
// SubmitBlindedBlock --
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock, _ []*ethpb.SignedBlindedBlobSidecar) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
return nil, nil, nil
}


@@ -117,7 +117,7 @@ type VersionResponse struct {
Version string `json:"version"`
}
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
type ExecHeaderResponse struct {
Version string `json:"version"`
Data struct {
@@ -357,6 +357,45 @@ func FromProtoCapella(payload *v1.ExecutionPayloadCapella) (ExecutionPayloadCape
}, nil
}
func FromProtoDeneb(payload *v1.ExecutionPayloadDeneb) (ExecutionPayloadDeneb, error) {
bFee, err := sszBytesToUint256(payload.BaseFeePerGas)
if err != nil {
return ExecutionPayloadDeneb{}, err
}
txs := make([]hexutil.Bytes, len(payload.Transactions))
for i := range payload.Transactions {
txs[i] = bytesutil.SafeCopyBytes(payload.Transactions[i])
}
withdrawals := make([]Withdrawal, len(payload.Withdrawals))
for i, w := range payload.Withdrawals {
withdrawals[i] = Withdrawal{
Index: Uint256{Int: big.NewInt(0).SetUint64(w.Index)},
ValidatorIndex: Uint256{Int: big.NewInt(0).SetUint64(uint64(w.ValidatorIndex))},
Address: bytesutil.SafeCopyBytes(w.Address),
Amount: Uint256{Int: big.NewInt(0).SetUint64(w.Amount)},
}
}
return ExecutionPayloadDeneb{
ParentHash: bytesutil.SafeCopyBytes(payload.ParentHash),
FeeRecipient: bytesutil.SafeCopyBytes(payload.FeeRecipient),
StateRoot: bytesutil.SafeCopyBytes(payload.StateRoot),
ReceiptsRoot: bytesutil.SafeCopyBytes(payload.ReceiptsRoot),
LogsBloom: bytesutil.SafeCopyBytes(payload.LogsBloom),
PrevRandao: bytesutil.SafeCopyBytes(payload.PrevRandao),
BlockNumber: Uint64String(payload.BlockNumber),
GasLimit: Uint64String(payload.GasLimit),
GasUsed: Uint64String(payload.GasUsed),
Timestamp: Uint64String(payload.Timestamp),
ExtraData: bytesutil.SafeCopyBytes(payload.ExtraData),
BaseFeePerGas: bFee,
BlockHash: bytesutil.SafeCopyBytes(payload.BlockHash),
Transactions: txs,
Withdrawals: withdrawals,
BlobGasUsed: Uint64String(payload.BlobGasUsed),
ExcessBlobGas: Uint64String(payload.ExcessBlobGas),
}, nil
}
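As a hedged aside, here is a minimal sketch of how the new FromProtoDeneb conversion might be exercised. The helper name is hypothetical and it assumes the same package and imports as the surrounding file:
// sketchFromProtoDeneb converts a zero-valued engine proto payload into the
// builder JSON representation. Hypothetical helper, not part of this change.
func sketchFromProtoDeneb() (ExecutionPayloadDeneb, error) {
	payload := &v1.ExecutionPayloadDeneb{
		ParentHash:   make([]byte, 32),
		FeeRecipient: make([]byte, 20),
		StateRoot:    make([]byte, 32),
		ReceiptsRoot: make([]byte, 32),
		LogsBloom:    make([]byte, 256),
		PrevRandao:   make([]byte, 32),
		ExtraData:    []byte{},
		// BaseFeePerGas is a 32-byte little-endian value, as expected by sszBytesToUint256.
		BaseFeePerGas: make([]byte, 32),
		BlockHash:     make([]byte, 32),
	}
	return FromProtoDeneb(payload)
}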
// ExecHeaderResponseCapella is the response of builder API /eth/v1/builder/header/{slot}/{parent_hash}/{pubkey} for Capella.
type ExecHeaderResponseCapella struct {
Data struct {
@@ -869,16 +908,19 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
if err != nil {
return nil, err
}
var bundle *v1.BlindedBlobsBundle
if bb.BlindedBlobsBundle != nil {
bundle, err = bb.BlindedBlobsBundle.ToProto()
if err != nil {
return nil, err
if len(bb.BlobKzgCommitments) > fieldparams.MaxBlobsPerBlock {
return nil, fmt.Errorf("too many blob commitments: %d", len(bb.BlobKzgCommitments))
}
kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))
for i, commit := range bb.BlobKzgCommitments {
if len(commit) != fieldparams.BLSPubkeyLength {
return nil, fmt.Errorf("commitment length %d is not %d", len(commit), fieldparams.BLSPubkeyLength)
}
kzgCommitments[i] = bytesutil.SafeCopyBytes(commit)
}
return &eth.BuilderBidDeneb{
Header: header,
BlindedBlobsBundle: bundle,
BlobKzgCommitments: kzgCommitments,
Value: bytesutil.SafeCopyBytes(bb.Value.SSZBytes()),
Pubkey: bytesutil.SafeCopyBytes(bb.Pubkey),
}, nil
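A hedged sketch of driving the updated ToProto with the new blob_kzg_commitments field follows; the helper and its inputs are illustrative and rely only on the types and methods shown above:
// sketchBidToProto builds a bid with a single 48-byte commitment, the length
// enforced by the validation above. Hypothetical helper, same package assumed.
func sketchBidToProto(header *ExecutionPayloadHeaderDeneb) (*eth.BuilderBidDeneb, error) {
	bid := &BuilderBidDeneb{
		Header:             header,
		BlobKzgCommitments: []hexutil.Bytes{make([]byte, fieldparams.BLSPubkeyLength)},
		Value:              Uint256{Int: big.NewInt(1)},
		Pubkey:             make([]byte, fieldparams.BLSPubkeyLength),
	}
	return bid.ToProto()
}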
@@ -887,42 +929,11 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
// BuilderBidDeneb is a field of ExecHeaderResponseDeneb.
type BuilderBidDeneb struct {
Header *ExecutionPayloadHeaderDeneb `json:"header"`
BlindedBlobsBundle *BlindedBlobsBundle `json:"blinded_blobs_bundle"`
BlobKzgCommitments []hexutil.Bytes `json:"blob_kzg_commitments"`
Value Uint256 `json:"value"`
Pubkey hexutil.Bytes `json:"pubkey"`
}
// BlindedBlobsBundle is a field of BuilderBidDeneb and represents the blinded blobs of the associated header.
type BlindedBlobsBundle struct {
KzgCommitments []hexutil.Bytes `json:"commitments"`
Proofs []hexutil.Bytes `json:"proofs"`
BlobRoots []hexutil.Bytes `json:"blob_roots"`
}
// ToProto creates a BlindedBlobsBundle Proto from BlindedBlobsBundle.
func (r *BlindedBlobsBundle) ToProto() (*v1.BlindedBlobsBundle, error) {
kzg := make([][]byte, len(r.KzgCommitments))
for i := range kzg {
kzg[i] = bytesutil.SafeCopyBytes(r.KzgCommitments[i])
}
proofs := make([][]byte, len(r.Proofs))
for i := range proofs {
proofs[i] = bytesutil.SafeCopyBytes(r.Proofs[i])
}
blobRoots := make([][]byte, len(r.BlobRoots))
for i := range blobRoots {
blobRoots[i] = bytesutil.SafeCopyBytes(r.BlobRoots[i])
}
return &v1.BlindedBlobsBundle{
KzgCommitments: kzg,
Proofs: proofs,
BlobRoots: blobRoots,
}, nil
}
// ExecutionPayloadHeaderDeneb is a field of the BuilderBidDeneb.
type ExecutionPayloadHeaderDeneb struct {
ParentHash hexutil.Bytes `json:"parent_hash"`
@@ -1052,6 +1063,16 @@ type BlobsBundle struct {
// ToProto returns a BlobsBundle Proto.
func (b BlobsBundle) ToProto() (*v1.BlobsBundle, error) {
if len(b.Blobs) > fieldparams.MaxBlobCommitmentsPerBlock {
return nil, fmt.Errorf("blobs length %d is more than max %d", len(b.Blobs), fieldparams.MaxBlobCommitmentsPerBlock)
}
if len(b.Commitments) != len(b.Blobs) {
return nil, fmt.Errorf("commitments length %d does not equal blobs length %d", len(b.Commitments), len(b.Blobs))
}
if len(b.Proofs) != len(b.Blobs) {
return nil, fmt.Errorf("proofs length %d does not equal blobs length %d", len(b.Proofs), len(b.Blobs))
}
commitments := make([][]byte, len(b.Commitments))
for i := range b.Commitments {
if len(b.Commitments[i]) != fieldparams.BLSPubkeyLength {
@@ -1066,9 +1087,6 @@ func (b BlobsBundle) ToProto() (*v1.BlobsBundle, error) {
}
proofs[i] = bytesutil.SafeCopyBytes(b.Proofs[i])
}
if len(b.Blobs) > fieldparams.MaxBlobsPerBlock {
return nil, fmt.Errorf("blobs length %d is more than max %d", len(b.Blobs), fieldparams.MaxBlobsPerBlock)
}
blobs := make([][]byte, len(b.Blobs))
for i := range b.Blobs {
if len(b.Blobs[i]) != fieldparams.BlobLength {
@@ -1083,6 +1101,28 @@ func (b BlobsBundle) ToProto() (*v1.BlobsBundle, error) {
}, nil
}
// FromBundleProto converts the proto bundle type to the builder
// type.
func FromBundleProto(bundle *v1.BlobsBundle) *BlobsBundle {
commitments := make([]hexutil.Bytes, len(bundle.KzgCommitments))
for i := range bundle.KzgCommitments {
commitments[i] = bytesutil.SafeCopyBytes(bundle.KzgCommitments[i])
}
proofs := make([]hexutil.Bytes, len(bundle.Proofs))
for i := range bundle.Proofs {
proofs[i] = bytesutil.SafeCopyBytes(bundle.Proofs[i])
}
blobs := make([]hexutil.Bytes, len(bundle.Blobs))
for i := range bundle.Blobs {
blobs[i] = bytesutil.SafeCopyBytes(bundle.Blobs[i])
}
return &BlobsBundle{
Commitments: commitments,
Proofs: proofs,
Blobs: blobs,
}
}
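A brief round-trip sketch (hypothetical helper, same package assumed) combining FromBundleProto with the stricter ToProto checks introduced above:
// sketchBundleRoundTrip converts a non-nil proto bundle into the builder type
// and back, exercising the commitment/proof/blob count validation.
func sketchBundleRoundTrip(pb *v1.BlobsBundle) (*v1.BlobsBundle, error) {
	return FromBundleProto(pb).ToProto()
}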
// ToProto returns ExecutionPayloadDeneb Proto and BlobsBundle Proto separately.
func (r *ExecPayloadResponseDeneb) ToProto() (*v1.ExecutionPayloadDeneb, *v1.BlobsBundle, error) {
if r.Data == nil {


@@ -13,7 +13,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/math"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
@@ -38,8 +38,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
},
Signature: make([]byte, 96),
}
a, err := shared.SignedValidatorRegistrationFromConsensus(svr)
require.NoError(t, err)
a := structs.SignedValidatorRegistrationFromConsensus(svr)
je, err := json.Marshal(a)
require.NoError(t, err)
// decode with a struct w/ plain strings so we can check the string encoding of the hex fields
@@ -56,7 +55,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Message.Pubkey)
t.Run("roundtrip", func(t *testing.T) {
b := &shared.SignedValidatorRegistration{}
b := &structs.SignedValidatorRegistration{}
if err := json.Unmarshal(je, b); err != nil {
require.NoError(t, err)
}
@@ -142,17 +141,9 @@ var testExampleHeaderResponseDeneb = `{
"blob_gas_used": "1",
"excess_blob_gas": "2"
},
"blinded_blobs_bundle": {
"commitments": [
"blob_kzg_commitments": [
"0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f"
],
"proofs": [
"0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a"
],
"blob_roots": [
"0x24564723180fcb3d994104538d351c8dcbde12d541676bb736cf678018ca4739"
]
},
],
"value": "652312848583266388373324160190187140051835877600158453279131187530910662656",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
},
@@ -218,6 +209,39 @@ var testExampleHeaderResponseUnknownVersion = `{
}
}`
var testExampleHeaderResponseDenebTooManyBlobs = `{
"version": "deneb",
"data": {
"message": {
"header": {
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"withdrawals_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"blob_gas_used": "1",
"excess_blob_gas": "2"
},
"blob_kzg_commitments": [
"","","","","","",""
],
"value": "652312848583266388373324160190187140051835877600158453279131187530910662656",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
},
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}`
func TestExecutionHeaderResponseUnmarshal(t *testing.T) {
hr := &ExecHeaderResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleHeaderResponse), hr))
@@ -637,6 +661,151 @@ var testExampleExecutionPayloadDeneb = fmt.Sprintf(`{
}
}`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
var testExampleExecutionPayloadDenebTooManyBlobs = fmt.Sprintf(`{
"version": "deneb",
"data": {
"execution_payload":{
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
],
"withdrawals": [
{
"index": "1",
"validator_index": "1",
"address": "0xcf8e0d4e9587369b2301d0790347320302cc0943",
"amount": "1"
}
],
"blob_gas_used": "2",
"excess_blob_gas": "3"
},
"blobs_bundle": {
"commitments": [
"0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f"
],
"proofs": [
"0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a"
],
"blobs": %s
}
}
}`, beyondMaxEmptyBlobs())
func beyondMaxEmptyBlobs() string {
moreThanMax := fieldparams.MaxBlobCommitmentsPerBlock + 2
blobs := make([]string, moreThanMax)
b, err := json.Marshal(blobs)
if err != nil {
panic(err)
}
return string(b)
}
var testExampleExecutionPayloadDenebDifferentCommitmentCount = fmt.Sprintf(`{
"version": "deneb",
"data": {
"execution_payload":{
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
],
"withdrawals": [
{
"index": "1",
"validator_index": "1",
"address": "0xcf8e0d4e9587369b2301d0790347320302cc0943",
"amount": "1"
}
],
"blob_gas_used": "2",
"excess_blob_gas": "3"
},
"blobs_bundle": {
"commitments": [
"0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f",
"0xc00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
],
"proofs": [
"0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a"
],
"blobs": [
"%s"
]
}
}
}`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
var testExampleExecutionPayloadDenebDifferentProofCount = fmt.Sprintf(`{
"version": "deneb",
"data": {
"execution_payload":{
"parent_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"fee_recipient": "0xabcf8e0d4e9587369b2301d0790347320302cc09",
"state_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"receipts_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"logs_bloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prev_randao": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"block_number": "1",
"gas_limit": "1",
"gas_used": "1",
"timestamp": "1",
"extra_data": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"base_fee_per_gas": "452312848583266388373324160190187140051835877600158453279131187530910662656",
"block_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"transactions": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86"
],
"withdrawals": [
{
"index": "1",
"validator_index": "1",
"address": "0xcf8e0d4e9587369b2301d0790347320302cc0943",
"amount": "1"
}
],
"blob_gas_used": "2",
"excess_blob_gas": "3"
},
"blobs_bundle": {
"commitments": [
"0x8dab030c51e16e84be9caab84ee3d0b8bbec1db4a0e4de76439da8424d9b957370a10a78851f97e4b54d2ce1ab0d686f"
],
"proofs": [
"0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a",
"0xc00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
],
"blobs": [
"%s"
]
}
}
}`, hexutil.Encode(make([]byte, fieldparams.BlobLength)))
func TestExecutionPayloadResponseUnmarshal(t *testing.T) {
epr := &ExecPayloadResponse{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayload), epr))
@@ -1013,7 +1182,6 @@ func TestExecutionPayloadResponseCapellaToProto(t *testing.T) {
},
}
require.DeepEqual(t, expected, p)
}
func TestExecutionPayloadResponseDenebToProto(t *testing.T) {
@@ -1092,7 +1260,27 @@ func TestExecutionPayloadResponseDenebToProto(t *testing.T) {
}
require.DeepEqual(t, blobsBundle, expectedBlobs)
}
func TestExecutionPayloadResponseDenebToProtoInvalidBlobCount(t *testing.T) {
hr := &ExecPayloadResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadDenebTooManyBlobs), hr))
_, _, err := hr.ToProto()
require.ErrorContains(t, fmt.Sprintf("blobs length %d is more than max %d", fieldparams.MaxBlobCommitmentsPerBlock+2, fieldparams.MaxBlobCommitmentsPerBlock), err)
}
func TestExecutionPayloadResponseDenebToProtoDifferentCommitmentCount(t *testing.T) {
hr := &ExecPayloadResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadDenebDifferentCommitmentCount), hr))
_, _, err := hr.ToProto()
require.ErrorContains(t, "commitments length 2 does not equal blobs length 1", err)
}
func TestExecutionPayloadResponseDenebToProtoDifferentProofCount(t *testing.T) {
hr := &ExecPayloadResponseDeneb{}
require.NoError(t, json.Unmarshal([]byte(testExampleExecutionPayloadDenebDifferentProofCount), hr))
_, _, err := hr.ToProto()
require.ErrorContains(t, "proofs length 2 does not equal blobs length 1", err)
}
func pbEth1Data() *eth.Eth1Data {
@@ -1530,7 +1718,7 @@ func TestUint256UnmarshalTooBig(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
b, err := structs.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
@@ -1560,7 +1748,7 @@ func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyCapella(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block-capella.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
b, err := structs.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),


@@ -32,7 +32,7 @@ func Non200Err(response *http.Response) error {
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case 404:
case http.StatusNotFound:
return errors.Wrap(ErrNotFound, msg)
default:
return errors.Wrap(ErrNotOK, msg)
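Callers can branch on the wrapped sentinels; a hedged sketch in the same package, assuming the errors package in use supports errors.Is over wrapped errors (as pkg/errors v0.9+ does):
// sketchCheckResponse shows how a caller might distinguish a missing resource
// from other failures. Hypothetical helper, not part of this change.
func sketchCheckResponse(resp *http.Response) (found bool, err error) {
	if resp.StatusCode == http.StatusOK {
		return true, nil
	}
	err = Non200Err(resp)
	if errors.Is(err, ErrNotFound) {
		// 404s are wrapped with ErrNotFound; report "not found" without failing hard.
		return false, nil
	}
	return false, err
}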


@@ -7,7 +7,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//validator/rpc/apimiddleware:go_default_library",
"//validator/rpc:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -8,7 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/validator/rpc/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/validator/rpc"
)
const (
@@ -41,17 +41,17 @@ func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
if err != nil {
return nil, err
}
if len(jsonlocal.Keystores) == 0 && len(jsonremote.Keystores) == 0 {
if len(jsonlocal.Data) == 0 && len(jsonremote.Data) == 0 {
return nil, errors.New("there are no local keys or remote keys on the validator")
}
hexKeys := make(map[string]bool)
for index := range jsonlocal.Keystores {
hexKeys[jsonlocal.Keystores[index].ValidatingPubkey] = true
for index := range jsonlocal.Data {
hexKeys[jsonlocal.Data[index].ValidatingPubkey] = true
}
for index := range jsonremote.Keystores {
hexKeys[jsonremote.Keystores[index].Pubkey] = true
for index := range jsonremote.Data {
hexKeys[jsonremote.Data[index].Pubkey] = true
}
keys := make([]string, 0)
for k := range hexKeys {
@@ -61,12 +61,12 @@ func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
}
// GetLocalValidatorKeys calls the keymanager APIs for local validator keys
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*apimiddleware.ListKeystoresResponseJson, error) {
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*rpc.ListKeystoresResponse, error) {
localBytes, err := c.Get(ctx, localKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
jsonlocal := &apimiddleware.ListKeystoresResponseJson{}
jsonlocal := &rpc.ListKeystoresResponse{}
if err := json.Unmarshal(localBytes, jsonlocal); err != nil {
return nil, errors.Wrap(err, "failed to parse local keystore list")
}
@@ -74,14 +74,14 @@ func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*apimiddleware.List
}
// GetRemoteValidatorKeys calls the keymanager APIs for web3signer validator keys
func (c *Client) GetRemoteValidatorKeys(ctx context.Context) (*apimiddleware.ListRemoteKeysResponseJson, error) {
func (c *Client) GetRemoteValidatorKeys(ctx context.Context) (*rpc.ListRemoteKeysResponse, error) {
remoteBytes, err := c.Get(ctx, remoteKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
if !strings.Contains(err.Error(), "Prysm Wallet is not of type Web3Signer") {
return nil, err
}
}
jsonremote := &apimiddleware.ListRemoteKeysResponseJson{}
jsonremote := &rpc.ListRemoteKeysResponse{}
if len(remoteBytes) != 0 {
if err := json.Unmarshal(remoteBytes, jsonremote); err != nil {
return nil, errors.Wrap(err, "failed to parse remote keystore list")
@@ -107,13 +107,13 @@ func (c *Client) GetFeeRecipientAddresses(ctx context.Context, validators []stri
}
// GetFeeRecipientAddress takes a public key and calls the keymanager API to return its fee recipient.
func (c *Client) GetFeeRecipientAddress(ctx context.Context, pubkey string) (*apimiddleware.GetFeeRecipientByPubkeyResponseJson, error) {
func (c *Client) GetFeeRecipientAddress(ctx context.Context, pubkey string) (*rpc.GetFeeRecipientByPubkeyResponse, error) {
path := strings.Replace(feeRecipientPath, "{pubkey}", pubkey, 1)
b, err := c.Get(ctx, path, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
feejson := &apimiddleware.GetFeeRecipientByPubkeyResponseJson{}
feejson := &rpc.GetFeeRecipientByPubkeyResponse{}
if err := json.Unmarshal(b, feejson); err != nil {
return nil, errors.Wrap(err, "failed to parse fee recipient")
}
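A hedged usage sketch of the migrated keymanager client (construction and authentication of Client are assumed to happen elsewhere; only methods shown above are called):
// sketchValidatorKeyCount counts every local and remote validating key exposed
// by the keymanager API. Hypothetical helper in the same package.
func sketchValidatorKeyCount(ctx context.Context, c *Client) (int, error) {
	pubkeys, err := c.GetValidatorPubKeys(ctx)
	if err != nil {
		return 0, err
	}
	return len(pubkeys), nil
}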

api/constants.go (new file)

@@ -0,0 +1,7 @@
package api
const (
WebUrlPrefix = "/v2/validator/"
WebApiUrlPrefix = "/api/v2/validator/"
KeymanagerApiPrefix = "/eth/v1"
)
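A small sketch of how these prefixes might be composed into request paths; the concrete endpoint suffixes ("keystores", "initialize") are illustrative only, not taken from this change:
package api_test

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v4/api"
)

// Example_prefixes composes the new prefixes into full paths. The endpoint
// suffixes are examples only.
func Example_prefixes() {
	fmt.Println(api.KeymanagerApiPrefix + "/keystores")
	fmt.Println(api.WebUrlPrefix + "initialize")
	// Output:
	// /eth/v1/keystores
	// /v2/validator/initialize
}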


@@ -14,16 +14,16 @@ go_library(
"//validator:__subpackages__",
],
deps = [
"//api/gateway/apimiddleware:go_default_library",
"//api/server:go_default_library",
"//runtime:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_rs_cors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//connectivity:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//credentials/insecure:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -33,7 +33,6 @@ go_test(
srcs = ["gateway_test.go"],
embed = [":go_default_library"],
deps = [
"//api/gateway/apimiddleware:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",


@@ -1,43 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"api_middleware.go",
"log.go",
"param_handling.go",
"process_field.go",
"process_request.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//encoding/bytesutil:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"param_handling_test.go",
"process_request_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/grpc:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)


@@ -1,265 +0,0 @@
package apimiddleware
import (
"net/http"
"reflect"
"time"
"github.com/gorilla/mux"
)
// ApiProxyMiddleware is a proxy between an Ethereum consensus API HTTP client and grpc-gateway.
// The purpose of the proxy is to handle HTTP requests and gRPC responses in such a way that:
// - Ethereum consensus API requests can be handled by grpc-gateway correctly
// - gRPC responses can be returned as spec-compliant Ethereum consensus API responses
type ApiProxyMiddleware struct {
GatewayAddress string
EndpointCreator EndpointFactory
Timeout time.Duration
router *mux.Router
}
// EndpointFactory is responsible for creating new instances of Endpoint values.
type EndpointFactory interface {
Create(path string) (*Endpoint, error)
Paths() []string
IsNil() bool
}
// Endpoint is a representation of an API HTTP endpoint that should be proxied by the middleware.
type Endpoint struct {
Path string // The path of the HTTP endpoint.
GetResponse interface{} // The struct corresponding to the JSON structure used in a GET response.
PostRequest interface{} // The struct corresponding to the JSON structure used in a POST request.
PostResponse interface{} // The struct corresponding to the JSON structure used in a POST response.
DeleteRequest interface{} // The struct corresponding to the JSON structure used in a DELETE request.
DeleteResponse interface{} // The struct corresponding to the JSON structure used in a DELETE response.
RequestURLLiterals []string // Names of URL parameters that should not be base64-encoded.
RequestQueryParams []QueryParam // Query parameters of the request.
Err ErrorJson // The struct corresponding to the error that should be returned in case of a request failure.
Hooks HookCollection // A collection of functions that can be invoked at various stages of the request/response cycle.
CustomHandlers []CustomHandler // Functions that will be executed instead of the default request/response behaviour.
}
// RunDefault expresses whether the default processing logic should be carried out after running a pre hook.
type RunDefault bool
// DefaultEndpoint returns an Endpoint with default configuration, e.g. DefaultErrorJson for error handling.
func DefaultEndpoint() Endpoint {
return Endpoint{
Err: &DefaultErrorJson{},
}
}
// QueryParam represents a single query parameter's metadata.
type QueryParam struct {
Name string
Hex bool
Enum bool
}
// CustomHandler is a function that can be invoked at the very beginning of the request,
// essentially replacing the whole default request/response logic with custom logic for a specific endpoint.
type CustomHandler = func(m *ApiProxyMiddleware, endpoint Endpoint, w http.ResponseWriter, req *http.Request) (handled bool)
// HookCollection contains hooks that can be used to amend the default request/response cycle with custom logic for a specific endpoint.
type HookCollection struct {
OnPreDeserializeRequestBodyIntoContainer func(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) (RunDefault, ErrorJson)
OnPostDeserializeRequestBodyIntoContainer func(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) ErrorJson
OnPreDeserializeGrpcResponseBodyIntoContainer func([]byte, interface{}) (RunDefault, ErrorJson)
OnPreSerializeMiddlewareResponseIntoJson func(interface{}) (RunDefault, []byte, ErrorJson)
}
// fieldProcessor applies the processing function f to a value when the tag is present on the field.
type fieldProcessor struct {
tag string
f func(value reflect.Value) error
}
// Run starts the proxy, registering all proxy endpoints.
func (m *ApiProxyMiddleware) Run(gatewayRouter *mux.Router) {
for _, path := range m.EndpointCreator.Paths() {
gatewayRouter.HandleFunc(path, m.WithMiddleware(path))
}
m.router = gatewayRouter
}
// ServeHTTP for the proxy middleware.
func (m *ApiProxyMiddleware) ServeHTTP(w http.ResponseWriter, req *http.Request) {
m.router.ServeHTTP(w, req)
}
// WithMiddleware wraps the given endpoint handler with the middleware logic.
func (m *ApiProxyMiddleware) WithMiddleware(path string) http.HandlerFunc {
return func(w http.ResponseWriter, req *http.Request) {
endpoint, err := m.EndpointCreator.Create(path)
if err != nil {
log.WithError(err).Errorf("Could not create endpoint for path: %s", path)
return
}
for _, handler := range endpoint.CustomHandlers {
if handler(m, *endpoint, w, req) {
return
}
}
if req.Method == "POST" {
if errJson := handlePostRequestForEndpoint(endpoint, w, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if req.Method == "DELETE" && req.Body != http.NoBody {
if errJson := handleDeleteRequestForEndpoint(endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := m.PrepareRequestForProxying(*endpoint, req); errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcResp, errJson := m.ProxyRequest(req)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
grpcRespBody, errJson := ReadGrpcResponseBody(grpcResp.Body)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
var respJson []byte
if !GrpcResponseIsEmpty(grpcRespBody) {
respHasError, errJson := HandleGrpcResponseError(endpoint.Err, grpcResp, grpcRespBody, w)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
if respHasError {
return
}
var resp interface{}
if req.Method == "GET" {
resp = endpoint.GetResponse
} else if req.Method == "DELETE" {
resp = endpoint.DeleteResponse
} else {
resp = endpoint.PostResponse
}
if errJson := deserializeGrpcResponseBodyIntoContainerWrapped(endpoint, grpcRespBody, resp); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := ProcessMiddlewareResponseFields(resp); errJson != nil {
WriteError(w, errJson, nil)
return
}
respJson, errJson = serializeMiddlewareResponseIntoJsonWrapped(endpoint, respJson, resp)
if errJson != nil {
WriteError(w, errJson, nil)
return
}
}
if errJson := WriteMiddlewareResponseHeadersAndBody(grpcResp, respJson, w); errJson != nil {
WriteError(w, errJson, nil)
return
}
if errJson := Cleanup(grpcResp.Body); errJson != nil {
WriteError(w, errJson, nil)
return
}
}
}
func handlePostRequestForEndpoint(endpoint *Endpoint, w http.ResponseWriter, req *http.Request) ErrorJson {
if errJson := deserializeRequestBodyIntoContainerWrapped(endpoint, req, w); errJson != nil {
return errJson
}
if errJson := ProcessRequestContainerFields(endpoint.PostRequest); errJson != nil {
return errJson
}
return SetRequestBodyToRequestContainer(endpoint.PostRequest, req)
}
func handleDeleteRequestForEndpoint(endpoint *Endpoint, req *http.Request) ErrorJson {
if errJson := DeserializeRequestBodyIntoContainer(req.Body, endpoint.DeleteRequest); errJson != nil {
return errJson
}
if errJson := ProcessRequestContainerFields(endpoint.DeleteRequest); errJson != nil {
return errJson
}
return SetRequestBodyToRequestContainer(endpoint.DeleteRequest, req)
}
func deserializeRequestBodyIntoContainerWrapped(endpoint *Endpoint, req *http.Request, w http.ResponseWriter) ErrorJson {
runDefault := true
if endpoint.Hooks.OnPreDeserializeRequestBodyIntoContainer != nil {
run, errJson := endpoint.Hooks.OnPreDeserializeRequestBodyIntoContainer(endpoint, w, req)
if errJson != nil {
return errJson
}
if !run {
runDefault = false
}
}
if runDefault {
if errJson := DeserializeRequestBodyIntoContainer(req.Body, endpoint.PostRequest); errJson != nil {
return errJson
}
}
if endpoint.Hooks.OnPostDeserializeRequestBodyIntoContainer != nil {
if errJson := endpoint.Hooks.OnPostDeserializeRequestBodyIntoContainer(endpoint, w, req); errJson != nil {
return errJson
}
}
return nil
}
func deserializeGrpcResponseBodyIntoContainerWrapped(endpoint *Endpoint, grpcResponseBody []byte, resp interface{}) ErrorJson {
runDefault := true
if endpoint.Hooks.OnPreDeserializeGrpcResponseBodyIntoContainer != nil {
run, errJson := endpoint.Hooks.OnPreDeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, resp)
if errJson != nil {
return errJson
}
if !run {
runDefault = false
}
}
if runDefault {
if errJson := DeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, resp); errJson != nil {
return errJson
}
}
return nil
}
func serializeMiddlewareResponseIntoJsonWrapped(endpoint *Endpoint, respJson []byte, resp interface{}) ([]byte, ErrorJson) {
runDefault := true
var errJson ErrorJson
if endpoint.Hooks.OnPreSerializeMiddlewareResponseIntoJson != nil {
var run RunDefault
run, respJson, errJson = endpoint.Hooks.OnPreSerializeMiddlewareResponseIntoJson(resp)
if errJson != nil {
return nil, errJson
}
if !run {
runDefault = false
}
}
if runDefault {
respJson, errJson = SerializeMiddlewareResponseIntoJson(resp)
if errJson != nil {
return nil, errJson
}
}
return respJson, nil
}


@@ -1,5 +0,0 @@
package apimiddleware
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "apimiddleware")


@@ -1,103 +0,0 @@
package apimiddleware
import (
"encoding/base64"
"net/http"
"net/url"
"strings"
"github.com/gorilla/mux"
butil "github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/wealdtech/go-bytesutil"
)
// HandleURLParameters processes URL parameters, allowing parameterized URLs to be safely and correctly proxied to grpc-gateway.
func HandleURLParameters(url string, req *http.Request, literals []string) ErrorJson {
segments := strings.Split(url, "/")
segmentsLoop:
for i, s := range segments {
// We only care about segments which are parameterized.
if isRequestParam(s) {
// Don't do anything with parameters which should be forwarded literally to gRPC.
for _, l := range literals {
if s == "{"+l+"}" {
continue segmentsLoop
}
}
routeVar := mux.Vars(req)[s[1:len(s)-1]]
bRouteVar := []byte(routeVar)
if butil.IsHex(bRouteVar) {
var err error
bRouteVar, err = bytesutil.FromHexString(string(bRouteVar))
if err != nil {
return InternalServerErrorWithMessage(err, "could not process URL parameter")
}
}
// Converting hex to base64 may result in a value which malforms the URL.
// We use URLEncoding to safely escape such values.
base64RouteVar := base64.URLEncoding.EncodeToString(bRouteVar)
// Merge segments back into the full URL.
splitPath := strings.Split(req.URL.Path, "/")
splitPath[i] = base64RouteVar
req.URL.Path = strings.Join(splitPath, "/")
}
}
return nil
}
// HandleQueryParameters processes query parameters, allowing them to be safely and correctly proxied to grpc-gateway.
func HandleQueryParameters(req *http.Request, params []QueryParam) ErrorJson {
queryParams := req.URL.Query()
normalizeQueryValues(queryParams)
for key, vals := range queryParams {
for _, p := range params {
if key == p.Name {
if p.Hex {
queryParams.Del(key)
for _, v := range vals {
b := []byte(v)
if butil.IsHex(b) {
var err error
b, err = bytesutil.FromHexString(v)
if err != nil {
return InternalServerErrorWithMessage(err, "could not process query parameter")
}
}
queryParams.Add(key, base64.URLEncoding.EncodeToString(b))
}
}
if p.Enum {
queryParams.Del(key)
for _, v := range vals {
// gRPC expects uppercase enum values.
queryParams.Add(key, strings.ToUpper(v))
}
}
}
}
}
req.URL.RawQuery = queryParams.Encode()
return nil
}
// isRequestParam verifies whether the passed string is a request parameter.
// Request parameters are enclosed in { and }.
func isRequestParam(s string) bool {
return len(s) > 2 && s[0] == '{' && s[len(s)-1] == '}'
}
func normalizeQueryValues(queryParams url.Values) {
// Replace comma-separated values with individual values.
for key, vals := range queryParams {
splitVals := make([]string, 0)
for _, v := range vals {
splitVals = append(splitVals, strings.Split(v, ",")...)
}
queryParams[key] = splitVals
}
}


@@ -1,124 +0,0 @@
package apimiddleware
import (
"bytes"
"net/http/httptest"
"testing"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestHandleURLParameters(t *testing.T) {
var body bytes.Buffer
t.Run("no_params", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example/bar", &body)
errJson := HandleURLParameters("/not_param", request, []string{})
require.Equal(t, true, errJson == nil)
assert.Equal(t, "/bar", request.URL.Path)
})
t.Run("with_params", func(t *testing.T) {
muxVars := make(map[string]string)
muxVars["bar_param"] = "bar"
muxVars["quux_param"] = "quux"
request := httptest.NewRequest("GET", "http://foo.example/bar/baz/quux", &body)
request = mux.SetURLVars(request, muxVars)
errJson := HandleURLParameters("/{bar_param}/not_param/{quux_param}", request, []string{})
require.Equal(t, true, errJson == nil)
assert.Equal(t, "/YmFy/baz/cXV1eA==", request.URL.Path)
})
t.Run("with_literal", func(t *testing.T) {
muxVars := make(map[string]string)
muxVars["bar_param"] = "bar"
request := httptest.NewRequest("GET", "http://foo.example/bar/baz", &body)
request = mux.SetURLVars(request, muxVars)
errJson := HandleURLParameters("/{bar_param}/not_param/", request, []string{"bar_param"})
require.Equal(t, true, errJson == nil)
assert.Equal(t, "/bar/baz", request.URL.Path)
})
t.Run("with_hex", func(t *testing.T) {
muxVars := make(map[string]string)
muxVars["hex_param"] = "0x626172"
request := httptest.NewRequest("GET", "http://foo.example/0x626172/baz", &body)
request = mux.SetURLVars(request, muxVars)
errJson := HandleURLParameters("/{hex_param}/not_param/", request, []string{})
require.Equal(t, true, errJson == nil)
assert.Equal(t, "/YmFy/baz", request.URL.Path)
})
}
func TestHandleQueryParameters(t *testing.T) {
var body bytes.Buffer
t.Run("regular_params", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example?bar=bar&baz=baz", &body)
errJson := HandleQueryParameters(request, []QueryParam{{Name: "bar"}, {Name: "baz"}})
require.Equal(t, true, errJson == nil)
query := request.URL.Query()
v, ok := query["bar"]
require.Equal(t, true, ok, "query param not found")
require.Equal(t, 1, len(v), "wrong number of query param values")
assert.Equal(t, "bar", v[0])
v, ok = query["baz"]
require.Equal(t, true, ok, "query param not found")
require.Equal(t, 1, len(v), "wrong number of query param values")
assert.Equal(t, "baz", v[0])
})
t.Run("hex_and_enum_params", func(t *testing.T) {
request := httptest.NewRequest("GET", "http://foo.example?hex=0x626172&baz=baz", &body)
errJson := HandleQueryParameters(request, []QueryParam{{Name: "hex", Hex: true}, {Name: "baz", Enum: true}})
require.Equal(t, true, errJson == nil)
query := request.URL.Query()
v, ok := query["hex"]
require.Equal(t, true, ok, "query param not found")
require.Equal(t, 1, len(v), "wrong number of query param values")
assert.Equal(t, "YmFy", v[0])
v, ok = query["baz"]
require.Equal(t, true, ok, "query param not found")
require.Equal(t, 1, len(v), "wrong number of query param values")
assert.Equal(t, "BAZ", v[0])
})
}
func TestIsRequestParam(t *testing.T) {
tests := []struct {
s string
b bool
}{
{"", false},
{"{", false},
{"}", false},
{"{}", false},
{"{x}", true},
{"{very_long_parameter_name_with_underscores}", true},
}
for _, tt := range tests {
b := isRequestParam(tt.s)
assert.Equal(t, tt.b, b)
}
}
func TestNormalizeQueryValues(t *testing.T) {
input := make(map[string][]string)
input["key"] = []string{"value1", "value2,value3,value4", "value5"}
normalizeQueryValues(input)
require.Equal(t, 5, len(input["key"]))
assert.Equal(t, "value1", input["key"][0])
assert.Equal(t, "value2", input["key"][1])
assert.Equal(t, "value3", input["key"][2])
assert.Equal(t, "value4", input["key"][3])
assert.Equal(t, "value5", input["key"][4])
}


@@ -1,179 +0,0 @@
package apimiddleware
import (
"encoding/base64"
"fmt"
"math/big"
"reflect"
"strconv"
"strings"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/wealdtech/go-bytesutil"
)
// processField calls each processor function on any field that has the matching tag set.
// It is a recursive function.
func processField(s interface{}, processors []fieldProcessor) error {
kind := reflect.TypeOf(s).Kind()
if kind != reflect.Ptr && kind != reflect.Slice && kind != reflect.Array {
return fmt.Errorf("processing fields of kind '%v' is unsupported", kind)
}
t := reflect.TypeOf(s).Elem()
v := reflect.Indirect(reflect.ValueOf(s))
for i := 0; i < t.NumField(); i++ {
switch v.Field(i).Kind() {
case reflect.Slice:
sliceElem := t.Field(i).Type.Elem()
kind := sliceElem.Kind()
// Recursively process slices to struct pointers.
switch {
case kind == reflect.Ptr && sliceElem.Elem().Kind() == reflect.Struct:
for j := 0; j < v.Field(i).Len(); j++ {
if err := processField(v.Field(i).Index(j).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
// Process each string in string slices.
case kind == reflect.String:
for _, proc := range processors {
_, hasTag := t.Field(i).Tag.Lookup(proc.tag)
if !hasTag {
continue
}
for j := 0; j < v.Field(i).Len(); j++ {
if err := proc.f(v.Field(i).Index(j)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
// Recursively process struct pointers.
case reflect.Ptr:
if v.Field(i).Elem().Kind() == reflect.Struct {
if err := processField(v.Field(i).Interface(), processors); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
default:
field := t.Field(i)
for _, proc := range processors {
if _, hasTag := field.Tag.Lookup(proc.tag); hasTag {
if err := proc.f(v.Field(i)); err != nil {
return errors.Wrapf(err, "could not process field '%s'", t.Field(i).Name)
}
}
}
}
}
return nil
}
func hexToBase64Processor(v reflect.Value) error {
if v.String() == "0x" {
v.SetString("")
return nil
}
b, err := bytesutil.FromHexString(v.String())
if err != nil {
return err
}
v.SetString(base64.StdEncoding.EncodeToString(b))
return nil
}
func base64ToHexProcessor(v reflect.Value) error {
if v.String() == "" {
// Empty hex values are represented as "0x".
v.SetString("0x")
return nil
}
b, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
v.SetString(hexutil.Encode(b))
return nil
}
func base64ToChecksumAddressProcessor(v reflect.Value) error {
if v.String() == "" {
// Empty hex values are represented as "0x".
v.SetString("0x")
return nil
}
b, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
v.SetString(common.BytesToAddress(b).Hex())
return nil
}
func base64ToUint256Processor(v reflect.Value) error {
if v.String() == "" {
return nil
}
littleEndian, err := base64.StdEncoding.DecodeString(v.String())
if err != nil {
return err
}
if len(littleEndian) != 32 {
return errors.New("invalid length for Uint256")
}
// Integers are stored as little-endian, but
// big.Int expects big-endian. So we need to reverse
// the byte order before decoding.
var bigEndian [32]byte
for i := 0; i < len(littleEndian); i++ {
bigEndian[i] = littleEndian[len(littleEndian)-1-i]
}
var uint256 big.Int
uint256.SetBytes(bigEndian[:])
v.SetString(uint256.String())
return nil
}
func uint256ToBase64Processor(v reflect.Value) error {
if v.String() == "" {
return nil
}
uint256, ok := new(big.Int).SetString(v.String(), 10)
if !ok {
return fmt.Errorf("could not parse Uint256")
}
bigEndian := uint256.Bytes()
if len(bigEndian) > 32 {
return fmt.Errorf("number too big for Uint256")
}
// Integers are stored as little-endian, but
// big.Int gives big-endian. So we need to reverse
// the byte order before encoding.
var littleEndian [32]byte
for i := 0; i < len(bigEndian); i++ {
littleEndian[i] = bigEndian[len(bigEndian)-1-i]
}
v.SetString(base64.StdEncoding.EncodeToString(littleEndian[:]))
return nil
}
func enumToLowercaseProcessor(v reflect.Value) error {
v.SetString(strings.ToLower(v.String()))
return nil
}
func timeToUnixProcessor(v reflect.Value) error {
t, err := time.Parse(time.RFC3339, v.String())
if err != nil {
return err
}
v.SetString(strconv.FormatUint(uint64(t.Unix()), 10))
return nil
}
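Editor's note: the following is an illustrative sketch, not part of the diff above. It shows, in a self-contained form, the tag-driven field processing idea behind processField and hexToBase64Processor, assuming a hypothetical demoContainer type and the same "hex" struct tag: a "0x"-prefixed hex string field is rewritten to its base64 form, which is the request-side transformation the middleware performs.

package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"reflect"
	"strings"
)

// demoContainer is a hypothetical request container; only the field tagged hex:"true" is rewritten.
type demoContainer struct {
	Plain string
	Root  string `hex:"true"`
}

// processHexFields walks the struct's fields and base64-encodes every string field tagged hex:"true".
func processHexFields(s interface{}) error {
	t := reflect.TypeOf(s).Elem()
	v := reflect.ValueOf(s).Elem()
	for i := 0; i < t.NumField(); i++ {
		if _, ok := t.Field(i).Tag.Lookup("hex"); !ok {
			continue
		}
		raw := strings.TrimPrefix(v.Field(i).String(), "0x")
		b, err := hex.DecodeString(raw)
		if err != nil {
			return err
		}
		v.Field(i).SetString(base64.StdEncoding.EncodeToString(b))
	}
	return nil
}

func main() {
	c := &demoContainer{Plain: "untouched", Root: "0x666f6f"}
	if err := processHexFields(c); err != nil {
		panic(err)
	}
	fmt.Println(c.Plain, c.Root) // prints: untouched Zm9v
}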


@@ -1,283 +0,0 @@
package apimiddleware
import (
"bytes"
"encoding/json"
"io"
"net"
"net/http"
"strconv"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
)
// DeserializeRequestBodyIntoContainer deserializes the request's body into an endpoint-specific struct.
func DeserializeRequestBodyIntoContainer(body io.Reader, requestContainer interface{}) ErrorJson {
decoder := json.NewDecoder(body)
decoder.DisallowUnknownFields()
if err := decoder.Decode(&requestContainer); err != nil {
if strings.Contains(err.Error(), "json: unknown field") {
e := errors.Wrap(err, "could not decode request body")
return &DefaultErrorJson{
Message: e.Error(),
Code: http.StatusBadRequest,
}
}
return InternalServerErrorWithMessage(err, "could not decode request body")
}
return nil
}
// ProcessRequestContainerFields processes fields of an endpoint-specific container according to field tags.
func ProcessRequestContainerFields(requestContainer interface{}) ErrorJson {
if err := processField(requestContainer, []fieldProcessor{
{
tag: "hex",
f: hexToBase64Processor,
},
{
tag: "uint256",
f: uint256ToBase64Processor,
},
}); err != nil {
return InternalServerErrorWithMessage(err, "could not process request data")
}
return nil
}
// SetRequestBodyToRequestContainer makes the endpoint-specific container the new body of the request.
func SetRequestBodyToRequestContainer(requestContainer interface{}, req *http.Request) ErrorJson {
// Serialize the struct, which now includes a base64-encoded value, into JSON.
j, err := json.Marshal(requestContainer)
if err != nil {
return InternalServerErrorWithMessage(err, "could not marshal request")
}
// Set the body to the new JSON.
req.Body = io.NopCloser(bytes.NewReader(j))
req.Header.Set("Content-Length", strconv.Itoa(len(j)))
req.ContentLength = int64(len(j))
return nil
}
// PrepareRequestForProxying applies additional logic to the request so that it can be correctly proxied to grpc-gateway.
func (m *ApiProxyMiddleware) PrepareRequestForProxying(endpoint Endpoint, req *http.Request) ErrorJson {
req.URL.Scheme = "http"
req.URL.Host = m.GatewayAddress
req.RequestURI = ""
if errJson := HandleURLParameters(endpoint.Path, req, endpoint.RequestURLLiterals); errJson != nil {
return errJson
}
if errJson := HandleQueryParameters(req, endpoint.RequestQueryParams); errJson != nil {
return errJson
}
// We have to add the prefix after handling parameters because adding the prefix changes URL segment indexing.
req.URL.Path = "/internal" + req.URL.Path
return nil
}
// ProxyRequest proxies the request to grpc-gateway.
func (m *ApiProxyMiddleware) ProxyRequest(req *http.Request) (*http.Response, ErrorJson) {
// We do not use http.DefaultClient because it does not have any timeout.
netClient := &http.Client{Timeout: m.Timeout}
grpcResp, err := netClient.Do(req)
if err != nil {
if err, ok := err.(net.Error); ok && err.Timeout() {
return nil, TimeoutError()
}
return nil, InternalServerErrorWithMessage(err, "could not proxy request")
}
if grpcResp == nil {
return nil, &DefaultErrorJson{Message: "nil response from gRPC-gateway", Code: http.StatusInternalServerError}
}
return grpcResp, nil
}
// ReadGrpcResponseBody reads the body from the grpc-gateway's response.
func ReadGrpcResponseBody(r io.Reader) ([]byte, ErrorJson) {
body, err := io.ReadAll(r)
if err != nil {
return nil, InternalServerErrorWithMessage(err, "could not read response body")
}
return body, nil
}
// HandleGrpcResponseError acts on an error that resulted from a grpc-gateway's response.
// Whether there was an error is indicated by the bool return value. In case of an error,
// there is no need to write to the response because it's taken care of by the function.
func HandleGrpcResponseError(errJson ErrorJson, resp *http.Response, respBody []byte, w http.ResponseWriter) (bool, ErrorJson) {
responseHasError := false
if err := json.Unmarshal(respBody, errJson); err != nil {
return false, InternalServerErrorWithMessage(err, "could not unmarshal error")
}
if errJson.Msg() != "" {
responseHasError = true
// Something went wrong, but the request completed, meaning we can write headers and the error message.
for h, vs := range resp.Header {
for _, v := range vs {
if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, v)
} else {
w.Header().Set(h, v)
}
}
}
// Handle gRPC timeout.
if resp.StatusCode == http.StatusGatewayTimeout {
WriteError(w, TimeoutError(), resp.Header)
} else {
// Set code to HTTP code because unmarshalled body contained gRPC code.
errJson.SetCode(resp.StatusCode)
WriteError(w, errJson, resp.Header)
}
}
return responseHasError, nil
}
// GrpcResponseIsEmpty determines whether the grpc-gateway's response body contains no data.
func GrpcResponseIsEmpty(grpcResponseBody []byte) bool {
return len(grpcResponseBody) == 0 || string(grpcResponseBody) == "{}"
}
// DeserializeGrpcResponseBodyIntoContainer deserializes the grpc-gateway's response body into an endpoint-specific struct.
func DeserializeGrpcResponseBodyIntoContainer(body []byte, responseContainer interface{}) ErrorJson {
if err := json.Unmarshal(body, &responseContainer); err != nil {
return InternalServerErrorWithMessage(err, "could not unmarshal response")
}
return nil
}
// ProcessMiddlewareResponseFields processes fields of an endpoint-specific container according to field tags.
func ProcessMiddlewareResponseFields(responseContainer interface{}) ErrorJson {
if err := processField(responseContainer, []fieldProcessor{
{
tag: "hex",
f: base64ToHexProcessor,
},
{
tag: "address",
f: base64ToChecksumAddressProcessor,
},
{
tag: "enum",
f: enumToLowercaseProcessor,
},
{
tag: "time",
f: timeToUnixProcessor,
},
{
tag: "uint256",
f: base64ToUint256Processor,
},
}); err != nil {
return InternalServerErrorWithMessage(err, "could not process response data")
}
return nil
}
// SerializeMiddlewareResponseIntoJson serializes the endpoint-specific response struct into a JSON representation.
func SerializeMiddlewareResponseIntoJson(responseContainer interface{}) (jsonResponse []byte, errJson ErrorJson) {
j, err := json.Marshal(responseContainer)
if err != nil {
return nil, InternalServerErrorWithMessage(err, "could not marshal response")
}
return j, nil
}
// WriteMiddlewareResponseHeadersAndBody populates headers and the body of the final response.
func WriteMiddlewareResponseHeadersAndBody(grpcResp *http.Response, responseJson []byte, w http.ResponseWriter) ErrorJson {
var statusCodeHeader string
for h, vs := range grpcResp.Header {
// We don't want to expose any gRPC metadata in the HTTP response, so we skip forwarding metadata headers.
if strings.HasPrefix(h, grpc.MetadataPrefix) {
if h == grpc.WithPrefix(grpc.HttpCodeMetadataKey) {
statusCodeHeader = vs[0]
} else if strings.HasSuffix(h, api.VersionHeader) {
w.Header().Set(api.VersionHeader, vs[0])
}
} else {
for _, v := range vs {
w.Header().Set(h, v)
}
}
}
if !GrpcResponseIsEmpty(responseJson) {
w.Header().Set("Content-Length", strconv.Itoa(len(responseJson)))
if statusCodeHeader != "" {
code, err := strconv.Atoi(statusCodeHeader)
if err != nil {
return InternalServerErrorWithMessage(err, "could not parse status code")
}
w.WriteHeader(code)
} else {
w.WriteHeader(grpcResp.StatusCode)
}
if _, err := io.Copy(w, io.NopCloser(bytes.NewReader(responseJson))); err != nil {
return InternalServerErrorWithMessage(err, "could not write response message")
}
} else {
w.Header().Set("Content-Length", "0")
w.WriteHeader(grpcResp.StatusCode)
}
return nil
}
// WriteError writes the error by manipulating headers and the body of the final response.
func WriteError(w http.ResponseWriter, errJson ErrorJson, responseHeader http.Header) {
// Include custom error in the error JSON.
hasCustomError := false
if responseHeader != nil {
customError, ok := responseHeader[grpc.WithPrefix(grpc.CustomErrorMetadataKey)]
if ok {
hasCustomError = true
// Assume header has only one value and read the 0 index.
if err := json.Unmarshal([]byte(customError[0]), errJson); err != nil {
log.WithError(err).Error("Could not unmarshal custom error message")
return
}
}
}
var j []byte
if hasCustomError {
var err error
j, err = json.Marshal(errJson)
if err != nil {
log.WithError(err).Error("Could not marshal error message")
return
}
} else {
var err error
// When no custom error is present, we marshal a plain DefaultErrorJson instead of the passed-in error.
// This is because the ErrorJson argument is the endpoint's error definition, which may contain custom fields.
// In such a scenario, marshaling the endpoint's error would populate the resulting JSON
// with these fields even if they are not present in the gRPC header.
d := &DefaultErrorJson{
Message: errJson.Msg(),
Code: errJson.StatusCode(),
}
j, err = json.Marshal(d)
if err != nil {
log.WithError(err).Error("Could not marshal error message")
return
}
}
w.Header().Set("Content-Length", strconv.Itoa(len(j)))
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(errJson.StatusCode())
if _, err := io.Copy(w, io.NopCloser(bytes.NewReader(j))); err != nil {
log.WithError(err).Error("Could not write error message")
}
}
// Cleanup performs final cleanup on the initial response from grpc-gateway.
func Cleanup(grpcResponseBody io.ReadCloser) ErrorJson {
if err := grpcResponseBody.Close(); err != nil {
return InternalServerErrorWithMessage(err, "could not close response body")
}
return nil
}
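Editor's note: the following is an illustrative sketch, not part of the diff above. It demonstrates the timeout handling that ProxyRequest relies on — a client with an explicit Timeout (rather than http.DefaultClient, which has none) and a check for net.Error.Timeout() on failure. The test server, sleep duration, and printed message are assumptions for demonstration only.

package main

import (
	"errors"
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A server that responds more slowly than the client is willing to wait.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		time.Sleep(200 * time.Millisecond)
	}))
	defer srv.Close()

	// Like ProxyRequest, avoid http.DefaultClient because it does not have any timeout.
	client := &http.Client{Timeout: 50 * time.Millisecond}
	_, err := client.Get(srv.URL)

	// client.Get wraps the failure in *url.Error, which implements net.Error.
	var netErr net.Error
	if errors.As(err, &netErr) && netErr.Timeout() {
		fmt.Println("request timed out") // a proxy would map this to a 408-style timeout error
	}
}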


@@ -1,435 +0,0 @@
package apimiddleware
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/sirupsen/logrus/hooks/test"
)
type testRequestContainer struct {
TestString string
TestHexString string `hex:"true"`
TestEmptyHexString string `hex:"true"`
TestUint256String string `uint256:"true"`
}
func defaultRequestContainer() *testRequestContainer {
return &testRequestContainer{
TestString: "test string",
TestHexString: "0x666F6F", // hex encoding of "foo"
TestEmptyHexString: "0x",
TestUint256String: "4196",
}
}
type testResponseContainer struct {
TestString string
TestHex string `hex:"true"`
TestEmptyHex string `hex:"true"`
TestAddress string `address:"true"`
TestEmptyAddress string `address:"true"`
TestUint256 string `uint256:"true"`
TestEnum string `enum:"true"`
TestTime string `time:"true"`
}
func defaultResponseContainer() *testResponseContainer {
return &testResponseContainer{
TestString: "test string",
TestHex: "Zm9v", // base64 encoding of "foo"
TestEmptyHex: "",
TestAddress: "Zm9v",
TestEmptyAddress: "",
TestEnum: "Test Enum",
TestTime: "2006-01-02T15:04:05Z",
// base64 encoding of 4196 in little-endian
TestUint256: "ZBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=",
}
}
type testErrorJson struct {
Message string
Code int
CustomField string
}
// StatusCode returns the error's underlying error code.
func (e *testErrorJson) StatusCode() int {
return e.Code
}
// Msg returns the error's underlying message.
func (e *testErrorJson) Msg() string {
return e.Message
}
// SetCode sets the error's underlying error code.
func (e *testErrorJson) SetCode(code int) {
e.Code = code
}
// SetMsg sets the error's underlying message.
func (e *testErrorJson) SetMsg(msg string) {
e.Message = msg
}
func TestDeserializeRequestBodyIntoContainer(t *testing.T) {
t.Run("ok", func(t *testing.T) {
var bodyJson bytes.Buffer
err := json.NewEncoder(&bodyJson).Encode(defaultRequestContainer())
require.NoError(t, err)
container := &testRequestContainer{}
errJson := DeserializeRequestBodyIntoContainer(&bodyJson, container)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "test string", container.TestString)
})
t.Run("error", func(t *testing.T) {
var bodyJson bytes.Buffer
bodyJson.Write([]byte("foo"))
errJson := DeserializeRequestBodyIntoContainer(&bodyJson, &testRequestContainer{})
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode request body"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
t.Run("unknown field", func(t *testing.T) {
var bodyJson bytes.Buffer
bodyJson.Write([]byte("{\"foo\":\"foo\"}"))
errJson := DeserializeRequestBodyIntoContainer(&bodyJson, &testRequestContainer{})
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode request body"))
assert.Equal(t, http.StatusBadRequest, errJson.StatusCode())
})
}
func TestProcessRequestContainerFields(t *testing.T) {
t.Run("ok", func(t *testing.T) {
container := defaultRequestContainer()
errJson := ProcessRequestContainerFields(container)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "Zm9v", container.TestHexString)
assert.Equal(t, "", container.TestEmptyHexString)
assert.Equal(t, "ZBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=", container.TestUint256String)
})
t.Run("error", func(t *testing.T) {
errJson := ProcessRequestContainerFields("foo")
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not process request data"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestSetRequestBodyToRequestContainer(t *testing.T) {
var body bytes.Buffer
request := httptest.NewRequest("GET", "http://foo.example", &body)
errJson := SetRequestBodyToRequestContainer(defaultRequestContainer(), request)
require.Equal(t, true, errJson == nil)
container := &testRequestContainer{}
require.NoError(t, json.NewDecoder(request.Body).Decode(container))
assert.Equal(t, "test string", container.TestString)
contentLengthHeader, ok := request.Header["Content-Length"]
require.Equal(t, true, ok)
require.Equal(t, 1, len(contentLengthHeader), "wrong number of header values")
assert.Equal(t, "108", contentLengthHeader[0])
assert.Equal(t, int64(108), request.ContentLength)
}
func TestPrepareRequestForProxying(t *testing.T) {
middleware := &ApiProxyMiddleware{
GatewayAddress: "http://gateway.example",
}
// We will set some params to make the request more interesting.
endpoint := Endpoint{
Path: "/{url_param}",
RequestURLLiterals: []string{"url_param"},
RequestQueryParams: []QueryParam{{Name: "query_param"}},
}
var body bytes.Buffer
request := httptest.NewRequest("GET", "http://foo.example?query_param=bar", &body)
errJson := middleware.PrepareRequestForProxying(endpoint, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "http", request.URL.Scheme)
assert.Equal(t, middleware.GatewayAddress, request.URL.Host)
assert.Equal(t, "", request.RequestURI)
}
func TestReadGrpcResponseBody(t *testing.T) {
var b bytes.Buffer
b.Write([]byte("foo"))
body, jsonErr := ReadGrpcResponseBody(&b)
require.Equal(t, true, jsonErr == nil)
assert.Equal(t, "foo", string(body))
}
func TestHandleGrpcResponseError(t *testing.T) {
response := &http.Response{
StatusCode: 400,
Header: http.Header{
"Foo": []string{"foo"},
"Bar": []string{"bar"},
},
}
writer := httptest.NewRecorder()
errJson := &testErrorJson{
Message: "foo",
Code: 400,
}
b, err := json.Marshal(errJson)
require.NoError(t, err)
hasError, e := HandleGrpcResponseError(errJson, response, b, writer)
require.Equal(t, true, e == nil)
assert.Equal(t, true, hasError)
v, ok := writer.Header()["Foo"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "foo", v[0])
v, ok = writer.Header()["Bar"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "bar", v[0])
assert.Equal(t, 400, errJson.StatusCode())
}
func TestGrpcResponseIsEmpty(t *testing.T) {
t.Run("nil", func(t *testing.T) {
assert.Equal(t, true, GrpcResponseIsEmpty(nil))
})
t.Run("empty_slice", func(t *testing.T) {
assert.Equal(t, true, GrpcResponseIsEmpty(make([]byte, 0)))
})
t.Run("empty_brackets", func(t *testing.T) {
assert.Equal(t, true, GrpcResponseIsEmpty([]byte("{}")))
})
t.Run("non_empty", func(t *testing.T) {
assert.Equal(t, false, GrpcResponseIsEmpty([]byte("{\"foo\":\"bar\"})")))
})
}
func TestDeserializeGrpcResponseBodyIntoContainer(t *testing.T) {
t.Run("ok", func(t *testing.T) {
body, err := json.Marshal(defaultRequestContainer())
require.NoError(t, err)
container := &testRequestContainer{}
errJson := DeserializeGrpcResponseBodyIntoContainer(body, container)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "test string", container.TestString)
})
t.Run("error", func(t *testing.T) {
var bodyJson bytes.Buffer
bodyJson.Write([]byte("foo"))
errJson := DeserializeGrpcResponseBodyIntoContainer(bodyJson.Bytes(), &testRequestContainer{})
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not unmarshal response"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestProcessMiddlewareResponseFields(t *testing.T) {
t.Run("Ok", func(t *testing.T) {
container := defaultResponseContainer()
errJson := ProcessMiddlewareResponseFields(container)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "0x666f6f", container.TestHex)
assert.Equal(t, "0x", container.TestEmptyHex)
assert.Equal(t, "0x0000000000000000000000000000000000666F6f", container.TestAddress)
assert.Equal(t, "0x", container.TestEmptyAddress)
assert.Equal(t, "4196", container.TestUint256)
assert.Equal(t, "test enum", container.TestEnum)
assert.Equal(t, "1136214245", container.TestTime)
})
t.Run("error", func(t *testing.T) {
errJson := ProcessMiddlewareResponseFields("foo")
require.NotNil(t, errJson)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not process response data"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestSerializeMiddlewareResponseIntoJson(t *testing.T) {
container := defaultResponseContainer()
j, errJson := SerializeMiddlewareResponseIntoJson(container)
assert.Equal(t, true, errJson == nil)
cToDeserialize := &testResponseContainer{}
require.NoError(t, json.Unmarshal(j, cToDeserialize))
assert.Equal(t, "test string", cToDeserialize.TestString)
}
func TestWriteMiddlewareResponseHeadersAndBody(t *testing.T) {
t.Run("GET", func(t *testing.T) {
response := &http.Response{
Header: http.Header{
"Foo": []string{"foo"},
grpc.WithPrefix(grpc.HttpCodeMetadataKey): []string{"204"},
grpc.WithPrefix(api.VersionHeader): []string{"capella"},
},
}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
require.NoError(t, err)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, true, errJson == nil)
v, ok := writer.Header()["Foo"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "foo", v[0])
v, ok = writer.Header()["Content-Length"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "224", v[0])
v, ok = writer.Header()["Eth-Consensus-Version"]
require.Equal(t, true, ok, "header not found")
assert.Equal(t, "capella", v[0])
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, responseJson, writer.Body.Bytes())
})
t.Run("GET_no_grpc_status_code_header", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
StatusCode: 204,
}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
require.NoError(t, err)
writer := httptest.NewRecorder()
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, true, errJson == nil)
assert.Equal(t, 204, writer.Code)
})
t.Run("GET_invalid_status_code", func(t *testing.T) {
response := &http.Response{
Header: http.Header{"Grpc-Metadata-Eth-Consensus-Version": []string{"capella"}},
}
// Set invalid status code.
response.Header[grpc.WithPrefix(grpc.HttpCodeMetadataKey)] = []string{"invalid"}
response.Header[grpc.WithPrefix(api.VersionHeader)] = []string{"capella"}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
require.NoError(t, err)
writer := httptest.NewRecorder()
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, false, errJson == nil)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not parse status code"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
t.Run("POST", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
StatusCode: 204,
}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
require.NoError(t, err)
writer := httptest.NewRecorder()
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, true, errJson == nil)
assert.Equal(t, 204, writer.Code)
})
t.Run("POST_with_response_body", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
StatusCode: 204,
}
container := defaultResponseContainer()
responseJson, err := json.Marshal(container)
require.NoError(t, err)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, true, errJson == nil)
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, responseJson, writer.Body.Bytes())
})
t.Run("POST_with_empty_json_body", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
StatusCode: 204,
}
responseJson, err := json.Marshal(struct{}{})
require.NoError(t, err)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := WriteMiddlewareResponseHeadersAndBody(response, responseJson, writer)
require.Equal(t, true, errJson == nil)
assert.Equal(t, 204, writer.Code)
assert.DeepEqual(t, []byte(nil), writer.Body.Bytes())
assert.Equal(t, "0", writer.Header()["Content-Length"][0])
})
}
func TestWriteError(t *testing.T) {
t.Run("ok", func(t *testing.T) {
responseHeader := http.Header{
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"{\"CustomField\":\"bar\"}"},
}
errJson := &testErrorJson{
Message: "foo",
Code: 500,
}
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
WriteError(writer, errJson, responseHeader)
v, ok := writer.Header()["Content-Length"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "48", v[0])
v, ok = writer.Header()["Content-Type"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "application/json", v[0])
assert.Equal(t, 500, writer.Code)
eDeserialize := &testErrorJson{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), eDeserialize))
assert.Equal(t, "foo", eDeserialize.Message)
assert.Equal(t, 500, eDeserialize.Code)
assert.Equal(t, "bar", eDeserialize.CustomField)
})
t.Run("invalid_custom_error_header", func(t *testing.T) {
logHook := test.NewGlobal()
responseHeader := http.Header{
grpc.WithPrefix(grpc.CustomErrorMetadataKey): []string{"invalid"},
}
WriteError(httptest.NewRecorder(), &testErrorJson{}, responseHeader)
assert.LogsContain(t, logHook, "Could not unmarshal custom error message")
})
}


@@ -1,69 +0,0 @@
package apimiddleware
import (
"net/http"
"github.com/pkg/errors"
)
// ---------------
// Error handling.
// ---------------
// ErrorJson describes common functionality of all JSON error representations.
type ErrorJson interface {
StatusCode() int
SetCode(code int)
Msg() string
SetMsg(msg string)
}
// DefaultErrorJson is a JSON representation of a simple error value, containing only a message and an error code.
type DefaultErrorJson struct {
Message string `json:"message"`
Code int `json:"code"`
}
// InternalServerErrorWithMessage returns a DefaultErrorJson with 500 code and a custom message.
func InternalServerErrorWithMessage(err error, message string) *DefaultErrorJson {
e := errors.Wrapf(err, message)
return &DefaultErrorJson{
Message: e.Error(),
Code: http.StatusInternalServerError,
}
}
// InternalServerError returns a DefaultErrorJson with 500 code.
func InternalServerError(err error) *DefaultErrorJson {
return &DefaultErrorJson{
Message: err.Error(),
Code: http.StatusInternalServerError,
}
}
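// TimeoutError returns a DefaultErrorJson with a 408 Request Timeout code.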
func TimeoutError() *DefaultErrorJson {
return &DefaultErrorJson{
Message: "Request timeout",
Code: http.StatusRequestTimeout,
}
}
// StatusCode returns the error's underlying error code.
func (e *DefaultErrorJson) StatusCode() int {
return e.Code
}
// Msg returns the error's underlying message.
func (e *DefaultErrorJson) Msg() string {
return e.Message
}
// SetCode sets the error's underlying error code.
func (e *DefaultErrorJson) SetCode(code int) {
e.Code = code
}
// SetMsg sets the error's underlying message.
func (e *DefaultErrorJson) SetMsg(msg string) {
e.Message = msg
}


@@ -6,19 +6,17 @@ import (
"fmt"
"net"
"net/http"
"path"
"strings"
"time"
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/runtime"
"github.com/rs/cors"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
)
var _ runtime.Service = (*Gateway)(nil)
@@ -35,7 +33,6 @@ type PbHandlerRegistration func(context.Context, *gwruntime.ServeMux, *grpc.Clie
// MuxHandler is a function that implements the mux handler functionality.
type MuxHandler func(
apiMiddlewareHandler *apimiddleware.ApiProxyMiddleware,
h http.HandlerFunc,
w http.ResponseWriter,
req *http.Request,
@@ -43,16 +40,15 @@ type MuxHandler func(
// Config parameters for setting up the gateway service.
type config struct {
maxCallRecvMsgSize uint64
remoteCert string
gatewayAddr string
remoteAddr string
allowedOrigins []string
apiMiddlewareEndpointFactory apimiddleware.EndpointFactory
muxHandler MuxHandler
pbHandlers []*PbMux
router *mux.Router
timeout time.Duration
maxCallRecvMsgSize uint64
remoteCert string
gatewayAddr string
remoteAddr string
allowedOrigins []string
muxHandler MuxHandler
pbHandlers []*PbMux
router *mux.Router
timeout time.Duration
}
// Gateway is the gRPC gateway to serve HTTP JSON traffic as a proxy and forward it to the gRPC server.
@@ -61,7 +57,6 @@ type Gateway struct {
conn *grpc.ClientConn
server *http.Server
cancel context.CancelFunc
proxy *apimiddleware.ApiProxyMiddleware
ctx context.Context
startFailure error
}
@@ -109,15 +104,11 @@ func (g *Gateway) Start() {
}
}
corsMux := g.corsMiddleware(g.cfg.router)
if g.cfg.apiMiddlewareEndpointFactory != nil && !g.cfg.apiMiddlewareEndpointFactory.IsNil() {
g.registerApiMiddleware()
}
corsMux := server.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
if g.cfg.muxHandler != nil {
g.cfg.router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
g.cfg.muxHandler(g.proxy, corsMux.ServeHTTP, w, r)
g.cfg.muxHandler(corsMux.ServeHTTP, w, r)
})
}
@@ -167,35 +158,6 @@ func (g *Gateway) Stop() error {
return nil
}
func (g *Gateway) corsMiddleware(h http.Handler) http.Handler {
c := cors.New(cors.Options{
AllowedOrigins: g.cfg.allowedOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler(h)
}
const swaggerDir = "proto/prysm/v1alpha1/"
// SwaggerServer returns swagger specification files located under "/swagger/"
func SwaggerServer() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !strings.HasSuffix(r.URL.Path, ".swagger.json") {
log.Debugf("Not found: %s", r.URL.Path)
http.NotFound(w, r)
return
}
log.Debugf("Serving %s\n", r.URL.Path)
p := strings.TrimPrefix(r.URL.Path, "/swagger/")
p = path.Join(swaggerDir, p)
http.ServeFile(w, r, p)
}
}
// dial the gRPC server.
func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientConn, error) {
switch network {
@@ -211,19 +173,21 @@ func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientC
// dialTCP creates a client connection via TCP.
// "addr" must be a valid TCP address with a port number.
func (g *Gateway) dialTCP(ctx context.Context, addr string) (*grpc.ClientConn, error) {
security := grpc.WithInsecure()
var security grpc.DialOption
if len(g.cfg.remoteCert) > 0 {
creds, err := credentials.NewClientTLSFromFile(g.cfg.remoteCert, "")
if err != nil {
return nil, err
}
security = grpc.WithTransportCredentials(creds)
} else {
// Use insecure credentials when there's no remote cert provided.
security = grpc.WithTransportCredentials(insecure.NewCredentials())
}
opts := []grpc.DialOption{
security,
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}
@@ -240,19 +204,9 @@ func (g *Gateway) dialUnix(ctx context.Context, addr string) (*grpc.ClientConn,
return d(addr, 0)
}
opts := []grpc.DialOption{
grpc.WithInsecure(),
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(f),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}
func (g *Gateway) registerApiMiddleware() {
g.proxy = &apimiddleware.ApiProxyMiddleware{
GatewayAddress: g.cfg.gatewayAddr,
EndpointCreator: g.cfg.apiMiddlewareEndpointFactory,
Timeout: g.cfg.timeout,
}
log.Info("Starting API middleware")
g.proxy.Run(g.cfg.router)
}


@@ -10,7 +10,6 @@ import (
"testing"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
@@ -18,36 +17,18 @@ import (
"github.com/urfave/cli/v2"
)
type mockEndpointFactory struct {
}
func (*mockEndpointFactory) Paths() []string {
return []string{}
}
func (*mockEndpointFactory) Create(_ string) (*apimiddleware.Endpoint, error) {
return nil, nil
}
func (*mockEndpointFactory) IsNil() bool {
return false
}
func TestGateway_Customized(t *testing.T) {
r := mux.NewRouter()
cert := "cert"
origins := []string{"origin"}
size := uint64(100)
endpointFactory := &mockEndpointFactory{}
opts := []Option{
WithRouter(r),
WithRemoteCert(cert),
WithAllowedOrigins(origins),
WithMaxCallRecvMsgSize(size),
WithApiMiddleware(endpointFactory),
WithMuxHandler(func(
_ *apimiddleware.ApiProxyMiddleware,
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,
@@ -63,7 +44,6 @@ func TestGateway_Customized(t *testing.T) {
require.Equal(t, 1, len(g.cfg.allowedOrigins))
assert.Equal(t, origins[0], g.cfg.allowedOrigins[0])
assert.Equal(t, size, g.cfg.maxCallRecvMsgSize)
assert.Equal(t, endpointFactory, g.cfg.apiMiddlewareEndpointFactory)
}
func TestGateway_StartStop(t *testing.T) {
@@ -83,7 +63,6 @@ func TestGateway_StartStop(t *testing.T) {
WithGatewayAddr(gatewayAddress),
WithRemoteAddr(selfAddress),
WithMuxHandler(func(
_ *apimiddleware.ApiProxyMiddleware,
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,


@@ -5,7 +5,6 @@ import (
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
)
type Option func(g *Gateway) error
@@ -70,14 +69,6 @@ func WithMaxCallRecvMsgSize(size uint64) Option {
}
}
// WithApiMiddleware allows adding an API middleware proxy to the gateway.
func WithApiMiddleware(endpointFactory apimiddleware.EndpointFactory) Option {
return func(g *Gateway) error {
g.cfg.apiMiddlewareEndpointFactory = endpointFactory
return nil
}
}
// WithTimeout allows changing the timeout value for API calls.
func WithTimeout(seconds uint64) Option {
return func(g *Gateway) error {


@@ -7,4 +7,6 @@ const (
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)


@@ -3,16 +3,25 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"error.go",
"middleware.go",
"util.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/server",
visibility = ["//visibility:public"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["util_test.go"],
srcs = [
"error_test.go",
"middleware_test.go",
"util_test.go",
],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",

api/server/error.go (new file, 45 lines)

@@ -0,0 +1,45 @@
package server
import (
"fmt"
"strings"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
path []string
err error
}
// NewDecodeError wraps an error (either the initial decoding error or another DecodeError).
// The current field that failed decoding must be passed in.
func NewDecodeError(err error, field string) *DecodeError {
de, ok := err.(*DecodeError)
if ok {
return &DecodeError{path: append([]string{field}, de.path...), err: de.err}
}
return &DecodeError{path: []string{field}, err: err}
}
// Error returns the formatted error message which contains the full field name and the actual decoding error.
func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
}
func (e *IndexedVerificationFailureError) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
Index int `json:"index"`
Message string `json:"message"`
}

api/server/error_test.go (new file, 16 lines)

@@ -0,0 +1,16 @@
package server
import (
"errors"
"testing"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
)
func TestDecodeError(t *testing.T) {
e := errors.New("not a number")
de := NewDecodeError(e, "Z")
de = NewDecodeError(de, "Y")
de = NewDecodeError(de, "X")
assert.Equal(t, "could not decode X.Y.Z: not a number", de.Error())
}


@@ -2,8 +2,12 @@ package server
import (
"net/http"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
@@ -13,3 +17,16 @@ func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the CORS settings on API endpoints.
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}
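Editor's note: the following is an illustrative sketch, not part of the diff above. It shows how the mux.MiddlewareFunc returned by CorsHandler can be attached to a gorilla/mux router; the corsHandler helper simply mirrors the function added above, and the route, origin list, and listen address are assumptions for demonstration only.

package main

import (
	"net/http"

	"github.com/gorilla/mux"
	"github.com/rs/cors"
)

// corsHandler mirrors the CorsHandler added above.
func corsHandler(allowOrigins []string) mux.MiddlewareFunc {
	c := cors.New(cors.Options{
		AllowedOrigins:   allowOrigins,
		AllowedMethods:   []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
		AllowCredentials: true,
		MaxAge:           600,
		AllowedHeaders:   []string{"*"},
	})
	return c.Handler
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// Register the CORS middleware so origin checks and preflight handling run on matched routes.
	r.Use(corsHandler([]string{"http://localhost:3000"}))
	_ = http.ListenAndServe(":8080", r)
}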


@@ -0,0 +1,54 @@
package server
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // comma-separated values are split into repeated query keys
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, nil)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}


@@ -0,0 +1,40 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"block.go",
"conversions.go",
"conversions_block.go",
"conversions_state.go",
"endpoints_beacon.go",
"endpoints_blob.go",
"endpoints_builder.go",
"endpoints_config.go",
"endpoints_debug.go",
"endpoints_events.go",
"endpoints_lightclient.go",
"endpoints_node.go",
"endpoints_rewards.go",
"endpoints_validator.go",
"other.go",
"state.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/server/structs",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

api/server/structs/block.go (new file, 353 lines)

@@ -0,0 +1,353 @@
package structs
type SignedBeaconBlock struct {
Message *BeaconBlock `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlock struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBody `json:"body"`
}
type BeaconBlockBody struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
}
type SignedBeaconBlockAltair struct {
Message *BeaconBlockAltair `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockAltair struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyAltair `json:"body"`
}
type BeaconBlockBodyAltair struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
}
type SignedBeaconBlockBellatrix struct {
Message *BeaconBlockBellatrix `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockBellatrix struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyBellatrix `json:"body"`
}
type BeaconBlockBodyBellatrix struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayload `json:"execution_payload"`
}
type SignedBlindedBeaconBlockBellatrix struct {
Message *BlindedBeaconBlockBellatrix `json:"message"`
Signature string `json:"signature"`
}
type BlindedBeaconBlockBellatrix struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyBellatrix `json:"body"`
}
type BlindedBeaconBlockBodyBellatrix struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeader `json:"execution_payload_header"`
}
type SignedBeaconBlockCapella struct {
Message *BeaconBlockCapella `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockCapella struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyCapella `json:"body"`
}
type BeaconBlockBodyCapella struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadCapella `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
}
type SignedBlindedBeaconBlockCapella struct {
Message *BlindedBeaconBlockCapella `json:"message"`
Signature string `json:"signature"`
}
type BlindedBeaconBlockCapella struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyCapella `json:"body"`
}
type BlindedBeaconBlockBodyCapella struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderCapella `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
}
type SignedBeaconBlockContentsDeneb struct {
SignedBlock *SignedBeaconBlockDeneb `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type BeaconBlockContentsDeneb struct {
Block *BeaconBlockDeneb `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockDeneb struct {
Message *BeaconBlockDeneb `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockDeneb struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyDeneb `json:"body"`
}
type BeaconBlockBodyDeneb struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadDeneb `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type BlindedBeaconBlockDeneb struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyDeneb `json:"body"`
}
type SignedBlindedBeaconBlockDeneb struct {
Message *BlindedBeaconBlockDeneb `json:"message"`
Signature string `json:"signature"`
}
type BlindedBeaconBlockBodyDeneb struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashing `json:"attester_slashings"`
Attestations []*Attestation `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type SignedBeaconBlockHeaderContainer struct {
Header *SignedBeaconBlockHeader `json:"header"`
Root string `json:"root"`
Canonical bool `json:"canonical"`
}
type SignedBeaconBlockHeader struct {
Message *BeaconBlockHeader `json:"message"`
Signature string `json:"signature"`
}
type BeaconBlockHeader struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
BodyRoot string `json:"body_root"`
}
type ExecutionPayload struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
}
type ExecutionPayloadHeader struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
}
type ExecutionPayloadCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type ExecutionPayloadHeaderCapella struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
}
type ExecutionPayloadDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
Transactions []string `json:"transactions"`
Withdrawals []*Withdrawal `json:"withdrawals"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
StateRoot string `json:"state_root"`
ReceiptsRoot string `json:"receipts_root"`
LogsBloom string `json:"logs_bloom"`
PrevRandao string `json:"prev_randao"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
Timestamp string `json:"timestamp"`
ExtraData string `json:"extra_data"`
BaseFeePerGas string `json:"base_fee_per_gas"`
BlockHash string `json:"block_hash"`
TransactionsRoot string `json:"transactions_root"`
WithdrawalsRoot string `json:"withdrawals_root"`
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -0,0 +1,595 @@
package structs
import (
"errors"
"fmt"
"github.com/ethereum/go-ethereum/common/hexutil"
beaconState "github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
)
var errPayloadHeaderNotFound = errors.New("expected payload header not found")
func BeaconStateFromConsensus(st beaconState.BeaconState) (*BeaconState, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevAtts, err := st.PreviousEpochAttestations()
if err != nil {
return nil, err
}
prevAtts := make([]*PendingAttestation, len(srcPrevAtts))
for i, a := range srcPrevAtts {
prevAtts[i] = PendingAttestationFromConsensus(a)
}
srcCurrAtts, err := st.CurrentEpochAttestations()
if err != nil {
return nil, err
}
currAtts := make([]*PendingAttestation, len(srcCurrAtts))
for i, a := range srcCurrAtts {
currAtts[i] = PendingAttestationFromConsensus(a)
}
return &BeaconState{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochAttestations: prevAtts,
CurrentEpochAttestations: currAtts,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
}, nil
}
func BeaconStateAltairFromConsensus(st beaconState.BeaconState) (*BeaconStateAltair, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
return &BeaconStateAltair{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
}, nil
}
func BeaconStateBellatrixFromConsensus(st beaconState.BeaconState) (*BeaconStateBellatrix, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeader)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderFromConsensus(srcPayload)
if err != nil {
return nil, err
}
return &BeaconStateBellatrix{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
}, nil
}
func BeaconStateCapellaFromConsensus(st beaconState.BeaconState) (*BeaconStateCapella, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderCapellaFromConsensus(srcPayload)
if err != nil {
return nil, err
}
srcHs, err := st.HistoricalSummaries()
if err != nil {
return nil, err
}
hs := make([]*HistoricalSummary, len(srcHs))
for i, s := range srcHs {
hs[i] = HistoricalSummaryFromConsensus(s)
}
nwi, err := st.NextWithdrawalIndex()
if err != nil {
return nil, err
}
nwvi, err := st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
return &BeaconStateCapella{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
NextWithdrawalIndex: fmt.Sprintf("%d", nwi),
NextWithdrawalValidatorIndex: fmt.Sprintf("%d", nwvi),
HistoricalSummaries: hs,
}, nil
}
func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDeneb, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderDenebFromConsensus(srcPayload)
if err != nil {
return nil, err
}
srcHs, err := st.HistoricalSummaries()
if err != nil {
return nil, err
}
hs := make([]*HistoricalSummary, len(srcHs))
for i, s := range srcHs {
hs[i] = HistoricalSummaryFromConsensus(s)
}
nwi, err := st.NextWithdrawalIndex()
if err != nil {
return nil, err
}
nwvi, err := st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
return &BeaconStateDeneb{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
NextWithdrawalIndex: fmt.Sprintf("%d", nwi),
NextWithdrawalValidatorIndex: fmt.Sprintf("%d", nwvi),
HistoricalSummaries: hs,
}, nil
}
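These converters are fork-specific, so a caller has to pick the one matching the state's version before marshalling the result. The sketch below is illustrative only and not part of this change; the jsonForState name is hypothetical, and it assumes the Version() accessor on the state interface, the runtime/version constants, and an "encoding/json" import in addition to the imports above.
// jsonForState is a hypothetical helper showing how the fork-specific
// converters above might be selected and the result marshalled to JSON.
// Assumed extra imports: "encoding/json" and
// "github.com/prysmaticlabs/prysm/v4/runtime/version".
func jsonForState(st beaconState.BeaconState) ([]byte, error) {
	var data interface{}
	var err error
	switch st.Version() {
	case version.Phase0:
		data, err = BeaconStateFromConsensus(st)
	case version.Altair:
		data, err = BeaconStateAltairFromConsensus(st)
	case version.Bellatrix:
		data, err = BeaconStateBellatrixFromConsensus(st)
	case version.Capella:
		data, err = BeaconStateCapellaFromConsensus(st)
	case version.Deneb:
		data, err = BeaconStateDenebFromConsensus(st)
	default:
		return nil, fmt.Errorf("unsupported beacon state version %d", st.Version())
	}
	if err != nil {
		return nil, err
	}
	return json.Marshal(data)
}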


@@ -1,52 +1,45 @@
package beacon
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type BlockRootResponse struct {
Data *struct {
Root string `json:"root"`
} `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *BlockRoot `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type BlockRoot struct {
Root string `json:"root"`
}
type GetCommitteesResponse struct {
Data []*shared.Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type DepositContractResponse struct {
Data *struct {
ChainId string `json:"chain_id"`
Address string `json:"address"`
} `json:"data"`
Data []*Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type ListAttestationsResponse struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type SubmitAttestationsRequest struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type ListVoluntaryExitsResponse struct {
Data []*shared.SignedVoluntaryExit `json:"data"`
Data []*SignedVoluntaryExit `json:"data"`
}
type SubmitSyncCommitteeSignaturesRequest struct {
Data []*shared.SyncCommitteeMessage `json:"data"`
Data []*SyncCommitteeMessage `json:"data"`
}
type GetStateForkResponse struct {
Data *shared.Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetFinalityCheckpointsResponse struct {
@@ -56,9 +49,9 @@ type GetFinalityCheckpointsResponse struct {
}
type FinalityCheckpoints struct {
PreviousJustified *shared.Checkpoint `json:"previous_justified"`
CurrentJustified *shared.Checkpoint `json:"current_justified"`
Finalized *shared.Checkpoint `json:"finalized"`
PreviousJustified *Checkpoint `json:"previous_justified"`
CurrentJustified *Checkpoint `json:"current_justified"`
Finalized *Checkpoint `json:"finalized"`
}
type GetGenesisResponse struct {
@@ -72,15 +65,20 @@ type Genesis struct {
}
type GetBlockHeadersResponse struct {
Data []*shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetBlockHeaderResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *SignedBeaconBlockHeaderContainer `json:"data"`
}
type GetValidatorsRequest struct {
Ids []string `json:"ids"`
Statuses []string `json:"statuses"`
}
type GetValidatorsResponse struct {
@@ -108,17 +106,6 @@ type ValidatorContainer struct {
Validator *Validator `json:"validator"`
}
type Validator struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`
ActivationEligibilityEpoch string `json:"activation_eligibility_epoch"`
ActivationEpoch string `json:"activation_epoch"`
ExitEpoch string `json:"exit_epoch"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type ValidatorBalance struct {
Index string `json:"index"`
Balance string `json:"balance"`
@@ -141,9 +128,9 @@ type SignedBlock struct {
}
type GetBlockAttestationsResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*shared.Attestation `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Attestation `json:"data"`
}
type GetStateRootResponse struct {
@@ -178,5 +165,22 @@ type SyncCommitteeValidators struct {
}
type BLSToExecutionChangesPoolResponse struct {
Data []*shared.SignedBLSToExecutionChange `json:"data"`
Data []*SignedBLSToExecutionChange `json:"data"`
}
type GetAttesterSlashingsResponse struct {
Data []*AttesterSlashing `json:"data"`
}
type GetProposerSlashingsResponse struct {
Data []*ProposerSlashing `json:"data"`
}
type GetWeakSubjectivityResponse struct {
Data *WeakSubjectivityData `json:"data"`
}
type WeakSubjectivityData struct {
WsCheckpoint *Checkpoint `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}


@@ -0,0 +1,14 @@
package structs
type SidecarsResponse struct {
Data []*Sidecar `json:"data"`
}
type Sidecar struct {
Index string `json:"index"`
Blob string `json:"blob"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitment string `json:"kzg_commitment"`
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}


@@ -1,4 +1,4 @@
package builder
package structs
type ExpectedWithdrawalsResponse struct {
Data []*ExpectedWithdrawal `json:"data"`


@@ -0,0 +1,18 @@
package structs
type GetDepositContractResponse struct {
Data *DepositContractData `json:"data"`
}
type DepositContractData struct {
ChainId string `json:"chain_id"`
Address string `json:"address"`
}
type GetForkScheduleResponse struct {
Data []*Fork `json:"data"`
}
type GetSpecResponse struct {
Data interface{} `json:"data"`
}


@@ -0,0 +1,57 @@
package structs
import (
"encoding/json"
)
type GetBeaconStateV2Response struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data json.RawMessage `json:"data"` // state fields, encoded according to the Version field
}
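Because Data is a json.RawMessage whose shape depends on Version, a client generally decodes the envelope first and then decodes Data into the matching fork-specific state struct. A minimal sketch under those assumptions; the decodeState name is hypothetical, only two fork cases are shown, and "fmt" is assumed as an extra import.
// decodeState is a hypothetical client-side helper that unwraps the
// version-tagged raw Data into a concrete state struct.
func decodeState(body []byte) (interface{}, error) {
	var resp GetBeaconStateV2Response
	if err := json.Unmarshal(body, &resp); err != nil {
		return nil, err
	}
	switch resp.Version {
	case "phase0":
		st := &BeaconState{}
		return st, json.Unmarshal(resp.Data, st)
	case "deneb":
		st := &BeaconStateDeneb{}
		return st, json.Unmarshal(resp.Data, st)
	default:
		return nil, fmt.Errorf("unsupported state version %q", resp.Version)
	}
}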
type GetForkChoiceHeadsV2Response struct {
Data []*ForkChoiceHead `json:"data"`
}
type ForkChoiceHead struct {
Root string `json:"root"`
Slot string `json:"slot"`
ExecutionOptimistic bool `json:"execution_optimistic"`
}
type GetForkChoiceDumpResponse struct {
JustifiedCheckpoint *Checkpoint `json:"justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
ForkChoiceNodes []*ForkChoiceNode `json:"fork_choice_nodes"`
ExtraData *ForkChoiceDumpExtraData `json:"extra_data"`
}
type ForkChoiceDumpExtraData struct {
UnrealizedJustifiedCheckpoint *Checkpoint `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *Checkpoint `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root"`
HeadRoot string `json:"head_root"`
}
type ForkChoiceNode struct {
Slot string `json:"slot"`
BlockRoot string `json:"block_root"`
ParentRoot string `json:"parent_root"`
JustifiedEpoch string `json:"justified_epoch"`
FinalizedEpoch string `json:"finalized_epoch"`
Weight string `json:"weight"`
Validity string `json:"validity"`
ExecutionBlockHash string `json:"execution_block_hash"`
ExtraData *ForkChoiceNodeExtraData `json:"extra_data"`
}
type ForkChoiceNodeExtraData struct {
UnrealizedJustifiedEpoch string `json:"unrealized_justified_epoch"`
UnrealizedFinalizedEpoch string `json:"unrealized_finalized_epoch"`
Balance string `json:"balance"`
ExecutionOptimistic bool `json:"execution_optimistic"`
TimeStamp string `json:"timestamp"`
}


@@ -0,0 +1,116 @@
package structs
import (
"encoding/json"
)
type HeadEvent struct {
Slot string `json:"slot"`
Block string `json:"block"`
State string `json:"state"`
EpochTransition bool `json:"epoch_transition"`
ExecutionOptimistic bool `json:"execution_optimistic"`
PreviousDutyDependentRoot string `json:"previous_duty_dependent_root"`
CurrentDutyDependentRoot string `json:"current_duty_dependent_root"`
}
type BlockEvent struct {
Slot string `json:"slot"`
Block string `json:"block"`
ExecutionOptimistic bool `json:"execution_optimistic"`
}
type AggregatedAttEventSource struct {
Aggregate *Attestation `json:"aggregate"`
}
type UnaggregatedAttEventSource struct {
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type FinalizedCheckpointEvent struct {
Block string `json:"block"`
State string `json:"state"`
Epoch string `json:"epoch"`
ExecutionOptimistic bool `json:"execution_optimistic"`
}
type ChainReorgEvent struct {
Slot string `json:"slot"`
Depth string `json:"depth"`
OldHeadBlock string `json:"old_head_block"`
NewHeadBlock string `json:"new_head_block"`
OldHeadState string `json:"old_head_state"`
NewHeadState string `json:"new_head_state"`
Epoch string `json:"epoch"`
ExecutionOptimistic bool `json:"execution_optimistic"`
}
type PayloadAttributesEvent struct {
Version string `json:"version"`
Data json.RawMessage `json:"data"`
}
type PayloadAttributesEventData struct {
ProposerIndex string `json:"proposer_index"`
ProposalSlot string `json:"proposal_slot"`
ParentBlockNumber string `json:"parent_block_number"`
ParentBlockRoot string `json:"parent_block_root"`
ParentBlockHash string `json:"parent_block_hash"`
PayloadAttributes json.RawMessage `json:"payload_attributes"`
}
type PayloadAttributesV1 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
}
type PayloadAttributesV2 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type PayloadAttributesV3 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
ParentBeaconBlockRoot string `json:"parent_beacon_block_root"`
}
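The payload_attributes event nests raw JSON twice: Data carries a PayloadAttributesEventData, and its PayloadAttributes field in turn carries one of the versioned shapes above. A minimal decoding sketch for the Deneb case follows; the helper name is hypothetical and not part of this change, and it relies only on the "encoding/json" import already present in this file.
// decodeDenebPayloadAttributes is a hypothetical helper showing how the two
// raw-JSON layers of a payload_attributes event might be unwrapped for a
// Deneb (V3) payload.
func decodeDenebPayloadAttributes(ev *PayloadAttributesEvent) (*PayloadAttributesV3, error) {
	var data PayloadAttributesEventData
	if err := json.Unmarshal(ev.Data, &data); err != nil {
		return nil, err
	}
	attr := &PayloadAttributesV3{}
	if err := json.Unmarshal(data.PayloadAttributes, attr); err != nil {
		return nil, err
	}
	return attr, nil
}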
type BlobSidecarEvent struct {
BlockRoot string `json:"block_root"`
Index string `json:"index"`
Slot string `json:"slot"`
KzgCommitment string `json:"kzg_commitment"`
VersionedHash string `json:"versioned_hash"`
}
type LightClientFinalityUpdateEvent struct {
Version string `json:"version"`
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientFinalityUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}


@@ -0,0 +1,31 @@
package structs
type LightClientBootstrapResponse struct {
Version string `json:"version"`
Data *LightClientBootstrap `json:"data"`
}
type LightClientBootstrap struct {
Header *BeaconBlockHeader `json:"header"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
CurrentSyncCommitteeBranch []string `json:"current_sync_committee_branch"`
}
type LightClientUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientUpdateWithVersion struct {
Version string `json:"version"`
Data *LightClientUpdate `json:"data"`
}
type LightClientUpdatesByRangeResponse struct {
Updates []*LightClientUpdateWithVersion `json:"updates"`
}


@@ -1,4 +1,4 @@
package node
package structs
type SyncStatusResponse struct {
Data *SyncStatusResponseData `json:"data"`
@@ -63,3 +63,11 @@ type GetVersionResponse struct {
type Version struct {
Version string `json:"version"`
}
type AddrRequest struct {
Addr string `json:"addr"`
}
type PeersResponse struct {
Peers []*Peer `json:"peers"`
}


@@ -1,4 +1,4 @@
package rewards
package structs
type BlockRewardsResponse struct {
Data *BlockRewards `json:"data"`
@@ -31,6 +31,7 @@ type IdealAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
Inactivity string `json:"inactivity"`
}
type TotalAttestationReward struct {
@@ -38,7 +39,7 @@ type TotalAttestationReward struct {
Head string `json:"head"`
Target string `json:"target"`
Source string `json:"source"`
InclusionDelay string `json:"inclusion_delay"`
Inactivity string `json:"inactivity"`
}
type SyncCommitteeRewardsResponse struct {


@@ -1,37 +1,37 @@
package validator
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
type AggregateAttestationResponse struct {
Data *shared.Attestation `json:"data"`
Data *Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {
Data []*shared.SignedContributionAndProof `json:"data"`
Data []*SignedContributionAndProof `json:"data"`
}
type SubmitAggregateAndProofsRequest struct {
Data []*shared.SignedAggregateAttestationAndProof `json:"data"`
Data []*SignedAggregateAttestationAndProof `json:"data"`
}
type SubmitSyncCommitteeSubscriptionsRequest struct {
Data []*shared.SyncCommitteeSubscription `json:"data"`
Data []*SyncCommitteeSubscription `json:"data"`
}
type SubmitBeaconCommitteeSubscriptionsRequest struct {
Data []*shared.BeaconCommitteeSubscription `json:"data"`
Data []*BeaconCommitteeSubscription `json:"data"`
}
type GetAttestationDataResponse struct {
Data *shared.AttestationData `json:"data"`
Data *AttestationData `json:"data"`
}
type ProduceSyncCommitteeContributionResponse struct {
Data *shared.SyncCommitteeContribution `json:"data"`
Data *SyncCommitteeContribution `json:"data"`
}
type GetAttesterDutiesResponse struct {
@@ -83,10 +83,38 @@ type ProduceBlockV3Response struct {
}
type GetLivenessResponse struct {
Data []*ValidatorLiveness `json:"data"`
Data []*Liveness `json:"data"`
}
type ValidatorLiveness struct {
type Liveness struct {
Index string `json:"index"`
IsLive bool `json:"is_live"`
}
type GetValidatorCountResponse struct {
ExecutionOptimistic string `json:"execution_optimistic"`
Finalized string `json:"finalized"`
Data []*ValidatorCount `json:"data"`
}
type ValidatorCount struct {
Status string `json:"status"`
Count string `json:"count"`
}
type GetValidatorPerformanceRequest struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
Indices []primitives.ValidatorIndex `json:"indices,omitempty"`
}
type GetValidatorPerformanceResponse struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
CorrectlyVotedSource []bool `json:"correctly_voted_source,omitempty"`
CorrectlyVotedTarget []bool `json:"correctly_voted_target,omitempty"`
CorrectlyVotedHead []bool `json:"correctly_voted_head,omitempty"`
CurrentEffectiveBalances []uint64 `json:"current_effective_balances,omitempty"`
BalancesBeforeEpochTransition []uint64 `json:"balances_before_epoch_transition,omitempty"`
BalancesAfterEpochTransition []uint64 `json:"balances_after_epoch_transition,omitempty"`
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}

api/server/structs/other.go

@@ -0,0 +1,209 @@
package structs
type Validator struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`
ActivationEligibilityEpoch string `json:"activation_eligibility_epoch"`
ActivationEpoch string `json:"activation_epoch"`
ExitEpoch string `json:"exit_epoch"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type PendingAttestation struct {
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
InclusionDelay string `json:"inclusion_delay"`
ProposerIndex string `json:"proposer_index"`
}
type HistoricalSummary struct {
BlockSummaryRoot string `json:"block_summary_root"`
StateSummaryRoot string `json:"state_summary_root"`
}
type Attestation struct {
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type AttestationData struct {
Slot string `json:"slot"`
CommitteeIndex string `json:"index"`
BeaconBlockRoot string `json:"beacon_block_root"`
Source *Checkpoint `json:"source"`
Target *Checkpoint `json:"target"`
}
type Checkpoint struct {
Epoch string `json:"epoch"`
Root string `json:"root"`
}
type Committee struct {
Index string `json:"index"`
Slot string `json:"slot"`
Validators []string `json:"validators"`
}
type SignedContributionAndProof struct {
Message *ContributionAndProof `json:"message"`
Signature string `json:"signature"`
}
type ContributionAndProof struct {
AggregatorIndex string `json:"aggregator_index"`
Contribution *SyncCommitteeContribution `json:"contribution"`
SelectionProof string `json:"selection_proof"`
}
type SyncCommitteeContribution struct {
Slot string `json:"slot"`
BeaconBlockRoot string `json:"beacon_block_root"`
SubcommitteeIndex string `json:"subcommittee_index"`
AggregationBits string `json:"aggregation_bits"`
Signature string `json:"signature"`
}
type SignedAggregateAttestationAndProof struct {
Message *AggregateAttestationAndProof `json:"message"`
Signature string `json:"signature"`
}
type AggregateAttestationAndProof struct {
AggregatorIndex string `json:"aggregator_index"`
Aggregate *Attestation `json:"aggregate"`
SelectionProof string `json:"selection_proof"`
}
type SyncCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index"`
SyncCommitteeIndices []string `json:"sync_committee_indices"`
UntilEpoch string `json:"until_epoch"`
}
type BeaconCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index"`
CommitteeIndex string `json:"committee_index"`
CommitteesAtSlot string `json:"committees_at_slot"`
Slot string `json:"slot"`
IsAggregator bool `json:"is_aggregator"`
}
type ValidatorRegistration struct {
FeeRecipient string `json:"fee_recipient"`
GasLimit string `json:"gas_limit"`
Timestamp string `json:"timestamp"`
Pubkey string `json:"pubkey"`
}
type SignedValidatorRegistration struct {
Message *ValidatorRegistration `json:"message"`
Signature string `json:"signature"`
}
type FeeRecipient struct {
ValidatorIndex string `json:"validator_index"`
FeeRecipient string `json:"fee_recipient"`
}
type SignedVoluntaryExit struct {
Message *VoluntaryExit `json:"message"`
Signature string `json:"signature"`
}
type VoluntaryExit struct {
Epoch string `json:"epoch"`
ValidatorIndex string `json:"validator_index"`
}
type Fork struct {
PreviousVersion string `json:"previous_version"`
CurrentVersion string `json:"current_version"`
Epoch string `json:"epoch"`
}
type SignedBLSToExecutionChange struct {
Message *BLSToExecutionChange `json:"message"`
Signature string `json:"signature"`
}
type BLSToExecutionChange struct {
ValidatorIndex string `json:"validator_index"`
FromBLSPubkey string `json:"from_bls_pubkey"`
ToExecutionAddress string `json:"to_execution_address"`
}
type SyncCommitteeMessage struct {
Slot string `json:"slot"`
BeaconBlockRoot string `json:"beacon_block_root"`
ValidatorIndex string `json:"validator_index"`
Signature string `json:"signature"`
}
type SyncCommittee struct {
Pubkeys []string `json:"pubkeys"`
AggregatePubkey string `json:"aggregate_pubkey"`
}
// SyncDetails contains information about node sync status.
type SyncDetails struct {
HeadSlot string `json:"head_slot"`
SyncDistance string `json:"sync_distance"`
IsSyncing bool `json:"is_syncing"`
IsOptimistic bool `json:"is_optimistic"`
ElOffline bool `json:"el_offline"`
}
// SyncDetailsContainer wraps SyncDetails in a data field.
type SyncDetailsContainer struct {
Data *SyncDetails `json:"data"`
}
type Eth1Data struct {
DepositRoot string `json:"deposit_root"`
DepositCount string `json:"deposit_count"`
BlockHash string `json:"block_hash"`
}
type ProposerSlashing struct {
SignedHeader1 *SignedBeaconBlockHeader `json:"signed_header_1"`
SignedHeader2 *SignedBeaconBlockHeader `json:"signed_header_2"`
}
type AttesterSlashing struct {
Attestation1 *IndexedAttestation `json:"attestation_1"`
Attestation2 *IndexedAttestation `json:"attestation_2"`
}
type Deposit struct {
Proof []string `json:"proof"`
Data *DepositData `json:"data"`
}
type DepositData struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
}
type IndexedAttestation struct {
AttestingIndices []string `json:"attesting_indices"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type SyncAggregate struct {
SyncCommitteeBits string `json:"sync_committee_bits"`
SyncCommitteeSignature string `json:"sync_committee_signature"`
}
type Withdrawal struct {
WithdrawalIndex string `json:"index"`
ValidatorIndex string `json:"validator_index"`
ExecutionAddress string `json:"address"`
Amount string `json:"amount"`
}

api/server/structs/state.go

@@ -0,0 +1,142 @@
package structs
type BeaconState struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochAttestations []*PendingAttestation `json:"previous_epoch_attestations"`
CurrentEpochAttestations []*PendingAttestation `json:"current_epoch_attestations"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
}
type BeaconStateAltair struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
}
type BeaconStateBellatrix struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeader `json:"latest_execution_payload_header"`
}
type BeaconStateCapella struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderCapella `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
}
type BeaconStateDeneb struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderDeneb `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
}


@@ -1,19 +0,0 @@
load("//tools:target_migration.bzl", "moved_targets")
moved_targets(
[
":push_images_debug",
":push_images_alpine",
":push_images",
":image_bundle_debug",
":image_debug",
":image_bundle_alpine",
":image_bundle",
":image_with_creation_time",
":image_alpine",
":image",
":go_default_test",
":beacon-chain",
],
"//cmd/beacon-chain",
)


@@ -6,6 +6,7 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"defragment.go",
"error.go",
"execution_engine.go",
"forkchoice_update_execution.go",
@@ -26,6 +27,7 @@ go_library(
"receive_blob.go",
"receive_block.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain",
@@ -49,9 +51,10 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
@@ -69,6 +72,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -97,26 +101,22 @@ go_library(
],
)
test_suite(
name = "go_default_test",
tests = [
":go_raceoff_test",
":go_raceon_test",
],
)
go_test(
name = "go_raceoff_test",
name = "go_default_test",
size = "medium",
srcs = [
"blockchain_test.go",
"chain_info_norace_test.go",
"chain_info_test.go",
"checktags_test.go",
"error_test.go",
"execution_engine_test.go",
"forkchoice_update_execution_test.go",
"head_sync_committee_info_test.go",
"head_test.go",
"init_sync_process_block_test.go",
"init_test.go",
"lightclient_test.go",
"log_test.go",
"metrics_test.go",
"mock_test.go",
@@ -125,98 +125,67 @@ go_test(
"process_block_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"service_norace_test.go",
"service_test.go",
"setup_test.go",
"weak_subjectivity_checks_test.go",
],
embed = [":go_default_library"],
gotags = ["develop"],
tags = ["CI_race_detection"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stateutil:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blocks/testing:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/trie:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@in_gopkg_d4l3k_messagediff_v1//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_x_net//context:go_default_library",
],
)
go_test(
name = "go_raceon_test",
srcs = [
"chain_info_norace_test.go",
"checktags_test.go",
"init_test.go",
"mock_test.go",
"receive_block_test.go",
"service_norace_test.go",
"setup_test.go",
],
embed = [":go_default_library"],
gc_goopts = [
# Go 1.14 enables checkptr by default when building with -race or -msan. There is a pointer
# issue in boltdb, so must disable checkptr at compile time. This flag can be removed once
# the project is migrated to etcd's version of boltdb and the issue has been fixed.
# See: https://github.com/etcd-io/bbolt/issues/187.
"-d=checkptr=0",
],
gotags = ["develop"],
race = "on",
tags = ["race_on"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blocks/testing:go_default_library",
"//container/trie:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_holiman_uint256//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"@org_golang_x_net//context:go_default_library",
],
)


@@ -6,19 +6,21 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -45,7 +47,7 @@ type ForkchoiceFetcher interface {
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, [32]byte) error
ForkChoiceDump(context.Context) (*ethpbv1.ForkChoiceDump, error)
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
NewSlot(context.Context, primitives.Slot) error
ProposerBoost() [32]byte
}
@@ -75,6 +77,7 @@ type HeadFetcher interface {
HeadPublicKeyToValidatorIndex(pubKey [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool)
HeadValidatorIndexToPublicKey(ctx context.Context, index primitives.ValidatorIndex) ([fieldparams.BLSPubkeyLength]byte, error)
ChainHeads() ([][32]byte, []primitives.Slot)
TargetRootForEpoch([32]byte, primitives.Epoch) ([32]byte, error)
HeadSyncCommitteeFetcher
HeadDomainFetcher
}
@@ -333,12 +336,21 @@ func (s *Service) HeadValidatorIndexToPublicKey(_ context.Context, index primiti
return v.PublicKey(), nil
}
// ForkChoicer returns the forkchoice interface.
func (s *Service) ForkChoicer() f.ForkChoicer {
return s.cfg.ForkChoiceStore
}
// IsOptimistic returns true if the current head is optimistic.
func (s *Service) IsOptimistic(_ context.Context) (bool, error) {
if slots.ToEpoch(s.CurrentSlot()) < params.BeaconConfig().BellatrixForkEpoch {
return false, nil
}
s.headLock.RLock()
if s.head == nil {
s.headLock.RUnlock()
return false, ErrNilHead
}
headRoot := s.head.root
headSlot := s.head.slot
headOptimistic := s.head.optimistic
@@ -464,6 +476,13 @@ func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool,
return !isCanonical, nil
}
// TargetRootForEpoch wraps the corresponding method in forkchoice
func (s *Service) TargetRootForEpoch(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.TargetRootForEpoch(root, epoch)
}
// Ancestor returns the block root of an ancestry block from the input block root.
//
// Spec pseudocode definition:
@@ -530,3 +549,17 @@ func (s *Service) recoverStateSummary(ctx context.Context, blockRoot [32]byte) (
func (s *Service) BlockBeingSynced(root [32]byte) bool {
return s.blockBeingSynced.isSyncing(root)
}
// RecentBlockSlot returns the block slot from the fork choice store
func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync queries the initial sync service to
// determine if the node is in regular sync or is still
// syncing to the head of the chain.
func (s *Service) inRegularSync() bool {
return s.cfg.SyncChecker.Synced()
}


@@ -4,8 +4,8 @@ import (
"context"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
)
// CachedHeadRoot returns the corresponding value from Forkchoice
@@ -22,6 +22,13 @@ func (s *Service) GetProposerHead() [32]byte {
return s.cfg.ForkChoiceStore.GetProposerHead()
}
// ShouldOverrideFCU returns the corresponding value from forkchoice
func (s *Service) ShouldOverrideFCU() bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ShouldOverrideFCU()
}
// SetForkChoiceGenesisTime sets the genesis time in Forkchoice
func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.Lock()
@@ -51,7 +58,7 @@ func (s *Service) InsertNode(ctx context.Context, st state.BeaconState, root [32
}
// ForkChoiceDump returns the corresponding value from forkchoice
func (s *Service) ForkChoiceDump(ctx context.Context) (*ethpbv1.ForkChoiceDump, error) {
func (s *Service) ForkChoiceDump(ctx context.Context) (*forkchoice.Dump, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ForkChoiceDump(ctx)


@@ -429,6 +429,11 @@ func TestService_IsOptimistic(t *testing.T) {
opt, err = c.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, opt)
// If head is nil for some reason, an error should be returned rather than panicking.
c = &Service{}
_, err = c.IsOptimistic(ctx)
require.ErrorIs(t, err, ErrNilHead)
}
func TestService_IsOptimisticBeforeBellatrix(t *testing.T) {


@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/time"
)
var stateDefragmentationTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "head_state_defragmentation_milliseconds",
Help: "Milliseconds it takes to defragment the head state",
})
// defragmentState defragments the provided state: any field with a high number
// of fragmented indexes is reallocated into a new, contiguous slice for that
// field.
func (s *Service) defragmentState(st state.BeaconState) {
if !features.Get().EnableExperimentalState {
return
}
startTime := time.Now()
st.Defragment()
elapsedTime := time.Since(startTime)
stateDefragmentationTime.Observe(float64(elapsedTime.Milliseconds()))
}


@@ -28,8 +28,12 @@ var (
// ErrNotCheckpoint is returned when a given checkpoint is not a
// checkpoint in any chain known to forkchoice
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.
// Some examples of invalid blocks are:


@@ -7,11 +7,11 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
@@ -32,21 +32,18 @@ const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdateArg is the argument for the forkchoice update notification `notifyForkchoiceUpdate`.
type notifyForkchoiceUpdateArg struct {
headState state.BeaconState
headRoot [32]byte
headBlock interfaces.ReadOnlyBeaconBlock
}
// notifyForkchoiceUpdate signals the fork choice update to the execution engine. The execution engine should:
// 1. Re-organize the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Apply finality to the execution state: it irreversibly persists the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkchoiceUpdateArg) (*enginev1.PayloadIDBytes, error) {
func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdate")
defer span.End()
headBlk := arg.headBlock
if arg.headBlock.IsNil() {
log.Error("Head block is nil")
return nil, nil
}
headBlk := arg.headBlock.Block()
if headBlk == nil || headBlk.IsNil() || headBlk.Body().IsNil() {
log.Error("Head block is nil")
return nil, nil
@@ -72,11 +69,10 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
nextSlot := s.CurrentSlot() + 1 // Cache payload ID for next slot proposer.
hasAttr, attr, proposerId := s.getPayloadAttribute(ctx, arg.headState, nextSlot, arg.headRoot[:])
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attr)
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch err {
case execution.ErrAcceptedSyncingPayloadStatus:
@@ -122,10 +118,11 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithError(err).Error("Could not get head state")
return nil, nil
}
pid, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: st,
headRoot: r,
headBlock: b.Block(),
pid, err := s.notifyForkchoiceUpdate(ctx, &fcuConfig{
headState: st,
headRoot: r,
headBlock: b,
attributes: arg.attributes,
})
if err != nil {
return nil, err // Returning err because it's recursive here.
@@ -153,11 +150,18 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
log.WithError(err).Error("Could not set head root to valid")
return nil, nil
}
// If the forkchoice update call has an attribute, update the proposer payload ID cache.
// If the forkchoice update call has an attribute, update the payload ID cache.
hasAttr := arg.attributes != nil && !arg.attributes.IsEmpty()
nextSlot := s.CurrentSlot() + 1
if hasAttr && payloadID != nil {
var pId [8]byte
copy(pId[:], payloadID[:])
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(arg.headRoot[:])),
"headSlot": headBlk.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(nextSlot, arg.headRoot, pId)
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", headPayload.BlockHash()),
@@ -272,56 +276,50 @@ func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [
// getPayloadAttribute returns the payload attributes for the given state and slot.
// The attribute is required to initiate a payload build process in the context of an `engine_forkchoiceUpdated` call.
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) (bool, payloadattribute.Attributer, primitives.ValidatorIndex) {
func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState, slot primitives.Slot, headRoot []byte) payloadattribute.Attributer {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
// Root is `[32]byte{}` since we are retrieving proposer ID of a given slot. During insertion at assignment the root was not known.
proposerID, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
if !ok && !features.Get().PrepareAllPayloads { // There's no need to build attribute if there is no proposer for slot.
return false, emptyAttri, 0
}
// Get previous randao.
// If it is an epoch boundary then process slots to get the right
// shuffling before checking if the proposer is tracked. Otherwise
// perform this check before. This is cheap as the NSC has already been updated.
var val cache.TrackedValidator
var ok bool
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(st.Slot())
if e == stateEpoch {
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
st = st.Copy()
if slot > st.Slot() {
var err error
st, err = transition.ProcessSlotsUsingNextSlotCache(ctx, st, headRoot, slot)
if err != nil {
log.WithError(err).Error("Could not process slots to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
}
if e > stateEpoch {
emptyAttri := payloadattribute.EmptyWithVersion(st.Version())
val, ok = s.trackedProposer(st, slot)
if !ok {
return emptyAttri
}
}
// Get previous randao.
prevRando, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
log.WithError(err).Error("Could not get randao mix to get payload attribute")
return false, emptyAttri, 0
}
// Get fee recipient.
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := s.cfg.BeaconDB.FeeRecipientByValidatorID(ctx, proposerID)
switch {
case errors.Is(err, kv.ErrNotFoundFeeRecipient):
if feeRecipient.String() == params.BeaconConfig().EthBurnAddressHex {
logrus.WithFields(logrus.Fields{
"validatorIndex": proposerID,
"burnAddress": params.BeaconConfig().EthBurnAddressHex,
}).Warn("Fee recipient is currently using the burn address, " +
"you will not be rewarded transaction fees on this setting. " +
"Please set a different eth address as the fee recipient. " +
"Please refer to our documentation for instructions")
}
case err != nil:
log.WithError(err).Error("Could not get fee recipient to get payload attribute")
return false, emptyAttri, 0
default:
feeRecipient = recipient
return emptyAttri
}
// Get timestamp.
t, err := slots.ToTime(uint64(s.genesisTime.Unix()), slot)
if err != nil {
log.WithError(err).Error("Could not get timestamp to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
var attr payloadattribute.Attributer
@@ -330,51 +328,51 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV3{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
ParentBeaconBlockRoot: headRoot,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
case version.Capella:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
Withdrawals: withdrawals,
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
case version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
Timestamp: uint64(t.Unix()),
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecipient.Bytes(),
SuggestedFeeRecipient: val.FeeRecipient[:],
})
if err != nil {
log.WithError(err).Error("Could not get payload attribute")
return false, emptyAttri, 0
return emptyAttri
}
default:
log.WithField("version", st.Version()).Error("Could not get payload attribute due to unknown state version")
return false, emptyAttri, 0
return emptyAttri
}
return true, attr, proposerID
return attr
}
// removeInvalidBlockAndState removes the invalid block, blob and its corresponding state from the cache and DB.
@@ -389,9 +387,9 @@ func (s *Service) removeInvalidBlockAndState(ctx context.Context, blkRoots [][32
// This is an irreparable condition: it would mean a justified or finalized block has become invalid.
return err
}
// No op if the sidecar does not exist.
if err := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err != nil {
return err
if err := s.blobStorage.Remove(root); err != nil {
// Blobs may not exist for some blocks, leading to deletion failures. Log such errors at debug level.
log.WithError(err).Debug("Could not remove blob from blob storage")
}
}
return nil

View File
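The hunks above replace notifyForkchoiceUpdateArg with fcuConfig and move fee-recipient lookup onto the TrackedValidatorsCache, so getPayloadAttribute now returns only an Attributer. A hedged sketch of how a caller inside the blockchain package might assemble the new argument; the field names and signatures come from the hunks above, but notifyWithAttributes itself is hypothetical:

func notifyWithAttributes(ctx context.Context, s *Service, headRoot [32]byte,
	headState state.BeaconState, headBlock interfaces.ReadOnlySignedBeaconBlock) error {
	nextSlot := s.CurrentSlot() + 1
	args := &fcuConfig{
		headState:     headState,
		headBlock:     headBlock,
		headRoot:      headRoot,
		proposingSlot: nextSlot,
		// Empty attributes are acceptable; notifyForkchoiceUpdate substitutes an empty set when nil.
		attributes: s.getPayloadAttribute(ctx, headState, nextSlot, headRoot[:]),
	}
	_, err := s.notifyForkchoiceUpdate(ctx, args)
	return err
}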

@@ -26,11 +26,10 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -57,11 +56,14 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
sb := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
},
})
}
b, err := consensusblocks.NewSignedBeaconBlock(sb)
require.NoError(t, err)
pid := &v1.PayloadIDBytes{1}
@@ -73,20 +75,20 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
// Intentionally generate a bad state such that `hash_tree_root` fails during `process_slot`
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: s,
headRoot: [32]byte{},
headBlock: b,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(1, 0, [8]byte{}, [32]byte{})
service.cfg.PayloadIDCache.Set(1, [32]byte{}, [8]byte{})
got, err := service.notifyForkchoiceUpdate(ctx, arg)
require.NoError(t, err)
require.DeepEqual(t, got, pid) // We still get a payload ID even though the state is bad. This means it returns until the end.
}
func Test_NotifyForkchoiceUpdate(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -114,7 +116,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
tests := []struct {
name string
blk interfaces.ReadOnlyBeaconBlock
blk interfaces.ReadOnlySignedBeaconBlock
headRoot [32]byte
finalizedRoot [32]byte
justifiedRoot [32]byte
@@ -123,24 +125,24 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}{
{
name: "phase0 block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}})
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}})
require.NoError(t, err)
return b
}(),
},
{
name: "altair block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockAltair{Body: &ethpb.BeaconBlockBodyAltair{}})
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockAltair{Block: &ethpb.BeaconBlockAltair{Body: &ethpb.BeaconBlockBodyAltair{}}})
require.NoError(t, err)
return b
}(),
},
{
name: "not execution block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
@@ -149,23 +151,25 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
})
}})
require.NoError(t, err)
return b
}(),
},
{
name: "happy case: finalized root is altair block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -174,12 +178,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "happy case: finalized root is bellatrix block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -188,12 +192,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "forkchoice updated with optimistic block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -203,12 +207,12 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
},
{
name: "forkchoice updated with invalid block",
blk: func() interfaces.ReadOnlyBeaconBlock {
b, err := consensusblocks.NewBeaconBlock(&ethpb.BeaconBlockBellatrix{
blk: func() interfaces.ReadOnlySignedBeaconBlock {
b, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &v1.ExecutionPayload{},
},
})
}})
require.NoError(t, err)
return b
}(),
@@ -226,7 +230,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
st, _ := util.DeterministicGenesisState(t, 1)
require.NoError(t, beaconDB.SaveState(ctx, st, tt.finalizedRoot))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, tt.finalizedRoot))
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: st,
headRoot: tt.headRoot,
headBlock: tt.blk,
@@ -246,7 +250,7 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
}
func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
// Prepare blocks
@@ -306,9 +310,9 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
a := &fcuConfig{
headState: st,
headBlock: wbd.Block(),
headBlock: wbd,
headRoot: brd,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
@@ -334,7 +338,7 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
// 3. the blockchain package calls fcu to obtain heads G -> F -> D.
func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
// Prepare blocks
@@ -443,9 +447,9 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, beaconDB.SaveState(ctx, st, bra))
require.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, bra))
a := &notifyForkchoiceUpdateArg{
a := &fcuConfig{
headState: st,
headBlock: wbg.Block(),
headBlock: wbg,
headRoot: brg,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
@@ -467,7 +471,7 @@ func Test_NotifyNewPayload(t *testing.T) {
cfg := params.BeaconConfig()
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
phase0State, _ := util.DeterministicGenesisState(t, 1)
@@ -492,8 +496,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -595,8 +601,10 @@ func Test_NotifyNewPayload(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
},
@@ -709,7 +717,7 @@ func Test_NotifyNewPayload_SetOptimisticToValid(t *testing.T) {
cfg.TerminalTotalDifficulty = "2"
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
bellatrixState, _ := util.DeterministicGenesisStateBellatrix(t, 2)
@@ -777,83 +785,70 @@ func Test_reportInvalidBlock(t *testing.T) {
}
func Test_GetPayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
}
func Test_GetPayloadAttribute_PrepareAllPayloads(t *testing.T) {
hook := logTest.NewGlobal()
resetCfg := features.InitWithReset(&features.Flags{
PrepareAllPayloads: true,
})
defer resetCfg()
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
}
func Test_GetPayloadAttributeV2(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateCapella(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
@@ -861,35 +856,30 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
}
func Test_GetPayloadAttributeDeneb(t *testing.T) {
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateDeneb(t, 1)
hasPayload, _, vId := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, false, hasPayload)
require.Equal(t, primitives.ValidatorIndex(0), vId)
attr := service.getPayloadAttribute(ctx, st, 0, []byte{})
require.Equal(t, true, attr.IsEmpty())
// Cache hit, advance state, no fee recipient
suggestedVid := primitives.ValidatorIndex(1)
slot := primitives.Slot(1)
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hook := logTest.NewGlobal()
hasPayload, attr, vId := service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(attr.SuggestedFeeRecipient()).String())
require.LogsContain(t, hook, "Fee recipient is currently using the burn address")
a, err := attr.Withdrawals()
require.NoError(t, err)
require.Equal(t, 0, len(a))
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
require.NoError(t, service.cfg.BeaconDB.SaveFeeRecipientsByValidatorIDs(ctx, []primitives.ValidatorIndex{suggestedVid}, []common.Address{suggestedAddr}))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(slot, suggestedVid, [8]byte{}, [32]byte{})
hasPayload, attr, vId = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, true, hasPayload)
require.Equal(t, suggestedVid, vId)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
attr = service.getPayloadAttribute(ctx, st, slot, params.BeaconConfig().ZeroHash[:])
require.Equal(t, false, attr.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(attr.SuggestedFeeRecipient()))
a, err = attr.Withdrawals()
require.NoError(t, err)
@@ -1112,3 +1102,35 @@ func TestKZGCommitmentToVersionedHashes(t *testing.T) {
require.Equal(t, vhs[0].String(), vh0)
require.Equal(t, vhs[1].String(), vh1)
}
func TestComputePayloadAttribute(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, Index: 0})
// Cache hit, advance state, no fee recipient
slot := primitives.Slot(1)
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
cfg := &postBlockProcessConfig{
ctx: ctx,
blockRoot: [32]byte{'a'},
}
fcu := &fcuConfig{
headState: st,
proposingSlot: slot,
headRoot: [32]byte{},
}
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, params.BeaconConfig().EthBurnAddressHex, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()).String())
// Cache hit, advance state, has fee recipient
suggestedAddr := common.HexToAddress("123")
service.cfg.TrackedValidatorsCache.Set(cache.TrackedValidator{Active: true, FeeRecipient: primitives.ExecutionAddress(suggestedAddr), Index: 0})
service.cfg.PayloadIDCache.Set(slot, [32]byte{}, [8]byte{})
require.NoError(t, service.computePayloadAttributes(cfg, fcu))
require.Equal(t, false, fcu.attributes.IsEmpty())
require.Equal(t, suggestedAddr, common.BytesToAddress(fcu.attributes.SuggestedFeeRecipient()))
}

View File

@@ -8,20 +8,15 @@ import (
"github.com/pkg/errors"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v4/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
func (s *Service) isNewProposer(slot primitives.Slot) bool {
_, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
return ok || features.Get().PrepareAllPayloads
}
func (s *Service) isNewHead(r [32]byte) bool {
s.headLock.RLock()
defer s.headLock.RUnlock()
@@ -49,48 +44,69 @@ func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.Beaco
return headState, newHeadBlock, nil
}
type fcuConfig struct {
headState state.BeaconState
headBlock interfaces.ReadOnlySignedBeaconBlock
headRoot [32]byte
proposingSlot primitives.Slot
attributes payloadattribute.Attributer
}
// sendFCU handles the logic to notify the engine of a forkchoice update
// for the first time when processing an incoming block during regular sync. It
// always updates the shuffling caches and handles epoch transitions when the
// incoming block is late, preparing payload attributes in that case; for early
// blocks it only sends a message with empty attributes.
func (s *Service) sendFCU(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if !s.isNewHead(cfg.headRoot) {
return nil
}
if fcuArgs.attributes != nil && !fcuArgs.attributes.IsEmpty() && s.shouldOverrideFCU(cfg.headRoot, s.CurrentSlot()+1) {
return nil
}
return s.forkchoiceUpdateWithExecution(cfg.ctx, fcuArgs)
}
// sendFCUWithAttributes computes the payload attributes and sends an FCU message
// to the engine if needed
func (s *Service) sendFCUWithAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
cfg.ctx = slotCtx
if err := s.computePayloadAttributes(cfg, fcuArgs); err != nil {
log.WithError(err).Error("could not compute payload attributes")
return
}
if fcuArgs.attributes.IsEmpty() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
if _, err := s.notifyForkchoiceUpdate(cfg.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice with payload attributes for proposal")
}
}
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
// it returns true if the new head is updated
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot [32]byte, proposingSlot primitives.Slot) (bool, error) {
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, args *fcuConfig) error {
_, span := trace.StartSpan(ctx, "beacon-chain.blockchain.forkchoiceUpdateWithExecution")
defer span.End()
// Note: Use the service context here to avoid the parent context being ended during a forkchoice update.
ctx = trace.NewContext(s.ctx, span)
isNewHead := s.isNewHead(newHeadRoot)
if !isNewHead {
return false, nil
}
isNewProposer := s.isNewProposer(proposingSlot)
if isNewProposer && !features.Get().DisableReorgLateBlocks {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return false, nil
}
}
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
_, err := s.notifyForkchoiceUpdate(ctx, args)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return false, nil
return errors.Wrap(err, "could not notify forkchoice update")
}
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock.Block(),
})
if err != nil {
return false, errors.Wrap(err, "could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
if err := s.saveHead(ctx, args.headRoot, args.headBlock, args.headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
if err := s.pruneAttsFromPool(args.headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return true, nil
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being

View File
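To make the split above concrete, here is a hedged sketch of the two-step flow the comments describe: one notification for the new head via sendFCU, and a later attribute-carrying call via sendFCUWithAttributes when the node is about to propose. notifyEngineSketch is a hypothetical wrapper, not code from the repository; the real sequencing lives in postBlockProcess and its deferred handlers:

func notifyEngineSketch(s *Service, cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
	// First notification: new head, possibly with empty attributes.
	if err := s.sendFCU(cfg, fcuArgs); err != nil {
		log.WithError(err).Error("could not send FCU to engine")
	}
	// Second notification (e.g. for late blocks): compute attributes and notify again.
	s.sendFCUWithAttributes(cfg, fcuArgs)
}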

@@ -17,15 +17,6 @@ import (
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestService_isNewProposer(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.Equal(t, false, service.isNewProposer(service.CurrentSlot()+1))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
require.Equal(t, true, service.isNewProposer(service.CurrentSlot()+1))
}
func TestService_isNewHead(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
@@ -67,33 +58,14 @@ func TestService_getHeadStateAndBlock(t *testing.T) {
}
func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
_, err = service.forkchoiceUpdateWithExecution(ctx, service.headRoot(), service.CurrentSlot()+1)
require.NoError(t, err)
hookErr := "could not notify forkchoice update"
invalidStateErr := "could not get state summary: could not find block in DB"
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
gb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
require.NoError(t, service.saveInitSyncBlock(ctx, [32]byte{'a'}, gb))
_, err = service.forkchoiceUpdateWithExecution(ctx, [32]byte{'a'}, service.CurrentSlot()+1)
require.NoError(t, err)
require.LogsContain(t, hook, invalidStateErr)
service.cfg.PayloadIDCache = cache.NewPayloadIDCache()
service.cfg.TrackedValidatorsCache = cache.NewTrackedValidatorsCache()
hook.Reset()
service.head = &head{
root: [32]byte{'a'},
block: nil, /* should not panic if notify head uses correct head */
}
// Block in Cache
b := util.NewBeaconBlock()
b.Block.Slot = 2
wsb, err := blocks.NewSignedBeaconBlock(b)
@@ -107,13 +79,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
block: wsb,
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
_, err = service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot())
require.NoError(t, err)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
// Block in DB
service.cfg.PayloadIDCache.Set(2, [32]byte{2}, [8]byte{1})
b = util.NewBeaconBlock()
b.Block.Slot = 3
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
@@ -125,25 +91,22 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
block: wsb,
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
_, err = service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot()+1)
require.NoError(t, err)
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
vId, payloadID, has := service.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(2, [32]byte{2})
require.Equal(t, true, has)
require.Equal(t, primitives.ValidatorIndex(1), vId)
require.Equal(t, [8]byte{1}, payloadID)
service.cfg.PayloadIDCache.Set(2, [32]byte{2}, [8]byte{1})
args := &fcuConfig{
headState: st,
headRoot: r1,
headBlock: wsb,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
// Test zero headRoot returns immediately.
headRoot := service.headRoot()
_, err = service.forkchoiceUpdateWithExecution(ctx, [32]byte{}, service.CurrentSlot()+1)
require.NoError(t, err)
require.Equal(t, service.headRoot(), headRoot)
payloadID, has := service.cfg.PayloadIDCache.PayloadID(2, [32]byte{2})
require.Equal(t, true, has)
require.Equal(t, primitives.PayloadID{1}, payloadID)
}
func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testing.T) {
service, tr := minimalTestService(t)
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
altairBlk := util.SaveBlock(t, ctx, beaconDB, util.NewBeaconBlockAltair())
@@ -182,10 +145,14 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
service.head.root = r
service.head.block = sb
service.head.state = st
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
_, err = service.forkchoiceUpdateWithExecution(ctx, r, service.CurrentSlot()+1)
require.NoError(t, err)
service.cfg.PayloadIDCache.Set(service.CurrentSlot()+1, [32]byte{} /* root */, [8]byte{})
args := &fcuConfig{
headState: st,
headBlock: sb,
headRoot: r,
proposingSlot: service.CurrentSlot() + 1,
}
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, args))
}
func TestShouldOverrideFCU(t *testing.T) {

View File

@@ -276,7 +276,7 @@ func (s *Service) headBlock() (interfaces.ReadOnlySignedBeaconBlock, error) {
// It does a full copy on head state for immutability.
// This is a lock free version.
func (s *Service) headState(ctx context.Context) state.BeaconState {
ctx, span := trace.StartSpan(ctx, "blockChain.headState")
_, span := trace.StartSpan(ctx, "blockChain.headState")
defer span.End()
return s.head.state.Copy()
@@ -286,7 +286,7 @@ func (s *Service) headState(ctx context.Context) state.BeaconState {
// It does not perform a copy of the head state.
// This is a lock free version.
func (s *Service) headStateReadOnly(ctx context.Context) state.ReadOnlyBeaconState {
ctx, span := trace.StartSpan(ctx, "blockChain.headStateReadOnly")
_, span := trace.StartSpan(ctx, "blockChain.headStateReadOnly")
defer span.End()
return s.head.state

View File

@@ -10,7 +10,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg",
visibility = ["//visibility:public"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
@@ -24,8 +24,10 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//proto/prysm/v1alpha1:go_default_library",
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

File diff suppressed because one or more lines are too long

View File

@@ -1,32 +1,28 @@
package kzg
import (
"fmt"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
)
// IsDataAvailable checks that
// - all blobs in the block are available
// - Expected KZG commitments match the number of blobs in the block
// - That the number of proofs match the number of blobs
// - That the proofs are verified against the KZG commitments
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.BlobSidecar) error {
if len(commitments) != len(sidecars) {
return fmt.Errorf("could not check data availability, expected %d commitments, obtained %d",
len(commitments), len(sidecars))
}
if len(commitments) == 0 {
// Verify performs single or batch verification of commitments depending on the number of given BlobSidecars.
func Verify(sidecars ...blocks.ROBlob) error {
if len(sidecars) == 0 {
return nil
}
blobs := make([]GoKZG.Blob, len(commitments))
proofs := make([]GoKZG.KZGProof, len(commitments))
cmts := make([]GoKZG.KZGCommitment, len(commitments))
if len(sidecars) == 1 {
return kzgContext.VerifyBlobKZGProof(
bytesToBlob(sidecars[0].Blob),
bytesToCommitment(sidecars[0].KzgCommitment),
bytesToKZGProof(sidecars[0].KzgProof))
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
cmts[i] = bytesToCommitment(commitments[i])
}
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}

View File
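For reference, a hedged sketch of calling the new Verify helper from outside the package. It assumes the sidecars have already been wrapped as blocks.ROBlob values and that the package's KZG trusted setup has been initialized elsewhere; the caller package name is made up:

package blobcheck // hypothetical caller

import (
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
	"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
)

// checkBlobs verifies all sidecars at once; Verify takes the single-proof path for one
// sidecar and the batch path for several.
func checkBlobs(sidecars []blocks.ROBlob) error {
	return kzg.Verify(sidecars...)
}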

@@ -1,17 +1,66 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/sirupsen/logrus"
)
func TestIsDataAvailable(t *testing.T) {
sidecars := make([]*ethpb.BlobSidecar, 0)
commitments := make([][]byte, 0)
require.NoError(t, IsDataAvailable(commitments, sidecars))
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
commitment, err := kzgContext.BlobToKZGCommitment(blob, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
proof, err := kzgContext.ComputeBlobKZGProof(blob, commitment, 0)
if err != nil {
return GoKZG.KZGCommitment{}, GoKZG.KZGProof{}, err
}
return commitment, proof, err
}
func TestVerify(t *testing.T) {
sidecars := make([]blocks.ROBlob, 0)
require.NoError(t, Verify(sidecars...))
}
func TestBytesToAny(t *testing.T) {
@@ -23,3 +72,13 @@ func TestBytesToAny(t *testing.T) {
require.DeepEqual(t, commitment, bytesToCommitment(bytes))
require.DeepEqual(t, proof, bytesToKZGProof(bytes))
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
expectedProof := GoKZG.KZGProof{128, 110, 116, 170, 56, 111, 126, 87, 229, 234, 211, 42, 110, 150, 129, 206, 73, 142, 167, 243, 90, 149, 240, 240, 236, 204, 143, 182, 229, 249, 81, 27, 153, 171, 83, 70, 144, 250, 42, 1, 188, 215, 71, 235, 30, 7, 175, 86}
require.Equal(t, expectedCommitment, commitment)
require.Equal(t, expectedProof, proof)
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
@@ -63,7 +62,7 @@ func NewLightClientOptimisticUpdateFromBeaconState(
attestedState state.BeaconState) (*ethpbv2.LightClientUpdate, error) {
// assert compute_epoch_at_slot(attested_state.slot) >= ALTAIR_FORK_EPOCH
attestedEpoch := slots.ToEpoch(attestedState.Slot())
if attestedEpoch < types.Epoch(params.BeaconConfig().AltairForkEpoch) {
if attestedEpoch < params.BeaconConfig().AltairForkEpoch {
return nil, fmt.Errorf("invalid attested epoch %d", attestedEpoch)
}

View File

@@ -73,7 +73,7 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
return nil
}
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64) error {
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesisTime uint64, daWaitedTime time.Duration) error {
startTime, err := slots.ToTime(genesisTime, block.Slot())
if err != nil {
return err
@@ -93,7 +93,7 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
@@ -147,15 +147,3 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
log.WithFields(fields).Debug("Synced new payload")
return nil
}
func logBlobSidecar(scs []*ethpb.BlobSidecar, startTime time.Time) {
if len(scs) == 0 {
return
}
log.WithFields(logrus.Fields{
"count": len(scs),
"slot": scs[0].Slot,
"block": hex.EncodeToString(scs[0].BlockRoot),
"validationTime": time.Since(startTime),
}).Debug("Synced new blob sidecars")
}

View File

@@ -182,6 +182,10 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",
})
processAttsElapsedTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "process_attestations_milliseconds",
@@ -358,6 +362,7 @@ func reportEpochMetrics(ctx context.Context, postState, headState state.BeaconSt
for name, val := range refMap {
stateTrieReferences.WithLabelValues(name).Set(float64(val))
}
postState.RecordStateMetrics()
return nil
}

View File
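Taken together, the two hunks above time the data-availability wait separately: it is recorded in the new da_waited_time_milliseconds summary and subtracted from the processing time that logBlockSyncStatus reports. A hedged sketch of that measurement pattern, assuming it runs inside the blockchain package; waitForDA stands in for whatever performs the actual availability check:

func observeDataAvailabilityWait(receivedTime time.Time, waitForDA func() error) error {
	daStart := time.Now()
	if err := waitForDA(); err != nil {
		return err
	}
	daWaited := time.Since(daStart)
	dataAvailWaitedTime.Observe(float64(daWaited.Milliseconds()))
	// The sync log now reports chain service processing time net of the DA wait.
	log.WithField("chainServiceProcessedTime", prysmTime.Now().Sub(receivedTime)-daWaited).
		Debug("Synced new block")
	return nil
}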

@@ -5,6 +5,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations"
@@ -68,10 +69,18 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithProposerIdsCache for proposer id cache.
func WithProposerIdsCache(c *cache.ProposerPayloadIDsCache) Option {
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {
s.cfg.ProposerSlotIndexCache = c
s.cfg.PayloadIDCache = c
return nil
}
}
// WithTrackedValidatorsCache for tracked validators cache.
func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
return func(s *Service) error {
s.cfg.TrackedValidatorsCache = c
return nil
}
}
@@ -164,6 +173,8 @@ func WithFinalizedStateAtStartUp(st state.BeaconState) Option {
}
}
// WithClockSynchronizer sets the ClockSetter/ClockWaiter values to be used by services that need to block until
// the genesis timestamp is known (ClockWaiter) or which determine the genesis timestamp (ClockSetter).
func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
return func(s *Service) error {
s.clockSetter = gs
@@ -172,9 +183,25 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
}
}
// WithSyncComplete sets a channel that is used to notify blockchain service that the node has synced to head.
func WithSyncComplete(c chan struct{}) Option {
return func(s *Service) error {
s.syncComplete = c
return nil
}
}
// WithBlobStorage sets the blob storage backend for the blockchain service.
func WithBlobStorage(b *filesystem.BlobStorage) Option {
return func(s *Service) error {
s.blobStorage = b
return nil
}
}
func WithSyncChecker(checker Checker) Option {
return func(s *Service) error {
s.cfg.SyncChecker = checker
return nil
}
}

View File
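A hedged sketch of wiring the new options when constructing the service from outside the package. The real node passes many more options, and blobStore here is just an assumed *filesystem.BlobStorage built elsewhere:

package node // hypothetical wiring code

import (
	"context"

	"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
	"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
)

func newChainService(ctx context.Context, blobStore *filesystem.BlobStorage) (*blockchain.Service, error) {
	return blockchain.NewService(ctx,
		blockchain.WithPayloadIDCache(cache.NewPayloadIDCache()),
		blockchain.WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
		blockchain.WithBlobStorage(blobStore),
		blockchain.WithSyncComplete(make(chan struct{})),
	)
}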

@@ -2,6 +2,7 @@ package blockchain
import (
"context"
"sync"
"testing"
"time"
@@ -145,6 +146,61 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
err = s.SetFinalizedCheckpoint(&ethpb.Checkpoint{Root: ckRoot})
require.NoError(t, err)
val := &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte("foo"), 48), WithdrawalCredentials: bytesutil.PadTo([]byte("bar"), fieldparams.RootLength)}
err = s.SetValidators([]*ethpb.Validator{val})
require.NoError(t, err)
err = s.SetBalances([]uint64{0})
require.NoError(t, err)
r := [32]byte{'g'}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, r))
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: ckRoot}))
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
var wg sync.WaitGroup
errChan := make(chan error, 1000)
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}
_, err := service.getAttPreState(ctx, cp1)
if err != nil {
errChan <- err
}
}()
}
go func() {
wg.Wait()
close(errChan)
}()
select {
case <-time.After(10 * time.Second):
t.Fatal("Test timed out")
case err, ok := <-errChan:
if ok && err != nil {
require.ErrorContains(t, "not a checkpoint in forkchoice", err)
}
}
}
func TestStore_SaveCheckpointState(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -179,14 +235,14 @@ func TestStore_SaveCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'B'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'B'}, fieldparams.RootLength)}))
s2, err := service.getAttPreState(ctx, cp2)
_, err = service.getAttPreState(ctx, cp2)
require.ErrorContains(t, "epoch 2 root 0x4200000000000000000000000000000000000000000000000000000000000000: not a checkpoint in forkchoice", err)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s2, err = service.getAttPreState(ctx, cp2)
s2, err := service.getAttPreState(ctx, cp2)
require.NoError(t, err)
assert.Equal(t, 2*params.BeaconConfig().SlotsPerEpoch, s2.Slot(), "Unexpected state slot")

View File

@@ -6,17 +6,20 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -28,8 +31,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// A custom slot deadline for processing state slots in our cache.
@@ -41,106 +42,71 @@ const depositDeadline = 20 * time.Second
// This defines size of the upper bound for initial sync block cache.
var initialSyncBlockCacheSize = uint64(2 * params.BeaconConfig().SlotsPerEpoch)
// postBlockProcessConfig is a structure that contains the data needed to
// process the beacon block after validating the state transition function
type postBlockProcessConfig struct {
ctx context.Context
signed interfaces.ReadOnlySignedBeaconBlock
blockRoot [32]byte
headRoot [32]byte
postState state.BeaconState
isValidPayload bool
}
// postBlockProcess is called when a gossip block is received. This function performs
// several duties, most importantly informing the engine if the head was updated,
// saving the new head information to the blockchain package, and
// handling attestations, slashings and similar operations included in the block.
func (s *Service) postBlockProcess(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, postState state.BeaconState, isValidPayload bool) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlock")
func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
ctx, span := trace.StartSpan(cfg.ctx, "blockChain.onBlock")
defer span.End()
if err := consensusblocks.BeaconBlockIsNil(signed); err != nil {
cfg.ctx = ctx
if err := consensusblocks.BeaconBlockIsNil(cfg.signed); err != nil {
return invalidBlock{error: err}
}
startTime := time.Now()
b := signed.Block()
fcuArgs := &fcuConfig{}
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, postState, blockRoot); err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", signed.Block().Slot())
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.signed.Block())
err := s.cfg.ForkChoiceStore.InsertNode(ctx, cfg.postState, cfg.blockRoot)
if err != nil {
return errors.Wrapf(err, "could not insert block %d to fork choice store", cfg.signed.Block().Slot())
}
if err := s.handleBlockAttestations(ctx, cfg.signed.Block(), cfg.postState); err != nil {
return errors.Wrap(err, "could not handle block's attestations")
}
s.InsertSlashingsToForkChoiceStore(ctx, signed.Block().Body().AttesterSlashings())
if isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, blockRoot); err != nil {
s.InsertSlashingsToForkChoiceStore(ctx, cfg.signed.Block().Body().AttesterSlashings())
if cfg.isValidPayload {
if err := s.cfg.ForkChoiceStore.SetOptimisticToValid(ctx, cfg.blockRoot); err != nil {
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
start := time.Now()
headRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
cfg.headRoot, err = s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
log.WithError(err).Warn("Could not update head")
}
if blockRoot != headRoot {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
"receivedWeight": receivedWeight,
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
// verify conditions for FCU, notifies FCU, and saves the new head.
// This function also prunes attestations, other similar operations happen in prunePostBlockOperationPools.
if _, err := s.forkchoiceUpdateWithExecution(ctx, headRoot, s.CurrentSlot()+1); err != nil {
return err
if cfg.headRoot != cfg.blockRoot {
s.logNonCanonicalBlockReceived(cfg.blockRoot, cfg.headRoot)
return nil
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
if err := s.sendFCU(cfg, fcuArgs); err != nil {
return errors.Wrap(err, "could not send FCU to engine")
}
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: signed.Block().Slot(),
BlockRoot: blockRoot,
SignedBlock: signed,
Verified: true,
Optimistic: optimistic,
},
})
defer reportAttestationInclusion(b)
if headRoot == blockRoot {
// Updating next slot state cache can happen in the background
// except in the epoch boundary in which case we lock to handle
// the shuffling and proposer caches updates.
// We handle these caches only on canonical
// blocks, otherwise this will be handled by lateBlockTasks
slot := postState.Slot()
if slots.IsEpochEnd(slot) {
if err := transition.UpdateNextSlotCache(ctx, blockRoot[:], postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if err := s.handleEpochBoundary(ctx, slot, postState, blockRoot[:]); err != nil {
return errors.Wrap(err, "could not handle epoch boundary")
}
} else {
go func() {
slotCtx, cancel := context.WithTimeout(context.Background(), slotDeadline)
defer cancel()
if err := transition.UpdateNextSlotCache(slotCtx, blockRoot[:], postState); err != nil {
log.WithError(err).Error("could not update next slot state cache")
}
}()
}
}
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
return nil
}
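// Illustrative sketch of the new call shape (variable names assumed from the
// surrounding code): a caller such as ReceiveBlock is expected to hold the
// forkchoice lock and to pass a postBlockProcessConfig instead of the old
// positional arguments.
//
//	s.cfg.ForkChoiceStore.Lock()
//	defer s.cfg.ForkChoiceStore.Unlock()
//	args := &postBlockProcessConfig{
//		ctx:            ctx,
//		signed:         blockCopy,
//		blockRoot:      blockRoot,
//		postState:      postState,
//		isValidPayload: isValidPayload,
//	}
//	if err := s.postBlockProcess(args); err != nil {
//		return errors.Wrap(err, "could not process block")
//	}
//
// headRoot is left at its zero value here; postBlockProcess fills it in from
// ForkChoiceStore.Head after inserting the new node.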
@@ -162,7 +128,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -265,8 +231,8 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := s.databaseDACheck(ctx, b); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrapf(err, "could not validate blob data availability at slot %d", b.Block().Slot())
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
JustifiedCheckpoint: jCheckpoints[i],
@@ -322,10 +288,10 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
arg := &notifyForkchoiceUpdateArg{
arg := &fcuConfig{
headState: preState,
headRoot: lastBR,
headBlock: lastB.Block(),
headBlock: lastB,
}
if _, err := s.notifyForkchoiceUpdate(ctx, arg); err != nil {
return err
@@ -333,33 +299,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func commitmentsToCheck(b consensusblocks.ROBlock, current primitives.Slot) [][]byte {
if b.Version() < version.Deneb {
return nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil
}
kzgCommitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil
}
return kzgCommitments
}
func (s *Service) databaseDACheck(ctx context.Context, b consensusblocks.ROBlock) error {
commitments := commitmentsToCheck(b, s.CurrentSlot())
if len(commitments) == 0 {
return nil
}
sidecars, err := s.cfg.BeaconDB.BlobSidecarsByRoot(ctx, b.Root())
if err != nil {
return errors.Wrap(err, "could not get blob sidecars")
}
return kzg.IsDataAvailable(commitments, sidecars)
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -377,10 +316,27 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
if err := helpers.UpdateCommitteeCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(slotCtx, st, e+1); err != nil {
log.WithError(err).Warn("Failed to cache next epoch proposers")
}
}()
// The latest block header is from the previous epoch
r, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
return nil
}
// The proposer indices cache takes the target root for the previous
// epoch as key
if e > 0 {
e = e - 1
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
return nil
}
err = helpers.UpdateCachedCheckpointToStateRoot(st, &forkchoicetypes.Checkpoint{Epoch: e, Root: target})
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
}
return nil
}
@@ -529,11 +485,43 @@ func (s *Service) runLateBlockTasks() {
}
}
// missingIndices uses the expected commitments from the block to determine
// which BlobSidecar indices would need to be in blob storage for DA success.
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte) (map[uint64]struct{}, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > fieldparams.MaxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices, err := bs.Indices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
ui := uint64(i)
if len(expected[i]) > 0 {
if !indices[i] {
missing[ui] = struct{}{}
}
}
}
return missing, nil
}
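// Worked example (mirroring the "3 commitments, 1 missing" test case below):
// with expected holding three commitments and only index 1 present in blob
// storage, the function returns map[uint64]struct{}{0: {}, 2: {}}; an empty
// map means every committed sidecar is already on disk.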
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check blob storage to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
t := time.Now()
block := signed.Block()
if block == nil {
@@ -552,59 +540,35 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if err != nil {
return errors.Wrap(err, "could not get KZG commitments")
}
// expected is the number of kzg commitments observed in the block.
expected := len(kzgCommitments)
if expected == 0 {
return nil
}
// Read first from db in case we have the blobs
sidecars, err := s.cfg.BeaconDB.BlobSidecarsByRoot(ctx, root)
switch {
case err == nil:
if len(sidecars) >= expected {
s.blobNotifiers.delete(root)
if err := kzg.IsDataAvailable(kzgCommitments, sidecars); err != nil {
log.WithField("root", fmt.Sprintf("%#x", root)).Warn("removing blob sidecars with invalid proofs")
if err2 := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err2 != nil {
log.WithError(err2).Error("could not delete sidecars")
}
return err
}
logBlobSidecar(sidecars, t)
return nil
}
case errors.Is(err, db.ErrNotFound):
// If the blob sidecars haven't arrived yet, the subsequent code will wait for them.
// Note: The system will not exit with an error in this scenario.
default:
log.WithError(err).Error("could not get blob sidecars from DB")
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return err
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
return nil
}
found := map[uint64]struct{}{}
for _, sc := range sidecars {
found[sc.Index] = struct{}{}
}
// The gossip handler for blobs writes the index of each verified blob referencing the given
// root to the channel returned by blobNotifiers.forRoot.
nc := s.blobNotifiers.forRoot(root)
for {
select {
case idx := <-nc:
found[idx] = struct{}{}
if len(found) != expected {
// Delete each index seen in the notification channel.
delete(missing, idx)
// Read from the channel until there are no more missing sidecars.
if len(missing) > 0 {
continue
}
// Once all sidecars have been observed, clean up the notification channel.
s.blobNotifiers.delete(root)
sidecars, err := s.cfg.BeaconDB.BlobSidecarsByRoot(ctx, root)
if err != nil {
return errors.Wrap(err, "could not get blob sidecars")
}
if err := kzg.IsDataAvailable(kzgCommitments, sidecars); err != nil {
log.WithField("root", fmt.Sprintf("%#x", root)).Warn("removing blob sidecars with invalid proofs")
if err2 := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err2 != nil {
log.WithError(err2).Error("could not delete sidecars")
}
return err
}
logBlobSidecar(sidecars, t)
return nil
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")
@@ -621,10 +585,15 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if s.CurrentSlot() == s.HeadSlot() {
return
}
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
// return early if we are in init sync
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -642,10 +611,16 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if err := s.handleEpochBoundary(ctx, currentSlot, headState, headRoot[:]); err != nil {
log.WithError(err).Error("lateBlockTasks: could not update epoch boundary caches")
}
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if (!has && !features.Get().PrepareAllPayloads) || id != [8]byte{} {
// return early if we already started building a block for the current
// head root
_, has := s.cfg.PayloadIDCache.PayloadID(s.CurrentSlot()+1, headRoot)
if has {
return
}
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
}
@@ -657,13 +632,14 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
return
}
s.headLock.RUnlock()
s.cfg.ForkChoiceStore.RLock()
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: headRoot,
headBlock: headBlock.Block(),
})
s.cfg.ForkChoiceStore.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}


@@ -3,21 +3,28 @@ package blockchain
import (
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v4/math"
ethpbv2 "github.com/prysmaticlabs/prysm/v4/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -25,6 +32,252 @@ func (s *Service) CurrentSlot() primitives.Slot {
return slots.CurrentSlot(uint64(s.genesisTime.Unix()))
}
// getFCUArgs returns the arguments for the forkchoice update call
func (s *Service) getFCUArgs(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if err := s.getFCUArgsEarlyBlock(cfg, fcuArgs); err != nil {
return err
}
if !s.inRegularSync() {
return nil
}
slot := cfg.signed.Block().Slot()
if slots.WithinVotingWindow(uint64(s.genesisTime.Unix()), slot) {
return nil
}
return s.computePayloadAttributes(cfg, fcuArgs)
}
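// Rough decision flow, as read from the code above (illustrative, not
// normative):
//
//	block arrival             in regular sync  payload attributes on the first FCU
//	within the voting window  yes              no; deferred to handleSecondFCUCall
//	after the voting window   yes              computed up front via computePayloadAttributes
//	any                       no (init sync)   none; getFCUArgs stops after the early-block args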
func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.blockRoot == cfg.headRoot {
fcuArgs.headState = cfg.postState
fcuArgs.headBlock = cfg.signed
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
}
return s.fcuArgsNonCanonicalBlock(cfg, fcuArgs)
}
// logNonCanonicalBlockReceived prints a message informing that the received
// block is not the head of the chain. It requires that the caller holds a lock
// on Forkchoice.
func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", blockRoot)).Warn("could not determine node weight")
}
headWeight, err := s.cfg.ForkChoiceStore.Weight(headRoot)
if err != nil {
log.WithField("root", fmt.Sprintf("%#x", headRoot)).Warn("could not determine node weight")
}
log.WithFields(logrus.Fields{
"receivedRoot": fmt.Sprintf("%#x", blockRoot),
"receivedWeight": receivedWeight,
"headRoot": fmt.Sprintf("%#x", headRoot),
"headWeight": headWeight,
}).Debug("Head block is not the received block")
}
// fcuArgsNonCanonicalBlock returns the arguments to the FCU call when the
// incoming block is non-canonical; in that case the arguments are based on the
// current head root rather than the incoming block.
func (s *Service) fcuArgsNonCanonicalBlock(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
headState, headBlock, err := s.getStateAndBlock(cfg.ctx, cfg.headRoot)
if err != nil {
return err
}
fcuArgs.headState = headState
fcuArgs.headBlock = headBlock
fcuArgs.headRoot = cfg.headRoot
fcuArgs.proposingSlot = s.CurrentSlot() + 1
return nil
}
// sendStateFeedOnBlock sends an event that a new block has been synced
func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(cfg.blockRoot)
if err != nil {
log.WithError(err).Debug("Could not check if block is optimistic")
optimistic = true
}
// Send notification of the processed block to the state feed.
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: cfg.signed.Block().Slot(),
BlockRoot: cfg.blockRoot,
SignedBlock: cfg.signed,
Verified: true,
Optimistic: optimistic,
},
})
}
// sendLightClientFeeds sends the light client feeds when the feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.signed, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.signed, finalized, cfg.postState)
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
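// Worked numbers for the supermajority gate above: on mainnet, where
// SyncCommitteeSize is 512, the update is published only when
// Count()*3 >= 2*512 = 1024, i.e. at least 342 of the 512 sync committee
// bits are set (341*3 = 1023 still fails the check).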
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
}
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
finalizedBlock,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
attestedState,
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
// boundary in order to compute the right proposer indices after processing the
// state transition. This function is called on late blocks while still locked,
// before sending FCU to the engine.
func (s *Service) updateCachesPostBlockProcessing(cfg *postBlockProcessConfig) error {
slot := cfg.postState.Slot()
if err := transition.UpdateNextSlotCache(cfg.ctx, cfg.blockRoot[:], cfg.postState); err != nil {
return errors.Wrap(err, "could not update next slot state cache")
}
if !slots.IsEpochEnd(slot) {
return nil
}
return s.handleEpochBoundary(cfg.ctx, slot, cfg.postState, cfg.blockRoot[:])
}
// handleSecondFCUCall handles a second call to FCU when syncing a new block.
// This is useful when proposing the next block and we want to defer the
// computation of the next slot shuffling.
func (s *Service) handleSecondFCUCall(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) {
if (fcuArgs.attributes == nil || fcuArgs.attributes.IsEmpty()) && cfg.headRoot == cfg.blockRoot {
go s.sendFCUWithAttributes(cfg, fcuArgs)
}
}
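// Context from postBlockProcess above (illustrative): the second call is
// registered with defer before the first FCU arguments exist,
//
//	if s.inRegularSync() {
//		defer s.handleSecondFCUCall(cfg, fcuArgs)
//	}
//
// so by the time it runs, getFCUArgs has populated fcuArgs and the goroutine
// fires only when the first FCU was sent without attributes and the received
// block is the canonical head.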
// reportProcessingTime reports the metric of how long it took to process the
// current block
func reportProcessingTime(startTime time.Time) {
onBlockProcessingTime.Observe(float64(time.Since(startTime).Milliseconds()))
}
// computePayloadAttributes modifies the passed FCU arguments to
// contain the right payload attributes for the tracked proposer. It gets
// called on blocks that arrive after the attestation voting window, or in a
// background routine after syncing early blocks.
func (s *Service) computePayloadAttributes(cfg *postBlockProcessConfig, fcuArgs *fcuConfig) error {
if cfg.blockRoot == cfg.headRoot {
if err := s.updateCachesPostBlockProcessing(cfg); err != nil {
return err
}
}
fcuArgs.attributes = s.getPayloadAttribute(cfg.ctx, fcuArgs.headState, fcuArgs.proposingSlot, cfg.headRoot[:])
return nil
}
// getBlockPreState returns the pre state of an incoming block. It uses the parent root of the block
// to retrieve the state from the DB. It verifies the pre state's validity and that the incoming block
// is in the correct time window.
@@ -46,7 +299,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
}
// Verify block slot time is not from the future.
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), b.Slot(), params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), b.Slot(), params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
return nil, err
}


@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"math/big"
@@ -17,7 +16,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
@@ -39,7 +40,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -68,7 +68,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
require.NoError(t, err)
blks = append(blks, rwsb)
}
err := service.onBlockBatch(ctx, blks)
err := service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{})
require.NoError(t, err)
jcp := service.CurrentJustifiedCheckpt()
jroot := bytesutil.ToBytes32(jcp.Root)
@@ -98,7 +98,7 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.saveInitSyncBlock(ctx, rwsb.Root(), wsb))
blks = append(blks, rwsb)
}
require.NoError(t, service.onBlockBatch(ctx, blks))
require.NoError(t, service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{}))
}
func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
@@ -566,7 +566,7 @@ func TestOnBlock_CanFinalize_WithOnTick(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, true}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -614,7 +614,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, true}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -640,7 +640,7 @@ func TestOnBlock_CanFinalize(t *testing.T) {
func TestOnBlock_NilBlock(t *testing.T) {
service, tr := minimalTestService(t)
err := service.postBlockProcess(tr.ctx, nil, [32]byte{}, nil, true)
err := service.postBlockProcess(&postBlockProcessConfig{tr.ctx, nil, [32]byte{}, [32]byte{}, nil, true})
require.Equal(t, true, IsInvalidBlock(err))
}
@@ -688,7 +688,7 @@ func TestOnBlock_CallNewPayloadAndForkchoiceUpdated(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, r, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, r, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, r, [32]byte{}, postState, false}))
testState, err = service.cfg.StateGen.StateByRoot(ctx, r)
require.NoError(t, err)
}
@@ -894,7 +894,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
cfg.TerminalBlockHash = params.BeaconConfig().ZeroHash
params.OverrideBeaconConfig(cfg)
service, tr := minimalTestService(t, WithProposerIdsCache(cache.NewProposerPayloadIDsCache()))
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx := tr.ctx
aHash := common.BytesToHash([]byte("a"))
@@ -911,7 +911,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state older than Bellatrix, nil payload",
stateVersion: 1,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state older than Bellatrix, empty payload",
@@ -923,8 +922,10 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
},
},
{
@@ -938,7 +939,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, nil payload",
stateVersion: 2,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state is Bellatrix, empty payload",
@@ -968,6 +968,7 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
@@ -1103,12 +1104,15 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
var wg sync.WaitGroup
wg.Add(4)
var lock sync.Mutex
go func() {
preState, err := service.getBlockPreState(ctx, wsb1.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb1)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb1, r1, postState, true))
lock.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb1, r1, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
go func() {
@@ -1116,7 +1120,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb2)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb2, r2, postState, true))
lock.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb2, r2, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
go func() {
@@ -1124,7 +1130,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb3)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb3, r3, postState, true))
lock.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb3, r3, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
go func() {
@@ -1132,7 +1140,9 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb4)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(ctx, wsb4, r4, postState, true))
lock.Lock()
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb4, r4, [32]byte{}, postState, true}))
lock.Unlock()
wg.Done()
}()
wg.Wait()
@@ -1206,7 +1216,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1224,7 +1234,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1243,7 +1253,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1265,7 +1275,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, firstInvalidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1293,7 +1303,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head is the last invalid block imported. The
// store's headroot is the previous head (since the invalid block did
@@ -1322,7 +1332,7 @@ func TestStore_NoViableHead_FCU(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1384,7 +1394,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1402,7 +1412,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1422,7 +1432,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we haven't justified the second epoch yet
@@ -1444,7 +1454,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, firstInvalidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, firstInvalidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, firstInvalidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
jc = service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
@@ -1500,7 +1510,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
// Check the newly imported block is head, it justified the right
// checkpoint and the node is no longer optimistic
@@ -1564,7 +1574,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1583,7 +1593,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1602,7 +1612,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, lastValidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1629,7 +1639,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, invalidRoots[i-13], wsb, postState))
err = service.postBlockProcess(ctx, wsb, invalidRoots[i-13], postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, invalidRoots[i-13], [32]byte{}, postState, false})
require.NoError(t, err)
}
// Check that we have justified the second epoch
@@ -1694,7 +1704,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, true))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true}))
// Check that the head is still INVALID and the node is still optimistic
require.Equal(t, invalidHeadRoot, service.cfg.ForkChoiceStore.CachedHeadRoot())
optimistic, err = service.IsOptimistic(ctx)
@@ -1717,7 +1727,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
@@ -1743,7 +1753,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, true)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, true})
require.NoError(t, err)
require.Equal(t, root, service.cfg.ForkChoiceStore.CachedHeadRoot())
sjc = service.CurrentJustifiedCheckpt()
@@ -1799,7 +1809,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
}
for i := 6; i < 12; i++ {
@@ -1817,7 +1827,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
err = service.postBlockProcess(ctx, wsb, root, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false})
require.NoError(t, err)
}
@@ -1836,7 +1846,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, lastValidRoot, wsb, postState))
err = service.postBlockProcess(ctx, wsb, lastValidRoot, postState, false)
err = service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, lastValidRoot, [32]byte{}, postState, false})
require.NoError(t, err)
// save the post state and the payload Hash of this block since it will
// be the LVH
@@ -1865,7 +1875,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
require.NoError(t, service.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch))
_, err = service.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
require.NoError(t, err)
@@ -1936,7 +1946,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
rwsb, err := consensusblocks.NewROBlock(wsb)
require.NoError(t, err)
// We use onBlockBatch here because the valid chain is missing in forkchoice
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}))
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}, &das.MockAvailabilityStore{}))
// Check that the head is now VALID and the node is not optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
@@ -1980,7 +1990,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, root, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, root, [32]byte{}, postState, false}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
@@ -2035,74 +2045,131 @@ func TestFillMissingBlockPayloadId_PrepareAllPayloads(t *testing.T) {
// Helper function to simulate the block being on time or delayed for proposer
// boost. It alters the genesisTime tracked by the store.
func driftGenesisTime(s *Service, slot, delay int64) {
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) - delay
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) + delay
s.SetGenesisTime(time.Unix(time.Now().Unix()-offset, 0))
}
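// Worked example of the arithmetic (illustrative): driftGenesisTime(s, 10, 3)
// sets genesisTime = now - (10*SecondsPerSlot + 3), so the wall-clock start of
// slot 10 is three seconds in the past and the node behaves as if the block
// for slot 10 arrived 3s into the slot; a delay of 0 simulates an on-time block.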
func Test_commitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconNetworkConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
func TestMissingIndices(t *testing.T) {
cases := []struct {
name string
commits [][]byte
block func(*testing.T) consensusblocks.ROBlock
slot primitives.Slot
name string
expected [][]byte
present []uint64
result map[uint64]struct{}
root [32]byte
err error
}{
{
name: "pre deneb",
block: func(t *testing.T) consensusblocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
sb, err := consensusblocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
name: "zero len",
},
{
name: "commitments within da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
name: "expected exceeds max",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock + 1),
err: errMaxBlobsExceeded,
},
{
name: "commitments outside da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
name: "first missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{1, 2, 3, 4, 5},
result: fakeResult([]uint64{0}),
},
{
name: "all missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
result: fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
},
{
name: "none missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{0, 1, 2, 3, 4, 5},
result: fakeResult([]uint64{}),
},
{
name: "one commitment, missing",
expected: fakeCommitments(1),
present: []uint64{},
result: fakeResult([]uint64{0}),
},
{
name: "3 commitments, 1 missing",
expected: fakeCommitments(3),
present: []uint64{1},
result: fakeResult([]uint64{0, 2}),
},
{
name: "3 commitments, none missing",
expected: fakeCommitments(3),
present: []uint64{0, 1, 2},
result: fakeResult([]uint64{}),
},
{
name: "3 commitments, all missing",
expected: fakeCommitments(3),
present: []uint64{},
result: fakeResult([]uint64{0, 1, 2}),
},
}
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co := commitmentsToCheck(b, c.slot)
require.Equal(t, len(c.commits), len(co))
for i := 0; i < len(c.commits); i++ {
require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
require.NoError(t, bm.CreateFakeIndices(c.root, c.present))
missing, err := missingIndices(bs, c.root, c.expected)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, len(c.result), len(missing))
for key := range c.result {
m, ok := missing[key]
require.Equal(t, true, ok)
require.Equal(t, c.result[key], m)
}
})
}
}
func Test_getFCUArgs(t *testing.T) {
s, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
signed: wsb,
blockRoot: [32]byte{'a'},
postState: st,
isValidPayload: true,
}
// error branch
fcuArgs := &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
require.ErrorContains(t, "block does not exist", err)
// canonical branch
cfg.headRoot = cfg.blockRoot
fcuArgs = &fcuConfig{}
err = s.getFCUArgs(cfg, fcuArgs)
require.NoError(t, err)
require.Equal(t, cfg.blockRoot, fcuArgs.headRoot)
}
func fakeCommitments(n int) [][]byte {
f := make([][]byte, n)
for i := range f {
f[i] = make([]byte, 48)
}
return f
}
func fakeResult(missing []uint64) map[uint64]struct{} {
r := make(map[uint64]struct{}, len(missing))
for i := range missing {
r[missing[i]] = struct{}{}
}
return r
}


@@ -9,7 +9,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
@@ -53,15 +52,11 @@ func (s *Service) AttestationTargetState(ctx context.Context, target *ethpb.Chec
// VerifyLmdFfgConsistency verifies that an attestation's LMD and FFG votes are consistent with each other.
func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestation) error {
targetSlot, err := slots.EpochStart(a.Data.Target.Epoch)
r, err := s.TargetRootForEpoch([32]byte(a.Data.BeaconBlockRoot), a.Data.Target.Epoch)
if err != nil {
return err
}
r, err := s.Ancestor(ctx, a.Data.BeaconBlockRoot, targetSlot)
if err != nil {
return err
}
if !bytes.Equal(a.Data.Target.Root, r) {
if !bytes.Equal(a.Data.Target.Root, r[:]) {
return fmt.Errorf("FFG and LMD votes are not consistent, block root: %#x, target root: %#x, canonical target root: %#x", a.Data.BeaconBlockRoot, a.Data.Target.Root, r)
}
return nil
@@ -125,36 +120,44 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
// This function is only called at 10 seconds or 0 seconds into the slot
disparity := params.BeaconNetworkConfig().MaximumGossipClockDisparity
if !features.Get().DisableReorgLateBlocks {
disparity += reorgLateBlockCountAttestations
}
disparity := params.BeaconConfig().MaximumGossipClockDisparityDuration()
disparity += reorgLateBlockCountAttestations
s.processAttestations(ctx, disparity)
processAttsElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
start = time.Now()
// return early if we haven't changed head
newHeadRoot, err := s.cfg.ForkChoiceStore.Head(ctx)
if err != nil {
log.WithError(err).Error("Could not compute head from new attestations")
// Fallback to our current head root in the event of a failure.
s.headLock.RLock()
newHeadRoot = s.headRoot()
s.headLock.RUnlock()
return
}
if !s.isNewHead(newHeadRoot) {
return
}
log.WithField("newHeadRoot", fmt.Sprintf("%#x", newHeadRoot)).Debug("Head changed due to attestations")
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("could not get head block")
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
changed, err := s.forkchoiceUpdateWithExecution(s.ctx, newHeadRoot, proposingSlot)
if err != nil {
log.WithError(err).Error("could not update forkchoice")
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
}
if changed {
s.headLock.RLock()
log.WithFields(logrus.Fields{
"oldHeadRoot": fmt.Sprintf("%#x", s.headRoot()),
"newHeadRoot": fmt.Sprintf("%#x", newHeadRoot),
}).Debug("Head changed due to attestations")
s.headLock.RUnlock()
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
if err := s.forkchoiceUpdateWithExecution(s.ctx, fcuArgs); err != nil {
log.WithError(err).Error("could not update forkchoice")
}
}


@@ -36,50 +36,28 @@ func TestAttestationCheckPtState_FarFutureSlot(t *testing.T) {
require.ErrorContains(t, "exceeds max allowed value relative to the local clock", err)
}
func TestVerifyLMDFFGConsistent_NotOK(t *testing.T) {
func TestVerifyLMDFFGConsistent(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
f := service.cfg.ForkChoiceStore
fc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, fc, fc)
require.NoError(t, err)
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, f.InsertNode(ctx, state, r32))
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32, params.BeaconConfig().ZeroHash, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r33))
wanted := "FFG and LMD votes are not consistent"
a := util.NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = []byte{'a'}
a.Data.Target.Root = []byte{'c'}
a.Data.BeaconBlockRoot = r33[:]
require.ErrorContains(t, wanted, service.VerifyLmdFfgConsistency(context.Background(), a))
}
func TestVerifyLMDFFGConsistent_OK(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
a := util.NewAttestation()
a.Data.Target.Epoch = 1
a.Data.Target.Root = r32[:]
a.Data.BeaconBlockRoot = r33[:]
err = service.VerifyLmdFfgConsistency(context.Background(), a)
require.NoError(t, err, "Could not verify LMD and FFG votes to be consistent")
}
@@ -134,7 +112,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, tRoot, [32]byte{}, postState, false}))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
require.NoError(t, err)
require.Equal(t, 2, fcs.NodeCount())
@@ -190,7 +168,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
require.NoError(t, service.postBlockProcess(ctx, wsb, tRoot, postState, false))
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, wsb, tRoot, [32]byte{}, postState, false}))
require.Equal(t, 2, fcs.NodeCount())
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb))
require.Equal(t, tRoot, service.head.root)


@@ -3,21 +3,21 @@ package blockchain
import (
"context"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
)
// sendNewBlobEvent sends a message to the BlobNotifier channel that the blob
// for the blocroot `root` is ready in the database
// for the block root `root` is ready in the database
func (s *Service) sendNewBlobEvent(root [32]byte, index uint64) {
s.blobNotifiers.forRoot(root) <- index
s.blobNotifiers.notifyIndex(root, index)
}
// ReceiveBlob saves the blob to database and sends the new event
func (s *Service) ReceiveBlob(ctx context.Context, b *ethpb.BlobSidecar) error {
if err := s.cfg.BeaconDB.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{b}); err != nil {
func (s *Service) ReceiveBlob(ctx context.Context, b blocks.VerifiedROBlob) error {
if err := s.blobStorage.Save(b); err != nil {
return err
}
s.sendNewBlobEvent([32]byte(b.BlockRoot), b.Index)
s.sendNewBlobEvent(b.BlockRoot(), b.Index)
return nil
}
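// Illustrative caller shape (assumed; the actual wiring lives outside this
// package): a handler that has fully verified a sidecar hands it to the chain
// service, e.g.
//
//	var vb blocks.VerifiedROBlob // produced upstream by blob verification
//	if err := s.ReceiveBlob(ctx, vb); err != nil {
//		// handle persistence failure
//	}
//
// which persists the sidecar via blobStorage.Save and then notifies any
// isDataAvailable waiter registered for vb.BlockRoot().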


@@ -3,6 +3,7 @@ package blockchain
import (
"bytes"
"context"
"fmt"
"time"
"github.com/pkg/errors"
@@ -11,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -32,8 +34,8 @@ var epochsSinceFinalitySaveHotStateDB = primitives.Epoch(100)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -42,7 +44,7 @@ type BlockReceiver interface {
// BlobReceiver interface defines the methods of chain service for receiving new
// blobs
type BlobReceiver interface {
ReceiveBlob(context.Context, *ethpb.BlobSidecar) error
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
@@ -55,9 +57,14 @@ type SlashingReceiver interface {
// 1. Validate block, apply state transition and update checkpoints
// 2. Apply fork choice to the processed block
// 3. Save latest head info
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error {
func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlock")
defer span.End()
// Return early if the block has been synced
if s.InForkchoice(blockRoot) {
log.WithField("blockRoot", fmt.Sprintf("%#x", blockRoot)).Debug("Ignoring already synced block")
return nil
}
receivedTime := time.Now()
s.blockBeingSynced.set(blockRoot)
defer s.blockBeingSynced.unset(blockRoot)
@@ -66,6 +73,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
@@ -100,21 +111,41 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := eg.Wait(); err != nil {
return err
}
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
daStartTime := time.Now()
if avs != nil {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
// Defragment the state before continuing block processing.
s.defragmentState(postState)
// The rest of block processing takes a lock on forkchoice.
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.savePostStateInfo(ctx, blockRoot, blockCopy, postState); err != nil {
return errors.Wrap(err, "could not save post state info")
}
if err := s.postBlockProcess(ctx, blockCopy, blockRoot, postState, isValidPayload); err != nil {
args := &postBlockProcessConfig{
ctx: ctx,
signed: blockCopy,
blockRoot: blockRoot,
postState: postState,
isValidPayload: isValidPayload,
}
if err := s.postBlockProcess(args); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
return err
}
if coreTime.CurrentEpoch(postState) > currentEpoch {
if coreTime.CurrentEpoch(postState) > currentEpoch && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
@@ -165,7 +196,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix())); err != nil {
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
@@ -177,7 +208,8 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
log.WithError(err).Error("Unable to log state transition data")
}
chainServiceProcessingTime.Observe(float64(time.Since(receivedTime).Milliseconds()))
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
chainServiceProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
return nil
}
@@ -185,7 +217,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()
@@ -193,7 +225,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the incoming newly received block batches, one by one.
if err := s.onBlockBatch(ctx, blocks); err != nil {
if err := s.onBlockBatch(ctx, blocks, avs); err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
return err
@@ -252,11 +284,6 @@ func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
return s.hasBlockInInitSyncOrDB(ctx, root)
}
// RecentBlockSlot returns the block slot from the fork choice store
func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
return s.cfg.ForkChoiceStore.Slot(root)
}
// ReceiveAttesterSlashing receives an attester slashing and inserts it to forkchoice
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) {
s.cfg.ForkChoiceStore.Lock()

View File
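The signature changes above thread an optional das.AvailabilityStore through block processing: when the caller supplies one, it owns the blob DA check, and when it passes nil, ReceiveBlock falls back to the service's own isDataAvailable path. A hedged caller sketch mirroring the calls visible in the tests below; the wrapper function names are hypothetical, and the usual context/blocks/das/interfaces imports are assumed:

// forwardBlock hands a single block to the chain service; a nil availability
// store makes ReceiveBlock run its internal DA check instead.
func forwardBlock(ctx context.Context, recv BlockReceiver, blk interfaces.ReadOnlySignedBeaconBlock, root [32]byte) error {
	return recv.ReceiveBlock(ctx, blk, root, nil)
}

// forwardBatch hands an initial-sync batch to the chain service along with the
// availability store that vouches for the batch's blob data.
func forwardBatch(ctx context.Context, recv BlockReceiver, batch []blocks.ROBlock, avs das.AvailabilityStore) error {
	return recv.ReceiveBlockBatch(ctx, batch, avs)
}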

@@ -7,6 +7,8 @@ import (
"time"
blockchainTesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -130,7 +132,9 @@ func TestService_ReceiveBlock(t *testing.T) {
s, tr := minimalTestService(t,
WithFinalizedStateAtStartUp(genesis),
WithExitPool(voluntaryexits.NewPool()),
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}))
WithStateNotifier(&blockchainTesting.MockStateNotifier{RecordEvents: true}),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
)
beaconDB := tr.db
genesisBlockRoot := bytesutil.ToBytes32(nil)
@@ -143,7 +147,7 @@ func TestService_ReceiveBlock(t *testing.T) {
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(tt.args.block)
require.NoError(t, err)
err = s.ReceiveBlock(ctx, wsb, root)
err = s.ReceiveBlock(ctx, wsb, root, nil)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
@@ -176,7 +180,7 @@ func TestService_ReceiveBlockUpdateHead(t *testing.T) {
go func() {
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, s.ReceiveBlock(ctx, wsb, root))
require.NoError(t, s.ReceiveBlock(ctx, wsb, root, nil))
wg.Done()
}()
wg.Wait()
@@ -240,7 +244,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
require.NoError(t, err)
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb})
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb}, &das.MockAvailabilityStore{})
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {

View File

@@ -11,6 +11,8 @@ import (
"time"
"github.com/pkg/errors"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v4/async/event"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
@@ -20,6 +22,7 @@ import (
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
@@ -36,33 +39,35 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.
@@ -71,7 +76,8 @@ type config struct {
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
@@ -87,6 +93,13 @@ type config struct {
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
}
// Checker is an interface used to determine if a node is in initial sync
// or regular sync.
type Checker interface {
Synced() bool
}
var ErrMissingClockSetter = errors.New("blockchain Service initialized without a startup.ClockSetter")
@@ -94,6 +107,35 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
bn.Lock()
seen := bn.seenIndex[root]
if seen[idx] {
bn.Unlock()
return
}
seen[idx] = true
bn.seenIndex[root] = seen
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
bn.Unlock()
c <- idx
}
func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
@@ -110,6 +152,7 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
func (bn *blobNotifierMap) delete(root [32]byte) {
bn.Lock()
defer bn.Unlock()
delete(bn.seenIndex, root)
delete(bn.notifiers, root)
}
@@ -126,6 +169,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
}
srv := &Service{
ctx: ctx,
@@ -134,7 +178,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
checkpointStateCache: cache.NewCheckpointStateCache(),
initSyncBlocks: make(map[[32]byte]interfaces.ReadOnlySignedBeaconBlock),
blobNotifiers: bn,
cfg: &config{ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache()},
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
}
for _, opt := range opts {

View File
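The new SyncChecker config field only requires the single-method Checker interface defined above. A minimal sketch of a non-test implementation, assuming a node that flips an atomic flag once initial sync completes; the type and method names here are hypothetical (the mock used elsewhere in this diff simply always returns true), and the standard "sync/atomic" import is assumed:

// syncedFlag satisfies the blockchain Checker interface.
type syncedFlag struct {
	done atomic.Bool
}

// MarkSynced is called once by the sync service when initial sync finishes.
func (f *syncedFlag) MarkSynced() { f.done.Store(true) }

// Synced reports whether the node has completed initial sync.
func (f *syncedFlag) Synced() bool { return f.done.Load() }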

@@ -25,6 +25,7 @@ import (
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -99,7 +100,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithForkChoiceStore(fc),
WithAttestationService(attService),
WithStateGen(stateGen),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
WithPayloadIDCache(cache.NewPayloadIDCache()),
WithClockSynchronizer(startup.NewClockSynchronizer()),
}
@@ -445,11 +446,10 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
blk := util.NewBeaconBlock()
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
beaconState, err := util.NewBeaconState()
require.NoError(b, err)
require.NoError(b, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
@@ -514,3 +514,48 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
s.G = g
return s.Err
}
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
notifiers: make(map[[32]byte]chan uint64),
}
// Sample root and index
var root [32]byte
copy(root[:], "exampleRoot")
// Test notifying a new index
bn.notifyIndex(root, 1)
if !bn.seenIndex[root][1] {
t.Errorf("Index was not marked as seen")
}
// Test that a new channel is created
if _, ok := bn.notifiers[root]; !ok {
t.Errorf("Notifier channel was not created")
}
// Test notifying an already seen index
bn.notifyIndex(root, 1)
if len(bn.notifiers[root]) > 1 {
t.Errorf("Notifier channel should not receive multiple messages for the same index")
}
// Test notifying a new index again
bn.notifyIndex(root, 2)
if !bn.seenIndex[root][2] {
t.Errorf("Index was not marked as seen")
}
// Test that the notifier channel receives the index
select {
case idx := <-bn.notifiers[root]:
if idx != 1 {
t.Errorf("Received index on channel is incorrect")
}
default:
t.Errorf("Notifier channel did not receive the index")
}
}

View File

@@ -2,12 +2,16 @@ package blockchain
import (
"context"
"sync"
"testing"
"github.com/prysmaticlabs/prysm/v4/async/event"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
@@ -23,10 +27,13 @@ import (
type mockBeaconNode struct {
stateFeed *event.Feed
mu sync.Mutex
}
// StateFeed mocks the same method in the beacon node.
func (mbn *mockBeaconNode) StateFeed() *event.Feed {
mbn.mu.Lock()
defer mbn.mu.Unlock()
if mbn.stateFeed == nil {
mbn.stateFeed = new(event.Feed)
}
@@ -52,7 +59,7 @@ func (mb *mockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ ui
return nil
}
func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.SignedBlobSidecar) error {
func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
mb.broadcastCalled = true
return nil
}
@@ -110,6 +117,9 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithAttestationService(req.attSrv),
WithBLSToExecPool(req.blsPool),
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File
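The test setup changes above show the new wiring: the proposer payload IDs cache is replaced by PayloadIDCache, and tracked-validator, blob-storage and sync-checker options are now supplied. A hedged construction sketch using only option functions that appear in this diff; the wrapper helper and its parameters are hypothetical, and WithSyncChecker is assumed to accept any value satisfying the Checker interface:

// newChainService wires a Service with the caches and stores introduced here.
// (assumes the usual context, cache and filesystem imports)
func newChainService(ctx context.Context, blobs *filesystem.BlobStorage, checker Checker) (*Service, error) {
	return NewService(ctx,
		WithPayloadIDCache(cache.NewPayloadIDCache()),
		WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
		WithBlobStorage(blobs),
		WithSyncChecker(checker),
	)
}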

@@ -17,6 +17,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -24,11 +25,11 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -16,6 +16,7 @@ import (
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -23,11 +24,11 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
forkchoice2 "github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
@@ -72,7 +73,8 @@ type ChainService struct {
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []*ethpb.BlobSidecar
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -178,6 +180,14 @@ func (mon *MockOperationNotifier) OperationFeed() *event.Feed {
return mon.feed
}
// MockChecker is a mock sync checker.
type MockChecker struct{}
// Synced returns true.
func (_ MockChecker) Synced() bool {
return true
}
// ReceiveBlockInitialSync mocks ReceiveBlockInitialSync method in chain service.
func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte) error {
if s.State == nil {
@@ -207,7 +217,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, _ das.AvailabilityStore) error {
if s.State == nil {
return ErrNilState
}
@@ -237,7 +247,7 @@ func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBl
}
// ReceiveBlock mocks ReceiveBlock method in chain service.
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte) error {
func (s *ChainService) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte, _ das.AvailabilityStore) error {
if s.ReceiveBlockMockErr != nil {
return s.ReceiveBlockMockErr
}
@@ -319,7 +329,7 @@ func (s *ChainService) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
}
// ReceiveAttestation mocks ReceiveAttestation method in chain service.
func (_ *ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
func (*ChainService) ReceiveAttestation(_ context.Context, _ *ethpb.Attestation) error {
return nil
}
@@ -399,12 +409,12 @@ func (s *ChainService) RecentBlockSlot([32]byte) (primitives.Slot, error) {
}
// HeadGenesisValidatorsRoot mocks HeadGenesisValidatorsRoot method in chain service.
func (_ *ChainService) HeadGenesisValidatorsRoot() [32]byte {
func (*ChainService) HeadGenesisValidatorsRoot() [32]byte {
return [32]byte{}
}
// VerifyLmdFfgConsistency mocks VerifyLmdFfgConsistency and always returns nil.
func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
func (*ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
if !bytes.Equal(a.Data.BeaconBlockRoot, a.Data.Target.Root) {
return errors.New("LMD and FFG miss matched")
}
@@ -412,7 +422,7 @@ func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attes
}
// ChainHeads mocks ChainHeads and always return nil.
func (_ *ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
func (*ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
return [][32]byte{
bytesutil.ToBytes32(bytesutil.PadTo([]byte("foo"), 32)),
bytesutil.ToBytes32(bytesutil.PadTo([]byte("bar"), 32)),
@@ -421,7 +431,7 @@ func (_ *ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
}
// HeadPublicKeyToValidatorIndex mocks HeadPublicKeyToValidatorIndex and always return 0 and true.
func (_ *ChainService) HeadPublicKeyToValidatorIndex(_ [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
func (*ChainService) HeadPublicKeyToValidatorIndex(_ [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool) {
return 0, true
}
@@ -485,7 +495,7 @@ func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
}
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
func (*ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
// IsFinalized mocks the same method in the chain service.
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
@@ -574,7 +584,7 @@ func (s *ChainService) InsertNode(ctx context.Context, st state.BeaconState, roo
}
// ForkChoiceDump mocks the same method in the chain service
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*ethpbv1.ForkChoiceDump, error) {
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, error) {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.ForkChoiceDump(ctx)
}
@@ -598,12 +608,12 @@ func (s *ChainService) ProposerBoost() [32]byte {
}
// FinalizedBlockHash mocks the same method in the chain service
func (s *ChainService) FinalizedBlockHash() [32]byte {
func (*ChainService) FinalizedBlockHash() [32]byte {
return [32]byte{}
}
// UnrealizedJustifiedPayloadBlockHash mocks the same method in the chain service
func (s *ChainService) UnrealizedJustifiedPayloadBlockHash() [32]byte {
func (*ChainService) UnrealizedJustifiedPayloadBlockHash() [32]byte {
return [32]byte{}
}
@@ -613,7 +623,12 @@ func (c *ChainService) BlockBeingSynced(root [32]byte) bool {
}
// ReceiveBlob implements the same method in the chain service
func (c *ChainService) ReceiveBlob(_ context.Context, b *ethpb.BlobSidecar) error {
func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) error {
c.Blobs = append(c.Blobs, b)
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil
}

View File

@@ -0,0 +1,27 @@
package blockchain
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
// trackedProposer returns whether the beacon node was informed, via the
// validators/prepare_proposer endpoint, of the proposer at the given slot.
// It only returns true if the tracked proposer is present and active.
func (s *Service) trackedProposer(st state.ReadOnlyBeaconState, slot primitives.Slot) (cache.TrackedValidator, bool) {
if features.Get().PrepareAllPayloads {
return cache.TrackedValidator{Active: true}, true
}
id, err := helpers.BeaconProposerIndexAtSlot(s.ctx, st, slot)
if err != nil {
return cache.TrackedValidator{}, false
}
val, ok := s.cfg.TrackedValidatorsCache.Validator(id)
if !ok {
return cache.TrackedValidator{}, false
}
return val, val.Active
}

View File
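trackedProposer gates proposer-specific work on whether the validator was registered ahead of time (or the PrepareAllPayloads feature flag forces tracking). A hypothetical caller sketch built only on the helper shown above; the method name is not part of this diff:

// shouldPrepareForProposal reports whether payload preparation is worthwhile
// for the given slot, i.e. whether a tracked, active proposer is known.
func (s *Service) shouldPrepareForProposal(st state.ReadOnlyBeaconState, slot primitives.Slot) bool {
	_, tracked := s.trackedProposer(st, slot)
	return tracked
}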

@@ -15,7 +15,6 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/db:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",

Some files were not shown because too many files have changed in this diff.