Compare commits

...

57 Commits

Author SHA1 Message Date
Potuz
888e76f960 Do not include IL signature in the block 2024-02-06 09:01:11 -03:00
Potuz
cac7024f13 Add beacon block protos 2024-02-06 08:57:34 -03:00
Potuz
e1b9ccd6dc add ssz marshalling 2024-02-06 08:57:34 -03:00
Potuz
2ba1cf494b Add IL and payload protos 2024-02-06 08:57:34 -03:00
Thabokani
692ebd313f Fix typos in doc (#13583)
Signed-off-by: Thabokani <149070269+Thabokani@users.noreply.github.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-06 10:18:21 +00:00
Nishant Das
6fa656c1ee Add Sync Checker (#13580)
* fix it

* add it in

* typo

* fix tests

* fix tests

* export and add test

* preston's review
2024-02-06 02:34:30 +00:00
Dhruv Bodani
55a29a4670 Implement beacon committee selections (#13503)
* implement beacon committee selections

* fix build

* fix lint

* fix lint

* Update beacon-chain/rpc/eth/shared/structs.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/beacon_committee_selections.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* move beacon committee selection structs to validator module

* fix bazel build files

* add support for POST and GET endpoints for get state validators query

* add a handler to return error from beacon node

* move beacon committee selection to validator top-level module

* fix bazel

* re-arrange fields to fix lint

* fix TestServer_InitializeRoutes

* fix build and lint

* fix build and lint

* fix TestSubmitAggregateAndProof_Distributed

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-02-05 15:43:51 +00:00
Potuz
e2e7e84a96 Get the right head state when proposing a failed reorg (#13579)
* Get the right head state when proposing a failed reorg

* add unit test

* split logic
2024-02-05 13:40:35 +00:00
terence
91b0a93df7 Enhance EL block height log (#13582) 2024-02-05 01:52:01 +00:00
Preston Van Loon
8839015312 docker: Add coreutils to docker images (#13564)
* Add coreutils to docker images

* add coreutils dependencies

* Add a prysmaticlabs.com/uploads backup of the deb files

* Run gazelle and fix issues

* Remove broken tar, change http_archive deps to debian_archive, remove http mirrors in favor of snapshot

* Add comments about which deps are required by other deps
2024-02-03 19:21:21 +00:00
terence
61ab4bf7ca Rename block by range request log (#13561) 2024-02-03 19:20:04 +00:00
Radosław Kapka
e3ce1bde45 Move API structs to api module (#13577) 2024-02-03 11:57:01 +00:00
Nishant Das
9d1189b222 Do Not Cache For Non Active Public Keys (#13581)
* fix it

* clean up
2024-02-03 05:19:54 +00:00
KeienWang
74f5452a64 Fix typo in [beacon-chain/cache/depositsnapshot/deposit_cache_test.go]: Corrected a spelling error. (#13532)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-02-03 05:14:32 +00:00
Nishant Das
ea1204d3c7 Fix Slashing Gossip Checks (#13574)
* fix it

* add for proposals too
2024-02-02 23:13:22 +00:00
Radosław Kapka
d9ac69752b Return consensus block value in Wei (#13575)
* Return consensus block value in Wei

* Return consensus block value in Wei

* review
2024-02-02 18:17:40 +00:00
terence
52af63f25a Revise blob sidecar not found log (#13571)
* Update blob sidecar not found log

* Use fields
2024-02-01 20:48:59 +00:00
james-prysm
2dad245bc8 handle slice out of range (#13568)
* handle slice out of range

* adding some tests
2024-02-01 16:59:40 +00:00
Potuz
9a9990605c Update Gohashtree to v0.0.4-beta (#13569)
* Update Gohashtree to v0.0.4-beta

* go mod tidy

* go mod tidy

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-02-01 15:42:56 +00:00
james-prysm
2cddb5ca86 fixing jwt auth checks (#13565) 2024-02-01 15:13:52 +00:00
Nishant Das
73ce28c356 make it the default (#13556) 2024-01-31 10:27:26 +00:00
Manu NALEPA
7a294e861e Beacon node slasher improvement (#13549)
* Slasher: Ensure all goroutines are stopped before running `Stop` actions.

Fixes #13550.
In tests, `exitChan` is no longer needed since wait groups are used to wait
for all goroutines to be stopped.

* `slasher.go`: Add comments and rename some variables. - NFC

* `detect_blocks.go`: Improve. - NFC

- Rename some variables.
- Add comments.
- Use second element of `range` when possible.

* `chunks.go`: Remove `_` receivers. - NFC

* `validateAttestationIntegrity`: Improve documentation. - NFC

* `filterAttestations`: Avoid `else` and rename variable. - NFC

* `slasher.go`: Fix and add comments.

* `SaveAttestationRecordsForValidators`: Remove unused code.

* `LastEpochWrittenForValidators`: Name variables consistently. - NFC

Avoid mixing `indice(s)` and `index(es)`.

* `SaveLastEpochsWrittenForValidators`: Name variables consistently. - NFC

* `CheckAttesterDoubleVotes`: Rename variables and add comments. - NFC

* `schema.go`: Add comments. - NFC

* `processQueuedAttestations`: Add comments. - NFC

* `checkDoubleVotes`: Rename variable. - NFC

* `Test_processQueuedAttestations`: Ensure there is no error log.

* `shouldNotBeSlashable` => `shouldBeSlashable`

* `Test_processQueuedAttestations`: Add 2 test cases:
- Same target with different signing roots
- Same target with same signing roots

* `checkDoubleVotesOnDisk` ==> `checkDoubleVotes`.

Before this commit, `checkDoubleVotes` did two tasks:
- Checking if there are any slashable double votes in the input
  list of attestations with respect to each other.
- Checking if there are any slashable double votes in the input
  list of attestations with respect to our database.

However, `checkDoubleVotes` is called only in
`checkSlashableAttestations`.

And `checkSlashableAttestations` is called only in:
- `processQueuedAttestations`, and in
- `IsSlashableAttestation`

Study of case `processQueuedAttestations`:
---------------------------------------------
In `processQueuedAttestations`, `checkSlashableAttestations`
is ALWAYS called after
`Database.SaveAttestationRecordsForValidators`.

It means that, when calling `checkSlashableAttestations`,
`validAtts` are ALREADY stored in the DB.

Each attestation of `validAtts` will be checked twice:
- Against the other attestations of `validAtts` (the portion of
  deleted code)
- Against the content of the database.

One of those two checks is redundant.
==> We can remove the check against other attestations in `validAtts`.

Study of case `Database.SaveAttestationRecordsForValidators`:
----------------------------------------------------------------
In `Database.SaveAttestationRecordsForValidators`,
`checkSlashableAttestations` is ALWAYS called with a list of
attestations containing only ONE attestation.

This single attestation will be checked twice:
- Against itself, and an attestation cannot conflict with itself.
- Against the content of the database.

==> We can remove the check against other attestations in `validAtts`.

=========================

In both cases, we showed that we can remove the check of attestations
against the content of `validAtts`, and the corresponding test
`Test_checkDoubleVotes_SlashableInputAttestations` (a simplified sketch of
the remaining database check follows this commit entry).

* `Test_processQueuedBlocks_DetectsDoubleProposals`: Wrap proposals.

So we can add new proposals later.

* Fix slasher multiple proposals false negative.

If a first batch of blocks is sent with:
- validator 1 - slot 4 - signing root 1
- validator 1 - slot 5 - signing root 1

Then, if a second batch of blocks is sent with:
- validator 1 - slot 4 - signing root 2

Because we have two blocks proposed by the same validator (1) and for
the same slot (4), but with two different signing roots (1 and 2),
validator 1 should be slashed.

This is not the case before this commit.
A new test case has been added as well to check this.

Fixes #13551

* `params.go`: Change comments. - NFC

* `CheckSlashable`: Keep the happy path without indentation.

* `detectAllAttesterSlashings` => `checkSurrounds`.

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/db/slasherkv/slasher.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* `CheckAttesterDoubleVotes`: Keep happy path without indentation.

Well, even if, in our case, "happy path" means slashing.

* 'SaveAttestationRecordsForValidators': Save the first attestation.

In case of multiple votes, arbitrarily save the first attestation.
Saving the first one in particular has no functional impact,
since in any case all attestations will be tested against
the content of the database. So all but the first one will be
detected as slashable.

However, saving the first one rather than another lets us avoid
modifying the end-to-end tests, since they expect the first one
to be saved in the database.

* Rename `min` => `minimum`.

Not to conflict with the new `min` built-in function.

* `couldNotSaveSlashableAtt` ==> `couldNotCheckSlashableAtt`

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-31 09:49:14 +00:00
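The simplification described in the slasher commit above (#13549) can be pictured with a minimal sketch: once every incoming attestation is persisted before the slashability check runs, only the comparison against the database is needed, and the in-batch comparison can be dropped. The types and names below (`record`, `store`, `checkDoubleVote`) are hypothetical stand-ins and do not reproduce Prysm's actual slasher code.

```go
package main

import (
	"bytes"
	"fmt"
)

// record is a hypothetical persisted attestation record: one signing root
// per (validator, target epoch) pair.
type record struct {
	ValidatorIndex uint64
	TargetEpoch    uint64
	SigningRoot    [32]byte
}

// store stands in for the slasher database: the first attestation seen for a
// (validator, target) pair is saved, later ones are compared against it.
type store map[[2]uint64]record

// checkDoubleVote compares an incoming attestation only against the stored
// record, mirroring the simplified flow: every attestation in a batch is
// already saved before the check runs, so checking within the batch as well
// would be redundant.
func (s store) checkDoubleVote(r record) (slashable bool) {
	key := [2]uint64{r.ValidatorIndex, r.TargetEpoch}
	existing, ok := s[key]
	if !ok {
		s[key] = r // arbitrarily keep the first attestation, as in the commit
		return false
	}
	return !bytes.Equal(existing.SigningRoot[:], r.SigningRoot[:])
}

func main() {
	s := store{}
	first := record{ValidatorIndex: 1, TargetEpoch: 4, SigningRoot: [32]byte{1}}
	second := record{ValidatorIndex: 1, TargetEpoch: 4, SigningRoot: [32]byte{2}}
	fmt.Println(s.checkDoubleVote(first))  // false: first vote is stored
	fmt.Println(s.checkDoubleVote(second)) // true: same target, different signing root
}
```

With this shape, the false-negative scenario from the commit (a second batch repeating a validator/slot or validator/target with a different signing root) is caught, because the later submission is always compared against the record persisted from the first batch.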
james-prysm
258123341e add a log and update size for promptui (#13542) 2024-01-30 17:19:31 +00:00
Preston Van Loon
224b136737 Revert "set limit to multiple of burst for goerli" (#13552)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-30 06:10:12 +00:00
Nishant Das
3ed4866eec Makes Our New Deposit Trie The Default (#13555)
* make 4881 the default

* fix failed build
2024-01-30 05:15:52 +00:00
kasey
373c853d17 set limit to multiple of burst for goerli (#13544)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-27 22:12:08 +00:00
terence
23b0718b5f Add metric for data availability wait time (#13534)
* Add metric for data availability wait time

* Kasey's feedback

* Kasey's feedback
2024-01-26 18:17:25 +00:00
terence
3a9854145c Correct metrics from ns to ms (#13540) 2024-01-26 17:43:30 +00:00
Radosław Kapka
1b70d2b566 Fetch unaggregated atts in GetAggregateAttestation (#13533) 2024-01-26 17:08:58 +00:00
Nishant Das
59b310a221 make it the same (#13531) 2024-01-26 05:35:27 +00:00
Nishant Das
22b6d1751d Enable Backfill in E2E (#13524)
* enable backfill for devmode

* enable backfill

* gaz

* move to its own package

* fix panic

* fix bug

* gaz

* kasey's review
2024-01-26 04:37:41 +00:00
Potuz
9c13d47f4c fix off by one (#13529) 2024-01-26 00:05:56 +00:00
Justin Traglia
835dce5f6e Enable wastedassign linter & fix findings (#13507)
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-01-25 17:07:48 +00:00
james-prysm
c4c28e4825 fixing small typo in error messages (#13525) 2024-01-25 04:56:17 +00:00
Radosław Kapka
c996109b3a Return payload value in Wei from /eth/v3/validator/blocks (#13497)
* Add value in Wei to execution payload

* simplify how payload is returned

* test fix

* fix issues

* review

* fix block handlers
2024-01-24 20:58:35 +00:00
terence
e397f8a2bd Skip origin root when cleaning dirty state (#13521)
* Skip origin root when cleaning dirty state

* Clean up
2024-01-24 17:22:50 +00:00
Radosław Kapka
6438060733 Clear cache everywhere in tests of core helpers (#13509) 2024-01-24 16:11:43 +00:00
Nishant Das
a2892b1ed5 clean up validate beacon block (#13517) 2024-01-24 05:48:15 +00:00
Nishant Das
f4ab2ca79f lower it (#13516) 2024-01-24 01:28:36 +00:00
kasey
dbcf5c29cd moving some blob rpc validation close to peer read (#13511)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 22:54:16 +00:00
james-prysm
c9fe53bc32 Blob API: make errors more generic (#13513)
* make api response more generic

* gaz
2024-01-23 20:07:46 +00:00
terence
8522febd88 Add Holesky Deneb Epoch (#13506)
* Add Holesky Deneb Epoch

* Fix fork version

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Fix config

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-01-23 19:29:17 +00:00
james-prysm
75a28310c2 fixing route to match specs (#13510) 2024-01-23 18:04:03 +00:00
kasey
1df173e701 Block backfilling (#12968)
* backfill service

* fix bug where origin state is never unlocked

* support mvslice states

* use renamed interface

* refactor db code to skip block cache for backfill

* lint

* add test for verifier.verify

* enable service in service init test

* cancellation cleanup

* adding nil checks to configset juggling

* assume blocks are available by default

As long as we're sure the AvailableBlocker is initialized correctly
during node startup, defaulting to assuming we aren't in a checkpoint
sync simplifies things greatly for tests (see the sketch after this entry).

* block saving path refactor and bugfix

* fix fillback test

* fix BackfillStatus init tests

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-01-23 07:54:30 +00:00
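The "assume blocks are available by default" note in the backfill commit above can be illustrated with a tiny sketch. `AvailableBlocker` is named in the commit message, but its real signature is not shown in this diff, so the interface and default implementation below are assumptions used only to convey the idea of defaulting to "no backfill gap".

```go
package main

import "fmt"

// AvailableBlocker is assumed to answer whether a block at a given slot is
// expected to be available locally (i.e. not still pending backfill).
type AvailableBlocker interface {
	AvailableBlock(slot uint64) bool
}

// alwaysAvailable is the assumed default: outside of a checkpoint sync there
// is no backfill gap, so every block is treated as available.
type alwaysAvailable struct{}

func (alwaysAvailable) AvailableBlock(uint64) bool { return true }

func main() {
	var ab AvailableBlocker = alwaysAvailable{}
	fmt.Println(ab.AvailableBlock(42)) // true: default assumes no backfill gap
}
```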
terence
3187a05a76 Align aggregated att gossip validations (#13490)
* Align aggregated att gossip validations

* Feedback on reusing existing methods

* Nishant's feedback

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 04:37:06 +00:00
Justin Traglia
4e24102237 Fix minor issue in blsToExecChange validator (#13498)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-23 03:26:57 +00:00
james-prysm
8dd5e96b29 re-enabling jwt on keymanager API (#13492)
* re-enabling jwt on keymanager API

* adding tests

* Update validator/rpc/intercepter.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* handling error in test

* remove debugging logs

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-01-22 22:16:10 +00:00
james-prysm
4afb379f8d cleanup duties naming (#13451)
* updating some naming to reflect changes to duties

* fixing unit tests

* fixing more tests
2024-01-22 16:58:25 +00:00
Nishant Das
5a2453ac9c Add Debug State Transition Method (#13495)
* add it

* lint
2024-01-22 14:46:20 +00:00
Nishant Das
e610d2a5de fix it (#13496) 2024-01-22 14:26:14 +00:00
Preston Van Loon
233aaf2f9e e2e: Fix multiclient lighthouse flag removal (#13494) 2024-01-21 21:11:11 +00:00
Nishant Das
a49bdcaa1f fix it (#13493) 2024-01-20 16:15:38 +00:00
Gaki
bdd7b2caa9 chore: typo fix (#13461)
* messsage

* cancellation
2024-01-20 01:07:17 +00:00
terence
8de0e3804b Update Sepolia Deneb fork epoch (#13491) 2024-01-19 18:47:07 +00:00
Ying Quan Tan
bfb648067b Re-enable Slasher E2E Test (#13420)
* re-enable e2e slashing test #12415

* refactored slashing evaluator

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-01-19 04:44:27 +00:00
terence
852db1f3eb Remove debug setting highest slot log (#13488) 2024-01-19 04:25:15 +00:00
Nishant Das
5d3663ef8d update lighthouse and tests (#13470) 2024-01-19 03:46:36 +00:00
403 changed files with 15176 additions and 5672 deletions

View File

@@ -80,7 +80,6 @@ linters:
- thelper
- unparam
- varnamelen
- wastedassign
- wrapcheck
- wsl
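For context on the `wastedassign` entry dropped from this disabled-linters list (commit #13507 above enables the linter and fixes its findings), here is a minimal Go illustration of the pattern the linter reports: a value assigned to a variable and then overwritten before it is ever read. The `lookup` function is made up for the example.

```go
package main

import "fmt"

func lookup() (string, error) { return "value", nil }

func main() {
	// wastedassign flags the next line: the "default" value assigned to v is
	// never read, because v is reassigned before any use.
	v := "default"
	v, err := lookup()
	if err != nil {
		v = "fallback"
	}
	fmt.Println(v)
}
```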

View File

@@ -55,7 +55,7 @@ bazel build //beacon-chain --config=release
## Adding / updating dependencies
1. Add your dependency as you would with go modules. I.e. `go get ...`
1. Run `gazelle update-repos -from_file=go.mod` to update the bazel managed dependencies.
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
Example:

View File

@@ -106,6 +106,13 @@ load("@rules_distroless//distroless:dependencies.bzl", "rules_distroless_depende
rules_distroless_dependencies()
http_archive(
name = "distroless",
integrity = "sha256-Cf00kUp1NyXA3LzbdyYy4Kda27wbkB8+A9MliTxq4jE=",
strip_prefix = "distroless-9dc924b9fe812eec2fa0061824dcad39eb09d0d6",
url = "https://github.com/GoogleContainerTools/distroless/archive/9dc924b9fe812eec2fa0061824dcad39eb09d0d6.tar.gz", # 2024-01-24
)
load("@aspect_bazel_lib//lib:repositories.bzl", "aspect_bazel_lib_dependencies", "aspect_bazel_lib_register_toolchains")
aspect_bazel_lib_dependencies()
@@ -144,6 +151,10 @@ http_archive(
],
)
load("//:distroless_deps.bzl", "distroless_deps")
distroless_deps()
# Override default import in rules_go with special patch until
# https://github.com/gogo/protobuf/pull/582 is merged.
git_repository(
@@ -349,9 +360,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "9f66d8d5644982d3d0d2e3d2b9ebe77a5f96638a5d7fcd715599c32818195cb3",
strip_prefix = "holesky-ea39b9006210848e13f28d92e12a30548cecd41d",
url = "https://github.com/eth-clients/holesky/archive/ea39b9006210848e13f28d92e12a30548cecd41d.tar.gz", # 2023-09-21
sha256 = "5f4be6fd088683ea9db45c863b9c5a1884422449e5b59fd2d561d3ba0f73ffd9",
strip_prefix = "holesky-9d9aabf2d4de51334ee5fed6c79a4d55097d1a43",
url = "https://github.com/eth-clients/holesky/archive/9d9aabf2d4de51334ee5fed6c79a4d55097d1a43.tar.gz", # 2024-01-22
)
http_archive(

View File

@@ -12,11 +12,8 @@ go_library(
deps = [
"//api/client:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/config:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/prysm/beacon:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -17,10 +17,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/config"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
apibeacon "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/beacon"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/network/forks"
@@ -150,8 +147,8 @@ func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fo
if err != nil {
return nil, errors.Wrapf(err, "error requesting fork by state id = %s", stateId)
}
fr := &shared.Fork{}
dataWrapper := &struct{ Data *shared.Fork }{Data: fr}
fr := &structs.Fork{}
dataWrapper := &struct{ Data *structs.Fork }{Data: fr}
err = json.Unmarshal(body, dataWrapper)
if err != nil {
return nil, errors.Wrap(err, "error decoding json response in GetFork")
@@ -179,12 +176,12 @@ func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, er
}
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*config.GetSpecResponse, error) {
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
fsr := &config.GetSpecResponse{}
fsr := &structs.GetSpecResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
@@ -259,7 +256,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
if err != nil {
return nil, err
}
v := &apibeacon.GetWeakSubjectivityResponse{}
v := &structs.GetWeakSubjectivityResponse{}
err = json.Unmarshal(body, v)
if err != nil {
return nil, err
@@ -285,7 +282,7 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
// SubmitChangeBLStoExecution calls a beacon API endpoint to set the withdrawal addresses based on the given signed messages.
// If the API responds with something other than OK there will be failure messages associated to the corresponding request message.
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shared.SignedBLSToExecutionChange) error {
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*structs.SignedBLSToExecutionChange) error {
u := c.BaseURL().ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
body, err := json.Marshal(request)
if err != nil {
@@ -324,12 +321,12 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*shar
// GetBLStoExecutionChanges gets all the set withdrawal messages in the node's operation pool.
// Returns a struct representation of json response.
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExecutionChangesPoolResponse, error) {
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToExecutionChangesPoolResponse, error) {
body, err := c.Get(ctx, changeBLStoExecutionPath)
if err != nil {
return nil, err
}
poolResponse := &beacon.BLSToExecutionChangesPoolResponse{}
poolResponse := &structs.BLSToExecutionChangesPoolResponse{}
err = json.Unmarshal(body, poolResponse)
if err != nil {
return nil, err
@@ -338,7 +335,7 @@ func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*beacon.BLSToExe
}
type forkScheduleResponse struct {
Data []shared.Fork
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {

View File

@@ -11,7 +11,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -40,7 +40,7 @@ go_test(
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//beacon-chain/rpc/eth/shared:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -165,7 +165,7 @@ func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --
@@ -249,7 +249,7 @@ func (b builderBidDeneb) HashTreeRootWith(hh *ssz.Hasher) error {
// Header --
func (b builderBidDeneb) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToGwei(b.p.Value))
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header, blocks.PayloadValueToWei(b.p.Value))
}
// BlobKzgCommitments --
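The `PayloadValueToGwei` to `PayloadValueToWei` switch above ties into the commits that return block and payload values in Wei (#13575, #13497). As a rough, stdlib-only sketch of the kind of conversion involved, assuming the builder bid value arrives as a 32-byte little-endian uint256 (the behavior of Prysm's actual helper is not shown in this diff):

```go
package main

import (
	"fmt"
	"math/big"
)

// littleEndianUint256ToWei interprets a little-endian 32-byte value as an
// unsigned integer and returns it as a *big.Int in Wei. big.Int.SetBytes
// expects big-endian input, so the bytes are reversed first.
func littleEndianUint256ToWei(le []byte) *big.Int {
	be := make([]byte, len(le))
	for i, b := range le {
		be[len(le)-1-i] = b
	}
	return new(big.Int).SetBytes(be)
}

func main() {
	// 1 Gwei = 1e9 Wei = 0x3b9aca00, encoded little-endian in the low bytes.
	le := make([]byte, 32)
	copy(le, []byte{0x00, 0xca, 0x9a, 0x3b})
	fmt.Println(littleEndianUint256ToWei(le)) // 1000000000
}
```

The example assumes little-endian input; the endianness comment retained in the code above points at the same concern, though the exact wire representation is not spelled out in this diff.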

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"math/big"
"net"
"net/http"
"net/url"
@@ -13,7 +14,7 @@ import (
"text/template"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -266,9 +267,9 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
tracing.AnnotateError(span, err)
return err
}
vs := make([]*shared.SignedValidatorRegistration, len(svr))
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = shared.SignedValidatorRegistrationFromConsensus(svr[i])
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)
if err != nil {
@@ -293,7 +294,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(&ethpb.SignedBlindedBeaconBlockBellatrix{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockBellatrix to json marshalable type")
}
@@ -330,7 +331,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(&ethpb.SignedBlindedBeaconBlockCapella{Block: psb.Block, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockCapella to json marshalable type")
}
@@ -357,7 +358,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadCapella(p, 0)
payload, err := blocks.WrappedExecutionPayloadCapella(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}
@@ -367,7 +368,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not get protobuf block")
}
b, err := shared.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
b, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(&ethpb.SignedBlindedBeaconBlockDeneb{Message: psb.Message, Signature: bytesutil.SafeCopyBytes(psb.Signature)})
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert SignedBlindedBeaconBlockDeneb to json marshalable type")
}
@@ -394,7 +395,7 @@ func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlyS
if err != nil {
return nil, nil, errors.Wrapf(err, "could not extract proto message from payload")
}
payload, err := blocks.WrappedExecutionPayloadDeneb(p, 0)
payload, err := blocks.WrappedExecutionPayloadDeneb(p, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrapf(err, "could not wrap execution payload in interface")
}

View File

@@ -13,7 +13,7 @@ import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
types "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -376,7 +376,7 @@ func TestSubmitBlindedBlock(t *testing.T) {
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
var req shared.SignedBlindedBeaconBlockDeneb
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
block, err := req.ToConsensus()

View File

@@ -13,7 +13,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/math"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
@@ -38,7 +38,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
},
Signature: make([]byte, 96),
}
a := shared.SignedValidatorRegistrationFromConsensus(svr)
a := structs.SignedValidatorRegistrationFromConsensus(svr)
je, err := json.Marshal(a)
require.NoError(t, err)
// decode with a struct w/ plain strings so we can check the string encoding of the hex fields
@@ -55,7 +55,7 @@ func TestSignedValidatorRegistration_MarshalJSON(t *testing.T) {
require.Equal(t, "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000", un.Message.Pubkey)
t.Run("roundtrip", func(t *testing.T) {
b := &shared.SignedValidatorRegistration{}
b := &structs.SignedValidatorRegistration{}
if err := json.Unmarshal(je, b); err != nil {
require.NoError(t, err)
}
@@ -1718,7 +1718,7 @@ func TestUint256UnmarshalTooBig(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
b, err := structs.BlindedBeaconBlockBellatrixFromConsensus(&eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
@@ -1748,7 +1748,7 @@ func TestMarshalBlindedBeaconBlockBodyBellatrix(t *testing.T) {
func TestMarshalBlindedBeaconBlockBodyCapella(t *testing.T) {
expected, err := os.ReadFile("testdata/blinded-block-capella.json")
require.NoError(t, err)
b, err := shared.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
b, err := structs.BlindedBeaconBlockCapellaFromConsensus(&eth.BlindedBeaconBlockCapella{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),

View File

@@ -1,3 +1,7 @@
package api
const WebUrlPrefix = "/v2/validator/"
const (
WebUrlPrefix = "/v2/validator/"
WebApiUrlPrefix = "/api/v2/validator/"
KeymanagerApiPrefix = "/eth/v1"
)
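The new prefixes relate to the JWT commits in the log above ("re-enabling jwt on keymanager API" #13492 and "fixing jwt auth checks" #13565). Below is a hypothetical sketch of prefix-based gating, not Prysm's actual interceptor: it only shows how constants like these could decide whether a request must carry a token.

```go
package main

import (
	"net/http"
	"strings"
)

const (
	WebUrlPrefix        = "/v2/validator/"
	WebApiUrlPrefix     = "/api/v2/validator/"
	KeymanagerApiPrefix = "/eth/v1"
)

// requiresAuth reports whether a request path falls under one of the
// prefixes that should be protected by JWT authentication.
func requiresAuth(path string) bool {
	return strings.HasPrefix(path, WebApiUrlPrefix) ||
		strings.HasPrefix(path, KeymanagerApiPrefix)
}

// withJWT wraps a handler and rejects unauthenticated requests on protected
// paths. Token validation itself is elided and supplied by the caller.
func withJWT(next http.Handler, validate func(token string) bool) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if requiresAuth(r.URL.Path) {
			token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
			if !validate(token) {
				http.Error(w, "unauthorized", http.StatusUnauthorized)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, _ *http.Request) { w.WriteHeader(http.StatusOK) })
	_ = withJWT(mux, func(string) bool { return false }) // plug in real validation here
}
```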

View File

@@ -0,0 +1,40 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"block.go",
"conversions.go",
"conversions_block.go",
"conversions_state.go",
"endpoints_beacon.go",
"endpoints_blob.go",
"endpoints_builder.go",
"endpoints_config.go",
"endpoints_debug.go",
"endpoints_events.go",
"endpoints_lightclient.go",
"endpoints_node.go",
"endpoints_rewards.go",
"endpoints_validator.go",
"other.go",
"state.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/api/server/structs",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -1,4 +1,4 @@
package shared
package structs
type SignedBeaconBlock struct {
Message *BeaconBlock `json:"message"`

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"fmt"
@@ -23,7 +23,7 @@ var errNilValue = errors.New("nil value")
func ValidatorFromConsensus(v *eth.Validator) *Validator {
return &Validator{
PublicKey: hexutil.Encode(v.PublicKey),
Pubkey: hexutil.Encode(v.PublicKey),
WithdrawalCredentials: hexutil.Encode(v.WithdrawalCredentials),
EffectiveBalance: fmt.Sprintf("%d", v.EffectiveBalance),
Slashed: v.Slashed,

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"fmt"
@@ -559,7 +559,7 @@ func (b *SignedBlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericSignedBeaco
Block: bl,
Signature: sig,
}
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericBeaconBlock, error) {
@@ -567,7 +567,7 @@ func (b *BlindedBeaconBlockBellatrix) ToGeneric() (*eth.GenericBeaconBlock, erro
if err != nil {
return nil, err
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedBellatrix{BlindedBellatrix: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockBellatrix) ToConsensus() (*eth.BlindedBeaconBlockBellatrix, error) {
@@ -1016,7 +1016,7 @@ func (b *SignedBlindedBeaconBlockCapella) ToGeneric() (*eth.GenericSignedBeaconB
Block: bl,
Signature: sig,
}
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericSignedBeaconBlock{Block: &eth.GenericSignedBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockCapella) ToGeneric() (*eth.GenericBeaconBlock, error) {
@@ -1024,7 +1024,7 @@ func (b *BlindedBeaconBlockCapella) ToGeneric() (*eth.GenericBeaconBlock, error)
if err != nil {
return nil, err
}
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true, PayloadValue: 0 /* can't get payload value from blinded block */}, nil
return &eth.GenericBeaconBlock{Block: &eth.GenericBeaconBlock_BlindedCapella{BlindedCapella: block}, IsBlinded: true}, nil
}
func (b *BlindedBeaconBlockCapella) ToConsensus() (*eth.BlindedBeaconBlockCapella, error) {

View File

@@ -1,4 +1,4 @@
package shared
package structs
import (
"errors"

View File

@@ -1,9 +1,7 @@
package beacon
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type BlockRootResponse struct {
@@ -17,31 +15,31 @@ type BlockRoot struct {
}
type GetCommitteesResponse struct {
Data []*shared.Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Committee `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type ListAttestationsResponse struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type SubmitAttestationsRequest struct {
Data []*shared.Attestation `json:"data"`
Data []*Attestation `json:"data"`
}
type ListVoluntaryExitsResponse struct {
Data []*shared.SignedVoluntaryExit `json:"data"`
Data []*SignedVoluntaryExit `json:"data"`
}
type SubmitSyncCommitteeSignaturesRequest struct {
Data []*shared.SyncCommitteeMessage `json:"data"`
Data []*SyncCommitteeMessage `json:"data"`
}
type GetStateForkResponse struct {
Data *shared.Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *Fork `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetFinalityCheckpointsResponse struct {
@@ -51,9 +49,9 @@ type GetFinalityCheckpointsResponse struct {
}
type FinalityCheckpoints struct {
PreviousJustified *shared.Checkpoint `json:"previous_justified"`
CurrentJustified *shared.Checkpoint `json:"current_justified"`
Finalized *shared.Checkpoint `json:"finalized"`
PreviousJustified *Checkpoint `json:"previous_justified"`
CurrentJustified *Checkpoint `json:"current_justified"`
Finalized *Checkpoint `json:"finalized"`
}
type GetGenesisResponse struct {
@@ -67,15 +65,15 @@ type Genesis struct {
}
type GetBlockHeadersResponse struct {
Data []*shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type GetBlockHeaderResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *shared.SignedBeaconBlockHeaderContainer `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data *SignedBeaconBlockHeaderContainer `json:"data"`
}
type GetValidatorsRequest struct {
@@ -108,17 +106,6 @@ type ValidatorContainer struct {
Validator *Validator `json:"validator"`
}
type Validator struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`
ActivationEligibilityEpoch string `json:"activation_eligibility_epoch"`
ActivationEpoch string `json:"activation_epoch"`
ExitEpoch string `json:"exit_epoch"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type ValidatorBalance struct {
Index string `json:"index"`
Balance string `json:"balance"`
@@ -141,9 +128,9 @@ type SignedBlock struct {
}
type GetBlockAttestationsResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*shared.Attestation `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Attestation `json:"data"`
}
type GetStateRootResponse struct {
@@ -178,13 +165,22 @@ type SyncCommitteeValidators struct {
}
type BLSToExecutionChangesPoolResponse struct {
Data []*shared.SignedBLSToExecutionChange `json:"data"`
Data []*SignedBLSToExecutionChange `json:"data"`
}
type GetAttesterSlashingsResponse struct {
Data []*shared.AttesterSlashing `json:"data"`
Data []*AttesterSlashing `json:"data"`
}
type GetProposerSlashingsResponse struct {
Data []*shared.ProposerSlashing `json:"data"`
Data []*ProposerSlashing `json:"data"`
}
type GetWeakSubjectivityResponse struct {
Data *WeakSubjectivityData `json:"data"`
}
type WeakSubjectivityData struct {
WsCheckpoint *Checkpoint `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}

View File

@@ -0,0 +1,14 @@
package structs
type SidecarsResponse struct {
Data []*Sidecar `json:"data"`
}
type Sidecar struct {
Index string `json:"index"`
Blob string `json:"blob"`
SignedBeaconBlockHeader *SignedBeaconBlockHeader `json:"signed_block_header"`
KzgCommitment string `json:"kzg_commitment"`
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}
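As a small usage sketch for the blob sidecar structs added above: decoding a `SidecarsResponse` from JSON. The struct definitions are copied locally (trimmed to a few fields) so the example is self-contained, and the sample JSON values are invented.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of the response shapes shown above, reduced to the fields
// used in this example.
type SidecarsResponse struct {
	Data []*Sidecar `json:"data"`
}

type Sidecar struct {
	Index         string `json:"index"`
	Blob          string `json:"blob"`
	KzgCommitment string `json:"kzg_commitment"`
	KzgProof      string `json:"kzg_proof"`
}

func main() {
	body := []byte(`{"data":[{"index":"0","blob":"0x01","kzg_commitment":"0xabc","kzg_proof":"0xdef"}]}`)
	var resp SidecarsResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	for _, sc := range resp.Data {
		fmt.Println(sc.Index, sc.KzgCommitment)
	}
}
```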

View File

@@ -1,4 +1,4 @@
package builder
package structs
type ExpectedWithdrawalsResponse struct {
Data []*ExpectedWithdrawal `json:"data"`

View File

@@ -1,6 +1,4 @@
package config
import "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
package structs
type GetDepositContractResponse struct {
Data *DepositContractData `json:"data"`
@@ -12,7 +10,7 @@ type DepositContractData struct {
}
type GetForkScheduleResponse struct {
Data []*shared.Fork `json:"data"`
Data []*Fork `json:"data"`
}
type GetSpecResponse struct {

View File

@@ -1,9 +1,7 @@
package debug
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type GetBeaconStateV2Response struct {
@@ -24,18 +22,18 @@ type ForkChoiceHead struct {
}
type GetForkChoiceDumpResponse struct {
JustifiedCheckpoint *shared.Checkpoint `json:"justified_checkpoint"`
FinalizedCheckpoint *shared.Checkpoint `json:"finalized_checkpoint"`
JustifiedCheckpoint *Checkpoint `json:"justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
ForkChoiceNodes []*ForkChoiceNode `json:"fork_choice_nodes"`
ExtraData *ForkChoiceDumpExtraData `json:"extra_data"`
}
type ForkChoiceDumpExtraData struct {
UnrealizedJustifiedCheckpoint *shared.Checkpoint `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *shared.Checkpoint `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root"`
HeadRoot string `json:"head_root"`
UnrealizedJustifiedCheckpoint *Checkpoint `json:"unrealized_justified_checkpoint"`
UnrealizedFinalizedCheckpoint *Checkpoint `json:"unrealized_finalized_checkpoint"`
ProposerBoostRoot string `json:"proposer_boost_root"`
PreviousProposerBoostRoot string `json:"previous_proposer_boost_root"`
HeadRoot string `json:"head_root"`
}
type ForkChoiceNode struct {

View File

@@ -1,9 +1,7 @@
package events
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
)
type HeadEvent struct {
@@ -23,13 +21,13 @@ type BlockEvent struct {
}
type AggregatedAttEventSource struct {
Aggregate *shared.Attestation `json:"aggregate"`
Aggregate *Attestation `json:"aggregate"`
}
type UnaggregatedAttEventSource struct {
AggregationBits string `json:"aggregation_bits"`
Data *shared.AttestationData `json:"data"`
Signature string `json:"signature"`
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type FinalizedCheckpointEvent struct {
@@ -71,18 +69,18 @@ type PayloadAttributesV1 struct {
}
type PayloadAttributesV2 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*shared.Withdrawal `json:"withdrawals"`
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
}
type PayloadAttributesV3 struct {
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*shared.Withdrawal `json:"withdrawals"`
ParentBeaconBlockRoot string `json:"parent_beacon_block_root"`
Timestamp string `json:"timestamp"`
PrevRandao string `json:"prev_randao"`
SuggestedFeeRecipient string `json:"suggested_fee_recipient"`
Withdrawals []*Withdrawal `json:"withdrawals"`
ParentBeaconBlockRoot string `json:"parent_beacon_block_root"`
}
type BlobSidecarEvent struct {
@@ -99,11 +97,11 @@ type LightClientFinalityUpdateEvent struct {
}
type LightClientFinalityUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *shared.BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
@@ -112,7 +110,7 @@ type LightClientOptimisticUpdateEvent struct {
}
type LightClientOptimisticUpdate struct {
AttestedHeader *shared.BeaconBlockHeader `json:"attested_header"`
SyncAggregate *shared.SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}

View File

@@ -0,0 +1,31 @@
package structs
type LightClientBootstrapResponse struct {
Version string `json:"version"`
Data *LightClientBootstrap `json:"data"`
}
type LightClientBootstrap struct {
Header *BeaconBlockHeader `json:"header"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
CurrentSyncCommitteeBranch []string `json:"current_sync_committee_branch"`
}
type LightClientUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientUpdateWithVersion struct {
Version string `json:"version"`
Data *LightClientUpdate `json:"data"`
}
type LightClientUpdatesByRangeResponse struct {
Updates []*LightClientUpdateWithVersion `json:"updates"`
}

View File

@@ -1,4 +1,4 @@
package node
package structs
type SyncStatusResponse struct {
Data *SyncStatusResponseData `json:"data"`
@@ -63,3 +63,11 @@ type GetVersionResponse struct {
type Version struct {
Version string `json:"version"`
}
type AddrRequest struct {
Addr string `json:"addr"`
}
type PeersResponse struct {
Peers []*Peer `json:"peers"`
}

View File

@@ -1,4 +1,4 @@
package rewards
package structs
type BlockRewardsResponse struct {
Data *BlockRewards `json:"data"`

View File

@@ -1,37 +1,37 @@
package validator
package structs
import (
"encoding/json"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)
type AggregateAttestationResponse struct {
Data *shared.Attestation `json:"data"`
Data *Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {
Data []*shared.SignedContributionAndProof `json:"data"`
Data []*SignedContributionAndProof `json:"data"`
}
type SubmitAggregateAndProofsRequest struct {
Data []*shared.SignedAggregateAttestationAndProof `json:"data"`
Data []*SignedAggregateAttestationAndProof `json:"data"`
}
type SubmitSyncCommitteeSubscriptionsRequest struct {
Data []*shared.SyncCommitteeSubscription `json:"data"`
Data []*SyncCommitteeSubscription `json:"data"`
}
type SubmitBeaconCommitteeSubscriptionsRequest struct {
Data []*shared.BeaconCommitteeSubscription `json:"data"`
Data []*BeaconCommitteeSubscription `json:"data"`
}
type GetAttestationDataResponse struct {
Data *shared.AttestationData `json:"data"`
Data *AttestationData `json:"data"`
}
type ProduceSyncCommitteeContributionResponse struct {
Data *shared.SyncCommitteeContribution `json:"data"`
Data *SyncCommitteeContribution `json:"data"`
}
type GetAttesterDutiesResponse struct {
@@ -90,3 +90,31 @@ type Liveness struct {
Index string `json:"index"`
IsLive bool `json:"is_live"`
}
type GetValidatorCountResponse struct {
ExecutionOptimistic string `json:"execution_optimistic"`
Finalized string `json:"finalized"`
Data []*ValidatorCount `json:"data"`
}
type ValidatorCount struct {
Status string `json:"status"`
Count string `json:"count"`
}
type GetValidatorPerformanceRequest struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
Indices []primitives.ValidatorIndex `json:"indices,omitempty"`
}
type GetValidatorPerformanceResponse struct {
PublicKeys [][]byte `json:"public_keys,omitempty"`
CorrectlyVotedSource []bool `json:"correctly_voted_source,omitempty"`
CorrectlyVotedTarget []bool `json:"correctly_voted_target,omitempty"`
CorrectlyVotedHead []bool `json:"correctly_voted_head,omitempty"`
CurrentEffectiveBalances []uint64 `json:"current_effective_balances,omitempty"`
BalancesBeforeEpochTransition []uint64 `json:"balances_before_epoch_transition,omitempty"`
BalancesAfterEpochTransition []uint64 `json:"balances_after_epoch_transition,omitempty"`
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}

View File

@@ -1,7 +1,7 @@
package shared
package structs
type Validator struct {
PublicKey string `json:"pubkey"`
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
EffectiveBalance string `json:"effective_balance"`
Slashed bool `json:"slashed"`

View File

@@ -1,4 +1,4 @@
package shared
package structs
type BeaconState struct {
GenesisTime string `json:"genesis_time"`

View File

@@ -557,17 +557,9 @@ func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
return s.cfg.ForkChoiceStore.Slot(root)
}
// inRegularSync applies the following heuristics to decide if the node is in
// regular sync mode vs init sync mode using only forkchoice.
// It checks that the highest received block is behind the current time by at least 2 epochs
// and that it was imported at least one epoch late if both of these
// tests pass then the node is in init sync. The caller of this function MUST
// have a lock on forkchoice
// inRegularSync queries the initial sync service to
// determine if the node is in regular sync or is still
// syncing to the head of the chain.
func (s *Service) inRegularSync() bool {
currentSlot := s.CurrentSlot()
fc := s.cfg.ForkChoiceStore
if currentSlot-fc.HighestReceivedBlockSlot() < 2*params.BeaconConfig().SlotsPerEpoch {
return true
}
return fc.HighestReceivedBlockDelay() < params.BeaconConfig().SlotsPerEpoch
return s.cfg.SyncChecker.Synced()
}

View File

@@ -593,26 +593,3 @@ func TestService_IsFinalized(t *testing.T) {
require.Equal(t, true, c.IsFinalized(ctx, br))
require.Equal(t, false, c.IsFinalized(ctx, [32]byte{'c'}))
}
func TestService_inRegularSync(t *testing.T) {
ctx := context.Background()
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
st, blkRoot, err = prepareForkchoiceState(ctx, 128, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
require.Equal(t, false, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-5*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
require.Equal(t, true, c.inRegularSync())
c.SetGenesisTime(time.Now().Add(time.Second * time.Duration(-1*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot))))
c.cfg.ForkChoiceStore.SetGenesisTime(uint64(time.Now().Unix()))
require.Equal(t, true, c.inRegularSync())
}

View File

@@ -182,6 +182,10 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",
})
processAttsElapsedTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "process_attestations_milliseconds",

View File

@@ -198,3 +198,10 @@ func WithBlobStorage(b *filesystem.BlobStorage) Option {
return nil
}
}
func WithSyncChecker(checker Checker) Option {
return func(s *Service) error {
s.cfg.SyncChecker = checker
return nil
}
}

View File

@@ -325,7 +325,10 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
}
// The proposer indices cache takes the target root for the previous
// epoch as key
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e-1)
if e > 0 {
e = e - 1
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("could not update proposer index state-root map")
return nil

View File

@@ -911,7 +911,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state older than Bellatrix, nil payload",
stateVersion: 1,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state older than Bellatrix, empty payload",
@@ -940,7 +939,6 @@ func Test_validateMergeTransitionBlock(t *testing.T) {
name: "state is Bellatrix, nil payload",
stateVersion: 2,
payload: nil,
errString: "attempted to wrap nil",
},
{
name: "state is Bellatrix, empty payload",

View File

@@ -122,6 +122,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
// Defragment the state before continuing block processing.
s.defragmentState(postState)

View File

@@ -93,6 +93,13 @@ type config struct {
BlockFetcher execution.POWBlockFetcher
FinalizedStateAtStartUp state.BeaconState
ExecutionEngineCaller execution.EngineCaller
SyncChecker Checker
}
// Checker is an interface used to determine if a node is in initial sync
// or regular sync.
type Checker interface {
Synced() bool
}
var ErrMissingClockSetter = errors.New("blockchain Service initialized without a startup.ClockSetter")

View File

@@ -6,6 +6,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/async/event"
mock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
@@ -118,6 +119,7 @@ func minimalTestService(t *testing.T, opts ...Option) (*Service, *testServiceReq
WithDepositCache(dc),
WithTrackedValidatorsCache(cache.NewTrackedValidatorsCache()),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
WithSyncChecker(mock.MockChecker{}),
}
// append the variadic opts so they override the defaults by being processed afterwards
opts = append(defOpts, opts...)

View File

@@ -180,6 +180,14 @@ func (mon *MockOperationNotifier) OperationFeed() *event.Feed {
return mon.feed
}
// MockChecker is a mock sync checker.
type MockChecker struct{}
// Synced returns true.
func (_ MockChecker) Synced() bool {
return true
}
// ReceiveBlockInitialSync mocks ReceiveBlockInitialSync method in chain service.
func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, _ [32]byte) error {
if s.State == nil {

View File

@@ -2,6 +2,7 @@ package testing
import (
"context"
"math/big"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client/builder"
@@ -54,13 +55,13 @@ func (s *MockBuilderService) SubmitBlindedBlock(_ context.Context, b interfaces.
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Capella:
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella, 0)
w, err := blocks.WrappedExecutionPayloadCapella(s.PayloadCapella, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap capella payload")
}
return w, nil, s.ErrSubmitBlindedBlock
case version.Deneb:
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb, 0)
w, err := blocks.WrappedExecutionPayloadDeneb(s.PayloadDeneb, big.NewInt(0))
if err != nil {
return nil, nil, errors.Wrap(err, "could not wrap deneb payload")
}

View File

@@ -796,7 +796,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
err = dc.InsertFinalizedDeposits(context.Background(), 4, [32]byte{}, 0)
require.NoError(t, err)
// Mimick finalized deposit trie fetch.
// Mimic finalized deposit trie fetch.
fd, err := dc.FinalizedDeposits(context.Background())
require.NoError(t, err)
deps := dc.NonFinalizedDeposits(context.Background(), fd.MerkleTrieIndex(), nil)

View File

@@ -115,6 +115,7 @@ func (p *ProposerIndicesCache) IndicesFromCheckpoint(c forkchoicetypes.Checkpoin
root, ok := p.rootMap[c]
p.Unlock()
if !ok {
ProposerIndicesCacheMiss.Inc()
return emptyIndices, ok
}
return p.ProposerIndices(c.Epoch+1, root)

View File

@@ -37,70 +37,69 @@ func TestProposerCache_Set(t *testing.T) {
func TestProposerCache_CheckpointAndPrune(t *testing.T) {
cache := NewProposerIndicesCache()
indices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
root := [32]byte{'a'}
cpRoot := [32]byte{'b'}
copy(indices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
for i := 1; i < 10; i++ {
root := [32]byte{byte(i)}
cache.Set(primitives.Epoch(i), root, indices)
cpRoot := [32]byte{byte(i - 1)}
cache.SetCheckpoint(forkchoicetypes.Checkpoint{Epoch: primitives.Epoch(i - 1), Root: cpRoot}, root)
}
received, ok := cache.ProposerIndices(1, root)
received, ok := cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(4, root)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(9, root)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
cache.Prune(5)
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
received, ok = cache.ProposerIndices(1, root)
received, ok = cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(4, root)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(9, root)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: [32]byte{0}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: cpRoot})
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
}


@@ -1,6 +1,7 @@
package blocks_test
import (
"math/big"
"testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
@@ -609,7 +610,7 @@ func Test_ProcessPayloadCapella(t *testing.T) {
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
payload.PrevRandao = random
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload, 0)
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload, big.NewInt(0))
require.NoError(t, err)
_, err = blocks.ProcessPayload(st, wrapped)
require.NoError(t, err)
@@ -873,7 +874,7 @@ func emptyPayloadHeaderCapella() (interfaces.ExecutionData, error) {
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
}, 0)
}, big.NewInt(0))
}
func emptyPayload() *enginev1.ExecutionPayload {


@@ -1,6 +1,7 @@
package blocks_test
import (
"math/big"
"math/rand"
"testing"
@@ -642,7 +643,10 @@ func TestProcessBlindWithdrawals(t *testing.T) {
require.NoError(t, err)
wdRoot, err := ssz.WithdrawalSliceRoot(test.Args.Withdrawals, fieldparams.MaxWithdrawalsPerPayload)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]}, 0)
p, err := consensusblocks.WrappedExecutionPayloadHeaderCapella(
&enginev1.ExecutionPayloadHeaderCapella{WithdrawalsRoot: wdRoot[:]},
big.NewInt(0),
)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {
@@ -1060,7 +1064,7 @@ func TestProcessWithdrawals(t *testing.T) {
}
st, err := prepareValidators(spb, test.Args)
require.NoError(t, err)
p, err := consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals}, 0)
p, err := consensusblocks.WrappedExecutionPayloadCapella(&enginev1.ExecutionPayloadCapella{Withdrawals: test.Args.Withdrawals}, big.NewInt(0))
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Control.ExpectedError {


@@ -50,7 +50,6 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"main_test.go",
"randao_test.go",
"rewards_penalties_test.go",
"shuffle_test.go",


@@ -20,6 +20,8 @@ import (
func TestAttestation_IsAggregator(t *testing.T) {
t.Run("aggregator", func(t *testing.T) {
helpers.ClearCache()
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
committee, err := helpers.BeaconCommitteeFromState(context.Background(), beaconState, 0, 0)
require.NoError(t, err)
@@ -30,6 +32,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
})
t.Run("not aggregator", func(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)
params.OverrideBeaconConfig(params.MinimalSpecConfig())
beaconState, privKeys := util.DeterministicGenesisState(t, 2048)
@@ -44,6 +48,8 @@ func TestAttestation_IsAggregator(t *testing.T) {
}
func TestAttestation_ComputeSubnetForAttestation(t *testing.T) {
helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -204,6 +210,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
err := helpers.ValidateAttestationTime(tt.args.attSlot, tt.args.genesisTime,
params.BeaconConfig().MaximumGossipClockDisparityDuration())
if tt.wantedErr != "" {
@@ -216,6 +224,8 @@ func Test_ValidateAttestationTime(t *testing.T) {
}
func TestVerifyCheckpointEpoch_Ok(t *testing.T) {
helpers.ClearCache()
// Genesis was 6 epochs ago exactly.
offset := params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot * 6)
genesis := time.Now().Add(-1 * time.Second * time.Duration(offset))
@@ -285,6 +295,8 @@ func TestValidateNilAttestation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateNilAttestation(tt.attestation))
} else {
@@ -326,6 +338,8 @@ func TestValidateSlotTargetEpoch(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
if tt.errString != "" {
require.ErrorContains(t, tt.errString, helpers.ValidateSlotTargetEpoch(tt.attestation.Data))
} else {


@@ -379,7 +379,7 @@ func UpdateCachedCheckpointToStateRoot(state state.ReadOnlyBeaconState, cp *fork
if cp.Epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
slot, err := slots.EpochEnd(cp.Epoch - 1)
slot, err := slots.EpochEnd(cp.Epoch)
if err != nil {
return err
}


@@ -21,6 +21,8 @@ import (
)
func TestComputeCommittee_WithoutCache(t *testing.T) {
ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize
@@ -71,6 +73,8 @@ func TestComputeCommittee_WithoutCache(t *testing.T) {
}
func TestComputeCommittee_RegressionTest(t *testing.T) {
ClearCache()
indices := []primitives.ValidatorIndex{1, 3, 8, 16, 18, 19, 20, 23, 30, 35, 43, 46, 47, 54, 56, 58, 69, 70, 71, 83, 84, 85, 91, 96, 100, 103, 105, 106, 112, 121, 127, 128, 129, 140, 142, 144, 146, 147, 149, 152, 153, 154, 157, 160, 173, 175, 180, 182, 188, 189, 191, 194, 201, 204, 217, 221, 226, 228, 230, 231, 239, 241, 249, 250, 255}
seed := [32]byte{68, 110, 161, 250, 98, 230, 161, 172, 227, 226, 99, 11, 138, 124, 201, 134, 38, 197, 0, 120, 6, 165, 122, 34, 19, 216, 43, 226, 210, 114, 165, 183}
index := uint64(215)
@@ -80,6 +84,8 @@ func TestComputeCommittee_RegressionTest(t *testing.T) {
}
func TestVerifyBitfieldLength_OK(t *testing.T) {
ClearCache()
bf := bitfield.Bitlist{0xFF, 0x01}
committeeSize := uint64(8)
assert.NoError(t, VerifyBitfieldLength(bf, committeeSize), "Bitfield is not validated when it was supposed to be")
@@ -91,7 +97,7 @@ func TestVerifyBitfieldLength_OK(t *testing.T) {
func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
ClearCache()
defer ClearCache()
epoch := primitives.Epoch(1)
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 0, // Epoch 0.
@@ -103,7 +109,7 @@ func TestCommitteeAssignments_CannotRetrieveFutureEpoch(t *testing.T) {
func TestCommitteeAssignments_NoProposerForSlot0(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
var activationEpoch primitives.Epoch
@@ -190,10 +196,10 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
ClearCache()
validatorIndexToCommittee, proposerIndexToSlots, err := CommitteeAssignments(context.Background(), state, slots.ToEpoch(tt.slot))
require.NoError(t, err, "Failed to determine CommitteeAssignments")
cac := validatorIndexToCommittee[tt.index]
@@ -209,6 +215,8 @@ func TestCommitteeAssignments_CanRetrieve(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -239,6 +247,8 @@ func TestCommitteeAssignments_CannotRetrieveFuture(t *testing.T) {
}
func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *testing.T) {
ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -259,7 +269,7 @@ func TestCommitteeAssignments_CannotRetrieveOlderThanSlotsPerHistoricalRoot(t *t
func TestCommitteeAssignments_EverySlotHasMin1Proposer(t *testing.T) {
ClearCache()
defer ClearCache()
// Initialize test with 256 validators, each slot and each index gets 4 validators.
validators := make([]*ethpb.Validator, 4*params.BeaconConfig().SlotsPerEpoch)
for i := 0; i < len(validators); i++ {
@@ -380,9 +390,9 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
},
}
defer ClearCache()
for i, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.stateSlot))
err := VerifyAttestationBitfieldLengths(context.Background(), state, tt.attestation)
if tt.verificationFailure {
@@ -395,7 +405,7 @@ func TestVerifyAttestationBitfieldLengths_OK(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)
@@ -425,7 +435,7 @@ func TestUpdateCommitteeCache_CanUpdate(t *testing.T) {
func TestUpdateCommitteeCache_CanUpdateAcrossEpochs(t *testing.T) {
ClearCache()
defer ClearCache()
validatorCount := params.BeaconConfig().MinGenesisActiveValidatorCount
validators := make([]*ethpb.Validator, validatorCount)
indices := make([]primitives.ValidatorIndex, validatorCount)


@@ -60,6 +60,8 @@ func TestBlockRootAtSlot_CorrectBlockRoot(t *testing.T) {
}
for i, tt := range tests {
t.Run(fmt.Sprintf("%d", i), func(t *testing.T) {
helpers.ClearCache()
s.Slot = tt.stateSlot
state, err := state_native.InitializeFromProtoPhase0(s)
require.NoError(t, err)
@@ -110,6 +112,8 @@ func TestBlockRootAtSlot_OutOfBounds(t *testing.T) {
},
}
for _, tt := range tests {
helpers.ClearCache()
state.Slot = tt.stateSlot
s, err := state_native.InitializeFromProtoPhase0(state)
require.NoError(t, err)


@@ -1,13 +0,0 @@
package helpers
import (
"os"
"testing"
)
// run ClearCache before each test to prevent cross-test side effects
func TestMain(m *testing.M) {
ClearCache()
code := m.Run()
os.Exit(code)
}


@@ -40,6 +40,8 @@ func TestRandaoMix_OK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -74,6 +76,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
},
}
for _, test := range tests {
ClearCache()
require.NoError(t, state.SetSlot(params.BeaconConfig().SlotsPerEpoch.Mul(uint64(test.epoch+1))))
mix, err := RandaoMix(state, test.epoch)
require.NoError(t, err)
@@ -88,6 +92,8 @@ func TestRandaoMix_CopyOK(t *testing.T) {
}
func TestGenerateSeed_OK(t *testing.T) {
ClearCache()
randaoMixes := make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector)
for i := 0; i < len(randaoMixes); i++ {
intInBytes := make([]byte, 32)


@@ -14,6 +14,8 @@ import (
)
func TestTotalBalance_OK(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{
{EffectiveBalance: 27 * 1e9}, {EffectiveBalance: 28 * 1e9},
{EffectiveBalance: 32 * 1e9}, {EffectiveBalance: 40 * 1e9},
@@ -27,6 +29,8 @@ func TestTotalBalance_OK(t *testing.T) {
}
func TestTotalBalance_ReturnsEffectiveBalanceIncrement(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{}})
require.NoError(t, err)
@@ -47,6 +51,8 @@ func TestGetBalance_OK(t *testing.T) {
{i: 2, b: []uint64{0, 0, 0}},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{Balances: test.b})
require.NoError(t, err)
assert.Equal(t, test.b[test.i], state.Balances()[test.i], "Incorrect Validator balance")
@@ -62,6 +68,8 @@ func TestTotalActiveBalance(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -75,8 +83,6 @@ func TestTotalActiveBalance(t *testing.T) {
}
func TestTotalActiveBal_ReturnMin(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
}{
@@ -85,6 +91,8 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
{10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: 1, ExitEpoch: 1})
@@ -98,8 +106,6 @@ func TestTotalActiveBal_ReturnMin(t *testing.T) {
}
func TestTotalActiveBalance_WithCache(t *testing.T) {
ClearCache()
defer ClearCache()
tests := []struct {
vCount int
wantCount int
@@ -109,6 +115,8 @@ func TestTotalActiveBalance_WithCache(t *testing.T) {
{vCount: 10000, wantCount: 10000},
}
for _, test := range tests {
ClearCache()
validators := make([]*ethpb.Validator, 0)
for i := 0; i < test.vCount; i++ {
validators = append(validators, &ethpb.Validator{EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance, ExitEpoch: 1})
@@ -133,6 +141,8 @@ func TestIncreaseBalance_OK(t *testing.T) {
{i: 2, b: []uint64{27 * 1e9, 28 * 1e9, 32 * 1e9}, nb: 33 * 1e9, eb: 65 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},
@@ -157,6 +167,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
{i: 3, b: []uint64{27 * 1e9, 28 * 1e9, 1, 28 * 1e9}, nb: 28 * 1e9, eb: 0},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 3}},
@@ -169,6 +181,8 @@ func TestDecreaseBalance_OK(t *testing.T) {
}
func TestFinalityDelay(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -199,6 +213,8 @@ func TestFinalityDelay(t *testing.T) {
}
func TestIsInInactivityLeak(t *testing.T) {
ClearCache()
base := buildState(params.BeaconConfig().SlotsPerEpoch*10, 1)
base.FinalizedCheckpoint = &ethpb.Checkpoint{Epoch: 3}
beaconState, err := state_native.InitializeFromProtoPhase0(base)
@@ -269,6 +285,8 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
{i: 2, b: []uint64{math.MaxUint64, math.MaxUint64, math.MaxUint64}, nb: 33 * 1e9},
}
for _, test := range tests {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: []*ethpb.Validator{
{EffectiveBalance: 4}, {EffectiveBalance: 4}, {EffectiveBalance: 4}},


@@ -13,6 +13,8 @@ import (
)
func TestShuffleList_InvalidValidatorCount(t *testing.T) {
ClearCache()
maxShuffleListSize = 20
list := make([]primitives.ValidatorIndex, 21)
if _, err := ShuffleList(list, [32]byte{123, 125}); err == nil {
@@ -23,6 +25,8 @@ func TestShuffleList_InvalidValidatorCount(t *testing.T) {
}
func TestShuffleList_OK(t *testing.T) {
ClearCache()
var list1 []primitives.ValidatorIndex
seed1 := [32]byte{1, 128, 12}
seed2 := [32]byte{2, 128, 12}
@@ -47,6 +51,8 @@ func TestShuffleList_OK(t *testing.T) {
}
func TestSplitIndices_OK(t *testing.T) {
ClearCache()
var l []uint64
numValidators := uint64(64000)
for i := uint64(0); i < numValidators; i++ {
@@ -61,6 +67,8 @@ func TestSplitIndices_OK(t *testing.T) {
}
func TestShuffleList_Vs_ShuffleIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(1000)
seed := [32]byte{123, 42}
@@ -125,6 +133,8 @@ func BenchmarkShuffleList(b *testing.B) {
}
func TestShuffledIndex(t *testing.T) {
ClearCache()
var list []primitives.ValidatorIndex
listSize := uint64(399)
for i := primitives.ValidatorIndex(0); uint64(i) < listSize; i++ {
@@ -147,6 +157,8 @@ func TestShuffledIndex(t *testing.T) {
}
func TestSplitIndicesAndOffset_OK(t *testing.T) {
ClearCache()
var l []uint64
validators := uint64(64000)
for i := uint64(0); i < validators; i++ {


@@ -18,7 +18,7 @@ import (
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -49,7 +49,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -77,7 +77,7 @@ func TestIsCurrentEpochSyncCommittee_UsingCommittee(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -105,7 +105,7 @@ func TestIsCurrentEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -135,6 +135,8 @@ func TestIsNextEpochSyncCommittee_UsingCache(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -161,6 +163,8 @@ func TestIsNextEpochSyncCommittee_UsingCommittee(t *testing.T) {
}
func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -188,7 +192,7 @@ func TestIsNextEpochSyncCommittee_DoesNotExist(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -219,7 +223,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -260,7 +264,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -288,7 +292,7 @@ func TestCurrentEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -318,6 +322,8 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCache(t *testing.T) {
}
func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -345,7 +351,7 @@ func TestNextEpochSyncSubcommitteeIndices_UsingCommittee(t *testing.T) {
func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),
@@ -372,6 +378,8 @@ func TestNextEpochSyncSubcommitteeIndices_DoesNotExist(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: 1,
})
@@ -388,6 +396,8 @@ func TestUpdateSyncCommitteeCache_BadSlot(t *testing.T) {
}
func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
ClearCache()
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Slot: primitives.Slot(params.BeaconConfig().EpochsPerSyncCommitteePeriod)*params.BeaconConfig().SlotsPerEpoch - 1,
LatestBlockHeader: &ethpb.BeaconBlockHeader{StateRoot: params.BeaconConfig().ZeroHash[:]},
@@ -399,7 +409,7 @@ func TestUpdateSyncCommitteeCache_BadRoot(t *testing.T) {
func TestIsCurrentEpochSyncCommittee_SameBlockRoot(t *testing.T) {
ClearCache()
defer ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
AggregatePubkey: bytesutil.PadTo([]byte{}, params.BeaconConfig().BLSPubkeyLength),


@@ -179,8 +179,6 @@ func TestIsSlashableValidator_OK(t *testing.T) {
func TestBeaconProposerIndex_OK(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -224,9 +222,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
},
}
defer ClearCache()
for _, tt := range tests {
ClearCache()
require.NoError(t, state.SetSlot(tt.slot))
result, err := BeaconProposerIndex(context.Background(), state)
require.NoError(t, err, "Failed to get shard and committees at slot")
@@ -235,9 +233,9 @@ func TestBeaconProposerIndex_OK(t *testing.T) {
}
func TestBeaconProposerIndex_BadState(t *testing.T) {
params.SetupTestConfigCleanup(t)
ClearCache()
defer ClearCache()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.MinGenesisActiveValidatorCount = 16384
params.OverrideBeaconConfig(c)
@@ -268,6 +266,8 @@ func TestBeaconProposerIndex_BadState(t *testing.T) {
}
func TestComputeProposerIndex_Compatibility(t *testing.T) {
ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().MinGenesisActiveValidatorCount)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
@@ -309,12 +309,16 @@ func TestComputeProposerIndex_Compatibility(t *testing.T) {
}
func TestDelayedActivationExitEpoch_OK(t *testing.T) {
ClearCache()
epoch := primitives.Epoch(9999)
wanted := epoch + 1 + params.BeaconConfig().MaxSeedLookahead
assert.Equal(t, wanted, ActivationExitEpoch(epoch))
}
func TestActiveValidatorCount_Genesis(t *testing.T) {
ClearCache()
c := 1000
validators := make([]*ethpb.Validator, c)
for i := 0; i < len(validators); i++ {
@@ -348,7 +352,6 @@ func TestChurnLimit_OK(t *testing.T) {
{validatorCount: 1000000, wantedChurn: 15 /* validatorCount/churnLimitQuotient */},
{validatorCount: 2000000, wantedChurn: 30 /* validatorCount/churnLimitQuotient */},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -382,9 +385,6 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
{1000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
{2000000, params.BeaconConfig().MaxPerEpochActivationChurnLimit},
}
defer ClearCache()
for _, test := range tests {
ClearCache()
@@ -417,7 +417,7 @@ func TestChurnLimitDeneb_OK(t *testing.T) {
// Test basic functionality of ActiveValidatorIndices without caching. This test will need to be
// rewritten when releasing some cache flag.
func TestActiveValidatorIndices(t *testing.T) {
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
//farFutureEpoch := params.BeaconConfig().FarFutureEpoch
type args struct {
state *ethpb.BeaconState
epoch primitives.Epoch
@@ -428,7 +428,7 @@ func TestActiveValidatorIndices(t *testing.T) {
want []primitives.ValidatorIndex
wantedErr string
}{
{
/*{
name: "all_active_epoch_10",
args: args{
state: &ethpb.BeaconState{
@@ -559,7 +559,7 @@ func TestActiveValidatorIndices(t *testing.T) {
epoch: 10,
},
want: []primitives.ValidatorIndex{0, 2, 3},
},
},*/
{
name: "impossible_zero_validators", // Regression test for issue #13051
args: args{
@@ -569,22 +569,21 @@ func TestActiveValidatorIndices(t *testing.T) {
},
epoch: 10,
},
wantedErr: "no active validator indices",
wantedErr: "state has nil validator slice",
},
}
defer ClearCache()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.args.state)
require.NoError(t, err)
require.NoError(t, s.SetValidators(tt.args.state.Validators))
got, err := ActiveValidatorIndices(context.Background(), s, tt.args.epoch)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
return
}
assert.DeepEqual(t, tt.want, got, "ActiveValidatorIndices()")
ClearCache()
})
}
}
@@ -685,6 +684,8 @@ func TestComputeProposerIndex(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
bState := &ethpb.BeaconState{Validators: tt.args.validators}
stTrie, err := state_native.InitializeFromProtoUnsafePhase0(bState)
require.NoError(t, err)
@@ -717,6 +718,8 @@ func TestIsEligibleForActivationQueue(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
assert.Equal(t, tt.want, IsEligibleForActivationQueue(tt.validator), "IsEligibleForActivationQueue()")
})
}
@@ -744,6 +747,8 @@ func TestIsIsEligibleForActivation(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ClearCache()
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
assert.Equal(t, tt.want, IsEligibleForActivation(s, tt.validator), "IsEligibleForActivation()")
@@ -782,6 +787,8 @@ func computeProposerIndexWithValidators(validators []*ethpb.Validator, activeInd
}
func TestLastActivatedValidatorIndex_OK(t *testing.T) {
ClearCache()
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
require.NoError(t, err)
@@ -805,6 +812,8 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
}
func TestProposerIndexFromCheckpoint(t *testing.T) {
ClearCache()
e := primitives.Epoch(2)
r := [32]byte{'a'}
root := [32]byte{'b'}


@@ -202,3 +202,14 @@ func ParseWeakSubjectivityInputString(wsCheckpointString string) (*v1alpha1.Chec
Root: bRoot,
}, nil
}
// MinEpochsForBlockRequests computes the number of epochs of block history that we need to maintain,
// relative to the current epoch, per the p2p specs. This is used to compute the slot where backfill is complete.
// value defined:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#configuration
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY + CHURN_LIMIT_QUOTIENT // 2 (= 33024, ~5 months)
// detailed rationale: https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
func MinEpochsForBlockRequests() primitives.Epoch {
return params.BeaconConfig().MinValidatorWithdrawabilityDelay +
primitives.Epoch(params.BeaconConfig().ChurnLimitQuotient/2)
}
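As a quick sanity check of the formula in the comment above, here is a standalone sketch of the arithmetic using the mainnet preset values (MIN_VALIDATOR_WITHDRAWABILITY_DELAY = 256 epochs, CHURN_LIMIT_QUOTIENT = 65536):

// Standalone check of the formula above with the mainnet preset values.
package main

import "fmt"

func main() {
	const minValidatorWithdrawabilityDelay = 256 // epochs (mainnet)
	const churnLimitQuotient = 65536             // mainnet
	minEpochsForBlockRequests := minValidatorWithdrawabilityDelay + churnLimitQuotient/2
	fmt.Println(minEpochsForBlockRequests) // prints 33024, roughly five months of epochs
}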


@@ -48,6 +48,7 @@ func TestWeakSubjectivity_ComputeWeakSubjectivityPeriod(t *testing.T) {
t.Run(fmt.Sprintf("valCount: %d, avgBalance: %d", tt.valCount, tt.avgBalance), func(t *testing.T) {
// Reset committee cache - as we need to recalculate active validator set for each test.
helpers.ClearCache()
got, err := helpers.ComputeWeakSubjectivityPeriod(context.Background(), genState(t, tt.valCount, tt.avgBalance), params.BeaconConfig())
require.NoError(t, err)
assert.Equal(t, tt.want, got, "valCount: %v, avgBalance: %v", tt.valCount, tt.avgBalance)
@@ -177,6 +178,8 @@ func TestWeakSubjectivity_IsWithinWeakSubjectivityPeriod(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
sr, _, e := tt.genWsCheckpoint()
got, err := helpers.IsWithinWeakSubjectivityPeriod(context.Background(), tt.epoch, tt.genWsState(), sr, e, params.BeaconConfig())
if tt.wantedErr != "" {
@@ -247,6 +250,8 @@ func TestWeakSubjectivity_ParseWeakSubjectivityInputString(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
helpers.ClearCache()
wsCheckpt, err := helpers.ParseWeakSubjectivityInputString(tt.input)
if tt.wantedErr != "" {
require.ErrorContains(t, tt.wantedErr, err)
@@ -281,3 +286,21 @@ func genState(t *testing.T, valCount, avgBalance uint64) state.BeaconState {
return beaconState
}
func TestMinEpochsForBlockRequests(t *testing.T) {
helpers.ClearCache()
params.SetActiveTestCleanup(t, params.MainnetConfig())
var expected primitives.Epoch = 33024
// expected value of 33024 via spec commentary:
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md#why-are-blocksbyrange-requests-only-required-to-be-served-for-the-latest-min_epochs_for_block_requests-epochs
// MIN_EPOCHS_FOR_BLOCK_REQUESTS is calculated using the arithmetic from compute_weak_subjectivity_period found in the weak subjectivity guide. Specifically to find this max epoch range, we use the worst case event of a very large validator size (>= MIN_PER_EPOCH_CHURN_LIMIT * CHURN_LIMIT_QUOTIENT).
//
// MIN_EPOCHS_FOR_BLOCK_REQUESTS = (
// MIN_VALIDATOR_WITHDRAWABILITY_DELAY
// + MAX_SAFETY_DECAY * CHURN_LIMIT_QUOTIENT // (2 * 100)
// )
//
// Where MAX_SAFETY_DECAY = 100 and thus MIN_EPOCHS_FOR_BLOCK_REQUESTS = 33024 (~5 months).
require.Equal(t, expected, helpers.MinEpochsForBlockRequests())
}


@@ -22,9 +22,6 @@ var ErrNotFoundOriginBlockRoot = kv.ErrNotFoundOriginBlockRoot
// ErrNotFoundBackfillBlockRoot wraps ErrNotFound for an error specific to the backfill block root.
var ErrNotFoundBackfillBlockRoot = kv.ErrNotFoundBackfillBlockRoot
// ErrNotFoundGenesisBlockRoot means no genesis block root was found, indicating the db was not initialized with genesis
var ErrNotFoundGenesisBlockRoot = kv.ErrNotFoundGenesisBlockRoot
// IsNotFound allows callers to treat errors from a flat-file database, where the file record is missing,
// as equivalent to db.ErrNotFound.
func IsNotFound(err error) bool {


@@ -13,9 +13,11 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/backup:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
],


@@ -11,9 +11,11 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
slashertypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/slasher/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/monitoring/backup"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -57,7 +59,7 @@ type ReadOnlyDatabase interface {
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)
BackfillStatus(context.Context) (*dbval.BackfillStatus, error)
}
// NoHeadAccessDatabase defines a struct without access to chain head data.
@@ -68,6 +70,7 @@ type NoHeadAccessDatabase interface {
DeleteBlock(ctx context.Context, root [32]byte) error
SaveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) error
SaveBlocks(ctx context.Context, blocks []interfaces.ReadOnlySignedBeaconBlock) error
SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error
SaveGenesisBlockRoot(ctx context.Context, blockRoot [32]byte) error
// State related methods.
SaveState(ctx context.Context, state state.ReadOnlyBeaconState, blockRoot [32]byte) error
@@ -106,9 +109,10 @@ type HeadAccessDatabase interface {
SaveGenesisData(ctx context.Context, state state.BeaconState) error
EnsureEmbeddedGenesis(ctx context.Context) error
// initialization method needed for origin checkpoint sync
// Support for checkpoint sync and backfill.
SaveOrigin(ctx context.Context, serState, serBlock []byte) error
SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error
SaveBackfillStatus(context.Context, *dbval.BackfillStatus) error
BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error
}
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.


@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"archived_point.go",
"backfill.go",
"backup.go",
"blocks.go",
"checkpoint.go",
@@ -48,6 +49,7 @@ go_library(
"//io/file:go_default_library",
"//monitoring/progress:go_default_library",
"//monitoring/tracing:go_default_library",
"//proto/dbval:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
@@ -73,6 +75,7 @@ go_test(
name = "go_default_test",
srcs = [
"archived_point_test.go",
"backfill_test.go",
"backup_test.go",
"blocks_test.go",
"checkpoint_test.go",
@@ -107,6 +110,7 @@ go_test(
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",


@@ -0,0 +1,44 @@
package kv
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
bolt "go.etcd.io/bbolt"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
// SaveBackfillStatus encodes the given BackfillStatus protobuf struct and writes it to a single key in the db.
// This value is used by the backfill service to keep track of the range of blocks that need to be synced. It is also used by the
// code that serves blocks or regenerates states to determine which range of blocks is available.
func (s *Store) SaveBackfillStatus(ctx context.Context, bf *dbval.BackfillStatus) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillStatus")
defer span.End()
bfb, err := proto.Marshal(bf)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillStatusKey, bfb)
})
}
// BackfillStatus retrieves the most recently saved version of the BackfillStatus protobuf struct.
// This is used to persist information about backfill status across restarts.
func (s *Store) BackfillStatus(ctx context.Context) (*dbval.BackfillStatus, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillStatus")
defer span.End()
bf := &dbval.BackfillStatus{}
err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
bs := bucket.Get(backfillStatusKey)
if len(bs) == 0 {
return errors.Wrap(ErrNotFound, "BackfillStatus not found")
}
return proto.Unmarshal(bs, bf)
})
return bf, err
}
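A hedged sketch of how a caller might combine the two functions above at startup: read the persisted status and, if nothing has been saved yet, fall back to a fresh value. The loadOrInitBackfillStatus name and the policy of seeding from a checkpoint-sync origin block are assumptions for illustration, not code from this changeset.

// Sketch: read the persisted BackfillStatus on startup, falling back to a fresh value when
// none has been saved yet. The ErrNotFound check mirrors the wrapping done in BackfillStatus above.
func loadOrInitBackfillStatus(ctx context.Context, store *Store, originSlot uint64, originRoot [32]byte) (*dbval.BackfillStatus, error) {
	bf, err := store.BackfillStatus(ctx)
	if err == nil {
		return bf, nil
	}
	if !errors.Is(err, ErrNotFound) {
		return nil, err
	}
	// Nothing persisted yet: seed tracking from the checkpoint-sync origin block (illustrative policy).
	bf = &dbval.BackfillStatus{
		LowSlot: originSlot,
		LowRoot: originRoot[:],
	}
	return bf, store.SaveBackfillStatus(ctx, bf)
}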


@@ -0,0 +1,35 @@
package kv
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"google.golang.org/protobuf/proto"
)
func TestBackfillRoundtrip(t *testing.T) {
db := setupDB(t)
b := &dbval.BackfillStatus{}
b.LowSlot = 23
b.LowRoot = bytesutil.PadTo([]byte("low"), 32)
b.LowParentRoot = bytesutil.PadTo([]byte("parent"), 32)
m, err := proto.Marshal(b)
require.NoError(t, err)
ub := &dbval.BackfillStatus{}
require.NoError(t, proto.Unmarshal(m, ub))
require.Equal(t, b.LowSlot, ub.LowSlot)
require.DeepEqual(t, b.LowRoot, ub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, ub.LowParentRoot)
ctx := context.Background()
require.NoError(t, db.SaveBackfillStatus(ctx, b))
dbub, err := db.BackfillStatus(ctx)
require.NoError(t, err)
require.Equal(t, b.LowSlot, dbub.LowSlot)
require.DeepEqual(t, b.LowRoot, dbub.LowRoot)
require.DeepEqual(t, b.LowParentRoot, dbub.LowParentRoot)
}


@@ -70,25 +70,6 @@ func (s *Store) OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
return root, err
}
// BackfillBlockRoot keeps track of the highest block available before the OriginCheckpointBlockRoot
func (s *Store) BackfillBlockRoot(ctx context.Context) ([32]byte, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.BackfillBlockRoot")
defer span.End()
var root [32]byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
rootSlice := bkt.Get(backfillBlockRootKey)
if len(rootSlice) == 0 {
return ErrNotFoundBackfillBlockRoot
}
root = bytesutil.ToBytes32(rootSlice)
return nil
})
return root, err
}
// HeadBlock returns the latest canonical block in the Ethereum Beacon Chain.
func (s *Store) HeadBlock(ctx context.Context) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.HeadBlock")
@@ -292,55 +273,95 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlocks")
defer span.End()
// Performing marshaling, hashing, and indexing outside the bolt transaction
// to minimize the time we hold the DB lock.
blockRoots := make([][]byte, len(blks))
encodedBlocks := make([][]byte, len(blks))
indicesForBlocks := make([]map[string][]byte, len(blks))
for i, blk := range blks {
blockRoot, err := blk.Block().HashTreeRoot()
robs := make([]blocks.ROBlock, len(blks))
for i := range blks {
rb, err := blocks.NewROBlock(blks[i])
if err != nil {
return err
return errors.Wrapf(err, "failed to make an ROBlock for a block in SaveBlocks")
}
enc, err := s.marshalBlock(ctx, blk)
if err != nil {
return err
}
blockRoots[i] = blockRoot[:]
encodedBlocks[i] = enc
indicesByBucket := createBlockIndicesFromBlock(ctx, blk.Block())
indicesForBlocks[i] = indicesByBucket
robs[i] = rb
}
saveBlinded, err := s.shouldSaveBlinded(ctx)
return s.SaveROBlocks(ctx, robs, true)
}
type blockBatchEntry struct {
root []byte
block interfaces.ReadOnlySignedBeaconBlock
enc []byte
updated bool
indices map[string][]byte
}
func prepareBlockBatch(blks []blocks.ROBlock, shouldBlind bool) ([]blockBatchEntry, error) {
batch := make([]blockBatchEntry, len(blks))
for i := range blks {
batch[i].root, batch[i].block = blks[i].RootSlice(), blks[i].ReadOnlySignedBeaconBlock
batch[i].indices = blockIndices(batch[i].block.Block().Slot(), batch[i].block.Block().ParentRoot())
if shouldBlind {
blinded, err := batch[i].block.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return nil, errors.Wrapf(err, "could not convert block to blinded format for root %#x", batch[i].root)
}
// Forks without blinded block support return ErrUnsupportedVersion from ToBlinded; keep the full block already in the batch entry.
} else {
batch[i].block = blinded
}
}
enc, err := encodeBlock(batch[i].block)
if err != nil {
return nil, errors.Wrapf(err, "failed to encode block for root %#x", batch[i].root)
}
batch[i].enc = enc
}
return batch, nil
}
func (s *Store) SaveROBlocks(ctx context.Context, blks []blocks.ROBlock, cache bool) error {
shouldBlind, err := s.shouldSaveBlinded(ctx)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
// Precompute expensive values outside the db transaction.
batch, err := prepareBlockBatch(blks, shouldBlind)
if err != nil {
return errors.Wrap(err, "failed to encode all blocks in batch for saving to the db")
}
err = s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for i, blk := range blks {
if existingBlock := bkt.Get(blockRoots[i]); existingBlock != nil {
for i := range batch {
if exists := bkt.Get(batch[i].root); exists != nil {
continue
}
if err := updateValueForIndices(ctx, indicesForBlocks[i], blockRoots[i], tx); err != nil {
return errors.Wrap(err, "could not update DB indices")
if err := bkt.Put(batch[i].root, batch[i].enc); err != nil {
return errors.Wrapf(err, "could not write block to db with root %#x", batch[i].root)
}
if saveBlinded {
blindedBlock, err := blk.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
return err
}
} else {
blk = blindedBlock
}
}
s.blockCache.Set(string(blockRoots[i]), blk, int64(len(encodedBlocks[i])))
if err := bkt.Put(blockRoots[i], encodedBlocks[i]); err != nil {
return err
if err := updateValueForIndices(ctx, batch[i].indices, batch[i].root, tx); err != nil {
return errors.Wrapf(err, "could not update DB indices for root %#x", batch[i].root)
}
batch[i].updated = true
}
return nil
})
if !cache {
return err
}
for i := range batch {
if batch[i].updated {
s.blockCache.Set(string(batch[i].root), batch[i].block, int64(len(batch[i].enc)))
}
}
return err
}
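Since SaveROBlocks exposes a cache flag, a bulk importer can skip warming the block cache entirely. Below is a hedged sketch of such a caller, assuming it lives alongside the kv package; the saveBackfillBatch name is illustrative.

// Sketch: bulk-import a batch of already-verified blocks without warming the block cache,
// as a backfill-style caller might; historical blocks are unlikely to be read again soon.
func saveBackfillBatch(ctx context.Context, store *Store, signed []interfaces.ReadOnlySignedBeaconBlock) error {
	robs, err := blocks.NewROBlockSlice(signed)
	if err != nil {
		return errors.Wrap(err, "could not wrap signed blocks as ROBlocks")
	}
	return store.SaveROBlocks(ctx, robs, false) // cache=false: skip the hot block cache
}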
// blockIndices takes in a beacon block and returns
// a map of bolt DB index buckets corresponding to each particular key for indices for
// data, such as (shard indices bucket -> shard 5).
func blockIndices(slot primitives.Slot, parentRoot [32]byte) map[string][]byte {
return map[string][]byte{
string(blockSlotIndicesBucket): bytesutil.SlotToBytesBigEndian(slot),
string(blockParentRootIndicesBucket): parentRoot[:],
}
}
// SaveHeadBlockRoot to the db.
@@ -417,17 +438,6 @@ func (s *Store) SaveOriginCheckpointBlockRoot(ctx context.Context, blockRoot [32
})
}
// SaveBackfillBlockRoot is used to keep track of the most recently backfilled block root when
// the node was initialized via checkpoint sync.
func (s *Store) SaveBackfillBlockRoot(ctx context.Context, blockRoot [32]byte) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBackfillBlockRoot")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(blocksBucket)
return bucket.Put(backfillBlockRootKey, blockRoot[:])
})
}
// HighestRootsBelowSlot returns roots from the database slot index from the highest slot below the input slot.
// The slot value at the beginning of the return list is the slot where the roots were found. This is helpful so that
// calling code can make decisions based on the slot without resolving the blocks to discover their slot (for instance
@@ -726,31 +736,6 @@ func blockRootsBySlot(ctx context.Context, tx *bolt.Tx, slot primitives.Slot) ([
return [][32]byte{}, nil
}
// createBlockIndicesFromBlock takes in a beacon block and returns
// a map of bolt DB index buckets corresponding to each particular key for indices for
// data, such as (shard indices bucket -> shard 5).
func createBlockIndicesFromBlock(ctx context.Context, block interfaces.ReadOnlyBeaconBlock) map[string][]byte {
_, span := trace.StartSpan(ctx, "BeaconDB.createBlockIndicesFromBlock")
defer span.End()
indicesByBucket := make(map[string][]byte)
// Every index has a unique bucket for fast, binary-search
// range scans for filtering across keys.
buckets := [][]byte{
blockSlotIndicesBucket,
}
indices := [][]byte{
bytesutil.SlotToBytesBigEndian(block.Slot()),
}
buckets = append(buckets, blockParentRootIndicesBucket)
parentRoot := block.ParentRoot()
indices = append(indices, parentRoot[:])
for i := 0; i < len(buckets); i++ {
indicesByBucket[string(buckets[i])] = indices[i]
}
return indicesByBucket
}
// createBlockFiltersFromIndices takes in filter criteria and returns
// a map with a single key-value pair: "block-parent-root-indices" -> parentRoot (array of bytes).
//
@@ -838,74 +823,44 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
return blocks.NewSignedBeaconBlock(rawBlock)
}
func (s *Store) marshalBlock(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
shouldBlind, err := s.shouldSaveBlinded(ctx)
func encodeBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
key, err := keyForBlock(blk)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "could not determine version encoding key for block")
}
if shouldBlind {
return marshalBlockBlinded(ctx, blk)
enc, err := blk.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal block")
}
return marshalBlockFull(ctx, blk)
dbfmt := make([]byte, len(key)+len(enc))
if len(key) > 0 {
copy(dbfmt, key)
}
copy(dbfmt[len(key):], enc)
return snappy.Encode(nil, dbfmt), nil
}
// Encodes a full beacon block to the DB with its associated key.
func marshalBlockFull(
_ context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
var encodedBlock []byte
var err error
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
func keyForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaKey, encodedBlock...)), nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixKey, encodedBlock...)), nil
case version.Altair:
return snappy.Encode(nil, append(altairKey, encodedBlock...)), nil
case version.Phase0:
return snappy.Encode(nil, encodedBlock), nil
default:
return nil, errors.New("unknown block version")
}
}
// Encodes a blinded beacon block with its associated key.
// If the block does not support blinding, we then encode it as a full
// block with its associated key by calling marshalBlockFull.
func marshalBlockBlinded(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
blindedBlock, err := blk.ToBlinded()
if err != nil {
switch {
case errors.Is(err, blocks.ErrUnsupportedVersion):
return marshalBlockFull(ctx, blk)
default:
return nil, errors.Wrap(err, "could not convert block to blinded format")
if blk.IsBlinded() {
return denebBlindKey, nil
}
}
encodedBlock, err := blindedBlock.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded block")
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebBlindKey, encodedBlock...)), nil
return denebKey, nil
case version.Capella:
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return capellaBlindKey, nil
}
return capellaKey, nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
if blk.IsBlinded() {
return bellatrixBlindKey, nil
}
return bellatrixKey, nil
case version.Altair:
return altairKey, nil
case version.Phase0:
return nil, nil
default:
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
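encodeBlock and keyForBlock above store snappy(versionKey || SSZ). For orientation, here is a hedged sketch of the inverse: snappy-decode, then dispatch on the key prefix. It reuses the version keys referenced in this file, ignores the blinded key variants for brevity, assumes the bytes and snappy imports, and the versionFromEncoded helper name is not from this changeset.

// Illustrative inverse of encodeBlock: snappy-decode, then pick the block version from the key prefix.
func versionFromEncoded(enc []byte) (string, []byte, error) {
	decoded, err := snappy.Decode(nil, enc)
	if err != nil {
		return "", nil, errors.Wrap(err, "could not snappy-decode block bytes")
	}
	versionKeys := []struct {
		name string
		key  []byte
	}{
		{"deneb", denebKey},
		{"capella", capellaKey},
		{"bellatrix", bellatrixKey},
		{"altair", altairKey},
	}
	for _, vk := range versionKeys {
		if bytes.HasPrefix(decoded, vk.key) {
			return vk.name, decoded[len(vk.key):], nil // strip the key, leaving raw SSZ
		}
	}
	// No version key matched: phase0 blocks are written without a prefix.
	return "phase0", decoded, nil
}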


@@ -126,23 +126,6 @@ var blockTests = []struct {
},
}
func TestStore_SaveBackfillBlockRoot(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
_, err := db.BackfillBlockRoot(ctx)
require.ErrorIs(t, err, ErrNotFoundBackfillBlockRoot)
var expected [32]byte
copy(expected[:], []byte{0x23})
err = db.SaveBackfillBlockRoot(ctx, expected)
require.NoError(t, err)
actual, err := db.BackfillBlockRoot(ctx)
require.NoError(t, err)
require.Equal(t, expected, actual)
}
func TestStore_SaveBlock_NoDuplicates(t *testing.T) {
BlockCacheSize = 1
slot := primitives.Slot(20)


@@ -21,3 +21,8 @@ var ErrNotFoundBackfillBlockRoot = errors.Wrap(ErrNotFound, "BackfillBlockRoot")
// ErrNotFoundFeeRecipient is a not found error specifically for the fee recipient getter
var ErrNotFoundFeeRecipient = errors.Wrap(ErrNotFound, "fee recipient")
var errEmptyBlockSlice = errors.New("[]blocks.ROBlock is empty")
var errIncorrectBlockParent = errors.New("unexpected missing or forked blocks in a []ROBlock")
var errFinalizedChildNotFound = errors.New("unable to find finalized root descending from backfill batch")
var errNotConnectedToFinalized = errors.New("unable to finalize backfill blocks, finalized parent_root does not match")


@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -163,6 +164,83 @@ func (s *Store) updateFinalizedBlockRoots(ctx context.Context, tx *bolt.Tx, chec
return bkt.Put(previousFinalizedCheckpointKey, enc)
}
// BackfillFinalizedIndex updates the finalized index for a contiguous chain of blocks that are the ancestors of the
// given finalized child root. This is needed to update the finalized index during backfill, because the usual
// updateFinalizedBlockRoots has assumptions that are incompatible with backfill processing.
func (s *Store) BackfillFinalizedIndex(ctx context.Context, blocks []blocks.ROBlock, finalizedChildRoot [32]byte) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BackfillFinalizedIndex")
defer span.End()
if len(blocks) == 0 {
return errEmptyBlockSlice
}
fbrs := make([]*ethpb.FinalizedBlockRootContainer, len(blocks))
encs := make([][]byte, len(blocks))
for i := range blocks {
pr := blocks[i].Block().ParentRoot()
fbrs[i] = &ethpb.FinalizedBlockRootContainer{
ParentRoot: pr[:],
// ChildRoot: will be filled in on the next iteration when we look at the descendent block.
}
if i == 0 {
continue
}
if blocks[i-1].Root() != blocks[i].Block().ParentRoot() {
return errors.Wrapf(errIncorrectBlockParent, "previous root=%#x, slot=%d; child parent_root=%#x, root=%#x, slot=%d",
blocks[i-1].Root(), blocks[i-1].Block().Slot(), blocks[i].Block().ParentRoot(), blocks[i].Root(), blocks[i].Block().Slot())
}
// We know the previous index is the parent of this one thanks to the assertion above,
// so we can set the ChildRoot of the previous value to the root of the current value.
fbrs[i-1].ChildRoot = blocks[i].RootSlice()
// Now that the value for fbrs[i-1] is complete, perform encoding here to minimize time in Update,
// which holds the global db lock.
penc, err := encode(ctx, fbrs[i-1])
if err != nil {
tracing.AnnotateError(span, err)
return err
}
encs[i-1] = penc
// The final element is the parent of finalizedChildRoot. This is checked inside the db transaction using
// the parent_root value stored in the index data for finalizedChildRoot.
if i == len(blocks)-1 {
fbrs[i].ChildRoot = finalizedChildRoot[:]
// Final element is complete, so it is pre-encoded like the others.
enc, err := encode(ctx, fbrs[i])
if err != nil {
tracing.AnnotateError(span, err)
return err
}
encs[i] = enc
}
}
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
child := bkt.Get(finalizedChildRoot[:])
if len(child) == 0 {
return errFinalizedChildNotFound
}
fcc := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, child, fcc); err != nil {
return errors.Wrapf(err, "unable to decode finalized block root container for root=%#x", finalizedChildRoot)
}
// Ensure that the existing finalized chain descends from the new segment.
if !bytes.Equal(fcc.ParentRoot, blocks[len(blocks)-1].RootSlice()) {
return errors.Wrapf(errNotConnectedToFinalized, "finalized block root container for root=%#x has parent_root=%#x, not %#x",
finalizedChildRoot, fcc.ParentRoot, blocks[len(blocks)-1].RootSlice())
}
// Update the finalized index with entries for each block in the new segment.
for i := range fbrs {
if err := bkt.Put(blocks[i].RootSlice(), encs[i]); err != nil {
return err
}
}
return nil
})
}
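After BackfillFinalizedIndex runs, every root in the batch maps to a FinalizedBlockRootContainer whose ParentRoot and ChildRoot fields link the finalized chain in both directions. Below is a hedged sketch of walking that index backwards inside a read transaction; the walkFinalizedAncestors helper is illustrative and reuses only identifiers that appear in this file.

// Sketch: walk up to n ancestors of a finalized root by following the ParentRoot links
// that BackfillFinalizedIndex maintains in the finalized index bucket.
func walkFinalizedAncestors(ctx context.Context, s *Store, start [32]byte, n int) ([][32]byte, error) {
	roots := make([][32]byte, 0, n)
	err := s.db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
		cur := start
		for i := 0; i < n; i++ {
			enc := bkt.Get(cur[:])
			if len(enc) == 0 {
				return errors.Wrapf(ErrNotFound, "finalized index entry missing for root %#x", cur)
			}
			fbr := &ethpb.FinalizedBlockRootContainer{}
			if err := decode(ctx, enc, fbr); err != nil {
				return err
			}
			roots = append(roots, cur)
			cur = bytesutil.ToBytes32(fbr.ParentRoot)
		}
		return nil
	})
	return roots, err
}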
// IsFinalizedBlock returns true if the block root is present in the finalized block root index.
// A beacon block root contained exists in this index if it is considered finalized and canonical.
// Note: beacon blocks from the latest finalized epoch return true, whether or not they are


@@ -1,6 +1,7 @@
package kv
import (
"bytes"
"context"
"testing"
@@ -14,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
bolt "go.etcd.io/bbolt"
)
var genesisBlockRoot = bytesutil.ToBytes32([]byte{'G', 'E', 'N', 'E', 'S', 'I', 'S'})
@@ -234,3 +236,64 @@ func makeBlocksAltair(t *testing.T, startIdx, num uint64, previousRoot [32]byte)
}
return ifaceBlocks
}
func TestStore_BackfillFinalizedIndex(t *testing.T) {
db := setupDB(t)
ctx := context.Background()
require.ErrorIs(t, db.BackfillFinalizedIndex(ctx, []consensusblocks.ROBlock{}, [32]byte{}), errEmptyBlockSlice)
blks, err := consensusblocks.NewROBlockSlice(makeBlocks(t, 0, 66, [32]byte{}))
require.NoError(t, err)
// set up existing finalized block
ebpr := blks[64].Block().ParentRoot()
ebr := blks[64].Root()
chldr := blks[65].Root()
ebf := &ethpb.FinalizedBlockRootContainer{
ParentRoot: ebpr[:],
ChildRoot: chldr[:],
}
disjoint := []consensusblocks.ROBlock{
blks[0],
blks[2],
}
enc, err := encode(ctx, ebf)
require.NoError(t, err)
err = db.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
return bkt.Put(ebr[:], enc)
})
// reslice to remove the existing blocks
blks = blks[0:64]
// check the other error conditions with a descendent root that really doesn't exist
require.NoError(t, err)
require.ErrorIs(t, db.BackfillFinalizedIndex(ctx, disjoint, [32]byte{}), errIncorrectBlockParent)
require.NoError(t, err)
require.ErrorIs(t, errFinalizedChildNotFound, db.BackfillFinalizedIndex(ctx, blks, [32]byte{}))
// use the real root so that it succeeds
require.NoError(t, db.BackfillFinalizedIndex(ctx, blks, ebr))
for i := range blks {
require.NoError(t, db.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
encfr := bkt.Get(blks[i].RootSlice())
require.Equal(t, true, len(encfr) > 0)
fr := &ethpb.FinalizedBlockRootContainer{}
require.NoError(t, decode(ctx, encfr, fr))
require.Equal(t, 32, len(fr.ParentRoot))
require.Equal(t, 32, len(fr.ChildRoot))
pr := blks[i].Block().ParentRoot()
require.Equal(t, true, bytes.Equal(fr.ParentRoot, pr[:]))
if i > 0 {
require.Equal(t, true, bytes.Equal(fr.ParentRoot, blks[i-1].RootSlice()))
}
if i < len(blks)-1 {
require.DeepEqual(t, fr.ChildRoot, blks[i+1].RootSlice())
}
if i == len(blks)-1 {
require.DeepEqual(t, fr.ChildRoot, ebr[:])
}
return nil
}))
}
}

View File

@@ -1,6 +1,7 @@
package kv
import (
"bytes"
"context"
"fmt"
"testing"
@@ -116,13 +117,25 @@ func Test_setupBlockStorageType(t *testing.T) {
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
wrappedBlinded, err := wrappedBlock.ToBlinded()
require.NoError(t, err)
require.DeepEqual(t, wrappedBlinded, retrievedBlk)
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
// Compare retrieved value by root, and marshaled bytes.
mSrc, err := wrappedBlinded.MarshalSSZ()
require.NoError(t, err)
mTgt, err := retrievedBlk.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(mSrc, mTgt))
rSrc, err := wrappedBlinded.Block().HashTreeRoot()
require.NoError(t, err)
rTgt, err := retrievedBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, rSrc, rTgt)
})
t.Run("existing database with full blocks type should continue storing full blocks", func(t *testing.T) {
store := setupDB(t)
@@ -155,10 +168,21 @@ func Test_setupBlockStorageType(t *testing.T) {
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, false, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
// Compare retrieved value by root, and marshaled bytes.
mSrc, err := wrappedBlock.MarshalSSZ()
require.NoError(t, err)
mTgt, err := retrievedBlk.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(mSrc, mTgt))
rTgt, err := retrievedBlk.Block().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, root, rTgt)
})
t.Run("existing database with blinded blocks type should error if user enables full blocks feature flag", func(t *testing.T) {
store := setupDB(t)

View File

@@ -61,8 +61,8 @@ var (
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")
// block root tracking the progress of backfill, or pointing at genesis if backfill has not been initiated
backfillBlockRootKey = []byte("backfill-block-root")
// tracking data about an ongoing backfill
backfillStatusKey = []byte("backfill-status")
// Deprecated: This index key was migrated in PR 6461. Do not use, except for migrations.
lastArchivedIndexKey = []byte("last-archived")

View File

@@ -893,6 +893,7 @@ func createStateIndicesFromStateSlot(ctx context.Context, slot primitives.Slot)
//
// 3.) state with current finalized root
// 4.) unfinalized States
// 5.) not origin root
func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB. CleanUpDirtyStates")
defer span.End()
@@ -907,6 +908,11 @@ func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint pr
}
deletedRoots := make([][32]byte, 0)
oRoot, err := s.OriginCheckpointBlockRoot(ctx)
if err != nil {
return err
}
err = s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(stateSlotIndicesBucket)
return bkt.ForEach(func(k, v []byte) error {
@@ -914,15 +920,31 @@ func (s *Store) CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint pr
return ctx.Err()
}
finalizedChkpt := bytesutil.ToBytes32(f.Root) == bytesutil.ToBytes32(v)
root := bytesutil.ToBytes32(v)
slot := bytesutil.BytesToSlotBigEndian(k)
mod := slot % slotsPerArchivedPoint
nonFinalized := slot > finalizedSlot
// The following conditions cover 1, 2, 3 and 4 above.
if mod != 0 && mod <= slotsPerArchivedPoint-slotsPerArchivedPoint/3 && !finalizedChkpt && !nonFinalized {
deletedRoots = append(deletedRoots, bytesutil.ToBytes32(v))
if mod == 0 {
return nil
}
if mod > slotsPerArchivedPoint-slotsPerArchivedPoint/3 {
return nil
}
if bytesutil.ToBytes32(f.Root) == root {
return nil
}
if slot > finalizedSlot {
return nil
}
if oRoot == root {
return nil
}
deletedRoots = append(deletedRoots, root)
return nil
})
})
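The refactor above replaces one compound condition with early returns, one per retention rule. As a minimal sketch, assuming illustrative parameter names rather than Prysm's actual API, the rules can be restated as a pure predicate, which makes the keep cases easy to audit:

package main

import "fmt"

// keepState restates the retention rules from CleanUpDirtyStates as a pure
// predicate. Names are illustrative; slots and roots are plain values here.
func keepState(slot, slotsPerArchivedPoint, finalizedSlot uint64, root, finalizedRoot, originRoot [32]byte) bool {
	mod := slot % slotsPerArchivedPoint
	switch {
	case mod == 0: // archived point boundary
		return true
	case mod > slotsPerArchivedPoint-slotsPerArchivedPoint/3: // close to the next archived point
		return true
	case root == finalizedRoot: // state of the current finalized checkpoint
		return true
	case slot > finalizedSlot: // not yet finalized
		return true
	case root == originRoot: // checkpoint-sync origin state
		return true
	default:
		return false // everything else is a dirty state and may be deleted
	}
}

func main() {
	origin := [32]byte{'o'}
	fmt.Println(keepState(64, 128, 1000, [32]byte{'x'}, [32]byte{'f'}, origin)) // false: deletable
	fmt.Println(keepState(64, 128, 1000, origin, [32]byte{'f'}, origin))        // true: origin root is kept
}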

View File

@@ -3,6 +3,7 @@ package kv
import (
"context"
"encoding/binary"
"math/big"
"math/rand"
"strconv"
"testing"
@@ -99,7 +100,7 @@ func TestState_CanSaveRetrieve(t *testing.T) {
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
}, 0)
}, big.NewInt(0))
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(p))
return st
@@ -124,7 +125,7 @@ func TestState_CanSaveRetrieve(t *testing.T) {
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
}, 0)
}, big.NewInt(0))
require.NoError(t, err)
require.NoError(t, st.SetLatestExecutionPayloadHeader(p))
return st
@@ -675,6 +676,7 @@ func TestStore_CleanUpDirtyStates_AboveThreshold(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
bRoots := make([][32]byte, 0)
slotsPerArchivedPoint := primitives.Slot(128)
@@ -720,6 +722,7 @@ func TestStore_CleanUpDirtyStates_Finalized(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {
b := util.NewBeaconBlock()
@@ -741,6 +744,35 @@ func TestStore_CleanUpDirtyStates_Finalized(t *testing.T) {
require.Equal(t, true, db.HasState(context.Background(), genesisRoot))
}
func TestStore_CleanUpDirtyStates_OriginRoot(t *testing.T) {
db := setupDB(t)
genesisState, err := util.NewBeaconState()
require.NoError(t, err)
r := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), r))
require.NoError(t, db.SaveState(context.Background(), genesisState, r))
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {
b := util.NewBeaconBlock()
b.Block.Slot = i
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
wsb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(context.Background(), wsb))
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetSlot(i))
require.NoError(t, db.SaveState(context.Background(), st, r))
}
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), r))
require.NoError(t, db.CleanUpDirtyStates(context.Background(), params.BeaconConfig().SlotsPerEpoch))
require.Equal(t, true, db.HasState(context.Background(), r))
}
func TestStore_CleanUpDirtyStates_DontDeleteNonFinalized(t *testing.T) {
db := setupDB(t)
@@ -749,6 +781,7 @@ func TestStore_CleanUpDirtyStates_DontDeleteNonFinalized(t *testing.T) {
genesisRoot := [32]byte{'a'}
require.NoError(t, db.SaveGenesisBlockRoot(context.Background(), genesisRoot))
require.NoError(t, db.SaveState(context.Background(), genesisState, genesisRoot))
require.NoError(t, db.SaveOriginCheckpointBlockRoot(context.Background(), [32]byte{'a'}))
var unfinalizedRoots [][32]byte
for i := primitives.Slot(1); i <= params.BeaconConfig().SlotsPerEpoch; i++ {

View File

@@ -8,6 +8,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v4/proto/dbval"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
)
@@ -17,18 +18,6 @@ import (
// syncing, using the provided values as their point of origin. This is an alternative
// to syncing from genesis, and should only be run on an empty database.
func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error {
genesisRoot, err := s.GenesisBlockRoot(ctx)
if err != nil {
if errors.Is(err, ErrNotFoundGenesisBlockRoot) {
return errors.Wrap(err, "genesis block root not found: genesis must be provided for checkpoint sync")
}
return errors.Wrap(err, "genesis block root query error: checkpoint sync must verify genesis to proceed")
}
err = s.SaveBackfillBlockRoot(ctx, genesisRoot)
if err != nil {
return errors.Wrap(err, "unable to save genesis root as initial backfill starting point for checkpoint sync")
}
cf, err := detect.FromState(serState)
if err != nil {
return errors.Wrap(err, "could not sniff config+fork for origin state bytes")
@@ -50,11 +39,24 @@ func (s *Store) SaveOrigin(ctx context.Context, serState, serBlock []byte) error
}
blk := wblk.Block()
// save block
blockRoot, err := blk.HashTreeRoot()
if err != nil {
return errors.Wrap(err, "could not compute HashTreeRoot of checkpoint block")
}
pr := blk.ParentRoot()
bf := &dbval.BackfillStatus{
LowSlot: uint64(wblk.Block().Slot()),
LowRoot: blockRoot[:],
LowParentRoot: pr[:],
OriginRoot: blockRoot[:],
OriginSlot: uint64(wblk.Block().Slot()),
}
if err = s.SaveBackfillStatus(ctx, bf); err != nil {
return errors.Wrap(err, "unable to save backfill status data to db for checkpoint sync")
}
log.Infof("saving checkpoint block to db, w/ root=%#x", blockRoot)
if err := s.SaveBlock(ctx, wblk); err != nil {
return errors.Wrap(err, "could not save checkpoint block")

View File

@@ -7,10 +7,21 @@ package slasherkv
// it easy to scan for keys that have a certain shard number as a prefix and return those
// corresponding attestations.
var (
// Slasher buckets.
attestedEpochsByValidator = []byte("attested-epochs-by-validator")
attestationRecordsBucket = []byte("attestation-records")
// key: (encoded) ValidatorIndex
// value: (encoded) Epoch
attestedEpochsByValidator = []byte("attested-epochs-by-validator")
// key: attestation SigningRoot
// value: (encoded + compressed) IndexedAttestation
attestationRecordsBucket = []byte("attestation-records")
// key: (encoded) Target Epoch + (encoded) ValidatorIndex
// value: attestation SigningRoot
attestationDataRootsBucket = []byte("attestation-data-roots")
proposalRecordsBucket = []byte("proposal-records")
slasherChunksBucket = []byte("slasher-chunks")
// key: Slot+ValidatorIndex
// value: (encoded) SignedBlockHeaderWrapper
proposalRecordsBucket = []byte("proposal-records")
slasherChunksBucket = []byte("slasher-chunks")
)
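The bucket comments above document composite keys such as target epoch + validator index. A small sketch of that layout (the 8-byte big-endian encoding here is an assumption for illustration; the package's real encodeTargetEpoch and encodeValidatorIndex helpers may use a different width or byte order) shows why entries for one epoch share a common prefix and can be range-scanned:

package main

import (
	"encoding/binary"
	"fmt"
)

// encode8 is illustrative only: the real encoding helpers in this package may
// use a different width or byte order.
func encode8(v uint64) []byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, v)
	return b
}

func main() {
	epoch, valIdx := uint64(1234), uint64(42)

	// attestationDataRootsBucket key layout: target epoch bytes followed by
	// validator index bytes, so all entries for one epoch share a prefix.
	key := append(encode8(epoch), encode8(valIdx)...)
	fmt.Printf("epoch prefix: %x\n", key[:8])
	fmt.Printf("full key:     %x\n", key)
}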

View File

@@ -29,72 +29,90 @@ const (
// LastEpochWrittenForValidators given a list of validator indices returns the latest
// epoch we have recorded the validators writing data for.
func (s *Store) LastEpochWrittenForValidators(
ctx context.Context, validatorIndices []primitives.ValidatorIndex,
ctx context.Context, validatorIndexes []primitives.ValidatorIndex,
) ([]*slashertypes.AttestedEpochForValidator, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.LastEpochWrittenForValidators")
defer span.End()
attestedEpochs := make([]*slashertypes.AttestedEpochForValidator, 0)
encodedIndices := make([][]byte, len(validatorIndices))
for i, valIdx := range validatorIndices {
encodedIndices[i] = encodeValidatorIndex(valIdx)
encodedIndexes := make([][]byte, len(validatorIndexes))
for i, validatorIndex := range validatorIndexes {
encodedIndexes[i] = encodeValidatorIndex(validatorIndex)
}
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(attestedEpochsByValidator)
for i, encodedIndex := range encodedIndices {
for i, encodedIndex := range encodedIndexes {
var epoch primitives.Epoch
epochBytes := bkt.Get(encodedIndex)
if epochBytes != nil {
if err := epoch.UnmarshalSSZ(epochBytes); err != nil {
return err
}
}
attestedEpochs = append(attestedEpochs, &slashertypes.AttestedEpochForValidator{
ValidatorIndex: validatorIndices[i],
ValidatorIndex: validatorIndexes[i],
Epoch: epoch,
})
}
return nil
})
return attestedEpochs, err
}
// SaveLastEpochsWrittenForValidators updates the latest epoch a slice
// of validator indices has attested to.
func (s *Store) SaveLastEpochsWrittenForValidators(
ctx context.Context, epochByValidator map[primitives.ValidatorIndex]primitives.Epoch,
ctx context.Context, epochByValIndex map[primitives.ValidatorIndex]primitives.Epoch,
) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveLastEpochsWrittenForValidators")
defer span.End()
encodedIndices := make([][]byte, 0, len(epochByValidator))
encodedEpochs := make([][]byte, 0, len(epochByValidator))
for valIdx, epoch := range epochByValidator {
const batchSize = 10000
encodedIndexes := make([][]byte, 0, len(epochByValIndex))
encodedEpochs := make([][]byte, 0, len(epochByValIndex))
for valIndex, epoch := range epochByValIndex {
if ctx.Err() != nil {
return ctx.Err()
}
encodedEpoch, err := epoch.MarshalSSZ()
if err != nil {
return err
}
encodedIndices = append(encodedIndices, encodeValidatorIndex(valIdx))
encodedIndexes = append(encodedIndexes, encodeValidatorIndex(valIndex))
encodedEpochs = append(encodedEpochs, encodedEpoch)
}
// The list of validators might be too massive for boltdb to handle in a single transaction,
// so instead we split it into batches and write each batch.
batchSize := 10000
for i := 0; i < len(encodedIndices); i += batchSize {
for i := 0; i < len(encodedIndexes); i += batchSize {
if ctx.Err() != nil {
return ctx.Err()
}
if err := s.db.Update(func(tx *bolt.Tx) error {
if ctx.Err() != nil {
return ctx.Err()
}
bkt := tx.Bucket(attestedEpochsByValidator)
min := i + batchSize
if min > len(encodedIndices) {
min = len(encodedIndices)
minimum := i + batchSize
if minimum > len(encodedIndexes) {
minimum = len(encodedIndexes)
}
for j, encodedIndex := range encodedIndices[i:min] {
for j, encodedIndex := range encodedIndexes[i:minimum] {
if ctx.Err() != nil {
return ctx.Err()
}
@@ -102,79 +120,106 @@ func (s *Store) SaveLastEpochsWrittenForValidators(
return err
}
}
return nil
}); err != nil {
return err
}
}
return nil
}
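The batching here exists because a single bolt transaction carrying hundreds of thousands of Puts is slow and memory hungry. A standalone sketch of the same windowing arithmetic (slices only, no bolt), including the clamp on the final window, looks like this:

package main

import "fmt"

// processInBatches mirrors the batching loop in SaveLastEpochsWrittenForValidators:
// it walks a slice in fixed-size windows and clamps the final window to the slice end.
func processInBatches(items []int, batchSize int, write func(batch []int) error) error {
	for i := 0; i < len(items); i += batchSize {
		end := i + batchSize
		if end > len(items) {
			end = len(items)
		}
		if err := write(items[i:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	items := make([]int, 25)
	_ = processInBatches(items, 10, func(batch []int) error {
		fmt.Println("writing batch of", len(batch)) // prints 10, 10, 5
		return nil
	})
}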
// CheckAttesterDoubleVotes retries any slashable double votes that exist
// for a series of input attestations.
// CheckAttesterDoubleVotes retrieves any slashable double votes that exist
// for a series of input attestations with respect to the database.
func (s *Store) CheckAttesterDoubleVotes(
ctx context.Context, attestations []*slashertypes.IndexedAttestationWrapper,
) ([]*slashertypes.AttesterDoubleVote, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.CheckAttesterDoubleVotes")
defer span.End()
doubleVotes := make([]*slashertypes.AttesterDoubleVote, 0)
doubleVotesMu := sync.Mutex{}
mu := sync.Mutex{}
eg, egctx := errgroup.WithContext(ctx)
for _, att := range attestations {
for _, attestation := range attestations {
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
attToProcess := att
// process every attestation parallelly.
attToProcess := attestation
// Process each attestation in parallel.
eg.Go(func() error {
err := s.db.View(func(tx *bolt.Tx) error {
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
encEpoch := encodeTargetEpoch(attToProcess.IndexedAttestation.Data.Target.Epoch)
localDoubleVotes := make([]*slashertypes.AttesterDoubleVote, 0)
localDoubleVotes := []*slashertypes.AttesterDoubleVote{}
for _, valIdx := range attToProcess.IndexedAttestation.AttestingIndices {
// Check if there is signing root in the database for this combination
// of validator index and target epoch.
encIdx := encodeValidatorIndex(primitives.ValidatorIndex(valIdx))
validatorEpochKey := append(encEpoch, encIdx...)
attRecordsKey := signingRootsBkt.Get(validatorEpochKey)
// An attestation record key consists of a signing root (32 bytes).
if len(attRecordsKey) < attestationRecordKeySize {
// If there is no signing root for this combination,
// then there is no double vote. We can continue to the next validator.
continue
}
// Retrieve the attestation record corresponding to the signing root
// from the database.
encExistingAttRecord := attRecordsBkt.Get(attRecordsKey)
if encExistingAttRecord == nil {
continue
}
existingSigningRoot := bytesutil.ToBytes32(attRecordsKey[:signingRootSize])
if existingSigningRoot != attToProcess.SigningRoot {
existingAttRecord, err := decodeAttestationRecord(encExistingAttRecord)
if err != nil {
return err
}
slashAtt := &slashertypes.AttesterDoubleVote{
ValidatorIndex: primitives.ValidatorIndex(valIdx),
Target: attToProcess.IndexedAttestation.Data.Target.Epoch,
PrevAttestationWrapper: existingAttRecord,
AttestationWrapper: attToProcess,
}
localDoubleVotes = append(localDoubleVotes, slashAtt)
if existingSigningRoot == attToProcess.SigningRoot {
continue
}
// There is a double vote.
existingAttRecord, err := decodeAttestationRecord(encExistingAttRecord)
if err != nil {
return err
}
// Build the proof of double vote.
slashAtt := &slashertypes.AttesterDoubleVote{
ValidatorIndex: primitives.ValidatorIndex(valIdx),
Target: attToProcess.IndexedAttestation.Data.Target.Epoch,
PrevAttestationWrapper: existingAttRecord,
AttestationWrapper: attToProcess,
}
localDoubleVotes = append(localDoubleVotes, slashAtt)
}
// if any routine is cancelled, then cancel this routine too
// If any routine is cancelled, then cancel this routine too.
select {
case <-egctx.Done():
return egctx.Err()
default:
}
// if there are any doible votes in this attestation, add it to the global double votes
// If there are any double votes in this attestation, add it to the global double votes.
if len(localDoubleVotes) > 0 {
doubleVotesMu.Lock()
defer doubleVotesMu.Unlock()
mu.Lock()
defer mu.Unlock()
doubleVotes = append(doubleVotes, localDoubleVotes...)
}
return nil
})
return err
})
}
return doubleVotes, eg.Wait()
}
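Stripped of the bolt plumbing, the double-vote check asks one question per attesting validator: does the (target epoch, validator index) key already map to a different signing root? A simplified in-memory sketch (maps instead of buckets; the att struct and field names are illustrative) captures that logic:

package main

import "fmt"

type att struct {
	target      uint64
	indices     []uint64
	signingRoot [32]byte
}

// doubleVotes reports (target epoch, validator index) pairs that attested twice
// with different signing roots, mirroring CheckAttesterDoubleVotes over maps.
func doubleVotes(seen map[[2]uint64][32]byte, incoming []att) [][2]uint64 {
	var out [][2]uint64
	for _, a := range incoming {
		for _, v := range a.indices {
			key := [2]uint64{a.target, v}
			prev, ok := seen[key]
			if !ok {
				seen[key] = a.signingRoot
				continue
			}
			if prev != a.signingRoot { // same key, different root: slashable double vote
				out = append(out, key)
			}
		}
	}
	return out
}

func main() {
	seen := map[[2]uint64][32]byte{}
	a1 := att{target: 10, indices: []uint64{1, 2}, signingRoot: [32]byte{'x'}}
	a2 := att{target: 10, indices: []uint64{2}, signingRoot: [32]byte{'y'}}
	fmt.Println(doubleVotes(seen, []att{a1, a2})) // [[10 2]]
}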
@@ -211,6 +256,8 @@ func (s *Store) AttestationRecordForValidator(
}
// SaveAttestationRecordsForValidators saves attestation records for the specified indices.
// If multiple attestations are provided for the same validator index + target epoch combination,
// then only the first one is (arbitrarily) saved in the `attestationDataRootsBucket` bucket.
func (s *Store) SaveAttestationRecordsForValidators(
ctx context.Context,
attestations []*slashertypes.IndexedAttestationWrapper,
@@ -219,37 +266,40 @@ func (s *Store) SaveAttestationRecordsForValidators(
defer span.End()
encodedTargetEpoch := make([][]byte, len(attestations))
encodedRecords := make([][]byte, len(attestations))
encodedIndices := make([][]byte, len(attestations))
for i, att := range attestations {
encEpoch := encodeTargetEpoch(att.IndexedAttestation.Data.Target.Epoch)
value, err := encodeAttestationRecord(att)
for i, attestation := range attestations {
encEpoch := encodeTargetEpoch(attestation.IndexedAttestation.Data.Target.Epoch)
value, err := encodeAttestationRecord(attestation)
if err != nil {
return err
}
indicesBytes := make([]byte, len(att.IndexedAttestation.AttestingIndices)*8)
for _, idx := range att.IndexedAttestation.AttestingIndices {
encodedIdx := encodeValidatorIndex(primitives.ValidatorIndex(idx))
indicesBytes = append(indicesBytes, encodedIdx...)
}
encodedIndices[i] = indicesBytes
encodedTargetEpoch[i] = encEpoch
encodedRecords[i] = value
}
return s.db.Update(func(tx *bolt.Tx) error {
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
signingRootsBkt := tx.Bucket(attestationDataRootsBucket)
for i, att := range attestations {
if err := attRecordsBkt.Put(att.SigningRoot[:], encodedRecords[i]); err != nil {
for i := len(attestations) - 1; i >= 0; i-- {
attestation := attestations[i]
if err := attRecordsBkt.Put(attestation.SigningRoot[:], encodedRecords[i]); err != nil {
return err
}
for _, valIdx := range att.IndexedAttestation.AttestingIndices {
for _, valIdx := range attestation.IndexedAttestation.AttestingIndices {
encIdx := encodeValidatorIndex(primitives.ValidatorIndex(valIdx))
key := append(encodedTargetEpoch[i], encIdx...)
if err := signingRootsBkt.Put(key, att.SigningRoot[:]); err != nil {
if err := signingRootsBkt.Put(key, attestation.SigningRoot[:]); err != nil {
return err
}
}
}
return nil
})
}
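The reverse iteration is what implements the "only the first one is saved" note above: when two attestations collide on the same validator index + target epoch key, later Puts overwrite earlier ones, so writing the slice back to front lets the first attestation win. A minimal map-based sketch, assuming nothing beyond ordinary overwrite semantics, shows the effect:

package main

import "fmt"

func main() {
	// Two records share the same key; the value written last wins, exactly as
	// with repeated bucket Puts in bolt.
	type rec struct{ key, val string }
	records := []rec{{"epoch7|val3", "first"}, {"epoch7|val3", "second"}}

	m := map[string]string{}
	// Reverse iteration: index 0 is written last, so the first record wins.
	for i := len(records) - 1; i >= 0; i-- {
		m[records[i].key] = records[i].val
	}
	fmt.Println(m["epoch7|val3"]) // "first"
}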
@@ -314,43 +364,60 @@ func (s *Store) SaveSlasherChunks(
}
// CheckDoubleBlockProposals takes in a list of proposals and for each,
// checks if there already exists a proposal at the same slot+validatorIndex combination. If so,
// We check if the existing signing root is not-empty and is different than the incoming
// proposal signing root. If so, we return a double block proposal object.
// checks if there already exists a proposal at the same slot+validatorIndex combination.
// If so, it checks whether the existing signing root is non-empty and differs from
// the incoming proposal signing root.
// If so, it returns a double block proposal object.
func (s *Store) CheckDoubleBlockProposals(
ctx context.Context, proposals []*slashertypes.SignedBlockHeaderWrapper,
ctx context.Context, incomingProposals []*slashertypes.SignedBlockHeaderWrapper,
) ([]*ethpb.ProposerSlashing, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.CheckDoubleBlockProposals")
defer span.End()
proposerSlashings := make([]*ethpb.ProposerSlashing, 0, len(proposals))
proposerSlashings := make([]*ethpb.ProposerSlashing, 0, len(incomingProposals))
err := s.db.View(func(tx *bolt.Tx) error {
// Retrieve the proposal records bucket
bkt := tx.Bucket(proposalRecordsBucket)
for _, proposal := range proposals {
for _, incomingProposal := range incomingProposals {
// Build the key corresponding to this slot + validator index combination
key, err := keyForValidatorProposal(
proposal.SignedBeaconBlockHeader.Header.Slot,
proposal.SignedBeaconBlockHeader.Header.ProposerIndex,
incomingProposal.SignedBeaconBlockHeader.Header.Slot,
incomingProposal.SignedBeaconBlockHeader.Header.ProposerIndex,
)
if err != nil {
return err
}
// Retrieve the existing proposal record from the database
encExistingProposalWrapper := bkt.Get(key)
// If there is no existing proposal record (empty result), then there is no double proposal.
// We can continue to the next proposal.
if len(encExistingProposalWrapper) < signingRootSize {
continue
}
// Compare the proposal signing root in the DB with the incoming proposal signing root.
// If they differ, we have a double proposal.
existingSigningRoot := bytesutil.ToBytes32(encExistingProposalWrapper[:signingRootSize])
if existingSigningRoot != proposal.SigningRoot {
if existingSigningRoot != incomingProposal.SigningRoot {
existingProposalWrapper, err := decodeProposalRecord(encExistingProposalWrapper)
if err != nil {
return err
}
proposerSlashings = append(proposerSlashings, &ethpb.ProposerSlashing{
Header_1: existingProposalWrapper.SignedBeaconBlockHeader,
Header_2: proposal.SignedBeaconBlockHeader,
Header_2: incomingProposal.SignedBeaconBlockHeader,
})
}
}
return nil
})
return proposerSlashings, err
}
@@ -384,14 +451,20 @@ func (s *Store) BlockProposalForValidator(
// SaveBlockProposals takes in a list of block proposals and saves them to our
// proposal records bucket in the database.
// If multiple proposals are provided for the same slot + validatorIndex combination,
// then only the last one is saved in the database.
func (s *Store) SaveBlockProposals(
ctx context.Context, proposals []*slashertypes.SignedBlockHeaderWrapper,
) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveBlockProposals")
defer span.End()
encodedKeys := make([][]byte, len(proposals))
encodedProposals := make([][]byte, len(proposals))
// Loop over all proposals to encode keys and proposals themselves.
for i, proposal := range proposals {
// Encode the key for this proposal.
key, err := keyForValidatorProposal(
proposal.SignedBeaconBlockHeader.Header.Slot,
proposal.SignedBeaconBlockHeader.Header.ProposerIndex,
@@ -399,20 +472,29 @@ func (s *Store) SaveBlockProposals(
if err != nil {
return err
}
// Encode the proposal itself.
enc, err := encodeProposalRecord(proposal)
if err != nil {
return err
}
encodedKeys[i] = key
encodedProposals[i] = enc
}
// All proposals are saved into the DB in a single transaction.
return s.db.Update(func(tx *bolt.Tx) error {
// Retrieve the proposal records bucket.
bkt := tx.Bucket(proposalRecordsBucket)
// Save all proposals.
for i := range proposals {
if err := bkt.Put(encodedKeys[i], encodedProposals[i]); err != nil {
return err
}
}
return nil
})
}
@@ -472,7 +554,7 @@ func suffixForAttestationRecordsKey(key, encodedValidatorIndex []byte) bool {
return bytes.Equal(encIdx, encodedValidatorIndex)
}
// Disk key for a validator proposal, including a slot+validatorIndex as a byte slice.
// keyForValidatorProposal returns a disk key for a validator proposal, including a slot+validatorIndex as a byte slice.
func keyForValidatorProposal(slot primitives.Slot, proposerIndex primitives.ValidatorIndex) ([]byte, error) {
encSlot, err := slot.MarshalSSZ()
if err != nil {
@@ -512,37 +594,55 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
return chunk, nil
}
// Decode attestation record from bytes.
// Encode attestation record to bytes.
// The output encoded attestation record consists of the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil {
return []byte{}, errors.New("nil proposal record")
}
// Encode attestation.
encodedAtt, err := att.IndexedAttestation.MarshalSSZ()
if err != nil {
return nil, err
}
// Compress attestation.
compressedAtt := snappy.Encode(nil, encodedAtt)
return append(att.SigningRoot[:], compressedAtt...), nil
}
// Decode attestation record from bytes.
// The input encoded attestation record consists of the signing root concatenated with the compressed attestation record.
func decodeAttestationRecord(encoded []byte) (*slashertypes.IndexedAttestationWrapper, error) {
if len(encoded) < signingRootSize {
return nil, fmt.Errorf("wrong length for encoded attestation record, want 32, got %d", len(encoded))
return nil, fmt.Errorf("wrong length for encoded attestation record, want minimum %d, got %d", signingRootSize, len(encoded))
}
signingRoot := encoded[:signingRootSize]
decodedAtt := &ethpb.IndexedAttestation{}
// Decompress attestation.
decodedAttBytes, err := snappy.Decode(nil, encoded[signingRootSize:])
if err != nil {
return nil, err
}
// Decode attestation.
decodedAtt := &ethpb.IndexedAttestation{}
if err := decodedAtt.UnmarshalSSZ(decodedAttBytes); err != nil {
return nil, err
}
return &slashertypes.IndexedAttestationWrapper{
// Decode signing root.
signingRootBytes := encoded[:signingRootSize]
signingRoot := bytesutil.ToBytes32(signingRootBytes)
// Return decoded attestation.
attestation := &slashertypes.IndexedAttestationWrapper{
IndexedAttestation: decodedAtt,
SigningRoot: bytesutil.ToBytes32(signingRoot),
}, nil
SigningRoot: signingRoot,
}
return attestation, nil
}
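The record framing is simply a 32-byte signing root followed by the snappy-compressed SSZ attestation. The sketch below reproduces just that framing with an arbitrary byte payload standing in for the SSZ bytes; github.com/golang/snappy is the only dependency, and the helper names are illustrative rather than this package's actual functions:

package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

const signingRootSize = 32

// encodeRecord frames a record as signingRoot || snappy(payload), mirroring the
// layout of encodeAttestationRecord with the SSZ bytes replaced by a payload.
func encodeRecord(signingRoot [32]byte, payload []byte) []byte {
	return append(signingRoot[:], snappy.Encode(nil, payload)...)
}

// decodeRecord reverses encodeRecord.
func decodeRecord(enc []byte) ([32]byte, []byte, error) {
	var root [32]byte
	if len(enc) < signingRootSize {
		return root, nil, fmt.Errorf("want at least %d bytes, got %d", signingRootSize, len(enc))
	}
	copy(root[:], enc[:signingRootSize])
	payload, err := snappy.Decode(nil, enc[signingRootSize:])
	return root, payload, err
}

func main() {
	root := [32]byte{'r'}
	enc := encodeRecord(root, []byte("ssz-bytes-here"))
	gotRoot, gotPayload, err := decodeRecord(enc)
	fmt.Println(err == nil, gotRoot == root, bytes.Equal(gotPayload, []byte("ssz-bytes-here")))
}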
func encodeProposalRecord(blkHdr *slashertypes.SignedBlockHeaderWrapper) ([]byte, error) {

View File

@@ -113,7 +113,7 @@ func (s *Service) BlockByTimestamp(ctx context.Context, time uint64) (*types.Hea
cursorNum := big.NewInt(0).SetUint64(latestBlkHeight)
cursorTime := latestBlkTime
numOfBlocks := uint64(0)
var numOfBlocks uint64
estimatedBlk := cursorNum.Uint64()
maxTimeBuffer := searchThreshold * params.BeaconConfig().SecondsPerETH1Block
// Terminate if we can't find an acceptable block after

View File

@@ -260,7 +260,7 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
if err != nil {
return nil, nil, false, handleRPCError(err)
}
ed, err := blocks.WrappedExecutionPayloadDeneb(result.Payload, blocks.PayloadValueToGwei(result.Value))
ed, err := blocks.WrappedExecutionPayloadDeneb(result.Payload, blocks.PayloadValueToWei(result.Value))
if err != nil {
return nil, nil, false, err
}
@@ -273,7 +273,7 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
if err != nil {
return nil, nil, false, handleRPCError(err)
}
ed, err := blocks.WrappedExecutionPayloadCapella(result.Payload, blocks.PayloadValueToGwei(result.Value))
ed, err := blocks.WrappedExecutionPayloadCapella(result.Payload, blocks.PayloadValueToWei(result.Value))
if err != nil {
return nil, nil, false, err
}
@@ -734,7 +734,7 @@ func fullPayloadFromExecutionBlock(
BlockHash: blockHash[:],
Transactions: txs,
Withdrawals: block.Withdrawals,
}, 0) // We can't get the block value and don't care about the block value for this instance
}, big.NewInt(0)) // We can't get the block value and don't care about the block value for this instance
case version.Deneb:
ebg, err := header.ExcessBlobGas()
if err != nil {
@@ -763,7 +763,7 @@ func fullPayloadFromExecutionBlock(
Withdrawals: block.Withdrawals,
BlobGasUsed: bgu,
ExcessBlobGas: ebg,
}, 0) // We can't get the block value and don't care about the block value for this instance
}, big.NewInt(0)) // We can't get the block value and don't care about the block value for this instance
default:
return nil, fmt.Errorf("unknown execution block version %d", block.Version)
}
@@ -811,7 +811,7 @@ func fullPayloadFromPayloadBody(
BlockHash: header.BlockHash(),
Transactions: body.Transactions,
Withdrawals: body.Withdrawals,
}, 0) // We can't get the block value and don't care about the block value for this instance
}, big.NewInt(0)) // We can't get the block value and don't care about the block value for this instance
case version.Deneb:
ebg, err := header.ExcessBlobGas()
if err != nil {
@@ -840,7 +840,7 @@ func fullPayloadFromPayloadBody(
Withdrawals: body.Withdrawals,
ExcessBlobGas: ebg,
BlobGasUsed: bgu,
}, 0) // We can't get the block value and don't care about the block value for this instance
}, big.NewInt(0)) // We can't get the block value and don't care about the block value for this instance
default:
return nil, fmt.Errorf("unknown execution block version for payload %d", bVersion)
}
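Carrying the block value as a *big.Int in Wei rather than a Gwei integer avoids both uint64 overflow (possible above roughly 18.44 ETH worth of Wei) and silent loss of sub-Gwei precision. A small standard-library sketch of the Gwei/Wei relationship follows; the conversion shown is illustrative and is not Prysm's PayloadValueToWei implementation:

package main

import (
	"fmt"
	"math/big"
)

var weiPerGwei = big.NewInt(1_000_000_000) // 1 Gwei = 1e9 Wei

func main() {
	// Wei values can exceed uint64 (anything above ~18.44 ETH), so *big.Int is
	// the safe representation; truncating to Gwei also drops sub-Gwei precision,
	// shown by the remainder below.
	wei, _ := new(big.Int).SetString("18400000000000000123", 10)

	gwei := new(big.Int).Div(wei, weiPerGwei)
	remainder := new(big.Int).Mod(wei, weiPerGwei)
	fmt.Println("gwei:", gwei)             // 18400000000
	fmt.Println("dropped wei:", remainder) // 123 — the precision a Wei value preserves
}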

View File

@@ -127,7 +127,7 @@ func TestClient_IPC(t *testing.T) {
require.Equal(t, true, ok)
req, ok := fix["ExecutionPayloadCapella"].(*pb.ExecutionPayloadCapella)
require.Equal(t, true, ok)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(req, big.NewInt(0))
require.NoError(t, err)
latestValidHash, err := srv.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
@@ -476,7 +476,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV2Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.NoError(t, err)
@@ -490,7 +490,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV3Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.NoError(t, err)
@@ -518,7 +518,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV2Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
@@ -532,7 +532,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV3Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrAcceptedSyncingPayloadStatus, err)
@@ -560,7 +560,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV2Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
@@ -574,7 +574,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV3Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrInvalidBlockHashPayloadStatus, err)
@@ -602,7 +602,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV2Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadCapella(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{})
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
@@ -616,7 +616,7 @@ func TestClient_HTTP(t *testing.T) {
client := newPayloadV3Setup(t, want, execPayload)
// We call the RPC method via HTTP and expect a proper result.
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, 0)
wrappedPayload, err := blocks.WrappedExecutionPayloadDeneb(execPayload, big.NewInt(0))
require.NoError(t, err)
resp, err := client.NewPayload(ctx, wrappedPayload, []common.Hash{}, &common.Hash{'a'})
require.ErrorIs(t, ErrInvalidPayloadStatus, err)
@@ -1537,7 +1537,7 @@ func Test_fullPayloadFromExecutionBlockCapella(t *testing.T) {
p, err := blocks.WrappedExecutionPayloadCapella(&pb.ExecutionPayloadCapella{
BlockHash: wantedHash[:],
Transactions: [][]byte{},
}, 0)
}, big.NewInt(0))
require.NoError(t, err)
return p
},
@@ -1545,7 +1545,7 @@ func Test_fullPayloadFromExecutionBlockCapella(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrapped, err := blocks.WrappedExecutionPayloadHeaderCapella(tt.args.header, 0)
wrapped, err := blocks.WrappedExecutionPayloadHeaderCapella(tt.args.header, big.NewInt(0))
require.NoError(t, err)
got, err := fullPayloadFromExecutionBlock(tt.args.version, wrapped, tt.args.block)
if err != nil {
@@ -1598,7 +1598,7 @@ func Test_fullPayloadFromExecutionBlockDeneb(t *testing.T) {
p, err := blocks.WrappedExecutionPayloadDeneb(&pb.ExecutionPayloadDeneb{
BlockHash: wantedHash[:],
Transactions: [][]byte{},
}, 0)
}, big.NewInt(0))
require.NoError(t, err)
return p
},
@@ -1606,7 +1606,7 @@ func Test_fullPayloadFromExecutionBlockDeneb(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
wrapped, err := blocks.WrappedExecutionPayloadHeaderDeneb(tt.args.header, 0)
wrapped, err := blocks.WrappedExecutionPayloadHeaderDeneb(tt.args.header, big.NewInt(0))
require.NoError(t, err)
got, err := fullPayloadFromExecutionBlock(tt.args.version, wrapped, tt.args.block)
if err != nil {

View File

@@ -479,10 +479,10 @@ func (s *Service) handleETH1FollowDistance() {
// If the last requested block has not changed,
// we do not request batched logs as this means there are no new
// logs for the powchain service to process. Also it is a potential
// logs for the execution service to process. Also it is a potential
// failure condition as would mean we have not respected the protocol threshold.
if s.latestEth1Data.LastRequestedBlock == s.latestEth1Data.BlockHeight {
log.Error("Beacon node is not respecting the follow distance")
log.WithField("lastBlockNumber", s.latestEth1Data.LastRequestedBlock).Error("Beacon node is not respecting the follow distance. EL client is syncing.")
return
}
if err := s.requestBatchedHeadersAndLogs(ctx); err != nil {
@@ -753,7 +753,7 @@ func (s *Service) initializeEth1Data(ctx context.Context, eth1DataInDB *ethpb.ET
} else {
if eth1DataInDB.Trie == nil && eth1DataInDB.DepositSnapshot != nil {
return errors.Errorf("trying to use old deposit trie after migration to the new trie. "+
"Run with the --%s flag to resume normal operations.", features.EnableEIP4881.Name)
"Remove the --%s flag to resume normal operations.", features.DisableEIP4881.Name)
}
s.depositTrie, err = trie.CreateTrieFromProto(eth1DataInDB.Trie)
}

View File

@@ -23,7 +23,6 @@ go_library(
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",

View File

@@ -14,7 +14,6 @@ import (
payloadattribute "github.com/prysmaticlabs/prysm/v4/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/math"
pb "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
@@ -63,14 +62,14 @@ func (e *EngineClient) ForkchoiceUpdated(
// GetPayload --
func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, s primitives.Slot) (interfaces.ExecutionData, *pb.BlobsBundle, bool, error) {
if slots.ToEpoch(s) >= params.BeaconConfig().DenebForkEpoch {
ed, err := blocks.WrappedExecutionPayloadDeneb(e.ExecutionPayloadDeneb, math.Gwei(e.BlockValue))
ed, err := blocks.WrappedExecutionPayloadDeneb(e.ExecutionPayloadDeneb, big.NewInt(int64(e.BlockValue)))
if err != nil {
return nil, nil, false, err
}
return ed, e.BlobsBundle, e.BuilderOverride, nil
}
if slots.ToEpoch(s) >= params.BeaconConfig().CapellaForkEpoch {
ed, err := blocks.WrappedExecutionPayloadCapella(e.ExecutionPayloadCapella, math.Gwei(e.BlockValue))
ed, err := blocks.WrappedExecutionPayloadCapella(e.ExecutionPayloadCapella, big.NewInt(int64(e.BlockValue)))
if err != nil {
return nil, nil, false, err
}

View File

@@ -40,6 +40,7 @@ go_library(
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",
"//beacon-chain/rpc:go_default_library",
"//beacon-chain/slasher:go_default_library",
"//beacon-chain/startup:go_default_library",
@@ -47,6 +48,7 @@ go_library(
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//beacon-chain/sync/backfill:go_default_library",
"//beacon-chain/sync/backfill/coverage:go_default_library",
"//beacon-chain/sync/checkpoint:go_default_library",
"//beacon-chain/sync/genesis:go_default_library",
"//beacon-chain/sync/initial-sync:go_default_library",

View File

@@ -42,6 +42,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/synccommittee"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/slasher"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/startup"
@@ -49,6 +50,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
regularsync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/backfill"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/backfill/coverage"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/checkpoint"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/genesis"
initialsync "github.com/prysmaticlabs/prysm/v4/beacon-chain/sync/initial-sync"
@@ -113,10 +115,12 @@ type BeaconNode struct {
CheckpointInitializer checkpoint.Initializer
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
BackfillOpts []backfill.ServiceOption
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
blobRetentionEpochs primitives.Epoch
verifyInitWaiter *verification.InitializerWaiter
syncChecker *initialsync.SyncChecker
}
// New creates a new node instance, sets up configuration options, and registers
@@ -189,6 +193,7 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.initialSyncComplete = make(chan struct{})
beacon.syncChecker = &initialsync.SyncChecker{}
for _, opt := range opts {
if err := opt(beacon); err != nil {
return nil, err
@@ -215,10 +220,23 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
return nil, err
}
bfs := backfill.NewStatus(beacon.db)
if err := bfs.Reload(ctx); err != nil {
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, err
}
bfs, err := backfill.NewUpdater(ctx, beacon.db)
if err != nil {
return nil, errors.Wrap(err, "backfill status initialization error")
}
pa := peers.NewAssigner(beacon.fetchP2P().Peers(), beacon.forkChoicer)
bf, err := backfill.NewService(ctx, bfs, beacon.clockWaiter, beacon.fetchP2P(), pa, beacon.BackfillOpts...)
if err != nil {
return nil, errors.Wrap(err, "error initializing backfill service")
}
if err := beacon.services.RegisterService(bf); err != nil {
return nil, errors.Wrap(err, "error registering backfill service")
}
log.Debugln("Starting State Gen")
if err := beacon.startStateGen(ctx, bfs, beacon.forkChoicer); err != nil {
@@ -233,11 +251,6 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen)
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, err
}
log.Debugln("Registering POW Chain Service")
if err := beacon.registerPOWChainService(); err != nil {
return nil, err
@@ -534,8 +547,8 @@ func (b *BeaconNode) startSlasherDB(cliCtx *cli.Context) error {
return nil
}
func (b *BeaconNode) startStateGen(ctx context.Context, bfs *backfill.Status, fc forkchoice.ForkChoicer) error {
opts := []stategen.Option{stategen.WithBackfillStatus(bfs)}
func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBlocker, fc forkchoice.ForkChoicer) error {
opts := []stategen.Option{stategen.WithAvailableBlocker(bfs)}
sg := stategen.New(b.db, fc, opts...)
cp, err := b.db.FinalizedCheckpoint(ctx)
@@ -663,6 +676,7 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithBlobStorage(b.BlobStorage),
blockchain.WithTrackedValidatorsCache(b.trackedValidatorsCache),
blockchain.WithPayloadIDCache(b.payloadIDCache),
blockchain.WithSyncChecker(b.syncChecker),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -756,6 +770,7 @@ func (b *BeaconNode) registerInitialSyncService(complete chan struct{}) error {
opts := []initialsync.Option{
initialsync.WithVerifierWaiter(b.verifyInitWaiter),
initialsync.WithSyncChecker(b.syncChecker),
}
is := initialsync.NewService(b.ctx, &initialsync.Config{
DB: b.db,

View File

@@ -27,6 +27,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime"
"github.com/prysmaticlabs/prysm/v4/runtime/interop"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
@@ -91,6 +92,30 @@ func TestNodeStart_Ok(t *testing.T) {
require.LogsContain(t, hook, "Starting beacon node")
}
func TestNodeStart_SyncChecker(t *testing.T) {
hook := logTest.NewGlobal()
app := cli.App{}
tmp := fmt.Sprintf("%s/datadirtest2", t.TempDir())
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
require.NoError(t, err)
go func() {
node.Start()
}()
time.Sleep(3 * time.Second)
assert.NotNil(t, node.syncChecker.Svc)
node.Close()
require.LogsContain(t, hook, "Starting beacon node")
}
func TestNodeStart_Ok_registerDeterministicGenesisService(t *testing.T) {
numValidators := uint64(1)
hook := logTest.NewGlobal()

View File

@@ -83,7 +83,7 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
decodedData, err := encoder.DecodeSnappy(pmsg.Data, gossipPubSubSize)
if err != nil {
totalLength, err := math.AddInt(
len(params.BeaconConfig().MessageDomainValidSnappy),
len(params.BeaconConfig().MessageDomainInvalidSnappy),
len(topicLenBytes),
topicLen,
len(pmsg.Data),

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"assigner.go",
"log.go",
"status.go",
],
@@ -12,8 +13,10 @@ go_library(
"//cmd:__subpackages__",
],
deps = [
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/p2p/peers/peerdata:go_default_library",
"//beacon-chain/p2p/peers/scorers:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -28,6 +31,7 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
@@ -36,6 +40,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"assigner_test.go",
"benchmark_test.go",
"peers_test.go",
"status_test.go",

View File

@@ -0,0 +1,78 @@
package peers
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/sirupsen/logrus"
)
// FinalizedCheckpointer describes the minimum capability that Assigner needs from forkchoice.
// That is, the ability to retrieve the latest finalized checkpoint to help with peer evaluation.
type FinalizedCheckpointer interface {
FinalizedCheckpoint() *forkchoicetypes.Checkpoint
}
// NewAssigner assists in the correct construction of an Assigner by code in other packages,
// assuring all the important private member fields are given values.
// The FinalizedCheckpointer is used to retrieve the latest finalized checkpoint each time peers are requested.
// Peers that report an older finalized checkpoint are filtered out.
func NewAssigner(s *Status, fc FinalizedCheckpointer) *Assigner {
return &Assigner{
ps: s,
fc: fc,
}
}
// Assigner uses the "BestFinalized" peer scoring method to pick the next-best peer to receive rpc requests.
type Assigner struct {
ps *Status
fc FinalizedCheckpointer
}
// ErrInsufficientSuitable is a sentinel error, signaling that a peer couldn't be assigned because there are currently
// not enough peers that match our selection criteria to serve rpc requests. It is the responsibility of the caller to
// look for this error and continue to try calling Assign with appropriate backoff logic.
var ErrInsufficientSuitable = errors.New("no suitable peers")
func (a *Assigner) freshPeers() ([]peer.ID, error) {
required := params.BeaconConfig().MaxPeersToSync
if flags.Get().MinimumSyncPeers < required {
required = flags.Get().MinimumSyncPeers
}
_, peers := a.ps.BestFinalized(params.BeaconConfig().MaxPeersToSync, a.fc.FinalizedCheckpoint().Epoch)
if len(peers) < required {
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Warn("Unable to assign peer while suitable peers < required ")
return nil, ErrInsufficientSuitable
}
return peers, nil
}
// Assign uses the "BestFinalized" method to select the best peers that agree on a canonical block
// for the configured finalized epoch. At most `n` peers will be returned. The `busy` param can be used
// to filter out peers that we know we don't want to connect to, for instance if we are trying to limit
// the number of outbound requests to each peer from a given component.
func (a *Assigner) Assign(busy map[peer.ID]bool, n int) ([]peer.ID, error) {
best, err := a.freshPeers()
if err != nil {
return nil, err
}
return pickBest(busy, n, best), nil
}
func pickBest(busy map[peer.ID]bool, n int, best []peer.ID) []peer.ID {
ps := make([]peer.ID, 0, n)
for _, p := range best {
if len(ps) == n {
return ps
}
if !busy[p] {
ps = append(ps, p)
}
}
return ps
}
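A hypothetical caller of Assign keeps its own busy map and asks for a handful of idle peers at a time; the sketch below mirrors pickBest with plain strings instead of libp2p peer.IDs to show that calling pattern (the component and peer names are made up):

package main

import "fmt"

// pickIdle mirrors pickBest: walk the best-first list and return up to n peers
// that are not currently marked busy.
func pickIdle(busy map[string]bool, n int, best []string) []string {
	out := make([]string, 0, n)
	for _, p := range best {
		if len(out) == n {
			return out
		}
		if !busy[p] {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	best := []string{"p0", "p1", "p2", "p3"}
	busy := map[string]bool{"p0": true, "p2": true}

	// A caller (e.g. a backfill worker pool) tracks which peers it already has
	// requests in flight to and asks for two more.
	fmt.Println(pickIdle(busy, 2, best)) // [p1 p3]
}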

View File

@@ -0,0 +1,114 @@
package peers
import (
"fmt"
"testing"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestPickBest(t *testing.T) {
best := testPeerIds(10)
cases := []struct {
name string
busy map[peer.ID]bool
n int
best []peer.ID
expected []peer.ID
}{
{
name: "",
n: 0,
},
{
name: "none busy",
n: 1,
expected: best[0:1],
},
{
name: "all busy except last",
n: 1,
busy: testBusyMap(best[0 : len(best)-1]),
expected: best[len(best)-1:],
},
{
name: "all busy except i=5",
n: 1,
busy: testBusyMap(append(append([]peer.ID{}, best[0:5]...), best[6:]...)),
expected: []peer.ID{best[5]},
},
{
name: "all busy - 0 results",
n: 1,
busy: testBusyMap(best),
},
{
name: "first half busy",
n: 5,
busy: testBusyMap(best[0:5]),
expected: best[5:],
},
{
name: "back half busy",
n: 5,
busy: testBusyMap(best[5:]),
expected: best[0:5],
},
{
name: "pick all ",
n: 10,
expected: best,
},
{
name: "none available",
n: 10,
best: []peer.ID{},
},
{
name: "not enough",
n: 10,
best: best[0:1],
expected: best[0:1],
},
{
name: "not enough, some busy",
n: 10,
best: best[0:6],
busy: testBusyMap(best[0:5]),
expected: best[5:6],
},
}
for _, c := range cases {
name := fmt.Sprintf("n=%d", c.n)
if c.name != "" {
name += " " + c.name
}
t.Run(name, func(t *testing.T) {
if c.best == nil {
c.best = best
}
pb := pickBest(c.busy, c.n, c.best)
require.Equal(t, len(c.expected), len(pb))
for i := range c.expected {
require.Equal(t, c.expected[i], pb[i])
}
})
}
}
func testBusyMap(b []peer.ID) map[peer.ID]bool {
m := make(map[peer.ID]bool)
for i := range b {
m[b[i]] = true
}
return m
}
func testPeerIds(n int) []peer.ID {
pids := make([]peer.ID, n)
for i := range pids {
pids[i] = peer.ID(fmt.Sprintf("%d", i))
}
return pids
}

View File

@@ -9,13 +9,13 @@ go_library(
"handlers_validator.go",
"log.go",
"server.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
@@ -75,6 +75,7 @@ go_test(
deps = [
"//api:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
@@ -90,7 +91,6 @@ go_test(
"//beacon-chain/operations/voluntaryexits/mock:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/eth/shared/testing:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
corehelpers "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filters"
@@ -76,7 +77,7 @@ func (s *Server) getBlock(ctx context.Context, w http.ResponseWriter, blk interf
httputil.HandleError(w, "Could not get block: "+err.Error(), http.StatusInternalServerError)
return
}
resp := &GetBlockResponse{Data: v2Resp.Data}
resp := &structs.GetBlockResponse{Data: v2Resp.Data}
httputil.WriteJson(w, resp)
}
@@ -121,7 +122,7 @@ func (s *Server) getBlockV2(ctx context.Context, w http.ResponseWriter, blk inte
}
finalized := s.FinalizationFetcher.IsFinalized(ctx, blkRoot)
getBlockHandler := func(get func(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error)) handled {
getBlockHandler := func(get func(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error)) handled {
result, err := get(ctx, blk)
if result != nil {
result.Finalized = finalized
@@ -221,7 +222,7 @@ func (s *Server) getBlindedBlock(ctx context.Context, w http.ResponseWriter, blk
}
finalized := s.FinalizationFetcher.IsFinalized(ctx, blkRoot)
getBlockHandler := func(get func(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error)) handled {
getBlockHandler := func(get func(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error)) handled {
result, err := get(ctx, blk)
if result != nil {
result.Finalized = finalized
@@ -290,7 +291,7 @@ func (s *Server) getBlindedBlockSSZ(ctx context.Context, w http.ResponseWriter,
httputil.HandleError(w, fmt.Sprintf("Unknown block type %T", blk), http.StatusInternalServerError)
}
func (*Server) getBlockPhase0(_ context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (*Server) getBlockPhase0(_ context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
consensusBlk, err := blk.PbPhase0Block()
if err != nil {
return nil, err
@@ -298,22 +299,22 @@ func (*Server) getBlockPhase0(_ context.Context, blk interfaces.ReadOnlySignedBe
if consensusBlk == nil {
return nil, errNilBlock
}
respBlk := shared.SignedBeaconBlockFromConsensus(consensusBlk)
respBlk := structs.SignedBeaconBlockFromConsensus(consensusBlk)
jsonBytes, err := json.Marshal(respBlk.Message)
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Phase0),
ExecutionOptimistic: false,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
func (*Server) getBlockAltair(_ context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (*Server) getBlockAltair(_ context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
consensusBlk, err := blk.PbAltairBlock()
if err != nil {
return nil, err
@@ -321,22 +322,22 @@ func (*Server) getBlockAltair(_ context.Context, blk interfaces.ReadOnlySignedBe
if consensusBlk == nil {
return nil, errNilBlock
}
respBlk := shared.SignedBeaconBlockAltairFromConsensus(consensusBlk)
respBlk := structs.SignedBeaconBlockAltairFromConsensus(consensusBlk)
jsonBytes, err := json.Marshal(respBlk.Message)
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Altair),
ExecutionOptimistic: false,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
func (s *Server) getBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
consensusBlk, err := blk.PbBellatrixBlock()
if err != nil {
// ErrUnsupportedField means that we have another block type
@@ -372,7 +373,7 @@ func (s *Server) getBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlyS
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBeaconBlockBellatrixFromConsensus(consensusBlk)
respBlk, err := structs.SignedBeaconBlockBellatrixFromConsensus(consensusBlk)
if err != nil {
return nil, err
}
@@ -380,17 +381,17 @@ func (s *Server) getBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlyS
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Bellatrix),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
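From Bellatrix onward the getters also ask the chain whether the block is optimistically imported before filling `ExecutionOptimistic`, wrapping any failure with `errors.Wrapf`. A minimal sketch of that branch, with an assumed `optimisticChecker` interface standing in for the chain-info fetcher the real handlers use:

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// optimisticChecker is an assumed stand-in for the chain-info interface the
// real handlers consult when checking optimistic status for a block root.
type optimisticChecker interface {
	IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
}

type fakeChain struct{ optimistic bool }

func (f fakeChain) IsOptimisticForRoot(context.Context, [32]byte) (bool, error) {
	return f.optimistic, nil
}

// executionOptimistic mirrors the error-wrapping branch in the Bellatrix+
// getters: propagate the check failure with context, otherwise return the flag.
func executionOptimistic(ctx context.Context, c optimisticChecker, root [32]byte) (bool, error) {
	isOptimistic, err := c.IsOptimisticForRoot(ctx, root)
	if err != nil {
		return false, fmt.Errorf("could not check if block is optimistic: %w", err)
	}
	return isOptimistic, nil
}

func main() {
	ok, err := executionOptimistic(context.Background(), fakeChain{optimistic: true}, [32]byte{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("execution_optimistic:", ok)
}
```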
func (s *Server) getBlockCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlockCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
consensusBlk, err := blk.PbCapellaBlock()
if err != nil {
// ErrUnsupportedField means that we have another block type
@@ -426,7 +427,7 @@ func (s *Server) getBlockCapella(ctx context.Context, blk interfaces.ReadOnlySig
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBeaconBlockCapellaFromConsensus(consensusBlk)
respBlk, err := structs.SignedBeaconBlockCapellaFromConsensus(consensusBlk)
if err != nil {
return nil, err
}
@@ -434,17 +435,17 @@ func (s *Server) getBlockCapella(ctx context.Context, blk interfaces.ReadOnlySig
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Capella),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
func (s *Server) getBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
consensusBlk, err := blk.PbDenebBlock()
if err != nil {
// ErrUnsupportedGetter means that we have another block type
@@ -480,7 +481,7 @@ func (s *Server) getBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySigne
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBeaconBlockDenebFromConsensus(consensusBlk)
respBlk, err := structs.SignedBeaconBlockDenebFromConsensus(consensusBlk)
if err != nil {
return nil, err
}
@@ -488,10 +489,10 @@ func (s *Server) getBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySigne
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Deneb),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
@@ -633,7 +634,7 @@ func (s *Server) getBlockDenebSSZ(ctx context.Context, blk interfaces.ReadOnlySi
return sszData, nil
}
func (s *Server) getBlindedBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlindedBlockBellatrix(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
blindedConsensusBlk, err := blk.PbBlindedBellatrixBlock()
if err != nil {
// ErrUnsupportedField means that we have another block type
@@ -669,7 +670,7 @@ func (s *Server) getBlindedBlockBellatrix(ctx context.Context, blk interfaces.Re
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBlindedBeaconBlockBellatrixFromConsensus(blindedConsensusBlk)
respBlk, err := structs.SignedBlindedBeaconBlockBellatrixFromConsensus(blindedConsensusBlk)
if err != nil {
return nil, err
}
@@ -677,17 +678,17 @@ func (s *Server) getBlindedBlockBellatrix(ctx context.Context, blk interfaces.Re
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Bellatrix),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
func (s *Server) getBlindedBlockCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlindedBlockCapella(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
blindedConsensusBlk, err := blk.PbBlindedCapellaBlock()
if err != nil {
// ErrUnsupportedField means that we have another block type
@@ -723,7 +724,7 @@ func (s *Server) getBlindedBlockCapella(ctx context.Context, blk interfaces.Read
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBlindedBeaconBlockCapellaFromConsensus(blindedConsensusBlk)
respBlk, err := structs.SignedBlindedBeaconBlockCapellaFromConsensus(blindedConsensusBlk)
if err != nil {
return nil, err
}
@@ -731,17 +732,17 @@ func (s *Server) getBlindedBlockCapella(ctx context.Context, blk interfaces.Read
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Capella),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
}, nil
}
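The blinded getters reuse the same `GetBlockV2Response` wrapper; what changes is the marshalled message, which carries an execution payload header instead of the full payload. A rough sketch of that distinction with illustrative, heavily trimmed types (the field sets are assumptions, not the real structs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed, illustrative payload shapes; the real ones carry many more fields.
type executionPayload struct {
	BlockHash    string   `json:"block_hash"`
	Transactions []string `json:"transactions"` // full transaction list
}

type executionPayloadHeader struct {
	BlockHash        string `json:"block_hash"`
	TransactionsRoot string `json:"transactions_root"` // only the root, no bodies
}

type beaconBlockBodyFull struct {
	ExecutionPayload executionPayload `json:"execution_payload"`
}

type beaconBlockBodyBlinded struct {
	ExecutionPayloadHeader executionPayloadHeader `json:"execution_payload_header"`
}

func main() {
	full, _ := json.Marshal(beaconBlockBodyFull{
		ExecutionPayload: executionPayload{BlockHash: "0xabc", Transactions: []string{"0x01"}},
	})
	blinded, _ := json.Marshal(beaconBlockBodyBlinded{
		ExecutionPayloadHeader: executionPayloadHeader{BlockHash: "0xabc", TransactionsRoot: "0xroot"},
	})
	fmt.Println("full body:   ", string(full))
	fmt.Println("blinded body:", string(blinded))
}
```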
func (s *Server) getBlindedBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*GetBlockV2Response, error) {
func (s *Server) getBlindedBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.GetBlockV2Response, error) {
blindedConsensusBlk, err := blk.PbBlindedDenebBlock()
if err != nil {
// ErrUnsupportedGetter means that we have another block type
@@ -777,7 +778,7 @@ func (s *Server) getBlindedBlockDeneb(ctx context.Context, blk interfaces.ReadOn
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
respBlk, err := shared.SignedBlindedBeaconBlockDenebFromConsensus(blindedConsensusBlk)
respBlk, err := structs.SignedBlindedBeaconBlockDenebFromConsensus(blindedConsensusBlk)
if err != nil {
return nil, err
}
@@ -785,10 +786,10 @@ func (s *Server) getBlindedBlockDeneb(ctx context.Context, blk interfaces.ReadOn
if err != nil {
return nil, err
}
return &GetBlockV2Response{
return &structs.GetBlockV2Response{
Version: version.String(version.Deneb),
ExecutionOptimistic: isOptimistic,
Data: &SignedBlock{
Data: &structs.SignedBlock{
Message: jsonBytes,
Signature: respBlk.Signature,
},
@@ -916,9 +917,9 @@ func (s *Server) GetBlockAttestations(w http.ResponseWriter, r *http.Request) {
}
consensusAtts := blk.Block().Body().Attestations()
atts := make([]*shared.Attestation, len(consensusAtts))
atts := make([]*structs.Attestation, len(consensusAtts))
for i, att := range consensusAtts {
atts[i] = shared.AttFromConsensus(att)
atts[i] = structs.AttFromConsensus(att)
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
@@ -931,7 +932,7 @@ func (s *Server) GetBlockAttestations(w http.ResponseWriter, r *http.Request) {
return
}
resp := &GetBlockAttestationsResponse{
resp := &structs.GetBlockAttestationsResponse{
Data: atts,
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, root),
@@ -1126,7 +1127,7 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
var consensusBlock *eth.GenericSignedBeaconBlock
var denebBlock *shared.SignedBlindedBeaconBlockDeneb
var denebBlock *structs.SignedBlindedBeaconBlockDeneb
if err = unmarshalStrict(body, &denebBlock); err == nil {
consensusBlock, err = denebBlock.ToGeneric()
if err == nil {
@@ -1147,7 +1148,7 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
return
}
var capellaBlock *shared.SignedBlindedBeaconBlockCapella
var capellaBlock *structs.SignedBlindedBeaconBlockCapella
if err = unmarshalStrict(body, &capellaBlock); err == nil {
consensusBlock, err = capellaBlock.ToGeneric()
if err == nil {
@@ -1168,7 +1169,7 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
return
}
var bellatrixBlock *shared.SignedBlindedBeaconBlockBellatrix
var bellatrixBlock *structs.SignedBlindedBeaconBlockBellatrix
if err = unmarshalStrict(body, &bellatrixBlock); err == nil {
consensusBlock, err = bellatrixBlock.ToGeneric()
if err == nil {
@@ -1189,7 +1190,7 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
return
}
var altairBlock *shared.SignedBeaconBlockAltair
var altairBlock *structs.SignedBeaconBlockAltair
if err = unmarshalStrict(body, &altairBlock); err == nil {
consensusBlock, err = altairBlock.ToGeneric()
if err == nil {
@@ -1210,7 +1211,7 @@ func (s *Server) publishBlindedBlock(ctx context.Context, w http.ResponseWriter,
return
}
var phase0Block *shared.SignedBeaconBlock
var phase0Block *structs.SignedBeaconBlock
if err = unmarshalStrict(body, &phase0Block); err == nil {
consensusBlock, err = phase0Block.ToGeneric()
if err == nil {
@@ -1421,7 +1422,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
var consensusBlock *eth.GenericSignedBeaconBlock
var denebBlockContents *shared.SignedBeaconBlockContentsDeneb
var denebBlockContents *structs.SignedBeaconBlockContentsDeneb
if err = unmarshalStrict(body, &denebBlockContents); err == nil {
consensusBlock, err = denebBlockContents.ToGeneric()
if err == nil {
@@ -1442,7 +1443,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
return
}
var capellaBlock *shared.SignedBeaconBlockCapella
var capellaBlock *structs.SignedBeaconBlockCapella
if err = unmarshalStrict(body, &capellaBlock); err == nil {
consensusBlock, err = capellaBlock.ToGeneric()
if err == nil {
@@ -1463,7 +1464,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
return
}
var bellatrixBlock *shared.SignedBeaconBlockBellatrix
var bellatrixBlock *structs.SignedBeaconBlockBellatrix
if err = unmarshalStrict(body, &bellatrixBlock); err == nil {
consensusBlock, err = bellatrixBlock.ToGeneric()
if err == nil {
@@ -1484,7 +1485,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
return
}
var altairBlock *shared.SignedBeaconBlockAltair
var altairBlock *structs.SignedBeaconBlockAltair
if err = unmarshalStrict(body, &altairBlock); err == nil {
consensusBlock, err = altairBlock.ToGeneric()
if err == nil {
@@ -1505,7 +1506,7 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
return
}
var phase0Block *shared.SignedBeaconBlock
var phase0Block *structs.SignedBeaconBlock
if err = unmarshalStrict(body, &phase0Block); err == nil {
consensusBlock, err = phase0Block.ToGeneric()
if err == nil {
@@ -1700,8 +1701,8 @@ func (s *Server) GetBlockRoot(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not check if block is optimistic: "+err.Error(), http.StatusInternalServerError)
return
}
response := &BlockRootResponse{
Data: &BlockRoot{
response := &structs.BlockRootResponse{
Data: &structs.BlockRoot{
Root: hexutil.Encode(root),
},
ExecutionOptimistic: isOptimistic,
@@ -1736,8 +1737,8 @@ func (s *Server) GetStateFork(w http.ResponseWriter, r *http.Request) {
return
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
response := &GetStateForkResponse{
Data: &shared.Fork{
response := &structs.GetStateForkResponse{
Data: &structs.Fork{
PreviousVersion: hexutil.Encode(fork.PreviousVersion),
CurrentVersion: hexutil.Encode(fork.CurrentVersion),
Epoch: fmt.Sprintf("%d", fork.Epoch),
@@ -1800,7 +1801,7 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
return
}
committeesPerSlot := corehelpers.SlotCommitteeCount(activeCount)
committees := make([]*shared.Committee, 0)
committees := make([]*structs.Committee, 0)
for slot := startSlot; slot <= endSlot; slot++ {
if rawSlot != "" && slot != primitives.Slot(sl) {
continue
@@ -1818,7 +1819,7 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
for _, v := range committee {
validators = append(validators, strconv.FormatUint(uint64(v), 10))
}
committeeContainer := &shared.Committee{
committeeContainer := &structs.Committee{
Index: strconv.FormatUint(uint64(index), 10),
Slot: strconv.FormatUint(uint64(slot), 10),
Validators: validators,
@@ -1839,7 +1840,7 @@ func (s *Server) GetCommittees(w http.ResponseWriter, r *http.Request) {
return
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
httputil.WriteJson(w, &GetCommitteesResponse{Data: committees, ExecutionOptimistic: isOptimistic, Finalized: isFinalized})
httputil.WriteJson(w, &structs.GetCommitteesResponse{Data: committees, ExecutionOptimistic: isOptimistic, Finalized: isFinalized})
}
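The publishBlock and publishBlindedBlock handlers earlier in this file decode the request body by trying each fork's request struct in turn, newest first, and only accept a candidate when strict unmarshalling and the `ToGeneric` conversion both succeed. Below is a hedged, self-contained sketch of that cascade, assuming an `unmarshalStrict` helper built on `json.Decoder.DisallowUnknownFields`; the real handlers produce an `eth.GenericSignedBeaconBlock`, while here a version string stands in for the result and a nil-field check stands in for the conversion step.

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
)

// unmarshalStrict rejects unknown fields, so a body for one fork cannot be
// silently accepted as a different fork's request shape.
func unmarshalStrict(data []byte, v any) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	return dec.Decode(v)
}

// Toy per-fork request shapes; the real ones are the structs.SignedBeaconBlock*
// types with full field sets.
type denebRequest struct {
	SignedBlock json.RawMessage `json:"signed_block"`
	KzgProofs   []string        `json:"kzg_proofs"`
}

type capellaRequest struct {
	Message   json.RawMessage `json:"message"`
	Signature string          `json:"signature"`
}

// decodeAnyFork tries the newest fork first and falls through on failure,
// mirroring the deneb -> capella -> bellatrix -> altair -> phase0 cascade.
// The real code relies on ToGeneric to reject near-misses; the nil checks
// below play that role in this sketch.
func decodeAnyFork(body []byte) (string, error) {
	var deneb denebRequest
	if err := unmarshalStrict(body, &deneb); err == nil && deneb.SignedBlock != nil {
		return "deneb", nil
	}
	var capella capellaRequest
	if err := unmarshalStrict(body, &capella); err == nil && capella.Message != nil {
		return "capella", nil
	}
	return "", errors.New("body does not match any supported fork")
}

func main() {
	version, err := decodeAnyFork([]byte(`{"message":{"slot":"1"},"signature":"0xsig"}`))
	fmt.Println(version, err)
}
```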
// GetBlockHeaders retrieves block headers matching given query. By default it will fetch current head slot blocks.
@@ -1889,7 +1890,7 @@ func (s *Server) GetBlockHeaders(w http.ResponseWriter, r *http.Request) {
isOptimistic := false
isFinalized := true
blkHdrs := make([]*shared.SignedBeaconBlockHeaderContainer, len(blks))
blkHdrs := make([]*structs.SignedBeaconBlockHeaderContainer, len(blks))
for i, bl := range blks {
v1alpha1Header, err := bl.Header()
if err != nil {
@@ -1916,9 +1917,9 @@ func (s *Server) GetBlockHeaders(w http.ResponseWriter, r *http.Request) {
if isFinalized {
isFinalized = s.FinalizationFetcher.IsFinalized(ctx, blkRoots[i])
}
blkHdrs[i] = &shared.SignedBeaconBlockHeaderContainer{
Header: &shared.SignedBeaconBlockHeader{
Message: shared.BeaconBlockHeaderFromConsensus(v1alpha1Header.Header),
blkHdrs[i] = &structs.SignedBeaconBlockHeaderContainer{
Header: &structs.SignedBeaconBlockHeader{
Message: structs.BeaconBlockHeaderFromConsensus(v1alpha1Header.Header),
Signature: hexutil.Encode(v1alpha1Header.Signature),
},
Root: hexutil.Encode(headerRoot[:]),
@@ -1926,7 +1927,7 @@ func (s *Server) GetBlockHeaders(w http.ResponseWriter, r *http.Request) {
}
}
response := &GetBlockHeadersResponse{
response := &structs.GetBlockHeadersResponse{
Data: blkHdrs,
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
@@ -1976,12 +1977,12 @@ func (s *Server) GetBlockHeader(w http.ResponseWriter, r *http.Request) {
return
}
resp := &GetBlockHeaderResponse{
Data: &shared.SignedBeaconBlockHeaderContainer{
resp := &structs.GetBlockHeaderResponse{
Data: &structs.SignedBeaconBlockHeaderContainer{
Root: hexutil.Encode(headerRoot[:]),
Canonical: canonical,
Header: &shared.SignedBeaconBlockHeader{
Message: shared.BeaconBlockHeaderFromConsensus(blockHeader.Header),
Header: &structs.SignedBeaconBlockHeader{
Message: structs.BeaconBlockHeaderFromConsensus(blockHeader.Header),
Signature: hexutil.Encode(blockHeader.Signature),
},
},
@@ -2023,17 +2024,17 @@ func (s *Server) GetFinalityCheckpoints(w http.ResponseWriter, r *http.Request)
pj := st.PreviousJustifiedCheckpoint()
cj := st.CurrentJustifiedCheckpoint()
f := st.FinalizedCheckpoint()
resp := &GetFinalityCheckpointsResponse{
Data: &FinalityCheckpoints{
PreviousJustified: &shared.Checkpoint{
resp := &structs.GetFinalityCheckpointsResponse{
Data: &structs.FinalityCheckpoints{
PreviousJustified: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(pj.Epoch), 10),
Root: hexutil.Encode(pj.Root),
},
CurrentJustified: &shared.Checkpoint{
CurrentJustified: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(cj.Epoch), 10),
Root: hexutil.Encode(cj.Root),
},
Finalized: &shared.Checkpoint{
Finalized: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(f.Epoch), 10),
Root: hexutil.Encode(f.Root),
},
@@ -2061,8 +2062,8 @@ func (s *Server) GetGenesis(w http.ResponseWriter, r *http.Request) {
}
forkVersion := params.BeaconConfig().GenesisForkVersion
resp := &GetGenesisResponse{
Data: &Genesis{
resp := &structs.GetGenesisResponse{
Data: &structs.Genesis{
GenesisTime: strconv.FormatUint(uint64(genesisTime.Unix()), 10),
GenesisValidatorsRoot: hexutil.Encode(validatorsRoot[:]),
GenesisForkVersion: hexutil.Encode(forkVersion),


@@ -11,6 +11,7 @@ import (
"time"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
@@ -55,25 +56,25 @@ func (s *Server) ListAttestations(w http.ResponseWriter, r *http.Request) {
attestations = append(attestations, unaggAtts...)
isEmptyReq := rawSlot == "" && rawCommitteeIndex == ""
if isEmptyReq {
allAtts := make([]*shared.Attestation, len(attestations))
allAtts := make([]*structs.Attestation, len(attestations))
for i, att := range attestations {
allAtts[i] = shared.AttFromConsensus(att)
allAtts[i] = structs.AttFromConsensus(att)
}
httputil.WriteJson(w, &ListAttestationsResponse{Data: allAtts})
httputil.WriteJson(w, &structs.ListAttestationsResponse{Data: allAtts})
return
}
bothDefined := rawSlot != "" && rawCommitteeIndex != ""
filteredAtts := make([]*shared.Attestation, 0, len(attestations))
filteredAtts := make([]*structs.Attestation, 0, len(attestations))
for _, att := range attestations {
committeeIndexMatch := rawCommitteeIndex != "" && att.Data.CommitteeIndex == primitives.CommitteeIndex(committeeIndex)
slotMatch := rawSlot != "" && att.Data.Slot == primitives.Slot(slot)
shouldAppend := (bothDefined && committeeIndexMatch && slotMatch) || (!bothDefined && (committeeIndexMatch || slotMatch))
if shouldAppend {
filteredAtts = append(filteredAtts, shared.AttFromConsensus(att))
filteredAtts = append(filteredAtts, structs.AttFromConsensus(att))
}
}
httputil.WriteJson(w, &ListAttestationsResponse{Data: filteredAtts})
httputil.WriteJson(w, &structs.ListAttestationsResponse{Data: filteredAtts})
}
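ListAttestations filters the pooled attestations by the optional slot and committee_index query parameters: when both are supplied an attestation must match both, and when only one is supplied matching that one is enough. A small sketch of that predicate, using plain uint64 stand-ins for the primitives types:

```go
package main

import "fmt"

// att is a trimmed stand-in for the consensus attestation data the handler
// inspects (primitives.Slot and primitives.CommitteeIndex in the real code).
type att struct {
	Slot           uint64
	CommitteeIndex uint64
}

// shouldAppend mirrors the handler's filter: rawSlot / rawIndex being empty
// means the corresponding query parameter was not supplied.
func shouldAppend(a att, rawSlot, rawIndex string, slot, index uint64) bool {
	bothDefined := rawSlot != "" && rawIndex != ""
	slotMatch := rawSlot != "" && a.Slot == slot
	indexMatch := rawIndex != "" && a.CommitteeIndex == index
	return (bothDefined && slotMatch && indexMatch) || (!bothDefined && (slotMatch || indexMatch))
}

func main() {
	pool := []att{{Slot: 1, CommitteeIndex: 2}, {Slot: 1, CommitteeIndex: 3}, {Slot: 4, CommitteeIndex: 2}}

	var filtered []att
	for _, a := range pool {
		// Query: slot=1 only; committee_index not supplied.
		if shouldAppend(a, "1", "", 1, 0) {
			filtered = append(filtered, a)
		}
	}
	fmt.Println(filtered) // both slot-1 attestations, regardless of committee index
}
```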
// SubmitAttestations submits an attestation object to node. If the attestation passes all validation
@@ -82,7 +83,7 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestations")
defer span.End()
var req SubmitAttestationsRequest
var req structs.SubmitAttestationsRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case err == io.EOF:
@@ -186,12 +187,12 @@ func (s *Server) ListVoluntaryExits(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not get exits from the pool: "+err.Error(), http.StatusInternalServerError)
return
}
exits := make([]*shared.SignedVoluntaryExit, len(sourceExits))
exits := make([]*structs.SignedVoluntaryExit, len(sourceExits))
for i, e := range sourceExits {
exits[i] = shared.SignedExitFromConsensus(e)
exits[i] = structs.SignedExitFromConsensus(e)
}
httputil.WriteJson(w, &ListVoluntaryExitsResponse{Data: exits})
httputil.WriteJson(w, &structs.ListVoluntaryExitsResponse{Data: exits})
}
// SubmitVoluntaryExit submits a SignedVoluntaryExit object to node's pool
@@ -200,7 +201,7 @@ func (s *Server) SubmitVoluntaryExit(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitVoluntaryExit")
defer span.End()
var req shared.SignedVoluntaryExit
var req structs.SignedVoluntaryExit
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case err == io.EOF:
@@ -258,7 +259,7 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitPoolSyncCommitteeSignatures")
defer span.End()
var req SubmitSyncCommitteeSignaturesRequest
var req structs.SubmitSyncCommitteeSignaturesRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case err == io.EOF:
@@ -317,7 +318,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
var failures []*server.IndexedVerificationFailure
var toBroadcast []*eth.SignedBLSToExecutionChange
var req []*shared.SignedBLSToExecutionChange
var req []*structs.SignedBLSToExecutionChange
err = json.NewDecoder(r.Body).Decode(&req)
switch {
case err == io.EOF:
@@ -437,8 +438,8 @@ func (s *Server) ListBLSToExecutionChanges(w http.ResponseWriter, r *http.Reques
return
}
httputil.WriteJson(w, &BLSToExecutionChangesPoolResponse{
Data: shared.SignedBLSChangesFromConsensus(sourceChanges),
httputil.WriteJson(w, &structs.BLSToExecutionChangesPoolResponse{
Data: structs.SignedBLSChangesFromConsensus(sourceChanges),
})
}
@@ -454,9 +455,9 @@ func (s *Server) GetAttesterSlashings(w http.ResponseWriter, r *http.Request) {
return
}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
slashings := shared.AttesterSlashingsFromConsensus(sourceSlashings)
slashings := structs.AttesterSlashingsFromConsensus(sourceSlashings)
httputil.WriteJson(w, &GetAttesterSlashingsResponse{Data: slashings})
httputil.WriteJson(w, &structs.GetAttesterSlashingsResponse{Data: slashings})
}
// SubmitAttesterSlashing submits an attester slashing object to node's pool and
@@ -465,7 +466,7 @@ func (s *Server) SubmitAttesterSlashing(w http.ResponseWriter, r *http.Request)
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttesterSlashing")
defer span.End()
var req shared.AttesterSlashing
var req structs.AttesterSlashing
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case err == io.EOF:
@@ -528,9 +529,9 @@ func (s *Server) GetProposerSlashings(w http.ResponseWriter, r *http.Request) {
return
}
sourceSlashings := s.SlashingsPool.PendingProposerSlashings(ctx, headState, true /* return unlimited slashings */)
slashings := shared.ProposerSlashingsFromConsensus(sourceSlashings)
slashings := structs.ProposerSlashingsFromConsensus(sourceSlashings)
httputil.WriteJson(w, &GetProposerSlashingsResponse{Data: slashings})
httputil.WriteJson(w, &structs.GetProposerSlashingsResponse{Data: slashings})
}
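The Submit* handlers in this file share one request-decoding idiom: decode the JSON body into the structs request type and switch on the error, treating io.EOF as "no data submitted" and any other error as a malformed body. A minimal sketch of that switch; the status codes and messages are assumptions in the spirit of the handlers, not copied verbatim.

```go
package main

import (
	"encoding/json"
	"errors"
	"io"
	"log"
	"net/http"
	"net/http/httptest"
	"strings"
)

// proposerSlashingRequest is a trimmed stand-in for the structs request type.
type proposerSlashingRequest struct {
	SignedHeader1 json.RawMessage `json:"signed_header_1"`
	SignedHeader2 json.RawMessage `json:"signed_header_2"`
}

func submitProposerSlashing(w http.ResponseWriter, r *http.Request) {
	var req proposerSlashingRequest
	err := json.NewDecoder(r.Body).Decode(&req)
	switch {
	case errors.Is(err, io.EOF):
		// Empty body: the client sent nothing to submit.
		http.Error(w, "No data submitted", http.StatusBadRequest)
		return
	case err != nil:
		http.Error(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
		return
	}
	// Validation, signature verification and pool insertion would follow here.
	w.WriteHeader(http.StatusOK)
}

func main() {
	// Empty body -> io.EOF branch.
	rec := httptest.NewRecorder()
	submitProposerSlashing(rec, httptest.NewRequest(http.MethodPost, "/", strings.NewReader("")))
	log.Println(rec.Code, strings.TrimSpace(rec.Body.String()))

	// Well-formed body -> 200.
	rec = httptest.NewRecorder()
	body := `{"signed_header_1":{},"signed_header_2":{}}`
	submitProposerSlashing(rec, httptest.NewRequest(http.MethodPost, "/", strings.NewReader(body)))
	log.Println(rec.Code)
}
```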
// SubmitProposerSlashing submits a proposer slashing object to node's pool and if
@@ -539,7 +540,7 @@ func (s *Server) SubmitProposerSlashing(w http.ResponseWriter, r *http.Request)
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitProposerSlashing")
defer span.End()
var req shared.ProposerSlashing
var req structs.ProposerSlashing
err := json.NewDecoder(r.Body).Decode(&req)
switch {
case err == io.EOF:


@@ -14,6 +14,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/api/server"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
blockchainmock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
prysmtime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
@@ -26,7 +27,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/prysmaticlabs/prysm/v4/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
@@ -126,7 +126,7 @@ func TestListAttestations(t *testing.T) {
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
resp := &structs.ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -140,7 +140,7 @@ func TestListAttestations(t *testing.T) {
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
resp := &structs.ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -157,7 +157,7 @@ func TestListAttestations(t *testing.T) {
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
resp := &structs.ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -174,7 +174,7 @@ func TestListAttestations(t *testing.T) {
s.ListAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListAttestationsResponse{}
resp := &structs.ListAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -340,7 +340,7 @@ func TestListVoluntaryExits(t *testing.T) {
s.ListVoluntaryExits(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &ListVoluntaryExitsResponse{}
resp := &structs.ListVoluntaryExitsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -660,11 +660,11 @@ func TestListBLSToExecutionChanges(t *testing.T) {
s.ListBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &BLSToExecutionChangesPoolResponse{}
resp := &structs.BLSToExecutionChangesPoolResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.DeepEqual(t, shared.SignedBLSChangeFromConsensus(change1), resp.Data[0])
assert.DeepEqual(t, shared.SignedBLSChangeFromConsensus(change2), resp.Data[1])
assert.DeepEqual(t, structs.SignedBLSChangeFromConsensus(change1), resp.Data[0])
assert.DeepEqual(t, structs.SignedBLSChangeFromConsensus(change2), resp.Data[1])
}
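The rewritten tests drive the handlers through httptest and then unmarshal the recorded body into the same structs response types the handlers write, which keeps the JSON contract in one place. A condensed sketch of that round trip with local stand-in types; the real tests use the require/assert helpers from Prysm's testing packages rather than the panics shown here.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Stand-ins for the shared API structs used by both handler and test.
type Attestation struct {
	Slot string `json:"slot"`
}

type ListAttestationsResponse struct {
	Data []*Attestation `json:"data"`
}

func listAttestations(w http.ResponseWriter, _ *http.Request) {
	resp := &ListAttestationsResponse{Data: []*Attestation{{Slot: "1"}, {Slot: "2"}}}
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	request := httptest.NewRequest(http.MethodGet, "/eth/v1/beacon/pool/attestations", nil)
	writer := httptest.NewRecorder()

	listAttestations(writer, request)

	// The tests assert on the status code and on the decoded structs type.
	if writer.Code != http.StatusOK {
		panic(fmt.Sprintf("unexpected status %d", writer.Code))
	}
	resp := &ListAttestationsResponse{}
	if err := json.Unmarshal(writer.Body.Bytes(), resp); err != nil {
		panic(err)
	}
	fmt.Println("attestations returned:", len(resp.Data))
}
```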
func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
@@ -722,12 +722,12 @@ func TestSubmitSignedBLSToExecutionChanges_Ok(t *testing.T) {
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedChanges := make([]*shared.SignedBLSToExecutionChange, numValidators)
signedChanges := make([]*structs.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(st, prysmtime.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signed := &shared.SignedBLSToExecutionChange{
Message: shared.BLSChangeFromConsensus(message),
signed := &structs.SignedBLSToExecutionChange{
Message: structs.BLSChangeFromConsensus(message),
Signature: hexutil.Encode(signature),
}
signedChanges[i] = signed
@@ -834,13 +834,13 @@ func TestSubmitSignedBLSToExecutionChanges_Bellatrix(t *testing.T) {
stc, err := state_native.InitializeFromProtoCapella(spc)
require.NoError(t, err)
signedChanges := make([]*shared.SignedBLSToExecutionChange, numValidators)
signedChanges := make([]*structs.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(stc, prysmtime.CurrentEpoch(stc), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
signedChanges[i] = &shared.SignedBLSToExecutionChange{
Message: shared.BLSChangeFromConsensus(message),
signedChanges[i] = &structs.SignedBLSToExecutionChange{
Message: structs.BLSChangeFromConsensus(message),
Signature: hexutil.Encode(signature),
}
}
@@ -934,15 +934,15 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedChanges := make([]*shared.SignedBLSToExecutionChange, numValidators)
signedChanges := make([]*structs.SignedBLSToExecutionChange, numValidators)
for i, message := range blsChanges {
signature, err := signing.ComputeDomainAndSign(st, prysmtime.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, privKeys[i])
require.NoError(t, err)
if i == 1 {
signature[0] = 0x00
}
signedChanges[i] = &shared.SignedBLSToExecutionChange{
Message: shared.BLSChangeFromConsensus(message),
signedChanges[i] = &structs.SignedBLSToExecutionChange{
Message: structs.BLSChangeFromConsensus(message),
Signature: hexutil.Encode(signature),
}
}
@@ -975,10 +975,10 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
poolChanges, err := s.BLSChangesPool.PendingBLSToExecChanges()
require.Equal(t, len(poolChanges)+1, len(signedChanges))
require.NoError(t, err)
require.DeepEqual(t, shared.SignedBLSChangeFromConsensus(poolChanges[0]), signedChanges[0])
require.DeepEqual(t, structs.SignedBLSChangeFromConsensus(poolChanges[0]), signedChanges[0])
for i := 2; i < numValidators; i++ {
require.DeepEqual(t, shared.SignedBLSChangeFromConsensus(poolChanges[i-1]), signedChanges[i])
require.DeepEqual(t, structs.SignedBLSChangeFromConsensus(poolChanges[i-1]), signedChanges[i])
}
}
@@ -1069,7 +1069,7 @@ func TestGetAttesterSlashings(t *testing.T) {
s.GetAttesterSlashings(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetAttesterSlashingsResponse{}
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -1135,7 +1135,7 @@ func TestGetProposerSlashings(t *testing.T) {
s.GetProposerSlashings(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetProposerSlashingsResponse{}
resp := &structs.GetProposerSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
@@ -1213,7 +1213,7 @@ func TestSubmitAttesterSlashing_Ok(t *testing.T) {
OperationNotifier: chainmock.OperationNotifier(),
}
toSubmit := shared.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
b, err := json.Marshal(toSubmit[0])
require.NoError(t, err)
var body bytes.Buffer
@@ -1305,7 +1305,7 @@ func TestSubmitAttesterSlashing_AcrossFork(t *testing.T) {
OperationNotifier: chainmock.OperationNotifier(),
}
toSubmit := shared.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
toSubmit := structs.AttesterSlashingsFromConsensus([]*ethpbv1alpha1.AttesterSlashing{slashing})
b, err := json.Marshal(toSubmit[0])
require.NoError(t, err)
var body bytes.Buffer
@@ -1416,7 +1416,7 @@ func TestSubmitProposerSlashing_Ok(t *testing.T) {
OperationNotifier: chainmock.OperationNotifier(),
}
toSubmit := shared.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
toSubmit := structs.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
b, err := json.Marshal(toSubmit[0])
require.NoError(t, err)
var body bytes.Buffer
@@ -1500,7 +1500,7 @@ func TestSubmitProposerSlashing_AcrossFork(t *testing.T) {
OperationNotifier: chainmock.OperationNotifier(),
}
toSubmit := shared.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
toSubmit := structs.ProposerSlashingsFromConsensus([]*ethpbv1alpha1.ProposerSlashing{slashing})
b, err := json.Marshal(toSubmit[0])
require.NoError(t, err)
var body bytes.Buffer


@@ -8,6 +8,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
@@ -67,8 +68,8 @@ func (s *Server) GetStateRoot(w http.ResponseWriter, r *http.Request) {
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
resp := &GetStateRootResponse{
Data: &StateRoot{
resp := &structs.GetStateRootResponse{
Data: &structs.StateRoot{
Root: hexutil.Encode(stateRoot),
},
ExecutionOptimistic: isOptimistic,
@@ -137,8 +138,8 @@ func (s *Server) GetRandao(w http.ResponseWriter, r *http.Request) {
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
resp := &GetRandaoResponse{
Data: &Randao{Randao: hexutil.Encode(randao)},
resp := &structs.GetRandaoResponse{
Data: &structs.Randao{Randao: hexutil.Encode(randao)},
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
}
@@ -239,8 +240,8 @@ func (s *Server) GetSyncCommittees(w http.ResponseWriter, r *http.Request) {
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
resp := GetSyncCommitteeResponse{
Data: &SyncCommitteeValidators{
resp := structs.GetSyncCommitteeResponse{
Data: &structs.SyncCommitteeValidators{
Validators: committeeIndices,
ValidatorAggregates: subcommittees,
},


@@ -13,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
chainMock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
@@ -61,7 +62,7 @@ func TestGetStateRoot(t *testing.T) {
s.GetStateRoot(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetStateRootResponse{}
resp := &structs.GetStateRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(stateRoot[:]), resp.Data.Root)
@@ -85,7 +86,7 @@ func TestGetStateRoot(t *testing.T) {
s.GetStateRoot(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetStateRootResponse{}
resp := &structs.GetStateRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.DeepEqual(t, true, resp.ExecutionOptimistic)
})
@@ -116,7 +117,7 @@ func TestGetStateRoot(t *testing.T) {
s.GetStateRoot(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetStateRootResponse{}
resp := &structs.GetStateRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.DeepEqual(t, true, resp.Finalized)
})
@@ -163,7 +164,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(mixCurrent[:]), resp.Data.Randao)
})
@@ -175,7 +176,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(mixCurrent[:]), resp.Data.Randao)
})
@@ -187,7 +188,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(mixOld[:]), resp.Data.Randao)
})
@@ -203,7 +204,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, hexutil.Encode(headRandao[:]), resp.Data.Randao)
})
@@ -262,7 +263,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.DeepEqual(t, true, resp.ExecutionOptimistic)
})
@@ -299,7 +300,7 @@ func TestGetRandao(t *testing.T) {
s.GetRandao(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetRandaoResponse{}
resp := &structs.GetRandaoResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.DeepEqual(t, true, resp.Finalized)
})
@@ -458,7 +459,7 @@ func TestGetSyncCommittees(t *testing.T) {
s.GetSyncCommittees(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetSyncCommitteeResponse{}
resp := &structs.GetSyncCommitteeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
committeeVals := resp.Data.Validators
require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(len(committeeVals)))
@@ -508,7 +509,7 @@ func TestGetSyncCommittees(t *testing.T) {
s.GetSyncCommittees(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetSyncCommitteeResponse{}
resp := &structs.GetSyncCommitteeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -552,7 +553,7 @@ func TestGetSyncCommittees(t *testing.T) {
s.GetSyncCommittees(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetSyncCommitteeResponse{}
resp := &structs.GetSyncCommitteeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -629,7 +630,7 @@ func TestGetSyncCommittees_Future(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetSyncCommittees(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetSyncCommitteeResponse{}
resp := &structs.GetSyncCommitteeResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
committeeVals := resp.Data.Validators
require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(len(committeeVals)))


@@ -18,12 +18,12 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
chainMock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
dbTest "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
rpctesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
@@ -97,9 +97,9 @@ func TestGetBlock(t *testing.T) {
s.GetBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockResponse{}
resp := &structs.GetBlockResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
sbb := &shared.SignedBeaconBlock{Message: &shared.BeaconBlock{}}
sbb := &structs.SignedBeaconBlock{Message: &structs.BeaconBlock{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -153,10 +153,10 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Phase0), resp.Version)
sbb := &shared.SignedBeaconBlock{Message: &shared.BeaconBlock{}}
sbb := &structs.SignedBeaconBlock{Message: &structs.BeaconBlock{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -186,10 +186,10 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Altair), resp.Version)
sbb := &shared.SignedBeaconBlockAltair{Message: &shared.BeaconBlockAltair{}}
sbb := &structs.SignedBeaconBlockAltair{Message: &structs.BeaconBlockAltair{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -220,10 +220,10 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Bellatrix), resp.Version)
sbb := &shared.SignedBeaconBlockBellatrix{Message: &shared.BeaconBlockBellatrix{}}
sbb := &structs.SignedBeaconBlockBellatrix{Message: &structs.BeaconBlockBellatrix{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -254,10 +254,10 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Capella), resp.Version)
sbb := &shared.SignedBeaconBlockCapella{Message: &shared.BeaconBlockCapella{}}
sbb := &structs.SignedBeaconBlockCapella{Message: &structs.BeaconBlockCapella{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -288,10 +288,10 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Deneb), resp.Version)
sbb := &shared.SignedBeaconBlockDeneb{Message: &shared.BeaconBlockDeneb{}}
sbb := &structs.SignedBeaconBlockDeneb{Message: &structs.BeaconBlockDeneb{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
@@ -322,7 +322,7 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -349,7 +349,7 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -368,7 +368,7 @@ func TestGetBlockV2(t *testing.T) {
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.Finalized)
})
@@ -552,7 +552,7 @@ func TestGetBlockAttestations(t *testing.T) {
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockAttestationsResponse{}
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, len(b.Block.Body.Attestations), len(resp.Data))
atts := make([]*eth.Attestation, len(b.Block.Body.Attestations))
@@ -586,7 +586,7 @@ func TestGetBlockAttestations(t *testing.T) {
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockAttestationsResponse{}
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -613,7 +613,7 @@ func TestGetBlockAttestations(t *testing.T) {
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockAttestationsResponse{}
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -632,7 +632,7 @@ func TestGetBlockAttestations(t *testing.T) {
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockAttestationsResponse{}
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.ExecutionOptimistic)
})
@@ -657,10 +657,10 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Phase0), resp.Version)
sbb := &shared.SignedBeaconBlock{Message: &shared.BeaconBlock{}}
sbb := &structs.SignedBeaconBlock{Message: &structs.BeaconBlock{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -686,10 +686,10 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Altair), resp.Version)
sbb := &shared.SignedBeaconBlockAltair{Message: &shared.BeaconBlockAltair{}}
sbb := &structs.SignedBeaconBlockAltair{Message: &structs.BeaconBlockAltair{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -717,10 +717,10 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Bellatrix), resp.Version)
sbb := &shared.SignedBlindedBeaconBlockBellatrix{Message: &shared.BlindedBeaconBlockBellatrix{}}
sbb := &structs.SignedBlindedBeaconBlockBellatrix{Message: &structs.BlindedBeaconBlockBellatrix{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -748,10 +748,10 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Capella), resp.Version)
sbb := &shared.SignedBlindedBeaconBlockCapella{Message: &shared.BlindedBeaconBlockCapella{}}
sbb := &structs.SignedBlindedBeaconBlockCapella{Message: &structs.BlindedBeaconBlockCapella{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
genericBlk, err := sbb.ToGeneric()
@@ -779,10 +779,10 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Deneb), resp.Version)
sbb := &shared.SignedBlindedBeaconBlockDeneb{Message: &shared.BlindedBeaconBlockDeneb{}}
sbb := &structs.SignedBlindedBeaconBlockDeneb{Message: &structs.BlindedBeaconBlockDeneb{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
@@ -812,7 +812,7 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -838,7 +838,7 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -864,7 +864,7 @@ func TestGetBlindedBlock(t *testing.T) {
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockV2Response{}
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.Finalized)
})
@@ -989,10 +989,10 @@ func TestPublishBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1011,10 +1011,10 @@ func TestPublishBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1033,9 +1033,9 @@ func TestPublishBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Bellatrix)
converted, err := shared.BeaconBlockBellatrixFromConsensus(block.Bellatrix.Block)
converted, err := structs.BeaconBlockBellatrixFromConsensus(block.Bellatrix.Block)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockBellatrix
var signedblock *structs.SignedBeaconBlockBellatrix
err = json.Unmarshal([]byte(rpctesting.BellatrixBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1057,9 +1057,9 @@ func TestPublishBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Capella)
converted, err := shared.BeaconBlockCapellaFromConsensus(block.Capella.Block)
converted, err := structs.BeaconBlockCapellaFromConsensus(block.Capella.Block)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockCapella
var signedblock *structs.SignedBeaconBlockCapella
err = json.Unmarshal([]byte(rpctesting.CapellaBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1081,9 +1081,9 @@ func TestPublishBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Deneb)
converted, err := shared.SignedBeaconBlockContentsDenebFromConsensus(block.Deneb)
converted, err := structs.SignedBeaconBlockContentsDenebFromConsensus(block.Deneb)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockContentsDeneb
var signedblock *structs.SignedBeaconBlockContentsDeneb
err = json.Unmarshal([]byte(rpctesting.DenebBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
@@ -1150,10 +1150,10 @@ func TestPublishBlockSSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1161,7 +1161,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlock
var blk structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1180,10 +1180,10 @@ func TestPublishBlockSSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1191,7 +1191,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockAltair
var blk structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1216,7 +1216,7 @@ func TestPublishBlockSSZ(t *testing.T) {
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockBellatrix
var blk structs.SignedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1242,7 +1242,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockCapella
var blk structs.SignedBeaconBlockCapella
err := json.Unmarshal([]byte(rpctesting.CapellaBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1268,7 +1268,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockContentsDeneb
var blk structs.SignedBeaconBlockContentsDeneb
err := json.Unmarshal([]byte(rpctesting.DenebBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1288,7 +1288,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1309,7 +1309,7 @@ func TestPublishBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockBellatrix
var blk structs.SignedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1350,10 +1350,10 @@ func TestPublishBlindedBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1372,10 +1372,10 @@ func TestPublishBlindedBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1394,9 +1394,9 @@ func TestPublishBlindedBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedBellatrix)
converted, err := shared.BlindedBeaconBlockBellatrixFromConsensus(block.BlindedBellatrix.Block)
converted, err := structs.BlindedBeaconBlockBellatrixFromConsensus(block.BlindedBellatrix.Block)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockBellatrix
var signedblock *structs.SignedBlindedBeaconBlockBellatrix
err = json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1418,9 +1418,9 @@ func TestPublishBlindedBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedCapella)
converted, err := shared.BlindedBeaconBlockCapellaFromConsensus(block.BlindedCapella.Block)
converted, err := structs.BlindedBeaconBlockCapellaFromConsensus(block.BlindedCapella.Block)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockCapella
var signedblock *structs.SignedBlindedBeaconBlockCapella
err = json.Unmarshal([]byte(rpctesting.BlindedCapellaBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1442,9 +1442,9 @@ func TestPublishBlindedBlock(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedDeneb)
converted, err := shared.BlindedBeaconBlockDenebFromConsensus(block.BlindedDeneb.Message)
converted, err := structs.BlindedBeaconBlockDenebFromConsensus(block.BlindedDeneb.Message)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockDeneb
var signedblock *structs.SignedBlindedBeaconBlockDeneb
err = json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1512,10 +1512,10 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1523,7 +1523,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlock
var blk structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1542,10 +1542,10 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1553,7 +1553,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockAltair
var blk structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1579,7 +1579,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1605,7 +1605,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockCapella
var blk structs.SignedBlindedBeaconBlockCapella
err := json.Unmarshal([]byte(rpctesting.BlindedCapellaBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1631,7 +1631,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockDeneb
var blk structs.SignedBlindedBeaconBlockDeneb
err := json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1664,7 +1664,7 @@ func TestPublishBlindedBlockSSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1705,10 +1705,10 @@ func TestPublishBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1727,10 +1727,10 @@ func TestPublishBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1749,9 +1749,9 @@ func TestPublishBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Bellatrix)
converted, err := shared.BeaconBlockBellatrixFromConsensus(block.Bellatrix.Block)
converted, err := structs.BeaconBlockBellatrixFromConsensus(block.Bellatrix.Block)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockBellatrix
var signedblock *structs.SignedBeaconBlockBellatrix
err = json.Unmarshal([]byte(rpctesting.BellatrixBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1773,9 +1773,9 @@ func TestPublishBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Capella)
converted, err := shared.BeaconBlockCapellaFromConsensus(block.Capella.Block)
converted, err := structs.BeaconBlockCapellaFromConsensus(block.Capella.Block)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockCapella
var signedblock *structs.SignedBeaconBlockCapella
err = json.Unmarshal([]byte(rpctesting.CapellaBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -1797,9 +1797,9 @@ func TestPublishBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Deneb)
converted, err := shared.SignedBeaconBlockContentsDenebFromConsensus(block.Deneb)
converted, err := structs.SignedBeaconBlockContentsDenebFromConsensus(block.Deneb)
require.NoError(t, err)
var signedblock *shared.SignedBeaconBlockContentsDeneb
var signedblock *structs.SignedBeaconBlockContentsDeneb
err = json.Unmarshal([]byte(rpctesting.DenebBlockContents), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock)
@@ -1880,10 +1880,10 @@ func TestPublishBlockV2SSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1891,7 +1891,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlock
var blk structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1910,10 +1910,10 @@ func TestPublishBlockV2SSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -1921,7 +1921,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockAltair
var blk structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1946,7 +1946,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
V1Alpha1ValidatorServer: v1alpha1Server,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockBellatrix
var blk structs.SignedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1972,7 +1972,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockCapella
var blk structs.SignedBeaconBlockCapella
err := json.Unmarshal([]byte(rpctesting.CapellaBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -1998,7 +1998,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockContentsDeneb
var blk structs.SignedBeaconBlockContentsDeneb
err := json.Unmarshal([]byte(rpctesting.DenebBlockContents), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2018,7 +2018,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2039,7 +2039,7 @@ func TestPublishBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockBellatrix
var blk structs.SignedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2093,10 +2093,10 @@ func TestPublishBlindedBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -2115,10 +2115,10 @@ func TestPublishBlindedBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -2137,9 +2137,9 @@ func TestPublishBlindedBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedBellatrix)
converted, err := shared.BlindedBeaconBlockBellatrixFromConsensus(block.BlindedBellatrix.Block)
converted, err := structs.BlindedBeaconBlockBellatrixFromConsensus(block.BlindedBellatrix.Block)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockBellatrix
var signedblock *structs.SignedBlindedBeaconBlockBellatrix
err = json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -2161,9 +2161,9 @@ func TestPublishBlindedBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedCapella)
converted, err := shared.BlindedBeaconBlockCapellaFromConsensus(block.BlindedCapella.Block)
converted, err := structs.BlindedBeaconBlockCapellaFromConsensus(block.BlindedCapella.Block)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockCapella
var signedblock *structs.SignedBlindedBeaconBlockCapella
err = json.Unmarshal([]byte(rpctesting.BlindedCapellaBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -2185,9 +2185,9 @@ func TestPublishBlindedBlockV2(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_BlindedDeneb)
converted, err := shared.BlindedBeaconBlockDenebFromConsensus(block.BlindedDeneb.Message)
converted, err := structs.BlindedBeaconBlockDenebFromConsensus(block.BlindedDeneb.Message)
require.NoError(t, err)
var signedblock *shared.SignedBlindedBeaconBlockDeneb
var signedblock *structs.SignedBlindedBeaconBlockDeneb
err = json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, converted, signedblock.Message)
@@ -2267,10 +2267,10 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Phase0)
var signedblock *shared.SignedBeaconBlock
var signedblock *structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockFromConsensus(block.Phase0.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -2278,7 +2278,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlock
var blk structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2297,10 +2297,10 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().ProposeBeaconBlock(gomock.Any(), mock.MatchedBy(func(req *eth.GenericSignedBeaconBlock) bool {
block, ok := req.Block.(*eth.GenericSignedBeaconBlock_Altair)
var signedblock *shared.SignedBeaconBlockAltair
var signedblock *structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &signedblock)
require.NoError(t, err)
require.DeepEqual(t, shared.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
require.DeepEqual(t, structs.BeaconBlockAltairFromConsensus(block.Altair.Block), signedblock.Message)
return ok
}))
server := &Server{
@@ -2308,7 +2308,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBeaconBlockAltair
var blk structs.SignedBeaconBlockAltair
err := json.Unmarshal([]byte(rpctesting.AltairBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2334,7 +2334,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2360,7 +2360,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockCapella
var blk structs.SignedBlindedBeaconBlockCapella
err := json.Unmarshal([]byte(rpctesting.BlindedCapellaBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2386,7 +2386,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockDeneb
var blk structs.SignedBlindedBeaconBlockDeneb
err := json.Unmarshal([]byte(rpctesting.BlindedDenebBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2419,7 +2419,7 @@ func TestPublishBlindedBlockV2SSZ(t *testing.T) {
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var blk shared.SignedBlindedBeaconBlockBellatrix
var blk structs.SignedBlindedBeaconBlockBellatrix
err := json.Unmarshal([]byte(rpctesting.BlindedBellatrixBlock), &blk)
require.NoError(t, err)
genericBlock, err := blk.ToGeneric()
@@ -2639,7 +2639,7 @@ func TestServer_GetBlockRoot(t *testing.T) {
bs.GetBlockRoot(writer, request)
assert.Equal(t, tt.wantCode, writer.Code)
resp := &BlockRootResponse{}
resp := &structs.BlockRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
if tt.wantErr != "" {
require.ErrorContains(t, tt.wantErr, errors.New(writer.Body.String()))
@@ -2681,7 +2681,7 @@ func TestServer_GetBlockRoot(t *testing.T) {
bs.GetBlockRoot(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &BlockRootResponse{}
resp := &structs.BlockRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.DeepEqual(t, resp.ExecutionOptimistic, true)
})
@@ -2716,7 +2716,7 @@ func TestServer_GetBlockRoot(t *testing.T) {
bs.GetBlockRoot(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &BlockRootResponse{}
resp := &structs.BlockRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.DeepEqual(t, resp.Finalized, true)
})
@@ -2728,7 +2728,7 @@ func TestServer_GetBlockRoot(t *testing.T) {
bs.GetBlockRoot(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &BlockRootResponse{}
resp := &structs.BlockRootResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.DeepEqual(t, resp.Finalized, false)
})
@@ -2768,7 +2768,7 @@ func TestGetStateFork(t *testing.T) {
server.GetStateFork(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
var stateForkReponse *GetStateForkResponse
var stateForkReponse *structs.GetStateForkResponse
err = json.Unmarshal(writer.Body.Bytes(), &stateForkReponse)
require.NoError(t, err)
expectedFork := fakeState.Fork()
@@ -2872,7 +2872,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, int(params.BeaconConfig().SlotsPerEpoch)*2, len(resp.Data))
for _, datum := range resp.Data {
@@ -2893,7 +2893,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
for _, datum := range resp.Data {
slot, err := strconv.ParseUint(datum.Slot, 10, 32)
@@ -2910,7 +2910,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 2, len(resp.Data))
@@ -2936,7 +2936,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, int(params.BeaconConfig().SlotsPerEpoch), len(resp.Data))
@@ -2962,7 +2962,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 1, len(resp.Data))
@@ -3005,7 +3005,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -3042,7 +3042,7 @@ func TestGetCommittees(t *testing.T) {
writer.Body = &bytes.Buffer{}
s.GetCommittees(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetCommitteesResponse{}
resp := &structs.GetCommitteesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NoError(t, err)
assert.Equal(t, true, resp.Finalized)
@@ -3141,7 +3141,7 @@ func TestGetBlockHeaders(t *testing.T) {
bs.GetBlockHeaders(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeadersResponse{}
resp := &structs.GetBlockHeadersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, len(tt.want), len(resp.Data))
@@ -3158,7 +3158,7 @@ func TestGetBlockHeaders(t *testing.T) {
expectedHeaderRoot, err := expectedHeader.HashTreeRoot()
require.NoError(t, err)
assert.DeepEqual(t, hexutil.Encode(expectedHeaderRoot[:]), resp.Data[i].Root)
assert.DeepEqual(t, shared.BeaconBlockHeaderFromConsensus(expectedHeader), resp.Data[i].Header.Message)
assert.DeepEqual(t, structs.BeaconBlockHeaderFromConsensus(expectedHeader), resp.Data[i].Header.Message)
}
})
}
@@ -3193,7 +3193,7 @@ func TestGetBlockHeaders(t *testing.T) {
bs.GetBlockHeaders(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeadersResponse{}
resp := &structs.GetBlockHeadersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -3237,7 +3237,7 @@ func TestGetBlockHeaders(t *testing.T) {
bs.GetBlockHeaders(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeadersResponse{}
resp := &structs.GetBlockHeadersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -3251,7 +3251,7 @@ func TestGetBlockHeaders(t *testing.T) {
bs.GetBlockHeaders(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeadersResponse{}
resp := &structs.GetBlockHeadersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.Finalized)
})
@@ -3264,7 +3264,7 @@ func TestGetBlockHeaders(t *testing.T) {
bs.GetBlockHeaders(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeadersResponse{}
resp := &structs.GetBlockHeadersResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.Finalized)
})
@@ -3315,7 +3315,7 @@ func TestServer_GetBlockHeader(t *testing.T) {
s.GetBlockHeader(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeaderResponse{}
resp := &structs.GetBlockHeaderResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Data.Canonical)
assert.Equal(t, "0xd7d92f6206707f2c9c4e7e82320617d5abac2b6461a65ea5bb1a154b5b5ea2fa", resp.Data.Root)
@@ -3359,7 +3359,7 @@ func TestServer_GetBlockHeader(t *testing.T) {
s.GetBlockHeader(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeaderResponse{}
resp := &structs.GetBlockHeaderResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -3383,7 +3383,7 @@ func TestServer_GetBlockHeader(t *testing.T) {
s.GetBlockHeader(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeaderResponse{}
resp := &structs.GetBlockHeaderResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -3403,7 +3403,7 @@ func TestServer_GetBlockHeader(t *testing.T) {
s.GetBlockHeader(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &GetBlockHeaderResponse{}
resp := &structs.GetBlockHeaderResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.Finalized)
})
@@ -3455,7 +3455,7 @@ func TestGetFinalityCheckpoints(t *testing.T) {
s.GetFinalityCheckpoints(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetFinalityCheckpointsResponse{}
resp := &structs.GetFinalityCheckpointsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp.Data)
assert.Equal(t, strconv.FormatUint(uint64(fakeState.FinalizedCheckpoint().Epoch), 10), resp.Data.Finalized.Epoch)
@@ -3508,7 +3508,7 @@ func TestGetFinalityCheckpoints(t *testing.T) {
s.GetFinalityCheckpoints(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetFinalityCheckpointsResponse{}
resp := &structs.GetFinalityCheckpointsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -3536,7 +3536,7 @@ func TestGetFinalityCheckpoints(t *testing.T) {
s.GetFinalityCheckpoints(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetFinalityCheckpointsResponse{}
resp := &structs.GetFinalityCheckpointsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -3566,7 +3566,7 @@ func TestGetGenesis(t *testing.T) {
s.GetGenesis(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetGenesisResponse{}
resp := &structs.GetGenesisResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp.Data)
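For reference, the round-trip these tests exercise can be reproduced in isolation. The following is a minimal, illustrative sketch (not the tests' own helper), assuming the api/server/structs package introduced in this diff and a JSON fixture shaped like rpctesting.AltairBlock:

    package example

    import (
        "encoding/json"

        "github.com/prysmaticlabs/prysm/v4/api/server/structs"
    )

    // decodeAltairFixture mirrors the flow above: unmarshal a JSON fixture
    // into the structs type, then convert it to the generic protobuf
    // representation that the mocked ProposeBeaconBlock expectations inspect.
    func decodeAltairFixture(fixture string) error {
        var blk structs.SignedBeaconBlockAltair
        if err := json.Unmarshal([]byte(fixture), &blk); err != nil {
            return err
        }
        if _, err := blk.ToGeneric(); err != nil {
            return err
        }
        return nil
    }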


@@ -10,6 +10,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -51,7 +52,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
var req GetValidatorsRequest
var req structs.GetValidatorsRequest
if r.Method == http.MethodPost {
err = json.NewDecoder(r.Body).Decode(&req)
switch {
@@ -83,8 +84,8 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
// return no data if all IDs are ignored
if len(rawIds) > 0 && len(ids) == 0 {
resp := &GetValidatorsResponse{
Data: []*ValidatorContainer{},
resp := &structs.GetValidatorsResponse{
Data: []*structs.ValidatorContainer{},
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
}
@@ -100,7 +101,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
// Exit early if no matching validators were found or we don't want to further filter validators by status.
if len(readOnlyVals) == 0 || len(statuses) == 0 {
containers := make([]*ValidatorContainer, len(readOnlyVals))
containers := make([]*structs.ValidatorContainer, len(readOnlyVals))
for i, val := range readOnlyVals {
valStatus, err := helpers.ValidatorSubStatus(val, epoch)
if err != nil {
@@ -118,7 +119,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
containers[i] = valContainerFromReadOnlyVal(val, id, balance, valStatus)
}
resp := &GetValidatorsResponse{
resp := &structs.GetValidatorsResponse{
Data: containers,
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
@@ -136,7 +137,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
filteredStatuses[vs] = true
}
valContainers := make([]*ValidatorContainer, 0, len(readOnlyVals))
valContainers := make([]*structs.ValidatorContainer, 0, len(readOnlyVals))
for i, val := range readOnlyVals {
valStatus, err := helpers.ValidatorStatus(val, epoch)
if err != nil {
@@ -149,7 +150,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
return
}
if filteredStatuses[valStatus] || filteredStatuses[valSubStatus] {
var container *ValidatorContainer
var container *structs.ValidatorContainer
id := primitives.ValidatorIndex(i)
if len(ids) > 0 {
id = ids[i]
@@ -164,7 +165,7 @@ func (s *Server) GetValidators(w http.ResponseWriter, r *http.Request) {
}
}
resp := &GetValidatorsResponse{
resp := &structs.GetValidatorsResponse{
Data: valContainers,
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
@@ -229,7 +230,7 @@ func (s *Server) GetValidator(w http.ResponseWriter, r *http.Request) {
}
isFinalized := s.FinalizationFetcher.IsFinalized(ctx, blockRoot)
resp := &GetValidatorResponse{
resp := &structs.GetValidatorResponse{
Data: container,
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
@@ -286,8 +287,8 @@ func (s *Server) GetValidatorBalances(w http.ResponseWriter, r *http.Request) {
}
// return no data if all IDs are ignored
if len(rawIds) > 0 && len(ids) == 0 {
resp := &GetValidatorBalancesResponse{
Data: []*ValidatorBalance{},
resp := &structs.GetValidatorBalancesResponse{
Data: []*structs.ValidatorBalance{},
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
}
@@ -296,26 +297,26 @@ func (s *Server) GetValidatorBalances(w http.ResponseWriter, r *http.Request) {
}
bals := st.Balances()
var valBalances []*ValidatorBalance
var valBalances []*structs.ValidatorBalance
if len(ids) == 0 {
valBalances = make([]*ValidatorBalance, len(bals))
valBalances = make([]*structs.ValidatorBalance, len(bals))
for i, b := range bals {
valBalances[i] = &ValidatorBalance{
valBalances[i] = &structs.ValidatorBalance{
Index: strconv.FormatUint(uint64(i), 10),
Balance: strconv.FormatUint(b, 10),
}
}
} else {
valBalances = make([]*ValidatorBalance, len(ids))
valBalances = make([]*structs.ValidatorBalance, len(ids))
for i, id := range ids {
valBalances[i] = &ValidatorBalance{
valBalances[i] = &structs.ValidatorBalance{
Index: strconv.FormatUint(uint64(id), 10),
Balance: strconv.FormatUint(bals[id], 10),
}
}
}
resp := &GetValidatorBalancesResponse{
resp := &structs.GetValidatorBalancesResponse{
Data: valBalances,
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
@@ -404,13 +405,13 @@ func valContainerFromReadOnlyVal(
id primitives.ValidatorIndex,
bal uint64,
valStatus validator.Status,
) *ValidatorContainer {
) *structs.ValidatorContainer {
pubkey := val.PublicKey()
return &ValidatorContainer{
return &structs.ValidatorContainer{
Index: strconv.FormatUint(uint64(id), 10),
Balance: strconv.FormatUint(bal, 10),
Status: valStatus.String(),
Validator: &Validator{
Validator: &structs.Validator{
Pubkey: hexutil.Encode(pubkey[:]),
WithdrawalCredentials: hexutil.Encode(val.WithdrawalCredentials()),
EffectiveBalance: strconv.FormatUint(val.EffectiveBalance(), 10),
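As a usage sketch of the response types referenced above, the balance listing reduces to the loop below. This is illustrative only, assuming the structs.ValidatorBalance and structs.GetValidatorBalancesResponse fields shown in this diff; indices and balances are rendered as decimal strings, exactly as the handler does:

    package example

    import (
        "strconv"

        "github.com/prysmaticlabs/prysm/v4/api/server/structs"
    )

    // buildBalances turns raw balances into the JSON-facing response shape.
    func buildBalances(bals []uint64, optimistic, finalized bool) *structs.GetValidatorBalancesResponse {
        data := make([]*structs.ValidatorBalance, len(bals))
        for i, b := range bals {
            data[i] = &structs.ValidatorBalance{
                Index:   strconv.FormatUint(uint64(i), 10),
                Balance: strconv.FormatUint(b, 10),
            }
        }
        return &structs.GetValidatorBalancesResponse{
            Data:                data,
            ExecutionOptimistic: optimistic,
            Finalized:           finalized,
        }
    }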


@@ -12,6 +12,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
chainMock "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/testutil"
@@ -52,7 +53,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
val := resp.Data[0]
@@ -91,7 +92,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -123,7 +124,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -153,7 +154,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -202,7 +203,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, "1", resp.Data[0].Index)
@@ -225,7 +226,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, "1", resp.Data[0].Index)
@@ -248,7 +249,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -276,7 +277,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -292,7 +293,7 @@ func TestGetValidators(t *testing.T) {
}
var body bytes.Buffer
req := &GetValidatorsRequest{
req := &structs.GetValidatorsRequest{
Ids: []string{"0", strconv.Itoa(exitedValIndex)},
Statuses: []string{"exited"},
}
@@ -311,7 +312,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, "3", resp.Data[0].Index)
@@ -328,7 +329,7 @@ func TestGetValidators(t *testing.T) {
}
var body bytes.Buffer
req := &GetValidatorsRequest{
req := &structs.GetValidatorsRequest{
Ids: nil,
Statuses: nil,
}
@@ -347,7 +348,7 @@ func TestGetValidators(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
@@ -497,7 +498,7 @@ func TestGetValidators_FilterByStatus(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 3, len(resp.Data))
for _, vc := range resp.Data {
@@ -528,7 +529,7 @@ func TestGetValidators_FilterByStatus(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 2, len(resp.Data))
for _, vc := range resp.Data {
@@ -558,7 +559,7 @@ func TestGetValidators_FilterByStatus(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 4, len(resp.Data))
for _, vc := range resp.Data {
@@ -591,7 +592,7 @@ func TestGetValidators_FilterByStatus(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 4, len(resp.Data))
for _, vc := range resp.Data {
@@ -624,7 +625,7 @@ func TestGetValidators_FilterByStatus(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, 2, len(resp.Data))
for _, vc := range resp.Data {
@@ -659,7 +660,7 @@ func TestGetValidator(t *testing.T) {
s.GetValidator(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorResponse{}
resp := &structs.GetValidatorResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, "0", resp.Data.Index)
assert.Equal(t, "32000000000", resp.Data.Balance)
@@ -694,7 +695,7 @@ func TestGetValidator(t *testing.T) {
s.GetValidator(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorResponse{}
resp := &structs.GetValidatorResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, "0", resp.Data.Index)
})
@@ -796,7 +797,7 @@ func TestGetValidator(t *testing.T) {
s.GetValidator(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorResponse{}
resp := &structs.GetValidatorResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -824,7 +825,7 @@ func TestGetValidator(t *testing.T) {
s.GetValidator(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorResponse{}
resp := &structs.GetValidatorResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -858,7 +859,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
val := resp.Data[3]
@@ -887,7 +888,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -919,7 +920,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -949,7 +950,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidators(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorsResponse{}
resp := &structs.GetValidatorsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
@@ -979,7 +980,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, "1", resp.Data[0].Index)
@@ -1002,7 +1003,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
assert.Equal(t, "1", resp.Data[0].Index)
@@ -1048,7 +1049,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
@@ -1080,7 +1081,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
@@ -1113,7 +1114,7 @@ func TestGetValidatorBalances(t *testing.T) {
s.GetValidatorBalances(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &GetValidatorBalancesResponse{}
resp := &structs.GetValidatorBalancesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 2, len(resp.Data))
assert.Equal(t, "0", resp.Data[0].Index)
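The POST path that GetValidators now decodes can be exercised the same way these tests do. A minimal sketch, assuming the structs.GetValidatorsRequest fields shown in this diff (the target URL is left to the caller):

    package example

    import (
        "bytes"
        "encoding/json"
        "net/http"

        "github.com/prysmaticlabs/prysm/v4/api/server/structs"
    )

    // newValidatorsRequest builds the JSON body the handler decodes on POST:
    // a list of validator IDs plus optional status filters.
    func newValidatorsRequest(url string, ids, statuses []string) (*http.Request, error) {
        var body bytes.Buffer
        req := &structs.GetValidatorsRequest{Ids: ids, Statuses: statuses}
        if err := json.NewEncoder(&body).Encode(req); err != nil {
            return nil, err
        }
        return http.NewRequest(http.MethodPost, url, &body)
    }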


@@ -5,13 +5,12 @@ go_library(
srcs = [
"handlers.go",
"server.go",
"structs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/blob",
visibility = ["//visibility:public"],
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/fieldparams:go_default_library",
"//network/httputil:go_default_library",
@@ -26,6 +25,7 @@ go_test(
srcs = ["handlers_test.go"],
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/testing:go_default_library",


@@ -7,8 +7,8 @@ import (
"strings"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
field_params "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/network/httputil"
eth "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
@@ -85,18 +85,18 @@ loop:
return indices
}
func buildSidecarsResponse(sidecars []*eth.BlobSidecar) *SidecarsResponse {
resp := &SidecarsResponse{Data: make([]*Sidecar, len(sidecars))}
func buildSidecarsResponse(sidecars []*eth.BlobSidecar) *structs.SidecarsResponse {
resp := &structs.SidecarsResponse{Data: make([]*structs.Sidecar, len(sidecars))}
for i, sc := range sidecars {
proofs := make([]string, len(sc.CommitmentInclusionProof))
for j := range sc.CommitmentInclusionProof {
proofs[j] = hexutil.Encode(sc.CommitmentInclusionProof[j])
}
resp.Data[i] = &Sidecar{
resp.Data[i] = &structs.Sidecar{
Index: strconv.FormatUint(sc.Index, 10),
Blob: hexutil.Encode(sc.Blob),
KzgCommitment: hexutil.Encode(sc.KzgCommitment),
SignedBeaconBlockHeader: shared.SignedBeaconBlockHeaderFromConsensus(sc.SignedBlockHeader),
SignedBeaconBlockHeader: structs.SignedBeaconBlockHeaderFromConsensus(sc.SignedBlockHeader),
KzgProof: hexutil.Encode(sc.KzgProof),
CommitmentInclusionProof: proofs,
}
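On the consuming side, the JSON built from this response type decodes back into the same structs, as the tests below do. A minimal sketch, assuming only the structs.SidecarsResponse shape shown above:

    package example

    import (
        "encoding/json"

        "github.com/prysmaticlabs/prysm/v4/api/server/structs"
    )

    // decodeSidecars unmarshals a Blobs handler response body back into the
    // structs form used throughout this diff.
    func decodeSidecars(body []byte) (*structs.SidecarsResponse, error) {
        resp := &structs.SidecarsResponse{}
        if err := json.Unmarshal(body, resp); err != nil {
            return nil, err
        }
        return resp, nil
    }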


@@ -11,6 +11,7 @@ import (
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/prysmaticlabs/prysm/v4/api/server/structs"
mockChain "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
@@ -80,7 +81,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
sidecar := resp.Data[0]
@@ -125,7 +126,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
@@ -146,7 +147,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
@@ -166,7 +167,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
@@ -186,7 +187,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
@@ -207,7 +208,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
sidecar := resp.Data[0]
@@ -233,7 +234,7 @@ func TestBlobs(t *testing.T) {
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &SidecarsResponse{}
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, len(resp.Data), 0)
})

Some files were not shown because too many files have changed in this diff.