Compare commits

...

27 Commits

Author SHA1 Message Date
terence tsao
b0659d0488 Consolidate ApplyDeposit for Altair and Electra 2024-10-14 18:49:30 -07:00
Sammy Rosso
7238848d81 Fix exchange capabilities (#14533)
* add proto marshal and unmarshal

* changelog

* Use strings

* fix missing error handling

* fix test

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-10-14 14:33:20 +00:00
terence
80cafaa6df Fix partial withdrawals (#14509) 2024-10-14 13:40:50 +00:00
james-prysm
8a0545c3d7 Eip6110 queue deposit requests (#14430)
* wip

* updating types and wip on functions

* renaming to MAX_PENDING_DEPOSITS_PER_EPOCH

* fixing linting and conversions

* adding queue deposit changes

* fixing test and cloning

* removing unneeded test based on update

* gaz

* wip apply pending deposit

* fixing replay test and adding apply pending deposit

* fixing setters test

* updating transition test

* changelog

* updating pending deposits

* fixing ProcessPendingDeposit unit tests

* gaz

* fixing cyclic dependencies

* fix visibility

* missed adding the right signature verification

* adding point to infinity topup test

* adding apply pending deposit test

* making changes based on eip6110 changes

* fixing ineffassign

* gaz

* adding batched verifications sigs

* fixing broken type

* fixing proto

* updated consensus spec tests and fixed consensus bug tests

* testing readability improvement by avoiding ApplyPendingDeposit

* removing the boolean from apply pending deposit

* improve naming

* review comments and fixing a small bug using wrong variable

* fixing tests and skipping a test

* adding some test skips

* fixing bugs terence found

* adding test for batchProcessNewPendingDeposits

* gaz

* adding churn test

* updating spec tests to alpha.8

* adding pr to changelog

* addressing terence's comments

* Update beacon-chain/core/electra/validator.go

Co-authored-by: terence <terence@prysmaticlabs.com>

* adding tests for batch verify and rename some variables

* skipping tests, add them back in later

* skipping one more test

---------

Co-authored-by: terence <terence@prysmaticlabs.com>
2024-10-14 01:21:42 +00:00
james-prysm
9c61117b71 update batch deposit message verification for better readability (#14526)
* reversing boolean return for better readability

* skips more reversals

* changelog
2024-10-11 13:58:34 +00:00
james-prysm
6c22edeecc Replace validator wait for activation stream with polling (#14514)
* wip, waitForNextEpoch Broken

* fixing wait for activation and timings

* updating tests wip

* fixing tests

* deprecating wait for activation stream

* removing duplicate test

* Update validator/client/wait_for_activation.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update CHANGELOG.md

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/wait_for_activation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* moving seconds until next epoch start to slottime and adding unit test (see the sketch after this entry)

* removing seconds into slot buffer, will need to test

* fixing waittime bug

* adding pr to changelog

* Update validator/client/wait_for_activation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/wait_for_activation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* fixing incorrect log

* refactoring based on feedback

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-10-10 20:29:56 +00:00
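
An illustrative sketch, separate from the repository diff: the "seconds until next epoch start" arithmetic that this commit moves into the slot-time helpers, assuming mainnet timing constants (12-second slots, 32 slots per epoch). The function and constant names here are hypothetical, not Prysm's actual API.

package main

import (
	"fmt"
	"time"
)

const (
	secondsPerSlot = 12 // mainnet SECONDS_PER_SLOT
	slotsPerEpoch  = 32 // mainnet SLOTS_PER_EPOCH
)

// timeUntilNextEpochStart is an illustrative helper: given the chain's genesis
// time, it returns how long a validator client could sleep before polling for
// activation again at the next epoch boundary.
func timeUntilNextEpochStart(genesis, now time.Time) time.Duration {
	epochLen := time.Duration(secondsPerSlot*slotsPerEpoch) * time.Second
	intoEpoch := now.Sub(genesis) % epochLen
	return epochLen - intoEpoch
}

func main() {
	genesis := time.Now().Add(-100 * time.Second) // pretend genesis was 100s ago
	fmt.Println("sleep for:", timeUntilNextEpochStart(genesis, time.Now()))
}
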
Sammy Rosso
57cc4950c0 Add /eth/v2/beacon/blocks/{block_id}/attestations (#14478)
* add endpoint

* changelog

* fix response

* improvement

* fix test

* James' review

* fix test

* fix version check

* add test for non electra V2

* Review items

* James' review

* Radek' review

* Update CHANGELOG.md

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-10-10 10:19:26 +00:00
Nishant Das
2c981d5564 Reboot Discovery Listener (#14487)
* Add Current Changes To Routine

* Add In New Test

* Add Feature Flag

* Add Discovery Rebooter feature

* Do Not Export Mutex And Use Zero Value Mutex

* Wrap Error For Better Debugging

* Fix Function Name and Add Specific Test For it

* Manu's Review
2024-10-10 08:22:42 +00:00
zhaochonghe
492c8af83f Fix mesh size in libp2p-pubsub (#14521)
* Fix Mesh size in libp2p-pubsub

* Update pubsub.go

* Update CHANGELOG.md
2024-10-09 14:08:26 +00:00
Rupam Dey
e40d2cbd2c feat: add Bellatrix tests for light clients (#14520)
* add tests for Bellatrix

* update `CHANGELOG.md`

* Update CHANGELOG.md

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2024-10-09 09:02:48 +00:00
kasey
3fa6d3bd9d update to latest version of our fastssz fork (#14519)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-10-08 19:24:50 +00:00
Rupam Dey
56f0eb1437 feat: add Electra support to light client functions (#14506)
* add Electra to switch case in light client functions

* replace `!=` with `<` in `blockToLightClientHeaderXXX`

* add Electra tests

* update `CHANGELOG.md`

* add constant for Electra

* add constant to `minimal.go`

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-10-08 18:13:13 +00:00
Bastin
7fc5c714a1 Light Client consensus types (#14518)
* protos/SSZ

* interfaces

* types

* cleanup/dedup

* changelog

* use createBranch in headers

* error handling

---------

Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
2024-10-08 17:07:56 +00:00
james-prysm
cfbfccb203 Fix: validator status cache does not clear upon key removal (#14504)
* adding fix and unit test

* Update validator.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* fixing linting

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-10-07 14:35:10 +00:00
Potuz
884b663455 Move back ConvertKzgCommitmentToVersionedHash to primitives package (#14508)
* Move back ConvertKzgCommitmentToVersionedHash to primitives package

* Changelog
2024-10-07 14:33:23 +00:00
Radosław Kapka
0f1d16c599 Flip committee aware packing flag (#14507)
* Flip committee aware packing flag

* changelog

* fix deprecated flag name

* test fixes

* properly use feature in tests

* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_attestations_test.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-10-07 13:36:10 +00:00
kasey
c11e3392d4 SSE implementation that sheds stuck clients (#14413)
* sse implementation that sheds stuck clients

* Radek and James feedback

* Refactor event streamer code for readability

* less-flaky test signaling

* test case where queue fills; fixes

* add changelog entry

* james and preston feedback

* swap our Subscription interface with an alias

* event.Data can be nil for the payload attr event

* deepsource

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-10-04 21:18:17 +00:00
Radosław Kapka
f498463843 Register deposit snapshot endpoint (#14503) 2024-10-04 16:42:30 +00:00
Radosław Kapka
cf4ffc97e2 Update block Beacon APIs to Electra (#14488)
* Update block Beacon APIs to Electra

* CHANGELOG

* Revert "Auxiliary commit to revert individual files from 9bf238279a696dbcd65440606b0e3173f3be5e05"

This reverts commit a7ef57a2532f9ee02831d180926f7b84f5104a2b.

* review

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-10-04 08:21:08 +00:00
Bastin
3824e8a463 Make block_to_lightclient_header_XXX functions private (#14502)
* make block to lightclient header functions private

* changelog

* fix `helpers.go`

---------

Co-authored-by: Rupam Dey <117000803+rupam-04@users.noreply.github.com>
Co-authored-by: rupam-04 <rpmdey2004@gmail.com>
2024-10-03 21:59:07 +00:00
Rupam Dey
21ca4e008f Update lc functions to use the dev branch of CL specs (#14471)
* create finalized header based on finalized block version

* changelog entry

* pass attested block from handlers

* fix core tests

* add test for attested header execution fields

* changelog entry

* remove unused functions

* update `createLightClientBootstrapXXX` functions

* fix for getBootstrapAltair

* fix for getBootstrapAltair

* fix tests for Capella and Deneb

* update `CHANGELOG.md`

* remove Electra from `createLightClientBootstrap` switch case

* update dependencies

* minor fixes

* remove unnecessary comments

* replace `!reflect.DeepEqual` with `!=`

* replace `%s` with `%#x` for `[32]byte`

---------

Co-authored-by: Inspector-Butters <mohamadbastin@gmail.com>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2024-10-03 20:52:21 +00:00
Bastin
6af44a1466 Fix lc execution header bug (#14468)
* create finalized header based on finalized block version

* changelog entry

* pass attested block from handlers

* fix core tests

* add test for attested header execution fields

* changelog entry

* remove unused functions

* Update beacon-chain/core/light-client/lightclient.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Update beacon-chain/core/light-client/lightclient.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* Update beacon-chain/core/light-client/lightclient.go

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>

* remove finalized header from default update

* remove unused functions

* bazel deps

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2024-10-03 17:29:22 +00:00
Owen
2e29164582 allow users to publish blobs (#14442)
* allow users to publish blobs

Allowing users to publish blobs before publishing blocks gives the blobs a head start: they can begin to propagate around the network while the block is still being validated (see the sketch after this entry).

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

* Update beacon-chain/rpc/prysm/beacon/handlers.go

---------

Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
2024-10-01 20:13:41 +00:00
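
An illustrative client-side sketch, separate from the diff: posting blobs ahead of the block via the new PublishBlobs endpoint. The request shape mirrors the PublishBlobsRequest struct added later in this diff; only the kzg_proof and kzg_commitment_inclusion_proof JSON tags are visible in that hunk, so the remaining field names, the route, the port, and the hex payloads are assumptions rather than a confirmed API contract.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Minimal local mirrors of a subset of the request types added in the structs package.
type sidecar struct {
	Index                    string   `json:"index"`
	Blob                     string   `json:"blob"`
	KzgCommitment            string   `json:"kzg_commitment"`
	KzgProof                 string   `json:"kzg_proof"`
	CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}

type blobSidecars struct {
	Sidecars []*sidecar `json:"sidecars"`
}

type publishBlobsRequest struct {
	BlobSidecars *blobSidecars `json:"blob_sidecars"`
	BlockRoot    string        `json:"block_root"`
}

func main() {
	req := publishBlobsRequest{
		BlobSidecars: &blobSidecars{Sidecars: []*sidecar{{
			Index:         "0",
			Blob:          "0x...", // hex-encoded 131072-byte blob (placeholder)
			KzgCommitment: "0x...", // 48-byte commitment (placeholder)
			KzgProof:      "0x...", // 48-byte proof (placeholder)
		}}},
		BlockRoot: "0x...", // root of the block these blobs belong to (placeholder)
	}
	body, err := json.Marshal(req)
	if err != nil {
		fmt.Println("marshal failed:", err)
		return
	}
	// Hypothetical route and port; consult the Prysm API docs for the real path.
	resp, err := http.Post("http://localhost:3500/prysm/v1/beacon/blobs", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("publish failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
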
Ferran Borreguero
6d499bc9fc Enable electra interop genesis (#14465)
* Add Electra hard fork to interop genesis

* Update Changelog

* Fix develop merge

* gazelle

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-10-01 14:35:36 +00:00
Bastin
7786cb5684 create lc finalized header based on finalized block version (#14457)
* create finalized header based on finalized block version

* changelog entry

* refactor tests
2024-09-30 16:38:51 +00:00
terence
003b70c34b Update sepolia bootnodes (#14485)
* Add sepolia bootnodes

* Change log
2024-09-26 23:28:36 +00:00
Preston Van Loon
71edf96c7d Add sepolia config.yaml check with eth-clients/sepolia (#14484)
* Add sepolia config.yaml check with eth-clients/sepolia

* Update CHANGELOG.md
2024-09-26 13:26:12 +00:00
207 changed files with 12647 additions and 4446 deletions


@@ -13,12 +13,24 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Light client support: Implement `ComputeFieldRootsForBlockBody`.
- Light client support: Add light client database changes.
- Light client support: Implement capella and deneb changes.
- Light client support: Implement `BlockToLightClientHeaderXXX` functions upto Deneb
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Light client support: Implement `BlockToLightClientHeader` function.
- Light client support: Consensus types.
- GetBeaconStateV2: add Electra case.
- Implement [consensus-specs/3875](https://github.com/ethereum/consensus-specs/pull/3875)
- Implement [consensus-specs/3875](https://github.com/ethereum/consensus-specs/pull/3875).
- Tests to ensure sepolia config matches the official upstream yaml.
- HTTP endpoint for PublishBlobs.
- GetBlockV2, GetBlindedBlock, ProduceBlockV2, ProduceBlockV3: add Electra case.
- Add Electra support and tests for light client functions.
- fastssz version bump (better error messages).
- SSE implementation that sheds stuck clients. [pr](https://github.com/prysmaticlabs/prysm/pull/14413)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
### Changed
- Electra: Updated interop genesis generator to support Electra.
- `getLocalPayload` has been refactored to enable work in ePBS branch.
- `TestNodeServer_GetPeer` and `TestNodeServer_ListPeers` test flakes resolved by iterating the whole peer list to find
a match rather than taking the first peer in the map.
@@ -33,17 +45,26 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- `grpc-gateway-corsdomain` is renamed to http-cors-domain. The old name can still be used as an alias.
- `api-timeout` is changed from int flag to duration flag, default value updated.
- Light client support: abstracted out the light client headers with different versions.
- Electra EIP6110: Queue deposit requests changes from consensus spec pr #3818
- `ApplyToEveryValidator` has been changed to prevent misuse bugs, it takes a closure that takes a `ReadOnlyValidator` and returns a raw pointer to a `Validator`.
- Removed gorilla mux library and replaced it with net/http updates in go 1.22.
- Clean up `ProposeBlock` for validator client to reduce cognitive scoring and enable further changes.
- Updated k8s-io/client-go to v0.30.4 and k8s-io/apimachinery to v0.30.4
- Migrated tracing library from opencensus to opentelemetry for both the beacon node and validator.
- Refactored light client code to make it more readable and make future PRs easier.
- Update light client helper functions to reference `dev` branch of CL specs
- Updated Libp2p Dependencies to allow prysm to use gossipsub v1.2 .
- Updated Sepolia bootnodes.
- Make committee aware packing the default by deprecating `--enable-committee-aware-packing`.
- Moved `ConvertKzgCommitmentToVersionedHash` to the `primitives` package.
- reversed the boolean return on `BatchVerifyDepositsSignatures`, from need verification, to all keys successfully verified
- Fix `engine_exchangeCapabilities` implementation.
- Consolidate `ApplyDeposit` for Altair and Electra.
### Deprecated
- `--disable-grpc-gateway` flag is deprecated due to grpc gateway removal.
- `--enable-experimental-state` flag is deprecated. This feature is now on by default. Opt-out with `--disable-experimental-state`.
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)
### Removed
@@ -64,7 +85,12 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Core: Fixed slash processing causing extra hashing.
- Core: Fixed extra allocations when processing slashings.
- remove unneeded container in blob sidecar ssz response
- Light client support: create finalized header based on finalizedBlock's version, not attestedBlock.
- Light client support: fix light client attested header execution fields' wrong version bug.
- Testing: added custom matcher for better push settings testing.
- Registered `GetDepositSnapshot` Beacon API endpoint.
- Fixed mesh size by appending `gParams.Dhi = gossipSubDhi`
- Fix skipping partial withdrawals count.
### Security


@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.6"
consensus_spec_version = "v1.5.0-alpha.8"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-M7u/Ot/Vzorww+dFbHp0cxLyM2mezJjijCzq+LY3uvs=",
integrity = "sha256-BsGIbEyJuYrzhShGl0tHhR4lP5Qwno8R3k8a6YBR/DA=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-deOSeLRsmHXvkRp8n2bs3HXdkGUJWWqu8KFM/QABbZg=",
integrity = "sha256-DkdvhPP2KiqUOpwFXQIFDCWCwsUDIC/xhTBD+TZevm0=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Zz7YCf6XVf57nzSEGq9ToflJFHM0lAGwhd18l9Rf3hA=",
integrity = "sha256-vkZqV0HB8A2Uc56C1Us/p5G57iaHL+zw2No93Xt6M/4=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-BoXckDxXnDcEmAjg/dQgf/tLiJsb6CT0aZvmWHFijrY=",
integrity = "sha256-D/HPAW61lKqjoWwl7N0XvhdX+67dCEFAy8JxVzqBGtU=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -342,6 +342,22 @@ filegroup(
url = "https://github.com/eth-clients/holesky/archive/874c199423ccd180607320c38cbaca05d9a1573a.tar.gz", # 2024-06-18
)
http_archive(
name = "sepolia_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-cY/UgpCcYEhQf7JefD65FI8tn/A+rAvKhcm2/qiVdqY=",
strip_prefix = "sepolia-f2c219a93c4491cee3d90c18f2f8e82aed850eab",
url = "https://github.com/eth-clients/sepolia/archive/f2c219a93c4491cee3d90c18f2f8e82aed850eab.tar.gz", # 2024-09-19
)
http_archive(
name = "com_google_protobuf",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",


@@ -1,5 +1,7 @@
package api
import "net/http"
const (
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
@@ -10,3 +12,9 @@ const (
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
// SetSSEHeaders sets the headers needed for a server-sent event response.
func SetSSEHeaders(w http.ResponseWriter) {
w.Header().Set("Content-Type", EventStreamMediaType)
w.Header().Set("Connection", KeepAlive)
}
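
A minimal usage sketch, separate from the diff: wiring the new SetSSEHeaders helper into an event-stream handler. The api import path is assumed from the v5 module path used elsewhere in this diff, and the route is illustrative.

package main

import (
	"fmt"
	"net/http"

	"github.com/prysmaticlabs/prysm/v5/api" // assumed import path for the api package above
)

func eventsHandler(w http.ResponseWriter, r *http.Request) {
	api.SetSSEHeaders(w) // sets Content-Type: text/event-stream and Connection: keep-alive
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	// Write one SSE event and flush it to the client immediately.
	fmt.Fprintf(w, "event: head\ndata: {}\n\n")
	flusher.Flush()
}

func main() {
	http.HandleFunc("/eth/v1/events", eventsHandler) // illustrative route
	_ = http.ListenAndServe(":8080", nil)
}
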


@@ -5,6 +5,7 @@ go_library(
srcs = [
"block.go",
"conversions.go",
"conversions_blob.go",
"conversions_block.go",
"conversions_lightclient.go",
"conversions_state.go",


@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -1475,12 +1476,15 @@ func DepositSnapshotFromConsensus(ds *eth.DepositSnapshot) *DepositSnapshot {
}
}
func PendingBalanceDepositsFromConsensus(ds []*eth.PendingBalanceDeposit) []*PendingBalanceDeposit {
deposits := make([]*PendingBalanceDeposit, len(ds))
func PendingDepositsFromConsensus(ds []*eth.PendingDeposit) []*PendingDeposit {
deposits := make([]*PendingDeposit, len(ds))
for i, d := range ds {
deposits[i] = &PendingBalanceDeposit{
Index: fmt.Sprintf("%d", d.Index),
Amount: fmt.Sprintf("%d", d.Amount),
deposits[i] = &PendingDeposit{
Pubkey: hexutil.Encode(d.PublicKey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Slot: fmt.Sprintf("%d", d.Slot),
}
}
return deposits
@@ -1508,3 +1512,37 @@ func PendingConsolidationsFromConsensus(cs []*eth.PendingConsolidation) []*Pendi
}
return consolidations
}
func HeadEventFromV1(event *ethv1.EventHead) *HeadEvent {
return &HeadEvent{
Slot: fmt.Sprintf("%d", event.Slot),
Block: hexutil.Encode(event.Block),
State: hexutil.Encode(event.State),
EpochTransition: event.EpochTransition,
ExecutionOptimistic: event.ExecutionOptimistic,
PreviousDutyDependentRoot: hexutil.Encode(event.PreviousDutyDependentRoot),
CurrentDutyDependentRoot: hexutil.Encode(event.CurrentDutyDependentRoot),
}
}
func FinalizedCheckpointEventFromV1(event *ethv1.EventFinalizedCheckpoint) *FinalizedCheckpointEvent {
return &FinalizedCheckpointEvent{
Block: hexutil.Encode(event.Block),
State: hexutil.Encode(event.State),
Epoch: fmt.Sprintf("%d", event.Epoch),
ExecutionOptimistic: event.ExecutionOptimistic,
}
}
func EventChainReorgFromV1(event *ethv1.EventChainReorg) *ChainReorgEvent {
return &ChainReorgEvent{
Slot: fmt.Sprintf("%d", event.Slot),
Depth: fmt.Sprintf("%d", event.Depth),
OldHeadBlock: hexutil.Encode(event.OldHeadBlock),
NewHeadBlock: hexutil.Encode(event.NewHeadBlock),
OldHeadState: hexutil.Encode(event.OldHeadState),
NewHeadState: hexutil.Encode(event.NewHeadState),
Epoch: fmt.Sprintf("%d", event.Epoch),
ExecutionOptimistic: event.ExecutionOptimistic,
}
}


@@ -0,0 +1,61 @@
package structs
import (
"strconv"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (sc *Sidecar) ToConsensus() (*eth.BlobSidecar, error) {
if sc == nil {
return nil, errNilValue
}
index, err := strconv.ParseUint(sc.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
blob, err := bytesutil.DecodeHexWithLength(sc.Blob, 131072)
if err != nil {
return nil, server.NewDecodeError(err, "Blob")
}
kzgCommitment, err := bytesutil.DecodeHexWithLength(sc.KzgCommitment, 48)
if err != nil {
return nil, server.NewDecodeError(err, "KzgCommitment")
}
kzgProof, err := bytesutil.DecodeHexWithLength(sc.KzgProof, 48)
if err != nil {
return nil, server.NewDecodeError(err, "KzgProof")
}
header, err := sc.SignedBeaconBlockHeader.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedBeaconBlockHeader")
}
// decode the commitment inclusion proof
var commitmentInclusionProof [][]byte
for _, proof := range sc.CommitmentInclusionProof {
proofBytes, err := bytesutil.DecodeHexWithLength(proof, 32)
if err != nil {
return nil, server.NewDecodeError(err, "CommitmentInclusionProof")
}
commitmentInclusionProof = append(commitmentInclusionProof, proofBytes)
}
bsc := &eth.BlobSidecar{
Index: index,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
SignedBlockHeader: header,
CommitmentInclusionProof: commitmentInclusionProof,
}
return bsc, nil
}


@@ -20,6 +20,9 @@ import (
var ErrUnsupportedConversion = errors.New("Could not determine api struct type to use for value")
func (h *SignedBeaconBlockHeader) ToConsensus() (*eth.SignedBeaconBlockHeader, error) {
if h == nil {
return nil, errNilValue
}
msg, err := h.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
@@ -36,6 +39,9 @@ func (h *SignedBeaconBlockHeader) ToConsensus() (*eth.SignedBeaconBlockHeader, e
}
func (h *BeaconBlockHeader) ToConsensus() (*eth.BeaconBlockHeader, error) {
if h == nil {
return nil, errNilValue
}
s, err := strconv.ParseUint(h.Slot, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Slot")
@@ -2548,6 +2554,8 @@ func SignedBeaconBlockMessageJsoner(block interfaces.ReadOnlySignedBeaconBlock)
return SignedBlindedBeaconBlockDenebFromConsensus(pbStruct)
case *eth.SignedBeaconBlockDeneb:
return SignedBeaconBlockDenebFromConsensus(pbStruct)
case *eth.SignedBlindedBeaconBlockElectra:
return SignedBlindedBeaconBlockElectraFromConsensus(pbStruct)
case *eth.SignedBeaconBlockElectra:
return SignedBeaconBlockElectraFromConsensus(pbStruct)
default:


@@ -84,6 +84,11 @@ func syncAggregateToJSON(input *v1.SyncAggregate) *SyncAggregate {
}
func lightClientHeaderContainerToJSON(container *v2.LightClientHeaderContainer) (json.RawMessage, error) {
// In the case that a finalizedHeader is nil.
if container == nil {
return nil, nil
}
beacon, err := container.GetBeacon()
if err != nil {
return nil, errors.Wrap(err, "could not get beacon block header")


@@ -722,7 +722,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
if err != nil {
return nil, err
}
pbd, err := st.PendingBalanceDeposits()
pbd, err := st.PendingDeposits()
if err != nil {
return nil, err
}
@@ -770,7 +770,7 @@ func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateEl
EarliestExitEpoch: fmt.Sprintf("%d", eee),
ConsolidationBalanceToConsume: fmt.Sprintf("%d", cbtc),
EarliestConsolidationEpoch: fmt.Sprintf("%d", ece),
PendingBalanceDeposits: PendingBalanceDepositsFromConsensus(pbd),
PendingDeposits: PendingDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
}, nil


@@ -133,6 +133,13 @@ type GetBlockAttestationsResponse struct {
Data []*Attestation `json:"data"`
}
type GetBlockAttestationsV2Response struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data json.RawMessage `json:"data"` // Accepts both `Attestation` and `AttestationElectra` types
}
type GetStateRootResponse struct {
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
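
A hedged sketch, separate from the diff: calling the new /eth/v2/beacon/blocks/{block_id}/attestations endpoint and decoding it into the response shape shown above. Host, port, and error handling are illustrative placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Local mirror of GetBlockAttestationsV2Response from the diff above.
type getBlockAttestationsV2Response struct {
	Version             string          `json:"version"`
	ExecutionOptimistic bool            `json:"execution_optimistic"`
	Finalized           bool            `json:"finalized"`
	Data                json.RawMessage `json:"data"` // Attestation or AttestationElectra, depending on Version
}

func main() {
	resp, err := http.Get("http://localhost:3500/eth/v2/beacon/blocks/head/attestations")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	var out getBlockAttestationsV2Response
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// Pick the concrete attestation type to unmarshal Data into based on the fork version.
	fmt.Println("fork version:", out.Version, "finalized:", out.Finalized)
}
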


@@ -12,3 +12,12 @@ type Sidecar struct {
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}
type BlobSidecars struct {
Sidecars []*Sidecar `json:"sidecars"`
}
type PublishBlobsRequest struct {
BlobSidecars *BlobSidecars `json:"blob_sidecars"`
BlockRoot string `json:"block_root"`
}


@@ -257,9 +257,12 @@ type ConsolidationRequest struct {
TargetPubkey string `json:"target_pubkey"`
}
type PendingBalanceDeposit struct {
Index string `json:"index"`
Amount string `json:"amount"`
type PendingDeposit struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Slot string `json:"slot"`
}
type PendingPartialWithdrawal struct {


@@ -176,7 +176,7 @@ type BeaconStateElectra struct {
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingBalanceDeposits []*PendingBalanceDeposit `json:"pending_balance_deposits"`
PendingDeposits []*PendingDeposit `json:"pending_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
}


@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"feed.go",
"interface.go",
"subscription.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/async/event",


@@ -22,3 +22,4 @@ import (
// Feed is a re-export of the go-ethereum event feed.
type Feed = geth_event.Feed
type Subscription = geth_event.Subscription

async/event/interface.go (new file)

@@ -0,0 +1,8 @@
package event
// SubscriberSender is an abstract representation of an *event.Feed
// to use in describing types that accept or return an *event.Feed.
type SubscriberSender interface {
Subscribe(channel interface{}) Subscription
Send(value interface{}) (nsent int)
}
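
A small sketch, separate from the diff, showing why a *event.Feed satisfies the new SubscriberSender interface: the go-ethereum Feed that async/event re-exports (see feed.go above) already has matching Subscribe and Send methods.

package main

import (
	"fmt"

	gethevent "github.com/ethereum/go-ethereum/event"
)

// Subscription is the same re-export used in async/event/feed.go.
type Subscription = gethevent.Subscription

// SubscriberSender mirrors the interface added in interface.go above.
type SubscriberSender interface {
	Subscribe(channel interface{}) Subscription
	Send(value interface{}) (nsent int)
}

func main() {
	var feed gethevent.Feed
	var ss SubscriberSender = &feed // *event.Feed implements both methods

	ch := make(chan int, 1)
	sub := ss.Subscribe(ch)
	defer sub.Unsubscribe()

	fmt.Println("sent to", ss.Send(42), "subscriber(s)")
	fmt.Println("received", <-ch)
}
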


@@ -28,25 +28,6 @@ import (
// request backoff time.
const waitQuotient = 10
// Subscription represents a stream of events. The carrier of the events is typically a
// channel, but isn't part of the interface.
//
// Subscriptions can fail while established. Failures are reported through an error
// channel. It receives a value if there is an issue with the subscription (e.g. the
// network connection delivering the events has been closed). Only one value will ever be
// sent.
//
// The error channel is closed when the subscription ends successfully (i.e. when the
// source of events is closed). It is also closed when Unsubscribe is called.
//
// The Unsubscribe method cancels the sending of events. You must call Unsubscribe in all
// cases to ensure that resources related to the subscription are released. It can be
// called any number of times.
type Subscription interface {
Err() <-chan error // returns the error channel
Unsubscribe() // cancels sending of events, closing the error channel
}
// NewSubscription runs a producer function as a subscription in a new goroutine. The
// channel given to the producer is closed when Unsubscribe is called. If fn returns an
// error, it is sent on the subscription's error channel.


@@ -2,7 +2,6 @@ package blockchain
import (
"context"
"crypto/sha256"
"fmt"
"github.com/ethereum/go-ethereum/common"
@@ -28,8 +27,6 @@ import (
"github.com/sirupsen/logrus"
)
const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdate signals execution engine the fork choice updates. Execution engine should:
@@ -402,13 +399,7 @@ func kzgCommitmentsToVersionedHashes(body interfaces.ReadOnlyBeaconBlockBody) ([
versionedHashes := make([]common.Hash, len(commitments))
for i, commitment := range commitments {
versionedHashes[i] = ConvertKzgCommitmentToVersionedHash(commitment)
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(commitment)
}
return versionedHashes, nil
}
func ConvertKzgCommitmentToVersionedHash(commitment []byte) common.Hash {
versionedHash := sha256.Sum256(commitment)
versionedHash[0] = blobCommitmentVersionKZG
return versionedHash
}
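
A standalone sketch, separate from the diff, of the versioned-hash computation that moved to the primitives package: sha256 of the KZG commitment with the first byte replaced by the blob commitment version (0x01), mirroring the helper removed above. It returns a plain [32]byte instead of common.Hash to stay dependency-free.

package main

import (
	"crypto/sha256"
	"fmt"
)

const blobCommitmentVersionKZG byte = 0x01

// convertKzgCommitmentToVersionedHash mirrors the removed helper above.
func convertKzgCommitmentToVersionedHash(commitment []byte) [32]byte {
	versionedHash := sha256.Sum256(commitment)
	versionedHash[0] = blobCommitmentVersionKZG
	return versionedHash
}

func main() {
	commitment := make([]byte, 48) // zero commitment, just to show the shape
	fmt.Printf("%#x\n", convertKzgCommitmentToVersionedHash(commitment))
}
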


@@ -162,6 +162,10 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
@@ -183,6 +187,7 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
postState,
signed,
attestedState,
attestedBlock,
finalizedBlock,
)
@@ -208,6 +213,10 @@ func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed in
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
@@ -218,6 +227,7 @@ func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed in
postState,
signed,
attestedState,
attestedBlock,
)
if err != nil {


@@ -32,7 +32,7 @@ type mockBeaconNode struct {
}
// StateFeed mocks the same method in the beacon node.
func (mbn *mockBeaconNode) StateFeed() *event.Feed {
func (mbn *mockBeaconNode) StateFeed() event.SubscriberSender {
mbn.mu.Lock()
defer mbn.mu.Unlock()
if mbn.stateFeed == nil {


@@ -98,6 +98,44 @@ func (s *ChainService) BlockNotifier() blockfeed.Notifier {
return s.blockNotifier
}
type EventFeedWrapper struct {
feed *event.Feed
subscribed chan struct{} // this channel is closed once a subscription is made
}
func (w *EventFeedWrapper) Subscribe(channel interface{}) event.Subscription {
select {
case <-w.subscribed:
break // already closed
default:
close(w.subscribed)
}
return w.feed.Subscribe(channel)
}
func (w *EventFeedWrapper) Send(value interface{}) int {
return w.feed.Send(value)
}
// WaitForSubscription allows test to wait for the feed to have a subscription before beginning to send events.
func (w *EventFeedWrapper) WaitForSubscription(ctx context.Context) error {
select {
case <-w.subscribed:
return nil
case <-ctx.Done():
return ctx.Err()
}
}
var _ event.SubscriberSender = &EventFeedWrapper{}
func NewEventFeedWrapper() *EventFeedWrapper {
return &EventFeedWrapper{
feed: new(event.Feed),
subscribed: make(chan struct{}),
}
}
// MockBlockNotifier mocks the block notifier.
type MockBlockNotifier struct {
feed *event.Feed
@@ -131,7 +169,7 @@ func (msn *MockStateNotifier) ReceivedEvents() []*feed.Event {
}
// StateFeed returns a state feed.
func (msn *MockStateNotifier) StateFeed() *event.Feed {
func (msn *MockStateNotifier) StateFeed() event.SubscriberSender {
msn.feedLock.Lock()
defer msn.feedLock.Unlock()
@@ -159,6 +197,23 @@ func (msn *MockStateNotifier) StateFeed() *event.Feed {
return msn.feed
}
// NewSimpleStateNotifier makes a state feed without the custom mock feed machinery.
func NewSimpleStateNotifier() *MockStateNotifier {
return &MockStateNotifier{feed: new(event.Feed)}
}
type SimpleNotifier struct {
Feed event.SubscriberSender
}
func (n *SimpleNotifier) StateFeed() event.SubscriberSender {
return n.Feed
}
func (n *SimpleNotifier) OperationFeed() event.SubscriberSender {
return n.Feed
}
// OperationNotifier mocks the same method in the chain service.
func (s *ChainService) OperationNotifier() opfeed.Notifier {
if s.opNotifier == nil {
@@ -173,7 +228,7 @@ type MockOperationNotifier struct {
}
// OperationFeed returns an operation feed.
func (mon *MockOperationNotifier) OperationFeed() *event.Feed {
func (mon *MockOperationNotifier) OperationFeed() event.SubscriberSender {
if mon.feed == nil {
mon.feed = new(event.Feed)
}


@@ -61,6 +61,7 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/helpers:go_default_library",


@@ -5,12 +5,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// ProcessPreGenesisDeposits processes a deposit for the beacon state before chainstart.
@@ -20,7 +16,7 @@ func ProcessPreGenesisDeposits(
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
var err error
beaconState, err = ProcessDeposits(ctx, beaconState, deposits)
beaconState, err = blocks.ProcessDeposits(ctx, beaconState, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit")
}
@@ -30,182 +26,3 @@ func ProcessPreGenesisDeposits(
}
return beaconState, nil
}
// ProcessDeposits processes validator deposits for beacon state Altair.
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
batchVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
}
for _, deposit := range deposits {
if deposit == nil || deposit.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(beaconState, deposit, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
}
return beaconState, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
return ApplyDeposit(beaconState, deposit.Data, verifySignature)
}
// ApplyDeposit
// Spec pseudocode definition:
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if bls.Verify(pubkey, signing_root, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, verifySignature bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
valid, err := blocks.IsValidDepositSignature(data)
if err != nil {
return nil, err
}
if !valid {
return beaconState, nil
}
}
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, err
}
} else {
if err := helpers.IncreaseBalance(beaconState, index, amount); err != nil {
return nil, err
}
}
return beaconState, nil
}
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0)
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.inactivity_scores, index, uint64(0)) // New in Altair
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
if err := beaconState.AppendBalance(amount); err != nil {
return err
}
// only active in altair and only when it's a new validator (after append balance)
if beaconState.Version() >= version.Altair {
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
return err
}
}
return nil
}
// GetValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
//
// effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=effective_balance,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}
}


@@ -6,6 +6,7 @@ import (
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -23,7 +24,7 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
}
s, err := state_native.InitializeFromProtoUnsafeAltair(state)
require.NoError(t, err)
r, err := altair.ProcessDeposits(ctx, s, deposits)
r, err := blocks.ProcessDeposits(ctx, s, deposits)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
@@ -76,7 +77,7 @@ func TestFuzzProcessDeposit_Phase0_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := altair.ProcessDeposit(s, deposit, true)
r, err := blocks.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}
@@ -93,7 +94,7 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeAltair(state)
require.NoError(t, err)
r, err := altair.ProcessDeposit(s, deposit, true)
r, err := blocks.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}


@@ -5,6 +5,7 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
@@ -42,7 +43,7 @@ func TestProcessDeposits_SameValidatorMultipleDepositsSameBlock(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0], dep[1], dep[2]})
require.NoError(t, err, "Expected block deposits to process correctly")
require.Equal(t, 2, len(newState.Validators()), "Incorrect validator count")
}
@@ -70,7 +71,7 @@ func TestProcessDeposits_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0]})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{dep[0]})
require.NoError(t, err, "Expected block deposits to process correctly")
if newState.Balances()[1] != dep[0].Data.Amount {
t.Errorf(
@@ -127,7 +128,7 @@ func TestProcessDeposits_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
newState, err := blocks.ProcessDeposits(context.Background(), beaconState, []*ethpb.Deposit{deposit})
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
@@ -156,7 +157,7 @@ func TestProcessDeposit_AddsNewValidatorDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(beaconState, dep[0], true)
newState, err := blocks.ProcessDeposit(beaconState, dep[0], true)
require.NoError(t, err, "Process deposit failed")
require.Equal(t, 2, len(newState.Validators()), "Expected validator list to have length 2")
require.Equal(t, 2, len(newState.Balances()), "Expected validator balances list to have length 2")
@@ -199,7 +200,7 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(beaconState, dep[0], true)
newState, err := blocks.ProcessDeposit(beaconState, dep[0], false)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
if newState.Eth1DepositIndex() != 1 {
@@ -333,7 +334,7 @@ func TestProcessDeposit_RepeatedDeposit_IncreasesValidatorBalance(t *testing.T)
},
})
require.NoError(t, err)
newState, err := altair.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
newState, err := blocks.ProcessDeposit(beaconState, deposit, true /*verifySignature*/)
require.NoError(t, err, "Process deposit failed")
require.Equal(t, uint64(1000+50), newState.Balances()[1], "Expected balance at index 1 to be 1050")
}
@@ -370,6 +371,6 @@ func TestProcessDeposits_MerkleBranchFailsVerification(t *testing.T) {
})
require.NoError(t, err)
want := "deposit root did not verify"
_, err = altair.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
_, err = blocks.ProcessDeposits(context.Background(), beaconState, b.Block.Body.Deposits)
assert.ErrorContains(t, want, err)
}


@@ -5,6 +5,7 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -14,8 +15,208 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// ProcessDeposits processes validator deposits for beacon state.
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
allSignaturesVerified, err := BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, err
}
for _, deposit := range deposits {
if deposit == nil || deposit.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(beaconState, deposit, allSignaturesVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
}
return beaconState, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, allSignaturesVerified bool) (state.BeaconState, error) {
if err := VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
return ApplyDeposit(beaconState, deposit.Data, allSignaturesVerified)
}
// ApplyDeposit applies a deposit to the beacon state.
//
// Altair spec pseudocode definition:
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
//
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// deposit_message = DepositMessage(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// )
// domain = compute_domain(DOMAIN_DEPOSIT) # Fork-agnostic domain since deposits are valid across forks
// signing_root = compute_signing_root(deposit_message, domain)
// if bls.Verify(pubkey, signing_root, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
// else:
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// increase_balance(state, index, amount)
//
// Electra spec pseudocode definition:
// def apply_deposit(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64,
// signature: BLSSignature) -> None:
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// if is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, Gwei(0)) # [Modified in Electra:EIP7251]
// # [New in Electra:EIP7251]
// state.pending_deposits.append(PendingDeposit(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// signature=signature,
// slot=GENESIS_SLOT, # Use GENESIS_SLOT to distinguish from a pending deposit request
// ))
// else:
// # Increase balance by deposit amount
// # [Modified in Electra:EIP7251]
// state.pending_deposits.append(PendingDeposit(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// amount=amount,
// signature=signature,
// slot=GENESIS_SLOT # Use GENESIS_SLOT to distinguish from a pending deposit request
// ))
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, allSignaturesVerified bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
// Check if validator exists
if index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey)); ok {
// Validator exists and balance can be increased if version is before Electra
if beaconState.Version() < version.Electra {
return beaconState, helpers.IncreaseBalance(beaconState, index, amount)
}
} else {
// Validator doesn't exist, check if signature is valid if not all signatures verified
if !allSignaturesVerified {
valid, err := IsValidDepositSignature(data)
if err != nil || !valid {
return beaconState, err
}
}
// If Electra version, amount is reset to zero
if beaconState.Version() >= version.Electra {
amount = 0
}
// Add new validator
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, err
}
}
// Handle deposits in Electra version or beyond
if beaconState.Version() >= version.Electra {
pendingDeposit := &ethpb.PendingDeposit{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
Amount: data.Amount,
Signature: data.Signature,
Slot: params.BeaconConfig().GenesisSlot,
}
return beaconState, beaconState.AppendPendingDeposit(pendingDeposit)
}
return beaconState, nil
}
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0)
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000)) // New in Altair
// set_or_append_list(state.inactivity_scores, index, uint64(0)) // New in Altair
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
if err := beaconState.AppendBalance(amount); err != nil {
return err
}
// only active in altair and only when it's a new validator (after append balance)
if beaconState.Version() >= version.Altair {
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
return err
}
}
return nil
}
// ActivateValidatorWithEffectiveBalance updates validator's effective balance, and if it's above MaxEffectiveBalance, validator becomes active in genesis.
func ActivateValidatorWithEffectiveBalance(beaconState state.BeaconState, deposits []*ethpb.Deposit) (state.BeaconState, error) {
for _, d := range deposits {
@@ -55,12 +256,26 @@ func BatchVerifyDepositsSignatures(ctx context.Context, deposits []*ethpb.Deposi
return false, err
}
verified := false
if err := verifyDepositDataWithDomain(ctx, deposits, domain); err != nil {
log.WithError(err).Debug("Failed to batch verify deposits signatures, will try individual verify")
verified = true
return false, nil
}
return verified, nil
return true, nil
}
// BatchVerifyPendingDepositsSignatures batch verifies pending deposit signatures.
func BatchVerifyPendingDepositsSignatures(ctx context.Context, deposits []*ethpb.PendingDeposit) (bool, error) {
var err error
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
if err != nil {
return false, err
}
if err := verifyPendingDepositDataWithDomain(ctx, deposits, domain); err != nil {
log.WithError(err).Debug("Failed to batch verify deposits signatures, will try individual verify")
return false, nil
}
return true, nil
}
// IsValidDepositSignature returns whether deposit_data is valid
@@ -115,6 +330,38 @@ func VerifyDeposit(beaconState state.ReadOnlyBeaconState, deposit *ethpb.Deposit
return nil
}
// GetValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
//
// effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, MAX_EFFECTIVE_BALANCE)
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=effective_balance,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
if params.BeaconConfig().MaxEffectiveBalance < effectiveBalance {
effectiveBalance = params.BeaconConfig().MaxEffectiveBalance
}
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}
}
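As a quick arithmetic check of the min/increment formula above, here is a standalone sketch with the mainnet constants written out as assumptions (1 ETH increment, 32 ETH cap, amounts in Gwei) rather than read from params:
package main

import "fmt"

func main() {
    const increment = uint64(1_000_000_000)     // assumed EFFECTIVE_BALANCE_INCREMENT (1 ETH in Gwei)
    const maxEffective = uint64(32_000_000_000) // assumed MAX_EFFECTIVE_BALANCE (32 ETH in Gwei)

    for _, amount := range []uint64{3_000, 31_500_000_000, 40_000_000_000} {
        eb := amount - amount%increment // round down to a full increment
        if eb > maxEffective {
            eb = maxEffective // cap at the maximum effective balance
        }
        fmt.Printf("deposit %d Gwei -> effective balance %d Gwei\n", amount, eb)
    }
    // Output: 3000 -> 0, 31500000000 -> 31000000000, 40000000000 -> 32000000000
}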
func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, domain []byte) error {
return deposit.VerifyDepositSignature(obj, domain)
}
@@ -159,3 +406,44 @@ func verifyDepositDataWithDomain(ctx context.Context, deps []*ethpb.Deposit, dom
}
return nil
}
func verifyPendingDepositDataWithDomain(ctx context.Context, deps []*ethpb.PendingDeposit, domain []byte) error {
if len(deps) == 0 {
return nil
}
pks := make([]bls.PublicKey, len(deps))
sigs := make([][]byte, len(deps))
msgs := make([][32]byte, len(deps))
for i, dep := range deps {
if ctx.Err() != nil {
return ctx.Err()
}
if dep == nil {
return errors.New("nil deposit")
}
dpk, err := bls.PublicKeyFromBytes(dep.PublicKey)
if err != nil {
return err
}
pks[i] = dpk
sigs[i] = dep.Signature
depositMessage := &ethpb.DepositMessage{
PublicKey: dep.PublicKey,
WithdrawalCredentials: dep.WithdrawalCredentials,
Amount: dep.Amount,
}
sr, err := signing.ComputeSigningRoot(depositMessage, domain)
if err != nil {
return err
}
msgs[i] = sr
}
verify, err := bls.VerifyMultipleSignatures(sigs, msgs, pks)
if err != nil {
return errors.Errorf("could not verify multiple signatures: %v", err)
}
if !verify {
return errors.New("one or more deposit signatures did not verify")
}
return nil
}

View File

@@ -17,6 +17,41 @@ import (
)
func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
Amount: 3000,
},
}
sr, err := signing.ComputeSigningRoot(&ethpb.DepositMessage{
PublicKey: deposit.Data.PublicKey,
WithdrawalCredentials: deposit.Data.WithdrawalCredentials,
Amount: 3000,
}, domain)
require.NoError(t, err)
sig := sk.Sign(sr[:])
deposit.Data.Signature = sig.Marshal()
leaf, err := deposit.Data.HashTreeRoot()
require.NoError(t, err)
// We then create a merkle branch for the test.
depositTrie, err := trie.GenerateTrieFromItems([][]byte{leaf[:]}, params.BeaconConfig().DepositContractTreeDepth)
require.NoError(t, err, "Could not generate trie")
proof, err := depositTrie.MerkleProof(0)
require.NoError(t, err, "Could not generate proof")
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, true, verified)
}
func TestBatchVerifyDepositsSignatures_InvalidSignature(t *testing.T) {
deposit := &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
@@ -34,9 +69,9 @@ func TestBatchVerifyDepositsSignatures_Ok(t *testing.T) {
deposit.Proof = proof
require.NoError(t, err)
verified, err := blocks.BatchVerifyDepositsSignatures(context.Background(), []*ethpb.Deposit{deposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}
func TestVerifyDeposit_MerkleBranchFailsVerification(t *testing.T) {
@@ -93,3 +128,54 @@ func TestIsValidDepositSignature_Ok(t *testing.T) {
require.NoError(t, err)
require.Equal(t, true, valid)
}
func TestBatchVerifyPendingDepositsSignatures_Ok(t *testing.T) {
sk, err := bls.RandKey()
require.NoError(t, err)
domain, err := signing.ComputeDomain(params.BeaconConfig().DomainDeposit, nil, nil)
require.NoError(t, err)
pendingDeposit := &ethpb.PendingDeposit{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
Amount: 3000,
}
sr, err := signing.ComputeSigningRoot(&ethpb.DepositMessage{
PublicKey: pendingDeposit.PublicKey,
WithdrawalCredentials: pendingDeposit.WithdrawalCredentials,
Amount: 3000,
}, domain)
require.NoError(t, err)
sig := sk.Sign(sr[:])
pendingDeposit.Signature = sig.Marshal()
sk2, err := bls.RandKey()
require.NoError(t, err)
pendingDeposit2 := &ethpb.PendingDeposit{
PublicKey: sk2.PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
Amount: 4000,
}
sr2, err := signing.ComputeSigningRoot(&ethpb.DepositMessage{
PublicKey: pendingDeposit2.PublicKey,
WithdrawalCredentials: pendingDeposit2.WithdrawalCredentials,
Amount: 4000,
}, domain)
require.NoError(t, err)
sig2 := sk2.Sign(sr2[:])
pendingDeposit2.Signature = sig2.Marshal()
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit, pendingDeposit2})
require.NoError(t, err)
require.Equal(t, true, verified)
}
func TestBatchVerifyPendingDepositsSignatures_InvalidSignature(t *testing.T) {
pendingDeposit := &ethpb.PendingDeposit{
PublicKey: bytesutil.PadTo([]byte{1, 2, 3}, 48),
WithdrawalCredentials: make([]byte, 32),
Signature: make([]byte, 96),
}
verified, err := blocks.BatchVerifyPendingDepositsSignatures(context.Background(), []*ethpb.PendingDeposit{pendingDeposit})
require.NoError(t, err)
require.Equal(t, false, verified)
}

View File

@@ -32,11 +32,13 @@ go_library(
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//contracts/deposit:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -52,24 +54,28 @@ go_test(
"deposit_fuzz_test.go",
"deposits_test.go",
"effective_balance_updates_test.go",
"export_test.go",
"registry_updates_test.go",
"transition_test.go",
"upgrade_test.go",
"validator_test.go",
"withdrawals_test.go",
],
embed = [":go_default_library"],
deps = [
":go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -201,38 +201,6 @@ func TestProcessPendingConsolidations(t *testing.T) {
}
}
func stateWithActiveBalanceETH(t *testing.T, balETH uint64) state.BeaconState {
gwei := balETH * 1_000_000_000
balPerVal := params.BeaconConfig().MinActivationBalance
numVals := gwei / balPerVal
vals := make([]*eth.Validator, numVals)
bals := make([]uint64, numVals)
for i := uint64(0); i < numVals; i++ {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
vals[i] = &eth.Validator{
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: balPerVal,
WithdrawalCredentials: wc,
}
bals[i] = balPerVal
}
st, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: 10 * params.BeaconConfig().SlotsPerEpoch,
Validators: vals,
Balances: bals,
Fork: &eth.Fork{
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
},
})
require.NoError(t, err)
return st
}
func TestProcessConsolidationRequests(t *testing.T) {
tests := []struct {
name string

View File

@@ -5,7 +5,7 @@ import (
"testing"
fuzz "github.com/google/gofuzz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -23,7 +23,7 @@ func TestFuzzProcessDeposits_10000(t *testing.T) {
}
s, err := state_native.InitializeFromProtoUnsafeElectra(state)
require.NoError(t, err)
r, err := blocks.ProcessDeposits(ctx, s, deposits)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposits)
}
@@ -40,7 +40,7 @@ func TestFuzzProcessDeposit_10000(t *testing.T) {
fuzzer.Fuzz(deposit)
s, err := state_native.InitializeFromProtoUnsafeElectra(state)
require.NoError(t, err)
r, err := blocks.ProcessDeposit(s, deposit, true)
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, deposit)
}

View File

@@ -2,7 +2,6 @@ package electra
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -17,150 +16,11 @@ import (
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
// ProcessDeposits is one of the operations performed on each processed
// beacon block to verify queued validators from the Ethereum 1.0 Deposit Contract
// into the beacon chain.
//
// Spec pseudocode definition:
//
// For each deposit in block.body.deposits:
// process_deposit(state, deposit)
func ProcessDeposits(
ctx context.Context,
beaconState state.BeaconState,
deposits []*ethpb.Deposit,
) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "electra.ProcessDeposits")
defer span.End()
// Attempt to verify all deposit signatures at once, if this fails then fall back to processing
// individual deposits with signature verification enabled.
batchVerified, err := blocks.BatchVerifyDepositsSignatures(ctx, deposits)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signatures in batch")
}
for _, d := range deposits {
if d == nil || d.Data == nil {
return nil, errors.New("got a nil deposit in block")
}
beaconState, err = ProcessDeposit(beaconState, d, batchVerified)
if err != nil {
return nil, errors.Wrapf(err, "could not process deposit from %#x", bytesutil.Trunc(d.Data.PublicKey))
}
}
return beaconState, nil
}
// ProcessDeposit takes in a deposit object and inserts it
// into the registry as a new validator or balance change.
// Returns the resulting state, a boolean to indicate whether or not the deposit
// resulted in a new validator entry into the beacon state, and any error.
//
// Spec pseudocode definition:
// def process_deposit(state: BeaconState, deposit: Deposit) -> None:
//
// # Verify the Merkle branch
// assert is_valid_merkle_branch(
// leaf=hash_tree_root(deposit.data),
// branch=deposit.proof,
// depth=DEPOSIT_CONTRACT_TREE_DEPTH + 1, # Add 1 for the List length mix-in
// index=state.eth1_deposit_index,
// root=state.eth1_data.deposit_root,
// )
//
// # Deposits must be processed in order
// state.eth1_deposit_index += 1
//
// apply_deposit(
// state=state,
// pubkey=deposit.data.pubkey,
// withdrawal_credentials=deposit.data.withdrawal_credentials,
// amount=deposit.data.amount,
// signature=deposit.data.signature,
// )
func ProcessDeposit(beaconState state.BeaconState, deposit *ethpb.Deposit, verifySignature bool) (state.BeaconState, error) {
if err := blocks.VerifyDeposit(beaconState, deposit); err != nil {
if deposit == nil || deposit.Data == nil {
return nil, err
}
return nil, errors.Wrapf(err, "could not verify deposit from %#x", bytesutil.Trunc(deposit.Data.PublicKey))
}
if err := beaconState.SetEth1DepositIndex(beaconState.Eth1DepositIndex() + 1); err != nil {
return nil, err
}
return ApplyDeposit(beaconState, deposit.Data, verifySignature)
}
// ApplyDeposit
// def apply_deposit(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> None:
// validator_pubkeys = [v.pubkey for v in state.validators]
// if pubkey not in validator_pubkeys:
//
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// if is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature):
// add_validator_to_registry(state, pubkey, withdrawal_credentials, amount)
//
// else:
//
// # Increase balance by deposit amount
// index = ValidatorIndex(validator_pubkeys.index(pubkey))
// state.pending_balance_deposits.append(PendingBalanceDeposit(index=index, amount=amount)) # [Modified in Electra:EIP-7251]
// # Check if valid deposit switch to compounding credentials
//
// if ( is_compounding_withdrawal_credential(withdrawal_credentials) and has_eth1_withdrawal_credential(state.validators[index])
//
// and is_valid_deposit_signature(pubkey, withdrawal_credentials, amount, signature)
// ):
// switch_to_compounding_validator(state, index)
func ApplyDeposit(beaconState state.BeaconState, data *ethpb.Deposit_Data, verifySignature bool) (state.BeaconState, error) {
pubKey := data.PublicKey
amount := data.Amount
withdrawalCredentials := data.WithdrawalCredentials
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
if verifySignature {
valid, err := IsValidDepositSignature(data)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signature")
}
if !valid {
return beaconState, nil
}
}
if err := AddValidatorToRegistry(beaconState, pubKey, withdrawalCredentials, amount); err != nil {
return nil, errors.Wrap(err, "could not add validator to registry")
}
} else {
// no validation on top-ups (phase0 feature). no validation before state change
if err := beaconState.AppendPendingBalanceDeposit(index, amount); err != nil {
return nil, err
}
val, err := beaconState.ValidatorAtIndex(index)
if err != nil {
return nil, err
}
if helpers.IsCompoundingWithdrawalCredential(withdrawalCredentials) && helpers.HasETH1WithdrawalCredential(val) {
if verifySignature {
valid, err := IsValidDepositSignature(data)
if err != nil {
return nil, errors.Wrap(err, "could not verify deposit signature")
}
if !valid {
return beaconState, nil
}
}
if err := SwitchToCompoundingValidator(beaconState, index); err != nil {
return nil, errors.Wrap(err, "could not switch to compound validator")
}
}
}
return beaconState, nil
}
// IsValidDepositSignature returns whether deposit_data is valid
// def is_valid_deposit_signature(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64, signature: BLSSignature) -> bool:
//
@@ -185,152 +45,364 @@ func verifyDepositDataSigningRoot(obj *ethpb.Deposit_Data, domain []byte) error
return deposit.VerifyDepositSignature(obj, domain)
}
// ProcessPendingDeposits implements the spec definition below. This method mutates the state.
// Iterating over the `pending_deposits` queue, this function runs the following checks before applying each pending deposit:
// 1. All Eth1 bridge deposits are processed before the first deposit request gets processed.
// 2. Deposit position in the queue is finalized.
// 3. Deposit does not exceed the `MAX_PENDING_DEPOSITS_PER_EPOCH` limit.
// 4. Deposit does not exceed the activation churn limit.
//
// Spec definition:
//
// def process_pending_deposits(state: BeaconState) -> None:
//
// next_epoch = Epoch(get_current_epoch(state) + 1)
// available_for_processing = state.deposit_balance_to_consume + get_activation_exit_churn_limit(state)
// processed_amount = 0
// next_deposit_index = 0
// deposits_to_postpone = []
// is_churn_limit_reached = False
// finalized_slot = compute_start_slot_at_epoch(state.finalized_checkpoint.epoch)
//
// for deposit in state.pending_deposits:
// # Do not process deposit requests if Eth1 bridge deposits are not yet applied.
// if (
// # Is deposit request
// deposit.slot > GENESIS_SLOT and
// # There are pending Eth1 bridge deposits
// state.eth1_deposit_index < state.deposit_requests_start_index
// ):
// break
//
// # Check if deposit has been finalized, otherwise, stop processing.
// if deposit.slot > finalized_slot:
// break
//
// # Check if number of processed deposits has not reached the limit, otherwise, stop processing.
// if next_deposit_index >= MAX_PENDING_DEPOSITS_PER_EPOCH:
// break
//
// # Read validator state
// is_validator_exited = False
// is_validator_withdrawn = False
// validator_pubkeys = [v.pubkey for v in state.validators]
// if deposit.pubkey in validator_pubkeys:
// validator = state.validators[ValidatorIndex(validator_pubkeys.index(deposit.pubkey))]
// is_validator_exited = validator.exit_epoch < FAR_FUTURE_EPOCH
// is_validator_withdrawn = validator.withdrawable_epoch < next_epoch
//
// if is_validator_withdrawn:
// # Deposited balance will never become active. Increase balance but do not consume churn
// apply_pending_deposit(state, deposit)
// elif is_validator_exited:
// # Validator is exiting, postpone the deposit until after withdrawable epoch
// deposits_to_postpone.append(deposit)
// else:
// # Check if deposit fits in the churn, otherwise, do no more deposit processing in this epoch.
// is_churn_limit_reached = processed_amount + deposit.amount > available_for_processing
// if is_churn_limit_reached:
// break
//
// # Consume churn and apply deposit.
// processed_amount += deposit.amount
// apply_pending_deposit(state, deposit)
//
// # Regardless of how the deposit was handled, we move on in the queue.
// next_deposit_index += 1
//
// state.pending_deposits = state.pending_deposits[next_deposit_index:] + deposits_to_postpone
//
// # Accumulate churn only if the churn limit has been hit.
// if is_churn_limit_reached:
// state.deposit_balance_to_consume = available_for_processing - processed_amount
// else:
// state.deposit_balance_to_consume = Gwei(0)
func ProcessPendingDeposits(ctx context.Context, st state.BeaconState, activeBalance primitives.Gwei) error {
_, span := trace.StartSpan(ctx, "electra.ProcessPendingDeposits")
defer span.End()
if st == nil || st.IsNil() {
return errors.New("nil state")
}
// constants & initializations
nextEpoch := slots.ToEpoch(st.Slot()) + 1
processedAmount := uint64(0)
nextDepositIndex := uint64(0)
isChurnLimitReached := false
var pendingDepositsToBatchVerify []*ethpb.PendingDeposit
var pendingDepositsToPostpone []*eth.PendingDeposit
depBalToConsume, err := st.DepositBalanceToConsume()
if err != nil {
return errors.Wrap(err, "could not get deposit balance to consume")
}
availableForProcessing := depBalToConsume + helpers.ActivationExitChurnLimit(activeBalance)
finalizedSlot, err := slots.EpochStart(st.FinalizedCheckpoint().Epoch)
if err != nil {
return errors.Wrap(err, "could not get finalized slot")
}
startIndex, err := st.DepositRequestsStartIndex()
if err != nil {
return errors.Wrap(err, "could not get starting pendingDeposit index")
}
pendingDeposits, err := st.PendingDeposits()
if err != nil {
return err
}
for _, pendingDeposit := range pendingDeposits {
// Do not process pendingDeposit requests if Eth1 bridge deposits are not yet applied.
if pendingDeposit.Slot > params.BeaconConfig().GenesisSlot && st.Eth1DepositIndex() < startIndex {
break
}
// Check if pendingDeposit has been finalized, otherwise, stop processing.
if pendingDeposit.Slot > finalizedSlot {
break
}
// Check if number of processed deposits has not reached the limit, otherwise, stop processing.
if nextDepositIndex >= params.BeaconConfig().MaxPendingDepositsPerEpoch {
break
}
var isValidatorExited bool
var isValidatorWithdrawn bool
index, found := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if found {
val, err := st.ValidatorAtIndexReadOnly(index)
if err != nil {
return errors.Wrap(err, "could not get validator")
}
isValidatorExited = val.ExitEpoch() < params.BeaconConfig().FarFutureEpoch
isValidatorWithdrawn = val.WithdrawableEpoch() < nextEpoch
}
if isValidatorWithdrawn {
// note: the validator will never be active, just increase the balance
if err := helpers.IncreaseBalance(st, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else if isValidatorExited {
pendingDepositsToPostpone = append(pendingDepositsToPostpone, pendingDeposit)
} else {
// Validator is not exiting, attempt to process deposit.
isChurnLimitReached = primitives.Gwei(processedAmount+pendingDeposit.Amount) > availableForProcessing
if isChurnLimitReached {
break
}
processedAmount += pendingDeposit.Amount
// note: the following code deviates from the spec in order to perform batch signature verification
if found {
if err := helpers.IncreaseBalance(st, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
// Collect deposit for batch signature verification
pendingDepositsToBatchVerify = append(pendingDepositsToBatchVerify, pendingDeposit)
}
}
// Regardless of how the pendingDeposit was handled, we move on in the queue.
nextDepositIndex++
}
// Perform batch signature verification on pending deposits that require validator registration
if err = batchProcessNewPendingDeposits(ctx, st, pendingDepositsToBatchVerify); err != nil {
return errors.Wrap(err, "could not process pending deposits with new public keys")
}
// Combined operation:
// - state.pending_deposits = state.pending_deposits[next_deposit_index:]
// - state.pending_deposits += deposits_to_postpone
pendingDeposits = append(pendingDeposits[nextDepositIndex:], pendingDepositsToPostpone...)
if err := st.SetPendingDeposits(pendingDeposits); err != nil {
return errors.Wrap(err, "could not set pending deposits")
}
// Accumulate churn only if the churn limit has been hit.
if isChurnLimitReached {
return st.SetDepositBalanceToConsume(availableForProcessing - primitives.Gwei(processedAmount))
}
return st.SetDepositBalanceToConsume(0)
}
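The churn bookkeeping above is what the "more deposits than balance to consume" test later in this diff exercises: 20 pending deposits of availableForProcessing/10 each fill the churn after 10 iterations, the loop breaks, and the remaining 10 stay queued. A self-contained sketch of just that arithmetic, with illustrative figures:
package main

import "fmt"

func main() {
    available := uint64(10_000)     // illustrative availableForProcessing for the epoch
    depositAmount := available / 10 // each pending deposit is a tenth of the churn
    pending := 20

    processed := uint64(0)
    applied := 0
    for i := 0; i < pending; i++ {
        if processed+depositAmount > available {
            break // churn limit reached; the rest stays in the queue
        }
        processed += depositAmount
        applied++
    }
    // 10 applied, 10 postponed to the next epoch, nothing left to consume.
    fmt.Println(applied, pending-applied, available-processed)
}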
// batchProcessNewPendingDeposits should only be used to process new deposits that require validator registration
func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState, pendingDeposits []*ethpb.PendingDeposit) error {
// Return early if there are no deposits to process
if len(pendingDeposits) == 0 {
return nil
}
// Try batch verification of all deposit signatures
allSignaturesVerified, err := blocks.BatchVerifyPendingDepositsSignatures(ctx, pendingDeposits)
if err != nil {
return errors.Wrap(err, "batch signature verification failed")
}
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
if !allSignaturesVerified {
validSignature, err = blocks.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: bytesutil.SafeCopyBytes(pendingDeposit.PublicKey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(pendingDeposit.WithdrawalCredentials),
Amount: pendingDeposit.Amount,
Signature: bytesutil.SafeCopyBytes(pendingDeposit.Signature),
})
if err != nil {
return errors.Wrap(err, "individual deposit signature verification failed")
}
}
// Add validator to the registry if the signature is valid
if validSignature {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
return nil
}
// ApplyPendingDeposit implements the spec definition below.
// Note: this function is NOT used by ProcessPendingDeposits, which inlines a simplified version of this logic to keep its batch signature verification readable.
//
// Spec Definition:
//
// def apply_pending_deposit(state: BeaconState, deposit: PendingDeposit) -> None:
//
// """
// Applies ``deposit`` to the ``state``.
// """
// validator_pubkeys = [v.pubkey for v in state.validators]
// if deposit.pubkey not in validator_pubkeys:
// # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
// if is_valid_deposit_signature(
// deposit.pubkey,
// deposit.withdrawal_credentials,
// deposit.amount,
// deposit.signature
// ):
// add_validator_to_registry(state, deposit.pubkey, deposit.withdrawal_credentials, deposit.amount)
// else:
// validator_index = ValidatorIndex(validator_pubkeys.index(deposit.pubkey))
// # Increase balance
// increase_balance(state, validator_index, deposit.amount)
func ApplyPendingDeposit(ctx context.Context, st state.BeaconState, deposit *ethpb.PendingDeposit) error {
_, span := trace.StartSpan(ctx, "electra.ApplyPendingDeposit")
defer span.End()
index, ok := st.ValidatorIndexByPubkey(bytesutil.ToBytes48(deposit.PublicKey))
if !ok {
verified, err := blocks.IsValidDepositSignature(&ethpb.Deposit_Data{
PublicKey: bytesutil.SafeCopyBytes(deposit.PublicKey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(deposit.WithdrawalCredentials),
Amount: deposit.Amount,
Signature: bytesutil.SafeCopyBytes(deposit.Signature),
})
if err != nil {
return errors.Wrap(err, "could not verify deposit signature")
}
if verified {
if err := AddValidatorToRegistry(st, deposit.PublicKey, deposit.WithdrawalCredentials, deposit.Amount); err != nil {
return errors.Wrap(err, "could not add validator to registry")
}
}
return nil
}
return helpers.IncreaseBalance(st, index, deposit.Amount)
}
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState, pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> None:
//
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials, amount) # [Modified in Electra:EIP7251]
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, amount)
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.inactivity_scores, index, uint64(0))
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := GetValidatorFromDeposit(pubKey, withdrawalCredentials, amount)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
if err := beaconState.AppendBalance(amount); err != nil {
return err
}
// only active in altair and only when it's a new validator (after append balance)
if beaconState.Version() >= version.Altair {
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
if err := beaconState.AppendCurrentParticipationBits(0); err != nil {
return err
}
}
return nil
}
// GetValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32, amount: uint64) -> Validator:
//
// validator = Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=Gwei(0),
// )
//
// # [Modified in Electra:EIP7251]
// max_effective_balance = get_max_effective_balance(validator)
// validator.effective_balance = min(amount - amount % EFFECTIVE_BALANCE_INCREMENT, max_effective_balance)
//
// return validator
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
validator := &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 0,
}
maxEffectiveBalance := helpers.ValidatorMaxEffectiveBalance(validator)
validator.EffectiveBalance = min(amount-(amount%params.BeaconConfig().EffectiveBalanceIncrement), maxEffectiveBalance)
return validator
}
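The Electra-specific part of the formula above is that the cap now depends on the withdrawal credential prefix: roughly 32 ETH for 0x00/0x01 credentials and 2048 ETH for 0x02 compounding credentials on mainnet. A standalone sketch of that selection, with the constants spelled out as assumptions instead of being read from params or helpers:
package main

import "fmt"

const (
    minActivationBalance = uint64(32_000_000_000)    // assumed MIN_ACTIVATION_BALANCE (32 ETH in Gwei)
    maxEffectiveElectra  = uint64(2_048_000_000_000) // assumed MAX_EFFECTIVE_BALANCE_ELECTRA (2048 ETH in Gwei)
    increment            = uint64(1_000_000_000)     // assumed EFFECTIVE_BALANCE_INCREMENT (1 ETH in Gwei)
    compoundingPrefix    = byte(0x02)                // assumed compounding withdrawal prefix byte
)

// effectiveBalance mimics the balance computation above for the two credential types.
func effectiveBalance(withdrawalCredentials []byte, amount uint64) uint64 {
    maxEB := minActivationBalance
    if len(withdrawalCredentials) > 0 && withdrawalCredentials[0] == compoundingPrefix {
        maxEB = maxEffectiveElectra
    }
    eb := amount - amount%increment
    if eb > maxEB {
        eb = maxEB
    }
    return eb
}

func main() {
    fmt.Println(effectiveBalance([]byte{0x01}, 40_000_000_000)) // capped at 32 ETH
    fmt.Println(effectiveBalance([]byte{0x02}, 40_000_000_000)) // 40 ETH, well under the 2048 ETH cap
}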
// ProcessDepositRequests is a function as part of electra to process execution layer deposits
func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState, requests []*enginev1.DepositRequest) (state.BeaconState, error) {
_, span := trace.StartSpan(ctx, "electra.ProcessDepositRequests")
defer span.End()
if len(requests) == 0 {
return beaconState, nil
}
var err error
for _, receipt := range requests {
beaconState, err = processDepositRequest(beaconState, receipt)
if err != nil {
return nil, errors.Wrap(err, "could not apply deposit request")
}
@@ -342,30 +414,38 @@ func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState,
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index
// if state.deposit_requests_start_index == UNSET_DEPOSIT_REQUESTS_START_INDEX:
// state.deposit_requests_start_index = deposit_request.index
//
// # Create pending deposit
// state.pending_deposits.append(PendingDeposit(
// pubkey=deposit_request.pubkey,
// withdrawal_credentials=deposit_request.withdrawal_credentials,
// amount=deposit_request.amount,
// signature=deposit_request.signature,
// slot=state.slot,
// ))
func processDepositRequest(beaconState state.BeaconState, request *enginev1.DepositRequest) (state.BeaconState, error) {
requestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, errors.Wrap(err, "could not get deposit requests start index")
}
if requestsStartIndex == params.BeaconConfig().UnsetDepositRequestsStartIndex {
if request == nil {
return nil, errors.New("nil deposit request")
}
if err := beaconState.SetDepositRequestsStartIndex(request.Index); err != nil {
return nil, errors.Wrap(err, "could not set deposit requests start index")
}
}
if err := beaconState.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: bytesutil.SafeCopyBytes(request.Pubkey),
Amount: request.Amount,
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Signature: bytesutil.SafeCopyBytes(request.Signature),
Slot: beaconState.Slot(),
}); err != nil {
return nil, errors.Wrap(err, "could not append deposit request")
}
return beaconState, nil
}
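Note that processDepositRequest only queues the request; balances change later. Recording state.slot on the PendingDeposit is what lets ProcessPendingDeposits hold a deposit request back until its slot is finalized and the legacy Eth1 bridge queue is drained. A small sketch of that gating condition with hypothetical numbers (32 slots per epoch assumed):
package main

import "fmt"

const slotsPerEpoch = uint64(32) // assumed mainnet SLOTS_PER_EPOCH

// processable mirrors the two break conditions ProcessPendingDeposits applies to a deposit request:
// all legacy Eth1 bridge deposits must be applied, and the request's slot must already be finalized.
func processable(depositSlot, finalizedEpoch, eth1DepositIndex, depositRequestsStartIndex uint64) bool {
    if eth1DepositIndex < depositRequestsStartIndex {
        return false // Eth1 bridge deposits still pending
    }
    finalizedSlot := finalizedEpoch * slotsPerEpoch
    return depositSlot <= finalizedSlot
}

func main() {
    // A request queued at slot 100 waits until epoch 4 (slot 128) is finalized.
    fmt.Println(processable(100, 3, 10, 10)) // false: finalized slot 96 is before slot 100
    fmt.Println(processable(100, 4, 10, 10)) // true: finalized slot 128 covers slot 100
}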

View File

@@ -4,15 +4,18 @@ import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
stateTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -20,7 +23,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
state state.BeaconState
@@ -48,17 +51,10 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
{
name: "more deposits than balance to consume processes partial deposits",
state: func() state.BeaconState {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
depositAmount := uint64(amountAvailForProcessing) / 10
st := stateWithPendingDeposits(t, 1_000, 20, depositAmount)
require.NoError(t, st.SetDepositBalanceToConsume(100))
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
@@ -74,25 +70,45 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
}
// Half of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 10, len(remaining))
},
},
{
name: "withdrawn validators should not consume churn",
state: func() state.BeaconState {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
depositAmount := uint64(amountAvailForProcessing)
// set the pending deposits to the maximum churn limit
st := stateWithPendingDeposits(t, 1_000, 2, depositAmount)
vals := st.Validators()
vals[1].WithdrawableEpoch = 0
require.NoError(t, st.SetValidators(vals))
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
// Validators 0 and 1 should have their balance increased
for i := primitives.ValidatorIndex(0); i < 2; i++ {
b, err := st.BalanceAtIndex(i)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(amountAvailForProcessing), b)
}
// All pending deposits should have been processed
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(remaining))
},
},
{
name: "less deposits than balance to consume processes all deposits",
state: func() state.BeaconState {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
depositAmount := uint64(amountAvailForProcessing) / 5
st := stateWithPendingDeposits(t, 1_000, 5, depositAmount)
require.NoError(t, st.SetDepositBalanceToConsume(0))
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
@@ -108,7 +124,74 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
}
// All of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(remaining))
},
},
{
name: "process pending deposit for unknown key, activates new key",
state: func() state.BeaconState {
st := stateWithActiveBalanceETH(t, 0)
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
dep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
require.NoError(t, st.SetPendingDeposits([]*eth.PendingDeposit{dep}))
require.Equal(t, 0, len(st.Validators()))
require.Equal(t, 0, len(st.Balances()))
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
res, err := st.DepositBalanceToConsume()
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), res)
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b)
// All of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(remaining))
// validator becomes active
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
},
},
{
name: "process excess balance that uses a point to infinity signature, processed as a topup",
state: func() state.BeaconState {
excessBalance := uint64(100)
st := stateWithActiveBalanceETH(t, 32)
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validators[0].PublicKey = sk.PublicKey().Marshal()
validators[0].WithdrawalCredentials = wc
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
dep.Signature = common.InfiniteSignature[:]
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, st.SetPendingDeposits([]*eth.PendingDeposit{dep}))
return st
}(),
check: func(t *testing.T, st state.BeaconState) {
res, err := st.DepositBalanceToConsume()
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), res)
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(100), b)
// All of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(remaining))
},
@@ -116,17 +199,10 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
{
name: "exiting validator deposit postponed",
state: func() state.BeaconState {
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
depositAmount := uint64(amountAvailForProcessing) / 5
st := stateWithPendingDeposits(t, 1_000, 5, depositAmount)
require.NoError(t, st.SetDepositBalanceToConsume(0))
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
v.ExitEpoch = 10
@@ -148,7 +224,7 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
// All of the balance deposits should have been processed, except validator index 0 was
// added back to the pending deposits queue.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(remaining))
},
@@ -156,15 +232,7 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
{
name: "exited validator balance increased",
state: func() state.BeaconState {
st := stateWithPendingDeposits(t, 1_000, 1, 1_000_000)
v, err := st.ValidatorAtIndex(0)
require.NoError(t, err)
v.ExitEpoch = 2
@@ -182,7 +250,7 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
require.Equal(t, uint64(1_100_000), b)
// All of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(remaining))
},
@@ -199,7 +267,7 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
tab, err = helpers.TotalActiveBalance(tt.state)
}
require.NoError(t, err)
err = electra.ProcessPendingDeposits(context.TODO(), tt.state, primitives.Gwei(tab))
require.Equal(t, tt.wantErr, err != nil, "wantErr=%v, got err=%s", tt.wantErr, err)
if tt.check != nil {
tt.check(t, tt.state)
@@ -208,6 +276,27 @@ func TestProcessPendingBalanceDeposits(t *testing.T) {
}
}
func TestBatchProcessNewPendingDeposits(t *testing.T) {
t.Run("invalid batch initiates correct individual validation", func(t *testing.T) {
st := stateWithActiveBalanceETH(t, 0)
require.Equal(t, 0, len(st.Validators()))
require.Equal(t, 0, len(st.Balances()))
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))
// successfully added to register
require.Equal(t, 1, len(st.Validators()))
require.Equal(t, 1, len(st.Balances()))
})
}
func TestProcessDepositRequests(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, 1)
sk, err := bls.RandKey()
@@ -220,7 +309,7 @@ func TestProcessDepositRequests(t *testing.T) {
})
t.Run("nil request errors", func(t *testing.T) {
_, err = electra.ProcessDepositRequests(context.Background(), st, []*enginev1.DepositRequest{nil})
require.ErrorContains(t, "got a nil DepositRequest", err)
require.ErrorContains(t, "nil deposit request", err)
})
vals := st.Validators()
@@ -230,7 +319,7 @@ func TestProcessDepositRequests(t *testing.T) {
bals := st.Balances()
bals[0] = params.BeaconConfig().MinActivationBalance + 2000
require.NoError(t, st.SetBalances(bals))
require.NoError(t, st.SetPendingDeposits(make([]*eth.PendingDeposit, 0))) // reset pending deposits as the deterministic state populates this already
withdrawalCred := make([]byte, 32)
withdrawalCred[0] = params.BeaconConfig().CompoundingWithdrawalPrefixByte
depositMessage := &eth.DepositMessage{
@@ -255,11 +344,10 @@ func TestProcessDepositRequests(t *testing.T) {
st, err = electra.ProcessDepositRequests(context.Background(), st, requests)
require.NoError(t, err)
pbd, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, uint64(1000), pbd[0].Amount)
}
func TestProcessDeposit_Electra_Simple(t *testing.T) {
@@ -284,9 +372,9 @@ func TestProcessDeposit_Electra_Simple(t *testing.T) {
},
})
require.NoError(t, err)
pdSt, err := blocks.ProcessDeposits(context.Background(), st, deps)
require.NoError(t, err)
pbd, err := pdSt.PendingDeposits()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, pbd[2].Amount)
require.Equal(t, 3, len(pbd))
@@ -322,7 +410,7 @@ func TestProcessDeposit_SkipsInvalidDeposit(t *testing.T) {
},
})
require.NoError(t, err)
newState, err := blocks.ProcessDeposit(beaconState, dep[0], false)
require.NoError(t, err, "Expected invalid block deposit to be ignored without error")
if newState.Eth1DepositIndex() != 1 {
@@ -359,42 +447,130 @@ func TestApplyDeposit_TopUps_WithBadSignature(t *testing.T) {
vals[0].PublicKey = sk.PublicKey().Marshal()
vals[0].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
require.NoError(t, st.SetValidators(vals))
adSt, err := blocks.ApplyDeposit(st, depositData, false)
require.NoError(t, err)
pbd, err := adSt.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, topUpAmount, pbd[0].Amount)
}
// stateWithActiveBalanceETH generates a mock beacon state given a balance in eth
func stateWithActiveBalanceETH(t *testing.T, balETH uint64) state.BeaconState {
gwei := balETH * 1_000_000_000
balPerVal := params.BeaconConfig().MinActivationBalance
numVals := gwei / balPerVal
vals := make([]*eth.Validator, numVals)
bals := make([]uint64, numVals)
for i := uint64(0); i < numVals; i++ {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
vals[i] = &eth.Validator{
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: balPerVal,
WithdrawalCredentials: wc,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
}
bals[i] = balPerVal
}
st, err := state_native.InitializeFromProtoUnsafeElectra(&eth.BeaconStateElectra{
Slot: 10 * params.BeaconConfig().SlotsPerEpoch,
Validators: vals,
Balances: bals,
Fork: &eth.Fork{
CurrentVersion: params.BeaconConfig().ElectraForkVersion,
},
})
require.NoError(t, err)
// set some fake finalized checkpoint
require.NoError(t, st.SetFinalizedCheckpoint(&eth.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
}))
return st
}
// stateWithPendingDeposits generates a mock beacon state with the requested number of pending deposits on top of an existing ETH balance
func stateWithPendingDeposits(t *testing.T, balETH uint64, numDeposits, amount uint64) state.BeaconState {
st := stateWithActiveBalanceETH(t, balETH)
deps := make([]*eth.PendingDeposit, numDeposits)
validators := st.Validators()
for i := 0; i < len(deps); i += 1 {
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, amount, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, st.SetPendingDeposits(deps))
return st
}
func TestApplyPendingDeposit_TopUp(t *testing.T) {
excessBalance := uint64(100)
st := stateWithActiveBalanceETH(t, 32)
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validators[0].PublicKey = sk.PublicKey().Marshal()
validators[0].WithdrawalCredentials = wc
dep := stateTesting.GeneratePendingDeposit(t, sk, excessBalance, bytesutil.ToBytes32(wc), 0)
dep.Signature = common.InfiniteSignature[:]
require.NoError(t, st.SetValidators(validators))
st.SaveValidatorIndices()
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance+uint64(excessBalance), b)
}
func TestApplyPendingDeposit_UnknownKey(t *testing.T) {
st := stateWithActiveBalanceETH(t, 0)
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
dep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
// activates new validator
require.Equal(t, 1, len(st.Validators()))
b, err := st.BalanceAtIndex(0)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b)
}
func TestApplyPendingDeposit_InvalidSignature(t *testing.T) {
st := stateWithActiveBalanceETH(t, 0)
sk, err := bls.RandKey()
require.NoError(t, err)
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
dep := &eth.PendingDeposit{
PublicKey: sk.PublicKey().Marshal(),
WithdrawalCredentials: wc,
Amount: 100,
}
require.Equal(t, 0, len(st.Validators()))
require.NoError(t, electra.ApplyPendingDeposit(context.Background(), st, dep))
// no validator added
require.Equal(t, 0, len(st.Validators()))
// no topup either
require.Equal(t, 0, len(st.Balances()))
}

View File

@@ -0,0 +1,3 @@
package electra
var BatchProcessNewPendingDeposits = batchProcessNewPendingDeposits

View File

@@ -44,7 +44,7 @@ var (
// process_registry_updates(state)
// process_slashings(state)
// process_eth1_data_reset(state)
// process_pending_deposits(state) # New in EIP7251
// process_pending_consolidations(state) # New in EIP7251
// process_effective_balance_updates(state)
// process_slashings_reset(state)
@@ -94,7 +94,7 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
return err
}
if err = ProcessPendingDeposits(ctx, state, primitives.Gwei(bp.ActiveCurrentEpoch)); err != nil {
return err
}
if err = ProcessPendingConsolidations(ctx, state); err != nil {

View File

@@ -66,7 +66,7 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := blocks.ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())

View File

@@ -57,14 +57,17 @@ func TestProcessEpoch_CanProcessElectra(t *testing.T) {
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, st.SetDepositBalanceToConsume(100))
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
validators := st.Validators()
deps := make([]*ethpb.PendingDeposit, 20)
for i := 0; i < len(deps); i += 1 {
deps[i] = &ethpb.PendingDeposit{
PublicKey: validators[i].PublicKey,
WithdrawalCredentials: validators[i].WithdrawalCredentials,
Amount: uint64(amountAvailForProcessing) / 10,
Slot: 0,
}
}
require.NoError(t, st.SetPendingDeposits(deps))
require.NoError(t, st.SetPendingConsolidations([]*ethpb.PendingConsolidation{
{
SourceIndex: 2,
@@ -108,7 +111,7 @@ func TestProcessEpoch_CanProcessElectra(t *testing.T) {
require.Equal(t, primitives.Gwei(100), res)
// Half of the balance deposits should have been processed.
remaining, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 10, len(remaining))

View File

@@ -101,7 +101,7 @@ import (
// earliest_exit_epoch=earliest_exit_epoch,
// consolidation_balance_to_consume=0,
// earliest_consolidation_epoch=compute_activation_exit_epoch(get_current_epoch(pre)),
// pending_balance_deposits=[],
// pending_deposits=[],
// pending_partial_withdrawals=[],
// pending_consolidations=[],
// )
@@ -272,7 +272,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: helpers.ConsolidationChurnLimit(primitives.Gwei(tab)),
EarliestConsolidationEpoch: helpers.ActivationExitEpoch(slots.ToEpoch(beaconState.Slot())),
PendingBalanceDeposits: make([]*ethpb.PendingBalanceDeposit, 0),
PendingDeposits: make([]*ethpb.PendingDeposit, 0),
PendingPartialWithdrawals: make([]*ethpb.PendingPartialWithdrawal, 0),
PendingConsolidations: make([]*ethpb.PendingConsolidation, 0),
}

View File

@@ -169,10 +169,10 @@ func TestUpgradeToElectra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitEpoch(slots.ToEpoch(preForkState.Slot())), earliestConsolidationEpoch)
pendingBalanceDeposits, err := mSt.PendingBalanceDeposits()
pendingDeposits, err := mSt.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 2, len(pendingBalanceDeposits))
require.Equal(t, uint64(1000), pendingBalanceDeposits[1].Amount)
require.Equal(t, 2, len(pendingDeposits))
require.Equal(t, uint64(1000), pendingDeposits[1].Amount)
numPendingPartialWithdrawals, err := mSt.NumPendingPartialWithdrawals()
require.NoError(t, err)

View File

@@ -7,83 +7,20 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// AddValidatorToRegistry updates the beacon state with validator information
// def add_validator_to_registry(state: BeaconState,
//
// pubkey: BLSPubkey,
// withdrawal_credentials: Bytes32,
// amount: uint64) -> None:
// index = get_index_for_new_validator(state)
// validator = get_validator_from_deposit(pubkey, withdrawal_credentials)
// set_or_append_list(state.validators, index, validator)
// set_or_append_list(state.balances, index, 0) # [Modified in Electra:EIP7251]
// set_or_append_list(state.previous_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.current_epoch_participation, index, ParticipationFlags(0b0000_0000))
// set_or_append_list(state.inactivity_scores, index, uint64(0))
// state.pending_balance_deposits.append(PendingBalanceDeposit(index=index, amount=amount)) # [New in Electra:EIP7251]
func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdrawalCredentials []byte, amount uint64) error {
val := ValidatorFromDeposit(pubKey, withdrawalCredentials)
if err := beaconState.AppendValidator(val); err != nil {
return err
}
index, ok := beaconState.ValidatorIndexByPubkey(bytesutil.ToBytes48(pubKey))
if !ok {
return errors.New("could not find validator in registry")
}
if err := beaconState.AppendBalance(0); err != nil {
return err
}
if err := beaconState.AppendPendingBalanceDeposit(index, amount); err != nil {
return err
}
if err := beaconState.AppendInactivityScore(0); err != nil {
return err
}
if err := beaconState.AppendPreviousParticipationBits(0); err != nil {
return err
}
return beaconState.AppendCurrentParticipationBits(0)
}
// ValidatorFromDeposit gets a new validator object with provided parameters
//
// def get_validator_from_deposit(pubkey: BLSPubkey, withdrawal_credentials: Bytes32) -> Validator:
//
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=0, # [Modified in Electra:EIP7251]
//
// )
func ValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte) *ethpb.Validator {
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 0, // [Modified in Electra:EIP7251]
}
}
// SwitchToCompoundingValidator
//
// Spec definition:
//
// def switch_to_compounding_validator(state: BeaconState, index: ValidatorIndex) -> None:
// validator = state.validators[index]
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
// def switch_to_compounding_validator(state: BeaconState, index: ValidatorIndex) -> None:
//
// validator = state.validators[index]
// if has_eth1_withdrawal_credential(validator):
// validator.withdrawal_credentials = COMPOUNDING_WITHDRAWAL_PREFIX + validator.withdrawal_credentials[1:]
// queue_excess_active_balance(state, index)
func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
v, err := s.ValidatorAtIndex(idx)
if err != nil {
@@ -102,18 +39,24 @@ func SwitchToCompoundingValidator(s state.BeaconState, idx primitives.ValidatorI
return nil
}
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending balance deposit.
// QueueExcessActiveBalance queues validators with balances above the min activation balance and adds to pending deposit.
//
// Spec definition:
//
// def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// if balance > MIN_ACTIVATION_BALANCE:
// excess_balance = balance - MIN_ACTIVATION_BALANCE
// state.balances[index] = MIN_ACTIVATION_BALANCE
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=excess_balance)
// )
// def queue_excess_active_balance(state: BeaconState, index: ValidatorIndex) -> None:
//
// balance = state.balances[index]
// if balance > MIN_ACTIVATION_BALANCE:
// excess_balance = balance - MIN_ACTIVATION_BALANCE
// state.balances[index] = MIN_ACTIVATION_BALANCE
// validator = state.validators[index]
// state.pending_deposits.append(PendingDeposit(
// pubkey=validator.pubkey,
// withdrawal_credentials=validator.withdrawal_credentials,
// amount=excess_balance,
// signature=bls.G2_POINT_AT_INFINITY,
// slot=GENESIS_SLOT,
// ))
func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex) error {
bal, err := s.BalanceAtIndex(idx)
if err != nil {
@@ -121,11 +64,21 @@ func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex
}
if bal > params.BeaconConfig().MinActivationBalance {
excessBalance := bal - params.BeaconConfig().MinActivationBalance
if err := s.UpdateBalancesAtIndex(idx, params.BeaconConfig().MinActivationBalance); err != nil {
return err
}
return s.AppendPendingBalanceDeposit(idx, excessBalance)
excessBalance := bal - params.BeaconConfig().MinActivationBalance
val, err := s.ValidatorAtIndex(idx)
if err != nil {
return err
}
return s.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: val.PublicKey,
WithdrawalCredentials: val.WithdrawalCredentials,
Amount: excessBalance,
Signature: common.InfiniteSignature[:],
Slot: params.BeaconConfig().GenesisSlot,
})
}
return nil
}
@@ -134,15 +87,21 @@ func QueueExcessActiveBalance(s state.BeaconState, idx primitives.ValidatorIndex
//
// Spec definition:
//
// def queue_entire_balance_and_reset_validator(state: BeaconState, index: ValidatorIndex) -> None:
// balance = state.balances[index]
// state.balances[index] = 0
// validator = state.validators[index]
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// state.pending_balance_deposits.append(
// PendingBalanceDeposit(index=index, amount=balance)
// )
// def queue_entire_balance_and_reset_validator(state: BeaconState, index: ValidatorIndex) -> None:
//
// balance = state.balances[index]
// state.balances[index] = 0
// validator = state.validators[index]
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// state.pending_deposits.append(PendingDeposit(
// pubkey=validator.pubkey,
// withdrawal_credentials=validator.withdrawal_credentials,
// amount=balance,
// signature=bls.G2_POINT_AT_INFINITY,
// slot=GENESIS_SLOT,
//
// ))
//
//nolint:dupword
func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.ValidatorIndex) error {
@@ -166,5 +125,11 @@ func QueueEntireBalanceAndResetValidator(s state.BeaconState, idx primitives.Val
return err
}
return s.AppendPendingBalanceDeposit(idx, bal)
return s.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: v.PublicKey,
WithdrawalCredentials: v.WithdrawalCredentials,
Amount: bal,
Signature: common.InfiniteSignature[:],
Slot: params.BeaconConfig().GenesisSlot,
})
}
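Both helpers above now record the queued balance as a full PendingDeposit keyed by pubkey and withdrawal credentials, signed with the BLS G2 point at infinity and stamped with the genesis slot, instead of the old index-plus-amount pair. A stripped-down sketch of that record construction follows; the struct and helper here are invented for illustration, and the real code goes through Prysm's protobufs and state accessors.

package main

import "fmt"

const minActivationBalance = uint64(32_000_000_000) // Gwei, per MIN_ACTIVATION_BALANCE

// pendingDeposit mirrors the shape of the new PendingDeposit record.
type pendingDeposit struct {
	pubKey                []byte
	withdrawalCredentials []byte
	amount                uint64
	signature             []byte // G2 point at infinity marks an internally generated deposit
	slot                  uint64 // genesis slot for balance moved around inside the state
}

// Compressed encoding of the BLS12-381 G2 point at infinity: 0xc0 followed by 95 zero bytes.
var infiniteSignature = append([]byte{0xc0}, make([]byte, 95)...)

// queueExcessActiveBalance trims a balance down to the activation minimum and
// queues the excess as a pending deposit against the same validator keys.
func queueExcessActiveBalance(balance uint64, pubKey, creds []byte) (newBalance uint64, dep *pendingDeposit) {
	if balance <= minActivationBalance {
		return balance, nil
	}
	excess := balance - minActivationBalance
	return minActivationBalance, &pendingDeposit{
		pubKey:                pubKey,
		withdrawalCredentials: creds,
		amount:                excess,
		signature:             infiniteSignature,
		slot:                  0, // genesis slot
	}
}

func main() {
	bal, dep := queueExcessActiveBalance(minActivationBalance+1_000, []byte{0xaa}, []byte{0x01})
	fmt.Println(bal, dep.amount) // 32000000000 1000
}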

View File

@@ -6,7 +6,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -14,20 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestAddValidatorToRegistry(t *testing.T) {
st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
require.NoError(t, err)
require.NoError(t, electra.AddValidatorToRegistry(st, make([]byte, fieldparams.BLSPubkeyLength), make([]byte, fieldparams.RootLength), 100))
balances := st.Balances()
require.Equal(t, 1, len(balances))
require.Equal(t, uint64(0), balances[0])
pbds, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbds))
require.Equal(t, uint64(100), pbds[0].Amount)
require.Equal(t, primitives.ValidatorIndex(0), pbds[0].Index)
}
func TestSwitchToCompoundingValidator(t *testing.T) {
s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
Validators: []*eth.Validator{
@@ -60,7 +45,7 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
b, err := s.BalanceAtIndex(1)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was changed")
pbd, err := s.PendingBalanceDeposits()
pbd, err := s.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 0, len(pbd), "pending balance deposits should be empty")
@@ -69,11 +54,10 @@ func TestSwitchToCompoundingValidator(t *testing.T) {
b, err = s.BalanceAtIndex(2)
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MinActivationBalance, b, "balance was not changed")
pbd, err = s.PendingBalanceDeposits()
pbd, err = s.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd), "pending balance deposits should have one element")
require.Equal(t, uint64(100_000), pbd[0].Amount, "pending balance deposit amount is incorrect")
require.Equal(t, primitives.ValidatorIndex(2), pbd[0].Index, "pending balance deposit index is incorrect")
}
func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
@@ -97,11 +81,10 @@ func TestQueueEntireBalanceAndResetValidator(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(0), v.EffectiveBalance, "effective balance was not reset")
require.Equal(t, params.BeaconConfig().FarFutureEpoch, v.ActivationEligibilityEpoch, "activation eligibility epoch was not reset")
pbd, err := s.PendingBalanceDeposits()
pbd, err := s.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd), "pending balance deposits should have one element")
require.Equal(t, params.BeaconConfig().MinActivationBalance+100_000, pbd[0].Amount, "pending balance deposit amount is incorrect")
require.Equal(t, primitives.ValidatorIndex(0), pbd[0].Index, "pending balance deposit index is incorrect")
}
func TestSwitchToCompoundingValidator_Ok(t *testing.T) {
@@ -114,7 +97,7 @@ func TestSwitchToCompoundingValidator_Ok(t *testing.T) {
require.NoError(t, st.SetBalances(bals))
require.NoError(t, electra.SwitchToCompoundingValidator(st, 0))
pbd, err := st.PendingBalanceDeposits()
pbd, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1010), pbd[0].Amount) // appends it at the end
val, err := st.ValidatorAtIndex(0)
@@ -132,7 +115,7 @@ func TestQueueExcessActiveBalance_Ok(t *testing.T) {
err := electra.QueueExcessActiveBalance(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
pbd, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, uint64(1000), pbd[0].Amount) // appends it at the end
@@ -149,7 +132,7 @@ func TestQueueEntireBalanceAndResetValidator_Ok(t *testing.T) {
err := electra.QueueEntireBalanceAndResetValidator(st, 0)
require.NoError(t, err)
pbd, err := st.PendingBalanceDeposits()
pbd, err := st.PendingDeposits()
require.NoError(t, err)
require.Equal(t, 1, len(pbd))
require.Equal(t, params.BeaconConfig().MinActivationBalance-1000, pbd[0].Amount)

View File

@@ -4,5 +4,5 @@ import "github.com/prysmaticlabs/prysm/v5/async/event"
// Notifier interface defines the methods of the service that provides beacon block operation updates to consumers.
type Notifier interface {
OperationFeed() *event.Feed
OperationFeed() event.SubscriberSender
}

View File

@@ -4,5 +4,5 @@ import "github.com/prysmaticlabs/prysm/v5/async/event"
// Notifier interface defines the methods of the service that provides state updates to consumers.
type Notifier interface {
StateFeed() *event.Feed
StateFeed() event.SubscriberSender
}
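The two Notifier interfaces above now return an event.SubscriberSender instead of the concrete *event.Feed, which keeps the feed swappable behind an interface. A generic sketch of that narrowing is below; the interface shape is an assumption modelled on go-ethereum's event.Feed (Subscribe plus Send), not Prysm's actual definition, and all concrete types here are toys.

package main

import "fmt"

// subscription is a minimal stand-in for event.Subscription.
type subscription interface{ Unsubscribe() }

// subscriberSender is the narrowed capability a Notifier needs to expose:
// consumers can subscribe, producers can send. (Assumed shape.)
type subscriberSender interface {
	Subscribe(ch chan<- int) subscription
	Send(value int) int
}

// chanFeed is a toy implementation backed by plain channels.
type chanFeed struct{ subs []chan<- int }

type chanSub struct{}

func (chanSub) Unsubscribe() {}

func (f *chanFeed) Subscribe(ch chan<- int) subscription {
	f.subs = append(f.subs, ch)
	return chanSub{}
}

func (f *chanFeed) Send(v int) int {
	for _, ch := range f.subs {
		ch <- v
	}
	return len(f.subs)
}

// notifier only advertises the interface, so tests can hand back a mock feed.
type notifier struct{ feed subscriberSender }

func (n *notifier) StateFeed() subscriberSender { return n.feed }

func main() {
	ch := make(chan int, 1)
	n := &notifier{feed: &chanFeed{}}
	n.StateFeed().Subscribe(ch)
	n.StateFeed().Send(42)
	fmt.Println(<-ch) // 42
}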

View File

@@ -63,6 +63,7 @@ go_test(
"validators_test.go",
"weak_subjectivity_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
shard_count = 2,
tags = ["CI_race_detection"],

View File

@@ -12,12 +12,10 @@ go_library(
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -29,11 +27,13 @@ go_test(
srcs = ["lightclient_test.go"],
deps = [
":go_default_library",
"//config/params:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -13,15 +13,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
v11 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
)
const (
@@ -29,16 +25,6 @@ const (
executionBranchNumOfLeaves = 4
)
// createLightClientFinalityUpdate - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_finality_update
// def create_light_client_finality_update(update: LightClientUpdate) -> LightClientFinalityUpdate:
//
// return LightClientFinalityUpdate(
// attested_header=update.attested_header,
// finalized_header=update.finalized_header,
// finality_branch=update.finality_branch,
// sync_aggregate=update.sync_aggregate,
// signature_slot=update.signature_slot,
// )
func createLightClientFinalityUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientFinalityUpdate {
finalityUpdate := &ethpbv2.LightClientFinalityUpdate{
AttestedHeader: update.AttestedHeader,
@@ -51,14 +37,6 @@ func createLightClientFinalityUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2
return finalityUpdate
}
// createLightClientOptimisticUpdate - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_optimistic_update
// def create_light_client_optimistic_update(update: LightClientUpdate) -> LightClientOptimisticUpdate:
//
// return LightClientOptimisticUpdate(
// attested_header=update.attested_header,
// sync_aggregate=update.sync_aggregate,
// signature_slot=update.signature_slot,
// )
func createLightClientOptimisticUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientOptimisticUpdate {
optimisticUpdate := &ethpbv2.LightClientOptimisticUpdate{
AttestedHeader: update.AttestedHeader,
@@ -74,9 +52,10 @@ func NewLightClientFinalityUpdateFromBeaconState(
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*ethpbv2.LightClientFinalityUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, finalizedBlock)
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return nil, err
}
@@ -89,8 +68,9 @@ func NewLightClientOptimisticUpdateFromBeaconState(
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*ethpbv2.LightClientOptimisticUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, nil)
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, nil)
if err != nil {
return nil, err
}
@@ -98,68 +78,12 @@ func NewLightClientOptimisticUpdateFromBeaconState(
return createLightClientOptimisticUpdate(update), nil
}
// NewLightClientUpdateFromBeaconState implements https://github.com/ethereum/consensus-specs/blob/d70dcd9926a4bbe987f1b4e65c3e05bd029fcfb8/specs/altair/light-client/full-node.md#create_light_client_update
// def create_light_client_update(state: BeaconState,
//
// block: SignedBeaconBlock,
// attested_state: BeaconState,
// finalized_block: Optional[SignedBeaconBlock]) -> LightClientUpdate:
// assert compute_epoch_at_slot(attested_state.slot) >= ALTAIR_FORK_EPOCH
// assert sum(block.message.body.sync_aggregate.sync_committee_bits) >= MIN_SYNC_COMMITTEE_PARTICIPANTS
//
// assert state.slot == state.latest_block_header.slot
// header = state.latest_block_header.copy()
// header.state_root = hash_tree_root(state)
// assert hash_tree_root(header) == hash_tree_root(block.message)
// update_signature_period = compute_sync_committee_period(compute_epoch_at_slot(block.message.slot))
//
// assert attested_state.slot == attested_state.latest_block_header.slot
// attested_header = attested_state.latest_block_header.copy()
// attested_header.state_root = hash_tree_root(attested_state)
// assert hash_tree_root(attested_header) == block.message.parent_root
// update_attested_period = compute_sync_committee_period(compute_epoch_at_slot(attested_header.slot))
//
// # `next_sync_committee` is only useful if the message is signed by the current sync committee
// if update_attested_period == update_signature_period:
// next_sync_committee = attested_state.next_sync_committee
// next_sync_committee_branch = compute_merkle_proof_for_state(attested_state, NEXT_SYNC_COMMITTEE_INDEX)
// else:
// next_sync_committee = SyncCommittee()
// next_sync_committee_branch = [Bytes32() for _ in range(floorlog2(NEXT_SYNC_COMMITTEE_INDEX))]
//
// # Indicate finality whenever possible
// if finalized_block is not None:
// if finalized_block.message.slot != GENESIS_SLOT:
// finalized_header = BeaconBlockHeader(
// slot=finalized_block.message.slot,
// proposer_index=finalized_block.message.proposer_index,
// parent_root=finalized_block.message.parent_root,
// state_root=finalized_block.message.state_root,
// body_root=hash_tree_root(finalized_block.message.body),
// )
// assert hash_tree_root(finalized_header) == attested_state.finalized_checkpoint.root
// else:
// assert attested_state.finalized_checkpoint.root == Bytes32()
// finalized_header = BeaconBlockHeader()
// finality_branch = compute_merkle_proof_for_state(attested_state, FINALIZED_ROOT_INDEX)
// else:
// finalized_header = BeaconBlockHeader()
// finality_branch = [Bytes32() for _ in range(floorlog2(FINALIZED_ROOT_INDEX))]
//
// return LightClientUpdate(
// attested_header=attested_header,
// next_sync_committee=next_sync_committee,
// next_sync_committee_branch=next_sync_committee_branch,
// finalized_header=finalized_header,
// finality_branch=finality_branch,
// sync_aggregate=block.message.body.sync_aggregate,
// signature_slot=block.message.slot,
// )
func NewLightClientUpdateFromBeaconState(
ctx context.Context,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientUpdate, error) {
// assert compute_epoch_at_slot(attested_state.slot) >= ALTAIR_FORK_EPOCH
attestedEpoch := slots.ToEpoch(attestedState.Slot())
@@ -223,73 +147,30 @@ func NewLightClientUpdateFromBeaconState(
if err != nil {
return nil, errors.Wrap(err, "could not get attested header root")
}
if attestedHeaderRoot != block.Block().ParentRoot() {
return nil, fmt.Errorf("attested header root %#x not equal to block parent root %#x", attestedHeaderRoot, block.Block().ParentRoot())
attestedBlockRoot, err := attestedBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get attested block root")
}
// assert hash_tree_root(attested_header) == hash_tree_root(attested_block.message) == block.message.parent_root
if attestedHeaderRoot != block.Block().ParentRoot() || attestedHeaderRoot != attestedBlockRoot {
return nil, fmt.Errorf("attested header root %#x not equal to block parent root %#x or attested block root %#x", attestedHeaderRoot, block.Block().ParentRoot(), attestedBlockRoot)
}
// update_attested_period = compute_sync_committee_period(compute_epoch_at_slot(attested_header.slot))
updateAttestedPeriod := slots.SyncCommitteePeriod(slots.ToEpoch(attestedHeader.Slot))
// update_attested_period = compute_sync_committee_period_at_slot(attested_block.message.slot)
updateAttestedPeriod := slots.SyncCommitteePeriod(slots.ToEpoch(attestedBlock.Block().Slot()))
// update = LightClientUpdate()
result, err := createDefaultLightClientUpdate(block.Block().Version())
result, err := createDefaultLightClientUpdate()
if err != nil {
return nil, errors.Wrap(err, "could not create default light client update")
}
// update.attested_header = block_to_light_client_header(attested_block)
blockHeader := &ethpbv1.BeaconBlockHeader{
Slot: attestedHeader.Slot,
ProposerIndex: attestedHeader.ProposerIndex,
ParentRoot: attestedHeader.ParentRoot,
StateRoot: attestedHeader.StateRoot,
BodyRoot: attestedHeader.BodyRoot,
}
switch block.Block().Version() {
case version.Altair, version.Bellatrix:
result.AttestedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderAltair{
HeaderAltair: &ethpbv2.LightClientHeader{Beacon: blockHeader},
},
}
case version.Capella:
executionPayloadHeader, err := getExecutionPayloadHeaderCapella(block)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
result.AttestedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderCapella{
HeaderCapella: &ethpbv2.LightClientHeaderCapella{
Beacon: blockHeader,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProof,
},
},
}
case version.Deneb:
executionPayloadHeader, err := getExecutionPayloadHeaderDeneb(block)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
result.AttestedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderDeneb{
HeaderDeneb: &ethpbv2.LightClientHeaderDeneb{
Beacon: blockHeader,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProof,
},
},
}
default:
return nil, fmt.Errorf("unsupported block version %s", version.String(block.Block().Version()))
attestedLightClientHeader, err := BlockToLightClientHeader(attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get attested light client header")
}
result.AttestedHeader = attestedLightClientHeader
// if update_attested_period == update_signature_period
if updateAttestedPeriod == updateSignaturePeriod {
@@ -319,70 +200,11 @@ func NewLightClientUpdateFromBeaconState(
// if finalized_block.message.slot != GENESIS_SLOT
if finalizedBlock.Block().Slot() != 0 {
// update.finalized_header = block_to_light_client_header(finalized_block)
v1alpha1FinalizedHeader, err := finalizedBlock.Header()
finalizedLightClientHeader, err := BlockToLightClientHeader(finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized header")
}
finalizedHeader := migration.V1Alpha1SignedHeaderToV1(v1alpha1FinalizedHeader).GetMessage()
finalizedHeaderRoot, err := finalizedHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get finalized header root")
}
switch block.Block().Version() {
case version.Altair, version.Bellatrix:
result.FinalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderAltair{
HeaderAltair: &ethpbv2.LightClientHeader{Beacon: finalizedHeader},
},
}
case version.Capella:
executionPayloadHeader, err := getExecutionPayloadHeaderCapella(finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
executionPayloadProof, err := blocks.PayloadProof(ctx, finalizedBlock.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
result.FinalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderCapella{
HeaderCapella: &ethpbv2.LightClientHeaderCapella{
Beacon: finalizedHeader,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProof,
},
},
}
case version.Deneb:
executionPayloadHeader, err := getExecutionPayloadHeaderDeneb(finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
executionPayloadProof, err := blocks.PayloadProof(ctx, finalizedBlock.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
result.FinalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderDeneb{
HeaderDeneb: &ethpbv2.LightClientHeaderDeneb{
Beacon: finalizedHeader,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProof,
},
},
}
default:
return nil, fmt.Errorf("unsupported block version %s", version.String(block.Block().Version()))
}
// assert hash_tree_root(update.finalized_header.beacon) == attested_state.finalized_checkpoint.root
if finalizedHeaderRoot != bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root) {
return nil, fmt.Errorf(
"finalized header root %#x not equal to attested finalized checkpoint root %#x",
finalizedHeaderRoot,
bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root),
)
return nil, errors.Wrap(err, "could not get finalized light client header")
}
result.FinalizedHeader = finalizedLightClientHeader
} else {
// assert attested_state.finalized_checkpoint.root == Bytes32()
if !bytes.Equal(attestedState.FinalizedCheckpoint().Root, make([]byte, 32)) {
@@ -411,7 +233,7 @@ func NewLightClientUpdateFromBeaconState(
return result, nil
}
func createDefaultLightClientUpdate(v int) (*ethpbv2.LightClientUpdate, error) {
func createDefaultLightClientUpdate() (*ethpbv2.LightClientUpdate, error) {
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
@@ -421,218 +243,26 @@ func createDefaultLightClientUpdate(v int) (*ethpbv2.LightClientUpdate, error) {
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
nextSyncCommitteeBranch := make([][]byte, fieldparams.NextSyncCommitteeBranchDepth)
for i := 0; i < fieldparams.NextSyncCommitteeBranchDepth; i++ {
nextSyncCommitteeBranch := make([][]byte, fieldparams.SyncCommitteeBranchDepth)
for i := 0; i < fieldparams.SyncCommitteeBranchDepth; i++ {
nextSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, executionBranchNumOfLeaves)
for i := 0; i < executionBranchNumOfLeaves; i++ {
executionBranch[i] = make([]byte, 32)
}
finalizedBlockHeader := &ethpbv1.BeaconBlockHeader{
Slot: 0,
ProposerIndex: 0,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
}
finalityBranch := make([][]byte, FinalityBranchNumOfLeaves)
for i := 0; i < FinalityBranchNumOfLeaves; i++ {
finalityBranch[i] = make([]byte, 32)
}
var finalizedHeader *ethpbv2.LightClientHeaderContainer
switch v {
case version.Altair, version.Bellatrix:
finalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderAltair{
HeaderAltair: &ethpbv2.LightClientHeader{
Beacon: finalizedBlockHeader,
},
},
}
case version.Capella:
finalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderCapella{
HeaderCapella: &ethpbv2.LightClientHeaderCapella{
Beacon: finalizedBlockHeader,
Execution: createEmptyExecutionPayloadHeaderCapella(),
ExecutionBranch: executionBranch,
},
},
}
case version.Deneb:
finalizedHeader = &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderDeneb{
HeaderDeneb: &ethpbv2.LightClientHeaderDeneb{
Beacon: finalizedBlockHeader,
Execution: createEmptyExecutionPayloadHeaderDeneb(),
ExecutionBranch: executionBranch,
},
},
}
default:
return nil, fmt.Errorf("unsupported block version %s", version.String(v))
}
return &ethpbv2.LightClientUpdate{
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalizedHeader: finalizedHeader,
FinalityBranch: finalityBranch,
}, nil
}
func createEmptyExecutionPayloadHeaderCapella() *enginev1.ExecutionPayloadHeaderCapella {
return &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BlockNumber: 0,
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
}
}
func createEmptyExecutionPayloadHeaderDeneb() *enginev1.ExecutionPayloadHeaderDeneb {
return &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: make([]byte, 32),
FeeRecipient: make([]byte, 20),
StateRoot: make([]byte, 32),
ReceiptsRoot: make([]byte, 32),
LogsBloom: make([]byte, 256),
PrevRandao: make([]byte, 32),
BlockNumber: 0,
GasLimit: 0,
GasUsed: 0,
Timestamp: 0,
ExtraData: make([]byte, 32),
BaseFeePerGas: make([]byte, 32),
BlockHash: make([]byte, 32),
TransactionsRoot: make([]byte, 32),
WithdrawalsRoot: make([]byte, 32),
}
}
func getExecutionPayloadHeaderCapella(block interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderCapella, error) {
payloadInterface, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution data")
}
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
if err != nil {
return nil, errors.Wrap(err, "could not get transactions")
}
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
transactionsRoot = transactionsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
withdrawalsRoot = withdrawalsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
execution := &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payloadInterface.ParentHash(),
FeeRecipient: payloadInterface.FeeRecipient(),
StateRoot: payloadInterface.StateRoot(),
ReceiptsRoot: payloadInterface.ReceiptsRoot(),
LogsBloom: payloadInterface.LogsBloom(),
PrevRandao: payloadInterface.PrevRandao(),
BlockNumber: payloadInterface.BlockNumber(),
GasLimit: payloadInterface.GasLimit(),
GasUsed: payloadInterface.GasUsed(),
Timestamp: payloadInterface.Timestamp(),
ExtraData: payloadInterface.ExtraData(),
BaseFeePerGas: payloadInterface.BaseFeePerGas(),
BlockHash: payloadInterface.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
return execution, nil
}
func getExecutionPayloadHeaderDeneb(block interfaces.ReadOnlySignedBeaconBlock) (*enginev1.ExecutionPayloadHeaderDeneb, error) {
payloadInterface, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution data")
}
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
if err != nil {
return nil, errors.Wrap(err, "could not get transactions")
}
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
transactionsRoot = transactionsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
withdrawalsRoot = withdrawalsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
execution := &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadInterface.ParentHash(),
FeeRecipient: payloadInterface.FeeRecipient(),
StateRoot: payloadInterface.StateRoot(),
ReceiptsRoot: payloadInterface.ReceiptsRoot(),
LogsBloom: payloadInterface.LogsBloom(),
PrevRandao: payloadInterface.PrevRandao(),
BlockNumber: payloadInterface.BlockNumber(),
GasLimit: payloadInterface.GasLimit(),
GasUsed: payloadInterface.GasUsed(),
Timestamp: payloadInterface.Timestamp(),
ExtraData: payloadInterface.ExtraData(),
BaseFeePerGas: payloadInterface.BaseFeePerGas(),
BlockHash: payloadInterface.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
return execution, nil
}
func ComputeTransactionsRoot(payload interfaces.ExecutionData) ([]byte, error) {
transactionsRoot, err := payload.TransactionsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
@@ -669,8 +299,45 @@ func ComputeWithdrawalsRoot(payload interfaces.ExecutionData) ([]byte, error) {
return withdrawalsRoot, nil
}
func BlockToLightClientHeaderAltair(block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeader, error) {
if block.Version() != version.Altair {
func BlockToLightClientHeader(block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderContainer, error) {
switch block.Version() {
case version.Altair, version.Bellatrix:
altairHeader, err := blockToLightClientHeaderAltair(block)
if err != nil {
return nil, errors.Wrap(err, "could not get header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderAltair{
HeaderAltair: altairHeader,
},
}, nil
case version.Capella:
capellaHeader, err := blockToLightClientHeaderCapella(context.Background(), block)
if err != nil {
return nil, errors.Wrap(err, "could not get capella header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderCapella{
HeaderCapella: capellaHeader,
},
}, nil
case version.Deneb, version.Electra:
denebHeader, err := blockToLightClientHeaderDeneb(context.Background(), block)
if err != nil {
return nil, errors.Wrap(err, "could not get header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderDeneb{
HeaderDeneb: denebHeader,
},
}, nil
default:
return nil, fmt.Errorf("unsupported block version %s", version.String(block.Version()))
}
}
func blockToLightClientHeaderAltair(block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeader, error) {
if block.Version() < version.Altair {
return nil, fmt.Errorf("block version is %s instead of Altair", version.String(block.Version()))
}
@@ -692,8 +359,8 @@ func BlockToLightClientHeaderAltair(block interfaces.ReadOnlySignedBeaconBlock)
}, nil
}
func BlockToLightClientHeaderCapella(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderCapella, error) {
if block.Version() != version.Capella {
func blockToLightClientHeaderCapella(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderCapella, error) {
if block.Version() < version.Capella {
return nil, fmt.Errorf("block version is %s instead of Capella", version.String(block.Version()))
}
@@ -754,9 +421,9 @@ func BlockToLightClientHeaderCapella(ctx context.Context, block interfaces.ReadO
}, nil
}
func BlockToLightClientHeaderDeneb(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderDeneb, error) {
if block.Version() != version.Deneb {
return nil, fmt.Errorf("block version is %s instead of Deneb", version.String(block.Version()))
func blockToLightClientHeaderDeneb(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderDeneb, error) {
if block.Version() < version.Deneb {
return nil, fmt.Errorf("block version is %s instead of Deneb/Electra", version.String(block.Version()))
}
payload, err := block.Block().Body().Execution()
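The refactor in this file replaces the per-call-site version switches with a single BlockToLightClientHeader dispatcher, and relaxes the per-fork guards from exact equality to "no earlier than" checks so later forks (Bellatrix over Altair, Electra over Deneb) can reuse an existing constructor. A minimal sketch of that dispatch pattern with placeholder types and fork constants:

package main

import "fmt"

// Fork versions in ascending order, loosely mirroring runtime/version.
const (
	altair = iota
	bellatrix
	capella
	deneb
	electra
)

type header struct {
	fork string
	slot uint64
}

type block struct {
	version int
	slot    uint64
}

// toAltairHeader accepts Altair or anything later, matching the relaxed
// "version < Altair" guard in the diff.
func toAltairHeader(b block) (header, error) {
	if b.version < altair {
		return header{}, fmt.Errorf("block version %d predates Altair", b.version)
	}
	return header{fork: "altair", slot: b.slot}, nil
}

func toCapellaHeader(b block) (header, error) {
	if b.version < capella {
		return header{}, fmt.Errorf("block version %d predates Capella", b.version)
	}
	return header{fork: "capella", slot: b.slot}, nil
}

func toDenebHeader(b block) (header, error) {
	if b.version < deneb {
		return header{}, fmt.Errorf("block version %d predates Deneb", b.version)
	}
	return header{fork: "deneb", slot: b.slot}, nil
}

// blockToLightClientHeader is the single dispatch point: Bellatrix reuses the
// Altair constructor and Electra reuses Deneb's, mirroring the switch above.
func blockToLightClientHeader(b block) (header, error) {
	switch b.version {
	case altair, bellatrix:
		return toAltairHeader(b)
	case capella:
		return toCapellaHeader(b)
	case deneb, electra:
		return toDenebHeader(b)
	default:
		return header{}, fmt.Errorf("unsupported block version %d", b.version)
	}
}

func main() {
	h, err := blockToLightClientHeader(block{version: electra, slot: 123})
	fmt.Println(h, err) // {deneb 123} <nil>
}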

File diff suppressed because it is too large.

View File

@@ -391,7 +391,7 @@ func altairOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
if _, err := b.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
@@ -418,7 +418,7 @@ func phase0Operations(
if err != nil {
return nil, errors.Wrap(err, "could not process block attestations")
}
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
if _, err := b.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())

View File

@@ -297,13 +297,16 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()
result := &pb.ExchangeCapabilities{}
var result []string
err := s.rpcClient.CallContext(ctx, &result, ExchangeCapabilities, supportedEngineEndpoints)
if err != nil {
return nil, handleRPCError(err)
}
var unsupported []string
for _, s1 := range supportedEngineEndpoints {
supported := false
for _, s2 := range result.SupportedMethods {
for _, s2 := range result {
if s1 == s2 {
supported = true
break
@@ -316,7 +319,7 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
if len(unsupported) != 0 {
log.Warnf("Please update client, detected the following unsupported engine methods: %s", unsupported)
}
return result.SupportedMethods, handleRPCError(err)
return result, handleRPCError(err)
}
// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
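The capabilities fix above decodes the engine_exchangeCapabilities result as a plain JSON array of method names rather than a wrapper object, then warns about any endpoint the connected execution client does not report. The comparison itself is a set difference; here is a standalone, map-based variant of the nested-loop check in the diff (the method names in main are only illustrative inputs):

package main

import "fmt"

// unsupportedMethods returns every method we rely on that is missing from the
// list the execution client reported back.
func unsupportedMethods(wanted, reported []string) []string {
	have := make(map[string]struct{}, len(reported))
	for _, m := range reported {
		have[m] = struct{}{}
	}
	var missing []string
	for _, m := range wanted {
		if _, ok := have[m]; !ok {
			missing = append(missing, m)
		}
	}
	return missing
}

func main() {
	wanted := []string{"engine_newPayloadV3", "engine_forkchoiceUpdatedV3", "engine_getPayloadV4"}
	reported := []string{"engine_newPayloadV3", "engine_forkchoiceUpdatedV3"}
	fmt.Println(unsupportedMethods(wanted, reported)) // [engine_getPayloadV4]
}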

View File

@@ -1996,11 +1996,10 @@ func Test_ExchangeCapabilities(t *testing.T) {
defer func() {
require.NoError(t, r.Body.Close())
}()
exchangeCapabilities := &pb.ExchangeCapabilities{}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": exchangeCapabilities,
"result": []string{},
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
@@ -2029,14 +2028,11 @@ func Test_ExchangeCapabilities(t *testing.T) {
defer func() {
require.NoError(t, r.Body.Close())
}()
exchangeCapabilities := &pb.ExchangeCapabilities{
SupportedMethods: []string{"A", "B", "C"},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": exchangeCapabilities,
"result": []string{"A", "B", "C"},
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)

View File

@@ -73,7 +73,7 @@ type goodNotifier struct {
MockStateFeed *event.Feed
}
func (g *goodNotifier) StateFeed() *event.Feed {
func (g *goodNotifier) StateFeed() event.SubscriberSender {
if g.MockStateFeed == nil {
g.MockStateFeed = new(event.Feed)
}

View File

@@ -398,7 +398,7 @@ func initSyncWaiter(ctx context.Context, complete chan struct{}) func() error {
}
// StateFeed implements statefeed.Notifier.
func (b *BeaconNode) StateFeed() *event.Feed {
func (b *BeaconNode) StateFeed() event.SubscriberSender {
return b.stateFeed
}
@@ -408,7 +408,7 @@ func (b *BeaconNode) BlockFeed() *event.Feed {
}
// OperationFeed implements opfeed.Notifier.
func (b *BeaconNode) OperationFeed() *event.Feed {
func (b *BeaconNode) OperationFeed() event.SubscriberSender {
return b.opFeed
}

View File

@@ -9,7 +9,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/host"
"github.com/prysmaticlabs/go-bitfield"
@@ -236,7 +235,7 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
bootNode := bootListener.Self()
subnet := uint64(5)
var listeners []*discover.UDPv5
var listeners []*listenerWrapper
var hosts []host.Host
// setup other nodes.
cfg = &Config{

View File

@@ -50,7 +50,7 @@ func TestPeer_AtMaxLimit(t *testing.T) {
}()
for i := 0; i < highWatermarkBuffer; i++ {
addPeer(t, s.peers, peers.PeerConnected)
addPeer(t, s.peers, peers.PeerConnected, false)
}
// create alternate host
@@ -159,7 +159,7 @@ func TestService_RejectInboundPeersBeyondLimit(t *testing.T) {
inboundLimit += 1
// Add in up to inbound peer limit.
for i := 0; i < int(inboundLimit); i++ {
addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
valid = s.InterceptAccept(&maEndpoints{raddr: multiAddress})
if valid {

View File

@@ -24,6 +24,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
type ListenerRebooter interface {
Listener
RebootListener() error
}
// Listener defines the discovery V5 network interface that is used
// to communicate with other peers.
type Listener interface {
@@ -47,6 +52,87 @@ type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
type listenerWrapper struct {
mu sync.RWMutex
listener *discover.UDPv5
listenerCreator func() (*discover.UDPv5, error)
}
func newListener(listenerCreator func() (*discover.UDPv5, error)) (*listenerWrapper, error) {
rawListener, err := listenerCreator()
if err != nil {
return nil, errors.Wrap(err, "could not create new listener")
}
return &listenerWrapper{
listener: rawListener,
listenerCreator: listenerCreator,
}, nil
}
func (l *listenerWrapper) Self() *enode.Node {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.Self()
}
func (l *listenerWrapper) Close() {
l.mu.RLock()
defer l.mu.RUnlock()
l.listener.Close()
}
func (l *listenerWrapper) Lookup(id enode.ID) []*enode.Node {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.Lookup(id)
}
func (l *listenerWrapper) Resolve(node *enode.Node) *enode.Node {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.Resolve(node)
}
func (l *listenerWrapper) RandomNodes() enode.Iterator {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.RandomNodes()
}
func (l *listenerWrapper) Ping(node *enode.Node) error {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.Ping(node)
}
func (l *listenerWrapper) RequestENR(node *enode.Node) (*enode.Node, error) {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.RequestENR(node)
}
func (l *listenerWrapper) LocalNode() *enode.LocalNode {
l.mu.RLock()
defer l.mu.RUnlock()
return l.listener.LocalNode()
}
func (l *listenerWrapper) RebootListener() error {
l.mu.Lock()
defer l.mu.Unlock()
// Close current listener
l.listener.Close()
newListener, err := l.listenerCreator()
if err != nil {
return err
}
l.listener = newListener
return nil
}
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
@@ -110,55 +196,78 @@ func (s *Service) RefreshENR() {
func (s *Service) listenForNewNodes() {
iterator := filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
defer iterator.Close()
connectivityTicker := time.NewTicker(1 * time.Minute)
thresholdCount := 0
for {
// Exit if service's context is canceled.
if s.ctx.Err() != nil {
break
}
if s.isPeerAtLimit(false /* inbound */) {
// Pause the main loop for a period to stop looking
// for new peers.
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
wantedCount := s.wantedPeerDials()
if wantedCount == 0 {
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
wantedCount = min(wantedCount, flags.Get().MaxConcurrentDials)
}
wantedNodes := enode.ReadNodes(iterator, wantedCount)
wg := new(sync.WaitGroup)
for i := 0; i < len(wantedNodes); i++ {
node := wantedNodes[i]
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
select {
case <-s.ctx.Done():
return
case <-connectivityTicker.C:
// Skip the connectivity check if not enabled.
if !features.Get().EnableDiscoveryReboot {
continue
}
if peerInfo == nil {
if !s.isBelowOutboundPeerThreshold() {
// Reset counter if we are beyond the threshold
thresholdCount = 0
continue
}
// Make sure that peer is not dialed too often, for each connection attempt there's a backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
wg.Add(1)
go func(info *peer.AddrInfo) {
if err := s.connectWithPeer(s.ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
thresholdCount++
// Reboot listener if connectivity drops
if thresholdCount > 5 {
log.WithField("outboundConnectionCount", len(s.peers.OutboundConnected())).Warn("Rebooting discovery listener, reached threshold.")
if err := s.dv5Listener.RebootListener(); err != nil {
log.WithError(err).Error("Could not reboot listener")
continue
}
wg.Done()
}(peerInfo)
iterator = filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
thresholdCount = 0
}
default:
if s.isPeerAtLimit(false /* inbound */) {
// Pause the main loop for a period to stop looking
// for new peers.
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
wantedCount := s.wantedPeerDials()
if wantedCount == 0 {
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
wantedCount = min(wantedCount, flags.Get().MaxConcurrentDials)
}
wantedNodes := enode.ReadNodes(iterator, wantedCount)
wg := new(sync.WaitGroup)
for i := 0; i < len(wantedNodes); i++ {
node := wantedNodes[i]
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
continue
}
if peerInfo == nil {
continue
}
// Make sure that peer is not dialed too often, for each connection attempt there's a backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
wg.Add(1)
go func(info *peer.AddrInfo) {
if err := s.connectWithPeer(s.ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}(peerInfo)
}
wg.Wait()
}
wg.Wait()
}
}
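The reworked loop above folds a once-a-minute connectivity check into the peer-discovery select: if the outbound peer count stays below its threshold for several consecutive ticks, the discv5 listener is rebuilt and the node iterator is refreshed. Stripped of the Prysm specifics, the control flow looks roughly like the skeleton below; the rebooter interface, the fake listener, and the timings are placeholders, not the real service.

package main

import (
	"context"
	"fmt"
	"time"
)

// rebooter is a placeholder for the ListenerRebooter capability in the diff.
type rebooter interface{ RebootListener() error }

type fakeListener struct{ reboots int }

func (f *fakeListener) RebootListener() error { f.reboots++; return nil }

// watchConnectivity reboots the listener after more than maxStrikes consecutive
// checks report the outbound peer count below its threshold.
func watchConnectivity(ctx context.Context, l rebooter, belowThreshold func() bool, interval time.Duration, maxStrikes int) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	strikes := 0
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if !belowThreshold() {
				strikes = 0
				continue
			}
			strikes++
			if strikes > maxStrikes {
				if err := l.RebootListener(); err != nil {
					fmt.Println("could not reboot listener:", err)
					continue
				}
				strikes = 0 // also the point where the node iterator would be rebuilt
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 80*time.Millisecond)
	defer cancel()
	l := &fakeListener{}
	watchConnectivity(ctx, l, func() bool { return true }, 10*time.Millisecond, 5)
	fmt.Println("reboots:", l.reboots) // expect at least one with these toy timings
}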
@@ -299,14 +408,17 @@ func (s *Service) createLocalNode(
func (s *Service) startDiscoveryV5(
addr net.IP,
privKey *ecdsa.PrivateKey,
) (*discover.UDPv5, error) {
listener, err := s.createListener(addr, privKey)
) (*listenerWrapper, error) {
createListener := func() (*discover.UDPv5, error) {
return s.createListener(addr, privKey)
}
wrappedListener, err := newListener(createListener)
if err != nil {
return nil, errors.Wrap(err, "could not create listener")
}
record := listener.Self()
record := wrappedListener.Self()
log.WithField("ENR", record.String()).Info("Started discovery v5")
return listener, nil
return wrappedListener, nil
}
// filterPeer validates each node that we retrieve from our dht. We
@@ -398,6 +510,22 @@ func (s *Service) isPeerAtLimit(inbound bool) bool {
return activePeers >= maxPeers || numOfConns >= maxPeers
}
// isBelowOutboundPeerThreshold checks if the number of outbound peers that
// we are connected to satisfies the minimum expected outbound peer count
// according to our peer limit.
func (s *Service) isBelowOutboundPeerThreshold() bool {
maxPeers := int(s.cfg.MaxPeers)
inBoundLimit := s.Peers().InboundLimit()
// Impossible Condition
if maxPeers < inBoundLimit {
return false
}
outboundFloor := maxPeers - inBoundLimit
outBoundThreshold := outboundFloor / 2
outBoundCount := len(s.Peers().OutboundConnected())
return outBoundCount < outBoundThreshold
}
func (s *Service) wantedPeerDials() int {
maxPeers := int(s.cfg.MaxPeers)
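isBelowOutboundPeerThreshold above reserves roughly half of the non-inbound peer budget for outbound connections: the floor is maxPeers minus the inbound limit, and the trigger is falling under half of that floor. Worked through with invented numbers (the real inbound limit comes from Prysm's peer status configuration, so the values below are only illustrative):

package main

import "fmt"

// isBelowOutboundPeerThreshold reports whether the current outbound peer count
// has dropped under half of the outbound "floor" (maxPeers - inboundLimit).
func isBelowOutboundPeerThreshold(maxPeers, inboundLimit, outboundCount int) bool {
	if maxPeers < inboundLimit { // impossible configuration, mirroring the guard in the diff
		return false
	}
	outboundFloor := maxPeers - inboundLimit
	threshold := outboundFloor / 2
	return outboundCount < threshold
}

func main() {
	// Hypothetical numbers: with 30 max peers and an inbound limit of 20,
	// the outbound floor is 10 and the threshold is 5.
	fmt.Println(isBelowOutboundPeerThreshold(30, 20, 2)) // true  -> counts as a connectivity strike
	fmt.Println(isBelowOutboundPeerThreshold(30, 20, 5)) // false -> healthy enough
}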

View File

@@ -95,7 +95,7 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
bootNode := bootListener.Self()
var listeners []*discover.UDPv5
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port = 3000 + i
cfg := &Config{
@@ -231,6 +231,37 @@ func TestCreateLocalNode(t *testing.T) {
}
}
func TestRebootDiscoveryListener(t *testing.T) {
port := 1024
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
require.NoError(t, err)
currentPubkey := listener.Self().Pubkey()
currentID := listener.Self().ID()
currentPort := listener.Self().UDP()
currentAddr := listener.Self().IP()
assert.NoError(t, listener.RebootListener())
newPubkey := listener.Self().Pubkey()
newID := listener.Self().ID()
newPort := listener.Self().UDP()
newAddr := listener.Self().IP()
assert.Equal(t, true, currentPubkey.Equal(newPubkey))
assert.Equal(t, currentID, newID)
assert.Equal(t, currentPort, newPort)
assert.Equal(t, currentAddr.String(), newAddr.String())
}
func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
addr := net.ParseIP("invalidIP")
_, pkey := createAddrAndPrivKey(t)
@@ -347,19 +378,44 @@ func TestInboundPeerLimit(t *testing.T) {
}
for i := 0; i < 30; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(false), "not at limit for outbound peers")
require.Equal(t, false, s.isPeerAtLimit(true), "at limit for inbound peers")
for i := 0; i < highWatermarkBuffer; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(true), "not at limit for inbound peers")
}
func TestOutboundPeerThreshold(t *testing.T) {
fakePeer := testp2p.NewTestP2P(t)
s := &Service{
cfg: &Config{MaxPeers: 30},
ipLimiter: leakybucket.NewCollector(ipLimit, ipBurst, 1*time.Second, false),
peers: peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{},
}),
host: fakePeer.BHost,
}
for i := 0; i < 2; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, true, s.isBelowOutboundPeerThreshold(), "not at outbound peer threshold")
for i := 0; i < 3; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, false, s.isBelowOutboundPeerThreshold(), "still at outbound peer threshold")
}
func TestUDPMultiAddress(t *testing.T) {
port := 6500
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -370,7 +426,11 @@ func TestUDPMultiAddress(t *testing.T) {
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
require.NoError(t, err)
defer listener.Close()
s.dv5Listener = listener
@@ -417,7 +477,7 @@ func TestCorrectUDPVersion(t *testing.T) {
}
// addPeer is a helper to add a peer with a given connection state)
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState, outbound bool) peer.ID {
// Set up some peers with different states
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
@@ -426,7 +486,11 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState)
mhBytes = append(mhBytes, idBytes...)
id, err := peer.IDFromBytes(mhBytes)
require.NoError(t, err)
p.Add(new(enr.Record), id, nil, network.DirInbound)
dir := network.DirInbound
if outbound {
dir = network.DirOutbound
}
p.Add(new(enr.Record), id, nil, dir)
p.SetConnectionState(id, state)
p.SetMetadata(id, wrapper.WrappedMetadataV0(&ethpb.MetaDataV0{
SeqNumber: 0,
@@ -455,7 +519,10 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
@@ -484,7 +551,10 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
@@ -506,7 +576,10 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
@@ -537,7 +610,10 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
@@ -575,7 +651,10 @@ func TestRefreshENR_ForkBoundaries(t *testing.T) {
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
listener, err := s.createListener(ipAddr, pkey)
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
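These tests no longer hold a *discover.UDPv5 directly: they wrap the constructor in a closure, hand it to newListener, and keep the resulting *listenerWrapper, which satisfies the ListenerRebooter interface introduced elsewhere in this changeset. The wrapper's internals are not part of this diff, so the following is only a minimal sketch of the idea, with assumed field names:

package example

import (
	"sync"

	"github.com/ethereum/go-ethereum/p2p/discover"
)

// listenerWrapper (sketch): remembers how its discv5 listener was built so it can rebuild it on demand.
// The field names are assumptions for illustration, not the actual Prysm implementation.
type listenerWrapper struct {
	mu       sync.Mutex
	listener *discover.UDPv5
	create   func() (*discover.UDPv5, error)
}

func newListener(create func() (*discover.UDPv5, error)) (*listenerWrapper, error) {
	l, err := create()
	if err != nil {
		return nil, err
	}
	return &listenerWrapper{listener: l, create: create}, nil
}

// RebootListener closes the current discv5 instance and recreates it with the stored constructor.
func (w *listenerWrapper) RebootListener() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.listener.Close()
	l, err := w.create()
	if err != nil {
		return err
	}
	w.listener = l
	return nil
}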

View File

@@ -110,7 +110,7 @@ type BeaconStateElectraCreator struct{}
type PowBlockCreator struct{}
type HistoricalSummaryCreator struct{}
type BlobIdentifierCreator struct{}
type PendingBalanceDepositCreator struct{}
type PendingDepositCreator struct{}
type PendingPartialWithdrawalCreator struct{}
type PendingConsolidationCreator struct{}
type StatusCreator struct{}
@@ -279,8 +279,8 @@ func (BeaconStateElectraCreator) Create() MarshalerProtoMessage { return &ethpb.
func (PowBlockCreator) Create() MarshalerProtoMessage { return &ethpb.PowBlock{} }
func (HistoricalSummaryCreator) Create() MarshalerProtoMessage { return &ethpb.HistoricalSummary{} }
func (BlobIdentifierCreator) Create() MarshalerProtoMessage { return &ethpb.BlobIdentifier{} }
func (PendingBalanceDepositCreator) Create() MarshalerProtoMessage {
return &ethpb.PendingBalanceDeposit{}
func (PendingDepositCreator) Create() MarshalerProtoMessage {
return &ethpb.PendingDeposit{}
}
func (PendingPartialWithdrawalCreator) Create() MarshalerProtoMessage {
return &ethpb.PendingPartialWithdrawal{}
@@ -397,7 +397,7 @@ var creators = []MarshalerProtoCreator{
PowBlockCreator{},
HistoricalSummaryCreator{},
BlobIdentifierCreator{},
PendingBalanceDepositCreator{},
PendingDepositCreator{},
PendingPartialWithdrawalCreator{},
PendingConsolidationCreator{},
StatusCreator{},

View File

@@ -9,7 +9,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
ma "github.com/multiformats/go-multiaddr"
@@ -52,7 +51,7 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
StateNotifier: &mock.MockStateNotifier{},
}
var listeners []*discover.UDPv5
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)
@@ -139,7 +138,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
UDPPort: uint(port),
}
var listeners []*discover.UDPv5
var listeners []*listenerWrapper
for i := 1; i <= 5; i++ {
port := 3000 + i
cfg.UDPPort = uint(port)

View File

@@ -182,6 +182,7 @@ func pubsubGossipParam() pubsub.GossipSubParams {
gParams := pubsub.DefaultGossipSubParams()
gParams.Dlo = gossipSubDlo
gParams.D = gossipSubD
gParams.Dhi = gossipSubDhi
gParams.HeartbeatInterval = gossipSubHeartbeatInterval
gParams.HistoryLength = gossipSubMcacheLen
gParams.HistoryGossip = gossipSubMcacheGossip
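The added Dhi assignment means all three mesh-degree bounds (Dlo, D, Dhi) configured above actually reach the router, so the high-water mark no longer silently stays at the library default. For context, a minimal sketch of how such params are applied when constructing gossipsub (the numeric values are illustrative placeholders, not Prysm's constants):

package example

import (
	"context"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
	"github.com/libp2p/go-libp2p/core/host"
)

// newGossipSub wires explicit mesh-degree bounds into a gossipsub router.
func newGossipSub(ctx context.Context, h host.Host) (*pubsub.PubSub, error) {
	gParams := pubsub.DefaultGossipSubParams()
	gParams.Dlo = 6  // lower bound on mesh peers before grafting more
	gParams.D = 8    // target mesh degree
	gParams.Dhi = 12 // upper bound on mesh peers before pruning
	return pubsub.NewGossipSub(ctx, h, pubsub.WithGossipSubParams(gParams))
}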

View File

@@ -71,7 +71,7 @@ type Service struct {
subnetsLock map[uint64]*sync.RWMutex
subnetsLockLock sync.Mutex // Lock access to subnetsLock
initializationLock sync.Mutex
dv5Listener Listener
dv5Listener ListenerRebooter
startupErr error
ctx context.Context
host host.Host

View File

@@ -8,7 +8,6 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/host"
@@ -69,6 +68,8 @@ func (mockListener) RandomNodes() enode.Iterator {
panic("implement me")
}
func (mockListener) RebootListener() error { panic("implement me") }
func createHost(t *testing.T, port int) (host.Host, *ecdsa.PrivateKey, net.IP) {
_, pkey := createAddrAndPrivKey(t)
ipAddr := net.ParseIP("127.0.0.1")
@@ -210,7 +211,7 @@ func TestListenForNewNodes(t *testing.T) {
bootNode := bootListener.Self()
var listeners []*discover.UDPv5
var listeners []*listenerWrapper
var hosts []host.Host
// setup other nodes.
cs := startup.NewClockSynchronizer()

View File

@@ -89,5 +89,6 @@ go_test(
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_x_exp//maps:go_default_library",
],
)

View File

@@ -585,6 +585,15 @@ func (s *Service) beaconEndpoints(
handler: server.GetBlockAttestations,
methods: []string{http.MethodGet},
},
{
template: "/eth/v2/beacon/blocks/{block_id}/attestations",
name: namespace + ".GetBlockAttestationsV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetBlockAttestations,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/beacon/blinded_blocks/{block_id}",
name: namespace + ".GetBlindedBlock",
@@ -773,6 +782,15 @@ func (s *Service) beaconEndpoints(
handler: server.GetValidatorBalances,
methods: []string{http.MethodGet, http.MethodPost},
},
{
template: "/eth/v1/beacon/deposit_snapshot",
name: namespace + ".GetDepositSnapshot",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetDepositSnapshot,
methods: []string{http.MethodGet},
},
}
}
@@ -942,6 +960,8 @@ func (s *Service) prysmBeaconEndpoints(
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
CoreService: coreService,
Broadcaster: s.cfg.Broadcaster,
BlobReceiver: s.cfg.BlobReceiver,
}
const namespace = "prysm.beacon"
@@ -992,6 +1012,16 @@ func (s *Service) prysmBeaconEndpoints(
handler: server.GetChainHead,
methods: []string{http.MethodGet},
},
{
template: "/prysm/v1/beacon/blobs",
name: namespace + ".PublishBlobs",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.PublishBlobs,
methods: []string{http.MethodPost},
},
}
}
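The routes added above (GetBlockAttestationsV2, GetDepositSnapshot, PublishBlobs) are JSON-only, enforced by the Accept/Content-Type middleware. A minimal client sketch for the new v2 attestations route; the host and port are placeholders for wherever the beacon node's HTTP API is listening:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"http://localhost:3500/eth/v2/beacon/blocks/head/attestations", nil)
	if err != nil {
		panic(err)
	}
	// The AcceptHeaderHandler middleware expects a JSON Accept header.
	req.Header.Set("Accept", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status, string(body))
}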

View File

@@ -2,9 +2,11 @@ package rpc
import (
"net/http"
"slices"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"golang.org/x/exp/maps"
)
func Test_endpoints(t *testing.T) {
@@ -31,10 +33,10 @@ func Test_endpoints(t *testing.T) {
"/eth/v2/beacon/blinded_blocks": {http.MethodPost},
"/eth/v1/beacon/blocks": {http.MethodPost},
"/eth/v2/beacon/blocks": {http.MethodPost},
"/eth/v1/beacon/blocks/{block_id}": {http.MethodGet},
"/eth/v2/beacon/blocks/{block_id}": {http.MethodGet},
"/eth/v1/beacon/blocks/{block_id}/root": {http.MethodGet},
"/eth/v1/beacon/blocks/{block_id}/attestations": {http.MethodGet},
"/eth/v2/beacon/blocks/{block_id}/attestations": {http.MethodGet},
"/eth/v1/beacon/blob_sidecars/{block_id}": {http.MethodGet},
"/eth/v1/beacon/deposit_snapshot": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
@@ -69,7 +71,6 @@ func Test_endpoints(t *testing.T) {
}
debugRoutes := map[string][]string{
"/eth/v1/debug/beacon/states/{state_id}": {http.MethodGet},
"/eth/v2/debug/beacon/states/{state_id}": {http.MethodGet},
"/eth/v2/debug/beacon/heads": {http.MethodGet},
"/eth/v1/debug/fork_choice": {http.MethodGet},
@@ -115,6 +116,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
"/prysm/v1/beacon/states/{state_id}/validator_count": {http.MethodGet},
"/prysm/v1/beacon/chain_head": {http.MethodGet},
"/prysm/v1/beacon/blobs": {http.MethodPost},
}
prysmNodeRoutes := map[string][]string{
@@ -133,22 +135,18 @@ func Test_endpoints(t *testing.T) {
s := &Service{cfg: &Config{}}
routesMap := combineMaps(beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes, nodeRoutes, validatorRoutes, rewardsRoutes, lightClientRoutes, blobRoutes, prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes)
actual := s.endpoints(true, nil, nil, nil, nil, nil, nil)
for _, e := range actual {
methods, ok := routesMap[e.template]
assert.Equal(t, true, ok, "endpoint "+e.template+" not found")
if ok {
for _, em := range e.methods {
methodFound := false
for _, m := range methods {
if m == em {
methodFound = true
break
}
}
assert.Equal(t, true, methodFound, "method "+em+" for endpoint "+e.template+" not found")
}
endpoints := s.endpoints(true, nil, nil, nil, nil, nil, nil)
actualRoutes := make(map[string][]string, len(endpoints))
for _, e := range endpoints {
if _, ok := actualRoutes[e.template]; ok {
actualRoutes[e.template] = append(actualRoutes[e.template], e.methods...)
} else {
actualRoutes[e.template] = e.methods
}
}
expectedRoutes := combineMaps(beaconRoutes, builderRoutes, configRoutes, debugRoutes, eventsRoutes, nodeRoutes, validatorRoutes, rewardsRoutes, lightClientRoutes, blobRoutes, prysmValidatorRoutes, prysmNodeRoutes, prysmBeaconRoutes)
assert.Equal(t, true, maps.EqualFunc(expectedRoutes, actualRoutes, func(actualMethods []string, expectedMethods []string) bool {
return slices.Equal(expectedMethods, actualMethods)
}))
}
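The rewritten test collects the actual template-to-methods pairs into a map and compares it with the expected map in a single assertion, using maps.EqualFunc from golang.org/x/exp together with slices.Equal; note that slices.Equal is order-sensitive, so the method lists must match element for element. The comparison in isolation:

package main

import (
	"fmt"
	"slices"

	"golang.org/x/exp/maps"
)

func main() {
	expected := map[string][]string{
		"/eth/v1/beacon/blocks/{block_id}/attestations": {"GET"},
		"/prysm/v1/beacon/blobs":                        {"POST"},
	}
	actual := map[string][]string{
		"/eth/v1/beacon/blocks/{block_id}/attestations": {"GET"},
		"/prysm/v1/beacon/blobs":                        {"POST"},
	}
	// Equal only if both maps have the same keys and each key's method slice
	// matches element for element.
	equal := maps.EqualFunc(expected, actual, func(a, b []string) bool {
		return slices.Equal(a, b)
	})
	fmt.Println(equal) // true
}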

View File

@@ -200,16 +200,10 @@ func (s *Server) GetBlockAttestations(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetBlockAttestations")
defer span.End()
blockId := r.PathValue("block_id")
if blockId == "" {
httputil.HandleError(w, "block_id is required in URL params", http.StatusBadRequest)
blk, isOptimistic, root := s.blockData(ctx, w, r)
if blk == nil {
return
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if !shared.WriteBlockFetchError(w, blk, err) {
return
}
consensusAtts := blk.Block().Body().Attestations()
atts := make([]*structs.Attestation, len(consensusAtts))
for i, att := range consensusAtts {
@@ -221,17 +215,6 @@ func (s *Server) GetBlockAttestations(w http.ResponseWriter, r *http.Request) {
return
}
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not get block root: "+err.Error(), http.StatusInternalServerError)
return
}
isOptimistic, err := s.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
httputil.HandleError(w, "Could not check if block is optimistic: "+err.Error(), http.StatusInternalServerError)
return
}
resp := &structs.GetBlockAttestationsResponse{
Data: atts,
ExecutionOptimistic: isOptimistic,
@@ -240,6 +223,79 @@ func (s *Server) GetBlockAttestations(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, resp)
}
// GetBlockAttestationsV2 retrieves the attestations included in the requested block.
func (s *Server) GetBlockAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetBlockAttestationsV2")
defer span.End()
blk, isOptimistic, root := s.blockData(ctx, w, r)
if blk == nil {
return
}
consensusAtts := blk.Block().Body().Attestations()
v := blk.Block().Version()
var attStructs []interface{}
if v >= version.Electra {
for _, att := range consensusAtts {
a, ok := att.(*eth.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestations electra of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttElectraFromConsensus(a)
attStructs = append(attStructs, attStruct)
}
} else {
for _, att := range consensusAtts {
a, ok := att.(*eth.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestation of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttFromConsensus(a)
attStructs = append(attStructs, attStruct)
}
}
attBytes, err := json.Marshal(attStructs)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("failed to marshal attestations: %v", err), http.StatusInternalServerError)
return
}
resp := &structs.GetBlockAttestationsV2Response{
Version: version.String(v),
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, root),
Data: attBytes,
}
httputil.WriteJson(w, resp)
}
func (s *Server) blockData(ctx context.Context, w http.ResponseWriter, r *http.Request) (interfaces.ReadOnlySignedBeaconBlock, bool, [32]byte) {
blockId := r.PathValue("block_id")
if blockId == "" {
httputil.HandleError(w, "block_id is required in URL params", http.StatusBadRequest)
return nil, false, [32]byte{}
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if !shared.WriteBlockFetchError(w, blk, err) {
return nil, false, [32]byte{}
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not get block root: "+err.Error(), http.StatusInternalServerError)
return nil, false, [32]byte{}
}
isOptimistic, err := s.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
httputil.HandleError(w, "Could not check if block is optimistic: "+err.Error(), http.StatusInternalServerError)
return nil, false, [32]byte{}
}
return blk, isOptimistic, root
}
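A client of GetBlockAttestationsV2 has to branch on the Version field before unmarshaling Data, because Electra attestations carry committee bits and use a different schema than earlier forks. A hedged sketch with trimmed stand-in types (the real ones live in Prysm's structs package):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed stand-ins for the response and attestation shapes used above.
type attestation struct {
	AggregationBits string `json:"aggregation_bits"`
	Signature       string `json:"signature"`
}

type attestationElectra struct {
	attestation
	CommitteeBits string `json:"committee_bits"`
}

type attsV2Response struct {
	Version string          `json:"version"`
	Data    json.RawMessage `json:"data"`
}

// countAtts picks the attestation type from the fork version and reports how many were returned.
func countAtts(body []byte) (int, error) {
	var resp attsV2Response
	if err := json.Unmarshal(body, &resp); err != nil {
		return 0, err
	}
	if resp.Version == "electra" {
		var atts []attestationElectra
		if err := json.Unmarshal(resp.Data, &atts); err != nil {
			return 0, err
		}
		return len(atts), nil
	}
	var atts []attestation
	if err := json.Unmarshal(resp.Data, &atts); err != nil {
		return 0, err
	}
	return len(atts), nil
}

func main() {
	body := []byte(`{"version":"electra","data":[{"signature":"0x00","committee_bits":"0x01"}]}`)
	fmt.Println(countAtts(body)) // 1 <nil>
}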
// PublishBlindedBlock instructs the beacon node to use the components of the `SignedBlindedBeaconBlock` to construct
// and publish a SignedBeaconBlock by swapping out the transactions_root for the corresponding full list of `transactions`.
// The beacon node should broadcast a newly constructed SignedBeaconBlock to the beacon network, to be included in the
@@ -1512,10 +1568,6 @@ func (s *Server) GetDepositSnapshot(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetDepositSnapshot")
defer span.End()
if s.BeaconDB == nil {
httputil.HandleError(w, "Could not retrieve beaconDB", http.StatusInternalServerError)
return
}
eth1data, err := s.BeaconDB.ExecutionChainData(ctx)
if err != nil {
httputil.HandleError(w, "Could not retrieve execution chain data: "+err.Error(), http.StatusInternalServerError)
@@ -1527,7 +1579,7 @@ func (s *Server) GetDepositSnapshot(w http.ResponseWriter, r *http.Request) {
}
snapshot := eth1data.DepositSnapshot
if snapshot == nil || len(snapshot.Finalized) == 0 {
httputil.HandleError(w, "No Finalized Snapshot Available", http.StatusNotFound)
httputil.HandleError(w, "No finalized snapshot available", http.StatusNotFound)
return
}
if len(snapshot.Finalized) > depositsnapshot.DepositContractDepth {

View File

@@ -268,6 +268,38 @@ func TestGetBlockV2(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("electra", func(t *testing.T) {
b := util.NewBeaconBlockElectra()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: sb}
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Electra), resp.Version)
sbb := &structs.SignedBeaconBlockElectra{Message: &structs.BeaconBlockElectra{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("execution optimistic", func(t *testing.T) {
b := util.NewBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(b)
@@ -461,141 +493,167 @@ func TestGetBlockSSZV2(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
})
t.Run("electra", func(t *testing.T) {
b := util.NewBeaconBlockElectra()
b.Block.Slot = 123
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
s := &Server{
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, version.String(version.Electra), writer.Header().Get(api.VersionHeader))
sszExpected, err := b.MarshalSSZ()
require.NoError(t, err)
assert.DeepEqual(t, sszExpected, writer.Body.Bytes())
})
}
func TestGetBlockAttestations(t *testing.T) {
t.Run("ok", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x00},
Data: &eth.AttestationData{
Slot: 123,
CommitteeIndex: 123,
BeaconBlockRoot: bytesutil.PadTo([]byte("root1"), 32),
Source: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
Target: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
preElectraAtts := []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x00},
Data: &eth.AttestationData{
Slot: 123,
CommitteeIndex: 123,
BeaconBlockRoot: bytesutil.PadTo([]byte("root1"), 32),
Source: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
Signature: bytesutil.PadTo([]byte("sig1"), 96),
},
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 456,
CommitteeIndex: 456,
BeaconBlockRoot: bytesutil.PadTo([]byte("root2"), 32),
Source: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
Target: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
Target: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
Signature: bytesutil.PadTo([]byte("sig2"), 96),
},
}
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: sb}
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
Signature: bytesutil.PadTo([]byte("sig1"), 96),
},
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 456,
CommitteeIndex: 456,
BeaconBlockRoot: bytesutil.PadTo([]byte("root2"), 32),
Source: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
Target: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("sig2"), 96),
},
}
electraAtts := []*eth.AttestationElectra{
{
AggregationBits: bitfield.Bitlist{0x00},
Data: &eth.AttestationData{
Slot: 123,
CommitteeIndex: 123,
BeaconBlockRoot: bytesutil.PadTo([]byte("root1"), 32),
Source: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
Target: &eth.Checkpoint{
Epoch: 123,
Root: bytesutil.PadTo([]byte("root1"), 32),
},
},
Signature: bytesutil.PadTo([]byte("sig1"), 96),
CommitteeBits: primitives.NewAttestationCommitteeBits(),
},
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 456,
CommitteeIndex: 456,
BeaconBlockRoot: bytesutil.PadTo([]byte("root2"), 32),
Source: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
Target: &eth.Checkpoint{
Epoch: 456,
Root: bytesutil.PadTo([]byte("root2"), 32),
},
},
Signature: bytesutil.PadTo([]byte("sig2"), 96),
CommitteeBits: primitives.NewAttestationCommitteeBits(),
},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
b := util.NewBeaconBlock()
b.Block.Body.Attestations = preElectraAtts
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, len(b.Block.Body.Attestations), len(resp.Data))
atts := make([]*eth.Attestation, len(b.Block.Body.Attestations))
for i, a := range resp.Data {
atts[i], err = a.ToConsensus()
require.NoError(t, err)
}
assert.DeepEqual(t, b.Block.Body.Attestations, atts)
})
t.Run("execution optimistic", func(t *testing.T) {
b := util.NewBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := sb.Block().HashTreeRoot()
require.NoError(t, err)
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: sb}
mockChainService := &chainMock.ChainService{
OptimisticRoots: map[[32]byte]bool{r: true},
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
}
bb := util.NewBeaconBlockBellatrix()
bb.Block.Body.Attestations = preElectraAtts
bsb, err := blocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
eb := util.NewBeaconBlockElectra()
eb.Block.Body.Attestations = electraAtts
esb, err := blocks.NewSignedBeaconBlock(eb)
require.NoError(t, err)
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
})
t.Run("finalized", func(t *testing.T) {
b := util.NewBeaconBlock()
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
r, err := sb.Block().HashTreeRoot()
require.NoError(t, err)
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: sb}
t.Run("v1", func(t *testing.T) {
t.Run("ok", func(t *testing.T) {
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
t.Run("true", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: true}}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
require.Equal(t, len(b.Block.Body.Attestations), len(resp.Data))
atts := make([]*eth.Attestation, len(b.Block.Body.Attestations))
for i, a := range resp.Data {
atts[i], err = a.ToConsensus()
require.NoError(t, err)
}
assert.DeepEqual(t, b.Block.Body.Attestations, atts)
})
t.Run("false", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: false}}
t.Run("execution-optimistic", func(t *testing.T) {
r, err := bsb.Block().HashTreeRoot()
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
OptimisticRoots: map[[32]byte]bool{r: true},
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: mockBlockFetcher,
Blocker: &testutil.MockBlocker{BlockToReturn: bsb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -604,7 +662,194 @@ func TestGetBlockAttestations(t *testing.T) {
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.ExecutionOptimistic)
assert.Equal(t, true, resp.ExecutionOptimistic)
})
t.Run("finalized", func(t *testing.T) {
r, err := sb.Block().HashTreeRoot()
require.NoError(t, err)
t.Run("true", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: true}}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
})
t.Run("false", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: false}}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.ExecutionOptimistic)
})
})
})
t.Run("V2", func(t *testing.T) {
t.Run("ok-pre-electra", func(t *testing.T) {
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
var attStructs []structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attStructs))
atts := make([]*eth.Attestation, len(attStructs))
for i, attStruct := range attStructs {
atts[i], err = attStruct.ToConsensus()
require.NoError(t, err)
}
assert.DeepEqual(t, b.Block.Body.Attestations, atts)
assert.Equal(t, "phase0", resp.Version)
})
t.Run("ok-post-electra", func(t *testing.T) {
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: esb},
}
mockBlockFetcher := &testutil.MockBlocker{BlockToReturn: esb}
s.Blocker = mockBlockFetcher
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
var attStructs []structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attStructs))
atts := make([]*eth.AttestationElectra, len(attStructs))
for i, attStruct := range attStructs {
atts[i], err = attStruct.ToConsensus()
require.NoError(t, err)
}
assert.DeepEqual(t, eb.Block.Body.Attestations, atts)
assert.Equal(t, "electra", resp.Version)
})
t.Run("execution-optimistic", func(t *testing.T) {
r, err := bsb.Block().HashTreeRoot()
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
OptimisticRoots: map[[32]byte]bool{r: true},
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: bsb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.ExecutionOptimistic)
assert.Equal(t, "bellatrix", resp.Version)
})
t.Run("finalized", func(t *testing.T) {
r, err := sb.Block().HashTreeRoot()
require.NoError(t, err)
t.Run("true", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: true}}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, true, resp.Finalized)
assert.Equal(t, "phase0", resp.Version)
})
t.Run("false", func(t *testing.T) {
mockChainService := &chainMock.ChainService{FinalizedRoots: map[[32]byte]bool{r: false}}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, false, resp.ExecutionOptimistic)
assert.Equal(t, "phase0", resp.Version)
})
})
})
}
@@ -759,6 +1004,35 @@ func TestGetBlindedBlock(t *testing.T) {
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("electra", func(t *testing.T) {
b := util.NewBlindedBeaconBlockElectra()
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blinded_blocks/{block_id}", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlindedBlock(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
assert.Equal(t, version.String(version.Electra), resp.Version)
sbb := &structs.SignedBlindedBeaconBlockElectra{Message: &structs.BlindedBeaconBlockElectra{}}
require.NoError(t, json.Unmarshal(resp.Data.Message, sbb.Message))
sbb.Signature = resp.Data.Signature
blk, err := sbb.ToConsensus()
require.NoError(t, err)
assert.DeepEqual(t, blk, b)
})
t.Run("execution optimistic", func(t *testing.T) {
b := util.NewBlindedBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(b)

View File

@@ -138,7 +138,7 @@ func TestGetSpec(t *testing.T) {
config.WhistleBlowerRewardQuotientElectra = 79
config.PendingPartialWithdrawalsLimit = 80
config.MinActivationBalance = 81
config.PendingBalanceDepositLimit = 82
config.PendingDepositLimit = 82
config.MaxPendingPartialsPerWithdrawalsSweep = 83
config.PendingConsolidationsLimit = 84
config.MaxPartialWithdrawalsPerPayload = 85
@@ -150,6 +150,7 @@ func TestGetSpec(t *testing.T) {
config.MaxCellsInExtendedMatrix = 91
config.UnsetDepositRequestsStartIndex = 92
config.MaxDepositRequestsPerPayload = 93
config.MaxPendingDepositsPerEpoch = 94
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -188,7 +189,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 154, len(data))
assert.Equal(t, 155, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -499,7 +500,7 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "80", v)
case "MIN_ACTIVATION_BALANCE":
assert.Equal(t, "81", v)
case "PENDING_BALANCE_DEPOSITS_LIMIT":
case "PENDING_DEPOSITS_LIMIT":
assert.Equal(t, "82", v)
case "MAX_PENDING_PARTIALS_PER_WITHDRAWALS_SWEEP":
assert.Equal(t, "83", v)
@@ -523,6 +524,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "92", v)
case "MAX_DEPOSIT_REQUESTS_PER_PAYLOAD":
assert.Equal(t, "93", v)
case "MAX_PENDING_DEPOSITS_PER_EPOCH":
assert.Equal(t, "94", v)
default:
t.Errorf("Incorrect key: %s", k)
}

View File

@@ -20,6 +20,7 @@ go_library(
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
@@ -29,12 +30,16 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["events_test.go"],
srcs = [
"events_test.go",
"http_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
@@ -49,9 +54,9 @@ go_test(
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_r3labs_sse_v2//:go_default_library",
],
)

View File

@@ -1,24 +1,26 @@
package events
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
time2 "time"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
chaintime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
@@ -26,9 +28,13 @@ import (
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
const DefaultEventFeedDepth = 1000
const (
InvalidTopic = "__invalid__"
// HeadTopic represents a new chain head event topic.
HeadTopic = "head"
// BlockTopic represents a new produced block event topic.
@@ -59,25 +65,83 @@ const (
LightClientOptimisticUpdateTopic = "light_client_optimistic_update"
)
const topicDataMismatch = "Event data type %T does not correspond to event topic %s"
var (
errInvalidTopicName = errors.New("invalid topic name")
errNoValidTopics = errors.New("no valid topics specified")
errSlowReader = errors.New("client failed to read fast enough to keep outgoing buffer below threshold")
errNotRequested = errors.New("event not requested by client")
errUnhandledEventData = errors.New("unable to represent event data in the event stream")
)
const chanBuffer = 1000
// StreamingResponseWriter defines a type that can be used by the eventStreamer.
// This must be an http.ResponseWriter that supports flushing.
type StreamingResponseWriter interface {
http.ResponseWriter
http.Flusher
}
var casesHandled = map[string]bool{
HeadTopic: true,
BlockTopic: true,
AttestationTopic: true,
VoluntaryExitTopic: true,
FinalizedCheckpointTopic: true,
ChainReorgTopic: true,
SyncCommitteeContributionTopic: true,
BLSToExecutionChangeTopic: true,
PayloadAttributesTopic: true,
BlobSidecarTopic: true,
ProposerSlashingTopic: true,
AttesterSlashingTopic: true,
LightClientFinalityUpdateTopic: true,
LightClientOptimisticUpdateTopic: true,
// The eventStreamer uses lazyReaders to defer serialization until the moment the value is ready to be written to the client.
type lazyReader func() io.Reader
var opsFeedEventTopics = map[feed.EventType]string{
operation.AggregatedAttReceived: AttestationTopic,
operation.UnaggregatedAttReceived: AttestationTopic,
operation.ExitReceived: VoluntaryExitTopic,
operation.SyncCommitteeContributionReceived: SyncCommitteeContributionTopic,
operation.BLSToExecutionChangeReceived: BLSToExecutionChangeTopic,
operation.BlobSidecarReceived: BlobSidecarTopic,
operation.AttesterSlashingReceived: AttesterSlashingTopic,
operation.ProposerSlashingReceived: ProposerSlashingTopic,
}
var stateFeedEventTopics = map[feed.EventType]string{
statefeed.NewHead: HeadTopic,
statefeed.MissedSlot: PayloadAttributesTopic,
statefeed.FinalizedCheckpoint: FinalizedCheckpointTopic,
statefeed.LightClientFinalityUpdate: LightClientFinalityUpdateTopic,
statefeed.LightClientOptimisticUpdate: LightClientOptimisticUpdateTopic,
statefeed.Reorg: ChainReorgTopic,
statefeed.BlockProcessed: BlockTopic,
}
var topicsForStateFeed = topicsForFeed(stateFeedEventTopics)
var topicsForOpsFeed = topicsForFeed(opsFeedEventTopics)
func topicsForFeed(em map[feed.EventType]string) map[string]bool {
topics := make(map[string]bool, len(em))
for _, topic := range em {
topics[topic] = true
}
return topics
}
type topicRequest struct {
topics map[string]bool
needStateFeed bool
needOpsFeed bool
}
func (req *topicRequest) requested(topic string) bool {
return req.topics[topic]
}
func newTopicRequest(topics []string) (*topicRequest, error) {
req := &topicRequest{topics: make(map[string]bool)}
for _, name := range topics {
if topicsForStateFeed[name] {
req.needStateFeed = true
} else if topicsForOpsFeed[name] {
req.needOpsFeed = true
} else {
return nil, errors.Wrapf(errInvalidTopicName, name)
}
req.topics[name] = true
}
if len(req.topics) == 0 || (!req.needStateFeed && !req.needOpsFeed) {
return nil, errNoValidTopics
}
return req, nil
}
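newTopicRequest splits the requested topic names into two buckets so the handler only subscribes to the feeds it actually needs: topics served by the state feed (head, finalized checkpoint, reorg, block, ...) and topics served by the operations feed (attestation, voluntary exit, blob sidecar, slashings, ...), rejecting anything it does not recognize. A condensed standalone sketch of the same classification, with trimmed topic sets and hypothetical helper names:

package main

import (
	"errors"
	"fmt"
)

var stateTopics = map[string]bool{"head": true, "finalized_checkpoint": true}
var opsTopics = map[string]bool{"attestation": true, "voluntary_exit": true}

// classify mirrors the shape of newTopicRequest: it reports which feeds a request needs.
func classify(topics []string) (needState, needOps bool, err error) {
	if len(topics) == 0 {
		return false, false, errors.New("no valid topics specified")
	}
	for _, name := range topics {
		switch {
		case stateTopics[name]:
			needState = true
		case opsTopics[name]:
			needOps = true
		default:
			return false, false, fmt.Errorf("invalid topic name: %s", name)
		}
	}
	return needState, needOps, nil
}

func main() {
	fmt.Println(classify([]string{"head", "attestation"})) // true true <nil>
	fmt.Println(classify([]string{"bogus"}))               // false false invalid topic name: bogus
}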
// StreamEvents provides an endpoint to subscribe to the beacon node Server-Sent-Events stream.
@@ -88,326 +152,412 @@ func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "events.StreamEvents")
defer span.End()
flusher, ok := w.(http.Flusher)
topics, err := newTopicRequest(r.URL.Query()["topics"])
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
sw, ok := w.(StreamingResponseWriter)
if !ok {
httputil.HandleError(w, "Streaming unsupported!", http.StatusInternalServerError)
msg := "beacon node misconfiguration: http stack may not support required response handling features, like flushing"
httputil.HandleError(w, msg, http.StatusInternalServerError)
return
}
topics := r.URL.Query()["topics"]
if len(topics) == 0 {
httputil.HandleError(w, "No topics specified to subscribe to", http.StatusBadRequest)
return
depth := s.EventFeedDepth
if depth == 0 {
depth = DefaultEventFeedDepth
}
topicsMap := make(map[string]bool)
for _, topic := range topics {
if _, ok := casesHandled[topic]; !ok {
httputil.HandleError(w, fmt.Sprintf("Invalid topic: %s", topic), http.StatusBadRequest)
return
}
topicsMap[topic] = true
}
// Subscribe to event feeds from information received in the beacon node runtime.
opsChan := make(chan *feed.Event, chanBuffer)
opsSub := s.OperationNotifier.OperationFeed().Subscribe(opsChan)
stateChan := make(chan *feed.Event, chanBuffer)
stateSub := s.StateNotifier.StateFeed().Subscribe(stateChan)
defer opsSub.Unsubscribe()
defer stateSub.Unsubscribe()
// Set up SSE response headers
w.Header().Set("Content-Type", api.EventStreamMediaType)
w.Header().Set("Connection", api.KeepAlive)
// Handle each event received and context cancellation.
// We send a keepalive dummy message immediately to prevent clients
// stalling while waiting for the first response chunk.
// After that we send a keepalive dummy message every SECONDS_PER_SLOT
// to prevent anyone (e.g. proxy servers) from closing connections.
if err := sendKeepalive(w, flusher); err != nil {
es, err := newEventStreamer(depth, s.KeepAliveInterval)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
keepaliveTicker := time2.NewTicker(time2.Duration(params.BeaconConfig().SecondsPerSlot) * time2.Second)
ctx, cancel := context.WithCancel(ctx)
defer cancel()
api.SetSSEHeaders(sw)
go es.outboxWriteLoop(ctx, cancel, sw)
if err := es.recvEventLoop(ctx, cancel, topics, s); err != nil {
log.WithError(err).Debug("Shutting down StreamEvents handler.")
}
}
func newEventStreamer(buffSize int, ka time.Duration) (*eventStreamer, error) {
if ka == 0 {
ka = time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
}
return &eventStreamer{
outbox: make(chan lazyReader, buffSize),
keepAlive: ka,
}, nil
}
type eventStreamer struct {
outbox chan lazyReader
keepAlive time.Duration
}
func (es *eventStreamer) recvEventLoop(ctx context.Context, cancel context.CancelFunc, req *topicRequest, s *Server) error {
eventsChan := make(chan *feed.Event, len(es.outbox))
if req.needOpsFeed {
opsSub := s.OperationNotifier.OperationFeed().Subscribe(eventsChan)
defer opsSub.Unsubscribe()
}
if req.needStateFeed {
stateSub := s.StateNotifier.StateFeed().Subscribe(eventsChan)
defer stateSub.Unsubscribe()
}
for {
select {
case event := <-opsChan:
if err := handleBlockOperationEvents(w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case event := <-stateChan:
if err := s.handleStateEvents(ctx, w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-keepaliveTicker.C:
if err := sendKeepalive(w, flusher); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-ctx.Done():
return ctx.Err()
case event := <-eventsChan:
lr, err := s.lazyReaderForEvent(ctx, event, req)
if err != nil {
if !errors.Is(err, errNotRequested) {
log.WithField("event_type", fmt.Sprintf("%v", event.Data)).WithError(err).Error("StreamEvents API endpoint received an event it was unable to handle.")
}
continue
}
// If the client can't keep up, the outbox will eventually fill completely, at which point
// safeWrite will return an error and we'll hit the return statement below. The deferred
// Unsubscribe calls then run and the event feed stops writing to this channel.
// Since the outbox and event stream channels are buffered separately, the event subscription
// channel should stay relatively empty, which gives this loop time to unsubscribe
// and clean up before the event stream channel fills and disrupts other readers.
if err := es.safeWrite(ctx, lr); err != nil {
cancel()
// note: we could hijack the connection and close it here. Does that cause issues? What are the benefits?
// A benefit of hijack-and-close is that it may force an error on the remote end; however, just closing
// the http handler's context may be sufficient to cause the remote http response reader to close.
if errors.Is(err, errSlowReader) {
log.WithError(err).Warn("Client is unable to keep up with event stream, shutting down.")
}
return err
}
}
}
}
func (es *eventStreamer) safeWrite(ctx context.Context, rf func() io.Reader) error {
if rf == nil {
return nil
}
select {
case <-ctx.Done():
return ctx.Err()
case es.outbox <- rf:
return nil
default:
// Reaching the default case means the send to the outbox could not proceed, i.e. the outbox is full.
// If a reader can't keep up with the stream, we shut them down.
return errSlowReader
}
}
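safeWrite leans on a select with a default case, the standard Go idiom for a non-blocking channel send: if the buffered outbox is full, the send fails immediately instead of stalling the event-receiving loop behind a slow client. The idiom in isolation (the helper name is illustrative):

package example

// trySend attempts a non-blocking send and reports whether the value was buffered,
// leaving it to the caller to drop the value or disconnect the slow consumer.
func trySend[T any](ch chan<- T, v T) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}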
// newlineReader is used to write keep-alives to the client.
// Keep-alives in the SSE protocol are a single ':' character followed by two newlines.
func newlineReader() io.Reader {
return bytes.NewBufferString(":\n\n")
}
// outboxWriteLoop runs in a separate goroutine. Its job is to write the values in the outbox to
// the client as fast as the client can read them.
func (es *eventStreamer) outboxWriteLoop(ctx context.Context, cancel context.CancelFunc, w StreamingResponseWriter) {
var err error
defer func() {
if err != nil {
log.WithError(err).Debug("Event streamer shutting down due to error.")
}
}()
defer cancel()
// Write a keepalive at the start to test the connection and simplify test setup.
if err = es.writeOutbox(ctx, w, nil); err != nil {
return
}
kaT := time.NewTimer(es.keepAlive)
// Ensure the keepalive timer is stopped and drained if it has fired.
defer func() {
if !kaT.Stop() {
<-kaT.C
}
}()
for {
select {
case <-ctx.Done():
err = ctx.Err()
return
case <-kaT.C:
if err = es.writeOutbox(ctx, w, nil); err != nil {
return
}
// In this case the timer doesn't need to be Stopped before the Reset call after the select statement,
// because the timer has already fired.
case lr := <-es.outbox:
if err = es.writeOutbox(ctx, w, lr); err != nil {
return
}
// We don't know whether the timer fired concurrently with this case becoming ready, so we need to check
// the return value of Stop and drain the timer channel if it fired. This won't be needed once we're on Go 1.23.
if !kaT.Stop() {
<-kaT.C
}
}
kaT.Reset(es.keepAlive)
}
}
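The Stop-then-drain handling around the keep-alive timer exists because, before Go 1.23, resetting a timer that has already fired can leave a stale value in its channel and skew the next interval. The idiom in isolation, safe only when nothing else receives from t.C concurrently:

package example

import "time"

// resetTimer stops t, drains a pending tick if the timer had already fired,
// and resets it so the next receive from t.C reflects the new interval.
func resetTimer(t *time.Timer, d time.Duration) {
	if !t.Stop() {
		<-t.C
	}
	t.Reset(d)
}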
func (es *eventStreamer) writeOutbox(ctx context.Context, w StreamingResponseWriter, first lazyReader) error {
needKeepAlive := true
if first != nil {
if _, err := io.Copy(w, first()); err != nil {
return err
}
needKeepAlive = false
}
// While the first event was being read by the client, further events may be queued in the outbox.
// We can drain them right away rather than go back out to the outer select statement, where the keepAlive timer
// may have fired, triggering an unnecessary extra keep-alive write and flush.
for {
select {
case <-ctx.Done():
return ctx.Err()
case rf := <-es.outbox:
if _, err := io.Copy(w, rf()); err != nil {
return err
}
needKeepAlive = false
default:
if needKeepAlive {
if _, err := io.Copy(w, newlineReader()); err != nil {
return err
}
}
w.Flush()
return nil
}
}
}
func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case operation.AggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return nil
}
attData, ok := event.Data.(*operation.AggregatedAttReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(attData.Attestation.Aggregate)
return send(w, flusher, AttestationTopic, att)
case operation.UnaggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return nil
}
attData, ok := event.Data.(*operation.UnAggregatedAttReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
a, ok := attData.Attestation.(*eth.Attestation)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(a)
return send(w, flusher, AttestationTopic, att)
case operation.ExitReceived:
if _, ok := requestedTopics[VoluntaryExitTopic]; !ok {
return nil
}
exitData, ok := event.Data.(*operation.ExitReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, VoluntaryExitTopic)
}
exit := structs.SignedExitFromConsensus(exitData.Exit)
return send(w, flusher, VoluntaryExitTopic, exit)
case operation.SyncCommitteeContributionReceived:
if _, ok := requestedTopics[SyncCommitteeContributionTopic]; !ok {
return nil
}
contributionData, ok := event.Data.(*operation.SyncCommitteeContributionReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, SyncCommitteeContributionTopic)
}
contribution := structs.SignedContributionAndProofFromConsensus(contributionData.Contribution)
return send(w, flusher, SyncCommitteeContributionTopic, contribution)
case operation.BLSToExecutionChangeReceived:
if _, ok := requestedTopics[BLSToExecutionChangeTopic]; !ok {
return nil
}
changeData, ok := event.Data.(*operation.BLSToExecutionChangeReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, BLSToExecutionChangeTopic)
}
return send(w, flusher, BLSToExecutionChangeTopic, structs.SignedBLSChangeFromConsensus(changeData.Change))
case operation.BlobSidecarReceived:
if _, ok := requestedTopics[BlobSidecarTopic]; !ok {
return nil
}
blobData, ok := event.Data.(*operation.BlobSidecarReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, BlobSidecarTopic)
}
versionedHash := blockchain.ConvertKzgCommitmentToVersionedHash(blobData.Blob.KzgCommitment)
blobEvent := &structs.BlobSidecarEvent{
BlockRoot: hexutil.Encode(blobData.Blob.BlockRootSlice()),
Index: fmt.Sprintf("%d", blobData.Blob.Index),
Slot: fmt.Sprintf("%d", blobData.Blob.Slot()),
VersionedHash: versionedHash.String(),
KzgCommitment: hexutil.Encode(blobData.Blob.KzgCommitment),
}
return send(w, flusher, BlobSidecarTopic, blobEvent)
case operation.AttesterSlashingReceived:
if _, ok := requestedTopics[AttesterSlashingTopic]; !ok {
return nil
}
attesterSlashingData, ok := event.Data.(*operation.AttesterSlashingReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, AttesterSlashingTopic)
}
slashing, ok := attesterSlashingData.AttesterSlashing.(*eth.AttesterSlashing)
if ok {
return send(w, flusher, AttesterSlashingTopic, structs.AttesterSlashingFromConsensus(slashing))
}
// TODO: extend to Electra
case operation.ProposerSlashingReceived:
if _, ok := requestedTopics[ProposerSlashingTopic]; !ok {
return nil
}
proposerSlashingData, ok := event.Data.(*operation.ProposerSlashingReceivedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, ProposerSlashingTopic)
}
return send(w, flusher, ProposerSlashingTopic, structs.ProposerSlashingFromConsensus(proposerSlashingData.ProposerSlashing))
func jsonMarshalReader(name string, v any) io.Reader {
d, err := json.Marshal(v)
if err != nil {
log.WithError(err).WithField("type_name", fmt.Sprintf("%T", v)).Error("Could not marshal event data.")
return nil
}
return nil
return bytes.NewBufferString("event: " + name + "\ndata: " + string(d) + "\n\n")
}
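jsonMarshalReader frames each payload in the Server-Sent Events wire format: an "event:" line naming the topic, a "data:" line carrying the JSON, and a blank line terminating the message; keep-alives are just the bare ":" comment line produced by newlineReader above. A tiny standalone illustration with a made-up payload:

package main

import "fmt"

func main() {
	name, payload := "head", `{"slot":"123","epoch_transition":false}`
	fmt.Print("event: " + name + "\ndata: " + payload + "\n\n")
	// Prints:
	// event: head
	// data: {"slot":"123","epoch_transition":false}
	// (followed by the blank line that ends the SSE message)
}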
func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case statefeed.NewHead:
if _, ok := requestedTopics[HeadTopic]; ok {
headData, ok := event.Data.(*ethpb.EventHead)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, HeadTopic)
}
head := &structs.HeadEvent{
Slot: fmt.Sprintf("%d", headData.Slot),
Block: hexutil.Encode(headData.Block),
State: hexutil.Encode(headData.State),
EpochTransition: headData.EpochTransition,
ExecutionOptimistic: headData.ExecutionOptimistic,
PreviousDutyDependentRoot: hexutil.Encode(headData.PreviousDutyDependentRoot),
CurrentDutyDependentRoot: hexutil.Encode(headData.CurrentDutyDependentRoot),
}
return send(w, flusher, HeadTopic, head)
func topicForEvent(event *feed.Event) string {
switch event.Data.(type) {
case *operation.AggregatedAttReceivedData:
return AttestationTopic
case *operation.UnAggregatedAttReceivedData:
return AttestationTopic
case *operation.ExitReceivedData:
return VoluntaryExitTopic
case *operation.SyncCommitteeContributionReceivedData:
return SyncCommitteeContributionTopic
case *operation.BLSToExecutionChangeReceivedData:
return BLSToExecutionChangeTopic
case *operation.BlobSidecarReceivedData:
return BlobSidecarTopic
case *operation.AttesterSlashingReceivedData:
return AttesterSlashingTopic
case *operation.ProposerSlashingReceivedData:
return ProposerSlashingTopic
case *ethpb.EventHead:
return HeadTopic
case *ethpb.EventFinalizedCheckpoint:
return FinalizedCheckpointTopic
case *ethpbv2.LightClientFinalityUpdateWithVersion:
return LightClientFinalityUpdateTopic
case *ethpbv2.LightClientOptimisticUpdateWithVersion:
return LightClientOptimisticUpdateTopic
case *ethpb.EventChainReorg:
return ChainReorgTopic
case *statefeed.BlockProcessedData:
return BlockTopic
default:
if event.Type == statefeed.MissedSlot {
return PayloadAttributesTopic
}
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.MissedSlot:
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.FinalizedCheckpoint:
if _, ok := requestedTopics[FinalizedCheckpointTopic]; !ok {
return nil
}
checkpointData, ok := event.Data.(*ethpb.EventFinalizedCheckpoint)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, FinalizedCheckpointTopic)
}
checkpoint := &structs.FinalizedCheckpointEvent{
Block: hexutil.Encode(checkpointData.Block),
State: hexutil.Encode(checkpointData.State),
Epoch: fmt.Sprintf("%d", checkpointData.Epoch),
ExecutionOptimistic: checkpointData.ExecutionOptimistic,
}
return send(w, flusher, FinalizedCheckpointTopic, checkpoint)
case statefeed.LightClientFinalityUpdate:
if _, ok := requestedTopics[LightClientFinalityUpdateTopic]; !ok {
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientFinalityUpdateWithVersion)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
}
update, err := structs.LightClientFinalityUpdateFromConsensus(updateData.Data)
if err != nil {
return err
}
updateEvent := &structs.LightClientFinalityUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: update,
}
return send(w, flusher, LightClientFinalityUpdateTopic, updateEvent)
case statefeed.LightClientOptimisticUpdate:
if _, ok := requestedTopics[LightClientOptimisticUpdateTopic]; !ok {
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientOptimisticUpdateWithVersion)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
}
update, err := structs.LightClientOptimisticUpdateFromConsensus(updateData.Data)
if err != nil {
return err
}
updateEvent := &structs.LightClientOptimisticUpdateEvent{
Version: version.String(int(updateData.Version)),
Data: update,
}
return send(w, flusher, LightClientOptimisticUpdateTopic, updateEvent)
case statefeed.Reorg:
if _, ok := requestedTopics[ChainReorgTopic]; !ok {
return nil
}
reorgData, ok := event.Data.(*ethpb.EventChainReorg)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, ChainReorgTopic)
}
reorg := &structs.ChainReorgEvent{
Slot: fmt.Sprintf("%d", reorgData.Slot),
Depth: fmt.Sprintf("%d", reorgData.Depth),
OldHeadBlock: hexutil.Encode(reorgData.OldHeadBlock),
NewHeadBlock: hexutil.Encode(reorgData.NewHeadBlock),
OldHeadState: hexutil.Encode(reorgData.OldHeadState),
NewHeadState: hexutil.Encode(reorgData.NewHeadState),
Epoch: fmt.Sprintf("%d", reorgData.Epoch),
ExecutionOptimistic: reorgData.ExecutionOptimistic,
}
return send(w, flusher, ChainReorgTopic, reorg)
case statefeed.BlockProcessed:
if _, ok := requestedTopics[BlockTopic]; !ok {
return nil
}
blkData, ok := event.Data.(*statefeed.BlockProcessedData)
if !ok {
return write(w, flusher, topicDataMismatch, event.Data, BlockTopic)
}
blockRoot, err := blkData.SignedBlock.Block().HashTreeRoot()
if err != nil {
return write(w, flusher, "Could not get block root: "+err.Error())
}
blk := &structs.BlockEvent{
Slot: fmt.Sprintf("%d", blkData.Slot),
Block: hexutil.Encode(blockRoot[:]),
ExecutionOptimistic: blkData.Optimistic,
}
return send(w, flusher, BlockTopic, blk)
return InvalidTopic
}
}
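
For orientation, a minimal sketch (not part of the diff) of how the new topicForEvent helper combines with the topicRequest filter used below; wantsEvent is a hypothetical name, while topicForEvent, InvalidTopic, and topics.requested are names visible in this changeset.

// Illustrative only: map a feed event to its SSE topic and decide whether the
// subscriber asked for it. Assumes the package-internal types shown in this diff.
func wantsEvent(ev *feed.Event, topics *topicRequest) bool {
	name := topicForEvent(ev)
	if name == InvalidTopic {
		return false // unknown event data type, nothing to stream
	}
	return topics.requested(name)
}
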
func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topics *topicRequest) (lazyReader, error) {
eventName := topicForEvent(event)
if !topics.requested(eventName) {
return nil, errNotRequested
}
if eventName == PayloadAttributesTopic {
return s.currentPayloadAttributes(ctx)
}
if event == nil || event.Data == nil {
return nil, errors.New("event or event data is nil")
}
switch v := event.Data.(type) {
case *ethpb.EventHead:
// The head event is a special case because, if the client requested the payload attributes topic,
// we send two event messages in reaction: the head event and the payload attributes.
headReader := func() io.Reader {
return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
}
// Don't do the expensive attr lookup unless the client requested it.
if !topics.requested(PayloadAttributesTopic) {
return headReader, nil
}
// Since payload attributes could change before the outbox is written, we need to do a blocking operation to
// get the current payload attributes right here.
attrReader, err := s.currentPayloadAttributes(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get payload attributes for head event")
}
return func() io.Reader {
return io.MultiReader(headReader(), attrReader())
}, nil
case *operation.AggregatedAttReceivedData:
return func() io.Reader {
att := structs.AttFromConsensus(v.Attestation.Aggregate)
return jsonMarshalReader(eventName, att)
}, nil
case *operation.UnAggregatedAttReceivedData:
att, ok := v.Attestation.(*eth.Attestation)
if !ok {
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .Attestation field of UnAggregatedAttReceivedData", v.Attestation)
}
return func() io.Reader {
att := structs.AttFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
case *operation.ExitReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedExitFromConsensus(v.Exit))
}, nil
case *operation.SyncCommitteeContributionReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedContributionAndProofFromConsensus(v.Contribution))
}, nil
case *operation.BLSToExecutionChangeReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedBLSChangeFromConsensus(v.Change))
}, nil
case *operation.BlobSidecarReceivedData:
return func() io.Reader {
versionedHash := primitives.ConvertKzgCommitmentToVersionedHash(v.Blob.KzgCommitment)
return jsonMarshalReader(eventName, &structs.BlobSidecarEvent{
BlockRoot: hexutil.Encode(v.Blob.BlockRootSlice()),
Index: fmt.Sprintf("%d", v.Blob.Index),
Slot: fmt.Sprintf("%d", v.Blob.Slot()),
VersionedHash: versionedHash.String(),
KzgCommitment: hexutil.Encode(v.Blob.KzgCommitment),
})
}, nil
case *operation.AttesterSlashingReceivedData:
slashing, ok := v.AttesterSlashing.(*eth.AttesterSlashing)
if !ok {
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .AttesterSlashing field of AttesterSlashingReceivedData", v.AttesterSlashing)
}
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
}, nil
case *operation.ProposerSlashingReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.ProposerSlashingFromConsensus(v.ProposerSlashing))
}, nil
case *ethpb.EventFinalizedCheckpoint:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.FinalizedCheckpointEventFromV1(v))
}, nil
case *ethpbv2.LightClientFinalityUpdateWithVersion:
cv, err := structs.LightClientFinalityUpdateFromConsensus(v.Data)
if err != nil {
return nil, errors.Wrap(err, "LightClientFinalityUpdateWithVersion event conversion failure")
}
ev := &structs.LightClientFinalityUpdateEvent{
Version: version.String(int(v.Version)),
Data: cv,
}
return func() io.Reader {
return jsonMarshalReader(eventName, ev)
}, nil
case *ethpbv2.LightClientOptimisticUpdateWithVersion:
cv, err := structs.LightClientOptimisticUpdateFromConsensus(v.Data)
if err != nil {
return nil, errors.Wrap(err, "LightClientOptimisticUpdateWithVersion event conversion failure")
}
ev := &structs.LightClientOptimisticUpdateEvent{
Version: version.String(int(v.Version)),
Data: cv,
}
return func() io.Reader {
return jsonMarshalReader(eventName, ev)
}, nil
case *ethpb.EventChainReorg:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.EventChainReorgFromV1(v))
}, nil
case *statefeed.BlockProcessedData:
blockRoot, err := v.SignedBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not compute block root for BlockProcessedData state feed event")
}
return func() io.Reader {
blk := &structs.BlockEvent{
Slot: fmt.Sprintf("%d", v.Slot),
Block: hexutil.Encode(blockRoot[:]),
ExecutionOptimistic: v.Optimistic,
}
return jsonMarshalReader(eventName, blk)
}, nil
default:
return nil, errors.Wrapf(errUnhandledEventData, "event data type %T unsupported", v)
}
return nil
}
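
lazyReaderForEvent returns a closure rather than serialized bytes, so marshaling only happens when the write loop actually drains the event. A sketch of that deferral, assuming (as the usage above suggests) that lazyReader is simply func() io.Reader and that jsonMarshalReader renders one event:/data: frame; exampleLazyReader is a hypothetical name.

// Sketch: serialization is deferred until the returned closure is invoked by
// the writer loop, so events dropped from a full queue never pay the marshaling cost.
func exampleLazyReader(topic string, payload any) func() io.Reader {
	return func() io.Reader {
		// jsonMarshalReader is assumed, from its usage above, to marshal the
		// payload and frame it as a single "event:/data:" SSE message.
		return jsonMarshalReader(topic, payload)
	}
}
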
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the rest of the fields are based on the state at N_{current_slot + 1}.
func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWriter, flusher http.Flusher) error {
func (s *Server) currentPayloadAttributes(ctx context.Context) (lazyReader, error) {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return write(w, flusher, "Could not get head root: "+err.Error())
return nil, errors.Wrap(err, "could not get head root")
}
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return write(w, flusher, "Could not get head state: "+err.Error())
return nil, errors.Wrap(err, "could not get head state")
}
// advance the head state
headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
if err != nil {
return write(w, flusher, "Could not advance head state: "+err.Error())
return nil, errors.Wrap(err, "could not advance head state")
}
headBlock, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return write(w, flusher, "Could not get head block: "+err.Error())
return nil, errors.Wrap(err, "could not get head block")
}
headPayload, err := headBlock.Block().Body().Execution()
if err != nil {
return write(w, flusher, "Could not get execution payload: "+err.Error())
return nil, errors.Wrap(err, "could not get execution payload")
}
t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
if err != nil {
return write(w, flusher, "Could not get head state slot time: "+err.Error())
return nil, errors.Wrap(err, "could not get head state slot time")
}
prevRando, err := helpers.RandaoMix(headState, time.CurrentEpoch(headState))
prevRando, err := helpers.RandaoMix(headState, chaintime.CurrentEpoch(headState))
if err != nil {
return write(w, flusher, "Could not get head state randao mix: "+err.Error())
return nil, errors.Wrap(err, "could not get head state randao mix")
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
if err != nil {
return write(w, flusher, "Could not get head state proposer index: "+err.Error())
return nil, errors.Wrap(err, "could not get head state proposer index")
}
feeRecipient := params.BeaconConfig().DefaultFeeRecipient.Bytes()
tValidator, exists := s.TrackedValidatorsCache.Validator(proposerIndex)
@@ -425,7 +575,7 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Capella:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
attributes = &structs.PayloadAttributesV2{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -436,11 +586,11 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Deneb, version.Electra:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
parentRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
return write(w, flusher, "Could not get head block root: "+err.Error())
return nil, errors.Wrap(err, "could not get head block root")
}
attributes = &structs.PayloadAttributesV3{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -450,12 +600,12 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
}
default:
return write(w, flusher, "Payload version %s is not supported", version.String(headState.Version()))
return nil, errors.Errorf("payload version %s is not supported", version.String(headState.Version()))
}
attributesBytes, err := json.Marshal(attributes)
if err != nil {
return write(w, flusher, err.Error())
return nil, errors.Wrap(err, "errors marshaling payload attributes to json")
}
eventData := structs.PayloadAttributesEventData{
ProposerIndex: fmt.Sprintf("%d", proposerIndex),
@@ -467,31 +617,12 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
}
eventDataBytes, err := json.Marshal(eventData)
if err != nil {
return write(w, flusher, err.Error())
return nil, errors.Wrap(err, "errors marshaling payload attributes event data to json")
}
return send(w, flusher, PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
})
}
func send(w http.ResponseWriter, flusher http.Flusher, name string, data interface{}) error {
j, err := json.Marshal(data)
if err != nil {
return write(w, flusher, "Could not marshal event to JSON: "+err.Error())
}
return write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
}
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) error {
return write(w, flusher, ":\n\n")
}
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) error {
_, err := fmt.Fprintf(w, format, a...)
if err != nil {
return errors.Wrap(err, "could not write to response writer")
}
flusher.Flush()
return nil
return func() io.Reader {
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
})
}, nil
}
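
The deleted send/write/sendKeepalive helpers above spell out the server-sent-events framing this endpoint speaks, which is also what the fixture constants in the tests encode. A standalone sketch of the same framing, using only the standard library; all names here are illustrative, not taken from the diff.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// writeSSEEvent frames one server-sent event: "event: <topic>\ndata: <json>\n\n".
func writeSSEEvent(w io.Writer, topic string, payload any) error {
	j, err := json.Marshal(payload)
	if err != nil {
		return err
	}
	_, err = fmt.Fprintf(w, "event: %s\ndata: %s\n\n", topic, j)
	return err
}

// writeSSEKeepalive emits the comment-only frame used as a keepalive (":\n\n").
func writeSSEKeepalive(w io.Writer) error {
	_, err := fmt.Fprint(w, ":\n\n")
	return err
}

func main() {
	_ = writeSSEKeepalive(os.Stdout)
	_ = writeSSEEvent(os.Stdout, "finalized_checkpoint", map[string]string{"epoch": "0"})
}
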


@@ -1,6 +1,7 @@
package events
import (
"context"
"fmt"
"io"
"math"
@@ -23,56 +24,99 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
sse "github.com/r3labs/sse/v2"
)
type flushableResponseRecorder struct {
*httptest.ResponseRecorder
flushed bool
func requireAllEventsReceived(t *testing.T, stn, opn *mockChain.EventFeedWrapper, events []*feed.Event, req *topicRequest, s *Server, w *StreamingResponseWriterRecorder) {
// maxBufferSize param copied from sse lib client code
sseR := sse.NewEventStreamReader(w.Body(), 1<<24)
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
expected := make(map[string]bool)
for i := range events {
ev := events[i]
// serialize the event the same way the server will so that we can compare expectations to results.
top := topicForEvent(ev)
eb, err := s.lazyReaderForEvent(context.Background(), ev, req)
require.NoError(t, err)
exb, err := io.ReadAll(eb())
require.NoError(t, err)
exs := string(exb[0 : len(exb)-2]) // remove trailing double newline
if topicsForOpsFeed[top] {
if err := opn.WaitForSubscription(ctx); err != nil {
t.Fatal(err)
}
// Send the event on the feed.
s.OperationNotifier.OperationFeed().Send(ev)
} else {
if err := stn.WaitForSubscription(ctx); err != nil {
t.Fatal(err)
}
// Send the event on the feed.
s.StateNotifier.StateFeed().Send(ev)
}
expected[exs] = true
}
done := make(chan struct{})
go func() {
defer close(done)
for {
ev, err := sseR.ReadEvent()
if err == io.EOF {
return
}
require.NoError(t, err)
str := string(ev)
delete(expected, str)
if len(expected) == 0 {
return
}
}
}()
select {
case <-done:
break
case <-ctx.Done():
t.Fatalf("context canceled / timed out waiting for events, err=%v", ctx.Err())
}
require.Equal(t, 0, len(expected), "expected events not seen")
}
func (f *flushableResponseRecorder) Flush() {
f.flushed = true
func (tr *topicRequest) testHttpRequest(_ *testing.T) *http.Request {
tq := make([]string, 0, len(tr.topics))
for topic := range tr.topics {
tq = append(tq, "topics="+topic)
}
return httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/events?%s", strings.Join(tq, "&")), nil)
}
func TestStreamEvents_OperationsEvents(t *testing.T) {
t.Run("operations", func(t *testing.T) {
s := &Server{
StateNotifier: &mockChain.MockStateNotifier{},
OperationNotifier: &mockChain.MockOperationNotifier{},
}
func operationEventsFixtures(t *testing.T) (*topicRequest, []*feed.Event) {
topics, err := newTopicRequest([]string{
AttestationTopic,
VoluntaryExitTopic,
SyncCommitteeContributionTopic,
BLSToExecutionChangeTopic,
BlobSidecarTopic,
AttesterSlashingTopic,
ProposerSlashingTopic,
})
require.NoError(t, err)
ro, err := blocks.NewROBlob(util.HydrateBlobSidecar(&eth.BlobSidecar{}))
require.NoError(t, err)
vblob := blocks.NewVerifiedROBlob(ro)
topics := []string{
AttestationTopic,
VoluntaryExitTopic,
SyncCommitteeContributionTopic,
BLSToExecutionChangeTopic,
BlobSidecarTopic,
AttesterSlashingTopic,
ProposerSlashingTopic,
}
for i, topic := range topics {
topics[i] = "topics=" + topic
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/events?%s", strings.Join(topics, "&")), nil)
w := &flushableResponseRecorder{
ResponseRecorder: httptest.NewRecorder(),
}
go func() {
s.StreamEvents(w, request)
}()
// wait for initiation of StreamEvents
time.Sleep(100 * time.Millisecond)
s.OperationNotifier.OperationFeed().Send(&feed.Event{
return topics, []*feed.Event{
&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: util.HydrateAttestation(&eth.Attestation{}),
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.AggregatedAttReceived,
Data: &operation.AggregatedAttReceivedData{
Attestation: &eth.AggregateAttestationAndProof{
@@ -81,8 +125,8 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
SelectionProof: make([]byte, 96),
},
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.ExitReceived,
Data: &operation.ExitReceivedData{
Exit: &eth.SignedVoluntaryExit{
@@ -93,8 +137,8 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
Signature: make([]byte, 96),
},
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.SyncCommitteeContributionReceived,
Data: &operation.SyncCommitteeContributionReceivedData{
Contribution: &eth.SignedContributionAndProof{
@@ -112,8 +156,8 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
Signature: make([]byte, 96),
},
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.BLSToExecutionChangeReceived,
Data: &operation.BLSToExecutionChangeReceivedData{
Change: &eth.SignedBLSToExecutionChange{
@@ -125,18 +169,14 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
Signature: make([]byte, 96),
},
},
})
ro, err := blocks.NewROBlob(util.HydrateBlobSidecar(&eth.BlobSidecar{}))
require.NoError(t, err)
vblob := blocks.NewVerifiedROBlob(ro)
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.BlobSidecarReceived,
Data: &operation.BlobSidecarReceivedData{
Blob: &vblob,
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.AttesterSlashingReceived,
Data: &operation.AttesterSlashingReceivedData{
AttesterSlashing: &eth.AttesterSlashing{
@@ -168,9 +208,8 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
},
},
},
})
s.OperationNotifier.OperationFeed().Send(&feed.Event{
},
&feed.Event{
Type: operation.ProposerSlashingReceived,
Data: &operation.ProposerSlashingReceivedData{
ProposerSlashing: &eth.ProposerSlashing{
@@ -192,100 +231,107 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
},
},
},
})
},
}
}
time.Sleep(1 * time.Second)
request.Context().Done()
func TestStreamEvents_OperationsEvents(t *testing.T) {
t.Run("operations", func(t *testing.T) {
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
s := &Server{
StateNotifier: &mockChain.SimpleNotifier{Feed: stn},
OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
}
resp := w.Result()
body, err := io.ReadAll(resp.Body)
require.NoError(t, err)
require.NotNil(t, body)
assert.Equal(t, operationsResult, string(body))
topics, events := operationEventsFixtures(t)
request := topics.testHttpRequest(t)
w := NewStreamingResponseWriterRecorder()
go func() {
s.StreamEvents(w, request)
}()
requireAllEventsReceived(t, stn, opn, events, topics, s, w)
})
t.Run("state", func(t *testing.T) {
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
s := &Server{
StateNotifier: &mockChain.MockStateNotifier{},
OperationNotifier: &mockChain.MockOperationNotifier{},
StateNotifier: &mockChain.SimpleNotifier{Feed: stn},
OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
}
topics := []string{HeadTopic, FinalizedCheckpointTopic, ChainReorgTopic, BlockTopic}
for i, topic := range topics {
topics[i] = "topics=" + topic
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/events?%s", strings.Join(topics, "&")), nil)
w := &flushableResponseRecorder{
ResponseRecorder: httptest.NewRecorder(),
topics, err := newTopicRequest([]string{
HeadTopic,
FinalizedCheckpointTopic,
ChainReorgTopic,
BlockTopic,
})
require.NoError(t, err)
request := topics.testHttpRequest(t)
w := NewStreamingResponseWriterRecorder()
b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(&eth.SignedBeaconBlock{}))
require.NoError(t, err)
events := []*feed.Event{
&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: 0,
BlockRoot: [32]byte{},
SignedBlock: b,
Verified: true,
Optimistic: false,
},
},
&feed.Event{
Type: statefeed.NewHead,
Data: &ethpb.EventHead{
Slot: 0,
Block: make([]byte, 32),
State: make([]byte, 32),
EpochTransition: true,
PreviousDutyDependentRoot: make([]byte, 32),
CurrentDutyDependentRoot: make([]byte, 32),
ExecutionOptimistic: false,
},
},
&feed.Event{
Type: statefeed.Reorg,
Data: &ethpb.EventChainReorg{
Slot: 0,
Depth: 0,
OldHeadBlock: make([]byte, 32),
NewHeadBlock: make([]byte, 32),
OldHeadState: make([]byte, 32),
NewHeadState: make([]byte, 32),
Epoch: 0,
ExecutionOptimistic: false,
},
},
&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpb.EventFinalizedCheckpoint{
Block: make([]byte, 32),
State: make([]byte, 32),
Epoch: 0,
ExecutionOptimistic: false,
},
},
}
go func() {
s.StreamEvents(w, request)
}()
// wait for initiation of StreamEvents
time.Sleep(100 * time.Millisecond)
s.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpb.EventHead{
Slot: 0,
Block: make([]byte, 32),
State: make([]byte, 32),
EpochTransition: true,
PreviousDutyDependentRoot: make([]byte, 32),
CurrentDutyDependentRoot: make([]byte, 32),
ExecutionOptimistic: false,
},
})
s.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.FinalizedCheckpoint,
Data: &ethpb.EventFinalizedCheckpoint{
Block: make([]byte, 32),
State: make([]byte, 32),
Epoch: 0,
ExecutionOptimistic: false,
},
})
s.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.Reorg,
Data: &ethpb.EventChainReorg{
Slot: 0,
Depth: 0,
OldHeadBlock: make([]byte, 32),
NewHeadBlock: make([]byte, 32),
OldHeadState: make([]byte, 32),
NewHeadState: make([]byte, 32),
Epoch: 0,
ExecutionOptimistic: false,
},
})
b, err := blocks.NewSignedBeaconBlock(util.HydrateSignedBeaconBlock(&eth.SignedBeaconBlock{}))
require.NoError(t, err)
s.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.BlockProcessed,
Data: &statefeed.BlockProcessedData{
Slot: 0,
BlockRoot: [32]byte{},
SignedBlock: b,
Verified: true,
Optimistic: false,
},
})
// wait for feed
time.Sleep(1 * time.Second)
request.Context().Done()
resp := w.Result()
body, err := io.ReadAll(resp.Body)
require.NoError(t, err)
require.NotNil(t, body)
assert.Equal(t, stateResult, string(body))
requireAllEventsReceived(t, stn, opn, events, topics, s, w)
})
t.Run("payload attributes", func(t *testing.T) {
type testCase struct {
name string
getState func() state.BeaconState
getBlock func() interfaces.SignedBeaconBlock
expected string
SetTrackedValidatorsCache func(*cache.TrackedValidatorsCache)
}
testCases := []testCase{
@@ -301,7 +347,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
return b
},
expected: payloadAttributesBellatrixResult,
},
{
name: "capella",
@@ -315,7 +360,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
return b
},
expected: payloadAttributesCapellaResult,
},
{
name: "deneb",
@@ -329,7 +373,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
return b
},
expected: payloadAttributesDenebResult,
},
{
name: "electra",
@@ -343,7 +386,6 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
return b
},
expected: payloadAttributesElectraResultWithTVC,
SetTrackedValidatorsCache: func(c *cache.TrackedValidatorsCache) {
c.Set(cache.TrackedValidator{
Active: true,
@@ -368,9 +410,11 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
Slot: &currentSlot,
}
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
s := &Server{
StateNotifier: &mockChain.MockStateNotifier{},
OperationNotifier: &mockChain.MockOperationNotifier{},
StateNotifier: &mockChain.SimpleNotifier{Feed: stn},
OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
HeadFetcher: mockChainService,
ChainInfoFetcher: mockChainService,
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
@@ -378,100 +422,76 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
if tc.SetTrackedValidatorsCache != nil {
tc.SetTrackedValidatorsCache(s.TrackedValidatorsCache)
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/events?topics=%s", PayloadAttributesTopic), nil)
w := &flushableResponseRecorder{
ResponseRecorder: httptest.NewRecorder(),
}
topics, err := newTopicRequest([]string{PayloadAttributesTopic})
require.NoError(t, err)
request := topics.testHttpRequest(t)
w := NewStreamingResponseWriterRecorder()
events := []*feed.Event{&feed.Event{Type: statefeed.MissedSlot}}
go func() {
s.StreamEvents(w, request)
}()
// wait for initiation of StreamEvents
time.Sleep(100 * time.Millisecond)
s.StateNotifier.StateFeed().Send(&feed.Event{Type: statefeed.MissedSlot})
// wait for feed
time.Sleep(1 * time.Second)
request.Context().Done()
resp := w.Result()
body, err := io.ReadAll(resp.Body)
require.NoError(t, err)
require.NotNil(t, body)
assert.Equal(t, tc.expected, string(body), "wrong result for "+tc.name)
requireAllEventsReceived(t, stn, opn, events, topics, s, w)
}
})
}
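
For completeness, a hedged client-side sketch of the endpoint these tests exercise. The path and repeated topics query parameters mirror testHttpRequest above; the host and port are assumptions about a locally running beacon node, not values taken from this changeset.

package main

import (
	"bufio"
	"fmt"
	"net/http"
)

func main() {
	// The query shape ("topics=" repeated per topic) mirrors the test helper above;
	// the address is an assumption, not a default from this diff.
	url := "http://127.0.0.1:3500/eth/v1/events?topics=head&topics=finalized_checkpoint"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer func() { _ = resp.Body.Close() }()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text()) // raw "event: ..." / "data: ..." lines and blank separators
	}
}
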
const operationsResult = `:
func TestStuckReader(t *testing.T) {
topics, events := operationEventsFixtures(t)
require.Equal(t, 8, len(events))
// set eventFeedDepth to a number lower than the number of events we intend to send to force the server to drop the reader.
stn := mockChain.NewEventFeedWrapper()
opn := mockChain.NewEventFeedWrapper()
s := &Server{
StateNotifier: &mockChain.SimpleNotifier{Feed: stn},
OperationNotifier: &mockChain.SimpleNotifier{Feed: opn},
EventFeedDepth: len(events) - 1,
}
event: attestation
data: {"aggregation_bits":"0x00","data":{"slot":"0","index":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","source":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"target":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"}},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
eventsWritten := make(chan struct{})
go func() {
for i := range events {
ev := events[i]
top := topicForEvent(ev)
if topicsForOpsFeed[top] {
err := opn.WaitForSubscription(ctx)
require.NoError(t, err)
s.OperationNotifier.OperationFeed().Send(ev)
} else {
err := stn.WaitForSubscription(ctx)
require.NoError(t, err)
s.StateNotifier.StateFeed().Send(ev)
}
}
close(eventsWritten)
}()
event: attestation
data: {"aggregation_bits":"0x00","data":{"slot":"0","index":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","source":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"target":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"}},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
request := topics.testHttpRequest(t)
w := NewStreamingResponseWriterRecorder()
event: voluntary_exit
data: {"message":{"epoch":"0","validator_index":"0"},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
handlerFinished := make(chan struct{})
go func() {
s.StreamEvents(w, request)
close(handlerFinished)
}()
event: contribution_and_proof
data: {"message":{"aggregator_index":"0","contribution":{"slot":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","subcommittee_index":"0","aggregation_bits":"0x00000000000000000000000000000000","signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"},"selection_proof":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
// Make sure that the stream writer shut down when the reader failed to clear the write buffer.
select {
case <-handlerFinished:
// We expect the stream handler to max out the queue buffer and exit gracefully.
return
case <-ctx.Done():
t.Fatalf("context canceled / timed out waiting for handler completion, err=%v", ctx.Err())
}
event: bls_to_execution_change
data: {"message":{"validator_index":"0","from_bls_pubkey":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","to_execution_address":"0x0000000000000000000000000000000000000000"},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}
event: blob_sidecar
data: {"block_root":"0xc78009fdf07fc56a11f122370658a353aaa542ed63e44c4bc15ff4cd105ab33c","index":"0","slot":"0","kzg_commitment":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","versioned_hash":"0x01b0761f87b081d5cf10757ccc89f12be355c70e2e29df288b65b30710dcbcd1"}
event: attester_slashing
data: {"attestation_1":{"attesting_indices":["0","1"],"data":{"slot":"0","index":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","source":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"target":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"}},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"},"attestation_2":{"attesting_indices":["0","1"],"data":{"slot":"0","index":"0","beacon_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","source":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"target":{"epoch":"0","root":"0x0000000000000000000000000000000000000000000000000000000000000000"}},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}}
event: proposer_slashing
data: {"signed_header_1":{"message":{"slot":"0","proposer_index":"0","parent_root":"0x0000000000000000000000000000000000000000000000000000000000000000","state_root":"0x0000000000000000000000000000000000000000000000000000000000000000","body_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"},"signed_header_2":{"message":{"slot":"0","proposer_index":"0","parent_root":"0x0000000000000000000000000000000000000000000000000000000000000000","state_root":"0x0000000000000000000000000000000000000000000000000000000000000000","body_root":"0x0000000000000000000000000000000000000000000000000000000000000000"},"signature":"0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"}}
`
const stateResult = `:
event: head
data: {"slot":"0","block":"0x0000000000000000000000000000000000000000000000000000000000000000","state":"0x0000000000000000000000000000000000000000000000000000000000000000","epoch_transition":true,"execution_optimistic":false,"previous_duty_dependent_root":"0x0000000000000000000000000000000000000000000000000000000000000000","current_duty_dependent_root":"0x0000000000000000000000000000000000000000000000000000000000000000"}
event: finalized_checkpoint
data: {"block":"0x0000000000000000000000000000000000000000000000000000000000000000","state":"0x0000000000000000000000000000000000000000000000000000000000000000","epoch":"0","execution_optimistic":false}
event: chain_reorg
data: {"slot":"0","depth":"0","old_head_block":"0x0000000000000000000000000000000000000000000000000000000000000000","old_head_state":"0x0000000000000000000000000000000000000000000000000000000000000000","new_head_block":"0x0000000000000000000000000000000000000000000000000000000000000000","new_head_state":"0x0000000000000000000000000000000000000000000000000000000000000000","epoch":"0","execution_optimistic":false}
event: block
data: {"slot":"0","block":"0xeade62f0457b2fdf48e7d3fc4b60736688286be7c7a3ac4c9a16a5e0600bd9e4","execution_optimistic":false}
`
const payloadAttributesBellatrixResult = `:
event: payload_attributes
data: {"version":"bellatrix","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000"}}}
`
const payloadAttributesCapellaResult = `:
event: payload_attributes
data: {"version":"capella","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000","withdrawals":[]}}}
`
const payloadAttributesDenebResult = `:
event: payload_attributes
data: {"version":"deneb","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0x0000000000000000000000000000000000000000","withdrawals":[],"parent_beacon_block_root":"0xbef96cb938fd48b2403d3e662664325abb0102ed12737cbb80d717520e50cf4a"}}}
`
const payloadAttributesElectraResultWithTVC = `:
event: payload_attributes
data: {"version":"electra","data":{"proposer_index":"0","proposal_slot":"1","parent_block_number":"0","parent_block_root":"0x0000000000000000000000000000000000000000000000000000000000000000","parent_block_hash":"0x0000000000000000000000000000000000000000000000000000000000000000","payload_attributes":{"timestamp":"12","prev_randao":"0x0000000000000000000000000000000000000000000000000000000000000000","suggested_fee_recipient":"0xd2dbd02e4efe087d7d195de828b9dd25f19a89c9","withdrawals":[],"parent_beacon_block_root":"0xf2110e448638f41cb34514ecdbb49c055536cd5f715f1cb259d1287bb900853e"}}}
`
// Also make sure all the events were written.
select {
case <-eventsWritten:
// All of the events were successfully written to the feeds.
return
case <-ctx.Done():
t.Fatalf("context canceled / timed out waiting to write all events, err=%v", ctx.Err())
}
}
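
TestStuckReader leans on the new EventFeedDepth field acting as a bounded per-subscriber queue: once a client stops reading and its queue fills, the handler gives up on that subscriber instead of blocking the feed for everyone else. A minimal sketch of that drop-on-full pattern (not the actual server code; tryEnqueue is a hypothetical helper):

// Sketch of bounded-queue back-pressure: try to enqueue a frame for one
// subscriber; if its buffer is full, report failure so the caller can close
// the connection rather than stall every other subscriber.
func tryEnqueue(queue chan []byte, frame []byte) bool {
	select {
	case queue <- frame:
		return true
	default:
		return false // slow or stuck reader: caller should drop this subscriber
	}
}
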


@@ -0,0 +1,75 @@
package events
import (
"io"
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
type StreamingResponseWriterRecorder struct {
http.ResponseWriter
r io.Reader
w io.Writer
statusWritten *int
status chan int
bodyRecording []byte
flushed bool
}
func (w *StreamingResponseWriterRecorder) StatusChan() chan int {
return w.status
}
func NewStreamingResponseWriterRecorder() *StreamingResponseWriterRecorder {
r, w := io.Pipe()
return &StreamingResponseWriterRecorder{
ResponseWriter: httptest.NewRecorder(),
r: r,
w: w,
status: make(chan int, 1),
}
}
// Write implements http.ResponseWriter.
func (w *StreamingResponseWriterRecorder) Write(data []byte) (int, error) {
w.WriteHeader(http.StatusOK)
n, err := w.w.Write(data)
if err != nil {
return n, err
}
return w.ResponseWriter.Write(data)
}
// WriteHeader implements http.ResponseWriter.
func (w *StreamingResponseWriterRecorder) WriteHeader(statusCode int) {
if w.statusWritten != nil {
return
}
w.statusWritten = &statusCode
w.status <- statusCode
w.ResponseWriter.WriteHeader(statusCode)
}
func (w *StreamingResponseWriterRecorder) Body() io.Reader {
return w.r
}
func (w *StreamingResponseWriterRecorder) RequireStatus(t *testing.T, status int) {
if w.statusWritten == nil {
t.Fatal("WriteHeader was not called")
}
require.Equal(t, status, *w.statusWritten)
}
func (w *StreamingResponseWriterRecorder) Flush() {
fw, ok := w.ResponseWriter.(http.Flusher)
if ok {
fw.Flush()
}
w.flushed = true
}
var _ http.ResponseWriter = &StreamingResponseWriterRecorder{}
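
A short usage sketch of the recorder defined above, mirroring how the tests employ it: because Write goes through an io.Pipe, the handler can keep streaming in one goroutine while the test consumes Body() in another. exampleRecorderUsage is a hypothetical helper written against the names in this file.

// Illustrative test pattern: the handler blocks on the pipe until the reader
// drains it, so the streamed body can be inspected while the handler is live.
func exampleRecorderUsage(handler http.HandlerFunc, req *http.Request) ([]byte, error) {
	w := NewStreamingResponseWriterRecorder()
	go handler(w, req) // writes block until the test reads from the pipe

	buf := make([]byte, 4096)
	n, err := w.Body().Read(buf) // consume the first chunk the handler flushed
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}
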


@@ -4,6 +4,8 @@
package events
import (
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
opfeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/operation"
@@ -18,4 +20,6 @@ type Server struct {
HeadFetcher blockchain.HeadFetcher
ChainInfoFetcher blockchain.ChainInfoFetcher
TrackedValidatorsCache *cache.TrackedValidatorsCache
KeepAliveInterval time.Duration
EventFeedDepth int
}
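
The two new fields are tuning knobs for the streaming handler: KeepAliveInterval presumably sets how often the ':' comment frame is written on an idle stream, and EventFeedDepth bounds the per-subscriber event queue, as TestStuckReader exercises. A hedged wiring sketch; withStreamingKnobs is a hypothetical helper and the literal values are placeholders, not defaults from this changeset.

// Example wiring of the new knobs on an otherwise-configured Server.
func withStreamingKnobs(s *Server) *Server {
	s.KeepAliveInterval = 30 * time.Second // how often to emit ":" keepalive frames on an idle stream
	s.EventFeedDepth = 1000                // per-subscriber event queue bound (see TestStuckReader)
	return s
}
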


@@ -20,14 +20,12 @@ go_library(
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
@@ -55,7 +53,6 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",


@@ -47,7 +47,7 @@ func (s *Server) GetLightClientBootstrap(w http.ResponseWriter, req *http.Reques
return
}
bootstrap, err := createLightClientBootstrap(ctx, state, blk.Block())
bootstrap, err := createLightClientBootstrap(ctx, state, blk)
if err != nil {
httputil.HandleError(w, "could not get light client bootstrap: "+err.Error(), http.StatusInternalServerError)
return
@@ -204,6 +204,7 @@ func (s *Server) GetLightClientUpdatesByRange(w http.ResponseWriter, req *http.R
state,
block,
attestedState,
attestedBlock,
finalizedBlock,
)
@@ -267,7 +268,7 @@ func (s *Server) GetLightClientFinalityUpdate(w http.ResponseWriter, req *http.R
return
}
update, err := newLightClientFinalityUpdateFromBeaconState(ctx, st, block, attestedState, finalizedBlock)
update, err := newLightClientFinalityUpdateFromBeaconState(ctx, st, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
httputil.HandleError(w, "Could not get light client finality update: "+err.Error(), http.StatusInternalServerError)
return
@@ -312,7 +313,7 @@ func (s *Server) GetLightClientOptimisticUpdate(w http.ResponseWriter, req *http
return
}
update, err := newLightClientOptimisticUpdateFromBeaconState(ctx, st, block, attestedState)
update, err := newLightClientOptimisticUpdateFromBeaconState(ctx, st, block, attestedState, attestedBlock)
if err != nil {
httputil.HandleError(w, "Could not get light client optimistic update: "+err.Error(), http.StatusInternalServerError)
return


@@ -7,6 +7,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"strconv"
"testing"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -19,49 +20,29 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestLightClientHandler_GetLightClientBootstrap_Altair(t *testing.T) {
helpers.ClearCache()
slot := primitives.Slot(params.BeaconConfig().AltairForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
l := util.NewTestLightClient(t).SetupTestAltair()
b := util.NewBeaconBlockAltair()
b.Block.StateRoot = bytesutil.PadTo([]byte("foo"), 32)
b.Block.Slot = slot
signedBlock, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
header, err := signedBlock.Header()
slot := l.State.Slot()
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
bs, err := util.NewBeaconStateAltair(func(state *ethpb.BeaconStateAltair) error {
state.BlockRoots[0] = r[:]
return nil
})
require.NoError(t, err)
require.NoError(t, bs.SetSlot(slot))
require.NoError(t, bs.SetLatestBlockHeader(header.Header))
mockBlocker := &testutil.MockBlocker{BlockToReturn: signedBlock}
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: bs,
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(r[:]))
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -74,47 +55,74 @@ func TestLightClientHandler_GetLightClientBootstrap_Altair(t *testing.T) {
err = json.Unmarshal(resp.Data.Header, &respHeader)
require.NoError(t, err)
require.Equal(t, "altair", resp.Version)
require.Equal(t, hexutil.Encode(header.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.NotNil(t, resp.Data)
blockHeader, err := l.Block.Header()
require.NoError(t, err)
require.Equal(t, hexutil.Encode(blockHeader.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.Equal(t, strconv.FormatUint(uint64(blockHeader.Header.Slot), 10), respHeader.Beacon.Slot)
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
}
func TestLightClientHandler_GetLightClientBootstrap_Bellatrix(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestBellatrix()
slot := l.State.Slot()
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientBootstrap(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
var resp structs.LightClientBootstrapResponse
err = json.Unmarshal(writer.Body.Bytes(), &resp)
require.NoError(t, err)
var respHeader structs.LightClientHeader
err = json.Unmarshal(resp.Data.Header, &respHeader)
require.NoError(t, err)
require.Equal(t, "bellatrix", resp.Version)
blockHeader, err := l.Block.Header()
require.NoError(t, err)
require.Equal(t, hexutil.Encode(blockHeader.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.Equal(t, strconv.FormatUint(uint64(blockHeader.Header.Slot), 10), respHeader.Beacon.Slot)
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
}
func TestLightClientHandler_GetLightClientBootstrap_Capella(t *testing.T) {
helpers.ClearCache()
slot := primitives.Slot(params.BeaconConfig().CapellaForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
l := util.NewTestLightClient(t).SetupTestCapella(false) // result is same for true and false
b := util.NewBeaconBlockCapella()
b.Block.StateRoot = bytesutil.PadTo([]byte("foo"), 32)
b.Block.Slot = slot
signedBlock, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
header, err := signedBlock.Header()
slot := l.State.Slot()
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
bs, err := util.NewBeaconStateCapella(func(state *ethpb.BeaconStateCapella) error {
state.BlockRoots[0] = r[:]
return nil
})
require.NoError(t, err)
require.NoError(t, bs.SetSlot(slot))
require.NoError(t, bs.SetLatestBlockHeader(header.Header))
mockBlocker := &testutil.MockBlocker{BlockToReturn: signedBlock}
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: bs,
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(r[:]))
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -123,51 +131,38 @@ func TestLightClientHandler_GetLightClientBootstrap_Capella(t *testing.T) {
var resp structs.LightClientBootstrapResponse
err = json.Unmarshal(writer.Body.Bytes(), &resp)
require.NoError(t, err)
var respHeader structs.LightClientHeaderCapella
var respHeader structs.LightClientHeader
err = json.Unmarshal(resp.Data.Header, &respHeader)
require.NoError(t, err)
require.Equal(t, "capella", resp.Version)
require.Equal(t, hexutil.Encode(header.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.NotNil(t, resp.Data)
blockHeader, err := l.Block.Header()
require.NoError(t, err)
require.Equal(t, hexutil.Encode(blockHeader.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.Equal(t, strconv.FormatUint(uint64(blockHeader.Header.Slot), 10), respHeader.Beacon.Slot)
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
}
func TestLightClientHandler_GetLightClientBootstrap_Deneb(t *testing.T) {
helpers.ClearCache()
slot := primitives.Slot(params.BeaconConfig().DenebForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
l := util.NewTestLightClient(t).SetupTestDeneb(false) // result is same for true and false
b := util.NewBeaconBlockDeneb()
b.Block.StateRoot = bytesutil.PadTo([]byte("foo"), 32)
b.Block.Slot = slot
signedBlock, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
header, err := signedBlock.Header()
slot := l.State.Slot()
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
require.NoError(t, err)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
bs, err := util.NewBeaconStateDeneb(func(state *ethpb.BeaconStateDeneb) error {
state.BlockRoots[0] = r[:]
return nil
})
require.NoError(t, err)
require.NoError(t, bs.SetSlot(slot))
require.NoError(t, bs.SetLatestBlockHeader(header.Header))
mockBlocker := &testutil.MockBlocker{BlockToReturn: signedBlock}
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: bs,
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(r[:]))
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
@@ -176,12 +171,58 @@ func TestLightClientHandler_GetLightClientBootstrap_Deneb(t *testing.T) {
var resp structs.LightClientBootstrapResponse
err = json.Unmarshal(writer.Body.Bytes(), &resp)
require.NoError(t, err)
var respHeader structs.LightClientHeaderDeneb
var respHeader structs.LightClientHeader
err = json.Unmarshal(resp.Data.Header, &respHeader)
require.NoError(t, err)
require.Equal(t, "deneb", resp.Version)
require.Equal(t, hexutil.Encode(header.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.NotNil(t, resp.Data)
blockHeader, err := l.Block.Header()
require.NoError(t, err)
require.Equal(t, hexutil.Encode(blockHeader.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.Equal(t, strconv.FormatUint(uint64(blockHeader.Header.Slot), 10), respHeader.Beacon.Slot)
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
}
func TestLightClientHandler_GetLightClientBootstrap_Electra(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectra(false) // result is same for true and false
slot := l.State.Slot()
stateRoot, err := l.State.HashTreeRoot(l.Ctx)
require.NoError(t, err)
mockBlocker := &testutil.MockBlocker{BlockToReturn: l.Block}
mockChainService := &mock.ChainService{Optimistic: true, Slot: &slot}
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{
slot: l.State,
}},
Blocker: mockBlocker,
HeadFetcher: mockChainService,
}
request := httptest.NewRequest("GET", "http://foo.com/", nil)
request.SetPathValue("block_root", hexutil.Encode(stateRoot[:]))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetLightClientBootstrap(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
var resp structs.LightClientBootstrapResponse
err = json.Unmarshal(writer.Body.Bytes(), &resp)
require.NoError(t, err)
var respHeader structs.LightClientHeader
err = json.Unmarshal(resp.Data.Header, &respHeader)
require.NoError(t, err)
require.Equal(t, "electra", resp.Version)
blockHeader, err := l.Block.Header()
require.NoError(t, err)
require.Equal(t, hexutil.Encode(blockHeader.Header.BodyRoot), respHeader.Beacon.BodyRoot)
require.Equal(t, strconv.FormatUint(uint64(blockHeader.Header.Slot), 10), respHeader.Beacon.Slot)
require.NotNil(t, resp.Data.CurrentSyncCommittee)
require.NotNil(t, resp.Data.CurrentSyncCommitteeBranch)
}
func TestLightClientHandler_GetLightClientUpdatesByRangeAltair(t *testing.T) {


@@ -7,10 +7,9 @@ import (
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -18,18 +17,17 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
v2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func createLightClientBootstrap(ctx context.Context, state state.BeaconState, blk interfaces.ReadOnlyBeaconBlock) (*structs.LightClientBootstrap, error) {
func createLightClientBootstrap(ctx context.Context, state state.BeaconState, blk interfaces.ReadOnlySignedBeaconBlock) (*structs.LightClientBootstrap, error) {
switch blk.Version() {
case version.Phase0:
return nil, fmt.Errorf("light client bootstrap is not supported for phase0")
case version.Altair, version.Bellatrix:
return createLightClientBootstrapAltair(ctx, state)
return createLightClientBootstrapAltair(ctx, state, blk)
case version.Capella:
return createLightClientBootstrapCapella(ctx, state, blk)
case version.Deneb, version.Electra:
@@ -38,24 +36,7 @@ func createLightClientBootstrap(ctx context.Context, state state.BeaconState, bl
return nil, fmt.Errorf("unsupported block version %s", version.String(blk.Version()))
}
// createLightClientBootstrapAltair - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_bootstrap
// def create_light_client_bootstrap(state: BeaconState) -> LightClientBootstrap:
//
// assert compute_epoch_at_slot(state.slot) >= ALTAIR_FORK_EPOCH
// assert state.slot == state.latest_block_header.slot
//
// return LightClientBootstrap(
// header=BeaconBlockHeader(
// slot=state.latest_block_header.slot,
// proposer_index=state.latest_block_header.proposer_index,
// parent_root=state.latest_block_header.parent_root,
// state_root=hash_tree_root(state),
// body_root=state.latest_block_header.body_root,
// ),
// current_sync_committee=state.current_sync_committee,
// current_sync_committee_branch=compute_merkle_proof_for_state(state, CURRENT_SYNC_COMMITTEE_INDEX)
// )
func createLightClientBootstrapAltair(ctx context.Context, state state.BeaconState) (*structs.LightClientBootstrap, error) {
func createLightClientBootstrapAltair(ctx context.Context, state state.BeaconState, block interfaces.ReadOnlySignedBeaconBlock) (*structs.LightClientBootstrap, error) {
// assert compute_epoch_at_slot(state.slot) >= ALTAIR_FORK_EPOCH
if slots.ToEpoch(state.Slot()) < params.BeaconConfig().AltairForkEpoch {
return nil, fmt.Errorf("light client bootstrap is not supported before Altair, invalid slot %d", state.Slot())
@@ -67,55 +48,63 @@ func createLightClientBootstrapAltair(ctx context.Context, state state.BeaconSta
return nil, fmt.Errorf("state slot %d not equal to latest block header slot %d", state.Slot(), latestBlockHeader.Slot)
}
// Prepare data
// header.state_root = hash_tree_root(state)
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
latestBlockHeader.StateRoot = stateRoot[:]
// assert hash_tree_root(header) == hash_tree_root(block.message)
latestBlockHeaderRoot, err := latestBlockHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block header root")
}
beaconBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get block root")
}
if latestBlockHeaderRoot != beaconBlockRoot {
return nil, fmt.Errorf("latest block header root %#x not equal to block root %#x", latestBlockHeaderRoot, beaconBlockRoot)
}
lightClientHeaderContainer, err := lightclient.BlockToLightClientHeader(block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
lightClientHeader := lightClientHeaderContainer.GetHeaderAltair()
apiLightClientHeader := &structs.LightClientHeader{
Beacon: structs.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(lightClientHeader.Beacon)),
}
headerJSON, err := json.Marshal(apiLightClientHeader)
if err != nil {
return nil, errors.Wrap(err, "could not convert header to raw message")
}
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee")
}
committee := structs.SyncCommitteeFromConsensus(currentSyncCommittee)
currentSyncCommitteeProof, err := state.CurrentSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee proof")
}
branch := make([]string, fieldparams.NextSyncCommitteeBranchDepth)
branch := make([]string, fieldparams.SyncCommitteeBranchDepth)
for i, proof := range currentSyncCommitteeProof {
branch[i] = hexutil.Encode(proof)
}
beacon := structs.BeaconBlockHeaderFromConsensus(latestBlockHeader)
if beacon == nil {
return nil, fmt.Errorf("could not get beacon block header")
}
header := &structs.LightClientHeader{
Beacon: beacon,
}
// Above shared util function won't calculate state root, so we need to do it manually
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
header.Beacon.StateRoot = hexutil.Encode(stateRoot[:])
headerJson, err := json.Marshal(header)
if err != nil {
return nil, errors.Wrap(err, "could not convert header to raw message")
}
// Return result
result := &structs.LightClientBootstrap{
Header: headerJson,
CurrentSyncCommittee: committee,
Header: headerJSON,
CurrentSyncCommittee: structs.SyncCommitteeFromConsensus(currentSyncCommittee),
CurrentSyncCommitteeBranch: branch,
}
return result, nil
}
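
The reworked bootstrap path now receives the signed block so it can enforce the spec's consistency check before building the header: patch hash_tree_root(state) into state.latest_block_header and require that the resulting header root equals the block root. A condensed sketch of just that check, assuming the LatestBlockHeader getter used earlier in this function; checkHeaderMatchesBlock is a hypothetical name.

// Sketch of the consistency check added above: the advertised header must be
// the header of the exact block the caller asked about.
func checkHeaderMatchesBlock(ctx context.Context, st state.BeaconState, blk interfaces.ReadOnlySignedBeaconBlock) error {
	header := st.LatestBlockHeader()
	stateRoot, err := st.HashTreeRoot(ctx)
	if err != nil {
		return err
	}
	header.StateRoot = stateRoot[:] // header.state_root = hash_tree_root(state)
	headerRoot, err := header.HashTreeRoot()
	if err != nil {
		return err
	}
	blockRoot, err := blk.Block().HashTreeRoot()
	if err != nil {
		return err
	}
	if headerRoot != blockRoot {
		return fmt.Errorf("header root %#x does not match block root %#x", headerRoot, blockRoot)
	}
	return nil
}
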
func createLightClientBootstrapCapella(ctx context.Context, state state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (*structs.LightClientBootstrap, error) {
func createLightClientBootstrapCapella(ctx context.Context, state state.BeaconState, block interfaces.ReadOnlySignedBeaconBlock) (*structs.LightClientBootstrap, error) {
// assert compute_epoch_at_slot(state.slot) >= CAPELLA_FORK_EPOCH
if slots.ToEpoch(state.Slot()) < params.BeaconConfig().CapellaForkEpoch {
return nil, fmt.Errorf("creating Capella light client bootstrap is not supported before Capella, invalid slot %d", state.Slot())
@@ -127,111 +116,63 @@ func createLightClientBootstrapCapella(ctx context.Context, state state.BeaconSt
return nil, fmt.Errorf("state slot %d not equal to latest block header slot %d", state.Slot(), latestBlockHeader.Slot)
}
// Prepare data
// header.state_root = hash_tree_root(state)
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
latestBlockHeader.StateRoot = stateRoot[:]
// assert hash_tree_root(header) == hash_tree_root(block.message)
latestBlockHeaderRoot, err := latestBlockHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block header root")
}
beaconBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get block root")
}
if latestBlockHeaderRoot != beaconBlockRoot {
return nil, fmt.Errorf("latest block header root %#x not equal to block root %#x", latestBlockHeaderRoot, beaconBlockRoot)
}
lightClientHeaderContainer, err := lightclient.BlockToLightClientHeader(block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
lightClientHeader := lightClientHeaderContainer.GetHeaderCapella()
apiLightClientHeader := &structs.LightClientHeader{
Beacon: structs.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(lightClientHeader.Beacon)),
}
headerJSON, err := json.Marshal(apiLightClientHeader)
if err != nil {
return nil, errors.Wrap(err, "could not convert header to raw message")
}
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee")
}
committee := structs.SyncCommitteeFromConsensus(currentSyncCommittee)
currentSyncCommitteeProof, err := state.CurrentSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee proof")
}
branch := make([]string, fieldparams.NextSyncCommitteeBranchDepth)
branch := make([]string, fieldparams.SyncCommitteeBranchDepth)
for i, proof := range currentSyncCommitteeProof {
branch[i] = hexutil.Encode(proof)
}
beacon := structs.BeaconBlockHeaderFromConsensus(latestBlockHeader)
payloadInterface, err := block.Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
if err != nil {
return nil, errors.Wrap(err, "could not get transactions")
}
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
transactionsRoot = transactionsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
withdrawalsRoot = withdrawalsRootArray[:]
}
executionPayloadHeader := &structs.ExecutionPayloadHeaderCapella{
ParentHash: hexutil.Encode(payloadInterface.ParentHash()),
FeeRecipient: hexutil.Encode(payloadInterface.FeeRecipient()),
StateRoot: hexutil.Encode(payloadInterface.StateRoot()),
ReceiptsRoot: hexutil.Encode(payloadInterface.ReceiptsRoot()),
LogsBloom: hexutil.Encode(payloadInterface.LogsBloom()),
PrevRandao: hexutil.Encode(payloadInterface.PrevRandao()),
BlockNumber: hexutil.EncodeUint64(payloadInterface.BlockNumber()),
GasLimit: hexutil.EncodeUint64(payloadInterface.GasLimit()),
GasUsed: hexutil.EncodeUint64(payloadInterface.GasUsed()),
Timestamp: hexutil.EncodeUint64(payloadInterface.Timestamp()),
ExtraData: hexutil.Encode(payloadInterface.ExtraData()),
BaseFeePerGas: hexutil.Encode(payloadInterface.BaseFeePerGas()),
BlockHash: hexutil.Encode(payloadInterface.BlockHash()),
TransactionsRoot: hexutil.Encode(transactionsRoot),
WithdrawalsRoot: hexutil.Encode(withdrawalsRoot),
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
executionPayloadProofStr := make([]string, len(executionPayloadProof))
for i, proof := range executionPayloadProof {
executionPayloadProofStr[i] = hexutil.Encode(proof)
}
header := &structs.LightClientHeaderCapella{
Beacon: beacon,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProofStr,
}
// Above shared util function won't calculate state root, so we need to do it manually
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
header.Beacon.StateRoot = hexutil.Encode(stateRoot[:])
headerJson, err := json.Marshal(header)
if err != nil {
return nil, errors.Wrap(err, "could not convert header to raw message")
}
// Return result
result := &structs.LightClientBootstrap{
Header: headerJson,
CurrentSyncCommittee: committee,
Header: headerJSON,
CurrentSyncCommittee: structs.SyncCommitteeFromConsensus(currentSyncCommittee),
CurrentSyncCommitteeBranch: branch,
}
return result, nil
}
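
The Capella and Deneb bootstrap paths above share a fallback pattern for the execution payload roots: a blinded payload exposes TransactionsRoot and WithdrawalsRoot directly, while a full payload only exposes the raw lists, so the code tries the root accessor first and computes the root itself when the field is unsupported. A minimal self-contained sketch of that pattern under stated assumptions; the error sentinel, payload interface, and the plain SHA-256 stand-in for the real SSZ list root are invented for illustration, not Prysm APIs:

package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// errUnsupportedField stands in for the consensus_types.ErrUnsupportedField sentinel.
var errUnsupportedField = errors.New("unsupported field for this payload type")

type payload interface {
	TransactionsRoot() ([]byte, error) // available on blinded (header-only) payloads
	Transactions() ([][]byte, error)   // available on full payloads
}

// transactionsRoot prefers the precomputed root and falls back to hashing the raw
// transaction list when the payload type does not carry the root field.
func transactionsRoot(p payload) ([]byte, error) {
	root, err := p.TransactionsRoot()
	if errors.Is(err, errUnsupportedField) {
		txs, txErr := p.Transactions()
		if txErr != nil {
			return nil, txErr
		}
		h := sha256.New() // stand-in only; the real code uses ssz.TransactionsRoot
		for _, tx := range txs {
			h.Write(tx)
		}
		return h.Sum(nil), nil
	} else if err != nil {
		return nil, err
	}
	return root, nil
}

// fullPayload models a non-blinded payload that cannot serve the root directly.
type fullPayload struct{ txs [][]byte }

func (p fullPayload) TransactionsRoot() ([]byte, error) { return nil, errUnsupportedField }
func (p fullPayload) Transactions() ([][]byte, error)   { return p.txs, nil }

func main() {
	root, err := transactionsRoot(fullPayload{txs: [][]byte{{0x01}, {0x02}}})
	fmt.Printf("%v %x\n", err, root)
}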
func createLightClientBootstrapDeneb(ctx context.Context, state state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (*structs.LightClientBootstrap, error) {
func createLightClientBootstrapDeneb(ctx context.Context, state state.BeaconState, block interfaces.ReadOnlySignedBeaconBlock) (*structs.LightClientBootstrap, error) {
// assert compute_epoch_at_slot(state.slot) >= DENEB_FORK_EPOCH
if slots.ToEpoch(state.Slot()) < params.BeaconConfig().DenebForkEpoch {
return nil, fmt.Errorf("creating Deneb light client bootstrap is not supported before Deneb, invalid slot %d", state.Slot())
@@ -243,103 +184,61 @@ func createLightClientBootstrapDeneb(ctx context.Context, state state.BeaconStat
return nil, fmt.Errorf("state slot %d not equal to latest block header slot %d", state.Slot(), latestBlockHeader.Slot)
}
// Prepare data
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee")
}
committee := structs.SyncCommitteeFromConsensus(currentSyncCommittee)
currentSyncCommitteeProof, err := state.CurrentSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee proof")
}
branch := make([]string, fieldparams.NextSyncCommitteeBranchDepth)
for i, proof := range currentSyncCommitteeProof {
branch[i] = hexutil.Encode(proof)
}
beacon := structs.BeaconBlockHeaderFromConsensus(latestBlockHeader)
payloadInterface, err := block.Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := payloadInterface.TransactionsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
transactions, err := payloadInterface.Transactions()
if err != nil {
return nil, errors.Wrap(err, "could not get transactions")
}
transactionsRootArray, err := ssz.TransactionsRoot(transactions)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
transactionsRoot = transactionsRootArray[:]
} else if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := payloadInterface.WithdrawalsRoot()
if errors.Is(err, consensus_types.ErrUnsupportedField) {
withdrawals, err := payloadInterface.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals")
}
withdrawalsRootArray, err := ssz.WithdrawalSliceRoot(withdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
withdrawalsRoot = withdrawalsRootArray[:]
}
executionPayloadHeader := &structs.ExecutionPayloadHeaderDeneb{
ParentHash: hexutil.Encode(payloadInterface.ParentHash()),
FeeRecipient: hexutil.Encode(payloadInterface.FeeRecipient()),
StateRoot: hexutil.Encode(payloadInterface.StateRoot()),
ReceiptsRoot: hexutil.Encode(payloadInterface.ReceiptsRoot()),
LogsBloom: hexutil.Encode(payloadInterface.LogsBloom()),
PrevRandao: hexutil.Encode(payloadInterface.PrevRandao()),
BlockNumber: hexutil.EncodeUint64(payloadInterface.BlockNumber()),
GasLimit: hexutil.EncodeUint64(payloadInterface.GasLimit()),
GasUsed: hexutil.EncodeUint64(payloadInterface.GasUsed()),
Timestamp: hexutil.EncodeUint64(payloadInterface.Timestamp()),
ExtraData: hexutil.Encode(payloadInterface.ExtraData()),
BaseFeePerGas: hexutil.Encode(payloadInterface.BaseFeePerGas()),
BlockHash: hexutil.Encode(payloadInterface.BlockHash()),
TransactionsRoot: hexutil.Encode(transactionsRoot),
WithdrawalsRoot: hexutil.Encode(withdrawalsRoot),
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block)
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
executionPayloadProofStr := make([]string, len(executionPayloadProof))
for i, proof := range executionPayloadProof {
executionPayloadProofStr[i] = hexutil.Encode(proof)
}
header := &structs.LightClientHeaderDeneb{
Beacon: beacon,
Execution: executionPayloadHeader,
ExecutionBranch: executionPayloadProofStr,
}
// Above shared util function won't calculate state root, so we need to do it manually
// header.state_root = hash_tree_root(state)
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
header.Beacon.StateRoot = hexutil.Encode(stateRoot[:])
latestBlockHeader.StateRoot = stateRoot[:]
headerJson, err := json.Marshal(header)
// assert hash_tree_root(header) == hash_tree_root(block.message)
latestBlockHeaderRoot, err := latestBlockHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block header root")
}
beaconBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get block root")
}
if latestBlockHeaderRoot != beaconBlockRoot {
return nil, fmt.Errorf("latest block header root %#x not equal to block root %#x", latestBlockHeaderRoot, beaconBlockRoot)
}
lightClientHeaderContainer, err := lightclient.BlockToLightClientHeader(block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
lightClientHeader := lightClientHeaderContainer.GetHeaderDeneb()
apiLightClientHeader := &structs.LightClientHeader{
Beacon: structs.BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(lightClientHeader.Beacon)),
}
headerJSON, err := json.Marshal(apiLightClientHeader)
if err != nil {
return nil, errors.Wrap(err, "could not convert header to raw message")
}
// Return result
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee")
}
currentSyncCommitteeProof, err := state.CurrentSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee proof")
}
var branch []string
switch block.Version() {
case version.Deneb:
branch = make([]string, fieldparams.SyncCommitteeBranchDepth)
case version.Electra:
branch = make([]string, fieldparams.SyncCommitteeBranchDepthElectra)
}
for i, proof := range currentSyncCommitteeProof {
branch[i] = hexutil.Encode(proof)
}
result := &structs.LightClientBootstrap{
Header: headerJson,
CurrentSyncCommittee: committee,
Header: headerJSON,
CurrentSyncCommittee: structs.SyncCommitteeFromConsensus(currentSyncCommittee),
CurrentSyncCommitteeBranch: branch,
}
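
The switch over block.Version() above sizes the current sync committee branch per fork, choosing fieldparams.SyncCommitteeBranchDepthElectra for Electra blocks. A reduced sketch of that selection; the depth values (5 before Electra, 6 from Electra on) and the local version constants are assumptions for illustration, not Prysm's fieldparams or version packages:

package main

import "fmt"

// Assumed proof depths for the current sync committee: 5 through Deneb, 6 from Electra on,
// because the Electra BeaconState adds top-level fields and deepens the tree by one level.
const (
	syncCommitteeBranchDepth        = 5
	syncCommitteeBranchDepthElectra = 6
)

type forkVersion int

const (
	deneb forkVersion = iota
	electra
)

// branchDepthFor mirrors the switch over block.Version() above; unlike the original, it uses
// a default case so the sketch never returns a zero-length branch.
func branchDepthFor(v forkVersion) int {
	switch v {
	case electra:
		return syncCommitteeBranchDepthElectra
	default:
		return syncCommitteeBranchDepth
	}
}

func main() {
	fmt.Println(branchDepthFor(deneb), branchDepthFor(electra)) // 5 6
}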
@@ -351,9 +250,10 @@ func newLightClientUpdateFromBeaconState(
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*structs.LightClientUpdate, error) {
result, err := lightclient.NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, finalizedBlock)
result, err := lightclient.NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return nil, err
}
@@ -366,9 +266,10 @@ func newLightClientFinalityUpdateFromBeaconState(
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*structs.LightClientFinalityUpdate, error) {
result, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, state, block, attestedState, finalizedBlock)
result, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return nil, err
}
@@ -381,8 +282,9 @@ func newLightClientOptimisticUpdateFromBeaconState(
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*structs.LightClientOptimisticUpdate, error) {
result, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, state, block, attestedState)
result, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock)
if err != nil {
return nil, err
}
@@ -391,7 +293,7 @@ func newLightClientOptimisticUpdateFromBeaconState(
}
func IsSyncCommitteeUpdate(update *v2.LightClientUpdate) bool {
nextSyncCommitteeBranch := make([][]byte, fieldparams.NextSyncCommitteeBranchDepth)
nextSyncCommitteeBranch := make([][]byte, fieldparams.SyncCommitteeBranchDepth)
return !reflect.DeepEqual(update.NextSyncCommitteeBranch, nextSyncCommitteeBranch)
}
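
IsSyncCommitteeUpdate above relies on a simple idiom: build an all-zero branch of the expected depth and report whether the update's next sync committee branch differs from it. A self-contained sketch of that check; the depth constant and helper name are illustrative stand-ins, not Prysm identifiers:

package main

import (
	"fmt"
	"reflect"
)

// nextSyncCommitteeBranchDepth is an assumed stand-in for fieldparams.SyncCommitteeBranchDepth.
const nextSyncCommitteeBranchDepth = 5

// hasNextSyncCommittee reports whether the branch differs from an all-zero branch of the
// expected depth, which is how the function above decides the update is relevant.
func hasNextSyncCommittee(branch [][]byte) bool {
	empty := make([][]byte, nextSyncCommitteeBranchDepth)
	return !reflect.DeepEqual(branch, empty)
}

func main() {
	empty := make([][]byte, nextSyncCommitteeBranchDepth)
	withCommittee := make([][]byte, nextSyncCommitteeBranchDepth)
	withCommittee[0] = []byte("xyz")
	fmt.Println(hasNextSyncCommittee(empty), hasNextSyncCommittee(withCommittee)) // false true
}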


@@ -13,7 +13,7 @@ import (
// When the update has relevant sync committee
func createNonEmptySyncCommitteeBranch() [][]byte {
res := make([][]byte, fieldparams.NextSyncCommitteeBranchDepth)
res := make([][]byte, fieldparams.SyncCommitteeBranchDepth)
res[0] = []byte("xyz")
return res
}
@@ -101,7 +101,7 @@ func TestIsBetterUpdate(t *testing.T) {
}},
},
},
NextSyncCommitteeBranch: make([][]byte, fieldparams.NextSyncCommitteeBranchDepth),
NextSyncCommitteeBranch: make([][]byte, fieldparams.SyncCommitteeBranchDepth),
SignatureSlot: 9999,
},
newUpdate: &ethpbv2.LightClientUpdate{
@@ -147,7 +147,7 @@ func TestIsBetterUpdate(t *testing.T) {
}},
},
},
NextSyncCommitteeBranch: make([][]byte, fieldparams.NextSyncCommitteeBranchDepth),
NextSyncCommitteeBranch: make([][]byte, fieldparams.SyncCommitteeBranchDepth),
SignatureSlot: 9999,
},
expectedResult: false,


@@ -287,6 +287,18 @@ func (s *Server) produceBlockV3(ctx context.Context, w http.ResponseWriter, r *h
handleProduceDenebV3(w, isSSZ, denebBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
blindedElectraBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_BlindedElectra)
if ok {
w.Header().Set(api.VersionHeader, version.String(version.Electra))
handleProduceBlindedElectraV3(w, isSSZ, blindedElectraBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
electraBlockContents, ok := v1alpha1resp.Block.(*eth.GenericBeaconBlock_Electra)
if ok {
w.Header().Set(api.VersionHeader, version.String(version.Electra))
handleProduceElectraV3(w, isSSZ, electraBlockContents, v1alpha1resp.PayloadValue, consensusBlockValue)
return
}
}
func getConsensusBlockValue(ctx context.Context, blockRewardsFetcher rewards.BlockRewardsFetcher, i interface{} /* block as argument */) (string, *httputil.DefaultJsonError) {
@@ -587,3 +599,74 @@ func handleProduceDenebV3(
Data: jsonBytes,
})
}
func handleProduceBlindedElectraV3(
w http.ResponseWriter,
isSSZ bool,
blk *eth.GenericBeaconBlock_BlindedElectra,
executionPayloadValue string,
consensusPayloadValue string,
) {
if isSSZ {
sszResp, err := blk.BlindedElectra.MarshalSSZ()
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, sszResp, "blindedElectraBlockContents.ssz")
return
}
blindedBlock, err := structs.BlindedBeaconBlockElectraFromConsensus(blk.BlindedElectra)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
jsonBytes, err := json.Marshal(blindedBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.ProduceBlockV3Response{
Version: version.String(version.Electra),
ExecutionPayloadBlinded: true,
ExecutionPayloadValue: executionPayloadValue,
ConsensusBlockValue: consensusPayloadValue,
Data: jsonBytes,
})
}
func handleProduceElectraV3(
w http.ResponseWriter,
isSSZ bool,
blk *eth.GenericBeaconBlock_Electra,
executionPayloadValue string,
consensusBlockValue string,
) {
if isSSZ {
sszResp, err := blk.Electra.MarshalSSZ()
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteSsz(w, sszResp, "electraBlockContents.ssz")
return
}
blockContents, err := structs.BeaconBlockContentsElectraFromConsensus(blk.Electra)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
jsonBytes, err := json.Marshal(blockContents)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.ProduceBlockV3Response{
Version: version.String(version.Electra),
ExecutionPayloadBlinded: false,
ExecutionPayloadValue: executionPayloadValue, // mev not available at this point
ConsensusBlockValue: consensusBlockValue,
Data: jsonBytes,
})
}
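
On the consumer side, the JSON envelope written by handleProduceBlindedElectraV3 and handleProduceElectraV3 can be decoded with the field names asserted in the tests further down (version, execution_payload_blinded, execution_payload_value, consensus_block_value, data). A hedged client-side sketch; the local struct is a stand-in, not the structs.ProduceBlockV3Response type itself:

package main

import (
	"encoding/json"
	"fmt"
)

// produceBlockV3Envelope is a local type for decoding the response; the JSON tags match the
// bodies asserted in the tests below.
type produceBlockV3Envelope struct {
	Version                 string          `json:"version"`
	ExecutionPayloadBlinded bool            `json:"execution_payload_blinded"`
	ExecutionPayloadValue   string          `json:"execution_payload_value"`
	ConsensusBlockValue     string          `json:"consensus_block_value"`
	Data                    json.RawMessage `json:"data"` // fork-specific block, decoded once the version is known
}

func main() {
	body := []byte(`{"version":"electra","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":{}}`)
	var resp produceBlockV3Envelope
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	// A caller would switch on resp.Version here before unmarshaling resp.Data into the
	// matching blinded or full Electra block type.
	fmt.Println(resp.Version, resp.ExecutionPayloadBlinded, resp.ExecutionPayloadValue)
}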


@@ -308,6 +308,75 @@ func TestProduceBlockV2(t *testing.T) {
assert.Equal(t, http.StatusInternalServerError, e.Code)
assert.StringContains(t, "Prepared block is blinded", e.Message)
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
err = json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.ToUnsigned())
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: true,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v2/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
err = json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: true,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
return block.Message.ToGeneric()
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v2/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV2(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
assert.StringContains(t, "Prepared block is blinded", e.Message)
})
t.Run("invalid query parameter slot empty", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
server := &Server{
@@ -650,6 +719,76 @@ func TestProduceBlockV2SSZ(t *testing.T) {
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v2/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV2(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
assert.StringContains(t, "Prepared block is blinded", e.Message)
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
err = json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: true,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
return block.ToUnsigned().ToGeneric()
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v2/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
g, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
bl, ok := g.Block.(*eth.GenericBeaconBlock_Electra)
require.Equal(t, true, ok)
ssz, err := bl.Electra.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
err = json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: true,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
return block.Message.ToGeneric()
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v2/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
@@ -944,6 +1083,75 @@ func TestProduceBlindedBlock(t *testing.T) {
require.Equal(t, want, body)
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
err = json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
return block.ToUnsigned().ToGeneric()
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v1/validator/blinded_blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlindedBlock(writer, request)
assert.Equal(t, http.StatusInternalServerError, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusInternalServerError, e.Code)
assert.StringContains(t, "Prepared block is not blinded", e.Message)
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
err = json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.Message)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.Message.ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v1/validator/blinded_blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlindedBlock(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
})
t.Run("invalid query parameter slot empty", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
server := &Server{
@@ -1309,6 +1517,82 @@ func TestProduceBlockV3(t *testing.T) {
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.ToUnsigned())
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":false,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &block)
require.NoError(t, err)
jsonBytes, err := json.Marshal(block.Message)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.Message.ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
want := fmt.Sprintf(`{"version":"electra","execution_payload_blinded":true,"execution_payload_value":"2000","consensus_block_value":"10000000000","data":%s}`, string(jsonBytes))
body := strings.ReplaceAll(writer.Body.String(), "\n", "")
require.Equal(t, want, body)
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("invalid query parameter slot empty", func(t *testing.T) {
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
server := &Server{
@@ -1697,4 +1981,86 @@ func TestProduceBlockV3SSZ(t *testing.T) {
require.Equal(t, "deneb", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Electra", func(t *testing.T) {
var block *structs.SignedBeaconBlockContentsElectra
err := json.Unmarshal([]byte(rpctesting.ElectraBlockContents), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
g, err := block.ToUnsigned().ToGeneric()
require.NoError(t, err)
bl, ok := g.Block.(*eth.GenericBeaconBlock_Electra)
require.Equal(t, true, ok)
ssz, err := bl.Electra.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "false", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
t.Run("Blinded Electra", func(t *testing.T) {
var block *structs.SignedBlindedBeaconBlockElectra
err := json.Unmarshal([]byte(rpctesting.BlindedElectraBlock), &block)
require.NoError(t, err)
v1alpha1Server := mock2.NewMockBeaconNodeValidatorServer(ctrl)
v1alpha1Server.EXPECT().GetBeaconBlock(gomock.Any(), &eth.BlockRequest{
Slot: 1,
RandaoReveal: bRandao,
Graffiti: bGraffiti,
SkipMevBoost: false,
}).Return(
func() (*eth.GenericBeaconBlock, error) {
b, err := block.Message.ToGeneric()
require.NoError(t, err)
b.PayloadValue = "2000"
return b, nil
}())
server := &Server{
V1Alpha1Server: v1alpha1Server,
SyncChecker: syncChecker,
OptimisticModeFetcher: chainService,
BlockRewardFetcher: rewardFetcher,
}
request := httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://foo.example/eth/v3/validator/blocks/1?randao_reveal=%s&graffiti=%s", randao, graffiti), nil)
request.Header.Set("Accept", api.OctetStreamMediaType)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.ProduceBlockV3(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
g, err := block.Message.ToGeneric()
require.NoError(t, err)
bl, ok := g.Block.(*eth.GenericBeaconBlock_BlindedElectra)
require.Equal(t, true, ok)
ssz, err := bl.BlindedElectra.MarshalSSZ()
require.NoError(t, err)
require.Equal(t, string(ssz), writer.Body.String())
require.Equal(t, "true", writer.Header().Get(api.ExecutionPayloadBlindedHeader))
require.Equal(t, "2000", writer.Header().Get(api.ExecutionPayloadValueHeader))
require.Equal(t, "electra", writer.Header().Get(api.VersionHeader))
require.Equal(t, "10000000000", writer.Header().Get(api.ConsensusBlockValueHeader))
})
}


@@ -14,6 +14,7 @@ go_library(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
@@ -22,8 +23,10 @@ go_library(
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/sync:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/validator:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/eth/v1:go_default_library",
@@ -47,13 +50,16 @@ go_test(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",


@@ -15,7 +15,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -183,3 +185,52 @@ func (s *Server) GetChainHead(w http.ResponseWriter, r *http.Request) {
}
httputil.WriteJson(w, response)
}
func (s *Server) PublishBlobs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishBlobs")
defer span.End()
if shared.IsSyncing(r.Context(), w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
var req structs.PublishBlobsRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
httputil.HandleError(w, "Could not decode JSON request body", http.StatusBadRequest)
return
}
if req.BlobSidecars == nil {
httputil.HandleError(w, "Missing blob sidecars", http.StatusBadRequest)
return
}
root, err := bytesutil.DecodeHexWithLength(req.BlockRoot, 32)
if err != nil {
httputil.HandleError(w, "Could not decode block root: "+err.Error(), http.StatusBadRequest)
return
}
for _, blobSidecar := range req.BlobSidecars.Sidecars {
sc, err := blobSidecar.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not decode blob sidecar: "+err.Error(), http.StatusBadRequest)
return
}
readOnlySc, err := blocks.NewROBlobWithRoot(sc, bytesutil.ToBytes32(root))
if err != nil {
httputil.HandleError(w, "Could not create read-only blob: "+err.Error(), http.StatusInternalServerError)
return
}
verifiedBlob := blocks.NewVerifiedROBlob(readOnlySc)
if err := s.BlobReceiver.ReceiveBlob(ctx, verifiedBlob); err != nil {
httputil.HandleError(w, "Could not receive blob: "+err.Error(), http.StatusInternalServerError)
return
}
if err := s.Broadcaster.BroadcastBlob(ctx, sc.Index, sc); err != nil {
httputil.HandleError(w, "Failed to broadcast blob: "+err.Error(), http.StatusInternalServerError)
return
}
}
}


@@ -18,10 +18,13 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
dbTest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
mockp2p "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
rpctesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/prysm/testing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
mockstategen "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen/mock"
mockSync "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/initial-sync/testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -870,3 +873,163 @@ func TestServer_GetChainHead(t *testing.T) {
assert.DeepEqual(t, hexutil.Encode(fRoot[:]), ch.FinalizedBlockRoot, "Unexpected FinalizedBlockRoot")
assert.Equal(t, false, ch.OptimisticStatus)
}
func TestPublishBlobs_InvalidJson(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.InvalidJson)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode JSON request body", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_MissingBlob(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestMissingBlob)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode blob sidecar", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_MissingSignedBlockHeader(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestMissingSignedBlockHeader)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode blob sidecar", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_MissingSidecars(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestMissingSidecars)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Missing blob sidecars", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_EmptySidecarsList(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestEmptySidecarsList)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_NullSidecar(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestNullSidecar)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode blob sidecar", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_SeveralFieldsMissing(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestSeveralFieldsMissing)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode blob sidecar", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_BadBlockRoot(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequestBadBlockRoot)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Could not decode block root", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs(t *testing.T) {
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequest)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 1)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), true)
}


@@ -3,6 +3,7 @@ package beacon
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
beacondb "github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/core"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/lookup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
@@ -20,4 +21,6 @@ type Server struct {
ChainInfoFetcher blockchain.ChainInfoFetcher
FinalizationFetcher blockchain.FinalizationFetcher
CoreService *core.Service
Broadcaster p2p.Broadcaster
BlobReceiver blockchain.BlobReceiver
}


@@ -0,0 +1,9 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["json.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/prysm/testing",
visibility = ["//visibility:public"],
)

File diff suppressed because one or more lines are too long


@@ -217,10 +217,10 @@ func (a proposerAtts) sort() (proposerAtts, error) {
return a, nil
}
if features.Get().EnableCommitteeAwarePacking {
return a.sortBySlotAndCommittee()
if features.Get().DisableCommitteeAwarePacking {
return a.sortByProfitabilityUsingMaxCover()
}
return a.sortByProfitabilityUsingMaxCover()
return a.sortBySlotAndCommittee()
}
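
The change above inverts the feature flag from opt-in (EnableCommitteeAwarePacking) to opt-out (DisableCommitteeAwarePacking), which makes the slot-and-committee sort the default path. A toy reduction of that inversion; the flags struct and sort names are hypothetical, not Prysm's features package:

package main

import "fmt"

// flags is a hypothetical stand-in for the features configuration touched above.
type flags struct {
	DisableCommitteeAwarePacking bool
}

// sortStrategy mirrors the control flow of proposerAtts.sort after the change: committee-aware
// packing is the default and the flag only opts out of it.
func sortStrategy(f flags) string {
	if f.DisableCommitteeAwarePacking {
		return "max-cover profitability sort"
	}
	return "slot-and-committee aware sort"
}

func main() {
	fmt.Println(sortStrategy(flags{}))                                   // default after the change
	fmt.Println(sortStrategy(flags{DisableCommitteeAwarePacking: true})) // explicit opt-out
}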
// Separate attestations by slot, as slot number takes higher precedence when sorting.


@@ -21,6 +21,11 @@ import (
)
func TestProposer_ProposerAtts_sort(t *testing.T) {
feat := features.Get()
feat.DisableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
type testData struct {
slot primitives.Slot
bits bitfield.Bitlist
@@ -186,11 +191,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
}
t.Run("no atts", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
atts := getAtts([]testData{})
want := getAtts([]testData{})
atts, err := atts.sort()
@@ -201,11 +201,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("single att", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
atts := getAtts([]testData{
{4, bitfield.Bitlist{0b11100000, 0b1}},
})
@@ -220,11 +215,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("single att per slot", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
atts := getAtts([]testData{
{1, bitfield.Bitlist{0b11000000, 0b1}},
{4, bitfield.Bitlist{0b11100000, 0b1}},
@@ -241,10 +231,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("two atts on one of the slots", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
atts := getAtts([]testData{
{1, bitfield.Bitlist{0b11000000, 0b1}},
@@ -263,11 +249,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("compare to native sort", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
// The max-cover based approach will select 0b00001100 instead, despite lower bit count
// (since it has two new/unknown bits).
t.Run("max-cover", func(t *testing.T) {
@@ -289,11 +270,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("multiple slots", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
atts := getAtts([]testData{
{2, bitfield.Bitlist{0b11100000, 0b1}},
{4, bitfield.Bitlist{0b11100000, 0b1}},
@@ -316,11 +292,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
})
t.Run("follows max-cover", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
// Items at slot 4 must be first split into two lists by max-cover, with
// 0b10000011 being selected and 0b11100001 being leftover (despite naive bit count suggesting otherwise).
atts := getAtts([]testData{
@@ -350,11 +321,6 @@ func TestProposer_ProposerAtts_committeeAwareSort(t *testing.T) {
func TestProposer_sort_DifferentCommittees(t *testing.T) {
t.Run("one att per committee", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
c1_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11111000, 0b1}, Data: &ethpb.AttestationData{CommitteeIndex: 1}})
c2_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11100000, 0b1}, Data: &ethpb.AttestationData{CommitteeIndex: 2}})
atts := proposerAtts{c1_a1, c2_a1}
@@ -364,11 +330,6 @@ func TestProposer_sort_DifferentCommittees(t *testing.T) {
assert.DeepEqual(t, want, atts)
})
t.Run("multiple atts per committee", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
c1_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11111100, 0b1}, Data: &ethpb.AttestationData{CommitteeIndex: 1}})
c1_a2 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000010, 0b1}, Data: &ethpb.AttestationData{CommitteeIndex: 1}})
c2_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11110000, 0b1}, Data: &ethpb.AttestationData{CommitteeIndex: 2}})
@@ -381,11 +342,6 @@ func TestProposer_sort_DifferentCommittees(t *testing.T) {
assert.DeepEqual(t, want, atts)
})
t.Run("multiple atts per committee, multiple slots", func(t *testing.T) {
feat := features.Get()
feat.EnableCommitteeAwarePacking = true
reset := features.InitWithReset(feat)
defer reset()
s2_c1_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11111100, 0b1}, Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}})
s2_c1_a2 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10000010, 0b1}, Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 1}})
s2_c2_a1 := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b11110000, 0b1}, Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 2}})


@@ -83,6 +83,7 @@ type Server struct {
// WaitForActivation checks if a validator public key exists in the active validator registry of the current
// beacon state, if not, then it creates a stream which listens for canonical states which contain
// the validator with the public key as an active validator record.
// Deprecated: do not use; poll the validator status once per epoch instead.
func (vs *Server) WaitForActivation(req *ethpb.ValidatorActivationRequest, stream ethpb.BeaconNodeValidator_WaitForActivationServer) error {
activeValidatorExists, validatorStatuses, err := vs.activationStatus(stream.Context(), req.PublicKeys)
if err != nil {

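The deprecation note above points callers at a polling model instead of a long-lived activation stream. A hedged sketch of that pattern, checking activation once per epoch until it succeeds or the context ends; the status callback and the 384-second epoch duration (32 slots of 12 seconds) are assumptions for illustration, not the validator client's actual code:

package main

import (
	"context"
	"fmt"
	"time"
)

// epochDuration assumes mainnet timing: 32 slots of 12 seconds each.
const epochDuration = 384 * time.Second

// waitForActivation polls the supplied status check once per epoch until the validator is
// active or the context is cancelled.
func waitForActivation(ctx context.Context, isActive func(context.Context) (bool, error)) error {
	ticker := time.NewTicker(epochDuration)
	defer ticker.Stop()
	for {
		active, err := isActive(ctx)
		if err != nil {
			return err
		}
		if active {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Pretend the validator is already active so the example returns immediately.
	err := waitForActivation(context.Background(), func(context.Context) (bool, error) {
		return true, nil
	})
	fmt.Println(err)
}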

@@ -225,7 +225,7 @@ type ReadOnlySyncCommittee interface {
type ReadOnlyDeposits interface {
DepositBalanceToConsume() (primitives.Gwei, error)
DepositRequestsStartIndex() (uint64, error)
PendingBalanceDeposits() ([]*ethpb.PendingBalanceDeposit, error)
PendingDeposits() ([]*ethpb.PendingDeposit, error)
}
type ReadOnlyConsolidations interface {
@@ -331,8 +331,8 @@ type WriteOnlyConsolidations interface {
}
type WriteOnlyDeposits interface {
AppendPendingBalanceDeposit(index primitives.ValidatorIndex, amount uint64) error
AppendPendingDeposit(pd *ethpb.PendingDeposit) error
SetDepositRequestsStartIndex(index uint64) error
SetPendingBalanceDeposits(val []*ethpb.PendingBalanceDeposit) error
SetPendingDeposits(val []*ethpb.PendingDeposit) error
SetDepositBalanceToConsume(primitives.Gwei) error
}
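
The WriteOnlyDeposits change above replaces the index/amount pair of AppendPendingBalanceDeposit with a full PendingDeposit record carrying the fields exercised in the state tests further down (PublicKey, WithdrawalCredentials, Amount, Signature, Slot). A minimal sketch of appending through such an interface; the local types are illustrative stand-ins, not the generated ethpb types:

package main

import "fmt"

// PendingDeposit and depositWriter mirror the fields and the AppendPendingDeposit signature
// introduced above, but only as local example types.
type PendingDeposit struct {
	PublicKey             []byte
	WithdrawalCredentials []byte
	Amount                uint64
	Signature             []byte
	Slot                  uint64
}

type depositWriter interface {
	AppendPendingDeposit(pd *PendingDeposit) error
}

// fakeState is a toy implementation backing the interface for the example.
type fakeState struct{ pending []*PendingDeposit }

func (s *fakeState) AppendPendingDeposit(pd *PendingDeposit) error {
	s.pending = append(s.pending, pd)
	return nil
}

func main() {
	st := &fakeState{}
	var w depositWriter = st
	err := w.AppendPendingDeposit(&PendingDeposit{
		PublicKey:             make([]byte, 48), // BLS public key length
		WithdrawalCredentials: make([]byte, 32),
		Amount:                32_000_000_000, // 32 ETH in gwei
		Signature:             make([]byte, 96),
		Slot:                  1,
	})
	fmt.Println(err, len(st.pending))
}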


@@ -7,11 +7,11 @@ go_library(
"doc.go",
"error.go",
"getters_attestation.go",
"getters_balance_deposits.go",
"getters_block.go",
"getters_checkpoint.go",
"getters_consolidation.go",
"getters_deposit_requests.go",
"getters_deposits.go",
"getters_eth1.go",
"getters_exit.go",
"getters_misc.go",
@@ -27,12 +27,12 @@ go_library(
"proofs.go",
"readonly_validator.go",
"setters_attestation.go",
"setters_balance_deposits.go",
"setters_block.go",
"setters_checkpoint.go",
"setters_churn.go",
"setters_consolidation.go",
"setters_deposit_requests.go",
"setters_deposits.go",
"setters_eth1.go",
"setters_misc.go",
"setters_participation.go",
@@ -91,11 +91,11 @@ go_test(
name = "go_default_test",
srcs = [
"getters_attestation_test.go",
"getters_balance_deposits_test.go",
"getters_block_test.go",
"getters_checkpoint_test.go",
"getters_consolidation_test.go",
"getters_deposit_requests_test.go",
"getters_deposits_test.go",
"getters_exit_test.go",
"getters_participation_test.go",
"getters_test.go",
@@ -107,10 +107,10 @@ go_test(
"readonly_validator_test.go",
"references_test.go",
"setters_attestation_test.go",
"setters_balance_deposits_test.go",
"setters_churn_test.go",
"setters_consolidation_test.go",
"setters_deposit_requests_test.go",
"setters_deposits_test.go",
"setters_eth1_test.go",
"setters_misc_test.go",
"setters_participation_test.go",


@@ -65,7 +65,7 @@ type BeaconState struct {
earliestExitEpoch primitives.Epoch
consolidationBalanceToConsume primitives.Gwei
earliestConsolidationEpoch primitives.Epoch
pendingBalanceDeposits []*ethpb.PendingBalanceDeposit // pending_balance_deposits: List[PendingBalanceDeposit, PENDING_BALANCE_DEPOSITS_LIMIT]
pendingDeposits []*ethpb.PendingDeposit // pending_deposits: List[PendingDeposit, PENDING_DEPOSITS_LIMIT]
pendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // pending_partial_withdrawals: List[PartialWithdrawal, PENDING_PARTIAL_WITHDRAWALS_LIMIT]
pendingConsolidations []*ethpb.PendingConsolidation // pending_consolidations: List[PendingConsolidation, PENDING_CONSOLIDATIONS_LIMIT]
@@ -121,7 +121,7 @@ type beaconStateMarshalable struct {
EarliestExitEpoch primitives.Epoch `json:"earliest_exit_epoch" yaml:"earliest_exit_epoch"`
ConsolidationBalanceToConsume primitives.Gwei `json:"consolidation_balance_to_consume" yaml:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch primitives.Epoch `json:"earliest_consolidation_epoch" yaml:"earliest_consolidation_epoch"`
PendingBalanceDeposits []*ethpb.PendingBalanceDeposit `json:"pending_balance_deposits" yaml:"pending_balance_deposits"`
PendingDeposits []*ethpb.PendingDeposit `json:"pending_deposits" yaml:"pending_deposits"`
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal `json:"pending_partial_withdrawals" yaml:"pending_partial_withdrawals"`
PendingConsolidations []*ethpb.PendingConsolidation `json:"pending_consolidations" yaml:"pending_consolidations"`
}
@@ -190,7 +190,7 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
EarliestExitEpoch: b.earliestExitEpoch,
ConsolidationBalanceToConsume: b.consolidationBalanceToConsume,
EarliestConsolidationEpoch: b.earliestConsolidationEpoch,
PendingBalanceDeposits: b.pendingBalanceDeposits,
PendingDeposits: b.pendingDeposits,
PendingPartialWithdrawals: b.pendingPartialWithdrawals,
PendingConsolidations: b.pendingConsolidations,
}


@@ -18,22 +18,22 @@ func (b *BeaconState) DepositBalanceToConsume() (primitives.Gwei, error) {
    return b.depositBalanceToConsume, nil
}
-// PendingBalanceDeposits is a non-mutating call to the beacon state which returns a deep copy of
+// PendingDeposits is a non-mutating call to the beacon state which returns a deep copy of
// the pending balance deposit slice. This method requires access to the RLock on the state and
// only applies in electra or later.
-func (b *BeaconState) PendingBalanceDeposits() ([]*ethpb.PendingBalanceDeposit, error) {
+func (b *BeaconState) PendingDeposits() ([]*ethpb.PendingDeposit, error) {
    if b.version < version.Electra {
-       return nil, errNotSupported("PendingBalanceDeposits", b.version)
+       return nil, errNotSupported("PendingDeposits", b.version)
    }
    b.lock.RLock()
    defer b.lock.RUnlock()
-   return b.pendingBalanceDepositsVal(), nil
+   return b.pendingDepositsVal(), nil
}
-func (b *BeaconState) pendingBalanceDepositsVal() []*ethpb.PendingBalanceDeposit {
-   if b.pendingBalanceDeposits == nil {
+func (b *BeaconState) pendingDepositsVal() []*ethpb.PendingDeposit {
+   if b.pendingDeposits == nil {
        return nil
    }
-   return ethpb.CopySlice(b.pendingBalanceDeposits)
+   return ethpb.CopySlice(b.pendingDeposits)
}
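Note: the renamed accessor keeps the copy-on-read behaviour described in the comment: callers receive a deep copy via ethpb.CopySlice, and pre-Electra states return a "not supported" error. A minimal usage sketch, with import paths assumed from the repository layout:

    package main

    import (
        "fmt"

        state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native" // path assumed
        eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"                      // path assumed
    )

    func main() {
        // Build an Electra state that already carries one pending deposit.
        st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
            PendingDeposits: []*eth.PendingDeposit{
                {PublicKey: []byte{0x01}, WithdrawalCredentials: []byte{0x02}, Amount: 32_000_000_000, Slot: 1},
            },
        })
        if err != nil {
            panic(err)
        }

        // PendingDeposits returns a deep copy, so mutating the result cannot touch the state.
        deposits, err := st.PendingDeposits()
        if err != nil {
            panic(err)
        }
        fmt.Println(len(deposits), deposits[0].Amount) // 1 32000000000
    }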


@@ -25,21 +25,40 @@ func TestDepositBalanceToConsume(t *testing.T) {
    require.ErrorContains(t, "not supported", err)
}
-func TestPendingBalanceDeposits(t *testing.T) {
+func TestPendingDeposits(t *testing.T) {
    s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
-       PendingBalanceDeposits: []*eth.PendingBalanceDeposit{
-           {Index: 1, Amount: 2},
-           {Index: 3, Amount: 4},
+       PendingDeposits: []*eth.PendingDeposit{
+           {
+               PublicKey: []byte{1, 2, 3},
+               WithdrawalCredentials: []byte{4, 5, 6},
+               Amount: 2,
+               Signature: []byte{7, 8, 9},
+               Slot: 1,
+           },
+           {
+               PublicKey: []byte{11, 22, 33},
+               WithdrawalCredentials: []byte{44, 55, 66},
+               Amount: 4,
+               Signature: []byte{77, 88, 99},
+               Slot: 2,
+           },
        },
    })
    require.NoError(t, err)
-   pbd, err := s.PendingBalanceDeposits()
+   pbd, err := s.PendingDeposits()
    require.NoError(t, err)
    require.Equal(t, 2, len(pbd))
-   require.Equal(t, primitives.ValidatorIndex(1), pbd[0].Index)
+   require.DeepEqual(t, []byte{1, 2, 3}, pbd[0].PublicKey)
+   require.DeepEqual(t, []byte{4, 5, 6}, pbd[0].WithdrawalCredentials)
    require.Equal(t, uint64(2), pbd[0].Amount)
-   require.Equal(t, primitives.ValidatorIndex(3), pbd[1].Index)
+   require.DeepEqual(t, []byte{7, 8, 9}, pbd[0].Signature)
+   require.Equal(t, primitives.Slot(1), pbd[0].Slot)
+   require.DeepEqual(t, []byte{11, 22, 33}, pbd[1].PublicKey)
+   require.DeepEqual(t, []byte{44, 55, 66}, pbd[1].WithdrawalCredentials)
    require.Equal(t, uint64(4), pbd[1].Amount)
+   require.DeepEqual(t, []byte{77, 88, 99}, pbd[1].Signature)
+   require.Equal(t, primitives.Slot(2), pbd[1].Slot)
    // Fails for older than electra state
    s, err = state_native.InitializeFromProtoDeneb(&eth.BeaconStateDeneb{})


@@ -208,7 +208,7 @@ func (b *BeaconState) ToProtoUnsafe() interface{} {
        EarliestExitEpoch: b.earliestExitEpoch,
        ConsolidationBalanceToConsume: b.consolidationBalanceToConsume,
        EarliestConsolidationEpoch: b.earliestConsolidationEpoch,
-       PendingBalanceDeposits: b.pendingBalanceDeposits,
+       PendingDeposits: b.pendingDeposits,
        PendingPartialWithdrawals: b.pendingPartialWithdrawals,
        PendingConsolidations: b.pendingConsolidations,
    }
@@ -414,7 +414,7 @@ func (b *BeaconState) ToProto() interface{} {
        EarliestExitEpoch: b.earliestExitEpoch,
        ConsolidationBalanceToConsume: b.consolidationBalanceToConsume,
        EarliestConsolidationEpoch: b.earliestConsolidationEpoch,
-       PendingBalanceDeposits: b.pendingBalanceDepositsVal(),
+       PendingDeposits: b.pendingDepositsVal(),
        PendingPartialWithdrawals: b.pendingPartialWithdrawalsVal(),
        PendingConsolidations: b.pendingConsolidationsVal(),
    }
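Note: the two methods keep their existing contract: ToProtoUnsafe hands back the internal pendingDeposits slice, while ToProto goes through pendingDepositsVal() and therefore returns a copy. A small sketch of the difference (import paths assumed):

    package main

    import (
        "fmt"

        state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native" // path assumed
        eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"                      // path assumed
    )

    func main() {
        st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{
            PendingDeposits: []*eth.PendingDeposit{{Amount: 1}},
        })
        if err != nil {
            panic(err)
        }

        safePB := st.ToProto().(*eth.BeaconStateElectra)         // deep-copied slice
        unsafePB := st.ToProtoUnsafe().(*eth.BeaconStateElectra) // shares state memory

        // Editing the safe copy does not leak back into the state...
        safePB.PendingDeposits[0].Amount = 99
        // ...which the unsafe view still reads directly.
        fmt.Println(unsafePB.PendingDeposits[0].Amount) // 1
    }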


@@ -113,6 +113,7 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
    epoch := slots.ToEpoch(b.slot)
    // Electra partial withdrawals functionality.
+   var partialWithdrawalsCount uint64
    if b.version >= version.Electra {
        for _, w := range b.pendingPartialWithdrawals {
            if w.WithdrawableEpoch > epoch || len(withdrawals) >= int(params.BeaconConfig().MaxPendingPartialsPerWithdrawalsSweep) {
@@ -139,9 +140,9 @@ func (b *BeaconState) ExpectedWithdrawals() ([]*enginev1.Withdrawal, uint64, err
                })
                withdrawalIndex++
            }
+           partialWithdrawalsCount++
        }
    }
-   partialWithdrawalsCount := uint64(len(withdrawals))
    validatorsLen := b.validatorsLen()
    bound := mathutil.Min(uint64(validatorsLen), params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
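Note: the counter is now incremented for every pending partial withdrawal the sweep dequeues, instead of being derived from len(withdrawals) afterwards, so an entry that produces no payable withdrawal (for example a validator whose balance was drained to zero) is still counted and later removed from the queue. A simplified, illustrative sketch of that counting rule (types and checks reduced; not the repository's code):

    package main

    import "fmt"

    // pendingPartial is a reduced stand-in for a pending partial withdrawal entry.
    type pendingPartial struct {
        withdrawableEpoch uint64
        payableAmount     uint64 // 0 models a validator that cannot fund the withdrawal
    }

    // sweep returns the payable withdrawals plus how many queue entries were processed.
    func sweep(queue []pendingPartial, epoch uint64, maxPerSweep int) ([]uint64, uint64) {
        var withdrawals []uint64
        var processed uint64
        for _, w := range queue {
            if w.withdrawableEpoch > epoch || len(withdrawals) >= maxPerSweep {
                break
            }
            if w.payableAmount > 0 {
                withdrawals = append(withdrawals, w.payableAmount)
            }
            // Count every dequeued entry, not len(withdrawals): entries that yield no
            // withdrawal must still be dropped from the pending queue later.
            processed++
        }
        return withdrawals, processed
    }

    func main() {
        q := []pendingPartial{{0, 0}, {0, 5}}
        w, n := sweep(q, 1, 16)
        fmt.Println(len(w), n) // 1 2
    }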


@@ -343,10 +343,28 @@ func TestExpectedWithdrawals(t *testing.T) {
        require.NoError(t, pb.UnmarshalSSZ(serializedSSZ))
        s, err := state_native.InitializeFromProtoElectra(pb)
        require.NoError(t, err)
        t.Log(s.NumPendingPartialWithdrawals())
        expected, partialWithdrawalsCount, err := s.ExpectedWithdrawals()
        require.NoError(t, err)
        require.Equal(t, 8, len(expected))
        require.Equal(t, uint64(8), partialWithdrawalsCount)
    })
+   t.Run("electra some pending partial withdrawals", func(t *testing.T) {
+       // Load a serialized Electra state from disk.
+       // This spectest has a fully hydrated beacon state with partial pending withdrawals.
+       serializedBytes, err := util.BazelFileBytes("tests/mainnet/electra/operations/withdrawal_request/pyspec_tests/pending_withdrawals_consume_all_excess_balance/pre.ssz_snappy")
+       require.NoError(t, err)
+       serializedSSZ, err := snappy.Decode(nil /* dst */, serializedBytes)
+       require.NoError(t, err)
+       pb := &ethpb.BeaconStateElectra{}
+       require.NoError(t, pb.UnmarshalSSZ(serializedSSZ))
+       s, err := state_native.InitializeFromProtoElectra(pb)
+       require.NoError(t, err)
+       p, err := s.PendingPartialWithdrawals()
+       require.NoError(t, err)
+       require.NoError(t, s.UpdateBalancesAtIndex(p[0].Index, 0)) // This should still count as partial withdrawal.
+       _, partialWithdrawalsCount, err := s.ExpectedWithdrawals()
+       require.NoError(t, err)
+       require.Equal(t, uint64(10), partialWithdrawalsCount)
+   })
}


@@ -296,12 +296,12 @@ func ComputeFieldRootsWithHasher(ctx context.Context, state *BeaconState) ([][]b
    eceRoot := ssz.Uint64Root(uint64(state.earliestConsolidationEpoch))
    fieldRoots[types.EarliestConsolidationEpoch.RealPosition()] = eceRoot[:]
-   // PendingBalanceDeposits root.
-   pbdRoot, err := stateutil.PendingBalanceDepositsRoot(state.pendingBalanceDeposits)
+   // PendingDeposits root.
+   pbdRoot, err := stateutil.PendingDepositsRoot(state.pendingDeposits)
    if err != nil {
        return nil, errors.Wrap(err, "could not compute pending balance deposits merkleization")
    }
-   fieldRoots[types.PendingBalanceDeposits.RealPosition()] = pbdRoot[:]
+   fieldRoots[types.PendingDeposits.RealPosition()] = pbdRoot[:]
    // PendingPartialWithdrawals root.
    ppwRoot, err := stateutil.PendingPartialWithdrawalsRoot(state.pendingPartialWithdrawals)
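Note: only the helper and field names change here; the list is still merkleized on its own and stored at the field's position in fieldRoots. Isolated sketch of the renamed helper (stateutil import path assumed):

    package sketch

    import (
        "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stateutil" // path assumed
        ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"   // path assumed
    )

    // pendingDepositsLeaf merkleizes the PendingDeposit list the same way
    // ComputeFieldRootsWithHasher does above before placing the result at
    // types.PendingDeposits.RealPosition().
    func pendingDepositsLeaf(deposits []*ethpb.PendingDeposit) ([32]byte, error) {
        return stateutil.PendingDepositsRoot(deposits)
    }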


@@ -8,43 +8,43 @@ import (
    "github.com/prysmaticlabs/prysm/v5/runtime/version"
)
-// AppendPendingBalanceDeposit is a mutating call to the beacon state to create and append a pending
+// AppendPendingDeposit is a mutating call to the beacon state to create and append a pending
// balance deposit object on to the state. This method requires access to the Lock on the state and
// only applies in electra or later.
-func (b *BeaconState) AppendPendingBalanceDeposit(index primitives.ValidatorIndex, amount uint64) error {
+func (b *BeaconState) AppendPendingDeposit(pd *ethpb.PendingDeposit) error {
    if b.version < version.Electra {
-       return errNotSupported("AppendPendingBalanceDeposit", b.version)
+       return errNotSupported("AppendPendingDeposit", b.version)
    }
    b.lock.Lock()
    defer b.lock.Unlock()
-   b.sharedFieldReferences[types.PendingBalanceDeposits].MinusRef()
-   b.sharedFieldReferences[types.PendingBalanceDeposits] = stateutil.NewRef(1)
+   b.sharedFieldReferences[types.PendingDeposits].MinusRef()
+   b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
-   b.pendingBalanceDeposits = append(b.pendingBalanceDeposits, &ethpb.PendingBalanceDeposit{Index: index, Amount: amount})
+   b.pendingDeposits = append(b.pendingDeposits, pd)
-   b.markFieldAsDirty(types.PendingBalanceDeposits)
-   b.rebuildTrie[types.PendingBalanceDeposits] = true
+   b.markFieldAsDirty(types.PendingDeposits)
+   b.rebuildTrie[types.PendingDeposits] = true
    return nil
}
-// SetPendingBalanceDeposits is a mutating call to the beacon state which replaces the pending
+// SetPendingDeposits is a mutating call to the beacon state which replaces the pending
// balance deposit slice with the provided value. This method requires access to the Lock on the
// state and only applies in electra or later.
-func (b *BeaconState) SetPendingBalanceDeposits(val []*ethpb.PendingBalanceDeposit) error {
+func (b *BeaconState) SetPendingDeposits(val []*ethpb.PendingDeposit) error {
    if b.version < version.Electra {
-       return errNotSupported("SetPendingBalanceDeposits", b.version)
+       return errNotSupported("SetPendingDeposits", b.version)
    }
    b.lock.Lock()
    defer b.lock.Unlock()
-   b.sharedFieldReferences[types.PendingBalanceDeposits].MinusRef()
-   b.sharedFieldReferences[types.PendingBalanceDeposits] = stateutil.NewRef(1)
+   b.sharedFieldReferences[types.PendingDeposits].MinusRef()
+   b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
-   b.pendingBalanceDeposits = val
+   b.pendingDeposits = val
-   b.markFieldAsDirty(types.PendingBalanceDeposits)
-   b.rebuildTrie[types.PendingBalanceDeposits] = true
+   b.markFieldAsDirty(types.PendingDeposits)
+   b.rebuildTrie[types.PendingDeposits] = true
    return nil
}
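Note: AppendPendingDeposit now takes the full PendingDeposit message (pubkey, withdrawal credentials, amount, signature, slot) rather than an (index, amount) pair, and both setters drop the shared field reference before mutating so previously made copies are unaffected. A minimal usage sketch (import paths assumed):

    package main

    import (
        "fmt"

        state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native" // path assumed
        eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"                      // path assumed
    )

    func main() {
        st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
        if err != nil {
            panic(err)
        }

        // The new signature carries the whole deposit, not an (index, amount) pair.
        if err := st.AppendPendingDeposit(&eth.PendingDeposit{
            PublicKey:             make([]byte, 48),
            WithdrawalCredentials: make([]byte, 32),
            Amount:                32_000_000_000,
            Signature:             make([]byte, 96),
            Slot:                  1,
        }); err != nil {
            panic(err)
        }

        deposits, err := st.PendingDeposits()
        if err != nil {
            panic(err)
        }
        fmt.Println(len(deposits)) // 1
    }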


@@ -9,40 +9,52 @@ import (
    "github.com/prysmaticlabs/prysm/v5/testing/require"
)
-func TestAppendPendingBalanceDeposit(t *testing.T) {
+func TestAppendPendingDeposit(t *testing.T) {
    s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
    require.NoError(t, err)
-   pbd, err := s.PendingBalanceDeposits()
+   pbd, err := s.PendingDeposits()
    require.NoError(t, err)
    require.Equal(t, 0, len(pbd))
-   require.NoError(t, s.AppendPendingBalanceDeposit(1, 10))
-   pbd, err = s.PendingBalanceDeposits()
+   creds := []byte{0xFA, 0xCC}
+   pubkey := []byte{0xAA, 0xBB}
+   sig := []byte{0xCC, 0xDD}
+   require.NoError(t, s.AppendPendingDeposit(&eth.PendingDeposit{
+       PublicKey: pubkey,
+       WithdrawalCredentials: creds,
+       Amount: 10,
+       Signature: sig,
+       Slot: 1,
+   }))
+   pbd, err = s.PendingDeposits()
    require.NoError(t, err)
    require.Equal(t, 1, len(pbd))
-   require.Equal(t, primitives.ValidatorIndex(1), pbd[0].Index)
+   require.DeepEqual(t, pubkey, pbd[0].PublicKey)
    require.Equal(t, uint64(10), pbd[0].Amount)
+   require.DeepEqual(t, creds, pbd[0].WithdrawalCredentials)
+   require.Equal(t, primitives.Slot(1), pbd[0].Slot)
+   require.DeepEqual(t, sig, pbd[0].Signature)
    // Fails for versions older than electra
    s, err = state_native.InitializeFromProtoDeneb(&eth.BeaconStateDeneb{})
    require.NoError(t, err)
-   require.ErrorContains(t, "not supported", s.AppendPendingBalanceDeposit(1, 1))
+   require.ErrorContains(t, "not supported", s.AppendPendingDeposit(&eth.PendingDeposit{}))
}
-func TestSetPendingBalanceDeposits(t *testing.T) {
+func TestSetPendingDeposits(t *testing.T) {
    s, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
    require.NoError(t, err)
-   pbd, err := s.PendingBalanceDeposits()
+   pbd, err := s.PendingDeposits()
    require.NoError(t, err)
    require.Equal(t, 0, len(pbd))
-   require.NoError(t, s.SetPendingBalanceDeposits([]*eth.PendingBalanceDeposit{{}, {}, {}}))
-   pbd, err = s.PendingBalanceDeposits()
+   require.NoError(t, s.SetPendingDeposits([]*eth.PendingDeposit{{}, {}, {}}))
+   pbd, err = s.PendingDeposits()
    require.NoError(t, err)
    require.Equal(t, 3, len(pbd))
    // Fails for versions older than electra
    s, err = state_native.InitializeFromProtoDeneb(&eth.BeaconStateDeneb{})
    require.NoError(t, err)
-   require.ErrorContains(t, "not supported", s.SetPendingBalanceDeposits([]*eth.PendingBalanceDeposit{{}, {}, {}}))
+   require.ErrorContains(t, "not supported", s.SetPendingDeposits([]*eth.PendingDeposit{{}, {}, {}}))
}
func TestSetDepositBalanceToConsume(t *testing.T) {


@@ -106,7 +106,7 @@ var electraFields = append(
    types.EarliestExitEpoch,
    types.ConsolidationBalanceToConsume,
    types.EarliestConsolidationEpoch,
-   types.PendingBalanceDeposits,
+   types.PendingDeposits,
    types.PendingPartialWithdrawals,
    types.PendingConsolidations,
)
@@ -755,7 +755,7 @@ func InitializeFromProtoUnsafeElectra(st *ethpb.BeaconStateElectra) (state.Beaco
        earliestExitEpoch: st.EarliestExitEpoch,
        consolidationBalanceToConsume: st.ConsolidationBalanceToConsume,
        earliestConsolidationEpoch: st.EarliestConsolidationEpoch,
-       pendingBalanceDeposits: st.PendingBalanceDeposits,
+       pendingDeposits: st.PendingDeposits,
        pendingPartialWithdrawals: st.PendingPartialWithdrawals,
        pendingConsolidations: st.PendingConsolidations,
@@ -820,7 +820,7 @@ func InitializeFromProtoUnsafeElectra(st *ethpb.BeaconStateElectra) (state.Beaco
    b.sharedFieldReferences[types.CurrentEpochParticipationBits] = stateutil.NewRef(1)
    b.sharedFieldReferences[types.LatestExecutionPayloadHeaderDeneb] = stateutil.NewRef(1) // New in Electra.
    b.sharedFieldReferences[types.HistoricalSummaries] = stateutil.NewRef(1) // New in Capella.
-   b.sharedFieldReferences[types.PendingBalanceDeposits] = stateutil.NewRef(1) // New in Electra.
+   b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1) // New in Electra.
    b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1) // New in Electra.
    b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1) // New in Electra.
    if !features.Get().EnableExperimentalState {
@@ -898,7 +898,7 @@ func (b *BeaconState) Copy() state.BeaconState {
        currentEpochParticipation: b.currentEpochParticipation,
        inactivityScores: b.inactivityScores,
        inactivityScoresMultiValue: b.inactivityScoresMultiValue,
-       pendingBalanceDeposits: b.pendingBalanceDeposits,
+       pendingDeposits: b.pendingDeposits,
        pendingPartialWithdrawals: b.pendingPartialWithdrawals,
        pendingConsolidations: b.pendingConsolidations,
@@ -1301,8 +1301,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
        return ssz.Uint64Root(uint64(b.consolidationBalanceToConsume)), nil
    case types.EarliestConsolidationEpoch:
        return ssz.Uint64Root(uint64(b.earliestConsolidationEpoch)), nil
-   case types.PendingBalanceDeposits:
-       return stateutil.PendingBalanceDepositsRoot(b.pendingBalanceDeposits)
+   case types.PendingDeposits:
+       return stateutil.PendingDepositsRoot(b.pendingDeposits)
    case types.PendingPartialWithdrawals:
        return stateutil.PendingPartialWithdrawalsRoot(b.pendingPartialWithdrawals)
    case types.PendingConsolidations:
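Note: as before, Copy shares the pendingDeposits slice through sharedFieldReferences, and the setters above reset that reference before mutating, so copies stay isolated. A small sketch of that behaviour (import paths assumed):

    package main

    import (
        "fmt"

        state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native" // path assumed
        eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"                      // path assumed
    )

    func main() {
        st, err := state_native.InitializeFromProtoElectra(&eth.BeaconStateElectra{})
        if err != nil {
            panic(err)
        }

        cp := st.Copy() // shares pendingDeposits via sharedFieldReferences

        // AppendPendingDeposit detaches the shared reference before appending,
        // so the copy keeps seeing the old (empty) list.
        if err := st.AppendPendingDeposit(&eth.PendingDeposit{Amount: 1, Slot: 1}); err != nil {
            panic(err)
        }

        a, _ := st.PendingDeposits()
        b, _ := cp.PendingDeposits()
        fmt.Println(len(a), len(b)) // 1 0
    }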


@@ -106,8 +106,8 @@ func (f FieldIndex) String() string {
        return "consolidationBalanceToConsume"
    case EarliestConsolidationEpoch:
        return "earliestConsolidationEpoch"
-   case PendingBalanceDeposits:
-       return "pendingBalanceDeposits"
+   case PendingDeposits:
+       return "pendingDeposits"
    case PendingPartialWithdrawals:
        return "pendingPartialWithdrawals"
    case PendingConsolidations:
@@ -189,7 +189,7 @@ func (f FieldIndex) RealPosition() int {
        return 32
    case EarliestConsolidationEpoch:
        return 33
-   case PendingBalanceDeposits:
+   case PendingDeposits:
        return 34
    case PendingPartialWithdrawals:
        return 35
@@ -256,7 +256,7 @@ const (
    EarliestExitEpoch // Electra: EIP-7251
    ConsolidationBalanceToConsume // Electra: EIP-7251
    EarliestConsolidationEpoch // Electra: EIP-7251
-   PendingBalanceDeposits // Electra: EIP-7251
+   PendingDeposits // Electra: EIP-7251
    PendingPartialWithdrawals // Electra: EIP-7251
    PendingConsolidations // Electra: EIP-7251
)
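Note: the constant keeps its place in the enum and RealPosition still maps it to slot 34, so only the identifier and its String() form change, not where the field sits in the state layout. Tiny sketch (package path assumed):

    package main

    import (
        "fmt"

        nativetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native/types" // path assumed
    )

    func main() {
        f := nativetypes.PendingDeposits
        fmt.Println(f.String(), f.RealPosition()) // pendingDeposits 34
    }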

Some files were not shown because too many files have changed in this diff.