Compare commits

..

63 Commits

Author SHA1 Message Date
Taranpreet26311
cffe93ddeb Update linter version 2024-08-22 14:37:46 +04:00
Taranpreet26311
2b58b462e2 Updates to latest 2024-08-12 19:23:51 +04:00
Taranpreet26311
292d42e454 Add "intrange" linter to ignore 2024-08-12 18:45:56 +04:00
Taranpreet26311
057af30542 Try to downgrade to even lower lint version 2024-08-12 18:06:21 +04:00
Taranpreet26311
534459e269 Downgrade lint version 2024-08-12 18:00:39 +04:00
Taranpreet26311
b0fa3017b7 Bump up build version 2024-08-12 17:58:06 +04:00
Taranpreet26311
15574a80a5 Update linter version 2024-08-12 17:16:00 +04:00
Taranpreet26311
b91b9f6fa5 Update lint to go 1.22.6 2024-08-12 15:30:14 +04:00
Manu NALEPA
d48f89c166 DO NOT MERGE: Test range feature. 2024-08-10 14:34:16 +02:00
Manu NALEPA
289c44a1f2 PeerDAS: Add MetadataV3 with custody_subnet_count (#14274)
* `sendPingRequest`: Add some comments.

* `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`.

* `pingHandler`: Add comments.

* `sendMetaDataRequest`: Add comments and implement a unique test.

* Gather `SchemaVersion`s in the same `const` definition.

* Define `SchemaVersionV3`.

* `MetaDataV1`: Fix comment.

* Proto: Define `MetaDataV2`.

* `MetaDataV2`: Generate SSZ.

* `newColumnSubnetIDs`: Use shorter lines.

* `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`.

* `RefreshPersistentSubnets`: Refactor tests (no functional change).

* `RefreshPersistentSubnets`: Refactor and add comments (no functional change).

* `RefreshPersistentSubnets`: Compare cache with both ENR & metadata.

* `RefreshPersistentSubnets`: Manage peerDAS.

* `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`.

* `CustodyCountFromRemotePeer`: Retrieve the count from metadata.

Then fall back to the ENR, and finally to the default value (see the sketch after this commit).

* Update beacon-chain/sync/rpc_metadata.go

Co-authored-by: Nishant Das <nishdas93@gmail.com>

* Fix duplicate case.

* Remove version testing.

* `debug.proto`: Stop breaking ordering.

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2024-08-05 09:44:49 +02:00
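
The custody-count lookup order described in this commit (metadata first, then the `csc` ENR entry, then the default value) can be sketched as follows. The `metadata` interface and helper names are illustrative assumptions; only `enr.WithEntry` and `Record.Load` are real go-ethereum calls.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

// metadata is an illustrative stand-in for peer metadata; only MetaDataV3
// (SchemaVersionV3) carries a custody subnet count.
type metadata interface {
	Version() int
	CustodySubnetCount() uint64
}

const schemaVersionV3 = 3

// custodyCount resolves a peer's custody subnet count in the order described
// above: metadata first, then the "csc" ENR entry, then the default value.
func custodyCount(md metadata, rec *enr.Record, defaultCount uint64) uint64 {
	if md != nil && md.Version() >= schemaVersionV3 {
		return md.CustodySubnetCount()
	}
	var csc uint64
	if rec != nil && rec.Load(enr.WithEntry("csc", &csc)) == nil {
		return csc
	}
	return defaultCount
}

func main() {
	fmt.Println(custodyCount(nil, nil, 4)) // no metadata, no ENR: default
}
```
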
Manu NALEPA
c9fa9389ab Fix data columns sampling (#14263)
* Fix the obvious...

* Data columns sampling: Modify logging.

* `waitForChainStart`: Make it thread-safe - only wait once (see the sketch after this commit).

* Sampling: Wait for chain start before running the sampling.

Reason: `newDataColumnSampler1D` needs `s.ctxMap`.
`s.ctxMap` is only set once the chain has started.

Previously, `waitForChainStart` was only called in `s.registerHandlers`, itself called in a goroutine.

==> We had a race condition here: sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not.

* Address Nishant's comments.

* Sampling: Improve logging.

* `waitForChainStart`: Remove `chainIsStarted` check.
2024-07-29 14:27:09 +02:00
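
A minimal sketch of the "thread-safe, wait only once" behaviour described in this commit, using `sync.Once`; the service fields and chain-start channel are assumptions made for the example, not Prysm's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// service is an illustrative stand-in for the sync service: several goroutines
// (gossip handlers, the sampler) may need the chain to have started, but the
// wait itself should only happen once.
type service struct {
	chainStarted chan struct{} // closed when the chain starts
	waitOnce     sync.Once
}

// waitForChainStart blocks until the chain has started. Wrapping the wait in
// sync.Once makes it safe to call from multiple goroutines concurrently and
// guarantees the underlying wait runs a single time; later callers return
// immediately once the first call has completed.
func (s *service) waitForChainStart() {
	s.waitOnce.Do(func() {
		<-s.chainStarted
	})
}

func main() {
	s := &service{chainStarted: make(chan struct{})}
	close(s.chainStarted) // simulate the chain-start event

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.waitForChainStart() }()
	}
	wg.Wait()
	fmt.Println("all callers observed chain start")
}
```
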
Manu NALEPA
940782f323 PeerDAS: Fix initial sync (#14208)
* `SendDataColumnsByRangeRequest`: Add some new fields in logs.

* `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`.

* Implement `fetchDataColumnsFromPeers`.

* `fetchBlobsFromPeer`: Return only one error.
2024-07-29 13:22:38 +02:00
Manu NALEPA
8c684c1919 Make deepsource happy (#14237)
* DeepSource: Pass heavy objects by pointers.

* `removeBlockFromQueue`: Remove redundant error checking.

* `fetchBlobsFromPeer`: Use same variable for `append`.

* Remove unused arguments.

* Combine types.

* `Persist`: Add documentation.

* Remove unused receiver

* Remove duplicated import.

* Stop using both pointer and value receiver at the same time.

* `verifyAndPopulateColumns`: Remove unused parameter

* Stop using an empty slice literal to declare a variable.
2024-07-29 13:22:38 +02:00
Manu NALEPA
8f431c1a79 PeerDAS: Run reconstruction in parallel. (#14236)
* PeerDAS: Run reconstruction in parallel.

* `isDataAvailableDataColumns` --> `isDataColumnsAvailable`

* `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received (see the sketch after this commit).

* Make deepsource happy.
2024-07-29 13:22:38 +02:00
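
The early-return rule mentioned above follows from the 1D erasure coding: any half of the columns is enough to reconstruct the rest, so availability can be declared once half are stored. A sketch, with the column count taken from the PeerDAS spec and the storage summary replaced by a plain map:

```go
package main

import "fmt"

// numberOfColumns is the total data column count per block (128 in the PeerDAS spec).
const numberOfColumns = 128

// isDataColumnsAvailable returns nil as soon as at least half of the columns
// are stored, since the missing half can be reconstructed from them.
// stored is an illustrative set of column indices already on disk.
func isDataColumnsAvailable(stored map[uint64]bool) error {
	if uint64(len(stored)) >= numberOfColumns/2 {
		return nil
	}
	return fmt.Errorf("only %d of %d columns available", len(stored), numberOfColumns)
}

func main() {
	stored := make(map[uint64]bool)
	for i := uint64(0); i < 64; i++ {
		stored[i] = true
	}
	fmt.Println(isDataColumnsAvailable(stored)) // <nil>
}
```
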
Justin Traglia
c3d7069d0d Update ckzg4844 to latest version of das branch (#14223)
* Update ckzg4844 to latest version

* Run go mod tidy

* Remove unnecessary tests & run goimports

* Remove fieldparams from blockchain/kzg

* Add back blank line

* Avoid large copies

* Run gazelle

* Use trusted setup from the specs & fix issue with struct

* Run goimports

* Fix mistake in makeCellsAndProofs

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:22:38 +02:00
Nishant Das
006e8db8bb Add Current Changes (#14231) 2024-07-29 13:22:38 +02:00
Manu NALEPA
5e4f25f25b Implement and use filterPeerForDataColumnsSubnet. (#14230) 2024-07-29 13:22:38 +02:00
Francis Li
52cdd9a538 [PeerDAS] Parallelize data column sampling (#14105)
* PeerDAS: parallelizing sample queries

* PeerDAS: select sample from non custodied columns

* Finish rebase

* Add more test cases
2024-07-29 13:22:38 +02:00
kevaundray
6064e2b5c7 chore!: Use RecoverCellsAndKZGProofs instead of RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs (#14183)
* use recoverCellsAndKZGProofs

* make recoverAllCells and CellsToBlob private

* chore: all methods now return CellsAndProof struct

* chore: update code
2024-07-29 13:22:38 +02:00
Nishant Das
1fa7903c17 Trigger PeerDAS At Deneb For E2E (#14193)
* Trigger At Deneb

* Fix Rate Limits
2024-07-29 13:22:38 +02:00
Manu NALEPA
467ad45e64 PeerDAS: Add KZG verification when sampling (#14187)
* `validateDataColumn`: Add comments and remove debug computation.

* `sampleDataColumnsFromPeer`: Add KZG verification

* `VerifyKZGInclusionProofColumn`: Add unit test.

* Make deepsource happy.

* Address Nishant's comment.

* Address Nishant's comment.
2024-07-29 13:22:38 +02:00
kevaundray
70a4875c68 chore!: Make Cell be a flat sequence of bytes (#14159)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* make Cells a flattened sequence of bytes (see the sketch after this commit)

* chore: add test for flattening roundtrip

* chore: remove code that was doing the flattening outside of the kzg package

* fix merge

* fix

* remove now un-needed conversion

* use pointers for Cell parameters

* linter

* rename cell conversion methods (this only applies to old version of c-kzg)
2024-07-29 13:22:38 +02:00
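
The flattening round trip tested in this commit amounts to: a cell is a flat byte array, and converting it to a field-element view and back must be lossless. The sizes follow the c-kzg-4844 constants (32-byte field elements, 64 field elements per cell); the helpers themselves are illustrative, not the actual kzg package.

```go
package main

import (
	"bytes"
	"fmt"
)

const (
	bytesPerFieldElement = 32
	fieldElementsPerCell = 64
	bytesPerCell         = bytesPerFieldElement * fieldElementsPerCell // 2048
)

// Cell is a flat sequence of bytes.
type Cell [bytesPerCell]byte

// toFieldElements splits a flat cell into its 32-byte field elements.
func (c *Cell) toFieldElements() [][bytesPerFieldElement]byte {
	out := make([][bytesPerFieldElement]byte, fieldElementsPerCell)
	for i := range out {
		copy(out[i][:], c[i*bytesPerFieldElement:(i+1)*bytesPerFieldElement])
	}
	return out
}

// fromFieldElements flattens field elements back into a cell.
func fromFieldElements(fes [][bytesPerFieldElement]byte) Cell {
	var c Cell
	for i, fe := range fes {
		copy(c[i*bytesPerFieldElement:], fe[:])
	}
	return c
}

func main() {
	var c Cell
	for i := range c {
		c[i] = byte(i)
	}
	round := fromFieldElements(c.toFieldElements())
	fmt.Println("roundtrip lossless:", bytes.Equal(c[:], round[:])) // true
}
```
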
Manu NALEPA
952d2b3b27 Move log from error to debug. (#14194)
Reason: if a peer does not expose its `csc` field in its ENR,
then there is nothing we can do.
2024-07-29 13:22:38 +02:00
Nishant Das
0857060867 Activate PeerDAS with the EIP7594 Fork Epoch (#14184)
* Save All the Current Changes

* Add check for data sampling

* Fix Test

* Gazelle

* Manu's Review

* Fix Test
2024-07-29 13:22:38 +02:00
kevaundray
4b011087d2 chore!: Refactor RecoverBlob to RecoverCellsAndProofs (#14160)
* change recoverBlobs to recoverCellsAndProofs

* modify code to take in the cells and proofs for a particular blob instead of the blob itself

* add CellsAndProofs structure (see the sketch after this commit)

* modify recoverCellsAndProofs to return `cellsAndProofs` structure

* modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure

* bazel run //:gazelle -- fix

* use kzg abstraction for kzg method

* move CellsAndProofs to kzg.go
2024-07-29 13:21:39 +02:00
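
A sketch of the shape of this refactor: rather than chaining RecoverAllCells, CellsToBlob and ComputeCellsAndKZGProofs, callers get a single recovery entry point that returns a CellsAndProofs bundle. The type sizes and interface below are assumptions, not the actual kzg package.

```go
package kzgsketch

// Illustrative sizes following the c-kzg-4844 constants.
type (
	Cell  [2048]byte
	Proof [48]byte
)

// CellsAndProofs bundles the recovered cells of one extended blob together
// with their KZG proofs, so a single call replaces the previous three-step
// flow (RecoverAllCells -> CellsToBlob -> ComputeCellsAndKZGProofs).
type CellsAndProofs struct {
	Cells  []Cell
	Proofs []Proof
}

// Recoverer is the shape of the new entry point: given the indices and values
// of the cells that were actually received (at least half of them), it returns
// every cell and proof for the blob.
type Recoverer interface {
	RecoverCellsAndKZGProofs(cellIndices []uint64, cells []Cell) (CellsAndProofs, error)
}
```
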
kevaundray
c14ee7268d chore: Encapsulate all kzg functionality for PeerDAS into the kzg package (#14136)
* chore: move all ckzg related functionality into kzg package

* refactor code to match

* run: bazel run //:gazelle -- fix

* chore: add some docs and stop copying large objects when converting between types

* fixes

* manually add kzg.go dep to Build.Hazel

* move kzg methods to kzg.go

* chore: add RecoverCellsAndProofs method

* bazel run //:gazelle -- fix

* use BytesPerBlob constant

* chore: fix some deepsource issues

* one declaration for commans and blobs
2024-07-29 13:21:39 +02:00
Manu NALEPA
5c7c3a6c40 PeerDAS: Implement IncrementalDAS (#14109)
* `ConvertPeerIDToNodeID`: Add tests.

* Remove `extractNodeID` and use `ConvertPeerIDToNodeID` instead.

* Implement IncrementalDAS.

* `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`.

* HypergeomCDF: Add test (see the sketch after this commit).

* `GetValidCustodyPeers`: Optimize and add tests.

* Remove blank identifiers.

* Implement `CustodyCountFromRecord`.

* Implement `TestP2P.CustodyCountFromRemotePeer`.

* `NewTestP2P`: Add `swarmt.Option` parameters.

* `incrementalDAS`: Rework and add tests.

* Remove useless warning.
2024-07-29 13:21:39 +02:00
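
The HypergeomCDF helper tested above is the quantity the incremental sampling analysis rests on: with N columns of which K are unavailable and n sampled without replacement, it gives the probability of drawing at most k unavailable columns. Below is a standalone sketch of that computation, not the tested implementation.

```go
package main

import (
	"fmt"
	"math/big"
)

// binom returns C(n, k) as an exact rational number.
func binom(n, k int64) *big.Rat {
	if k < 0 || k > n {
		return new(big.Rat) // zero
	}
	return new(big.Rat).SetInt(new(big.Int).Binomial(n, k))
}

// hypergeomCDF returns P(X <= k) where X counts "bad" items drawn when n items
// are sampled without replacement from a population of size N containing K bad
// items. Incremental DAS uses this to bound the chance that a mostly
// unavailable block slips past a sample.
func hypergeomCDF(k, N, K, n int64) float64 {
	sum := new(big.Rat)
	denom := binom(N, n)
	for i := int64(0); i <= k; i++ {
		term := new(big.Rat).Mul(binom(K, i), binom(N-K, n-i))
		sum.Add(sum, term.Quo(term, denom))
	}
	f, _ := sum.Float64()
	return f
}

func main() {
	// Example: 128 columns, 64 unavailable, 16 sampled; probability that the
	// sample contains zero unavailable columns (sampling fails to notice).
	fmt.Printf("%.6f\n", hypergeomCDF(0, 128, 64, 16))
}
```
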
Francis Li
b30a9d520f PeerDAS: add data column batch config (#14122) 2024-07-29 13:21:39 +02:00
Francis Li
9cc6a7951d PeerDAS: move custody subnet count into helper function (#14117) 2024-07-29 13:21:39 +02:00
Manu NALEPA
73a1130c98 Fix columns sampling (#14118) 2024-07-29 13:21:39 +02:00
Francis Li
c81aaa70c5 [PeerDAS] implement DataColumnSidecarsByRootReq and fix related bugs (#14103)
* [PeerDAS] add data column related protos and fix data column by root bug

* Add more tests
2024-07-29 13:21:39 +02:00
Francis Li
0c80ddb1b6 [PeerDAS] fixes and tests for gossiping out data columns (#14102)
* [PeerDAS] Minor fixes and tests for gossiping out data columns

* Fix metrics
2024-07-29 13:21:39 +02:00
Francis Li
2863dcb9ea [PeerDAS] rework ENR custody_subnet_count and add tests (#14077)
* [PeerDAS] rework ENR custody_subnet_count related code

* update according to proposed spec change

* Run gazelle
2024-07-29 13:21:39 +02:00
Manu NALEPA
ce4b6b5464 PeerDAS: Stop generating new P2P private key at start. (#14099)
* `privKey`: Improve logs.

* peerDAS: Move functions in file. Add documentation.

* PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions.

* PeerDAS: Stop generating new P2P private key at start.

* Address Sammy's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
c93db748ba PeerDAS: Gossip the reconstructed columns (#14079)
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip (see the sketch after this commit).

* Address Nishant's comment.
2024-07-29 13:21:39 +02:00
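
A sketch of the broadcast rule in this commit: after reconstruction, only the columns that never arrived via gossip are published; columns already received from the network are skipped. The map shapes and publish callback are assumptions for illustration.

```go
package main

import "fmt"

// broadcastReconstructed gossips out the reconstructed columns that were not
// seen via gossip; columns that already arrived from the network are skipped.
func broadcastReconstructed(reconstructed map[uint64][]byte, seenViaGossip map[uint64]bool, publish func(uint64, []byte)) {
	for idx, sidecar := range reconstructed {
		if seenViaGossip[idx] {
			continue
		}
		publish(idx, sidecar)
	}
}

func main() {
	reconstructed := map[uint64][]byte{7: {0x01}, 9: {0x02}}
	seen := map[uint64]bool{9: true}
	broadcastReconstructed(reconstructed, seen, func(idx uint64, _ []byte) {
		fmt.Println("broadcasting reconstructed column", idx) // only column 7
	})
}
```
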
Manu NALEPA
ccafc1019c PeerDAS: Only save custodied columns, even after reconstruction. (#14083) 2024-07-29 13:21:39 +02:00
Manu NALEPA
6bf6671725 recoverBlobs: Cover the 0 < blobsCount < fieldparams.MaxBlobsPerBlock case. (#14066)
* `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case.

* Fix Nishant's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
a5b128b983 PeerDAS: Withhold data on purpose. (#14076)
* Introduce hidden flag `data-columns-withhold-count`.

* Address Nishant's comment.
2024-07-29 13:21:39 +02:00
Manu NALEPA
747de1ac95 PeerDAS: Implement / use data column feed from database. (#14062)
* Remove some `_` identifiers.

* Blob storage: Implement a notifier system for data columns (see the sketch after this commit).

* `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`.

* Address Nishant's comment.
2024-07-29 13:19:43 +02:00
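
A sketch of the feed/notifier idea described in this commit: instead of sleeping and re-checking until a data column sidecar lands on disk, the store publishes an event when a column is saved and the by-root RPC handler waits on a subscription. The channel-based shape here is an assumption, not Prysm's event feed implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// columnSaved identifies a stored data column sidecar (illustrative).
type columnSaved struct {
	blockRoot [32]byte
	column    uint64
}

// columnNotifier fans out "column saved" events to subscribers, so an RPC
// handler can wait for a column to be persisted instead of polling the store.
type columnNotifier struct {
	mu   sync.Mutex
	subs []chan columnSaved
}

func (n *columnNotifier) subscribe() <-chan columnSaved {
	ch := make(chan columnSaved, 16)
	n.mu.Lock()
	n.subs = append(n.subs, ch)
	n.mu.Unlock()
	return ch
}

func (n *columnNotifier) notify(ev columnSaved) {
	n.mu.Lock()
	defer n.mu.Unlock()
	for _, ch := range n.subs {
		select {
		case ch <- ev:
		default: // drop rather than block the storage layer
		}
	}
}

func main() {
	n := &columnNotifier{}
	sub := n.subscribe()
	n.notify(columnSaved{column: 7}) // the storage layer just saved column 7
	fmt.Println("column saved:", (<-sub).column)
}
```
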
Manu NALEPA
687e164470 PeerDAS: Implement reconstruction. (#14036)
* Wrap errors, add logs.

* `missingColumnRequest`: Fix blobs <-> data columns mix.

* `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`.

* `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`.

We don't need any of the non read-only methods.

* Fix comments.

* `handleUnblidedBlock` ==> `handleUnblindedBlock`.

* `SaveDataColumn`: Move log from debug to trace.

Previously, if we attempted to save an already existing data column sidecar,
a debug log was printed.

This case can be quite common now that data column reconstruction is enabled.

* `sampling_data_columns.go` --> `data_columns_sampling.go`.

* Reconstruct data columns.
2024-07-29 13:19:43 +02:00
Nishant Das
68b9d9668b Fix Custody Columns (#14021) 2024-07-29 13:19:43 +02:00
Nishant Das
c80c6ef5f4 Disable Evaluators For E2E (#14019)
* Hack E2E

* Fix it For Real

* Gofmt

* Remove
2024-07-29 13:19:43 +02:00
Nishant Das
a279ce6330 Request Data Columns When Fetching Pending Blocks (#14007)
* Support Data Columns For By Root Requests

* Revert Config Changes

* Fix Panic

* Fix Process Block

* Fix Flags

* Lint

* Support Checkpoint Sync

* Manu's Review

* Add Support For Columns in Remaining Methods

* Unmarshal Uncorrectly
2024-07-29 13:19:43 +02:00
Manu NALEPA
e142ece77c Fix CustodyColumns to comply with alpha-2 spectests. (#14008)
* Adding error wrapping

* Fix `CustodyColumnSubnets` tests.
2024-07-29 13:19:43 +02:00
Manu NALEPA
7f6a5f44f3 Fix beacon chain config. (#14017) 2024-07-29 13:19:43 +02:00
Nishant Das
a280c6b671 Set Custody Count Correctly (#14004)
* Set Custody Count Correctly

* Fix Discovery Count
2024-07-29 13:19:43 +02:00
Manu NALEPA
5371299b32 Sample from peers some data columns. (#13980)
* PeerDAS: Implement sampling.

* `TestNewRateLimiter`: Fix with the new number of expected registered topics.
2024-07-29 13:19:43 +02:00
Nishant Das
a23bb9f0bd Implement Data Columns By Range Request And Response Methods (#13972)
* Add Data Structure for New Request Type

* Add Data Column By Range Handler

* Add Data Column Request Methods

* Add new validation for columns by range requests

* Fix Build

* Allow Prysm Node To Fetch Data Columns

* Allow Prysm Node To Fetch Data Columns And Sync

* Bug Fixes For Interop

* GoFmt

* Use different var

* Manu's Review
2024-07-29 13:19:43 +02:00
Nishant Das
bca3806794 Enable E2E For PeerDAS (#13945)
* Enable E2E And Add Fixes

* Register Same Topic For Data Columns

* Initialize Capacity Of Slice

* Fix Initialization of Data Column Receiver

* Remove Mix In From Merkle Proof

* E2E: Subscribe to all subnets.

* Remove Index Check

* Remaining Bug Fixes to Get It Working

* Change Evaluator to Allow Test to Finish

* Fix Build

* Add Data Column Verification

* Fix LoopVar Bug

* Do Not Allocate Memory

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/core/peerdas/helpers.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Gofmt

* Fix It Again

* Fix Test Setup

* Fix Build

* Fix Trusted Setup panic

* Fix Trusted Setup panic

* Use New Test

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:43 +02:00
Justin Traglia
a14b1f11a4 [PeerDAS] Upgrade c-kzg-4844 package (#13967)
* Upgrade c-kzg-4844 package

* Upgrade bazel deps
2024-07-29 13:19:42 +02:00
Manu NALEPA
6850904568 SendDataColumnSidecarByRoot: Return RODataColumn instead of ROBlob. (#13957)
* `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`.

* Make deepsource happier.
2024-07-29 13:19:42 +02:00
Manu NALEPA
a3887807bc Spectests (#13940)
* Update `consensus_spec_version` to `v1.5.0-alpha.1`.

* `CustodyColumns`: Fix and implement spec tests.

* Make deepsource happy.

* `^uint64(0)` => `math.MaxUint64`.

* Fix `TestLoadConfigFile` test.
2024-07-29 13:19:42 +02:00
Nishant Das
0914df3bbd Add DA Check For Data Columns (#13938)
* Add new DA check

* Exit early in the event no commitments exist.

* Gazelle

* Fix Mock Broadcaster

* Fix Test Setup

* Update beacon-chain/blockchain/process_block.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Fix Build

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:42 +02:00
Manu NALEPA
e501279798 Implement peer DAS proposer RPC (#13922)
* Remove capital letter from error messages.

* `[4]byte` => `[fieldparams.VersionLength]byte`.

* Prometheus: Remove extra `committee`.

They are probably due to a bad copy/paste.

Note: The name of the probe itself is remaining,
to ensure backward compatibility.

* Implement Proposer RPC for data columns.

* Fix TestProposer_ProposeBlock_OK test.

* Remove default peerDAS activation.

* `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
2024-07-29 13:19:42 +02:00
Nishant Das
6d61e1ea2a Update .bazelrc (#13931) 2024-07-29 13:19:42 +02:00
Manu NALEPA
4d45fc0e87 Implement custody_subnet_count ENR field. (#13915)
https://github.com/ethereum/consensus-specs/blob/dev/specs/_features/eip7594/p2p-interface.md#the-discovery-domain-discv5
2024-07-29 13:19:42 +02:00
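
A sketch of writing and reading that field with go-ethereum's enr package; the `csc` key name follows the linked EIP-7594 p2p spec, while the record handling around it is illustrative.

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

func main() {
	// Advertise the number of custody subnets in the node's ENR under the
	// "csc" key, as described by the EIP-7594 discovery spec.
	custodySubnetCount := uint64(4)

	var record enr.Record
	record.Set(enr.WithEntry("csc", custodySubnetCount))

	// Peers read the same key back when deciding which columns we custody.
	var csc uint64
	if err := record.Load(enr.WithEntry("csc", &csc)); err != nil {
		panic(err)
	}
	fmt.Println("custody subnet count from ENR:", csc)
}
```
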
Manu NALEPA
23d7f87cef Peer das core (#13877)
* Bump `c-kzg-4844` lib to the `das` branch.

* Implement `MerkleProofKZGCommitments`.

* Implement `das-core.md`.

* Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`.

* `CustodyColumnSubnets`: Include `i` in the for loop.

* Remove `computeSubscribedColumnSubnet`.

* Move `peerdas.CustodyColumns` out of the for loop.
2024-07-29 13:19:42 +02:00
Nishant Das
a4db00d7ef Add Request And Response RPC Methods For Data Columns (#13909)
* Add RPC Handler

* Add Column Requests

* Update beacon-chain/db/filesystem/blob.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Update beacon-chain/p2p/rpc_topic_mappings.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Manu's Review

* Manu's Review

* Interface Fixes

* mock manager

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-07-29 13:19:42 +02:00
Nishant Das
b6a4e1213c Add Data Column Gossip Handlers (#13894)
* Add Data Column Subscriber

* Add Data Column Validator

* Wire all Handlers In

* Fix Build

* Fix Test

* Fix IP in Test

* Fix IP in Test
2024-07-29 13:19:42 +02:00
Nishant Das
86088d9c85 Add Support For Discovery Of Column Subnets (#13883)
* Add Support For Discovery Of Column Subnets

* Lint for SubnetsPerNode

* Manu's Review

* Change to a better name
2024-07-29 13:19:42 +02:00
Nishant Das
e10e610589 add in networking params (#13866) 2024-07-29 13:19:42 +02:00
Nishant Das
e16a25b6cc add it (#13865) 2024-07-29 13:19:42 +02:00
Manu NALEPA
f129dfdee6 Add in column sidecars protos (#13862) 2024-07-29 13:18:55 +02:00
761 changed files with 36937 additions and 25434 deletions


@@ -22,6 +22,7 @@ coverage --define=coverage_enabled=1
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
build --compilation_mode=opt
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true


@@ -10,7 +10,6 @@
in review.
4. Note that PRs updating dependencies and new Go versions are not accepted.
Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->
**What type of PR is this?**
@@ -29,9 +28,3 @@
Fixes #
**Other notes for review**
**Acknowledgements**
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have made an appropriate entry to [CHANGELOG.md](https://github.com/prysmaticlabs/prysm/blob/develop/CHANGELOG.md).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.


@@ -1,33 +0,0 @@
name: CI
on:
pull_request:
branches:
- develop
jobs:
changed_files:
runs-on: ubuntu-latest
name: Check CHANGELOG.md
steps:
- uses: actions/checkout@v4
- name: changelog modified
id: changelog-modified
uses: tj-actions/changed-files@v45
with:
files: CHANGELOG.md
- name: List all changed files
env:
ALL_CHANGED_FILES: ${{ steps.changelog-modified.outputs.all_changed_files }}
run: |
if [[ ${ALL_CHANGED_FILES[*]} =~ (^|[[:space:]])"CHANGELOG.md"($|[[:space:]]) ]];
then
echo "CHANGELOG.md was modified.";
exit 0;
else
echo "CHANGELOG.md was not modified.";
echo "Please see CHANGELOG.md and follow the instructions to add your changes to that file."
echo "In some rare scenarios, a changelog entry is not required and this CI check can be ignored."
exit 1;
fi


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Go mod tidy checker
id: gomodtidy
@@ -27,11 +27,11 @@ jobs:
GO111MODULE: on
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Set up Go 1.22
uses: actions/setup-go@v4
uses: actions/setup-go@v3
with:
go-version: '1.22.6'
go-version: '1.22.3'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
@@ -43,18 +43,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Set up Go 1.22
uses: actions/setup-go@v4
uses: actions/setup-go@v3
with:
go-version: '1.22.6'
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
uses: golangci/golangci-lint-action@v3
with:
version: v1.55.2
version: v1.60.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
@@ -62,13 +62,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.x
uses: actions/setup-go@v4
uses: actions/setup-go@v2
with:
go-version: '1.22.6'
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@v4
uses: actions/checkout@v2
- name: Get dependencies
run: |


@@ -26,6 +26,7 @@ approval_rules:
only_changed_files:
paths:
- "*pb.go"
- "*pb.gw.go"
- "*.bazel"
options:
ignore_commits_by:
@@ -68,6 +69,7 @@ approval_rules:
changed_files:
ignore:
- "*pb.go"
- "*pb.gw.go"
- "*.bazel"
options:
ignore_commits_by:


@@ -55,6 +55,13 @@ alias(
visibility = ["//visibility:public"],
)
# Protobuf gRPC gateway compiler
alias(
name = "grpc_gateway_proto_compiler",
actual = "@com_github_grpc_ecosystem_grpc_gateway_v2//protoc-gen-grpc-gateway:go_gen_grpc_gateway",
visibility = ["//visibility:public"],
)
gometalinter(
name = "gometalinter",
config = "//:.gometalinter.json",

File diff suppressed because it is too large.


@@ -6,9 +6,6 @@ Excited by our work and want to get involved in building out our sharding releas
You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ) drop us a line there if you want to get more involved or have any questions on our implementation!
> [!IMPORTANT]
> Please, **do not send pull requests for trivial changes**, such as typos, these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.
## Contribution Steps
**1. Set up Prysm following the instructions in README.md.**
@@ -123,19 +120,15 @@ $ git push myrepo feature-in-progress-branch
Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
**16. Add an entry to CHANGELOG.md.**
**16. Create a pull request.**
If your change is user facing, you must include a CHANGELOG.md entry. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls.
**17. Create a pull request.**
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
**18. Respond to comments by Core Contributors.**
**17. Respond to comments by Core Contributors.**
Core Contributors may ask questions and request that you make edits. If you set notifications at the top of the page to “not watching,” you will still be notified by email whenever someone comments on the page of a pull request you have created. If you are asked to modify your pull request, repeat steps 8 through 15, then leave a comment to notify the Core Contributors that the pull request is ready for further review.
**19. If the number of commits becomes excessive, you may be asked to squash your commits.**
**18. If the number of commits becomes excessive, you may be asked to squash your commits.**
You can do this with an interactive rebase. Start by running the following command to determine the commit that is the base of your branch...
@@ -143,7 +136,7 @@ Core Contributors may ask questions and request that you make edits. If you set
$ git merge-base feature-in-progress-branch prysm/master
```
**20. The previous command will return a commit-hash that you should use in the following command.**
**19. The previous command will return a commit-hash that you should use in the following command.**
```
$ git rebase -i commit-hash
@@ -167,30 +160,13 @@ squash hash add a feature
Save and close the file, then a commit command will appear in the terminal that squashes the smaller commits into one. Check to be sure the commit message accurately reflects your changes and then hit enter to execute it.
**21. Update your pull request with the following command.**
**20. Update your pull request with the following command.**
```
$ git push myrepo feature-in-progress-branch -f
```
**22. Finally, again leave a comment to the Core Contributors on the pull request to let them know that the pull request has been updated.**
## Maintaining CHANGELOG.md
This project follows the changelog guidelines from [keepachangelog.com](https://keepachangelog.com/en/1.1.0/).
All PRs with user facing changes should have an entry in the CHANGELOG.md file and the change should be categorized in the appropriate category within the "Unreleased" section. The categories are:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be removed features.
- `Removed` for now removed features.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities. Please see the [Security Policy](SECURITY.md) for responsible disclosure before adding a change with this category.
### Releasing
When a new release is made, the "Unreleased" section should be moved to a new section with the release version and the current date. Then a new "Unreleased" section is made at the top of the file with the categories listed above.
**21. Finally, again leave a comment to the Core Contributors on the pull request to let them know that the pull request has been updated.**
## Contributor Responsibilities


@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.6"
consensus_spec_version = "v1.5.0-alpha.3"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-M7u/Ot/Vzorww+dFbHp0cxLyM2mezJjijCzq+LY3uvs=",
integrity = "sha256-+byv+GUOQytex5GgtjBGVoNDseJZbiBdAjEtlgCbjEo=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-deOSeLRsmHXvkRp8n2bs3HXdkGUJWWqu8KFM/QABbZg=",
integrity = "sha256-JJUy/jT1h3kGQkinTuzL7gMOA1+qgmPgJXVrYuH63Cg=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-Zz7YCf6XVf57nzSEGq9ToflJFHM0lAGwhd18l9Rf3hA=",
integrity = "sha256-T2VM4Qd0SwgGnTjWxjOX297DqEsovO9Ueij1UEJy48Y=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-BoXckDxXnDcEmAjg/dQgf/tLiJsb6CT0aZvmWHFijrY=",
integrity = "sha256-OP9BCBcQ7i+93bwj7ktY8pZ5uWsGjgTe4XTp7BDhX+I=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)
@@ -342,22 +342,6 @@ filegroup(
url = "https://github.com/eth-clients/holesky/archive/874c199423ccd180607320c38cbaca05d9a1573a.tar.gz", # 2024-06-18
)
http_archive(
name = "sepolia_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"metadata/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-cY/UgpCcYEhQf7JefD65FI8tn/A+rAvKhcm2/qiVdqY=",
strip_prefix = "sepolia-f2c219a93c4491cee3d90c18f2f8e82aed850eab",
url = "https://github.com/eth-clients/sepolia/archive/f2c219a93c4491cee3d90c18f2f8e82aed850eab.tar.gz", # 2024-09-19
)
http_archive(
name = "com_google_protobuf",
sha256 = "9bd87b8280ef720d3240514f884e56a712f2218f0d693b48050c836028940a42",


@@ -21,7 +21,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
@@ -29,6 +28,7 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)


@@ -19,11 +19,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
const (
@@ -146,7 +146,7 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
u := c.baseURL.ResolveReference(&url.URL{Path: path})
span.SetAttributes(trace.StringAttribute("url", u.String()),
span.AddAttributes(trace.StringAttribute("url", u.String()),
trace.StringAttribute("method", method))
req, err := http.NewRequestWithContext(ctx, method, u.String(), body)
@@ -259,7 +259,7 @@ func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash
func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error {
ctx, span := trace.StartSpan(ctx, "builder.client.RegisterValidator")
defer span.End()
span.SetAttributes(trace.Int64Attribute("num_reqs", int64(len(svr))))
span.AddAttributes(trace.Int64Attribute("num_reqs", int64(len(svr))))
if len(svr) == 0 {
err := errors.Wrap(errMalformedRequest, "empty validator registration list")
@@ -278,11 +278,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
}
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
if err != nil {
return err
}
log.WithField("num_registrations", len(svr)).Info("successfully registered validator(s) on builder")
return nil
return err
}
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")

api/gateway/BUILD.bazel (new file, 43 lines)

@@ -0,0 +1,43 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"gateway.go",
"log.go",
"modifiers.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/gateway",
visibility = [
"//beacon-chain:__subpackages__",
"//validator:__subpackages__",
],
deps = [
"//api/server/middleware:go_default_library",
"//runtime:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//connectivity:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//credentials/insecure:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["gateway_test.go"],
embed = [":go_default_library"],
deps = [
"//cmd/beacon-chain/flags:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_gorilla_mux//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)

api/gateway/gateway.go (new file, 212 lines)

@@ -0,0 +1,212 @@
// Package gateway defines a grpc-gateway server that serves HTTP-JSON traffic and acts a proxy between HTTP and gRPC.
package gateway
import (
"context"
"fmt"
"net"
"net/http"
"time"
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/runtime"
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
)
var _ runtime.Service = (*Gateway)(nil)
// PbMux serves grpc-gateway requests for selected patterns using registered protobuf handlers.
type PbMux struct {
Registrations []PbHandlerRegistration // Protobuf registrations to be registered in Mux.
Patterns []string // URL patterns that will be handled by Mux.
Mux *gwruntime.ServeMux // The router that will be used for grpc-gateway requests.
}
// PbHandlerRegistration is a function that registers a protobuf handler.
type PbHandlerRegistration func(context.Context, *gwruntime.ServeMux, *grpc.ClientConn) error
// MuxHandler is a function that implements the mux handler functionality.
type MuxHandler func(
h http.HandlerFunc,
w http.ResponseWriter,
req *http.Request,
)
// Config parameters for setting up the gateway service.
type config struct {
maxCallRecvMsgSize uint64
remoteCert string
gatewayAddr string
remoteAddr string
allowedOrigins []string
muxHandler MuxHandler
pbHandlers []*PbMux
router *mux.Router
timeout time.Duration
}
// Gateway is the gRPC gateway to serve HTTP JSON traffic as a proxy and forward it to the gRPC server.
type Gateway struct {
cfg *config
conn *grpc.ClientConn
server *http.Server
cancel context.CancelFunc
ctx context.Context
startFailure error
}
// New returns a new instance of the Gateway.
func New(ctx context.Context, opts ...Option) (*Gateway, error) {
g := &Gateway{
ctx: ctx,
cfg: &config{},
}
for _, opt := range opts {
if err := opt(g); err != nil {
return nil, err
}
}
if g.cfg.router == nil {
g.cfg.router = mux.NewRouter()
}
return g, nil
}
// Start the gateway service.
func (g *Gateway) Start() {
ctx, cancel := context.WithCancel(g.ctx)
g.cancel = cancel
conn, err := g.dial(ctx, "tcp", g.cfg.remoteAddr)
if err != nil {
log.WithError(err).Error("Failed to connect to gRPC server")
g.startFailure = err
return
}
g.conn = conn
for _, h := range g.cfg.pbHandlers {
for _, r := range h.Registrations {
if err := r(ctx, h.Mux, g.conn); err != nil {
log.WithError(err).Error("Failed to register handler")
g.startFailure = err
return
}
}
for _, p := range h.Patterns {
g.cfg.router.PathPrefix(p).Handler(h.Mux)
}
}
corsMux := middleware.CorsHandler(g.cfg.allowedOrigins).Middleware(g.cfg.router)
if g.cfg.muxHandler != nil {
g.cfg.router.PathPrefix("/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
g.cfg.muxHandler(corsMux.ServeHTTP, w, r)
})
}
g.server = &http.Server{
Addr: g.cfg.gatewayAddr,
Handler: corsMux,
ReadHeaderTimeout: time.Second,
}
go func() {
log.WithField("address", g.cfg.gatewayAddr).Info("Starting gRPC gateway")
if err := g.server.ListenAndServe(); err != http.ErrServerClosed {
log.WithError(err).Error("Failed to start gRPC gateway")
g.startFailure = err
return
}
}()
}
// Status of grpc gateway. Returns an error if this service is unhealthy.
func (g *Gateway) Status() error {
if g.startFailure != nil {
return g.startFailure
}
if s := g.conn.GetState(); s != connectivity.Ready {
return fmt.Errorf("grpc server is %s", s)
}
return nil
}
// Stop the gateway with a graceful shutdown.
func (g *Gateway) Stop() error {
if g.server != nil {
shutdownCtx, shutdownCancel := context.WithTimeout(g.ctx, 2*time.Second)
defer shutdownCancel()
if err := g.server.Shutdown(shutdownCtx); err != nil {
if errors.Is(err, context.DeadlineExceeded) {
log.Warn("Existing connections terminated")
} else {
log.WithError(err).Error("Failed to gracefully shut down server")
}
}
}
if g.cancel != nil {
g.cancel()
}
return nil
}
// dial the gRPC server.
func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientConn, error) {
switch network {
case "tcp":
return g.dialTCP(ctx, addr)
case "unix":
return g.dialUnix(ctx, addr)
default:
return nil, fmt.Errorf("unsupported network type %q", network)
}
}
// dialTCP creates a client connection via TCP.
// "addr" must be a valid TCP address with a port number.
func (g *Gateway) dialTCP(ctx context.Context, addr string) (*grpc.ClientConn, error) {
var security grpc.DialOption
if len(g.cfg.remoteCert) > 0 {
creds, err := credentials.NewClientTLSFromFile(g.cfg.remoteCert, "")
if err != nil {
return nil, err
}
security = grpc.WithTransportCredentials(creds)
} else {
// Use insecure credentials when there's no remote cert provided.
security = grpc.WithTransportCredentials(insecure.NewCredentials())
}
opts := []grpc.DialOption{
security,
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}
// dialUnix creates a client connection via a unix domain socket.
// "addr" must be a valid path to the socket.
func (g *Gateway) dialUnix(ctx context.Context, addr string) (*grpc.ClientConn, error) {
d := func(addr string, timeout time.Duration) (net.Conn, error) {
return net.DialTimeout("unix", addr, timeout)
}
f := func(ctx context.Context, addr string) (net.Conn, error) {
if deadline, ok := ctx.Deadline(); ok {
return d(addr, time.Until(deadline))
}
return d(addr, 0)
}
opts := []grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithContextDialer(f),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}

api/gateway/gateway_test.go (new file, 107 lines)

@@ -0,0 +1,107 @@
package gateway
import (
"context"
"flag"
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/gorilla/mux"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
func TestGateway_Customized(t *testing.T) {
r := mux.NewRouter()
cert := "cert"
origins := []string{"origin"}
size := uint64(100)
opts := []Option{
WithRouter(r),
WithRemoteCert(cert),
WithAllowedOrigins(origins),
WithMaxCallRecvMsgSize(size),
WithMuxHandler(func(
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,
) {
}),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
assert.Equal(t, r, g.cfg.router)
assert.Equal(t, cert, g.cfg.remoteCert)
require.Equal(t, 1, len(g.cfg.allowedOrigins))
assert.Equal(t, origins[0], g.cfg.allowedOrigins[0])
assert.Equal(t, size, g.cfg.maxCallRecvMsgSize)
}
func TestGateway_StartStop(t *testing.T) {
hook := logTest.NewGlobal()
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
gatewayPort := ctx.Int(flags.GRPCGatewayPort.Name)
gatewayHost := ctx.String(flags.GRPCGatewayHost.Name)
rpcHost := ctx.String(flags.RPCHost.Name)
selfAddress := fmt.Sprintf("%s:%d", rpcHost, ctx.Int(flags.RPCPort.Name))
gatewayAddress := fmt.Sprintf("%s:%d", gatewayHost, gatewayPort)
opts := []Option{
WithGatewayAddr(gatewayAddress),
WithRemoteAddr(selfAddress),
WithMuxHandler(func(
_ http.HandlerFunc,
_ http.ResponseWriter,
_ *http.Request,
) {
}),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting gRPC gateway")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
err = g.Stop()
require.NoError(t, err)
}
func TestGateway_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
gatewayPort := ctx.Int(flags.GRPCGatewayPort.Name)
gatewayHost := ctx.String(flags.GRPCGatewayHost.Name)
rpcHost := ctx.String(flags.RPCHost.Name)
selfAddress := fmt.Sprintf("%s:%d", rpcHost, ctx.Int(flags.RPCPort.Name))
gatewayAddress := fmt.Sprintf("%s:%d", gatewayHost, gatewayPort)
opts := []Option{
WithGatewayAddr(gatewayAddress),
WithRemoteAddr(selfAddress),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}

api/gateway/log.go (new file, 5 lines)

@@ -0,0 +1,5 @@
package gateway
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "gateway")

api/gateway/modifiers.go (new file, 30 lines)

@@ -0,0 +1,30 @@
package gateway
import (
"context"
"net/http"
"strconv"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"google.golang.org/protobuf/proto"
)
func HttpResponseModifier(ctx context.Context, w http.ResponseWriter, _ proto.Message) error {
md, ok := gwruntime.ServerMetadataFromContext(ctx)
if !ok {
return nil
}
// set http status code
if vals := md.HeaderMD.Get("x-http-code"); len(vals) > 0 {
code, err := strconv.Atoi(vals[0])
if err != nil {
return err
}
// delete the headers to not expose any grpc-metadata in http response
delete(md.HeaderMD, "x-http-code")
delete(w.Header(), "Grpc-Metadata-X-Http-Code")
w.WriteHeader(code)
}
return nil
}

api/gateway/options.go (new file, 79 lines)

@@ -0,0 +1,79 @@
package gateway
import (
"time"
"github.com/gorilla/mux"
gwruntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)
type Option func(g *Gateway) error
func WithPbHandlers(handlers []*PbMux) Option {
return func(g *Gateway) error {
g.cfg.pbHandlers = handlers
return nil
}
}
func WithMuxHandler(m MuxHandler) Option {
return func(g *Gateway) error {
g.cfg.muxHandler = m
return nil
}
}
func WithGatewayAddr(addr string) Option {
return func(g *Gateway) error {
g.cfg.gatewayAddr = addr
return nil
}
}
func WithRemoteAddr(addr string) Option {
return func(g *Gateway) error {
g.cfg.remoteAddr = addr
return nil
}
}
// WithRouter allows adding a custom mux router to the gateway.
func WithRouter(r *mux.Router) Option {
return func(g *Gateway) error {
g.cfg.router = r
return nil
}
}
// WithAllowedOrigins allows adding a set of allowed origins to the gateway.
func WithAllowedOrigins(origins []string) Option {
return func(g *Gateway) error {
g.cfg.allowedOrigins = origins
return nil
}
}
// WithRemoteCert allows adding a custom certificate to the gateway,
func WithRemoteCert(cert string) Option {
return func(g *Gateway) error {
g.cfg.remoteCert = cert
return nil
}
}
// WithMaxCallRecvMsgSize allows specifying the maximum allowed gRPC message size.
func WithMaxCallRecvMsgSize(size uint64) Option {
return func(g *Gateway) error {
g.cfg.maxCallRecvMsgSize = size
return nil
}
}
// WithTimeout allows changing the timeout value for API calls.
func WithTimeout(seconds uint64) Option {
return func(g *Gateway) error {
g.cfg.timeout = time.Second * time.Duration(seconds)
gwruntime.DefaultContextTimeout = time.Second * time.Duration(seconds)
return nil
}
}


@@ -22,7 +22,9 @@ go_test(
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_grpc_ecosystem_grpc_gateway_v2//runtime:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
],
)


@@ -2,6 +2,8 @@ package grpc
import (
"context"
"encoding/json"
"fmt"
"strings"
"time"
@@ -79,3 +81,16 @@ func AppendHeaders(parent context.Context, headers []string) context.Context {
}
return parent
}
// AppendCustomErrorHeader sets a CustomErrorMetadataKey gRPC header on the passed in context,
// using the passed in error data as the header's value. The data is serialized as JSON.
func AppendCustomErrorHeader(ctx context.Context, errorData interface{}) error {
j, err := json.Marshal(errorData)
if err != nil {
return fmt.Errorf("could not marshal error data into JSON: %w", err)
}
if err := grpc.SetHeader(ctx, metadata.Pairs(CustomErrorMetadataKey, string(j))); err != nil {
return fmt.Errorf("could not set custom error header: %w", err)
}
return nil
}


@@ -2,11 +2,15 @@ package grpc
import (
"context"
"encoding/json"
"strings"
"testing"
"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
)
@@ -58,3 +62,17 @@ func TestAppendHeaders(t *testing.T) {
assert.Equal(t, "value=1", md.Get("first")[0])
})
}
func TestAppendCustomErrorHeader(t *testing.T) {
stream := &runtime.ServerTransportStream{}
ctx := grpc.NewContextWithServerTransportStream(context.Background(), stream)
data := &customErrorData{Message: "foo"}
require.NoError(t, AppendCustomErrorHeader(ctx, data))
// The stream used in test setup sets the metadata key in lowercase.
value, ok := stream.Header()[strings.ToLower(CustomErrorMetadataKey)]
require.Equal(t, true, ok, "Failed to retrieve custom error metadata value")
expected, err := json.Marshal(data)
require.NoError(t, err)
assert.Equal(t, string(expected), value[0])
}


@@ -1,31 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"options.go",
"server.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/httprest",
visibility = ["//visibility:public"],
deps = [
"//api/server/middleware:go_default_library",
"//runtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["server_test.go"],
embed = [":go_default_library"],
deps = [
"//cmd/beacon-chain/flags:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)


@@ -1,5 +0,0 @@
package httprest
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "httprest")


@@ -1,44 +0,0 @@
package httprest
import (
"time"
"net/http"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
)
// Option is a http rest server functional parameter type.
type Option func(g *Server) error
// WithMiddlewares sets the list of middlewares to be applied on routes.
func WithMiddlewares(mw []middleware.Middleware) Option {
return func(g *Server) error {
g.cfg.middlewares = mw
return nil
}
}
// WithHTTPAddr sets the full address ( host and port ) of the server.
func WithHTTPAddr(addr string) Option {
return func(g *Server) error {
g.cfg.httpAddr = addr
return nil
}
}
// WithRouter sets the internal router of the server, this is required.
func WithRouter(r *http.ServeMux) Option {
return func(g *Server) error {
g.cfg.router = r
return nil
}
}
// WithTimeout allows changing the timeout value for API calls.
func WithTimeout(duration time.Duration) Option {
return func(g *Server) error {
g.cfg.timeout = duration
return nil
}
}


@@ -1,101 +0,0 @@
package httprest
import (
"context"
"net/http"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/runtime"
)
var _ runtime.Service = (*Server)(nil)
// Config parameters for setting up the http-rest service.
type config struct {
httpAddr string
middlewares []middleware.Middleware
router http.Handler
timeout time.Duration
}
// Server serves HTTP traffic.
type Server struct {
cfg *config
server *http.Server
cancel context.CancelFunc
ctx context.Context
startFailure error
}
// New returns a new instance of the Server.
func New(ctx context.Context, opts ...Option) (*Server, error) {
g := &Server{
ctx: ctx,
cfg: &config{},
}
for _, opt := range opts {
if err := opt(g); err != nil {
return nil, err
}
}
if g.cfg.router == nil {
return nil, errors.New("router option not configured")
}
var handler http.Handler
defaultReadHeaderTimeout := time.Second
handler = middleware.MiddlewareChain(g.cfg.router, g.cfg.middlewares)
if g.cfg.timeout > 0*time.Second {
defaultReadHeaderTimeout = g.cfg.timeout
handler = http.TimeoutHandler(handler, g.cfg.timeout, "request timed out")
}
g.server = &http.Server{
Addr: g.cfg.httpAddr,
Handler: handler,
ReadHeaderTimeout: defaultReadHeaderTimeout,
}
return g, nil
}
// Start the http rest service.
func (g *Server) Start() {
g.ctx, g.cancel = context.WithCancel(g.ctx)
go func() {
log.WithField("address", g.cfg.httpAddr).Info("Starting HTTP server")
if err := g.server.ListenAndServe(); err != http.ErrServerClosed {
log.WithError(err).Error("Failed to start HTTP server")
g.startFailure = err
return
}
}()
}
// Status of the HTTP server. Returns an error if this service is unhealthy.
func (g *Server) Status() error {
if g.startFailure != nil {
return g.startFailure
}
return nil
}
// Stop the HTTP server with a graceful shutdown.
func (g *Server) Stop() error {
if g.server != nil {
shutdownCtx, shutdownCancel := context.WithTimeout(g.ctx, 2*time.Second)
defer shutdownCancel()
if err := g.server.Shutdown(shutdownCtx); err != nil {
if errors.Is(err, context.DeadlineExceeded) {
log.Warn("Existing connections terminated")
} else {
log.WithError(err).Error("Failed to gracefully shut down server")
}
}
}
if g.cancel != nil {
g.cancel()
}
return nil
}


@@ -1,71 +0,0 @@
package httprest
import (
"context"
"flag"
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
func TestServer_StartStop(t *testing.T) {
hook := logTest.NewGlobal()
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
port := ctx.Int(flags.HTTPServerPort.Name)
portStr := fmt.Sprintf("%d", port) // Convert port to string
host := ctx.String(flags.HTTPServerHost.Name)
address := net.JoinHostPort(host, portStr)
handler := http.NewServeMux()
opts := []Option{
WithHTTPAddr(address),
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting HTTP server")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
err = g.Stop()
require.NoError(t, err)
}
func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
handler := http.NewServeMux()
port := ctx.Int(flags.HTTPServerPort.Name)
portStr := fmt.Sprintf("%d", port) // Convert port to string
host := ctx.String(flags.HTTPServerHost.Name)
address := net.JoinHostPort(host, portStr)
opts := []Option{
WithHTTPAddr(address),
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}


@@ -8,7 +8,10 @@ go_library(
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/middleware",
visibility = ["//visibility:public"],
deps = ["@com_github_rs_cors//:go_default_library"],
deps = [
"@com_github_gorilla_mux//:go_default_library",
"@com_github_rs_cors//:go_default_library",
],
)
go_test(


@@ -5,11 +5,10 @@ import (
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/rs/cors"
)
type Middleware func(http.Handler) http.Handler
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -22,7 +21,7 @@ func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
}
// CorsHandler sets the cors settings on api endpoints
func CorsHandler(allowOrigins []string) Middleware {
func CorsHandler(allowOrigins []string) mux.MiddlewareFunc {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
@@ -35,7 +34,7 @@ func CorsHandler(allowOrigins []string) Middleware {
}
// ContentTypeHandler checks request for the appropriate media types otherwise returning a http.StatusUnsupportedMediaType error
func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
func ContentTypeHandler(acceptedMediaTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// skip the GET request
@@ -68,7 +67,7 @@ func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
}
// AcceptHeaderHandler checks if the client's response preference is handled
func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
func AcceptHeaderHandler(serverAcceptedTypes []string) mux.MiddlewareFunc {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
acceptHeader := r.Header.Get("Accept")
@@ -111,15 +110,3 @@ func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
})
}
}
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h
}
wrapped := h
for i := len(mw) - 1; i >= 0; i-- {
wrapped = mw[i](wrapped)
}
return wrapped
}


@@ -5,9 +5,7 @@ go_library(
srcs = [
"block.go",
"conversions.go",
"conversions_blob.go",
"conversions_block.go",
"conversions_lightclient.go",
"conversions_state.go",
"endpoints_beacon.go",
"endpoints_blob.go",
@@ -35,9 +33,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",


@@ -317,96 +317,6 @@ type BlindedBeaconBlockBodyDeneb struct {
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type SignedBeaconBlockContentsElectra struct {
SignedBlock *SignedBeaconBlockElectra `json:"signed_block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type BeaconBlockContentsElectra struct {
Block *BeaconBlockElectra `json:"block"`
KzgProofs []string `json:"kzg_proofs"`
Blobs []string `json:"blobs"`
}
type SignedBeaconBlockElectra struct {
Message *BeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBeaconBlockElectra{}
func (s *SignedBeaconBlockElectra) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBeaconBlockElectra) SigString() string {
return s.Signature
}
type BeaconBlockElectra struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BeaconBlockBodyElectra `json:"body"`
}
type BeaconBlockBodyElectra struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadElectra `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type BlindedBeaconBlockElectra struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root"`
StateRoot string `json:"state_root"`
Body *BlindedBeaconBlockBodyElectra `json:"body"`
}
type SignedBlindedBeaconBlockElectra struct {
Message *BlindedBeaconBlockElectra `json:"message"`
Signature string `json:"signature"`
}
var _ SignedMessageJsoner = &SignedBlindedBeaconBlockElectra{}
func (s *SignedBlindedBeaconBlockElectra) MessageRawJson() ([]byte, error) {
return json.Marshal(s.Message)
}
func (s *SignedBlindedBeaconBlockElectra) SigString() string {
return s.Signature
}
type BlindedBeaconBlockBodyElectra struct {
RandaoReveal string `json:"randao_reveal"`
Eth1Data *Eth1Data `json:"eth1_data"`
Graffiti string `json:"graffiti"`
ProposerSlashings []*ProposerSlashing `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingElectra `json:"attester_slashings"`
Attestations []*AttestationElectra `json:"attestations"`
Deposits []*Deposit `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExit `json:"voluntary_exits"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChange `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
ExecutionRequests *ExecutionRequests `json:"execution_requests"`
}
type SignedBeaconBlockHeaderContainer struct {
Header *SignedBeaconBlockHeader `json:"header"`
Root string `json:"root"`
@@ -516,8 +426,6 @@ type ExecutionPayloadDeneb struct {
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadElectra = ExecutionPayloadDeneb
type ExecutionPayloadHeaderDeneb struct {
ParentHash string `json:"parent_hash"`
FeeRecipient string `json:"fee_recipient"`
@@ -537,11 +445,3 @@ type ExecutionPayloadHeaderDeneb struct {
BlobGasUsed string `json:"blob_gas_used"`
ExcessBlobGas string `json:"excess_blob_gas"`
}
type ExecutionPayloadHeaderElectra = ExecutionPayloadHeaderDeneb
type ExecutionRequests struct {
Deposits []*DepositRequest `json:"deposits"`
Withdrawals []*WithdrawalRequest `json:"withdrawals"`
Consolidations []*ConsolidationRequest `json:"consolidations"`
}

View File

@@ -340,42 +340,6 @@ func (a *AggregateAttestationAndProof) ToConsensus() (*eth.AggregateAttestationA
}, nil
}
func (s *SignedAggregateAttestationAndProofElectra) ToConsensus() (*eth.SignedAggregateAttestationAndProofElectra, error) {
msg, err := s.Message.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Message")
}
sig, err := bytesutil.DecodeHexWithLength(s.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.SignedAggregateAttestationAndProofElectra{
Message: msg,
Signature: sig,
}, nil
}
func (a *AggregateAttestationAndProofElectra) ToConsensus() (*eth.AggregateAttestationAndProofElectra, error) {
aggIndex, err := strconv.ParseUint(a.AggregatorIndex, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "AggregatorIndex")
}
agg, err := a.Aggregate.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Aggregate")
}
proof, err := bytesutil.DecodeHexWithLength(a.SelectionProof, 96)
if err != nil {
return nil, server.NewDecodeError(err, "SelectionProof")
}
return &eth.AggregateAttestationAndProofElectra{
AggregatorIndex: primitives.ValidatorIndex(aggIndex),
Aggregate: agg,
SelectionProof: proof,
}, nil
}
func (a *Attestation) ToConsensus() (*eth.Attestation, error) {
aggBits, err := hexutil.Decode(a.AggregationBits)
if err != nil {
@@ -405,41 +369,6 @@ func AttFromConsensus(a *eth.Attestation) *Attestation {
}
}
func (a *AttestationElectra) ToConsensus() (*eth.AttestationElectra, error) {
aggBits, err := hexutil.Decode(a.AggregationBits)
if err != nil {
return nil, server.NewDecodeError(err, "AggregationBits")
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(a.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
committeeBits, err := hexutil.Decode(a.CommitteeBits)
if err != nil {
return nil, server.NewDecodeError(err, "CommitteeBits")
}
return &eth.AttestationElectra{
AggregationBits: aggBits,
Data: data,
Signature: sig,
CommitteeBits: committeeBits,
}, nil
}
func AttElectraFromConsensus(a *eth.AttestationElectra) *AttestationElectra {
return &AttestationElectra{
AggregationBits: hexutil.Encode(a.AggregationBits),
Data: AttDataFromConsensus(a.Data),
Signature: hexutil.Encode(a.Signature),
CommitteeBits: hexutil.Encode(a.CommitteeBits),
}
}
func (a *AttestationData) ToConsensus() (*eth.AttestationData, error) {
slot, err := strconv.ParseUint(a.Slot, 10, 64)
if err != nil {
@@ -694,18 +623,6 @@ func (s *AttesterSlashing) ToConsensus() (*eth.AttesterSlashing, error) {
return &eth.AttesterSlashing{Attestation_1: att1, Attestation_2: att2}, nil
}
func (s *AttesterSlashingElectra) ToConsensus() (*eth.AttesterSlashingElectra, error) {
att1, err := s.Attestation1.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation1")
}
att2, err := s.Attestation2.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Attestation2")
}
return &eth.AttesterSlashingElectra{Attestation_1: att1, Attestation_2: att2}, nil
}
func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
indices := make([]uint64, len(a.AttestingIndices))
var err error
@@ -731,31 +648,6 @@ func (a *IndexedAttestation) ToConsensus() (*eth.IndexedAttestation, error) {
}, nil
}
func (a *IndexedAttestationElectra) ToConsensus() (*eth.IndexedAttestationElectra, error) {
indices := make([]uint64, len(a.AttestingIndices))
var err error
for i, ix := range a.AttestingIndices {
indices[i], err = strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("AttestingIndices[%d]", i))
}
}
data, err := a.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "Data")
}
sig, err := bytesutil.DecodeHexWithLength(a.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
return &eth.IndexedAttestationElectra{
AttestingIndices: indices,
Data: data,
Signature: sig,
}, nil
}
func WithdrawalsFromConsensus(ws []*enginev1.Withdrawal) []*Withdrawal {
result := make([]*Withdrawal, len(ws))
for i, w := range ws {
@@ -773,126 +665,6 @@ func WithdrawalFromConsensus(w *enginev1.Withdrawal) *Withdrawal {
}
}
func WithdrawalRequestsFromConsensus(ws []*enginev1.WithdrawalRequest) []*WithdrawalRequest {
result := make([]*WithdrawalRequest, len(ws))
for i, w := range ws {
result[i] = WithdrawalRequestFromConsensus(w)
}
return result
}
func WithdrawalRequestFromConsensus(w *enginev1.WithdrawalRequest) *WithdrawalRequest {
return &WithdrawalRequest{
SourceAddress: hexutil.Encode(w.SourceAddress),
ValidatorPubkey: hexutil.Encode(w.ValidatorPubkey),
Amount: fmt.Sprintf("%d", w.Amount),
}
}
func (w *WithdrawalRequest) ToConsensus() (*enginev1.WithdrawalRequest, error) {
src, err := bytesutil.DecodeHexWithLength(w.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
pubkey, err := bytesutil.DecodeHexWithLength(w.ValidatorPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "ValidatorPubkey")
}
amount, err := strconv.ParseUint(w.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
return &enginev1.WithdrawalRequest{
SourceAddress: src,
ValidatorPubkey: pubkey,
Amount: amount,
}, nil
}
func ConsolidationRequestsFromConsensus(cs []*enginev1.ConsolidationRequest) []*ConsolidationRequest {
result := make([]*ConsolidationRequest, len(cs))
for i, c := range cs {
result[i] = ConsolidationRequestFromConsensus(c)
}
return result
}
func ConsolidationRequestFromConsensus(c *enginev1.ConsolidationRequest) *ConsolidationRequest {
return &ConsolidationRequest{
SourceAddress: hexutil.Encode(c.SourceAddress),
SourcePubkey: hexutil.Encode(c.SourcePubkey),
TargetPubkey: hexutil.Encode(c.TargetPubkey),
}
}
func (c *ConsolidationRequest) ToConsensus() (*enginev1.ConsolidationRequest, error) {
srcAddress, err := bytesutil.DecodeHexWithLength(c.SourceAddress, common.AddressLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourceAddress")
}
srcPubkey, err := bytesutil.DecodeHexWithLength(c.SourcePubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "SourcePubkey")
}
targetPubkey, err := bytesutil.DecodeHexWithLength(c.TargetPubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "TargetPubkey")
}
return &enginev1.ConsolidationRequest{
SourceAddress: srcAddress,
SourcePubkey: srcPubkey,
TargetPubkey: targetPubkey,
}, nil
}
func DepositRequestsFromConsensus(ds []*enginev1.DepositRequest) []*DepositRequest {
result := make([]*DepositRequest, len(ds))
for i, d := range ds {
result[i] = DepositRequestFromConsensus(d)
}
return result
}
func DepositRequestFromConsensus(d *enginev1.DepositRequest) *DepositRequest {
return &DepositRequest{
Pubkey: hexutil.Encode(d.Pubkey),
WithdrawalCredentials: hexutil.Encode(d.WithdrawalCredentials),
Amount: fmt.Sprintf("%d", d.Amount),
Signature: hexutil.Encode(d.Signature),
Index: fmt.Sprintf("%d", d.Index),
}
}
func (d *DepositRequest) ToConsensus() (*enginev1.DepositRequest, error) {
pubkey, err := bytesutil.DecodeHexWithLength(d.Pubkey, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, "Pubkey")
}
withdrawalCredentials, err := bytesutil.DecodeHexWithLength(d.WithdrawalCredentials, fieldparams.RootLength)
if err != nil {
return nil, server.NewDecodeError(err, "WithdrawalCredentials")
}
amount, err := strconv.ParseUint(d.Amount, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Amount")
}
sig, err := bytesutil.DecodeHexWithLength(d.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, "Signature")
}
index, err := strconv.ParseUint(d.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
return &enginev1.DepositRequest{
Pubkey: pubkey,
WithdrawalCredentials: withdrawalCredentials,
Amount: amount,
Signature: sig,
Index: index,
}, nil
}
func ProposerSlashingsToConsensus(src []*ProposerSlashing) ([]*eth.ProposerSlashing, error) {
if src == nil {
return nil, errNilValue
@@ -1158,138 +930,6 @@ func AttesterSlashingFromConsensus(src *eth.AttesterSlashing) *AttesterSlashing
}
}
func AttesterSlashingsElectraToConsensus(src []*AttesterSlashingElectra) ([]*eth.AttesterSlashingElectra, error) {
if src == nil {
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 2)
if err != nil {
return nil, err
}
attesterSlashings := make([]*eth.AttesterSlashingElectra, len(src))
for i, s := range src {
if s == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d]", i))
}
if s.Attestation1 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation1", i))
}
if s.Attestation2 == nil {
return nil, server.NewDecodeError(errNilValue, fmt.Sprintf("[%d].Attestation2", i))
}
a1Sig, err := bytesutil.DecodeHexWithLength(s.Attestation1.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation1.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices", i))
}
a1AttestingIndices := make([]uint64, len(s.Attestation1.AttestingIndices))
for j, ix := range s.Attestation1.AttestingIndices {
attestingIndex, err := strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.AttestingIndices[%d]", i, j))
}
a1AttestingIndices[j] = attestingIndex
}
a1Data, err := s.Attestation1.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation1.Data", i))
}
a2Sig, err := bytesutil.DecodeHexWithLength(s.Attestation2.Signature, fieldparams.BLSSignatureLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Signature", i))
}
err = slice.VerifyMaxLength(s.Attestation2.AttestingIndices, 2048)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices", i))
}
a2AttestingIndices := make([]uint64, len(s.Attestation2.AttestingIndices))
for j, ix := range s.Attestation2.AttestingIndices {
attestingIndex, err := strconv.ParseUint(ix, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.AttestingIndices[%d]", i, j))
}
a2AttestingIndices[j] = attestingIndex
}
a2Data, err := s.Attestation2.Data.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d].Attestation2.Data", i))
}
attesterSlashings[i] = &eth.AttesterSlashingElectra{
Attestation_1: &eth.IndexedAttestationElectra{
AttestingIndices: a1AttestingIndices,
Data: a1Data,
Signature: a1Sig,
},
Attestation_2: &eth.IndexedAttestationElectra{
AttestingIndices: a2AttestingIndices,
Data: a2Data,
Signature: a2Sig,
},
}
}
return attesterSlashings, nil
}
func AttesterSlashingsElectraFromConsensus(src []*eth.AttesterSlashingElectra) []*AttesterSlashingElectra {
attesterSlashings := make([]*AttesterSlashingElectra, len(src))
for i, s := range src {
attesterSlashings[i] = AttesterSlashingElectraFromConsensus(s)
}
return attesterSlashings
}
func AttesterSlashingElectraFromConsensus(src *eth.AttesterSlashingElectra) *AttesterSlashingElectra {
a1AttestingIndices := make([]string, len(src.Attestation_1.AttestingIndices))
for j, ix := range src.Attestation_1.AttestingIndices {
a1AttestingIndices[j] = fmt.Sprintf("%d", ix)
}
a2AttestingIndices := make([]string, len(src.Attestation_2.AttestingIndices))
for j, ix := range src.Attestation_2.AttestingIndices {
a2AttestingIndices[j] = fmt.Sprintf("%d", ix)
}
return &AttesterSlashingElectra{
Attestation1: &IndexedAttestationElectra{
AttestingIndices: a1AttestingIndices,
Data: &AttestationData{
Slot: fmt.Sprintf("%d", src.Attestation_1.Data.Slot),
CommitteeIndex: fmt.Sprintf("%d", src.Attestation_1.Data.CommitteeIndex),
BeaconBlockRoot: hexutil.Encode(src.Attestation_1.Data.BeaconBlockRoot),
Source: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_1.Data.Source.Epoch),
Root: hexutil.Encode(src.Attestation_1.Data.Source.Root),
},
Target: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_1.Data.Target.Epoch),
Root: hexutil.Encode(src.Attestation_1.Data.Target.Root),
},
},
Signature: hexutil.Encode(src.Attestation_1.Signature),
},
Attestation2: &IndexedAttestationElectra{
AttestingIndices: a2AttestingIndices,
Data: &AttestationData{
Slot: fmt.Sprintf("%d", src.Attestation_2.Data.Slot),
CommitteeIndex: fmt.Sprintf("%d", src.Attestation_2.Data.CommitteeIndex),
BeaconBlockRoot: hexutil.Encode(src.Attestation_2.Data.BeaconBlockRoot),
Source: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_2.Data.Source.Epoch),
Root: hexutil.Encode(src.Attestation_2.Data.Source.Root),
},
Target: &Checkpoint{
Epoch: fmt.Sprintf("%d", src.Attestation_2.Data.Target.Epoch),
Root: hexutil.Encode(src.Attestation_2.Data.Target.Root),
},
},
Signature: hexutil.Encode(src.Attestation_2.Signature),
},
}
}
func AttsToConsensus(src []*Attestation) ([]*eth.Attestation, error) {
if src == nil {
return nil, errNilValue
@@ -1317,33 +957,6 @@ func AttsFromConsensus(src []*eth.Attestation) []*Attestation {
return atts
}
func AttsElectraToConsensus(src []*AttestationElectra) ([]*eth.AttestationElectra, error) {
if src == nil {
return nil, errNilValue
}
err := slice.VerifyMaxLength(src, 8)
if err != nil {
return nil, err
}
atts := make([]*eth.AttestationElectra, len(src))
for i, a := range src {
atts[i], err = a.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("[%d]", i))
}
}
return atts, nil
}
func AttsElectraFromConsensus(src []*eth.AttestationElectra) []*AttestationElectra {
atts := make([]*AttestationElectra, len(src))
for i, a := range src {
atts[i] = AttElectraFromConsensus(a)
}
return atts
}
func DepositsToConsensus(src []*Deposit) ([]*eth.Deposit, error) {
if src == nil {
return nil, errNilValue
@@ -1474,37 +1087,3 @@ func DepositSnapshotFromConsensus(ds *eth.DepositSnapshot) *DepositSnapshot {
ExecutionBlockHeight: fmt.Sprintf("%d", ds.ExecutionDepth),
}
}
func PendingBalanceDepositsFromConsensus(ds []*eth.PendingBalanceDeposit) []*PendingBalanceDeposit {
deposits := make([]*PendingBalanceDeposit, len(ds))
for i, d := range ds {
deposits[i] = &PendingBalanceDeposit{
Index: fmt.Sprintf("%d", d.Index),
Amount: fmt.Sprintf("%d", d.Amount),
}
}
return deposits
}
func PendingPartialWithdrawalsFromConsensus(ws []*eth.PendingPartialWithdrawal) []*PendingPartialWithdrawal {
withdrawals := make([]*PendingPartialWithdrawal, len(ws))
for i, w := range ws {
withdrawals[i] = &PendingPartialWithdrawal{
Index: fmt.Sprintf("%d", w.Index),
Amount: fmt.Sprintf("%d", w.Amount),
WithdrawableEpoch: fmt.Sprintf("%d", w.WithdrawableEpoch),
}
}
return withdrawals
}
func PendingConsolidationsFromConsensus(cs []*eth.PendingConsolidation) []*PendingConsolidation {
consolidations := make([]*PendingConsolidation, len(cs))
for i, c := range cs {
consolidations[i] = &PendingConsolidation{
SourceIndex: fmt.Sprintf("%d", c.SourceIndex),
TargetIndex: fmt.Sprintf("%d", c.TargetIndex),
}
}
return consolidations
}
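The converters removed above all follow one JSON convention: unsigned integers travel as decimal strings, and fixed-length byte fields travel as 0x-prefixed hex, with every decode failure wrapped in a field-specific error. A small standalone sketch of that convention using only the standard library (the helper below is illustrative; the real code uses bytesutil.DecodeHexWithLength and server.NewDecodeError):

package main

import (
    "encoding/hex"
    "fmt"
    "strconv"
    "strings"
)

// decodeHexWithLength mirrors the bytesutil helper used above: it strips the
// 0x prefix, decodes the hex string, and enforces an exact byte length.
func decodeHexWithLength(s string, length int) ([]byte, error) {
    b, err := hex.DecodeString(strings.TrimPrefix(s, "0x"))
    if err != nil {
        return nil, err
    }
    if len(b) != length {
        return nil, fmt.Errorf("expected %d bytes, got %d", length, len(b))
    }
    return b, nil
}

func main() {
    // Unsigned integers are carried as decimal strings.
    amount, err := strconv.ParseUint("32000000000", 10, 64)
    if err != nil {
        panic(err)
    }
    // Byte fields are carried as 0x-prefixed hex of a fixed length.
    addr, err := decodeHexWithLength("0x"+strings.Repeat("ab", 20), 20)
    if err != nil {
        panic(err)
    }
    fmt.Println(amount, len(addr)) // 32000000000 20
}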

View File

@@ -1,61 +0,0 @@
package structs
import (
"strconv"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (sc *Sidecar) ToConsensus() (*eth.BlobSidecar, error) {
if sc == nil {
return nil, errNilValue
}
index, err := strconv.ParseUint(sc.Index, 10, 64)
if err != nil {
return nil, server.NewDecodeError(err, "Index")
}
blob, err := bytesutil.DecodeHexWithLength(sc.Blob, 131072)
if err != nil {
return nil, server.NewDecodeError(err, "Blob")
}
kzgCommitment, err := bytesutil.DecodeHexWithLength(sc.KzgCommitment, 48)
if err != nil {
return nil, server.NewDecodeError(err, "KzgCommitment")
}
kzgProof, err := bytesutil.DecodeHexWithLength(sc.KzgProof, 48)
if err != nil {
return nil, server.NewDecodeError(err, "KzgProof")
}
header, err := sc.SignedBeaconBlockHeader.ToConsensus()
if err != nil {
return nil, server.NewDecodeError(err, "SignedBeaconBlockHeader")
}
// decode the commitment inclusion proof
var commitmentInclusionProof [][]byte
for _, proof := range sc.CommitmentInclusionProof {
proofBytes, err := bytesutil.DecodeHexWithLength(proof, 32)
if err != nil {
return nil, server.NewDecodeError(err, "CommitmentInclusionProof")
}
commitmentInclusionProof = append(commitmentInclusionProof, proofBytes)
}
bsc := &eth.BlobSidecar{
Index: index,
Blob: blob,
KzgCommitment: kzgCommitment,
KzgProof: kzgProof,
SignedBlockHeader: header,
CommitmentInclusionProof: commitmentInclusionProof,
}
return bsc, nil
}

File diff suppressed because it is too large

View File

@@ -1,122 +0,0 @@
package structs
import (
"encoding/json"
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
v1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
v2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
)
func LightClientUpdateFromConsensus(update *v2.LightClientUpdate) (*LightClientUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
return &LightClientUpdate{
AttestedHeader: attestedHeader,
NextSyncCommittee: SyncCommitteeFromConsensus(migration.V2SyncCommitteeToV1Alpha1(update.NextSyncCommittee)),
NextSyncCommitteeBranch: branchToJSON(update.NextSyncCommitteeBranch),
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func LightClientFinalityUpdateFromConsensus(update *v2.LightClientFinalityUpdate) (*LightClientFinalityUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
return &LightClientFinalityUpdate{
AttestedHeader: attestedHeader,
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func LightClientOptimisticUpdateFromConsensus(update *v2.LightClientOptimisticUpdate) (*LightClientOptimisticUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
return &LightClientOptimisticUpdate{
AttestedHeader: attestedHeader,
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func branchToJSON(branchBytes [][]byte) []string {
if branchBytes == nil {
return nil
}
branch := make([]string, len(branchBytes))
for i, root := range branchBytes {
branch[i] = hexutil.Encode(root)
}
return branch
}
func syncAggregateToJSON(input *v1.SyncAggregate) *SyncAggregate {
return &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(input.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(input.SyncCommitteeSignature),
}
}
func lightClientHeaderContainerToJSON(container *v2.LightClientHeaderContainer) (json.RawMessage, error) {
beacon, err := container.GetBeacon()
if err != nil {
return nil, errors.Wrap(err, "could not get beacon block header")
}
var header any
switch t := (container.Header).(type) {
case *v2.LightClientHeaderContainer_HeaderAltair:
header = &LightClientHeader{Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon))}
case *v2.LightClientHeaderContainer_HeaderCapella:
execution, err := ExecutionPayloadHeaderCapellaFromConsensus(t.HeaderCapella.Execution)
if err != nil {
return nil, err
}
header = &LightClientHeaderCapella{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
Execution: execution,
ExecutionBranch: branchToJSON(t.HeaderCapella.ExecutionBranch),
}
case *v2.LightClientHeaderContainer_HeaderDeneb:
execution, err := ExecutionPayloadHeaderDenebFromConsensus(t.HeaderDeneb.Execution)
if err != nil {
return nil, err
}
header = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
Execution: execution,
ExecutionBranch: branchToJSON(t.HeaderDeneb.ExecutionBranch),
}
default:
return nil, fmt.Errorf("unsupported header type %T", t)
}
return json.Marshal(header)
}
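Since lightClientHeaderContainerToJSON returns a json.RawMessage whose concrete shape depends on the fork, a reader of these responses has to pick the header struct from the advertised version before unmarshaling. A hypothetical consumer-side sketch (the package name and import path are illustrative):

package lightclient

import (
    "encoding/json"
    "fmt"
    "strings"

    // Illustrative import path for the structs package shown in this diff.
    "github.com/prysmaticlabs/prysm/v5/api/server/structs"
)

// decodeHeader picks the concrete header struct for the given version and
// unmarshals the raw JSON produced by lightClientHeaderContainerToJSON.
func decodeHeader(version string, raw json.RawMessage) (any, error) {
    switch strings.ToLower(version) {
    case "altair":
        h := &structs.LightClientHeader{}
        return h, json.Unmarshal(raw, h)
    case "capella":
        h := &structs.LightClientHeaderCapella{}
        return h, json.Unmarshal(raw, h)
    case "deneb":
        h := &structs.LightClientHeaderDeneb{}
        return h, json.Unmarshal(raw, h)
    default:
        return nil, fmt.Errorf("unsupported version %q", version)
    }
}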

View File

@@ -593,185 +593,3 @@ func BeaconStateDenebFromConsensus(st beaconState.BeaconState) (*BeaconStateDene
HistoricalSummaries: hs,
}, nil
}
func BeaconStateElectraFromConsensus(st beaconState.BeaconState) (*BeaconStateElectra, error) {
srcBr := st.BlockRoots()
br := make([]string, len(srcBr))
for i, r := range srcBr {
br[i] = hexutil.Encode(r)
}
srcSr := st.StateRoots()
sr := make([]string, len(srcSr))
for i, r := range srcSr {
sr[i] = hexutil.Encode(r)
}
srcHr, err := st.HistoricalRoots()
if err != nil {
return nil, err
}
hr := make([]string, len(srcHr))
for i, r := range srcHr {
hr[i] = hexutil.Encode(r)
}
srcVotes := st.Eth1DataVotes()
votes := make([]*Eth1Data, len(srcVotes))
for i, e := range srcVotes {
votes[i] = Eth1DataFromConsensus(e)
}
srcVals := st.Validators()
vals := make([]*Validator, len(srcVals))
for i, v := range srcVals {
vals[i] = ValidatorFromConsensus(v)
}
srcBals := st.Balances()
bals := make([]string, len(srcBals))
for i, b := range srcBals {
bals[i] = fmt.Sprintf("%d", b)
}
srcRm := st.RandaoMixes()
rm := make([]string, len(srcRm))
for i, m := range srcRm {
rm[i] = hexutil.Encode(m)
}
srcSlashings := st.Slashings()
slashings := make([]string, len(srcSlashings))
for i, s := range srcSlashings {
slashings[i] = fmt.Sprintf("%d", s)
}
srcPrevPart, err := st.PreviousEpochParticipation()
if err != nil {
return nil, err
}
prevPart := make([]string, len(srcPrevPart))
for i, p := range srcPrevPart {
prevPart[i] = fmt.Sprintf("%d", p)
}
srcCurrPart, err := st.CurrentEpochParticipation()
if err != nil {
return nil, err
}
currPart := make([]string, len(srcCurrPart))
for i, p := range srcCurrPart {
currPart[i] = fmt.Sprintf("%d", p)
}
srcIs, err := st.InactivityScores()
if err != nil {
return nil, err
}
is := make([]string, len(srcIs))
for i, s := range srcIs {
is[i] = fmt.Sprintf("%d", s)
}
currSc, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSc, err := st.NextSyncCommittee()
if err != nil {
return nil, err
}
execData, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
srcPayload, ok := execData.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, errPayloadHeaderNotFound
}
payload, err := ExecutionPayloadHeaderElectraFromConsensus(srcPayload)
if err != nil {
return nil, err
}
srcHs, err := st.HistoricalSummaries()
if err != nil {
return nil, err
}
hs := make([]*HistoricalSummary, len(srcHs))
for i, s := range srcHs {
hs[i] = HistoricalSummaryFromConsensus(s)
}
nwi, err := st.NextWithdrawalIndex()
if err != nil {
return nil, err
}
nwvi, err := st.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
drsi, err := st.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
dbtc, err := st.DepositBalanceToConsume()
if err != nil {
return nil, err
}
ebtc, err := st.ExitBalanceToConsume()
if err != nil {
return nil, err
}
eee, err := st.EarliestExitEpoch()
if err != nil {
return nil, err
}
cbtc, err := st.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
ece, err := st.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pbd, err := st.PendingBalanceDeposits()
if err != nil {
return nil, err
}
ppw, err := st.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pc, err := st.PendingConsolidations()
if err != nil {
return nil, err
}
return &BeaconStateElectra{
GenesisTime: fmt.Sprintf("%d", st.GenesisTime()),
GenesisValidatorsRoot: hexutil.Encode(st.GenesisValidatorsRoot()),
Slot: fmt.Sprintf("%d", st.Slot()),
Fork: ForkFromConsensus(st.Fork()),
LatestBlockHeader: BeaconBlockHeaderFromConsensus(st.LatestBlockHeader()),
BlockRoots: br,
StateRoots: sr,
HistoricalRoots: hr,
Eth1Data: Eth1DataFromConsensus(st.Eth1Data()),
Eth1DataVotes: votes,
Eth1DepositIndex: fmt.Sprintf("%d", st.Eth1DepositIndex()),
Validators: vals,
Balances: bals,
RandaoMixes: rm,
Slashings: slashings,
PreviousEpochParticipation: prevPart,
CurrentEpochParticipation: currPart,
JustificationBits: hexutil.Encode(st.JustificationBits()),
PreviousJustifiedCheckpoint: CheckpointFromConsensus(st.PreviousJustifiedCheckpoint()),
CurrentJustifiedCheckpoint: CheckpointFromConsensus(st.CurrentJustifiedCheckpoint()),
FinalizedCheckpoint: CheckpointFromConsensus(st.FinalizedCheckpoint()),
InactivityScores: is,
CurrentSyncCommittee: SyncCommitteeFromConsensus(currSc),
NextSyncCommittee: SyncCommitteeFromConsensus(nextSc),
LatestExecutionPayloadHeader: payload,
NextWithdrawalIndex: fmt.Sprintf("%d", nwi),
NextWithdrawalValidatorIndex: fmt.Sprintf("%d", nwvi),
HistoricalSummaries: hs,
DepositRequestsStartIndex: fmt.Sprintf("%d", drsi),
DepositBalanceToConsume: fmt.Sprintf("%d", dbtc),
ExitBalanceToConsume: fmt.Sprintf("%d", ebtc),
EarliestExitEpoch: fmt.Sprintf("%d", eee),
ConsolidationBalanceToConsume: fmt.Sprintf("%d", cbtc),
EarliestConsolidationEpoch: fmt.Sprintf("%d", ece),
PendingBalanceDeposits: PendingBalanceDepositsFromConsensus(pbd),
PendingPartialWithdrawals: PendingPartialWithdrawalsFromConsensus(ppw),
PendingConsolidations: PendingConsolidationsFromConsensus(pc),
}, nil
}

View File

@@ -225,19 +225,3 @@ type IndividualVote struct {
InclusionDistance string `json:"inclusion_distance"`
InactivityScore string `json:"inactivity_score"`
}
type ChainHead struct {
HeadSlot string `json:"head_slot"`
HeadEpoch string `json:"head_epoch"`
HeadBlockRoot string `json:"head_block_root"`
FinalizedSlot string `json:"finalized_slot"`
FinalizedEpoch string `json:"finalized_epoch"`
FinalizedBlockRoot string `json:"finalized_block_root"`
JustifiedSlot string `json:"justified_slot"`
JustifiedEpoch string `json:"justified_epoch"`
JustifiedBlockRoot string `json:"justified_block_root"`
PreviousJustifiedSlot string `json:"previous_justified_slot"`
PreviousJustifiedEpoch string `json:"previous_justified_epoch"`
PreviousJustifiedBlockRoot string `json:"previous_justified_block_root"`
OptimisticStatus bool `json:"optimistic_status"`
}

View File

@@ -12,12 +12,3 @@ type Sidecar struct {
KzgProof string `json:"kzg_proof"`
CommitmentInclusionProof []string `json:"kzg_commitment_inclusion_proof"`
}
type BlobSidecars struct {
Sidecars []*Sidecar `json:"sidecars"`
}
type PublishBlobsRequest struct {
BlobSidecars *BlobSidecars `json:"blob_sidecars"`
BlockRoot string `json:"block_root"`
}

View File

@@ -96,7 +96,21 @@ type LightClientFinalityUpdateEvent struct {
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientFinalityUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdateEvent struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}

View File

@@ -1,73 +1,31 @@
package structs
import "encoding/json"
type LightClientHeader struct {
Beacon *BeaconBlockHeader `json:"beacon"`
}
type LightClientHeaderCapella struct {
Beacon *BeaconBlockHeader `json:"beacon"`
Execution *ExecutionPayloadHeaderCapella `json:"execution"`
ExecutionBranch []string `json:"execution_branch"`
}
type LightClientHeaderDeneb struct {
Beacon *BeaconBlockHeader `json:"beacon"`
Execution *ExecutionPayloadHeaderDeneb `json:"execution"`
ExecutionBranch []string `json:"execution_branch"`
}
type LightClientBootstrap struct {
Header json.RawMessage `json:"header"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
CurrentSyncCommitteeBranch []string `json:"current_sync_committee_branch"`
}
type LightClientUpdate struct {
AttestedHeader json.RawMessage `json:"attested_header"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader json.RawMessage `json:"finalized_header,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientFinalityUpdate struct {
AttestedHeader json.RawMessage `json:"attested_header"`
FinalizedHeader json.RawMessage `json:"finalized_header"`
FinalityBranch []string `json:"finality_branch"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientOptimisticUpdate struct {
AttestedHeader json.RawMessage `json:"attested_header"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientBootstrapResponse struct {
Version string `json:"version"`
Data *LightClientBootstrap `json:"data"`
}
type LightClientUpdateResponse struct {
type LightClientBootstrap struct {
Header *BeaconBlockHeader `json:"header"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
CurrentSyncCommitteeBranch []string `json:"current_sync_committee_branch"`
}
type LightClientUpdate struct {
AttestedHeader *BeaconBlockHeader `json:"attested_header"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee,omitempty"`
FinalizedHeader *BeaconBlockHeader `json:"finalized_header,omitempty"`
SyncAggregate *SyncAggregate `json:"sync_aggregate"`
NextSyncCommitteeBranch []string `json:"next_sync_committee_branch,omitempty"`
FinalityBranch []string `json:"finality_branch,omitempty"`
SignatureSlot string `json:"signature_slot"`
}
type LightClientUpdateWithVersion struct {
Version string `json:"version"`
Data *LightClientUpdate `json:"data"`
}
type LightClientFinalityUpdateResponse struct {
Version string `json:"version"`
Data *LightClientFinalityUpdate `json:"data"`
}
type LightClientOptimisticUpdateResponse struct {
Version string `json:"version"`
Data *LightClientOptimisticUpdate `json:"data"`
}
type LightClientUpdatesByRangeResponse struct {
Updates []*LightClientUpdateResponse `json:"updates"`
Updates []*LightClientUpdateWithVersion `json:"updates"`
}

View File

@@ -118,34 +118,3 @@ type GetValidatorPerformanceResponse struct {
MissingValidators [][]byte `json:"missing_validators,omitempty"`
InactivityScores []uint64 `json:"inactivity_scores,omitempty"`
}
type GetValidatorParticipationResponse struct {
Epoch string `json:"epoch"`
Finalized bool `json:"finalized"`
Participation *ValidatorParticipation `json:"participation"`
}
type ValidatorParticipation struct {
GlobalParticipationRate string `json:"global_participation_rate" deprecated:"true"`
VotedEther string `json:"voted_ether" deprecated:"true"`
EligibleEther string `json:"eligible_ether" deprecated:"true"`
CurrentEpochActiveGwei string `json:"current_epoch_active_gwei"`
CurrentEpochAttestingGwei string `json:"current_epoch_attesting_gwei"`
CurrentEpochTargetAttestingGwei string `json:"current_epoch_target_attesting_gwei"`
PreviousEpochActiveGwei string `json:"previous_epoch_active_gwei"`
PreviousEpochAttestingGwei string `json:"previous_epoch_attesting_gwei"`
PreviousEpochTargetAttestingGwei string `json:"previous_epoch_target_attesting_gwei"`
PreviousEpochHeadAttestingGwei string `json:"previous_epoch_head_attesting_gwei"`
}
type ActiveSetChanges struct {
Epoch string `json:"epoch"`
ActivatedPublicKeys []string `json:"activated_public_keys"`
ActivatedIndices []string `json:"activated_indices"`
ExitedPublicKeys []string `json:"exited_public_keys"`
ExitedIndices []string `json:"exited_indices"`
SlashedPublicKeys []string `json:"slashed_public_keys"`
SlashedIndices []string `json:"slashed_indices"`
EjectedPublicKeys []string `json:"ejected_public_keys"`
EjectedIndices []string `json:"ejected_indices"`
}

View File

@@ -29,13 +29,6 @@ type Attestation struct {
Signature string `json:"signature"`
}
type AttestationElectra struct {
AggregationBits string `json:"aggregation_bits"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
CommitteeBits string `json:"committee_bits"`
}
type AttestationData struct {
Slot string `json:"slot"`
CommitteeIndex string `json:"index"`
@@ -85,17 +78,6 @@ type AggregateAttestationAndProof struct {
SelectionProof string `json:"selection_proof"`
}
type SignedAggregateAttestationAndProofElectra struct {
Message *AggregateAttestationAndProofElectra `json:"message"`
Signature string `json:"signature"`
}
type AggregateAttestationAndProofElectra struct {
AggregatorIndex string `json:"aggregator_index"`
Aggregate *AttestationElectra `json:"aggregate"`
SelectionProof string `json:"selection_proof"`
}
type SyncCommitteeSubscription struct {
ValidatorIndex string `json:"validator_index"`
SyncCommitteeIndices []string `json:"sync_committee_indices"`
@@ -196,11 +178,6 @@ type AttesterSlashing struct {
Attestation2 *IndexedAttestation `json:"attestation_2"`
}
type AttesterSlashingElectra struct {
Attestation1 *IndexedAttestationElectra `json:"attestation_1"`
Attestation2 *IndexedAttestationElectra `json:"attestation_2"`
}
type Deposit struct {
Proof []string `json:"proof"`
Data *DepositData `json:"data"`
@@ -219,12 +196,6 @@ type IndexedAttestation struct {
Signature string `json:"signature"`
}
type IndexedAttestationElectra struct {
AttestingIndices []string `json:"attesting_indices"`
Data *AttestationData `json:"data"`
Signature string `json:"signature"`
}
type SyncAggregate struct {
SyncCommitteeBits string `json:"sync_committee_bits"`
SyncCommitteeSignature string `json:"sync_committee_signature"`
@@ -236,39 +207,3 @@ type Withdrawal struct {
ExecutionAddress string `json:"address"`
Amount string `json:"amount"`
}
type DepositRequest struct {
Pubkey string `json:"pubkey"`
WithdrawalCredentials string `json:"withdrawal_credentials"`
Amount string `json:"amount"`
Signature string `json:"signature"`
Index string `json:"index"`
}
type WithdrawalRequest struct {
SourceAddress string `json:"source_address"`
ValidatorPubkey string `json:"validator_pubkey"`
Amount string `json:"amount"`
}
type ConsolidationRequest struct {
SourceAddress string `json:"source_address"`
SourcePubkey string `json:"source_pubkey"`
TargetPubkey string `json:"target_pubkey"`
}
type PendingBalanceDeposit struct {
Index string `json:"index"`
Amount string `json:"amount"`
}
type PendingPartialWithdrawal struct {
Index string `json:"index"`
Amount string `json:"amount"`
WithdrawableEpoch string `json:"withdrawable_epoch"`
}
type PendingConsolidation struct {
SourceIndex string `json:"source_index"`
TargetIndex string `json:"target_index"`
}

View File

@@ -140,43 +140,3 @@ type BeaconStateDeneb struct {
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
}
type BeaconStateElectra struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root"`
Slot string `json:"slot"`
Fork *Fork `json:"fork"`
LatestBlockHeader *BeaconBlockHeader `json:"latest_block_header"`
BlockRoots []string `json:"block_roots"`
StateRoots []string `json:"state_roots"`
HistoricalRoots []string `json:"historical_roots"`
Eth1Data *Eth1Data `json:"eth1_data"`
Eth1DataVotes []*Eth1Data `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*Validator `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation []string `json:"previous_epoch_participation"`
CurrentEpochParticipation []string `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits"`
PreviousJustifiedCheckpoint *Checkpoint `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *Checkpoint `json:"current_justified_checkpoint"`
FinalizedCheckpoint *Checkpoint `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommittee `json:"current_sync_committee"`
NextSyncCommittee *SyncCommittee `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderElectra `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
HistoricalSummaries []*HistoricalSummary `json:"historical_summaries"`
DepositRequestsStartIndex string `json:"deposit_requests_start_index"`
DepositBalanceToConsume string `json:"deposit_balance_to_consume"`
ExitBalanceToConsume string `json:"exit_balance_to_consume"`
EarliestExitEpoch string `json:"earliest_exit_epoch"`
ConsolidationBalanceToConsume string `json:"consolidation_balance_to_consume"`
EarliestConsolidationEpoch string `json:"earliest_consolidation_epoch"`
PendingBalanceDeposits []*PendingBalanceDeposit `json:"pending_balance_deposits"`
PendingPartialWithdrawals []*PendingPartialWithdrawal `json:"pending_partial_withdrawals"`
PendingConsolidations []*PendingConsolidation `json:"pending_consolidations"`
}

View File

@@ -8,20 +8,22 @@ go_library(
],
importpath = "github.com/prysmaticlabs/prysm/v5/async/event",
visibility = ["//visibility:public"],
deps = [
"//time/mclock:go_default_library",
"@com_github_ethereum_go_ethereum//event:go_default_library",
],
deps = ["//time/mclock:go_default_library"],
)
go_test(
name = "go_default_test",
size = "small",
srcs = [
"example_feed_test.go",
"example_scope_test.go",
"example_subscription_test.go",
"feed_test.go",
"subscription_test.go",
],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -0,0 +1,73 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package event_test
import (
"fmt"
"github.com/prysmaticlabs/prysm/v5/async/event"
)
func ExampleFeed_acknowledgedEvents() {
// This example shows how the return value of Send can be used for request/reply
// interaction between event consumers and producers.
var feed event.Feed
type ackedEvent struct {
i int
ack chan<- struct{}
}
// Consumers wait for events on the feed and acknowledge processing.
done := make(chan struct{})
defer close(done)
for i := 0; i < 3; i++ {
ch := make(chan ackedEvent, 100)
sub := feed.Subscribe(ch)
go func() {
defer sub.Unsubscribe()
for {
select {
case ev := <-ch:
fmt.Println(ev.i) // "process" the event
ev.ack <- struct{}{}
case <-done:
return
}
}
}()
}
// The producer sends values of type ackedEvent with increasing values of i.
// It waits for all consumers to acknowledge before sending the next event.
for i := 0; i < 3; i++ {
acksignal := make(chan struct{})
n := feed.Send(ackedEvent{i, acksignal})
for ack := 0; ack < n; ack++ {
<-acksignal
}
}
// Output:
// 0
// 0
// 0
// 1
// 1
// 1
// 2
// 2
// 2
}

View File

@@ -14,11 +14,241 @@
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
// Package event contains an event feed implementation for process communication.
package event
import (
geth_event "github.com/ethereum/go-ethereum/event"
"errors"
"reflect"
"slices"
"sync"
)
// Feed is a re-export of the go-ethereum event feed.
type Feed = geth_event.Feed
var errBadChannel = errors.New("event: Subscribe argument does not have sendable channel type")
// Feed implements one-to-many subscriptions where the carrier of events is a channel.
// Values sent to a Feed are delivered to all subscribed channels simultaneously.
//
// Feeds can only be used with a single type. The type is determined by the first Send or
// Subscribe operation. Subsequent calls to these methods panic if the type does not
// match.
//
// The zero value is ready to use.
type Feed struct {
once sync.Once // ensures that init only runs once
sendLock chan struct{} // sendLock has a one-element buffer and is empty when held. It protects sendCases.
removeSub chan interface{} // interrupts Send
sendCases caseList // the active set of select cases used by Send
// The inbox holds newly subscribed channels until they are added to sendCases.
mu sync.Mutex
inbox caseList
etype reflect.Type
}
// This is the index of the first actual subscription channel in sendCases.
// sendCases[0] is a SelectRecv case for the removeSub channel.
const firstSubSendCase = 1
type feedTypeError struct {
got, want reflect.Type
op string
}
func (e feedTypeError) Error() string {
return "event: wrong type in " + e.op + " got " + e.got.String() + ", want " + e.want.String()
}
func (f *Feed) init() {
f.removeSub = make(chan interface{})
f.sendLock = make(chan struct{}, 1)
f.sendLock <- struct{}{}
f.sendCases = caseList{{Chan: reflect.ValueOf(f.removeSub), Dir: reflect.SelectRecv}}
}
// Subscribe adds a channel to the feed. Future sends will be delivered on the channel
// until the subscription is canceled. All channels added must have the same element type.
//
// The channel should have ample buffer space to avoid blocking other subscribers.
// Slow subscribers are not dropped.
func (f *Feed) Subscribe(channel interface{}) Subscription {
f.once.Do(f.init)
chanval := reflect.ValueOf(channel)
chantyp := chanval.Type()
if chantyp.Kind() != reflect.Chan || chantyp.ChanDir()&reflect.SendDir == 0 {
panic(errBadChannel)
}
sub := &feedSub{feed: f, channel: chanval, err: make(chan error, 1)}
f.mu.Lock()
defer f.mu.Unlock()
if !f.typecheck(chantyp.Elem()) {
panic(feedTypeError{op: "Subscribe", got: chantyp, want: reflect.ChanOf(reflect.SendDir, f.etype)})
}
// Add the select case to the inbox.
// The next Send will add it to f.sendCases.
cas := reflect.SelectCase{Dir: reflect.SelectSend, Chan: chanval}
f.inbox = append(f.inbox, cas)
return sub
}
// note: callers must hold f.mu
func (f *Feed) typecheck(typ reflect.Type) bool {
if f.etype == nil {
f.etype = typ
return true
}
// In the event the feed's type is an actual interface, we
// perform an interface conformance check here.
if f.etype.Kind() == reflect.Interface && typ.Implements(f.etype) {
return true
}
return f.etype == typ
}
func (f *Feed) remove(sub *feedSub) {
// Delete from inbox first, which covers channels
// that have not been added to f.sendCases yet.
ch := sub.channel.Interface()
f.mu.Lock()
index := f.inbox.find(ch)
if index != -1 {
f.inbox = f.inbox.delete(index)
f.mu.Unlock()
return
}
f.mu.Unlock()
select {
case f.removeSub <- ch:
// Send will remove the channel from f.sendCases.
case <-f.sendLock:
// No Send is in progress, delete the channel now that we have the send lock.
f.sendCases = f.sendCases.delete(f.sendCases.find(ch))
f.sendLock <- struct{}{}
}
}
// Send delivers to all subscribed channels simultaneously.
// It returns the number of subscribers that the value was sent to.
func (f *Feed) Send(value interface{}) (nsent int) {
rvalue := reflect.ValueOf(value)
f.once.Do(f.init)
<-f.sendLock
// Add new cases from the inbox after taking the send lock.
f.mu.Lock()
f.sendCases = append(f.sendCases, f.inbox...)
f.inbox = nil
if !f.typecheck(rvalue.Type()) {
f.sendLock <- struct{}{}
f.mu.Unlock()
panic(feedTypeError{op: "Send", got: rvalue.Type(), want: f.etype})
}
f.mu.Unlock()
// Set the sent value on all channels.
for i := firstSubSendCase; i < len(f.sendCases); i++ {
f.sendCases[i].Send = rvalue
}
// Send until all channels except removeSub have been chosen. 'cases' tracks a prefix
// of sendCases. When a send succeeds, the corresponding case moves to the end of
// 'cases' and it shrinks by one element.
cases := f.sendCases
for {
// Fast path: try sending without blocking before adding to the select set.
// This should usually succeed if subscribers are fast enough and have free
// buffer space.
for i := firstSubSendCase; i < len(cases); i++ {
if cases[i].Chan.TrySend(rvalue) {
nsent++
cases = cases.deactivate(i)
i--
}
}
if len(cases) == firstSubSendCase {
break
}
// Select on all the receivers, waiting for them to unblock.
chosen, recv, _ := reflect.Select(cases)
if chosen == 0 /* <-f.removeSub */ {
index := f.sendCases.find(recv.Interface())
f.sendCases = f.sendCases.delete(index)
if index >= 0 && index < len(cases) {
// Shrink 'cases' too because the removed case was still active.
cases = f.sendCases[:len(cases)-1]
}
} else {
cases = cases.deactivate(chosen)
nsent++
}
}
// Forget about the sent value and hand off the send lock.
for i := firstSubSendCase; i < len(f.sendCases); i++ {
f.sendCases[i].Send = reflect.Value{}
}
f.sendLock <- struct{}{}
return nsent
}
type feedSub struct {
feed *Feed
channel reflect.Value
errOnce sync.Once
err chan error
}
// Unsubscribe removes the feed subscription.
func (sub *feedSub) Unsubscribe() {
sub.errOnce.Do(func() {
sub.feed.remove(sub)
close(sub.err)
})
}
// Err returns the subscription's error channel.
func (sub *feedSub) Err() <-chan error {
return sub.err
}
type caseList []reflect.SelectCase
// find returns the index of a case containing the given channel.
func (cs caseList) find(channel interface{}) int {
return slices.IndexFunc(cs, func(selectCase reflect.SelectCase) bool {
return selectCase.Chan.Interface() == channel
})
}
// delete removes the given case from cs.
func (cs caseList) delete(index int) caseList {
return append(cs[:index], cs[index+1:]...)
}
// deactivate moves the case at index into the non-accessible portion of the cs slice.
func (cs caseList) deactivate(index int) caseList {
last := len(cs) - 1
cs[index], cs[last] = cs[last], cs[index]
return cs[:last]
}
// func (cs caseList) String() string {
// s := "["
// for i, cas := range cs {
// if i != 0 {
// s += ", "
// }
// switch cas.Dir {
// case reflect.SelectSend:
// s += fmt.Sprintf("%v<-", cas.Chan.Interface())
// case reflect.SelectRecv:
// s += fmt.Sprintf("<-%v", cas.Chan.Interface())
// }
// }
// return s + "]"
// }
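A minimal usage sketch of the vendored Feed (the import path is the one declared in the BUILD file above; the payload type and values are illustrative):

package main

import (
    "fmt"

    "github.com/prysmaticlabs/prysm/v5/async/event"
)

func main() {
    // The zero value is ready to use; the first Subscribe or Send fixes the
    // feed's element type, and later mismatching calls panic.
    var feed event.Feed

    ch := make(chan int, 1) // buffered so Send's fast path does not block
    sub := feed.Subscribe(ch)

    // Send delivers to every subscribed channel and reports how many
    // subscribers received the value.
    if n := feed.Send(42); n != 1 {
        panic(fmt.Sprintf("expected 1 delivery, got %d", n))
    }
    fmt.Println(<-ch) // 42

    // Unsubscribe removes the channel and closes the subscription's error
    // channel, so this receive returns immediately.
    sub.Unsubscribe()
    <-sub.Err()
}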

async/event/feed_test.go
View File

@@ -0,0 +1,509 @@
// Copyright 2016 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package event
import (
"fmt"
"reflect"
"sync"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
)
func TestFeedPanics(t *testing.T) {
{
var f Feed
f.Send(2)
want := feedTypeError{op: "Send", got: reflect.TypeOf(uint64(0)), want: reflect.TypeOf(0)}
assert.NoError(t, checkPanic(want, func() { f.Send(uint64(2)) }))
// Validate it doesn't deadlock.
assert.NoError(t, checkPanic(want, func() { f.Send(uint64(2)) }))
}
{
var f Feed
ch := make(chan int)
f.Subscribe(ch)
want := feedTypeError{op: "Send", got: reflect.TypeOf(uint64(0)), want: reflect.TypeOf(0)}
assert.NoError(t, checkPanic(want, func() { f.Send(uint64(2)) }))
}
{
var f Feed
f.Send(2)
want := feedTypeError{op: "Subscribe", got: reflect.TypeOf(make(chan uint64)), want: reflect.TypeOf(make(chan<- int))}
assert.NoError(t, checkPanic(want, func() { f.Subscribe(make(chan uint64)) }))
}
{
var f Feed
assert.NoError(t, checkPanic(errBadChannel, func() { f.Subscribe(make(<-chan int)) }))
}
{
var f Feed
assert.NoError(t, checkPanic(errBadChannel, func() { f.Subscribe(0) }))
}
}
func checkPanic(want error, fn func()) (err error) {
defer func() {
panicResult := recover()
if panicResult == nil {
err = fmt.Errorf("didn't panic")
} else if !reflect.DeepEqual(panicResult, want) {
err = fmt.Errorf("panicked with wrong error: got %q, want %q", panicResult, want)
}
}()
fn()
return nil
}
func TestFeed(t *testing.T) {
var feed Feed
var done, subscribed sync.WaitGroup
subscriber := func(i int) {
defer done.Done()
subchan := make(chan int)
sub := feed.Subscribe(subchan)
timeout := time.NewTimer(2 * time.Second)
subscribed.Done()
select {
case v := <-subchan:
if v != 1 {
t.Errorf("%d: received value %d, want 1", i, v)
}
case <-timeout.C:
t.Errorf("%d: receive timeout", i)
}
sub.Unsubscribe()
select {
case _, ok := <-sub.Err():
if ok {
t.Errorf("%d: error channel not closed after unsubscribe", i)
}
case <-timeout.C:
t.Errorf("%d: unsubscribe timeout", i)
}
}
const n = 1000
done.Add(n)
subscribed.Add(n)
for i := 0; i < n; i++ {
go subscriber(i)
}
subscribed.Wait()
if nsent := feed.Send(1); nsent != n {
t.Errorf("first send delivered %d times, want %d", nsent, n)
}
if nsent := feed.Send(2); nsent != 0 {
t.Errorf("second send delivered %d times, want 0", nsent)
}
done.Wait()
}
func TestFeedSubscribeSameChannel(t *testing.T) {
var (
feed Feed
done sync.WaitGroup
ch = make(chan int)
sub1 = feed.Subscribe(ch)
sub2 = feed.Subscribe(ch)
_ = feed.Subscribe(ch)
)
expectSends := func(value, n int) {
if nsent := feed.Send(value); nsent != n {
t.Errorf("send delivered %d times, want %d", nsent, n)
}
done.Done()
}
expectRecv := func(wantValue, n int) {
for i := 0; i < n; i++ {
if v := <-ch; v != wantValue {
t.Errorf("received %d, want %d", v, wantValue)
}
}
}
done.Add(1)
go expectSends(1, 3)
expectRecv(1, 3)
done.Wait()
sub1.Unsubscribe()
done.Add(1)
go expectSends(2, 2)
expectRecv(2, 2)
done.Wait()
sub2.Unsubscribe()
done.Add(1)
go expectSends(3, 1)
expectRecv(3, 1)
done.Wait()
}
func TestFeedSubscribeBlockedPost(_ *testing.T) {
var (
feed Feed
nsends = 2000
ch1 = make(chan int)
ch2 = make(chan int)
wg sync.WaitGroup
)
defer wg.Wait()
feed.Subscribe(ch1)
wg.Add(nsends)
for i := 0; i < nsends; i++ {
go func() {
feed.Send(99)
wg.Done()
}()
}
sub2 := feed.Subscribe(ch2)
defer sub2.Unsubscribe()
// We're done when ch1 has received N times.
// The number of receives on ch2 depends on scheduling.
for i := 0; i < nsends; {
select {
case <-ch1:
i++
case <-ch2:
}
}
}
func TestFeedUnsubscribeBlockedPost(_ *testing.T) {
var (
feed Feed
nsends = 200
chans = make([]chan int, 2000)
subs = make([]Subscription, len(chans))
bchan = make(chan int)
bsub = feed.Subscribe(bchan)
wg sync.WaitGroup
)
for i := range chans {
chans[i] = make(chan int, nsends)
}
// Queue up some Sends. None of these can make progress while bchan isn't read.
wg.Add(nsends)
for i := 0; i < nsends; i++ {
go func() {
feed.Send(99)
wg.Done()
}()
}
// Subscribe the other channels.
for i, ch := range chans {
subs[i] = feed.Subscribe(ch)
}
// Unsubscribe them again.
for _, sub := range subs {
sub.Unsubscribe()
}
// Unblock the Sends.
bsub.Unsubscribe()
wg.Wait()
}
// Checks that unsubscribing a channel during Send works even if that
// channel has already been sent on.
func TestFeedUnsubscribeSentChan(_ *testing.T) {
var (
feed Feed
ch1 = make(chan int)
ch2 = make(chan int)
sub1 = feed.Subscribe(ch1)
sub2 = feed.Subscribe(ch2)
wg sync.WaitGroup
)
defer sub2.Unsubscribe()
wg.Add(1)
go func() {
feed.Send(0)
wg.Done()
}()
// Wait for the value on ch1.
<-ch1
// Unsubscribe ch1, removing it from the send cases.
sub1.Unsubscribe()
// Receive ch2, finishing Send.
<-ch2
wg.Wait()
// Send again. This should send to ch2 only, so the wait group will unblock
// as soon as a value is received on ch2.
wg.Add(1)
go func() {
feed.Send(0)
wg.Done()
}()
<-ch2
wg.Wait()
}
func TestFeedUnsubscribeFromInbox(t *testing.T) {
var (
feed Feed
ch1 = make(chan int)
ch2 = make(chan int)
sub1 = feed.Subscribe(ch1)
sub2 = feed.Subscribe(ch1)
sub3 = feed.Subscribe(ch2)
)
assert.Equal(t, 3, len(feed.inbox))
assert.Equal(t, 1, len(feed.sendCases), "sendCases should only contain the remove case before Send")
sub1.Unsubscribe()
sub2.Unsubscribe()
sub3.Unsubscribe()
assert.Equal(t, 0, len(feed.inbox), "Inbox is non-empty after unsubscribe")
assert.Equal(t, 1, len(feed.sendCases), "sendCases is non-empty after unsubscribe")
}
func BenchmarkFeedSend1000(b *testing.B) {
var (
done sync.WaitGroup
feed Feed
nsubs = 1000
)
subscriber := func(ch <-chan int) {
for i := 0; i < b.N; i++ {
<-ch
}
done.Done()
}
done.Add(nsubs)
for i := 0; i < nsubs; i++ {
ch := make(chan int, 200)
feed.Subscribe(ch)
go subscriber(ch)
}
// The actual benchmark.
b.ResetTimer()
for i := 0; i < b.N; i++ {
if feed.Send(i) != nsubs {
panic("wrong number of sends")
}
}
b.StopTimer()
done.Wait()
}
func TestFeed_Send(t *testing.T) {
tests := []struct {
name string
evFeed *Feed
testSetup func(fd *Feed, t *testing.T, o interface{})
obj interface{}
expectPanic bool
}{
{
name: "normal struct",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeedWithPointer, 1)
fd.Subscribe(testChan)
},
obj: testFeedWithPointer{
a: new(uint64),
b: new(string),
},
expectPanic: false,
},
{
name: "un-implemented interface",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeedIface, 1)
fd.Subscribe(testChan)
},
obj: testFeedWithPointer{
a: new(uint64),
b: new(string),
},
expectPanic: true,
},
{
name: "semi-implemented interface",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeedIface, 1)
fd.Subscribe(testChan)
},
obj: testFeed2{
a: 0,
b: "",
c: []byte{'A'},
},
expectPanic: true,
},
{
name: "fully-implemented interface",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeedIface)
// Use an unbuffered channel with a receiving goroutine so the
// value passes through and Send can complete.
go func() {
a := <-testChan
if !reflect.DeepEqual(a, o) {
t.Errorf("Got = %v, want = %v", a, o)
}
}()
fd.Subscribe(testChan)
},
obj: testFeed{
a: 0,
b: "",
},
expectPanic: false,
},
{
name: "fully-implemented interface with additional methods",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeedIface)
// Use an unbuffered channel with a receiving goroutine so the
// value passes through and Send can complete.
go func() {
a := <-testChan
if !reflect.DeepEqual(a, o) {
t.Errorf("Got = %v, want = %v", a, o)
}
}()
fd.Subscribe(testChan)
},
obj: testFeed3{
a: 0,
b: "",
c: []byte{'A'},
d: []byte{'B'},
},
expectPanic: false,
},
{
name: "concrete types implementing the same interface",
evFeed: new(Feed),
testSetup: func(fd *Feed, t *testing.T, o interface{}) {
testChan := make(chan testFeed, 1)
// The channel's element type (testFeed) differs from the sent value's
// type (testFeed3), so Send is expected to panic before delivering anything.
go func() {
a := <-testChan
if !reflect.DeepEqual(a, o) {
t.Errorf("Got = %v, want = %v", a, o)
}
}()
fd.Subscribe(testChan)
},
obj: testFeed3{
a: 0,
b: "",
c: []byte{'A'},
d: []byte{'B'},
},
expectPanic: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
defer func() {
if r := recover(); r != nil {
if !tt.expectPanic {
t.Errorf("panic triggered when unexpected: %v", r)
}
} else {
if tt.expectPanic {
t.Error("panic not triggered when expected")
}
}
}()
tt.testSetup(tt.evFeed, t, tt.obj)
if gotNsent := tt.evFeed.Send(tt.obj); gotNsent != 1 {
t.Errorf("Send() = %v, want %v", gotNsent, 1)
}
})
}
}
// The following are the struct and interface types used by the tests above.
type testFeed struct {
a uint64
b string
}
func (testFeed) method1() {
}
func (testFeed) method2() {
}
type testFeedWithPointer struct {
a *uint64
b *string
}
type testFeed2 struct {
a uint64
b string
c []byte
}
func (testFeed2) method1() {
}
type testFeed3 struct {
a uint64
b string
c, d []byte
}
func (testFeed3) method1() {
}
func (testFeed3) method2() {
}
func (testFeed3) method3() {
}
type testFeedIface interface {
method1()
method2()
}
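For orientation, here is a minimal sketch of the Subscribe/Send/Unsubscribe pattern these tests exercise. It is illustrative only and not part of the diff; the import path for the Feed under test is an assumption.

package event_test

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/async/event" // assumed import path for the Feed under test
)

func ExampleFeed() {
	var feed event.Feed
	ch := make(chan int, 1)
	sub := feed.Subscribe(ch)
	defer sub.Unsubscribe()

	// Send delivers the value to every subscribed channel whose element type
	// matches the value's dynamic type; a mismatch panics, as the table-driven
	// cases in TestFeed_Send show.
	nsent := feed.Send(42)
	fmt.Println(nsent, <-ch)
	// Output: 1 42
}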


@@ -13,6 +13,7 @@ go_library(
"head.go",
"head_sync_committee_info.go",
"init_sync_process_block.go",
"lightclient.go",
"log.go",
"merge_ascii_art.go",
"metrics.go",
@@ -25,6 +26,7 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -47,7 +49,7 @@ go_library(
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -81,10 +83,10 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
@@ -97,6 +99,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -116,6 +119,7 @@ go_test(
"head_test.go",
"init_sync_process_block_test.go",
"init_test.go",
"lightclient_test.go",
"log_test.go",
"metrics_test.go",
"mock_test.go",
@@ -157,6 +161,7 @@ go_test(
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/p2p/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -172,6 +177,7 @@ go_test(
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",


@@ -16,9 +16,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// ChainInfoFetcher defines a common interface for methods in blockchain service which
@@ -203,7 +203,7 @@ func (s *Service) HeadState(ctx context.Context) (state.BeaconState, error) {
defer s.headLock.RUnlock()
ok := s.hasHeadState()
span.SetAttributes(trace.BoolAttribute("cache_hit", ok))
span.AddAttributes(trace.BoolAttribute("cache_hit", ok))
if ok {
return s.headState(ctx), nil
@@ -225,7 +225,7 @@ func (s *Service) HeadStateReadOnly(ctx context.Context) (state.ReadOnlyBeaconSt
defer s.headLock.RUnlock()
ok := s.hasHeadState()
span.SetAttributes(trace.BoolAttribute("cache_hit", ok))
span.AddAttributes(trace.BoolAttribute("cache_hit", ok))
if ok {
return s.headStateReadOnly(ctx), nil


@@ -22,6 +22,13 @@ func (s *Service) GetProposerHead() [32]byte {
return s.cfg.ForkChoiceStore.GetProposerHead()
}
// ShouldOverrideFCU returns the corresponding value from forkchoice
func (s *Service) ShouldOverrideFCU() bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ShouldOverrideFCU()
}
// SetForkChoiceGenesisTime sets the genesis time in Forkchoice
func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.Lock()
@@ -92,10 +99,3 @@ func (s *Service) FinalizedBlockHash() [32]byte {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
}
// ParentRoot wraps a call to the corresponding method in forkchoice
func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}


@@ -33,6 +33,7 @@ var (
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
var errMaxDataColumnsExceeded = errors.New("Expected data columns for node exceeds NUMBER_OF_COLUMNS")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.


@@ -21,11 +21,11 @@ import (
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
const blobCommitmentVersionKZG uint8 = 0x01
@@ -39,7 +39,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdate")
defer span.End()
if arg.headBlock == nil || arg.headBlock.IsNil() {
if arg.headBlock.IsNil() {
log.Error("Head block is nil")
return nil, nil
}


@@ -12,9 +12,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
func (s *Service) isNewHead(r [32]byte) bool {


@@ -18,11 +18,11 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// UpdateAndSaveHeadWithBalances updates the beacon state head after getting justified balanced from cache.


@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"kzg.go",
"trusted_setup.go",
"validation.go",
],
@@ -12,6 +13,9 @@ go_library(
deps = [
"//consensus-types/blocks:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_c_kzg_4844//bindings/go:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -0,0 +1,109 @@
package kzg
import (
"errors"
ckzg4844 "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
)
// BytesPerBlob is the number of bytes in a single blob.
const BytesPerBlob = ckzg4844.BytesPerBlob
// Blob represents a serialized chunk of data.
type Blob [BytesPerBlob]byte
// BytesPerCell is the number of bytes in a single cell.
const BytesPerCell = ckzg4844.BytesPerCell
// Cell represents a chunk of an encoded Blob.
type Cell [BytesPerCell]byte
// Commitment represents a KZG commitment to a Blob.
type Commitment [48]byte
// Proof represents a KZG proof that attests to the validity of a Blob or parts of it.
type Proof [48]byte
// Bytes48 is a 48-byte array.
type Bytes48 = ckzg4844.Bytes48
// Bytes32 is a 32-byte array.
type Bytes32 = ckzg4844.Bytes32
// CellsAndProofs represents the Cells and Proofs corresponding to
// a single blob.
type CellsAndProofs struct {
Cells []Cell
Proofs []Proof
}
func BlobToKZGCommitment(blob *Blob) (Commitment, error) {
comm, err := kzg4844.BlobToCommitment(kzg4844.Blob(*blob))
if err != nil {
return Commitment{}, err
}
return Commitment(comm), nil
}
func ComputeBlobKZGProof(blob *Blob, commitment Commitment) (Proof, error) {
proof, err := kzg4844.ComputeBlobProof(kzg4844.Blob(*blob), kzg4844.Commitment(commitment))
if err != nil {
return [48]byte{}, err
}
return Proof(proof), nil
}
func ComputeCellsAndKZGProofs(blob *Blob) (CellsAndProofs, error) {
ckzgBlob := (*ckzg4844.Blob)(blob)
ckzgCells, ckzgProofs, err := ckzg4844.ComputeCellsAndKZGProofs(ckzgBlob)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
func VerifyCellKZGProofBatch(commitmentsBytes []Bytes48, cellIndices []uint64, cells []Cell, proofsBytes []Bytes48) (bool, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgCells := make([]ckzg4844.Cell, len(cells))
for i := range cells {
ckzgCells[i] = ckzg4844.Cell(cells[i])
}
return ckzg4844.VerifyCellKZGProofBatch(commitmentsBytes, cellIndices, ckzgCells, proofsBytes)
}
func RecoverCellsAndKZGProofs(cellIndices []uint64, partialCells []Cell) (CellsAndProofs, error) {
// Convert `Cell` type to `ckzg4844.Cell`
ckzgPartialCells := make([]ckzg4844.Cell, len(partialCells))
for i := range partialCells {
ckzgPartialCells[i] = ckzg4844.Cell(partialCells[i])
}
ckzgCells, ckzgProofs, err := ckzg4844.RecoverCellsAndKZGProofs(cellIndices, ckzgPartialCells)
if err != nil {
return CellsAndProofs{}, err
}
return makeCellsAndProofs(ckzgCells[:], ckzgProofs[:])
}
// Convert cells/proofs to the CellsAndProofs type defined in this package.
func makeCellsAndProofs(ckzgCells []ckzg4844.Cell, ckzgProofs []ckzg4844.KZGProof) (CellsAndProofs, error) {
if len(ckzgCells) != len(ckzgProofs) {
return CellsAndProofs{}, errors.New("different number of cells/proofs")
}
var cells []Cell
var proofs []Proof
for i := range ckzgCells {
cells = append(cells, Cell(ckzgCells[i]))
proofs = append(proofs, Proof(ckzgProofs[i]))
}
return CellsAndProofs{
Cells: cells,
Proofs: proofs,
}, nil
}
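A minimal usage sketch of the helpers defined above, assuming the trusted setup has already been loaded (see Start in the next file); the error handling is illustrative only.

// Illustrative only: exercises the helpers above for a single blob.
func exampleCellsRoundTrip(blob *Blob) error {
	commitment, err := BlobToKZGCommitment(blob)
	if err != nil {
		return err
	}
	// Extend the blob into cells plus one proof per cell.
	cp, err := ComputeCellsAndKZGProofs(blob)
	if err != nil {
		return err
	}
	// Verify the first cell against the commitment.
	ok, err := VerifyCellKZGProofBatch(
		[]Bytes48{Bytes48(commitment)},
		[]uint64{0},
		[]Cell{cp.Cells[0]},
		[]Bytes48{Bytes48(cp.Proofs[0])},
	)
	if err != nil {
		return err
	}
	if !ok {
		return errors.New("cell proof did not verify")
	}
	// c-kzg can recover the full extension from at least half of the cells;
	// here we keep exactly the first half.
	half := len(cp.Cells) / 2
	indices := make([]uint64, 0, half)
	partial := make([]Cell, 0, half)
	for i := 0; i < half; i++ {
		indices = append(indices, uint64(i))
		partial = append(partial, cp.Cells[i])
	}
	_, err = RecoverCellsAndKZGProofs(indices, partial)
	return err
}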


@@ -5,6 +5,8 @@ import (
"encoding/json"
GoKZG "github.com/crate-crypto/go-kzg-4844"
CKZG "github.com/ethereum/c-kzg-4844/bindings/go"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
)
@@ -12,17 +14,53 @@ var (
//go:embed trusted_setup.json
embeddedTrustedSetup []byte // 1.2Mb
kzgContext *GoKZG.Context
kzgLoaded bool
)
type TrustedSetup struct {
G1Monomial [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_monomial"`
G1Lagrange [GoKZG.ScalarsPerBlob]GoKZG.G1CompressedHexStr `json:"g1_lagrange"`
G2Monomial [65]GoKZG.G2CompressedHexStr `json:"g2_monomial"`
}
func Start() error {
parsedSetup := GoKZG.JSONTrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, &parsedSetup)
trustedSetup := &TrustedSetup{}
err := json.Unmarshal(embeddedTrustedSetup, trustedSetup)
if err != nil {
return errors.Wrap(err, "could not parse trusted setup JSON")
}
kzgContext, err = GoKZG.NewContext4096(&parsedSetup)
kzgContext, err = GoKZG.NewContext4096(&GoKZG.JSONTrustedSetup{
SetupG2: trustedSetup.G2Monomial[:],
SetupG1Lagrange: trustedSetup.G1Lagrange})
if err != nil {
return errors.Wrap(err, "could not initialize go-kzg context")
}
// Length of a G1 point, converted from hex to binary.
g1MonomialBytes := make([]byte, len(trustedSetup.G1Monomial)*(len(trustedSetup.G1Monomial[0])-2)/2)
for i, g1 := range &trustedSetup.G1Monomial {
copy(g1MonomialBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G1 point, converted from hex to binary.
g1LagrangeBytes := make([]byte, len(trustedSetup.G1Lagrange)*(len(trustedSetup.G1Lagrange[0])-2)/2)
for i, g1 := range &trustedSetup.G1Lagrange {
copy(g1LagrangeBytes[i*(len(g1)-2)/2:], hexutil.MustDecode(g1))
}
// Length of a G2 point, converted from hex to binary.
g2MonomialBytes := make([]byte, len(trustedSetup.G2Monomial)*(len(trustedSetup.G2Monomial[0])-2)/2)
for i, g2 := range &trustedSetup.G2Monomial {
copy(g2MonomialBytes[i*(len(g2)-2)/2:], hexutil.MustDecode(g2))
}
if !kzgLoaded {
// TODO: Provide a configuration option for this.
var precompute uint = 0
// Free the current trusted setup before calling this: CKZG panics if
// the same setup is loaded multiple times (hence the kzgLoaded guard).
if err = CKZG.LoadTrustedSetup(g1MonomialBytes, g1LagrangeBytes, g2MonomialBytes, precompute); err != nil {
panic(err)
}
}
kzgLoaded = true
return nil
}
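The byte-slice sizing above relies on each setup point being a 0x-prefixed hex string, so a string of length 2+2n decodes to n bytes; a small illustrative sketch of that arithmetic:

// For a "0x"-prefixed hex string, hexutil.MustDecode returns (len(s)-2)/2
// bytes: a compressed G1 point is "0x" followed by 96 hex characters, which
// decodes to 48 bytes.
func decodedPointLen(hexPoint string) int {
	return (len(hexPoint) - 2) / 2
}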

File diff suppressed because it is too large.


@@ -0,0 +1,245 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
const (
FinalityBranchNumOfLeaves = 6
)
// CreateLightClientFinalityUpdate - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_finality_update
// def create_light_client_finality_update(update: LightClientUpdate) -> LightClientFinalityUpdate:
//
// return LightClientFinalityUpdate(
// attested_header=update.attested_header,
// finalized_header=update.finalized_header,
// finality_branch=update.finality_branch,
// sync_aggregate=update.sync_aggregate,
// signature_slot=update.signature_slot,
// )
func CreateLightClientFinalityUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientFinalityUpdate {
return &ethpbv2.LightClientFinalityUpdate{
AttestedHeader: update.AttestedHeader,
FinalizedHeader: update.FinalizedHeader,
FinalityBranch: update.FinalityBranch,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
}
// CreateLightClientOptimisticUpdate - implements https://github.com/ethereum/consensus-specs/blob/3d235740e5f1e641d3b160c8688f26e7dc5a1894/specs/altair/light-client/full-node.md#create_light_client_optimistic_update
// def create_light_client_optimistic_update(update: LightClientUpdate) -> LightClientOptimisticUpdate:
//
// return LightClientOptimisticUpdate(
// attested_header=update.attested_header,
// sync_aggregate=update.sync_aggregate,
// signature_slot=update.signature_slot,
// )
func CreateLightClientOptimisticUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientOptimisticUpdate {
return &ethpbv2.LightClientOptimisticUpdate{
AttestedHeader: update.AttestedHeader,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
}
func NewLightClientOptimisticUpdateFromBeaconState(
ctx context.Context,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState) (*ethpbv2.LightClientUpdate, error) {
// assert compute_epoch_at_slot(attested_state.slot) >= ALTAIR_FORK_EPOCH
attestedEpoch := slots.ToEpoch(attestedState.Slot())
if attestedEpoch < params.BeaconConfig().AltairForkEpoch {
return nil, fmt.Errorf("invalid attested epoch %d", attestedEpoch)
}
// assert sum(block.message.body.sync_aggregate.sync_committee_bits) >= MIN_SYNC_COMMITTEE_PARTICIPANTS
syncAggregate, err := block.Block().Body().SyncAggregate()
if err != nil {
return nil, fmt.Errorf("could not get sync aggregate %w", err)
}
if syncAggregate.SyncCommitteeBits.Count() < params.BeaconConfig().MinSyncCommitteeParticipants {
return nil, fmt.Errorf("invalid sync committee bits count %d", syncAggregate.SyncCommitteeBits.Count())
}
// assert state.slot == state.latest_block_header.slot
if state.Slot() != state.LatestBlockHeader().Slot {
return nil, fmt.Errorf("state slot %d not equal to latest block header slot %d", state.Slot(), state.LatestBlockHeader().Slot)
}
// assert hash_tree_root(header) == hash_tree_root(block.message)
header := state.LatestBlockHeader()
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get state root %w", err)
}
header.StateRoot = stateRoot[:]
headerRoot, err := header.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get header root %w", err)
}
blockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get block root %w", err)
}
if headerRoot != blockRoot {
return nil, fmt.Errorf("header root %#x not equal to block root %#x", headerRoot, blockRoot)
}
// assert attested_state.slot == attested_state.latest_block_header.slot
if attestedState.Slot() != attestedState.LatestBlockHeader().Slot {
return nil, fmt.Errorf("attested state slot %d not equal to attested latest block header slot %d", attestedState.Slot(), attestedState.LatestBlockHeader().Slot)
}
// attested_header = attested_state.latest_block_header.copy()
attestedHeader := attestedState.LatestBlockHeader()
// attested_header.state_root = hash_tree_root(attested_state)
attestedStateRoot, err := attestedState.HashTreeRoot(ctx)
if err != nil {
return nil, fmt.Errorf("could not get attested state root %w", err)
}
attestedHeader.StateRoot = attestedStateRoot[:]
// assert hash_tree_root(attested_header) == block.message.parent_root
attestedHeaderRoot, err := attestedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get attested header root %w", err)
}
if attestedHeaderRoot != block.Block().ParentRoot() {
return nil, fmt.Errorf("attested header root %#x not equal to block parent root %#x", attestedHeaderRoot, block.Block().ParentRoot())
}
// Return result
attestedHeaderResult := &ethpbv1.BeaconBlockHeader{
Slot: attestedHeader.Slot,
ProposerIndex: attestedHeader.ProposerIndex,
ParentRoot: attestedHeader.ParentRoot,
StateRoot: attestedHeader.StateRoot,
BodyRoot: attestedHeader.BodyRoot,
}
syncAggregateResult := &ethpbv1.SyncAggregate{
SyncCommitteeBits: syncAggregate.SyncCommitteeBits,
SyncCommitteeSignature: syncAggregate.SyncCommitteeSignature,
}
result := &ethpbv2.LightClientUpdate{
AttestedHeader: attestedHeaderResult,
SyncAggregate: syncAggregateResult,
SignatureSlot: block.Block().Slot(),
}
return result, nil
}
func NewLightClientFinalityUpdateFromBeaconState(
ctx context.Context,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientUpdate, error) {
result, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
state,
block,
attestedState,
)
if err != nil {
return nil, err
}
// Indicate finality whenever possible
var finalizedHeader *ethpbv1.BeaconBlockHeader
var finalityBranch [][]byte
if finalizedBlock != nil && !finalizedBlock.IsNil() {
if finalizedBlock.Block().Slot() != 0 {
tempFinalizedHeader, err := finalizedBlock.Header()
if err != nil {
return nil, fmt.Errorf("could not get finalized header %w", err)
}
finalizedHeader = migration.V1Alpha1SignedHeaderToV1(tempFinalizedHeader).GetMessage()
finalizedHeaderRoot, err := finalizedHeader.HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not get finalized header root %w", err)
}
if finalizedHeaderRoot != bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root) {
return nil, fmt.Errorf("finalized header root %#x not equal to attested finalized checkpoint root %#x", finalizedHeaderRoot, bytesutil.ToBytes32(attestedState.FinalizedCheckpoint().Root))
}
} else {
if !bytes.Equal(attestedState.FinalizedCheckpoint().Root, make([]byte, 32)) {
return nil, fmt.Errorf("invalid finalized header root %v", attestedState.FinalizedCheckpoint().Root)
}
finalizedHeader = &ethpbv1.BeaconBlockHeader{
Slot: 0,
ProposerIndex: 0,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
}
}
var bErr error
finalityBranch, bErr = attestedState.FinalizedRootProof(ctx)
if bErr != nil {
return nil, fmt.Errorf("could not get finalized root proof %w", bErr)
}
} else {
finalizedHeader = &ethpbv1.BeaconBlockHeader{
Slot: 0,
ProposerIndex: 0,
ParentRoot: make([]byte, 32),
StateRoot: make([]byte, 32),
BodyRoot: make([]byte, 32),
}
finalityBranch = make([][]byte, FinalityBranchNumOfLeaves)
for i := 0; i < FinalityBranchNumOfLeaves; i++ {
finalityBranch[i] = make([]byte, 32)
}
}
result.FinalizedHeader = finalizedHeader
result.FinalityBranch = finalityBranch
return result, nil
}
func NewLightClientUpdateFromFinalityUpdate(update *ethpbv2.LightClientFinalityUpdate) *ethpbv2.LightClientUpdate {
return &ethpbv2.LightClientUpdate{
AttestedHeader: update.AttestedHeader,
FinalizedHeader: update.FinalizedHeader,
FinalityBranch: update.FinalityBranch,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
}
func NewLightClientUpdateFromOptimisticUpdate(update *ethpbv2.LightClientOptimisticUpdate) *ethpbv2.LightClientUpdate {
return &ethpbv2.LightClientUpdate{
AttestedHeader: update.AttestedHeader,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
}
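Taken together, the constructors above compose roughly as follows. This is a sketch only; the caller is assumed to already hold the post-state, the signed block, the attested state and, optionally, the finalized block, as the service code later in this compare does.

// Hypothetical helper inside the blockchain package; illustrative only.
func buildFinalityUpdate(
	ctx context.Context,
	postState, attestedState state.BeaconState,
	signed, finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*ethpbv2.LightClientFinalityUpdate, error) {
	update, err := NewLightClientFinalityUpdateFromBeaconState(ctx, postState, signed, attestedState, finalizedBlock)
	if err != nil {
		return nil, err
	}
	// Trim the full LightClientUpdate down to the finality-update wire type.
	return CreateLightClientFinalityUpdate(update), nil
}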


@@ -0,0 +1,160 @@
package blockchain
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
type testlc struct {
t *testing.T
ctx context.Context
state state.BeaconState
block interfaces.ReadOnlySignedBeaconBlock
attestedState state.BeaconState
attestedHeader *ethpb.BeaconBlockHeader
}
func newTestLc(t *testing.T) *testlc {
return &testlc{t: t}
}
func (l *testlc) setupTest() *testlc {
ctx := context.Background()
slot := primitives.Slot(params.BeaconConfig().AltairForkEpoch * primitives.Epoch(params.BeaconConfig().SlotsPerEpoch)).Add(1)
attestedState, err := util.NewBeaconStateCapella()
require.NoError(l.t, err)
err = attestedState.SetSlot(slot)
require.NoError(l.t, err)
parent := util.NewBeaconBlockCapella()
parent.Block.Slot = slot
signedParent, err := blocks.NewSignedBeaconBlock(parent)
require.NoError(l.t, err)
parentHeader, err := signedParent.Header()
require.NoError(l.t, err)
attestedHeader := parentHeader.Header
err = attestedState.SetLatestBlockHeader(attestedHeader)
require.NoError(l.t, err)
attestedStateRoot, err := attestedState.HashTreeRoot(ctx)
require.NoError(l.t, err)
// get a new signed block so the root is updated with the new state root
parent.Block.StateRoot = attestedStateRoot[:]
signedParent, err = blocks.NewSignedBeaconBlock(parent)
require.NoError(l.t, err)
state, err := util.NewBeaconStateCapella()
require.NoError(l.t, err)
err = state.SetSlot(slot)
require.NoError(l.t, err)
parentRoot, err := signedParent.Block().HashTreeRoot()
require.NoError(l.t, err)
block := util.NewBeaconBlockCapella()
block.Block.Slot = slot
block.Block.ParentRoot = parentRoot[:]
for i := uint64(0); i < params.BeaconConfig().MinSyncCommitteeParticipants; i++ {
block.Block.Body.SyncAggregate.SyncCommitteeBits.SetBitAt(i, true)
}
signedBlock, err := blocks.NewSignedBeaconBlock(block)
require.NoError(l.t, err)
h, err := signedBlock.Header()
require.NoError(l.t, err)
err = state.SetLatestBlockHeader(h.Header)
require.NoError(l.t, err)
stateRoot, err := state.HashTreeRoot(ctx)
require.NoError(l.t, err)
// get a new signed block so the root is updated with the new state root
block.Block.StateRoot = stateRoot[:]
signedBlock, err = blocks.NewSignedBeaconBlock(block)
require.NoError(l.t, err)
l.state = state
l.attestedState = attestedState
l.attestedHeader = attestedHeader
l.block = signedBlock
l.ctx = ctx
return l
}
func (l *testlc) checkAttestedHeader(update *ethpbv2.LightClientUpdate) {
require.Equal(l.t, l.attestedHeader.Slot, update.AttestedHeader.Slot, "Attested header slot is not equal")
require.Equal(l.t, l.attestedHeader.ProposerIndex, update.AttestedHeader.ProposerIndex, "Attested header proposer index is not equal")
require.DeepSSZEqual(l.t, l.attestedHeader.ParentRoot, update.AttestedHeader.ParentRoot, "Attested header parent root is not equal")
require.DeepSSZEqual(l.t, l.attestedHeader.BodyRoot, update.AttestedHeader.BodyRoot, "Attested header body root is not equal")
attestedStateRoot, err := l.attestedState.HashTreeRoot(l.ctx)
require.NoError(l.t, err)
require.DeepSSZEqual(l.t, attestedStateRoot[:], update.AttestedHeader.StateRoot, "Attested header state root is not equal")
}
func (l *testlc) checkSyncAggregate(update *ethpbv2.LightClientUpdate) {
syncAggregate, err := l.block.Block().Body().SyncAggregate()
require.NoError(l.t, err)
require.DeepSSZEqual(l.t, syncAggregate.SyncCommitteeBits, update.SyncAggregate.SyncCommitteeBits, "SyncAggregate bits is not equal")
require.DeepSSZEqual(l.t, syncAggregate.SyncCommitteeSignature, update.SyncAggregate.SyncCommitteeSignature, "SyncAggregate signature is not equal")
}
func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T) {
l := newTestLc(t).setupTest()
update, err := NewLightClientOptimisticUpdateFromBeaconState(l.ctx, l.state, l.block, l.attestedState)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
l.checkSyncAggregate(update)
l.checkAttestedHeader(update)
require.Equal(t, (*v1.BeaconBlockHeader)(nil), update.FinalizedHeader, "Finalized header is not nil")
require.DeepSSZEqual(t, ([][]byte)(nil), update.FinalityBranch, "Finality branch is not nil")
}
func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
l := newTestLc(t).setupTest()
update, err := NewLightClientFinalityUpdateFromBeaconState(l.ctx, l.state, l.block, l.attestedState, nil)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
l.checkSyncAggregate(update)
l.checkAttestedHeader(update)
zeroHash := params.BeaconConfig().ZeroHash[:]
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
require.Equal(t, primitives.Slot(0), update.FinalizedHeader.Slot, "Finalized header slot is not zero")
require.Equal(t, primitives.ValidatorIndex(0), update.FinalizedHeader.ProposerIndex, "Finalized header proposer index is not zero")
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.ParentRoot, "Finalized header parent root is not zero")
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.StateRoot, "Finalized header state root is not zero")
require.DeepSSZEqual(t, zeroHash, update.FinalizedHeader.BodyRoot, "Finalized header body root is not zero")
require.Equal(t, FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
for _, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, zeroHash, leaf, "Leaf is not zero")
}
}


@@ -118,9 +118,9 @@ func WithBLSToExecPool(p blstoexec.PoolManager) Option {
}
// WithP2PBroadcaster to broadcast messages after appropriate processing.
func WithP2PBroadcaster(p p2p.Broadcaster) Option {
func WithP2PBroadcaster(p p2p.Acceser) Option {
return func(s *Service) error {
s.cfg.P2p = p
s.cfg.P2P = p
return nil
}
}


@@ -46,7 +46,7 @@ func (s *Service) validateMergeBlock(ctx context.Context, b interfaces.ReadOnlyS
if err != nil {
return err
}
if payload == nil || payload.IsNil() {
if payload.IsNil() {
return errors.New("nil execution payload")
}
ok, err := canUseValidatedTerminalBlockHash(b.Block().Slot(), payload)


@@ -7,10 +7,10 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// OnAttestation is called whenever an attestation is received, verifies the attestation is valid and saves


@@ -6,10 +6,14 @@ import (
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/peerdas"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -25,12 +29,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
// A custom slot deadline for processing state slots in our cache.
@@ -142,7 +144,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
b := blks[0].Block()
// Retrieve incoming block's pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
if err := s.verifyBlkPreState(ctx, b); err != nil {
return err
}
preState, err := s.cfg.StateGen.StateByRootInitialSync(ctx, b.ParentRoot())
@@ -499,7 +501,7 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
}
indices, err := bs.Indices(root)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "indices")
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
@@ -513,12 +515,39 @@ func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte
return missing, nil
}
func missingDataColumns(bs *filesystem.BlobStorage, root [32]byte, expected map[uint64]bool) (map[uint64]bool, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > int(params.BeaconConfig().NumberOfColumns) {
return nil, errMaxDataColumnsExceeded
}
indices, err := bs.ColumnIndices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]bool, len(expected))
for col := range expected {
if !indices[col] {
missing[col] = true
}
}
return missing, nil
}
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if coreTime.PeerDASIsActive(signed.Block().Slot()) {
return s.isDataColumnsAvailable(ctx, root, signed)
}
if signed.Version() < version.Deneb {
return nil
}
@@ -548,7 +577,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return err
return errors.Wrap(err, "missing indices")
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
@@ -567,8 +596,13 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": len(missing),
}).Error("Still waiting for blobs DA check at slot end.")
})
defer nst.Stop()
}
@@ -590,12 +624,130 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
func (s *Service) isDataColumnsAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
block := signed.Block()
if block == nil {
return errors.New("invalid nil beacon block")
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(block.Slot()), slots.ToEpoch(s.CurrentSlot())) {
return nil
}
body := block.Body()
if body == nil {
return errors.New("invalid nil beacon block body")
}
kzgCommitments, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "blob KZG commitments")
}
// If the block has no commitments, there is nothing to wait for.
if len(kzgCommitments) == 0 {
return nil
}
colMap, err := peerdas.CustodyColumns(s.cfg.P2P.NodeID(), peerdas.CustodySubnetCount())
if err != nil {
return errors.Wrap(err, "custody columns")
}
// expected is the number of custody data columns the node is expected to hold.
expected := len(colMap)
if expected == 0 {
return nil
}
// Subscribe to data columns newly stored in the database.
rootIndexChan := make(chan filesystem.RootIndexPair)
subscription := s.blobStorage.DataColumnFeed.Subscribe(rootIndexChan)
defer subscription.Unsubscribe()
// Get the count of data columns we already have in the store.
retrievedDataColumns, err := s.blobStorage.ColumnIndices(root)
if err != nil {
return errors.Wrap(err, "column indices")
}
retrievedDataColumnsCount := uint64(len(retrievedDataColumns))
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Get a map of data column indices that are not currently available.
missing, err := missingDataColumns(s.blobStorage, root, colMap)
if err != nil {
return err
}
// If there are no missing indices, all data column sidecars are available.
// This is the happy path.
if len(missing) == 0 {
return nil
}
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(logrus.Fields{
"slot": signed.Block().Slot(),
"root": fmt.Sprintf("%#x", root),
"columnsExpected": expected,
"columnsWaiting": len(missing),
}).Error("Still waiting for data columns DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case rootIndex := <-rootIndexChan:
if rootIndex.Root != root {
// This is not the root we are looking for.
continue
}
// This is a data column we are expecting.
if _, ok := missing[rootIndex.Index]; ok {
retrievedDataColumnsCount++
}
// As soon as we have more than half of the data columns, we can reconstruct the missing ones.
// We don't need to wait for the rest of the data columns to declare the block as available.
if peerdas.CanSelfReconstruct(retrievedDataColumnsCount) {
return nil
}
// Remove the index from the missing map.
delete(missing, rootIndex.Index)
// Exit if there are no more missing data columns.
if len(missing) == 0 {
return nil
}
case <-ctx.Done():
missingIndexes := make([]uint64, 0, len(missing))
for val := range missing {
missingIndexes = append(missingIndexes, val)
}
return errors.Wrapf(ctx.Err(), "context deadline waiting for data column sidecars slot: %d, BlockRoot: %#x, missing %v", block.Slot(), root, missingIndexes)
}
}
}
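peerdas.CanSelfReconstruct is not shown in this compare. Based on the comments above ("more than half of the data columns" suffices to reconstruct the rest), a plausible sketch of the threshold it encodes is given below; the name, signature and exact boundary are assumptions.

// Sketch only: the exact boundary (strictly more than half vs. at least half)
// is an assumption taken from the comments above.
func canSelfReconstructSketch(retrievedColumns, numberOfColumns uint64) bool {
	return 2*retrievedColumns > numberOfColumns
}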
@@ -678,7 +830,7 @@ func (s *Service) waitForSync() error {
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot [32]byte, parentRoot [32]byte) error {
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
}


@@ -5,8 +5,6 @@ import (
"fmt"
"time"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
@@ -16,17 +14,16 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// CurrentSlot returns the current slot based on time.
@@ -178,7 +175,7 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
}
}
update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
update, err := NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState,
signed,
@@ -193,7 +190,7 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
Data: CreateLightClientFinalityUpdate(update),
}
// Send event
@@ -213,7 +210,7 @@ func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed in
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
update, err := NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState,
signed,
@@ -227,7 +224,7 @@ func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed in
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
Data: CreateLightClientOptimisticUpdate(update),
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
@@ -288,7 +285,7 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
defer span.End()
// Verify incoming block has a valid pre state.
if err := s.verifyBlkPreState(ctx, b.ParentRoot()); err != nil {
if err := s.verifyBlkPreState(ctx, b); err != nil {
return nil, err
}
@@ -314,10 +311,11 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
}
// verifyBlkPreState validates input block has a valid pre-state.
func (s *Service) verifyBlkPreState(ctx context.Context, parentRoot [field_params.RootLength]byte) error {
func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.ReadOnlyBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "blockChain.verifyBlkPreState")
defer span.End()
parentRoot := b.ParentRoot()
// Loosen the check to HasBlock because state summary gets saved in batches
// during initial syncing. There's no risk given a state summary object is just a
// subset of the block object.


@@ -117,7 +117,7 @@ func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: 1, Root: root[:]}))
require.NoError(t, service.cfg.StateGen.SaveState(ctx, root, st))
require.NoError(t, service.verifyBlkPreState(ctx, wsb.Block().ParentRoot()))
require.NoError(t, service.verifyBlkPreState(ctx, wsb.Block()))
}
func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
@@ -2044,11 +2044,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
st, err = service.HeadState(ctx)
require.NoError(t, err)
defaultConfig := util.DefaultBlockGenConfig()
defaultConfig.NumWithdrawalRequests = 1
defaultConfig.NumDepositRequests = 2
defaultConfig.NumConsolidationRequests = 1
b, err := util.GenerateFullBlockElectra(st, keys, defaultConfig, 1)
b, err := util.GenerateFullBlockElectra(st, keys, util.DefaultBlockGenConfig(), 1)
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
@@ -2063,7 +2059,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err = util.GenerateFullBlockElectra(st, keys, defaultConfig, 2)
b, err = util.GenerateFullBlockElectra(st, keys, util.DefaultBlockGenConfig(), 2)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
@@ -2071,7 +2067,7 @@ func TestOnBlock_HandleBlockAttestations(t *testing.T) {
// prepare another block that is not inserted
st3, err := transition.ExecuteStateTransition(ctx, st, wsb)
require.NoError(t, err)
b3, err := util.GenerateFullBlockElectra(st3, keys, defaultConfig, 3)
b3, err := util.GenerateFullBlockElectra(st3, keys, util.DefaultBlockGenConfig(), 3)
require.NoError(t, err)
wsb3, err := consensusblocks.NewSignedBeaconBlock(b3)
require.NoError(t, err)


@@ -12,11 +12,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// reorgLateBlockCountAttestations is the time until the end of the slot in which we count
@@ -191,26 +190,13 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
}
if err := s.receiveAttestationNoPubsub(ctx, a, disparity); err != nil {
var fields logrus.Fields
if a.Version() >= version.Electra {
fields = logrus.Fields{
"slot": a.GetData().Slot,
"committeeCount": a.CommitteeBitsVal().Count(),
"committeeIndices": a.CommitteeBitsVal().BitIndices(),
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().Target.Root)),
"aggregatedCount": a.GetAggregationBits().Count(),
}
} else {
fields = logrus.Fields{
"slot": a.GetData().Slot,
"committeeIndex": a.GetData().CommitteeIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().Target.Root)),
"aggregatedCount": a.GetAggregationBits().Count(),
}
}
log.WithFields(fields).WithError(err).Warn("Could not process attestation for fork choice")
log.WithFields(logrus.Fields{
"slot": a.GetData().Slot,
"committeeIndex": a.GetData().CommitteeIndex,
"beaconBlockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().BeaconBlockRoot)),
"targetRoot": fmt.Sprintf("%#x", bytesutil.Trunc(a.GetData().Target.Root)),
"aggregationCount": a.GetAggregationBits().Count(),
}).WithError(err).Warn("Could not process attestation for fork choice")
}
}
}


@@ -21,12 +21,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
@@ -51,6 +51,12 @@ type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// DataColumnReceiver interface defines the methods of chain service for receiving new
// data columns
type DataColumnReceiver interface {
ReceiveDataColumn(blocks.VerifiedRODataColumn) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -77,20 +83,59 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err != nil {
return err
}
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return err
}
preState, err := s.getBlockPreState(ctx, blockCopy.Block())
if err != nil {
return errors.Wrap(err, "could not get block's prestate")
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.CurrentJustifiedCheckpt().Epoch
currStoreFinalizedEpoch := s.FinalizedCheckpt().Epoch
currentEpoch := coreTime.CurrentEpoch(preState)
currentCheckpoints := s.saveCurrentCheckpoints(preState)
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, blockCopy, blockRoot)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, blockCopy)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, blockCopy, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
if avs != nil {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, blockCopy); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -112,9 +157,29 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
tracing.AnnotateError(span, err)
return err
}
if err := s.updateCheckpoints(ctx, currentCheckpoints, preState, postState, blockRoot); err != nil {
return err
if coreTime.CurrentEpoch(postState) > currentEpoch && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, currStoreJustifiedEpoch); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, currStoreFinalizedEpoch)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
// hook to process all post state finalization tasks
s.executePostFinalizationTasks(ctx, postState)
}
// If slasher is configured, forward the attestations in the block via an event feed for processing.
if features.Get().EnableSlasher {
go s.sendBlockAttestationsToSlasher(blockCopy, preState)
@@ -134,140 +199,31 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := s.handleCaches(); err != nil {
return err
}
s.reportPostBlockProcessing(blockCopy, blockRoot, receivedTime, daWaitedTime)
return nil
}
type ffgCheckpoints struct {
j, f, c primitives.Epoch
}
func (s *Service) saveCurrentCheckpoints(state state.BeaconState) (cp ffgCheckpoints) {
// Save current justified and finalized epochs for future use.
cp.j = s.CurrentJustifiedCheckpt().Epoch
cp.f = s.FinalizedCheckpt().Epoch
cp.c = coreTime.CurrentEpoch(state)
return
}
func (s *Service) updateCheckpoints(
ctx context.Context,
cp ffgCheckpoints,
preState, postState state.BeaconState,
blockRoot [32]byte,
) error {
if coreTime.CurrentEpoch(postState) > cp.c && s.cfg.ForkChoiceStore.IsCanonical(blockRoot) {
headSt, err := s.HeadState(ctx)
if err != nil {
return errors.Wrap(err, "could not get head state")
}
if err := reportEpochMetrics(ctx, postState, headSt); err != nil {
log.WithError(err).Error("could not report epoch metrics")
}
}
if err := s.updateJustificationOnBlock(ctx, preState, postState, cp.j); err != nil {
return errors.Wrap(err, "could not update justified checkpoint")
}
newFinalized, err := s.updateFinalizationOnBlock(ctx, preState, postState, cp.f)
if err != nil {
return errors.Wrap(err, "could not update finalized checkpoint")
}
// Send finalized events and finalized deposits in the background
if newFinalized {
// hook to process all post state finalization tasks
s.executePostFinalizationTasks(ctx, postState)
}
return nil
}
func (s *Service) validateExecutionAndConsensus(
ctx context.Context,
preState state.BeaconState,
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
) (state.BeaconState, bool, error) {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
return nil, false, err
}
eg, _ := errgroup.WithContext(ctx)
var postState state.BeaconState
eg.Go(func() error {
var err error
postState, err = s.validateStateTransition(ctx, preState, block)
if err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, block, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, false, err
}
return postState, isValidPayload, nil
}
func (s *Service) handleDA(
ctx context.Context,
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
avs das.AvailabilityStore,
) (time.Duration, error) {
daStartTime := time.Now()
if avs != nil {
rob, err := blocks.NewROBlockWithRoot(block, blockRoot)
if err != nil {
return 0, err
}
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), rob); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability (AvailabilityStore.IsDataAvailable)")
}
} else {
if err := s.isDataAvailable(ctx, blockRoot, block); err != nil {
return 0, errors.Wrap(err, "could not validate blob data availability")
}
}
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
return daWaitedTime, nil
}
func (s *Service) reportPostBlockProcessing(
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
receivedTime time.Time,
daWaitedTime time.Duration,
) {
// Reports on block and fork choice metrics.
cp := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
finalized := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
reportSlotMetrics(block.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
cp = s.cfg.ForkChoiceStore.JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(block.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix()), daWaitedTime); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
// Log payload data
if err := logPayload(block.Block()); err != nil {
log.WithError(err).Error("Unable to log debug block payload data")
}
// Log state transition data.
if err := logStateTransitionData(block.Block()); err != nil {
log.WithError(err).Error("Unable to log state transition data")
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
chainServiceProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
}
func (s *Service) executePostFinalizationTasks(ctx context.Context, finalizedState state.BeaconState) {

View File

@@ -0,0 +1,14 @@
package blockchain
import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
)
func (s *Service) ReceiveDataColumn(ds blocks.VerifiedRODataColumn) error {
if err := s.blobStorage.SaveDataColumn(ds); err != nil {
return errors.Wrap(err, "save data column")
}
return nil
}

View File

@@ -39,10 +39,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// Service represents a service that handles the internal
@@ -81,7 +81,7 @@ type config struct {
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
P2p p2p.Broadcaster
P2P p2p.Accesser
MaxRoutines int
StateNotifier statefeed.Notifier
ForkChoiceStore f.ForkChoicer
@@ -106,15 +106,17 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
seenIndex map[[32]byte][fieldparams.NumberOfColumns]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
// TODO: Separate Data Columns from blobs
/*
if idx >= fieldparams.MaxBlobsPerBlock {
return
}*/
bn.Lock()
seen := bn.seenIndex[root]
@@ -128,7 +130,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
@@ -142,7 +144,7 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
c = make(chan uint64, fieldparams.NumberOfColumns)
bn.notifiers[root] = c
}
return c
@@ -168,7 +170,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
}
srv := &Service{
ctx: ctx,

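For context, the notifier map above hands every block root its own buffered channel (now sized to fieldparams.NumberOfColumns), and notifyIndex pushes each newly seen sidecar index onto it. A minimal sketch of the consuming side, assuming a *blobNotifierMap bn and a 32-byte root (both illustrative, not taken from this diff):
ch := bn.forRoot(root)
select {
case idx := <-ch:
    // A blob or data column sidecar with this index was seen for the root.
    _ = idx
case <-time.After(time.Second):
    // No new index arrived within this (arbitrary) timeout.
}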
View File

@@ -97,7 +97,7 @@ func setupBeaconChain(t *testing.T, beaconDB db.Database) *Service {
WithAttestationPool(attestations.NewPool()),
WithSlashingPool(slashings.NewPool()),
WithExitPool(voluntaryexits.NewPool()),
WithP2PBroadcaster(&mockBroadcaster{}),
WithP2PBroadcaster(&mockAccesser{}),
WithStateNotifier(&mockBeaconNode{}),
WithForkChoiceStore(fc),
WithAttestationService(attService),
@@ -579,7 +579,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
seenIndex: make(map[[32]byte][fieldparams.NumberOfColumns]bool),
notifiers: make(map[[32]byte]chan uint64),
}

View File

@@ -19,6 +19,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2pTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -45,6 +46,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
type mockAccesser struct {
mockBroadcaster
p2pTesting.MockPeerManager
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil
@@ -65,6 +71,11 @@ func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.B
return nil
}
func (mb *mockBroadcaster) BroadcastDataColumn(_ context.Context, _ uint64, _ *ethpb.DataColumnSidecar) error {
mb.broadcastCalled = true
return nil
}
func (mb *mockBroadcaster) BroadcastBLSChanges(_ context.Context, _ []*ethpb.SignedBLSToExecutionChange) {
}

View File

@@ -628,6 +628,11 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
return nil
}
// ReceiveDataColumn implements the same method in chain service
func (*ChainService) ReceiveDataColumn(_ blocks.VerifiedRODataColumn) error {
return nil
}
// TargetRootForEpoch mocks the same method in the chain service
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -19,7 +19,6 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -27,6 +26,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -14,10 +14,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ErrNoBuilder is used when builder endpoint is not configured.

View File

@@ -8,6 +8,7 @@ go_library(
"attestation_data.go",
"balance_cache_key.go",
"checkpoint_state.go",
"column_subnet_ids.go",
"committee.go",
"committee_disabled.go", # keep
"committees.go",
@@ -46,7 +47,6 @@ go_library(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
@@ -57,6 +57,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_k8s_client_go//tools/cache:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

beacon-chain/cache/column_subnet_ids.go (new file, 70 lines)
View File

@@ -0,0 +1,70 @@
package cache
import (
"sync"
"time"
"github.com/patrickmn/go-cache"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
type columnSubnetIDs struct {
colSubCache *cache.Cache
colSubLock sync.RWMutex
}
// ColumnSubnetIDs for column subnet participants
var ColumnSubnetIDs = newColumnSubnetIDs()
const columnKey = "columns"
func newColumnSubnetIDs() *columnSubnetIDs {
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
epochDuration := time.Duration(slotsPerEpoch.Mul(secondsPerSlot))
// Set the default duration of a column subnet subscription as the column expiry period.
minEpochsForDataColumnSidecarsRequest := time.Duration(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
subLength := epochDuration * minEpochsForDataColumnSidecarsRequest
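// Note: epochDuration and subLength above hold whole-second counts rather than
// real durations, so they are scaled by time.Second below; cache.New takes a
// default expiration followed by a cleanup interval.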
persistentCache := cache.New(subLength*time.Second, epochDuration*time.Second)
return &columnSubnetIDs{colSubCache: persistentCache}
}
// GetColumnSubnets retrieves the data column subnets.
func (s *columnSubnetIDs) GetColumnSubnets() ([]uint64, bool, time.Time) {
s.colSubLock.RLock()
defer s.colSubLock.RUnlock()
id, duration, ok := s.colSubCache.GetWithExpiration(columnKey)
if !ok {
return nil, false, time.Time{}
}
// Retrieve indices from the cache.
idxs, ok := id.([]uint64)
if !ok {
return nil, false, time.Time{}
}
return idxs, ok, duration
}
// AddColumnSubnets adds the relevant data column subnets.
func (s *columnSubnetIDs) AddColumnSubnets(colIdx []uint64) {
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Set(columnKey, colIdx, 0)
}
// EmptyAllCaches empties out all the related caches and flushes any stored
// entries on them. This should only ever be used for testing; in normal
// production, handling of the relevant subnets for each role is done
// separately.
func (s *columnSubnetIDs) EmptyAllCaches() {
// Clear the cache.
s.colSubLock.Lock()
defer s.colSubLock.Unlock()
s.colSubCache.Flush()
}
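Since the whole cache sits behind a single key, callers only ever store and read one slice of subnet indices at a time. A minimal usage sketch, assuming the package-level cache.ColumnSubnetIDs defined above (the subnet values are made up for illustration):
// Record the column subnets this node currently participates in.
cache.ColumnSubnetIDs.AddColumnSubnets([]uint64{3, 17, 42})
// Later, read them back along with the entry's expiration time.
subnets, ok, expiresAt := cache.ColumnSubnetIDs.GetColumnSubnets()
if !ok {
    // Nothing cached (or the entry expired): recompute and re-add the subnets.
}
_ = subnets
_ = expiresAt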

View File

@@ -17,6 +17,6 @@ func trim(queue *cache.FIFO, maxSize uint64) {
}
// popProcessNoopFunc is a no-op function that never returns an error.
func popProcessNoopFunc(_ interface{}, _ bool) error {
func popProcessNoopFunc(_ interface{}) error {
return nil
}

View File

@@ -19,7 +19,6 @@ go_library(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -27,6 +26,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_wealdtech_go_bytesutil//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -10,10 +10,10 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
"github.com/wealdtech/go-bytesutil"
"go.opencensus.io/trace"
)
var (
@@ -241,7 +241,7 @@ func (c *Cache) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, bloc
c.pendingDeposits = append(c.pendingDeposits,
&ethpb.DepositContainer{Deposit: d, Eth1BlockHeight: blockNum, Index: index, DepositRoot: depositRoot[:]})
pendingDepositsCount.Set(float64(len(c.pendingDeposits)))
span.SetAttributes(trace.Int64Attribute("count", int64(len(c.pendingDeposits))))
span.AddAttributes(trace.Int64Attribute("count", int64(len(c.pendingDeposits))))
}
// Deposits returns the cached internal deposit tree.
@@ -304,7 +304,7 @@ func (c *Cache) PendingContainers(ctx context.Context, untilBlk *big.Int) []*eth
return depositCntrs[i].Index < depositCntrs[j].Index
})
span.SetAttributes(trace.Int64Attribute("count", int64(len(depositCntrs))))
span.AddAttributes(trace.Int64Attribute("count", int64(len(depositCntrs))))
return depositCntrs
}

View File

@@ -10,9 +10,9 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
var (

View File

@@ -7,8 +7,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
)
// RegistrationCache is used to store the cached results of a Validator Registration request.

View File

@@ -11,7 +11,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
lruwrpr "github.com/prysmaticlabs/prysm/v5/cache/lru"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"go.opencensus.io/trace"
)
var (
@@ -92,17 +92,17 @@ func (c *SkipSlotCache) Get(ctx context.Context, r [32]byte) (state.BeaconState,
delay *= delayFactor
delay = math.Min(delay, maxDelay)
}
span.SetAttributes(trace.BoolAttribute("inProgress", inProgress))
span.AddAttributes(trace.BoolAttribute("inProgress", inProgress))
item, exists := c.cache.Get(r)
if exists && item != nil {
skipSlotCacheHit.Inc()
span.SetAttributes(trace.BoolAttribute("hit", true))
span.AddAttributes(trace.BoolAttribute("hit", true))
return item.(state.BeaconState).Copy(), nil
}
skipSlotCacheMiss.Inc()
span.SetAttributes(trace.BoolAttribute("hit", false))
span.AddAttributes(trace.BoolAttribute("hit", false))
return nil, nil
}

View File

@@ -34,13 +34,13 @@ go_library(
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -14,9 +14,9 @@ import (
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"go.opencensus.io/trace"
)
// ProcessAttestationsNoVerifySignature applies processing operations to a block's inner attestation

View File

@@ -10,7 +10,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"go.opencensus.io/trace"
)
// AttDelta contains rewards and penalties for a single attestation.

View File

@@ -7,7 +7,7 @@ import (
e "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epoch/precompute"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"go.opencensus.io/trace"
)
// ProcessEpoch describes the per epoch operations that are performed on the beacon state.

View File

@@ -40,7 +40,6 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//encoding/ssz:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -50,6 +49,7 @@ go_library(
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -14,10 +14,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"go.opencensus.io/trace"
)
// ProcessAttestationsNoVerifySignature applies processing operations to a block's inner attestation
@@ -110,7 +110,25 @@ func VerifyAttestationNoVerifySignature(
var indexedAtt ethpb.IndexedAtt
if att.Version() >= version.Electra {
if att.Version() < version.Electra {
if uint64(att.GetData().CommitteeIndex) >= c {
return fmt.Errorf("committee index %d >= committee count %d", att.GetData().CommitteeIndex, c)
}
if err = helpers.VerifyAttestationBitfieldLengths(ctx, beaconState, att); err != nil {
return errors.Wrap(err, "could not verify attestation bitfields")
}
// Verify attesting indices are correct.
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.GetData().Slot, att.GetData().CommitteeIndex)
if err != nil {
return err
}
indexedAtt, err = attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
return err
}
} else {
if att.GetData().CommitteeIndex != 0 {
return errors.New("committee index must be 0 post-Electra")
}
@@ -136,29 +154,6 @@ func VerifyAttestationNoVerifySignature(
if err != nil {
return err
}
} else {
if uint64(att.GetData().CommitteeIndex) >= c {
return fmt.Errorf("committee index %d >= committee count %d", att.GetData().CommitteeIndex, c)
}
// Verify attesting indices are correct.
committee, err := helpers.BeaconCommitteeFromState(ctx, beaconState, att.GetData().Slot, att.GetData().CommitteeIndex)
if err != nil {
return err
}
if committee == nil {
return errors.New("no committee exist for this attestation")
}
if err := helpers.VerifyBitfieldLength(att.GetAggregationBits(), uint64(len(committee))); err != nil {
return errors.Wrap(err, "failed to verify aggregation bitfield")
}
indexedAtt, err = attestation.ConvertToIndexed(ctx, att, committee)
if err != nil {
return err
}
}
return attestation.IsValidAttestationIndices(ctx, indexedAtt)

View File

@@ -1,5 +1,3 @@
package blocks
var ProcessBLSToExecutionChange = processBLSToExecutionChange
var VerifyBlobCommitmentCount = verifyBlobCommitmentCount

View File

@@ -213,11 +213,6 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
BlobKzgCommitments: make([][]byte, 0),
ExecutionRequests: &enginev1.ExecutionRequests{
Withdrawals: make([]*enginev1.WithdrawalRequest, 0),
Deposits: make([]*enginev1.DepositRequest, 0),
Consolidations: make([]*enginev1.ConsolidationRequest, 0),
},
},
},
Signature: params.BeaconConfig().EmptySignature[:],

View File

@@ -2,13 +2,11 @@ package blocks
import (
"bytes"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -202,13 +200,13 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
// block_hash=payload.block_hash,
// transactions_root=hash_tree_root(payload.transactions),
// )
func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (state.BeaconState, error) {
payload, err := body.Execution()
if err != nil {
return nil, err
}
if err := verifyBlobCommitmentCount(body); err != nil {
return nil, err
func ProcessPayload(st state.BeaconState, payload interfaces.ExecutionData) (state.BeaconState, error) {
var err error
if st.Version() >= version.Capella {
st, err = ProcessWithdrawals(st, payload)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {
return nil, err
@@ -222,20 +220,70 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
return st, nil
}
func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
if body.Version() < version.Deneb {
return nil
}
kzgs, err := body.BlobKzgCommitments()
// ValidatePayloadHeaderWhenMergeCompletes validates the payload header when the merge completes.
func ValidatePayloadHeaderWhenMergeCompletes(st state.BeaconState, header interfaces.ExecutionData) error {
// Skip validation if the state is not merge compatible.
complete, err := IsMergeTransitionComplete(st)
if err != nil {
return err
}
if len(kzgs) > field_params.MaxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
if !complete {
return nil
}
// Validate current header's parent hash matches state header's block hash.
h, err := st.LatestExecutionPayloadHeader()
if err != nil {
return err
}
if !bytes.Equal(header.ParentHash(), h.BlockHash()) {
return ErrInvalidPayloadBlockHash
}
return nil
}
// ValidatePayloadHeader validates the payload header.
func ValidatePayloadHeader(st state.BeaconState, header interfaces.ExecutionData) error {
// Validate header's random mix matches with state in current epoch
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
return err
}
if !bytes.Equal(header.PrevRandao(), random) {
return ErrInvalidPayloadPrevRandao
}
// Validate header's timestamp matches with state in current slot.
t, err := slots.ToTime(st.GenesisTime(), st.Slot())
if err != nil {
return err
}
if header.Timestamp() != uint64(t.Unix()) {
return ErrInvalidPayloadTimeStamp
}
return nil
}
// ProcessPayloadHeader processes the payload header.
func ProcessPayloadHeader(st state.BeaconState, header interfaces.ExecutionData) (state.BeaconState, error) {
var err error
if st.Version() >= version.Capella {
st, err = ProcessWithdrawals(st, header)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err := ValidatePayloadHeaderWhenMergeCompletes(st, header); err != nil {
return nil, err
}
if err := ValidatePayloadHeader(st, header); err != nil {
return nil, err
}
if err := st.SetLatestExecutionPayloadHeader(header); err != nil {
return nil, err
}
return st, nil
}
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte

View File

@@ -1,7 +1,6 @@
package blocks_test
import (
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -14,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -583,18 +581,14 @@ func Test_ProcessPayload(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: tt.payload,
})
wrappedPayload, err := consensusblocks.WrappedExecutionPayload(tt.payload)
require.NoError(t, err)
st, err := blocks.ProcessPayload(st, body)
st, err := blocks.ProcessPayload(st, wrappedPayload)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
payload, err := body.Execution()
require.NoError(t, err)
want, err := consensusblocks.PayloadToHeader(payload)
want, err := consensusblocks.PayloadToHeader(wrappedPayload)
require.Equal(t, tt.err, err)
h, err := st.LatestExecutionPayloadHeader()
require.NoError(t, err)
@@ -615,15 +609,13 @@ func Test_ProcessPayloadCapella(t *testing.T) {
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
payload.PrevRandao = random
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BeaconBlockBodyCapella{
ExecutionPayload: payload,
})
wrapped, err := consensusblocks.WrappedExecutionPayloadCapella(payload)
require.NoError(t, err)
_, err = blocks.ProcessPayload(st, body)
_, err = blocks.ProcessPayload(st, wrapped)
require.NoError(t, err)
}
func Test_ProcessPayload_Blinded(t *testing.T) {
func Test_ProcessPayloadHeader(t *testing.T) {
st, _ := util.DeterministicGenesisStateBellatrix(t, 1)
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
require.NoError(t, err)
@@ -671,13 +663,7 @@ func Test_ProcessPayload_Blinded(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p, ok := tt.header.Proto().(*enginev1.ExecutionPayloadHeader)
require.Equal(t, true, ok)
body, err := consensusblocks.NewBeaconBlockBody(&ethpb.BlindedBeaconBlockBodyBellatrix{
ExecutionPayloadHeader: p,
})
require.NoError(t, err)
st, err := blocks.ProcessPayload(st, body)
st, err := blocks.ProcessPayloadHeader(st, tt.header)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
@@ -742,7 +728,7 @@ func Test_ValidatePayloadHeader(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err = blocks.ValidatePayload(st, tt.header)
err = blocks.ValidatePayloadHeader(st, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -799,7 +785,7 @@ func Test_ValidatePayloadHeaderWhenMergeCompletes(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err = blocks.ValidatePayloadWhenMergeCompletes(tt.state, tt.header)
err = blocks.ValidatePayloadHeaderWhenMergeCompletes(tt.state, tt.header)
require.Equal(t, tt.err, err)
})
}
@@ -920,15 +906,3 @@ func emptyPayloadCapella() *enginev1.ExecutionPayloadCapella {
Withdrawals: make([]*enginev1.Withdrawal, 0),
}
}
func TestVerifyBlobCommitmentCount(t *testing.T) {
b := &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}
rb, err := consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Body()))
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, fieldparams.MaxBlobsPerBlock+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", fieldparams.MaxBlobsPerBlock+1), blocks.VerifyBlobCommitmentCount(rb.Body()))
}

View File

@@ -96,6 +96,24 @@ func VerifyBlockHeaderSignature(beaconState state.BeaconState, header *ethpb.Sig
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
func VerifyBlockHeaderSignatureUsingCurrentFork(beaconState state.BeaconState, header *ethpb.SignedBeaconBlockHeader) error {
currentEpoch := slots.ToEpoch(header.Header.Slot)
fork, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(fork, currentEpoch, params.BeaconConfig().DomainBeaconProposer, beaconState.GenesisValidatorsRoot())
if err != nil {
return err
}
proposer, err := beaconState.ValidatorAtIndex(header.Header.ProposerIndex)
if err != nil {
return err
}
proposerPubKey := proposer.PublicKey
return signing.VerifyBlockHeaderSigningRoot(header.Header, proposerPubKey, header.Signature, domain)
}
// VerifyBlockSignatureUsingCurrentFork verifies the proposer signature of a beacon block. This differs
// from the above method by not using fork data from the state and instead retrieving it
// via the respective epoch.

View File

@@ -34,13 +34,13 @@ go_library(
"//contracts/deposit:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)

View File

@@ -11,10 +11,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"go.opencensus.io/trace"
)
// ProcessPendingConsolidations implements the spec definition below. This method makes mutating
@@ -49,7 +49,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
return errors.New("nil state")
}
nextEpoch := slots.ToEpoch(st.Slot()) + 1
currentEpoch := slots.ToEpoch(st.Slot())
var nextPendingConsolidation uint64
pendingConsolidations, err := st.PendingConsolidations()
@@ -66,7 +66,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
nextPendingConsolidation++
continue
}
if sourceValidator.WithdrawableEpoch > nextEpoch {
if sourceValidator.WithdrawableEpoch > currentEpoch {
break
}

View File

@@ -13,12 +13,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/contracts/deposit"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
// ProcessDeposits is one of the operations performed on each processed
@@ -248,7 +248,7 @@ func ProcessPendingBalanceDeposits(ctx context.Context, st state.BeaconState, ac
// constants
ffe := params.BeaconConfig().FarFutureEpoch
nextEpoch := slots.ToEpoch(st.Slot()) + 1
curEpoch := slots.ToEpoch(st.Slot())
for _, balanceDeposit := range deposits {
v, err := st.ValidatorAtIndexReadOnly(balanceDeposit.Index)
@@ -259,7 +259,7 @@ func ProcessPendingBalanceDeposits(ctx context.Context, st state.BeaconState, ac
// If the validator is currently exiting, postpone the deposit until after the withdrawable
// epoch.
if v.ExitEpoch() < ffe {
if nextEpoch <= v.WithdrawableEpoch() {
if curEpoch <= v.WithdrawableEpoch() {
depositsToPostpone = append(depositsToPostpone, balanceDeposit)
} else {
// The deposited balance will never become active. Therefore, we increase the balance but do

View File

@@ -30,21 +30,21 @@ import (
// or validator.effective_balance + UPWARD_THRESHOLD < balance
// ):
// validator.effective_balance = min(balance - balance % EFFECTIVE_BALANCE_INCREMENT, EFFECTIVE_BALANCE_LIMIT)
func ProcessEffectiveBalanceUpdates(st state.BeaconState) error {
func ProcessEffectiveBalanceUpdates(state state.BeaconState) error {
effBalanceInc := params.BeaconConfig().EffectiveBalanceIncrement
hysteresisInc := effBalanceInc / params.BeaconConfig().HysteresisQuotient
downwardThreshold := hysteresisInc * params.BeaconConfig().HysteresisDownwardMultiplier
upwardThreshold := hysteresisInc * params.BeaconConfig().HysteresisUpwardMultiplier
bals := st.Balances()
bals := state.Balances()
// Update effective balances with hysteresis.
validatorFunc := func(idx int, val state.ReadOnlyValidator) (newVal *ethpb.Validator, err error) {
validatorFunc := func(idx int, val *ethpb.Validator) (bool, *ethpb.Validator, error) {
if val == nil {
return nil, fmt.Errorf("validator %d is nil in state", idx)
return false, nil, fmt.Errorf("validator %d is nil in state", idx)
}
if idx >= len(bals) {
return nil, fmt.Errorf("validator index exceeds validator length in state %d >= %d", idx, len(st.Balances()))
return false, nil, fmt.Errorf("validator index exceeds validator length in state %d >= %d", idx, len(state.Balances()))
}
balance := bals[idx]
@@ -53,13 +53,13 @@ func ProcessEffectiveBalanceUpdates(st state.BeaconState) error {
effectiveBalanceLimit = params.BeaconConfig().MaxEffectiveBalanceElectra
}
if balance+downwardThreshold < val.EffectiveBalance() || val.EffectiveBalance()+upwardThreshold < balance {
if balance+downwardThreshold < val.EffectiveBalance || val.EffectiveBalance+upwardThreshold < balance {
effectiveBal := min(balance-balance%effBalanceInc, effectiveBalanceLimit)
newVal = val.Copy()
newVal.EffectiveBalance = effectiveBal
val.EffectiveBalance = effectiveBal
return true, val, nil
}
return newVal, nil
return false, val, nil
}
return st.ApplyToEveryValidator(validatorFunc)
return state.ApplyToEveryValidator(validatorFunc)
}

View File

@@ -11,29 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestProcessEffectiveBalanceUpdates_SafeCopy(t *testing.T) {
pb := &eth.BeaconStateElectra{
Validators: []*eth.Validator{
{
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
WithdrawalCredentials: []byte{params.BeaconConfig().CompoundingWithdrawalPrefixByte, 0x11},
},
},
Balances: []uint64{
params.BeaconConfig().MaxEffectiveBalanceElectra * 2,
},
}
st, err := state_native.InitializeFromProtoElectra(pb)
require.NoError(t, err)
copiedState := st.Copy()
err = electra.ProcessEffectiveBalanceUpdates(copiedState)
require.NoError(t, err)
require.Equal(t, st.Validators()[0].EffectiveBalance, params.BeaconConfig().MinActivationBalance)
require.Equal(t, copiedState.Validators()[0].EffectiveBalance, params.BeaconConfig().MaxEffectiveBalanceElectra)
}
func TestProcessEffectiveBalnceUpdates(t *testing.T) {
effBalanceInc := params.BeaconConfig().EffectiveBalanceIncrement
hysteresisInc := effBalanceInc / params.BeaconConfig().HysteresisQuotient

View File

@@ -12,7 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"go.opencensus.io/trace"
)
// Re-exports for methods that haven't changed in Electra.

View File

@@ -22,31 +22,29 @@ var (
//
// Spec definition:
//
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
// assert len(body.deposits) == 0
// def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
// # [Modified in Electra:EIP6110]
// # Disable former deposit mechanism once all prior deposits are processed
// eth1_deposit_index_limit = min(state.eth1_data.deposit_count, state.deposit_requests_start_index)
// if state.eth1_deposit_index < eth1_deposit_index_limit:
// assert len(body.deposits) == min(MAX_DEPOSITS, eth1_deposit_index_limit - state.eth1_deposit_index)
// else:
// assert len(body.deposits) == 0
//
// def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
// for operation in operations:
// fn(state, operation)
// def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
// for operation in operations:
// fn(state, operation)
//
// for_ops(body.proposer_slashings, process_proposer_slashing)
// for_ops(body.attester_slashings, process_attester_slashing)
// for_ops(body.attestations, process_attestation) # [Modified in Electra:EIP7549]
// for_ops(body.deposits, process_deposit) # [Modified in Electra:EIP7251]
// for_ops(body.voluntary_exits, process_voluntary_exit) # [Modified in Electra:EIP7251]
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// for_ops(body.execution_payload.deposit_requests, process_deposit_request) # [New in Electra:EIP6110]
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_withdrawal_request)
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
// for_ops(body.proposer_slashings, process_proposer_slashing)
// for_ops(body.attester_slashings, process_attester_slashing)
// for_ops(body.attestations, process_attestation) # [Modified in Electra:EIP7549]
// for_ops(body.deposits, process_deposit) # [Modified in Electra:EIP7251]
// for_ops(body.voluntary_exits, process_voluntary_exit) # [Modified in Electra:EIP7251]
// for_ops(body.bls_to_execution_changes, process_bls_to_execution_change)
// # [New in Electra:EIP7002:EIP7251]
// for_ops(body.execution_payload.withdrawal_requests, process_execution_layer_withdrawal_request)
// for_ops(body.execution_payload.deposit_requests, process_deposit_requests) # [New in Electra:EIP6110]
// for_ops(body.consolidations, process_consolidation) # [New in Electra:EIP7251]
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
@@ -78,19 +76,25 @@ func ProcessOperations(
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
}
// new in electra
requests, err := bb.ExecutionRequests()
e, err := bb.Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
return nil, errors.Wrap(err, "could not get execution data from block")
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
exe, ok := e.(interfaces.ExecutionDataElectra)
if !ok {
return nil, errors.New("could not cast execution data to electra execution data")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
st, err = ProcessWithdrawalRequests(ctx, st, exe.WithdrawalRequests())
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
st, err = ProcessDepositRequests(ctx, st, exe.DepositRequests())
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
}
if err := ProcessConsolidationRequests(ctx, st, exe.ConsolidationRequests()); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)
}
return st, nil

View File

@@ -1,15 +1,11 @@
package electra_test
import (
"context"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
@@ -51,68 +47,3 @@ func TestVerifyOperationLengths_Electra(t *testing.T) {
require.ErrorContains(t, "incorrect outstanding deposits in block body", err)
})
}
func TestProcessEpoch_CanProcessElectra(t *testing.T) {
st, _ := util.DeterministicGenesisStateElectra(t, params.BeaconConfig().MaxValidatorsPerCommittee)
require.NoError(t, st.SetSlot(10*params.BeaconConfig().SlotsPerEpoch))
require.NoError(t, st.SetDepositBalanceToConsume(100))
amountAvailForProcessing := helpers.ActivationExitChurnLimit(1_000 * 1e9)
deps := make([]*ethpb.PendingBalanceDeposit, 20)
for i := 0; i < len(deps); i += 1 {
deps[i] = &ethpb.PendingBalanceDeposit{
Amount: uint64(amountAvailForProcessing) / 10,
Index: primitives.ValidatorIndex(i),
}
}
require.NoError(t, st.SetPendingBalanceDeposits(deps))
require.NoError(t, st.SetPendingConsolidations([]*ethpb.PendingConsolidation{
{
SourceIndex: 2,
TargetIndex: 3,
},
{
SourceIndex: 0,
TargetIndex: 1,
},
}))
err := electra.ProcessEpoch(context.Background(), st)
require.NoError(t, err)
require.Equal(t, uint64(0), st.Slashings()[2], "Unexpected slashed balance")
b := st.Balances()
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(b)))
require.Equal(t, uint64(44799839993), b[0])
s, err := st.InactivityScores()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(s)))
p, err := st.PreviousEpochParticipation()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(p)))
p, err = st.CurrentEpochParticipation()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().MaxValidatorsPerCommittee, uint64(len(p)))
sc, err := st.CurrentSyncCommittee()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(len(sc.Pubkeys)))
sc, err = st.NextSyncCommittee()
require.NoError(t, err)
require.Equal(t, params.BeaconConfig().SyncCommitteeSize, uint64(len(sc.Pubkeys)))
res, err := st.DepositBalanceToConsume()
require.NoError(t, err)
require.Equal(t, primitives.Gwei(100), res)
// Half of the balance deposits should have been processed.
remaining, err := st.PendingBalanceDeposits()
require.NoError(t, err)
require.Equal(t, 10, len(remaining))
num, err := st.NumPendingConsolidations()
require.NoError(t, err)
require.Equal(t, uint64(2), num)
}

Some files were not shown because too many files have changed in this diff.