Compare commits

...

334 Commits

Author SHA1 Message Date
kasey
61c7e57547 fixing happy path BlobsByRange RPC tests
NOTE!!! This commit is missing a critical fix, which I am leaving out
until it's merged into eip4844 branch, to avoid additional merge mess
(there will be plenty as it is). This is the change:
+++ b/consensus-types/blocks/getters.go
@@ -316,6 +316,7 @@ func (b *SignedBeaconBlock) ToBlinded() (interfaces.ReadOnlySignedBeaconBlock, e
                                                SyncAggregate:          b.block.body.syncAggregate,
                                                ExecutionPayloadHeader: header,
                                                BlsToExecutionChanges:  b.block.body.blsToExecutionChanges,
+                                               BlobKzgCommitments:     b.block.body.blobKzgCommitments,
2023-03-20 16:15:37 -05:00
kasey
a79a3f5d2e BlobsByRange validation tests 2023-03-17 15:08:03 -05:00
kasey
334383355c restoring blobs-by-range code that got stomped 2023-03-17 10:12:38 -05:00
kasey
461d762f54 fixing blob test to use right topic 2023-03-17 00:32:14 -05:00
kasey
5819c2e086 refactor ByRoot test setup to share w/ ByRange 2023-03-16 21:17:58 -05:00
kasey
26604b4c5b getting things building 2023-03-16 19:55:18 -05:00
kasey
a3129daa03 range requests to use correct type 2023-03-16 19:50:29 -05:00
kasey
14cc00720e clean up validation logic 2023-03-16 14:13:17 -05:00
kasey
46da6208d1 untested BlobsByRange impl 2023-03-16 14:13:09 -05:00
kasey
6401755efd refactor BlocksByRange to share w/ blobs 2023-03-16 14:13:00 -05:00
kasey
4541baafe3 refactoring BlockByRange to use in BlobByRange 2023-03-16 14:12:30 -05:00
terencechain
78175ee0fd Update and clean up validators for devnet5 (#12131) 2023-03-15 11:54:57 -07:00
terence tsao
a34cc7f7bf Update geth 2023-03-15 10:51:52 -07:00
terence tsao
6f22cd7963 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-14 14:34:45 -07:00
terencechain
5c234c8c68 Mark GetChainHead deprecated (#12128)
* Mark GetChainHead deprecated

* Add deprecation notice to protobuf definition

* Update proto/prysm/v1alpha1/beacon_chain.proto

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>

* Update beacon_chain.pb.go

---------

Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-14 21:06:34 +00:00
james-prysm
f92d492e33 use headblock for prunePostBlockOperationPools, remove duplicate markInclusionBLStoExecutionChange calls (#12085)
* removing duplicate function

* moved markInclusion for bls to use headblock instead of processed block

* updating based on internal feedback

* addressing some comments

* addressing feedback from slack

* fixing conflict

* making changes based on suggestions on slack

* reverting a change

* making changes based on potuz's comments

* removing one additional block copy

* clarifying comments
2023-03-14 15:23:11 -05:00
int88
a926028e45 fix error message in TestStateSummary_CanDelete (#12123)
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
2023-03-14 18:17:32 +00:00
Sammy Rosso
49f0c44dfe Add getpayloadbodies (#11973)
* el payload bodies code

* Add getPayloadBodies

* Add error 38004

* Add unmarshal

* Add ExecutionPayloadBodyV1 to proto

* Add EnableCapellaEngineMethods flag

* Small fixes

* Add proto files

* gazelle

* passing tests

* compile

* Add remaining tests

* fix tests

* Cleanup

* Fix gazelle

* Remove comments

* Rename the flag + add missing description

---------

Co-authored-by: rauljordan <raul@prysmaticlabs.com>
Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-03-14 15:52:16 +00:00
Sammy Rosso
8aec170f9b Eip4881: Tests (#11754) 2023-03-14 08:29:48 -07:00
Nishant Das
cc764c346b Run Altair Fork Transition (#12124) 2023-03-14 13:59:26 +00:00
james-prysm
7d5d30ac94 validator startup deadline bug (#12049)
* trying fix for validator startup deadline

* updating deadline duration to be set by params

* adding a runner test

* trying nishant's suggestion

* editing based on review feedback

* reverting a change

* fixing epoch deadline

* reverting aliasing
2023-03-14 12:52:56 +08:00
Preston Van Loon
f6eb42b761 Update bazel to 6.1.0 (#12121)
* Update references for cc toolchain after removal of @bazel_tools//cpp/cc_toolchain_config.bzl in 1727361563

* Update to bazel 6.1.0

* Update cross-toolchain configs
2023-03-13 23:17:26 +00:00
Preston Van Loon
39fe29d8f4 Replace @bazel pkg_tar rule with canonical @rules_pkg pkg_tar (#12120)
* Replace @bazel pkg_tar rule with canonical @rules_pkg pkg_tar

* gazelle on workspace
2023-03-13 20:17:12 +00:00
Preston Van Loon
81fbfceea8 Update rules_go to v0.38.1 and go_version to 1.19.7 (#12055)
* Update rules_go to v0.38.1

* Bump go version

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-13 19:22:37 +00:00
terencechain
58967e4516 Rm blobs sidecar usages (#12118) 2023-03-13 11:34:20 -07:00
terence tsao
947c9fbe60 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-13 08:04:23 -07:00
Preston Van Loon
a0ff5ff792 Only build non-test targets in hack/update-go-pbs.sh (#12101)
* Only build non-test targets in hack/update-go-pbs.sh

* run ./hack/update-go-pbs.sh

* Add ability to pass config to bazel

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-10 17:49:34 +00:00
Potuz
4528ea8d0d Gate GetProposerHead behind the feature flag (#12110)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-10 17:20:01 +00:00
terencechain
00b9e484e5 Cleanups from eip4844 branch (#12109)
* Minor cleanups from eip4844 branch

* Add back handler

* Fix gazelle

* More fixes

* More fixes

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-10 16:46:27 +00:00
terence tsao
8cc1e67e6c Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-10 08:23:09 -08:00
terencechain
2f3deac8b0 Make eip4844 green again (#12093) 2023-03-10 07:32:09 -08:00
Preston Van Loon
5eaa152589 Bazel: fix remote cache uploads (#12108) 2023-03-10 00:24:37 +00:00
Potuz
6d3ff65635 increase attempted reorgs at the right spot (#12106) 2023-03-09 15:06:46 +00:00
Marius van der Wijden
83a294c1a5 go.mod: set a non-zero version for tx-fuzz (#12098)
* go.mod: set a non-zero version for tx-fuzz

* go.mod: use upstream tx-fuzz instead of fork

* run gazelle

* fix tests

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-03-09 09:23:45 +00:00
james-prysm
753e285fb6 Prysm V4: Remove Prysm Remote Signer (#11895)
* removing all prysm remote signer code

* fixing unit tests

* resolving more build issues

* resolving deepsource complaint

* fixing lint

* trying to fix bazel library

* trying testonly true

* removing assert and require from non test settings

* fixing bazel and tests

* removing more unused files related to remote signer

* fixing linting

* reverting some changes

* reverting a change that broke some code

* removing typo

* fixing unit test

* fixing mnemonic information
2023-03-08 21:21:12 -06:00
Potuz
525d3b05a6 Late block reorgs metrics and logs (#12097)
* Late block reorgs metrics and logs

* tests

* mixed thresholds

* use threshold variables

* use infof

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-03-09 01:51:50 +00:00
Nishant Das
4f38ba38b7 Update Libp2p (#12096)
* update deps

* add preston's fix

* fix build

* gaz

* test fixes
2023-03-08 23:52:51 +08:00
Potuz
eb0b5a6146 Do not call repeated times FCU (#12091)
* Do not call repeated times FCU

* cleanup

* only log if prune attestations fail
2023-03-08 09:36:46 -03:00
terencechain
aa44ba40ab Add blob gossip (#12007) 2023-03-07 20:05:43 -08:00
terencechain
8b268d646c Add block and blobs cache (#12088) 2023-03-07 15:32:36 -08:00
Preston Van Loon
4356cbc352 Update cross compile toolchains (#12069)
* Regenerate cross-toolchain configs

* Remove some extra whitespaces

* Run gazelle and add that note to the README

* Format numbered lists better in markdown

* gcloud docker command is deprecated, just use docker

* Add comment about docker credentials for gcr.io

* Update dockerfile, some remote executor config work

* gazelle

* Remove commented lines

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-07 20:09:46 +00:00
kasey
0893821e35 Disable conditional go 1.20 code until module is also at 1.20 (#12084)
* disable conditional go 1.20 code until we upgrade

* bazel decided this was unreachable and removed it!

---------

Co-authored-by: kasey <kasey@users.noreply.github.com>
2023-03-07 16:25:35 +00:00
kasey
d6ecadb471 BlobsByRoot RPC (#12011)
* boilerplate for block&block-by-root->blob-by-root

* add db interface to service and use in handler

* rm unused requestBlockAndSidecarByRoot

* add test for base case of sidecar by root

* test blob slot < min req epoch, other fixes

* cleaning up test mess

* rm unused func

* add BroadcastBlob method stubs to fix build

* handler name consistent with path

* initialize blob queue for test

* lint & gaz

* update mock to satisfy interface

* fix wrong sig for mock

* clean up min req epoch, no underflow, + tests

---------

Co-authored-by: kasey <kasey@users.noreply.github.com>
2023-03-07 09:45:01 -06:00
Bret
c3346fefa7 Fix log messages (#12086)
These two log messages were appending a `d` to the hash. When compared to the other hash, they matched up until the `d`

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-03-07 07:16:39 +00:00
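
A minimal Go sketch of the bug described above (placeholder hash bytes, not values from this repo): in the old format string the verb is `%#x` and the trailing `d` is literal text, so the printed hash gains a spurious `d` suffix. The corresponding fix appears in the file diff further down this page.

package main

import "fmt"

func main() {
	root := []byte{0xde, 0xad, 0xbe, 0xef} // placeholder hash bytes
	// Old format string: "%#x" is the verb, the trailing "d" is literal text,
	// so the log line prints the hash with a spurious "d" appended.
	fmt.Printf("BeaconState htr=%#xd\n", root) // BeaconState htr=0xdeadbeefd
	// Corrected format string: just "%#x".
	fmt.Printf("BeaconState htr=%#x\n", root) // BeaconState htr=0xdeadbeef
}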
Nishant Das
d639a26bbe Fix E2E Flakes (#12074)
* fix flakes

* make it longer

* make it less to prevent triggering of other issues

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-07 04:15:51 +00:00
terence tsao
9a8bde448e Sync with develop 2023-03-06 16:16:18 -08:00
terence tsao
ce71b3b6b1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-06 15:56:29 -08:00
Potuz
bb95d951cc Use Epoch boundary cache to retrieve balances (#12083)
* Use Epoch boundary cache to retrieve balances

* save boundary states before inserting to forkchoice

* move up last block save

* remove boundary checks on balances

* fix ordering

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-06 21:42:47 +00:00
terence tsao
5fa30bf73a Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-23 15:45:32 -08:00
terencechain
b93aba6126 Add validator signing decoupled blobs (#12015) 2023-02-23 13:26:45 -08:00
terence tsao
51ed80df69 Fix build 2023-02-23 09:22:38 -08:00
terencechain
a614c4ac8c Add broadcast blob method (#12016) 2023-02-23 07:52:58 -08:00
terencechain
7b777a10a5 EIP4844: update excessive data gas field and pass spec tests (#12032) 2023-02-22 10:19:32 -08:00
terence tsao
bf1ab9951f Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-22 06:59:31 -08:00
terencechain
5cea2b3855 Revert block gossip changes to before coupling (#12012) 2023-02-21 17:06:11 -08:00
terencechain
ab63757fe5 Add decoupled blob protobufs (#12002) 2023-02-16 11:10:55 -08:00
terencechain
736ed1e003 Fix spec tests after renamed EIP4844 to Deneb (#12005) 2023-02-16 11:10:37 -08:00
terence tsao
6ee9707fd7 Sync with develop 2023-02-16 08:49:00 -08:00
terence tsao
e40835b1a9 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-16 08:20:20 -08:00
terence tsao
216a420bbc Sync with master 2023-01-28 14:19:50 +01:00
terence tsao
4845abecb8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-01-28 13:09:08 +01:00
James He
08a910e44f renaming to proto endpoint 2023-01-27 16:30:24 +01:00
terence tsao
67ba2c4fe3 Add slash 2023-01-27 15:36:10 +01:00
terence tsao
7efa501bdc Fix space 2023-01-27 15:35:57 +01:00
terence tsao
879a694ab3 Another nil check 2023-01-27 10:15:04 +01:00
terence tsao
4e3b1881ed Add check for nil block 2023-01-27 09:44:15 +01:00
Inphi
35d3707de7 sync: Fix BlobsSidecarByRange rate limiting (#11920) 2023-01-26 07:48:39 +01:00
terence tsao
1f468bd3f5 Rm field params 2023-01-25 10:49:06 +01:00
terence tsao
ac32098c86 Merge branch 'init-nil-withdrawals' into eip4844 2023-01-25 10:32:11 +01:00
terence tsao
e0af005c42 Initialize nil withdrawals at marshal / unmarshal 2023-01-25 10:29:44 +01:00
Francis Li
f08af1bdbf Fix eip4844 branch errors (#11901) 2023-01-24 11:52:05 +01:00
terence tsao
6d0420fde5 Merge branch 'eip4844' of github.com:prysmaticlabs/prysm into eip4844 2023-01-24 11:41:40 +01:00
terence tsao
5172e6e362 Rate limiter 2023-01-24 11:41:08 +01:00
Inphi
42df0f70b6 sync: Re-broadcast block and blobs sidecar (#11907) 2023-01-24 11:15:57 +01:00
terence tsao
dace0f6a2d Rename eip4844 to deneb 2023-01-19 17:30:36 -08:00
terence tsao
26a5878181 Write bad block and blob to disk 2023-01-18 14:02:09 -08:00
terence tsao
db6474a3e4 Sync with develop 2023-01-18 12:00:48 -08:00
terence tsao
b84851fd0d Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-01-18 08:38:13 -08:00
terence tsao
673702c100 Fix loop referencing for kzgs 2023-01-12 15:50:46 -08:00
terence tsao
520eb6baca Rm unused checks 2023-01-12 10:36:21 -08:00
terence tsao
e6047dc344 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2023-01-11 14:59:00 -08:00
terence tsao
d86a452b15 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-11 14:52:26 -08:00
terence tsao
67f9d0b9c4 Fix log 2023-01-11 08:08:33 -08:00
terence tsao
21cd055b84 Better logs 2023-01-11 07:40:19 -08:00
Kasey Kirkham
9f3bb623ec add capella to yaml "template" 2023-01-10 16:30:32 -06:00
Kasey Kirkham
b10a95097e capella state version detection bug fix 2023-01-10 15:54:27 -06:00
terence tsao
4561f5cacb Clean ups 2023-01-09 14:33:05 -08:00
terence tsao
50b672a4db Beacon api: get blobs 2023-01-09 11:40:06 -08:00
Potuz
ffbb73a59b Merge remote-tracking branch 'origin/develop' into capella 2023-01-09 12:07:00 -03:00
Potuz
649974f14d removed duplicated case 2023-01-09 09:54:15 -03:00
Potuz
9ec0bc0734 Merge remote-tracking branch 'origin/historical-summaries' into capella 2023-01-09 08:57:24 -03:00
terence tsao
9649e49658 Passing spec tests 2023-01-07 08:50:20 -08:00
terence tsao
49fdcb7347 Add spec tests 2023-01-07 08:33:48 -08:00
terence tsao
cd6ee956ed Merge branch 'historical-summaries' into eip4844 2023-01-07 08:03:15 -08:00
terence tsao
ef95fd33f8 Uncomment withdrawal stubs 2023-01-07 07:52:31 -08:00
terence tsao
1a488241b0 Sync with capella 2023-01-07 07:48:57 -08:00
terence tsao
5fdd3a3d66 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2023-01-07 07:19:55 -08:00
terence tsao
b6a32c050f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-07 07:03:58 -08:00
terence tsao
055e225093 Passing spec tests 2023-01-06 15:18:38 -08:00
terence tsao
144218cb1b Process historical roots test 2023-01-06 11:56:24 -08:00
Potuz
13b575a609 Merge remote-tracking branch 'origin/develop' into capella 2023-01-06 10:59:43 -03:00
terence tsao
b5a414eae9 Rm bad imports 2023-01-04 08:10:27 -08:00
terence tsao
b94b347ace Sync with develop 2023-01-04 07:37:25 -08:00
terence tsao
f5ee225819 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-04 07:16:00 -08:00
terence tsao
9cb48be14f Add historical summaries to state and processing 2023-01-03 15:58:39 -08:00
terence tsao
85fa9951eb Fix span name 2022-12-24 23:40:30 +08:00
terence tsao
ec72575fc9 Close stream for by root rpc handler 2022-12-24 23:34:02 +08:00
terence tsao
d9d1bb6d3d Skip on empty 2022-12-24 11:37:58 +08:00
Potuz
ffcdc26618 Merge remote-tracking branch 'origin/develop' into capella 2022-12-23 13:03:11 -03:00
Potuz
96981a07b9 check signatures of BLS changes before capella 2022-12-22 16:10:00 -03:00
Potuz
6b2721b239 Check BLS_TO_EXECUTION_CHANGE as if they were from Capella
In the event we receive a BLS_TO_EXECUTION_CHANGE and our head state is
before Capella, verify its signature with the Capella fork version.
2022-12-22 11:34:57 -03:00
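
A minimal Go sketch of the rule described above, using a hypothetical helper rather than Prysm's actual API: while the head state is still pre-Capella, the BLS_TO_EXECUTION_CHANGE signature is verified against the Capella fork version instead of the head state's fork version.

package main

import "fmt"

// forkVersionForBLSChange is a hypothetical helper (not Prysm's API) that picks
// the fork version used to build the signing domain for a BLS_TO_EXECUTION_CHANGE:
// pre-Capella heads still verify the change against the Capella fork version.
func forkVersionForBLSChange(headEpoch, capellaForkEpoch uint64, headForkVersion, capellaForkVersion [4]byte) [4]byte {
	if headEpoch < capellaForkEpoch {
		return capellaForkVersion
	}
	return headForkVersion
}

func main() {
	bellatrix := [4]byte{0x02, 0x00, 0x00, 0x00} // mainnet-style example values
	capella := [4]byte{0x03, 0x00, 0x00, 0x00}
	// Head epoch 100 is before an assumed Capella fork epoch of 200, so the
	// Capella version is returned for signature verification.
	fmt.Println(forkVersionForBLSChange(100, 200, bellatrix, capella)) // [3 0 0 0]
}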
terence tsao
c79151a574 Fix rate limiter 2022-12-22 22:10:34 +08:00
Inphi
4b20234801 Fix missing context in post-altair /v1 messages (#11807) 2022-12-22 06:46:15 +08:00
terence tsao
911048aa6d Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-21 13:01:40 +08:00
James He
255e9693ee fixing typo on comments 2022-12-20 22:42:31 -06:00
terence tsao
61c1216e3d Better handling for validate sync msg time 2022-12-21 10:08:44 +08:00
terence tsao
17e1eaf0f3 Set stream write deadline 2022-12-21 09:38:00 +08:00
terence tsao
9940943595 Clean up sync 2022-12-21 09:30:11 +08:00
terence tsao
9a0f941870 Change avgSidecarBlobsTransferBytes 2022-12-21 08:32:47 +08:00
terence tsao
5d0f54d332 Blobs rate limiter 2022-12-21 08:17:54 +08:00
rkapka
d602c94b7b fix 2022-12-20 20:28:53 +01:00
rkapka
6a5ecbd68f Implement getPoolBLSToExecutionChanges API endpoint
(cherry picked from commit cd25d922bc)

# Conflicts:
#	beacon-chain/node/node.go
#	beacon-chain/rpc/eth/beacon/pool.go
#	proto/eth/service/beacon_chain_service.pb.go
#	proto/migration/v1alpha1_to_v2.go
2022-12-20 17:31:21 +01:00
Potuz
29dfcab505 Fix BlockValue marshalling 2022-12-19 11:14:35 -03:00
Potuz
16e5c903cc Merge remote-tracking branch 'origin/develop' into capella 2022-12-19 10:28:05 -03:00
terencechain
66682cb4e5 Clean up block initialization and remove set block (#11791) 2022-12-19 15:41:28 +08:00
terencechain
52faea8b7d Support 4844 container type queries in the beacon API + update spec tests (#11794) 2022-12-19 15:38:23 +08:00
terence tsao
8a78315682 Save blobs during by range sync 2022-12-18 07:44:30 +08:00
Potuz
cab42a4ae3 Take raw arrays for BLS changes 2022-12-16 18:02:35 -03:00
Potuz
a5bdd42bdd Merge remote-tracking branch 'origin/block-value' into capella 2022-12-16 12:47:29 -03:00
Potuz
a26197f919 take lists for bls changes endpoint 2022-12-16 12:47:18 -03:00
terence tsao
8b9cab457e Got block and blobs gossip working 2022-12-16 13:41:59 +08:00
terence tsao
080ce31395 Add block value to get payload v2 2022-12-15 17:17:58 +08:00
terence tsao
7866e8a196 Got blob syncing to work 2022-12-15 17:02:38 +08:00
Potuz
d5d17e00b3 Merge branch 'develop' into capella 2022-12-14 13:04:23 -03:00
rkapka
9c6a1331cf remove redeclared struct 2022-12-14 16:29:14 +01:00
terence tsao
d89c97634c Merge branch 'eip4844' of github.com:prysmaticlabs/prysm into eip4844 2022-12-14 10:26:47 +08:00
terence tsao
7e95ca3705 Add blobs to initial syncing path 2022-12-14 10:26:39 +08:00
Inphi
abd46b01b7 Fix non-empty kzg commitment in proposal (#11766) 2022-12-14 08:06:14 +08:00
Potuz
8629ac8417 only broadcast bls changes post-capella 2022-12-13 09:53:05 -03:00
terence tsao
304925aabf Add todos for 4844 sync 2022-12-13 16:18:09 +08:00
terence tsao
16d93e47a5 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-13 15:12:46 +08:00
rkapka
6dcb2bbf0d Use signed changes in middleware block
(cherry picked from commit e3c9e7bb5c)
2022-12-12 16:21:39 +01:00
Potuz
deb138959a fix validator client 2022-12-12 11:43:48 -03:00
Potuz
45e6f3bd00 fix build 2022-12-12 11:29:57 -03:00
Potuz
55a9e0d51a Merge branch 'develop' into capella 2022-12-12 11:15:39 -03:00
terence tsao
3ddae600fb Merge branch 'capella' of github.com:prysmaticlabs/prysm into capella 2022-12-09 19:35:39 -08:00
terence tsao
861ede8945 Fix subscriptions 2022-12-09 15:29:23 -08:00
terence tsao
93f11f9047 Change target / max blobs to 2 / 4 2022-12-09 10:40:20 -08:00
rkapka
56503110dd Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	testing/spectest/shared/common/forkchoice/service.go
2022-12-09 12:45:23 +01:00
rkapka
f67d35dffd single execution block type 2022-12-09 12:43:35 +01:00
terence tsao
efbca1b5b7 Add v3 engine apis 2022-12-08 16:45:05 -08:00
terence tsao
2de0ebaf8d Merge branch 'roberto-fix-auth' into eip4844 2022-12-08 13:52:11 -08:00
Roberto Bayardo
0815ef94a3 Merge branch 'develop' into roberto-fix-auth 2022-12-08 13:23:29 -08:00
Roberto Bayardo
092ffa99e5 update & fix code around setting auth header for latest geth 2022-12-08 13:14:09 -08:00
rkapka
b05b67b264 reorder checks 2022-12-08 19:48:02 +01:00
rkapka
a5c6518c20 deepsource 2022-12-08 19:48:02 +01:00
Radosław Kapka
da048395ce Merge branch 'develop' into recontruct-capella-blinded 2022-12-08 18:41:12 +01:00
rkapka
f31f7be310 fix engine mock 2022-12-08 18:39:59 +01:00
rkapka
e1a2267f86 Merge remote-tracking branch 'origin/capella' into capella 2022-12-08 18:36:44 +01:00
rkapka
3c9e4ee7f7 Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	beacon-chain/blockchain/pow_block.go
#	beacon-chain/execution/engine_client.go
#	beacon-chain/execution/engine_client_test.go
#	beacon-chain/execution/testing/mock_engine_client.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	beacon-chain/state/state-native/getters_withdrawal.go
#	consensus-types/blocks/factory.go
#	proto/engine/v1/json_marshal_unmarshal.go
#	proto/engine/v1/json_marshal_unmarshal_test.go
2022-12-08 18:31:25 +01:00
rkapka
9ba32c9acd single ExecutionBlockByHash function 2022-12-08 17:53:02 +01:00
rkapka
d23008452e fix failing tests 2022-12-08 17:29:51 +01:00
terencechain
f397cba1e0 Better proposal RPC (#11721) 2022-12-08 07:40:48 -08:00
terence tsao
3eecbb5b1a Fix enum for cli 2022-12-07 12:12:56 -08:00
rkapka
1583e93b48 bzl 2022-12-07 18:23:43 +01:00
rkapka
849457df81 deepsource
(cherry picked from commit 903cab75ee)

# Conflicts:
#	beacon-chain/execution/testing/mock_engine_client.go
2022-12-07 16:29:34 +01:00
rkapka
903cab75ee deepsource 2022-12-07 16:27:09 +01:00
rkapka
ee108d4aff add doc to interface
(cherry picked from commit a08baf4a14)
2022-12-07 16:22:14 +01:00
rkapka
49bcc58762 rename methods
(cherry picked from commit 8c56dfdd46)
2022-12-07 16:22:09 +01:00
rkapka
a08baf4a14 add doc to interface 2022-12-07 16:20:43 +01:00
rkapka
8c56dfdd46 rename methods 2022-12-07 16:20:31 +01:00
rkapka
dcdd9af9db remove unneeded test 2022-12-07 16:05:44 +01:00
rkapka
a464cf5c60 Merge branch 'reconstruct-capella-block' into capella
(cherry picked from commit b0601580ef)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/engine/v1/json_marshal_unmarshal.go
2022-12-07 15:21:58 +01:00
terence tsao
cc55c754dc Fix cli flag 2022-12-06 16:57:50 -08:00
terence tsao
2d483ab09f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-06 16:48:44 -08:00
terence tsao
d64e10a337 Interop 2022-12-06 16:06:18 -08:00
terence tsao
1e9ee10674 Merge branch 'better-validator-rpcs' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 15:14:20 -08:00
terence tsao
3ac395b39e Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:42:49 -08:00
Justin Traglia
6e26a6f128 Replace LastWithdrawalValidatorIndex to updated name (#11725) 2022-12-06 14:40:10 -08:00
Justin Traglia
b512b92a8a Update withdrawal error message to reflect new field name (#11724) 2022-12-06 14:39:17 -08:00
terence tsao
5ff601a1b9 Sync with latest go-ethereum changes 2022-12-06 14:38:14 -08:00
terence tsao
5823054519 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:08:00 -08:00
terence tsao
3d196662bc Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:06:30 -08:00
rkapka
b0601580ef Merge branch 'reconstruct-capella-block' into capella 2022-12-06 22:16:59 +01:00
rkapka
c1f29ea651 remove logs 2022-12-06 22:11:35 +01:00
rkapka
881d1d435a logs 2022-12-06 21:46:41 +01:00
rkapka
d1aae0c941 Merge branch 'capella' into reconstruct-capella-block 2022-12-06 21:26:55 +01:00
terence tsao
468cc23876 Fix interop 2022-12-04 08:40:41 -08:00
terence tsao
d9646a9183 Add builder paths 2022-12-03 07:29:46 -08:00
terence tsao
279cee42f1 Refactor block proposal path 2022-12-02 16:13:01 -08:00
terence tsao
57bdb907cc Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-02 11:10:39 -08:00
rkapka
15d683c78f Merge branch 'capella' into reconstruct-capella-block 2022-12-02 16:43:56 +01:00
rkapka
bf6c8ced7d working 2022-12-02 16:37:24 +01:00
Potuz
78fb685027 Check BLS changes when requesting from the pool 2022-12-02 10:14:39 -03:00
terence tsao
a87536eba0 Fix minimal spec test 2022-12-01 15:26:18 -08:00
terence tsao
3f05395a00 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-01 14:52:49 -08:00
terence tsao
85fc57d41e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-12-01 14:45:31 -08:00
terence tsao
1e5976d5ce Mainnet spec tests passing 2022-12-01 14:41:34 -08:00
Potuz
98c0b23350 broadcast BLS changes 2022-12-01 11:26:56 -03:00
terence tsao
039a0fffba Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-11-30 12:09:47 -08:00
Potuz
90ec640e7a Fix capella operations spectests 2022-11-30 15:35:55 -03:00
Potuz
10acd31d25 check on verify instead of sig 2022-11-30 15:20:23 -03:00
terence tsao
4224014fad Add 4844 block and state 2022-11-30 10:14:35 -08:00
Potuz
df1e8b33d8 BLS Change signature verification 2022-11-30 14:52:31 -03:00
rkapka
cdb4ee42cc not working 2022-11-30 18:22:52 +01:00
rkapka
d29baec77e proper error handling in BuildSignedBeaconBlockFromExecutionPayload 2022-11-30 15:32:57 +01:00
terence tsao
53c189da9b Merge branch 'update-eip4844-objs' into eip4844 2022-11-29 21:16:31 -08:00
terence tsao
277fbce61b Update eip4844 objects 2022-11-29 21:15:14 -08:00
terence tsao
0adc54b7ff Refactor get payload 2022-11-29 12:02:47 -08:00
Potuz
1cbd7e9888 withdraw by default 2022-11-29 13:52:34 -03:00
rkapka
0a9e1658dd move stuff to blinded_blocks.go 2022-11-29 17:36:52 +01:00
rkapka
31d4a4cd11 test other functions 2022-11-29 17:36:52 +01:00
rkapka
fbc4e73d31 refactor and test GetSSZBlockV2 2022-11-29 17:36:52 +01:00
rkapka
c1d4eaa79d refactor and test GetBlockV2 2022-11-29 17:36:52 +01:00
rkapka
760af6428e update ssz 2022-11-29 17:36:52 +01:00
terence tsao
dfa0ccf626 Fix attribute pb nil checks 2022-11-29 07:27:33 -08:00
terence tsao
7a142cf324 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-29 07:09:41 -08:00
rkapka
1a51fdbd58 update withdrawals proto 2022-11-29 14:23:11 +01:00
terencechain
368a99ec8d Fix nil attribute for capella branch (#11701) 2022-11-28 17:34:26 -08:00
terence tsao
1c7e734918 Fix some blockchain tests 2022-11-28 13:59:17 -08:00
rkapka
764d1325bf Merge remote-tracking branch 'origin/capella' into capella 2022-11-28 21:07:00 +01:00
rkapka
0cf30e9022 Merge branch '__develop' into capella 2022-11-28 21:06:20 +01:00
Potuz
227b20f368 fix nil block from stream 2022-11-28 16:53:11 -03:00
rkapka
d7d70bc25b support SSZ lol
(cherry picked from commit 52bc2c8d617ac3e1254c493fa053cdce4a1ebd63)
2022-11-28 20:19:24 +01:00
rkapka
82f6ddb693 add Capella version
(cherry picked from commit 5d6fd0bbe663e5dd16df5b2e773f68982bbcd24e)
2022-11-28 20:19:19 +01:00
rkapka
9e4e82d2c5 refactor GetBlindedBlockSSZ
(cherry picked from commit 97483c339f99b0d96bd81846a979383ffd2b0cda)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
2022-11-28 20:19:15 +01:00
rkapka
9838369fe9 fix proto generation 2022-11-28 20:15:05 +01:00
rkapka
6085ad1bfa fix issues 2022-11-28 20:07:37 +01:00
rkapka
d3851b27df Merge branch '__develop' into capella
# Conflicts:
#	beacon-chain/rpc/apimiddleware/structs.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/eth/v2/BUILD.bazel
#	proto/eth/v2/beacon_block.pb.go
#	proto/eth/v2/beacon_block.proto
#	proto/eth/v2/generated.ssz.go
#	proto/migration/v1alpha1_to_v2.go
#	proto/prysm/v1alpha1/beacon_chain.pb.go
#	proto/prysm/v1alpha1/beacon_chain.proto
2022-11-28 19:31:33 +01:00
Potuz
d6100dfdcb fix spectest 2022-11-28 12:38:14 -03:00
Potuz
c2144dac86 Add BLSToExecutionChange endpoint 2022-11-27 23:00:16 -03:00
Potuz
a47ff569a8 Add Submit BLSChange endpoint 2022-11-27 20:50:21 -03:00
Potuz
f8be022ef2 Merge branch 'develop' into capella 2022-11-27 20:40:41 -03:00
Potuz
4f39e6b685 Implement REST block API endpoints 2022-11-27 16:20:30 -03:00
Potuz
c67b000633 add test 2022-11-27 13:07:50 -03:00
Potuz
02f7443586 Refactor Sync Committee Rewards 2022-11-27 09:04:03 -03:00
terence tsao
6275e7df4e Clean up execution engine 2022-11-25 17:20:28 -08:00
terencechain
1b6b52fda1 Add PayloadAttribute superset and use it for engine-api (#11691) 2022-11-25 17:12:55 -08:00
Potuz
5fa1fd84b9 Hook the BLSTOExecution Pool to the proposer 2022-11-25 09:33:17 -03:00
nisdas
bd0c9f9e8d fix 2022-11-25 09:06:59 -03:00
Potuz
2532bb370c Merge branch 'develop' into capella 2022-11-25 08:16:32 -03:00
nisdas
12efc6c2c1 make it reject 2022-11-24 22:21:12 +08:00
nisdas
a6cc9ac9c5 add sig validation 2022-11-24 22:19:55 +08:00
nisdas
031f5845a2 add gossip handler for bls change object 2022-11-24 21:22:18 +08:00
nisdas
b88559726c Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into capella 2022-11-24 20:05:41 +08:00
terence tsao
ca6ddf4490 Add and use send blocks and sidecars requests 2022-11-23 16:11:06 -08:00
terence tsao
3ebb2fce94 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-23 14:21:45 -08:00
nisdas
62f6b07cba fix gossip registration 2022-11-23 20:10:44 +08:00
terence tsao
f956f1ed6e Handle capella version for packing atts 2022-11-22 17:21:21 -08:00
terence tsao
1c0fa95053 Fix forkchoice test 2022-11-22 17:09:34 -08:00
terence tsao
04bf4a1060 Clean up 2022-11-22 15:38:24 -08:00
terence tsao
ae276fd371 Add mainnet spec tests 2022-11-22 10:53:47 -08:00
terence tsao
104bdaed12 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-22 09:43:43 -08:00
terence tsao
089a5d6ac2 Migrate from geth's kzg lib to go-kzg/eth (Thanks @roberto-bayardo) 2022-11-22 08:51:45 -08:00
Potuz
16b0820193 Merge branch 'develop' into capella 2022-11-22 13:43:03 -03:00
Potuz
4b02267e96 add more minimal fixes 2022-11-22 13:34:33 -03:00
Potuz
746584c453 fix missing minimal test 2022-11-22 13:34:33 -03:00
Potuz
b56daaaca2 Fix empty withdrawals slice 2022-11-22 10:38:42 -03:00
terence tsao
b7a6fe88ee Update block and sidecar gossip conditions 2022-11-21 14:38:44 -08:00
terence tsao
22d1c37b92 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-20 19:22:30 -08:00
terence tsao
78a393f825 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-18 13:48:40 -08:00
terence tsao
ac8290c1bf Port over shared kzg functions and updated trusted setup 2022-11-18 10:55:35 -08:00
terence tsao
5d0662b415 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-18 10:00:32 -08:00
terence tsao
931e5e10c3 Fix mainnet fork transition tests 2022-11-16 08:49:06 -05:00
Potuz
c172f838b1 Mark capella fields as dirty 2022-11-15 09:58:34 -05:00
Potuz
c07ae29cd9 move MaxWithdrawalsPerPayload to fieldparams 2022-11-14 22:59:31 -05:00
Potuz
214c9bfd8b fix bls_to_execution_changes 2022-11-14 16:41:17 -05:00
Potuz
716140d64d add bls_to_execution_change tests 2022-11-14 16:26:39 -05:00
Potuz
088cb4ef59 fix expected_withdrawals 2022-11-14 14:50:41 -05:00
terence tsao
fa33e93a8e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-14 08:53:15 -05:00
Potuz
d1472fc351 Add withdrawals operations tests 2022-11-13 08:54:59 -03:00
terence tsao
5c8c0c31d8 Merge branch 'capella-withdrawal-minimal' into capella 2022-11-12 22:06:43 -08:00
terence tsao
7f3c00c7a2 Can build 2022-11-12 22:06:22 -08:00
terencechain
c180dab791 Merge branch 'develop' into capella 2022-11-12 18:28:18 -08:00
terence tsao
f24acc21c7 Fix bazel 2022-11-12 17:43:19 -08:00
terence tsao
40b637849d Fix minimal capella tests 2022-11-12 17:26:05 -08:00
terence tsao
e7db1685df Add mainnet capella tests 2022-11-12 17:13:26 -08:00
terence tsao
eccbfd1011 Add shared capella spec tests helpers 2022-11-12 17:13:16 -08:00
terence tsao
90211f6769 Fix prev epoch attested precompute 2022-11-12 17:12:29 -08:00
terence tsao
edc32ac18e Fix slashing quotient 2022-11-12 17:12:04 -08:00
terence tsao
fe68e020e3 Add selector with minimal withdrawal size 2022-11-12 16:18:16 -08:00
terence tsao
81e1e3544d Add mainnet ssz vectors 2022-11-12 15:50:07 -08:00
Potuz
09372d5c35 Revert "added mainnet ssz tests"
This reverts commit 078a89e4ca.
2022-11-12 18:40:28 -03:00
Potuz
078a89e4ca added mainnet ssz tests 2022-11-12 18:39:11 -03:00
Potuz
dbc6ae26a6 Add minimal support for capella spec tests
Fixed many issues about hashing
Added fork typing in the state replayer
2022-11-12 18:18:35 -03:00
Potuz
b6f429867a Merge branch 'develop' into capella 2022-11-12 16:35:20 -03:00
Potuz
09f50660ce Merge branch 'develop' into capella 2022-11-12 11:51:06 -03:00
Potuz
189825b495 fix withdrawal hashing 2022-11-11 23:21:41 -03:00
terence tsao
441cad58d4 Merge commit 'e03de47db7b782bdf7dc8d9b42749eb2a236cdea' into eip4844 2022-11-11 09:18:57 -08:00
terence tsao
1277d08f9e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-11 08:46:39 -08:00
terence tsao
e03de47db7 Ops, wrong branch 2022-11-11 08:45:22 -08:00
Potuz
764b7ff610 Don't build capella payload twice 2022-11-11 10:47:29 -03:00
terence tsao
307be7694e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-09 16:44:34 -08:00
terence tsao
c76ae1ef39 Add beacon_block_and_blobs_sidecar gossip 2022-11-09 16:43:26 -08:00
Potuz
d499db7f0e Merge branch 'develop' into capella 2022-11-09 21:27:18 -03:00
terence tsao
a894b9f29a Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-09 14:49:18 -08:00
terence tsao
902e6b3905 Update to use latest kzg library 2022-11-09 10:17:27 -08:00
Potuz
ed2d1c7bf9 Merge branch 'develop' into capella 2022-11-09 09:14:55 -03:00
nisdas
14b73cbd47 register flag 2022-11-09 08:48:36 +08:00
terence tsao
a39c7aa864 Add rpc blobs sidecars by range 2022-11-07 14:53:12 -08:00
terence tsao
170bc9c8ec Network changes 2022-11-07 14:43:46 -08:00
terence tsao
365c01fc29 Add tests 2022-11-07 06:58:24 -08:00
Potuz
3124785a08 Merge branch 'develop' into capella 2022-11-07 10:08:09 -03:00
Potuz
60e6306107 working withdrawals initial commits 2022-11-06 16:53:57 -03:00
terence tsao
42ccb7830a Add Capella DB changes 2022-11-06 15:25:34 -03:00
Potuz
0bb03b9292 fix marshalling and engine calls 2022-11-06 15:24:22 -03:00
nisdas
ed6fbf1480 stupid bug 2022-11-07 00:18:01 +08:00
nisdas
477cec6021 wei it 2022-11-07 00:13:12 +08:00
nisdas
924500d111 add unmarshal 2022-11-06 23:57:37 +08:00
Potuz
0677504ef1 Revert "proposer changes"
This reverts commit ca2a7c4d9c.
2022-11-06 12:16:53 -03:00
Potuz
ca2a7c4d9c proposer changes 2022-11-06 12:14:17 -03:00
Potuz
28606629ad marshalling stub 2022-11-06 12:03:25 -03:00
Potuz
c817279464 fix capella payload 2022-11-06 11:14:39 -03:00
Potuz
009d6ed8ed proposer logic 2022-11-06 10:49:32 -03:00
Potuz
5cec1282a9 FCU two versions 2022-11-06 10:22:45 -03:00
Potuz
340170fd29 propose block V2 2022-11-06 09:42:20 -03:00
Potuz
7ed0cc139a marshalling first attempt 2022-11-06 07:47:43 -03:00
Potuz
2c822213eb rpc changes 2022-11-06 00:24:56 -03:00
terence tsao
0894b9591c Add Capella DB changes 2022-11-05 13:27:18 -07:00
terence tsao
f0ca45f9a2 Json marshal and unmarshal blob bundle 2022-11-05 12:45:14 -07:00
terence tsao
afc48c6485 State change and config changes for Capella 2022-11-05 12:45:01 -07:00
terence tsao
93dce8a0cb P2p changes for Capella fork 2022-11-05 12:44:46 -07:00
terence tsao
149ccdaf39 Add engine call for get payload 2022-11-05 11:00:13 -07:00
Potuz
c08bb39ffe add fork versions 2022-11-05 13:38:11 -03:00
Potuz
5083d8ab34 propose capella blocks 2022-11-05 12:38:27 -03:00
Potuz
7552a5dd07 capella fork logic 2022-11-05 07:33:20 -03:00
Potuz
c93d68f853 Capella state transition 2022-11-05 06:06:15 -03:00
terence tsao
2b74db2dce Add blob database methods 2022-11-04 13:46:01 -07:00
terence tsao
cc6c91415d Add validate blobs sidecar 2022-11-04 13:06:15 -07:00
terence tsao
6d7d7e0adc Add excessive blobs to execution payload 2022-11-04 09:48:39 -07:00
terence tsao
2105d777f0 Add blobs kzg to Capella beacon block 2022-11-04 09:33:01 -07:00
terence tsao
14338afbdb Update go.mod 2022-11-04 08:52:16 -07:00
Potuz
3e8aa4023d Fix config test and export method 2022-11-04 11:59:49 -03:00
Potuz
b443875e66 Implement get_expected_withdrawals 2022-11-04 11:45:45 -03:00
434 changed files with 27411 additions and 9821 deletions

View File

@@ -1 +1 @@
-5.3.0
+6.1.0

View File

@@ -21,7 +21,7 @@ linters:
linters-settings:
gocognit:
# TODO: We should target for < 50
-min-complexity: 65
+min-complexity: 69
output:
print-issued-lines: true

View File

@@ -4,15 +4,18 @@ load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "bazel_toolchains",
sha256 = "8e0633dfb59f704594f19ae996a35650747adc621ada5e8b9fb588f808c89cb0",
strip_prefix = "bazel-toolchains-3.7.0",
name = "rules_pkg",
sha256 = "8c20f74bca25d2d442b327ae26768c02cf3c99e93fad0381f32be9aab1967675",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/releases/download/3.7.0/bazel-toolchains-3.7.0.tar.gz",
"https://github.com/bazelbuild/bazel-toolchains/releases/download/3.7.0/bazel-toolchains-3.7.0.tar.gz",
"https://mirror.bazel.build/github.com/bazelbuild/rules_pkg/releases/download/0.8.1/rules_pkg-0.8.1.tar.gz",
"https://github.com/bazelbuild/rules_pkg/releases/download/0.8.1/rules_pkg-0.8.1.tar.gz",
],
)
load("@rules_pkg//:deps.bzl", "rules_pkg_dependencies")
rules_pkg_dependencies()
http_archive(
name = "com_grail_bazel_toolchain",
sha256 = "b210fc8e58782ef171f428bfc850ed7179bdd805543ebd1aa144b9c93489134f",
@@ -39,10 +42,6 @@ load("@prysm//tools/cross-toolchain:prysm_toolchains.bzl", "configure_prysm_tool
configure_prysm_toolchains()
load("@prysm//tools/cross-toolchain:rbe_toolchains_config.bzl", "rbe_toolchains_config")
rbe_toolchains_config()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
@@ -87,10 +86,10 @@ http_archive(
# Expose internals of go_test for custom build transitions.
"//third_party:io_bazel_rules_go_test.patch",
],
sha256 = "ae013bf35bd23234d1dea46b079f1e05ba74ac0321423830119d3e787ec73483",
sha256 = "dd926a88a564a9246713a9c00b35315f54cbd46b31a26d5d8fb264c07045f05d",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.36.0/rules_go-v0.36.0.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.36.0/rules_go-v0.36.0.zip",
"https://mirror.bazel.build/github.com/bazelbuild/rules_go/releases/download/v0.38.1/rules_go-v0.38.1.zip",
"https://github.com/bazelbuild/rules_go/releases/download/v0.38.1/rules_go-v0.38.1.zip",
],
)
@@ -165,7 +164,7 @@ load("@io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_depe
go_rules_dependencies()
go_register_toolchains(
go_version = "1.19.4",
go_version = "1.19.7",
nogo = "@//:nogo",
)
@@ -191,6 +190,21 @@ filegroup(
url = "https://github.com/eth-clients/slashing-protection-interchange-tests/archive/b8413ca42dc92308019d0d4db52c87e9e125c4e9.tar.gz",
)
http_archive(
name = "eip4881_spec_tests",
build_file_content = """
filegroup(
name = "test_data",
srcs = glob([
"**/*.yaml",
]),
visibility = ["//visibility:public"],
)
""",
sha256 = "89cb659498c0d196fc9f957f8b849b2e1a5c041c3b2b3ae5432ac5c26944297e",
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.3.0-rc.3"
bls_test_version = "v0.1.1"

View File

@@ -103,8 +103,8 @@ func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, er
}
log.Printf("BeaconState slot=%d, Block slot=%d", s.Slot(), b.Block().Slot())
log.Printf("BeaconState htr=%#xd, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#xd, block htr=%#x", br, realBlockRoot)
log.Printf("BeaconState htr=%#x, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#x, block htr=%#x", br, realBlockRoot)
return &OriginData{
st: s,
b: b,

View File

@@ -398,11 +398,7 @@ func populateValidators(cfg *params.BeaconChainConfig, st state.BeaconState, val
if err := st.SetValidators(validators); err != nil {
return err
}
-if err := st.SetBalances(balances); err != nil {
-return err
-}
-return nil
+return st.SetBalances(balances)
}
func TestDownloadFinalizedData(t *testing.T) {

View File

@@ -467,7 +467,7 @@ func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, e
version := bytesutil.ToBytes4(vSlice)
ofs = append(ofs, forks.ForkScheduleEntry{
Version: version,
-Epoch: primitives.Epoch(uint64(epoch)),
+Epoch: primitives.Epoch(epoch),
})
}
sort.Sort(ofs)

View File

@@ -45,6 +45,7 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -301,7 +301,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var attr payloadattribute.Attributer
switch st.Version() {
-case version.Capella:
+case version.Capella, version.Deneb:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")

View File

@@ -2,12 +2,18 @@ package blockchain
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
)
func (s *Service) isNewProposer(slot primitives.Slot) bool {
@@ -42,67 +48,80 @@ func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.Beaco
return headState, newHeadBlock, nil
}
// fockchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot [32]byte, proposingSlot primitives.Slot) error {
isNewHead := s.isNewHead(newHeadRoot)
isNewProposer := s.isNewProposer(proposingSlot)
if !isNewHead && !isNewProposer {
if !isNewHead {
return nil
}
var headState state.BeaconState
var headBlock interfaces.ReadOnlySignedBeaconBlock
var headRoot [32]byte
var err error
shouldUpdate := isNewHead
if isNewHead && isNewProposer && !features.Get().DisableReorgLateBlocks {
if proposingSlot == s.CurrentSlot() {
proposerHead := s.ForkChoicer().GetProposerHead()
if proposerHead != newHeadRoot {
shouldUpdate = false
}
} else if s.ForkChoicer().ShouldOverrideFCU() {
shouldUpdate = false
}
}
if shouldUpdate {
headRoot = newHeadRoot
headState, headBlock, err = s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
isNewProposer := s.isNewProposer(proposingSlot)
if isNewProposer && !features.Get().DisableReorgLateBlocks {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return nil
}
} else {
// We are guaranteed that the head block is the parent
// of the incoming block. We do not process the slot
// because it will be processed anyway in notifyForkchoiceUpdate
headState = s.headState(ctx)
headRoot = s.headRoot()
headBlock, err = s.headBlock()
if err != nil {
return errors.Wrap(err, "could not get head block")
}
}
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
}
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: headRoot,
headRoot: newHeadRoot,
headBlock: headBlock.Block(),
})
if err != nil {
return errors.Wrap(err, "could not notify forkchoice update")
}
if shouldUpdate {
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
return err
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being
// reorged or not by the next proposer.
func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitives.Slot) bool {
headWeight, err := s.ForkChoicer().Weight(newHeadRoot)
if err != nil {
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
}
currentSlot := s.CurrentSlot()
if proposingSlot == currentSlot {
proposerHead := s.ForkChoicer().GetProposerHead()
if proposerHead != newHeadRoot {
return true
}
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
params.BeaconConfig().SecondsPerSlot)
lateBlockFailedAttemptSecondThreshold.Inc()
} else {
if s.ForkChoicer().ShouldOverrideFCU() {
return true
}
secs, err := slots.SecondsSinceSlotStart(currentSlot,
uint64(s.genesisTime.Unix()), uint64(time.Now().Unix()))
if err != nil {
log.WithError(err).Error("could not compute seconds since slot start")
}
if secs >= doublylinkedtree.ProcessAttestationsThreshold {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
doublylinkedtree.ProcessAttestationsThreshold)
lateBlockFailedAttemptFirstThreshold.Inc()
}
}
return false
}

View File

@@ -3,6 +3,7 @@ package blockchain
import (
"context"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
@@ -190,3 +191,46 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r, service.CurrentSlot()+1))
}
func TestShouldOverrideFCU(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
require.NoError(t, err)
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
require.Equal(t, primitives.Slot(2), service.CurrentSlot())
require.Equal(t, true, service.shouldOverrideFCU(headRoot, 2))
require.LogsDoNotContain(t, hook, "12 seconds")
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 2))
require.LogsContain(t, hook, "12 seconds")
head, err := fcs.Head(ctx)
require.NoError(t, err)
require.Equal(t, headRoot, head)
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 29)
require.Equal(t, true, service.shouldOverrideFCU(parentRoot, 3))
require.LogsDoNotContain(t, hook, "10 seconds")
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 24)
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot+10) * time.Second))
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 3))
require.LogsContain(t, hook, "10 seconds")
}

View File

@@ -60,7 +60,20 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
if b.Version() == version.Deneb {
k, err := b.Body().BlobKzgCommitments()
if err != nil {
return err
}
log = log.WithField("blobCount", len(k))
}
}
if b.Version() == version.Deneb {
k, err := b.Body().BlobKzgCommitments()
if err != nil {
return err
}
log = log.WithField("blobCount", len(k))
}
log.Info("Finished applying state transition")
return nil
@@ -96,6 +109,7 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"version": version.String(block.Version()),
}).Info("Synced new block")
}
return nil

View File

@@ -111,6 +111,18 @@ var (
Name: "beacon_reorgs_total",
Help: "Count the number of times beacon chain has a reorg",
})
LateBlockAttemptedReorgCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_late_block_attempted_reorgs",
Help: "Count the number of times a proposer served by this beacon has attempted a late block reorg",
})
lateBlockFailedAttemptFirstThreshold = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_failed_reorg_attempts_first_threshold",
Help: "Count the number of times a proposer served by this beacon attempted a late block reorg but desisted in the first threshold",
})
lateBlockFailedAttemptSecondThreshold = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_failed_reorg_attempts_second_threshold",
Help: "Count the number of times a proposer served by this beacon attempted a late block reorg but desisted in the second threshold",
})
saveOrphanedAttCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "saved_orphaned_att_total",
Help: "Count the number of times an orphaned attestation is saved",
@@ -158,10 +170,6 @@ var (
Name: "txs_per_slot_count",
Help: "Count the number of txs per slot",
})
missedPayloadIDFilledCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "missed_payload_id_filled_count",
Help: "",
})
onBlockProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "on_block_processing_milliseconds",
Help: "Total time in milliseconds to complete a call to onBlock()",

View File

@@ -152,9 +152,6 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
if err := s.handleBlockAttestations(ctx, signed.Block(), postState); err != nil {
return errors.Wrap(err, "could not handle block's attestations")
}
if err := s.handleBlockBLSToExecChanges(signed.Block()); err != nil {
return errors.Wrap(err, "could not handle block's BLSToExecutionChanges")
}
s.InsertSlashingsToForkChoiceStore(ctx, signed.Block().Body().AttesterSlashings())
if isValidPayload {
@@ -214,6 +211,8 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
// verify conditions for FCU, notifies FCU, and saves the new head.
// This function also prunes attestations, other similar operations happen in prunePostBlockOperationPools.
if err := s.forkchoiceUpdateWithExecution(ctx, headRoot, s.CurrentSlot()+1); err != nil {
return err
}
@@ -447,12 +446,22 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
}
}
}
// Save boundary states that will be useful for forkchoice
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return err
}
}
// Also saves the last post state which to be used as pre state for the next batch.
lastBR := blockRoots[len(blks)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
// Insert all nodes but the last one to forkchoice
if err := s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
// Insert the last block to forkchoice
lastBR := blockRoots[len(blks)-1]
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastBR); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
@@ -462,17 +471,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return err
}
}
// Also saves the last post state which to be used as pre state for the next batch.
lastB := blks[len(blks)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
arg := &notifyForkchoiceUpdateArg{
headState: preState,
headRoot: lastBR,
@@ -597,8 +596,8 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
}
// This removes the attestations in block `b` from the attestation mem pool.
func (s *Service) pruneAttsFromPool(b interfaces.ReadOnlySignedBeaconBlock) error {
atts := b.Block().Body().Attestations()
func (s *Service) pruneAttsFromPool(headBlock interfaces.ReadOnlySignedBeaconBlock) error {
atts := headBlock.Block().Body().Attestations()
for _, att := range atts {
if helpers.IsAggregated(att) {
if err := s.cfg.AttPool.DeleteAggregatedAttestation(att); err != nil {
@@ -683,6 +682,8 @@ func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *ev
}()
}
// fillMissingBlockPayloadId is called 4 seconds into the slot and calls FCU if we are proposing next slot
// and the cache has been missed
func (s *Service) fillMissingBlockPayloadId(ctx context.Context) error {
s.ForkChoicer().RLock()
highestReceivedSlot := s.cfg.ForkChoiceStore.HighestReceivedBlockSlot()
@@ -696,10 +697,10 @@ func (s *Service) fillMissingBlockPayloadId(ctx context.Context) error {
if !has || id != [8]byte{} {
return nil
}
missedPayloadIDFilledCount.Inc()
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
return err
}
headState := s.headState(ctx)

View File

@@ -66,6 +66,7 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.ReadOnlyBe
// during initial syncing. There's no risk given a state summary object is just a
// a subset of the block object.
if !s.cfg.BeaconDB.HasStateSummary(ctx, parentRoot) && !s.cfg.BeaconDB.HasBlock(ctx, parentRoot) {
log.Errorf("requesting blockroot %#x", bytesutil.Trunc(parentRoot[:]))
return errors.New("could not reconstruct parent state")
}

View File

@@ -1,11 +1,13 @@
package blockchain
import (
"bytes"
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition/interop"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
@@ -48,17 +50,17 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the new block.
if err := s.onBlock(ctx, blockCopy, blockRoot); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
interop.WriteBlockToDisk("state_transition", blockCopy, true /*failed*/)
return err
}
// Handle post block operations such as attestations and exits.
if err := s.handlePostBlockOperations(blockCopy.Block()); err != nil {
return err
// Handle post block operations such as pruning exits and bls messages if incoming block is the head
if err := s.prunePostBlockOperationPools(ctx, blockCopy, blockRoot); err != nil {
log.WithError(err).Error("Could not prune canonical objects from pool ")
}
// Have we been finalizing? Should we start saving hot states to db?
@@ -157,29 +159,40 @@ func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.A
s.InsertSlashingsToForkChoiceStore(ctx, []*ethpb.AttesterSlashing{slashing})
}
func (s *Service) handlePostBlockOperations(b interfaces.ReadOnlyBeaconBlock) error {
// prunePostBlockOperationPools only runs on new head otherwise should return a nil.
func (s *Service) prunePostBlockOperationPools(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock, root [32]byte) error {
headRoot, err := s.HeadRoot(ctx)
if err != nil {
return err
}
// By comparing against the current head root, which has already gone through forkchoice,
// we can assume that the incoming block root is canonical when the two are equal.
if !bytes.Equal(headRoot, root[:]) {
return nil
}
// Mark block exits as seen so we don't include same ones in future blocks.
for _, e := range b.Body().VoluntaryExits() {
for _, e := range blk.Block().Body().VoluntaryExits() {
s.cfg.ExitPool.MarkIncluded(e)
}
// Mark block BLS changes as seen so we don't include same ones in future blocks.
if err := s.handleBlockBLSToExecChanges(b); err != nil {
if err := s.markIncludedBlockBLSToExecChanges(blk.Block()); err != nil {
return errors.Wrap(err, "could not process BLSToExecutionChanges")
}
// Mark attester slashings as seen so we don't include same ones in future blocks.
for _, as := range b.Body().AttesterSlashings() {
for _, as := range blk.Block().Body().AttesterSlashings() {
s.cfg.SlashingPool.MarkIncludedAttesterSlashing(as)
}
return nil
}
func (s *Service) handleBlockBLSToExecChanges(blk interfaces.ReadOnlyBeaconBlock) error {
if blk.Version() < version.Capella {
func (s *Service) markIncludedBlockBLSToExecChanges(headBlock interfaces.ReadOnlyBeaconBlock) error {
if headBlock.Version() < version.Capella {
return nil
}
changes, err := blk.Body().BLSToExecutionChanges()
changes, err := headBlock.Body().BLSToExecutionChanges()
if err != nil {
return errors.Wrap(err, "could not get BLSToExecutionChanges")
}

View File

@@ -357,7 +357,7 @@ func TestHandleBlockBLSToExecutionChanges(t *testing.T) {
}
blk, err := blocks.NewBeaconBlock(pbb)
require.NoError(t, err)
require.NoError(t, service.handleBlockBLSToExecChanges(blk))
require.NoError(t, service.markIncludedBlockBLSToExecChanges(blk))
})
t.Run("Post Capella no changes", func(t *testing.T) {
@@ -367,7 +367,7 @@ func TestHandleBlockBLSToExecutionChanges(t *testing.T) {
}
blk, err := blocks.NewBeaconBlock(pbb)
require.NoError(t, err)
require.NoError(t, service.handleBlockBLSToExecChanges(blk))
require.NoError(t, service.markIncludedBlockBLSToExecChanges(blk))
})
t.Run("Post Capella some changes", func(t *testing.T) {
@@ -389,7 +389,7 @@ func TestHandleBlockBLSToExecutionChanges(t *testing.T) {
pool.InsertBLSToExecChange(signedChange)
require.Equal(t, true, pool.ValidatorExists(idx))
require.NoError(t, service.handleBlockBLSToExecChanges(blk))
require.NoError(t, service.markIncludedBlockBLSToExecChanges(blk))
require.Equal(t, false, pool.ValidatorExists(idx))
})
}

View File

@@ -57,6 +57,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
func (mb *mockBroadcaster) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
//TODO implement me
panic("implement me")
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil

View File

@@ -1,4 +1,4 @@
load("@prysm//tools/go:def.bzl", "go_library")
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
@@ -19,3 +19,25 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"deposit_tree_snapshot_test.go",
"merkle_tree_test.go",
"spec_test.go",
],
data = [
"@eip4881_spec_tests//:test_data",
],
embed = [":go_default_library"],
deps = [
"//io/file:go_default_library",
"//proto/eth/v1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@in_gopkg_yaml_v3//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
],
)

View File

@@ -23,8 +23,6 @@ var (
ErrInvalidIndex = errors.New("index should be greater than finalizedDeposits - 1")
// ErrNoDeposits occurs when the number of deposits is 0.
ErrNoDeposits = errors.New("number of deposits should be greater than 0")
// ErrNoFinalizedDeposits occurs when the number of finalized deposits is 0.
ErrNoFinalizedDeposits = errors.New("number of finalized deposits should be greater than 0")
// ErrTooManyDeposits occurs when the number of deposits exceeds the capacity of the tree.
ErrTooManyDeposits = errors.New("number of deposits should not be greater than the capacity of the tree")
)
@@ -62,7 +60,7 @@ func (d *DepositTree) getSnapshot() (DepositTreeSnapshot, error) {
return DepositTreeSnapshot{}, ErrEmptyExecutionBlock
}
var finalized [][32]byte
depositCount, _ := d.tree.GetFinalized(finalized)
depositCount, finalized := d.tree.GetFinalized(finalized)
return fromTreeParts(finalized, depositCount, d.finalizedExecutionBlock)
}
@@ -119,9 +117,6 @@ func (d *DepositTree) getProof(index uint64) ([32]byte, [][32]byte, error) {
return [32]byte{}, nil, ErrInvalidMixInLength
}
finalizedDeposits, _ := d.tree.GetFinalized([][32]byte{})
if finalizedDeposits == 0 {
return [32]byte{}, nil, ErrNoFinalizedDeposits
}
if finalizedDeposits != 0 {
finalizedDeposits = finalizedDeposits - 1
}

View File

@@ -40,7 +40,7 @@ func (ds *DepositTreeSnapshot) CalculateRoot() ([32]byte, error) {
}
size >>= 1
}
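// The snapshot root mixes the deposit count into the final hash, little-endian and
// (per the *LittleEndian32 helper name) padded to 32 bytes, mirroring the deposit
// contract's get_deposit_root mix-in.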
return sha256.Sum256(append(root[:], bytesutil.Uint64ToBytesLittleEndian(ds.depositCount)...)), nil
return sha256.Sum256(append(root[:], bytesutil.Uint64ToBytesLittleEndian32(ds.depositCount)...)), nil
}
// fromTreeParts constructs the deposit tree from pre-existing data.

View File

@@ -0,0 +1,54 @@
package depositsnapshot
import (
"fmt"
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func TestDepositTreeSnapshot_CalculateRoot(t *testing.T) {
tests := []struct {
name string
finalized int
depositCount uint64
want [32]byte
}{
{
name: "empty",
finalized: 0,
depositCount: 0,
want: [32]byte{215, 10, 35, 71, 49, 40, 92, 104, 4, 194, 164, 245, 103, 17, 221, 184, 200, 44, 153, 116, 15, 32, 120, 84, 137, 16, 40, 175, 52, 226, 126, 94},
},
{
name: "1 Finalized",
finalized: 1,
depositCount: 2,
want: [32]byte{36, 118, 154, 57, 217, 109, 145, 116, 238, 1, 207, 59, 187, 28, 69, 187, 70, 55, 153, 180, 15, 150, 37, 72, 140, 36, 109, 154, 212, 202, 47, 59},
},
{
name:         "many finalized",
finalized: 6,
depositCount: 20,
want: [32]byte{210, 63, 57, 119, 12, 5, 3, 25, 139, 20, 244, 59, 114, 119, 35, 88, 222, 88, 122, 106, 239, 20, 45, 140, 99, 92, 222, 166, 133, 159, 128, 72},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var finalized [][32]byte
for i := 0; i < tt.finalized; i++ {
finalized = append(finalized, hexString(t, fmt.Sprintf("%064d", i)))
}
ds := &DepositTreeSnapshot{
finalized: finalized,
depositCount: tt.depositCount,
}
root, err := ds.CalculateRoot()
require.NoError(t, err)
if got := root; !reflect.DeepEqual(got, tt.want) {
require.DeepEqual(t, tt.want, got)
}
})
}
}

View File

@@ -0,0 +1,141 @@
package depositsnapshot
import (
"encoding/hex"
"fmt"
"reflect"
"testing"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
)
func hexString(t *testing.T, hexStr string) [32]byte {
t.Helper()
b, err := hex.DecodeString(hexStr)
require.NoError(t, err)
if len(b) != 32 {
assert.Equal(t, 32, len(b), "bad hash length, expected 32")
}
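// Go 1.17+ slice-to-array-pointer conversion; it panics if len(b) < 32, which the
// length check above is meant to guard.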
x := (*[32]byte)(b)
return *x
}
func Test_create(t *testing.T) {
tests := []struct {
name string
leaves [][32]byte
depth uint64
want MerkleTreeNode
}{
{
name: "empty tree",
leaves: nil,
depth: 0,
want: &ZeroNode{},
},
{
name: "zero depth",
leaves: [][32]byte{hexString(t, fmt.Sprintf("%064d", 0))},
depth: 0,
want: &LeafNode{},
},
{
name: "depth of 1",
leaves: [][32]byte{hexString(t, fmt.Sprintf("%064d", 0))},
depth: 1,
want: &InnerNode{&LeafNode{}, &ZeroNode{}},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := create(tt.leaves, tt.depth); !reflect.DeepEqual(got, tt.want) {
require.DeepEqual(t, tt.want, got)
}
})
}
}
func Test_fromSnapshotParts(t *testing.T) {
tests := []struct {
name string
finalized [][32]byte
deposits uint64
level uint64
want MerkleTreeNode
}{
{
name: "empty",
finalized: nil,
deposits: 0,
level: 0,
want: &ZeroNode{},
},
{
name: "single finalized node",
finalized: [][32]byte{hexString(t, fmt.Sprintf("%064d", 0))},
deposits: 1,
level: 0,
want: &FinalizedNode{
depositCount: 1,
hash: [32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
},
},
{
name: "multiple deposits and 1 Finalized",
finalized: [][32]byte{hexString(t, fmt.Sprintf("%064d", 0))},
deposits: 2,
level: 4,
want: &InnerNode{
left: &InnerNode{&InnerNode{&FinalizedNode{depositCount: 2, hash: hexString(t, fmt.Sprintf("%064d", 0))}, &ZeroNode{1}}, &ZeroNode{2}},
right: &ZeroNode{3},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tree, err := fromSnapshotParts(tt.finalized, tt.deposits, tt.level)
require.NoError(t, err)
if got := tree; !reflect.DeepEqual(got, tt.want) {
require.DeepEqual(t, tt.want, got)
}
})
}
}
func Test_generateProof(t *testing.T) {
tests := []struct {
name string
leaves uint64
}{
{
name: "1 leaf",
leaves: 1,
},
{
name: "4 leaves",
leaves: 4,
},
{
name: "10 leaves",
leaves: 10,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
testCases, err := readTestCases()
require.NoError(t, err)
tree := New()
for _, c := range testCases[:tt.leaves] {
err = tree.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
for i := uint64(0); i < tt.leaves; i++ {
leaf, proof := generateProof(tree.tree, i, DepositContractDepth)
require.Equal(t, leaf, testCases[i].DepositDataRoot)
calcRoot := merkleRootFromBranch(leaf, proof, i)
require.Equal(t, tree.tree.GetRoot(), calcRoot)
}
})
}
}

View File

@@ -0,0 +1,355 @@
package depositsnapshot
import (
"crypto/sha256"
"encoding/hex"
"strconv"
"strings"
"testing"
"github.com/bazelbuild/rules_go/go/tools/bazel"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/io/file"
eth "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"gopkg.in/yaml.v3"
)
type testCase struct {
DepositData depositData `yaml:"deposit_data"`
DepositDataRoot [32]byte `yaml:"deposit_data_root"`
Eth1Data *eth1Data `yaml:"eth1_data"`
BlockHeight uint64 `yaml:"block_height"`
Snapshot snapshot `yaml:"snapshot"`
}
func (tc *testCase) UnmarshalYAML(value *yaml.Node) error {
raw := struct {
DepositData depositData `yaml:"deposit_data"`
DepositDataRoot string `yaml:"deposit_data_root"`
Eth1Data *eth1Data `yaml:"eth1_data"`
BlockHeight string `yaml:"block_height"`
Snapshot snapshot `yaml:"snapshot"`
}{}
err := value.Decode(&raw)
if err != nil {
return err
}
tc.DepositDataRoot, err = hexStringToByteArray(raw.DepositDataRoot)
if err != nil {
return err
}
tc.DepositData = raw.DepositData
tc.Eth1Data = raw.Eth1Data
tc.BlockHeight, err = stringToUint64(raw.BlockHeight)
if err != nil {
return err
}
tc.Snapshot = raw.Snapshot
return nil
}
type depositData struct {
Pubkey []byte `yaml:"pubkey"`
WithdrawalCredentials []byte `yaml:"withdrawal_credentials"`
Amount uint64 `yaml:"amount"`
Signature []byte `yaml:"signature"`
}
func (dd *depositData) UnmarshalYAML(value *yaml.Node) error {
raw := struct {
Pubkey string `yaml:"pubkey"`
WithdrawalCredentials string `yaml:"withdrawal_credentials"`
Amount string `yaml:"amount"`
Signature string `yaml:"signature"`
}{}
err := value.Decode(&raw)
if err != nil {
return err
}
dd.Pubkey, err = hexStringToBytes(raw.Pubkey)
if err != nil {
return err
}
dd.WithdrawalCredentials, err = hexStringToBytes(raw.WithdrawalCredentials)
if err != nil {
return err
}
dd.Amount, err = strconv.ParseUint(raw.Amount, 10, 64)
if err != nil {
return err
}
dd.Signature, err = hexStringToBytes(raw.Signature)
if err != nil {
return err
}
return nil
}
type eth1Data struct {
DepositRoot [32]byte `yaml:"deposit_root"`
DepositCount uint64 `yaml:"deposit_count"`
BlockHash [32]byte `yaml:"block_hash"`
}
func (ed *eth1Data) UnmarshalYAML(value *yaml.Node) error {
raw := struct {
DepositRoot string `yaml:"deposit_root"`
DepositCount string `yaml:"deposit_count"`
BlockHash string `yaml:"block_hash"`
}{}
err := value.Decode(&raw)
if err != nil {
return err
}
ed.DepositRoot, err = hexStringToByteArray(raw.DepositRoot)
if err != nil {
return err
}
ed.DepositCount, err = stringToUint64(raw.DepositCount)
if err != nil {
return err
}
ed.BlockHash, err = hexStringToByteArray(raw.BlockHash)
if err != nil {
return err
}
return nil
}
type snapshot struct {
DepositTreeSnapshot
}
func (sd *snapshot) UnmarshalYAML(value *yaml.Node) error {
raw := struct {
Finalized []string `yaml:"finalized"`
DepositRoot string `yaml:"deposit_root"`
DepositCount string `yaml:"deposit_count"`
ExecutionBlockHash string `yaml:"execution_block_hash"`
ExecutionBlockHeight string `yaml:"execution_block_height"`
}{}
err := value.Decode(&raw)
if err != nil {
return err
}
sd.finalized = make([][32]byte, len(raw.Finalized))
for i, finalized := range raw.Finalized {
sd.finalized[i], err = hexStringToByteArray(finalized)
if err != nil {
return err
}
}
sd.depositRoot, err = hexStringToByteArray(raw.DepositRoot)
if err != nil {
return err
}
sd.depositCount, err = stringToUint64(raw.DepositCount)
if err != nil {
return err
}
sd.executionBlock.Hash, err = hexStringToByteArray(raw.ExecutionBlockHash)
if err != nil {
return err
}
sd.executionBlock.Depth, err = stringToUint64(raw.ExecutionBlockHeight)
if err != nil {
return err
}
return nil
}
func readTestCases() ([]testCase, error) {
testFolders, err := bazel.ListRunfiles()
if err != nil {
return nil, err
}
for _, ff := range testFolders {
if strings.Contains(ff.ShortPath, "eip4881_spec_tests") &&
strings.Contains(ff.ShortPath, "eip-4881/test_cases.yaml") {
enc, err := file.ReadFileAsBytes(ff.Path)
if err != nil {
return nil, err
}
var testCases []testCase
err = yaml.Unmarshal(enc, &testCases)
if err != nil {
return []testCase{}, err
}
return testCases, nil
}
}
return nil, errors.New("spec test file not found")
}
func TestRead(t *testing.T) {
tcs, err := readTestCases()
require.NoError(t, err)
for _, tc := range tcs {
t.Log(tc)
}
}
func hexStringToByteArray(s string) (b [32]byte, err error) {
var raw []byte
raw, err = hexStringToBytes(s)
if err != nil {
return
}
if len(raw) != 32 {
err = errors.New("invalid hex string length")
return
}
copy(b[:], raw[:32])
return
}
func hexStringToBytes(s string) (b []byte, err error) {
b, err = hex.DecodeString(strings.TrimPrefix(s, "0x"))
return
}
func stringToUint64(s string) (uint64, error) {
value, err := strconv.ParseUint(s, 10, 64)
if err != nil {
return 0, err
}
return value, nil
}
func merkleRootFromBranch(leaf [32]byte, branch [][32]byte, index uint64) [32]byte {
root := leaf
for i, l := range branch {
ithBit := (index >> i) & 0x1
if ithBit == 1 {
root = sha256.Sum256(append(l[:], root[:]...))
} else {
root = sha256.Sum256(append(root[:], l[:]...))
}
}
return root
}
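A minimal sketch of how merkleRootFromBranch recombines a leaf with its branch, assuming it sits alongside the spec test in package depositsnapshot (the test name is illustrative, not part of this diff): for a two-leaf tree, the proof for leaf 0 is just leaf 1, and index bit 0 keeps the sibling on the right, so the recombined value equals sha256(leaf0 || leaf1).
func TestMerkleRootFromBranch_TwoLeaves(t *testing.T) {
a := [32]byte{1}
b := [32]byte{2}
// Root of a two-leaf tree: hash the leaves in index order.
want := sha256.Sum256(append(a[:], b[:]...))
// Leaf 0's branch is its single sibling, leaf 1; index bit 0 selects right-side hashing.
got := merkleRootFromBranch(a, [][32]byte{b}, 0)
require.Equal(t, want, got)
}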
func checkProof(t *testing.T, tree *DepositTree, index uint64) {
leaf, proof, err := tree.getProof(index)
require.NoError(t, err)
calcRoot := merkleRootFromBranch(leaf, proof, index)
require.Equal(t, tree.getRoot(), calcRoot)
}
func compareProof(t *testing.T, tree1, tree2 *DepositTree, index uint64) {
require.Equal(t, tree1.getRoot(), tree2.getRoot())
checkProof(t, tree1, index)
checkProof(t, tree2, index)
}
func cloneFromSnapshot(t *testing.T, snapshot DepositTreeSnapshot, testCases []testCase) *DepositTree {
cp, err := fromSnapshot(snapshot)
require.NoError(t, err)
for _, c := range testCases {
err = cp.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
return &cp
}
func TestDepositCases(t *testing.T) {
tree := New()
testCases, err := readTestCases()
require.NoError(t, err)
for _, c := range testCases {
err = tree.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
}
func TestFinalization(t *testing.T) {
tree := New()
testCases, err := readTestCases()
require.NoError(t, err)
for _, c := range testCases[:128] {
err = tree.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
originalRoot := tree.getRoot()
require.DeepEqual(t, testCases[127].Eth1Data.DepositRoot, originalRoot)
err = tree.finalize(&eth.Eth1Data{
DepositRoot: testCases[100].Eth1Data.DepositRoot[:],
DepositCount: testCases[100].Eth1Data.DepositCount,
BlockHash: testCases[100].Eth1Data.BlockHash[:],
}, testCases[100].BlockHeight)
require.NoError(t, err)
// ensure finalization doesn't change root
require.Equal(t, tree.getRoot(), originalRoot)
snapshotData, err := tree.getSnapshot()
require.NoError(t, err)
require.DeepEqual(t, testCases[100].Snapshot.DepositTreeSnapshot, snapshotData)
// create a copy of the tree from a snapshot by replaying
// the deposits after the finalized deposit
cp := cloneFromSnapshot(t, snapshotData, testCases[101:128])
// ensure original and copy have the same root
require.Equal(t, tree.getRoot(), cp.getRoot())
// finalize original again to check double finalization
err = tree.finalize(&eth.Eth1Data{
DepositRoot: testCases[105].Eth1Data.DepositRoot[:],
DepositCount: testCases[105].Eth1Data.DepositCount,
BlockHash: testCases[105].Eth1Data.BlockHash[:],
}, testCases[105].BlockHeight)
require.NoError(t, err)
// root should still be the same
require.Equal(t, originalRoot, tree.getRoot())
// create a copy of the tree by taking a snapshot again
snapshotData, err = tree.getSnapshot()
require.NoError(t, err)
cp = cloneFromSnapshot(t, snapshotData, testCases[106:128])
// create a copy of the tree by replaying ALL deposits from nothing
fullTreeCopy := New()
for _, c := range testCases[:128] {
err = fullTreeCopy.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
for i := 106; i < 128; i++ {
compareProof(t, tree, cp, uint64(i))
compareProof(t, tree, fullTreeCopy, uint64(i))
}
}
func TestSnapshotCases(t *testing.T) {
tree := New()
testCases, err := readTestCases()
require.NoError(t, err)
for _, c := range testCases {
err = tree.pushLeaf(c.DepositDataRoot)
require.NoError(t, err)
}
for _, c := range testCases {
err = tree.finalize(&eth.Eth1Data{
DepositRoot: c.Eth1Data.DepositRoot[:],
DepositCount: c.Eth1Data.DepositCount,
BlockHash: c.Eth1Data.BlockHash[:],
}, c.BlockHeight)
require.NoError(t, err)
s, err := tree.getSnapshot()
require.NoError(t, err)
require.DeepEqual(t, c.Snapshot.DepositTreeSnapshot, s)
}
}
func TestEmptyTreeSnapshot(t *testing.T) {
_, err := New().getSnapshot()
require.ErrorContains(t, "empty execution block", err)
}
func TestInvalidSnapshot(t *testing.T) {
invalidSnapshot := DepositTreeSnapshot{
finalized: nil,
depositRoot: Zerohashes[0],
depositCount: 0,
executionBlock: executionBlock{
Hash: Zerohashes[0],
Depth: 0,
},
}
_, err := fromSnapshot(invalidSnapshot)
require.ErrorContains(t, "snapshot root is invalid", err)
}

View File

@@ -133,6 +133,7 @@ func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.
return err
}
if !bytes.Equal(payload.ParentHash(), header.BlockHash()) {
log.Errorf("parent Hash %#x, header Hash: %#x", bytesutil.Trunc(payload.ParentHash()), bytesutil.Trunc(header.BlockHash()))
return ErrInvalidPayloadBlockHash
}
return nil

View File

@@ -9,6 +9,110 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
// UpgradeToDeneb updates a generic state to return the version Deneb state.
func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := state.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := state.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := state.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := state.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := state.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := state.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := state.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := state.HistoricalSummaries()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().DenebForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
ExcessDataGas: make([]byte, 32),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries:          summaries,
}
return state_native.InitializeFromProtoUnsafeDeneb(s)
}
// UpgradeToCapella updates a generic state to return the version Capella state.
func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)

View File

@@ -353,7 +353,7 @@ func ProcessRandaoMixesReset(state state.BeaconState) (state.BeaconState, error)
}
// ProcessHistoricalDataUpdate processes the updates to historical data during epoch processing.
// From Capella onward, per spec,state's historical summaries are updated instead of historical roots.
// From Capella onward, per spec, state's historical summaries are updated instead of historical roots.
func ProcessHistoricalDataUpdate(state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
nextEpoch := currentEpoch + 1

View File

@@ -81,6 +81,15 @@ func CanUpgradeToCapella(slot primitives.Slot) bool {
return epochStart && capellaEpoch
}
// CanUpgradeToDeneb returns true if the input `slot` can upgrade to Deneb.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == DENEB_FORK_EPOCH
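//
// For example (illustrative values only, not a real config): with SLOTS_PER_EPOCH = 32
// and DENEB_FORK_EPOCH = 5, the check passes exactly at slot 160, since 160 % 32 == 0
// and 160 / 32 == 5.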
func CanUpgradeToDeneb(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
denebEpoch := slots.ToEpoch(slot) == params.BeaconConfig().DenebForkEpoch
return epochStart && denebEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -23,7 +23,6 @@ go_library(
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -46,6 +45,7 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_protolambda_go_kzg//eth:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],

View File

@@ -11,12 +11,12 @@ import (
)
// WriteBlockToDisk writes a block as SSZ to a temp directory, using the given filename prefix. Debug only!
func WriteBlockToDisk(block interfaces.ReadOnlySignedBeaconBlock, failed bool) {
func WriteBlockToDisk(prefix string, block interfaces.ReadOnlySignedBeaconBlock, failed bool) {
if !features.Get().WriteSSZStateTransitions {
return
}
filename := fmt.Sprintf("beacon_block_%d.ssz", block.Block().Slot())
filename := fmt.Sprintf(prefix+"_beacon_block_%d.ssz", block.Block().Slot())
if failed {
filename = "failed_" + filename
}
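// For example, prefix "state_transition" at slot 42 yields
// "state_transition_beacon_block_42.ssz", or "failed_state_transition_beacon_block_42.ssz"
// when failed is true.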

View File

@@ -302,6 +302,14 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot primitives.
return nil, err
}
}
if time.CanUpgradeToDeneb(state.Slot()) {
state, err = capella.UpgradeToDeneb(state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
}
}
if highestSlot < state.Slot() {

View File

@@ -6,14 +6,15 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/protolambda/go-kzg/eth"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/altair"
b "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition/interop"
v "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"go.opencensus.io/trace"
@@ -57,9 +58,6 @@ func ExecuteStateTransitionNoVerifyAnySig(
defer span.End()
var err error
interop.WriteBlockToDisk(signed, false /* Has the block failed */)
interop.WriteStateToDisk(st)
parentRoot := signed.Block().ParentRoot()
st, err = ProcessSlotsUsingNextSlotCache(ctx, st, parentRoot[:], signed.Block().Slot())
if err != nil {
@@ -256,7 +254,7 @@ func ProcessOperationsNoVerifyAttsSigs(
if err != nil {
return nil, err
}
case version.Altair, version.Bellatrix, version.Capella:
case version.Altair, version.Bellatrix, version.Capella, version.Deneb:
state, err = altairOperations(ctx, state, signedBeaconBlock)
if err != nil {
return nil, err
@@ -356,9 +354,48 @@ func ProcessBlockForStateRoot(
return nil, errors.Wrap(err, "process_sync_aggregate failed")
}
if signed.Block().Version() == version.Deneb {
err := ValidateBlobKzgs(ctx, signed.Block().Body())
if err != nil {
return nil, errors.Wrap(err, "could not validate blob kzgs")
}
}
return state, nil
}
// ValidateBlobKzgs validates the blob kzgs in the beacon block.
//
// Spec code:
// def process_blob_kzg_commitments(state: BeaconState, body: BeaconBlockBody):
//
// assert verify_kzg_commitments_against_transactions(body.execution_payload.transactions, body.blob_kzg_commitments)
func ValidateBlobKzgs(ctx context.Context, body interfaces.ReadOnlyBeaconBlockBody) error {
_, span := trace.StartSpan(ctx, "core.state.ValidateBlobKzgs")
defer span.End()
payload, err := body.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from block")
}
blkKzgs, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments from block")
}
kzgs := make(eth.KZGCommitmentSequenceImpl, len(blkKzgs))
for i := range blkKzgs {
kzgs[i] = bytesutil.ToBytes48(blkKzgs[i])
}
txs, err := payload.Transactions()
if err != nil {
return errors.Wrap(err, "could not get transactions from payload")
}
if err := eth.VerifyKZGCommitmentsAgainstTransactions(txs, kzgs); err != nil {
return err
}
return nil
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,

View File

@@ -54,6 +54,7 @@ type ReadOnlyDatabase interface {
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)

View File

@@ -818,6 +818,16 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(capellaBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Capella block")
}
case hasDenebKey(enc):
rawBlock = &ethpb.SignedBeaconBlockDeneb{}
if err := rawBlock.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, err
}
case hasDenebBlindKey(enc):
rawBlock = &ethpb.SignedBlindedBeaconBlockDeneb{}
if err := rawBlock.UnmarshalSSZ(enc[len(denebBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Deneb block")
}
default:
// Marshal block bytes to phase 0 beacon block.
rawBlock = &ethpb.SignedBeaconBlock{}
@@ -854,6 +864,8 @@ func marshalBlockFull(
return nil, err
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaKey, encodedBlock...)), nil
case version.Bellatrix:
@@ -888,6 +900,8 @@ func marshalBlockBlinded(
return nil, errors.Wrap(err, "could not marshal blinded block")
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebBlindKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
case version.Bellatrix:

View File

@@ -37,3 +37,17 @@ func hasCapellaBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(capellaBlindKey)], capellaBlindKey)
}
func hasDenebKey(enc []byte) bool {
if len(denebKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(denebKey)], denebKey)
}
func hasDenebBlindKey(enc []byte) bool {
if len(denebBlindKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(denebBlindKey)], denebBlindKey)
}
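A minimal sketch of the fork-key framing used by this store (illustrative only; the helper names below are not part of this diff, and the snappy and errors packages are the ones already imported in this package): writers prepend denebKey to the SSZ bytes and snappy-compress the frame, and readers decompress, detect the fork with hasDenebKey, then strip the key before UnmarshalSSZ.
func encodeDenebExample(ssz []byte) []byte {
// Fork key first, then the raw SSZ bytes, then compress the whole frame.
return snappy.Encode(nil, append(denebKey, ssz...))
}
func decodeDenebExample(enc []byte) ([]byte, error) {
dec, err := snappy.Decode(nil, enc)
if err != nil {
return nil, err
}
if !hasDenebKey(dec) {
return nil, errors.New("not a deneb-keyed value")
}
// Strip the fork key to recover the SSZ bytes for UnmarshalSSZ.
return dec[len(denebKey):], nil
}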

View File

@@ -129,6 +129,8 @@ var Buckets = [][]byte{
feeRecipientBucket,
registrationBucket,
blobsBucket,
}
// NewKVStore initializes a new boltDB key-value store at the directory

View File

@@ -46,6 +46,7 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
blobsBucket = []byte("blobs")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
@@ -55,6 +56,9 @@ var (
capellaKey = []byte("capella")
capellaBlindKey = []byte("blind-capella")
saveBlindedBeaconBlocksKey = []byte("save-blinded-beacon-blocks")
denebKey = []byte("deneb")
denebBlindKey = []byte("blind-deneb")
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")
// block root tracking the progress of backfill, or pointing at genesis if backfill has not been initiated

View File

@@ -473,6 +473,19 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
}
switch {
case hasDenebKey(enc):
protoState := &ethpb.BeaconStateDeneb{}
if err := protoState.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Deneb")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return nil, err
}
if ok {
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeDeneb(protoState)
case hasCapellaKey(enc):
// Marshal state bytes to capella beacon state.
protoState := &ethpb.BeaconStateCapella{}
@@ -580,6 +593,19 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(capellaKey, rawObj...)), nil
case *ethpb.BeaconStateDeneb:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateDeneb)
if !ok {
return nil, errors.New("non valid inner state")
}
if rState == nil {
return nil, errors.New("nil state")
}
rawObj, err := rState.MarshalSSZ()
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(denebKey, rawObj...)), nil
default:
return nil, errors.New("invalid inner state")
}

View File

@@ -74,5 +74,5 @@ func TestStateSummary_CanDelete(t *testing.T) {
require.Equal(t, true, db.HasStateSummary(ctx, r1), "State summary should be saved")
require.NoError(t, db.deleteStateSummary(r1))
require.Equal(t, false, db.HasStateSummary(ctx, r1), "State summary should not be saved")
require.Equal(t, false, db.HasStateSummary(ctx, r1), "State summary should be deleted")
}

View File

@@ -37,6 +37,7 @@ go_library(
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
@@ -107,6 +108,7 @@ go_test(
"//beacon-chain/execution/types:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",

View File

@@ -212,38 +212,6 @@ func TestService_logTtdStatus(t *testing.T) {
require.Equal(t, false, reached)
}
func TestService_logTtdStatus_NotSyncedClient(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := (*pb.ExecutionBlock)(nil) // Nil response when a client is not synced
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.ErrorContains(t, "missing required field 'parentHash' for Header", err)
require.Equal(t, false, reached)
}
func emptyPayload() *pb.ExecutionPayload {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),

View File

@@ -15,6 +15,7 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/types"
"github.com/prysmaticlabs/prysm/v3/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
@@ -34,6 +35,7 @@ const (
NewPayloadMethod = "engine_newPayloadV1"
// NewPayloadMethodV2 v2 request string for JSON-RPC.
NewPayloadMethodV2 = "engine_newPayloadV2"
// NewPayloadMethodV3 v3 request string for JSON-RPC.
NewPayloadMethodV3 = "engine_newPayloadV3"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -42,12 +44,19 @@ const (
GetPayloadMethod = "engine_getPayloadV1"
// GetPayloadMethodV2 v2 request string for JSON-RPC.
GetPayloadMethodV2 = "engine_getPayloadV2"
// GetPayloadMethodV3 v3 request string for JSON-RPC.
GetPayloadMethodV3 = "engine_getPayloadV3"
// GetBlobsBundleMethod v1 request string for JSON-RPC.
GetBlobsBundleMethod = "engine_getBlobsBundleV1"
// ExchangeTransitionConfigurationMethod v1 request string for JSON-RPC.
ExchangeTransitionConfigurationMethod = "engine_exchangeTransitionConfigurationV1"
// ExecutionBlockByHashMethod request string for JSON-RPC.
ExecutionBlockByHashMethod = "eth_getBlockByHash"
// ExecutionBlockByNumberMethod request string for JSON-RPC.
ExecutionBlockByNumberMethod = "eth_getBlockByNumber"
// GetPayloadBodiesByHashV1 v1 request string for JSON-RPC.
GetPayloadBodiesByHashV1 = "engine_getPayloadBodiesByHashV1"
// GetPayloadBodiesByRangeV1 v1 request string for JSON-RPC.
GetPayloadBodiesByRangeV1 = "engine_getPayloadBodiesByRangeV1"
// Defines the seconds before timing out engine endpoints with non-block execution semantics.
defaultEngineTimeout = time.Second
)
@@ -84,6 +93,7 @@ type EngineCaller interface {
) error
ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error)
GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error)
}
var EmptyBlockHash = errors.New("Block hash is empty 0x0000...")
@@ -121,6 +131,15 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
if err != nil {
return nil, handleRPCError(err)
}
case *pb.ExecutionPayloadDeneb:
payloadPb, ok := payload.Proto().(*pb.ExecutionPayloadDeneb)
if !ok {
return nil, errors.New("execution data must be a Deneb execution payload")
}
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb)
if err != nil {
return nil, handleRPCError(err)
}
default:
return nil, errors.New("unknown execution data type")
}
@@ -168,7 +187,7 @@ func (s *Service) ForkchoiceUpdated(
if err != nil {
return nil, nil, handleRPCError(err)
}
case version.Capella:
case version.Capella, version.Deneb:
a, err := attrs.PbV2()
if err != nil {
return nil, nil, err
@@ -210,6 +229,15 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
if slots.ToEpoch(slot) >= params.BeaconConfig().DenebForkEpoch {
result := &pb.ExecutionPayloadDenebWithValue{}
err := s.rpcClient.CallContext(ctx, result, GetPayloadMethodV3, pb.PayloadIDBytes(payloadId))
if err != nil {
return nil, handleRPCError(err)
}
return blocks.WrappedExecutionPayloadDeneb(result.Payload, big.NewInt(0).SetBytes(result.Value))
}
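// Forks are checked newest-first: a slot at or past the Deneb fork epoch uses
// engine_getPayloadV3; otherwise execution falls through to the Capella (V2) and
// earlier paths below.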
if slots.ToEpoch(slot) >= params.BeaconConfig().CapellaForkEpoch {
result := &pb.ExecutionPayloadCapellaWithValue{}
err := s.rpcClient.CallContext(ctx, result, GetPayloadMethodV2, pb.PayloadIDBytes(payloadId))
@@ -417,6 +445,19 @@ func (s *Service) ExecutionBlocksByHashes(ctx context.Context, hashes []common.H
return execBlks, nil
}
// GetBlobsBundle calls the engine_getBlobsBundleV1 method via JSON-RPC.
func (s *Service) GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsBundle")
defer span.End()
d := time.Now().Add(defaultEngineTimeout)
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
result := &pb.BlobsBundle{}
err := s.rpcClient.CallContext(ctx, result, GetBlobsBundleMethod, pb.PayloadIDBytes(payloadId))
return result, handleRPCError(err)
}
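A minimal sketch of a caller pairing this with the forkchoice flow (the helper name and wiring are assumptions, not part of this diff): the payload ID handed back by a prior engine_forkchoiceUpdated call for the slot being proposed is reused to fetch the matching blobs bundle.
func fetchBlobsForPayload(ctx context.Context, engine EngineCaller, payloadID [8]byte) (*pb.BlobsBundle, error) {
// The payload ID comes from an earlier forkchoice update that carried payload attributes.
return engine.GetBlobsBundle(ctx, payloadID)
}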
// HeaderByHash returns the relevant header details for the provided block hash.
func (s *Service) HeaderByHash(ctx context.Context, hash common.Hash) (*types.HeaderInfo, error) {
var hdr *types.HeaderInfo
@@ -437,6 +478,50 @@ func (s *Service) HeaderByNumber(ctx context.Context, number *big.Int) (*types.H
return hdr, err
}
// GetPayloadBodiesByHash returns the relevant payload bodies for the provided block hash.
func (s *Service) GetPayloadBodiesByHash(ctx context.Context, executionBlockHashes []common.Hash) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByHashV1")
defer span.End()
result := make([]*pb.ExecutionPayloadBodyV1, 0)
err := s.rpcClient.CallContext(ctx, &result, GetPayloadBodiesByHashV1, executionBlockHashes)
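// The execution client returns null for any block hash it does not know; normalize
// those entries to empty bodies so callers never see nil elements.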
for i, item := range result {
if item == nil {
result[i] = &pb.ExecutionPayloadBodyV1{
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}
}
}
return result, handleRPCError(err)
}
// GetPayloadBodiesByRange returns the relevant payload bodies for the provided range.
func (s *Service) GetPayloadBodiesByRange(ctx context.Context, start, count uint64) ([]*pb.ExecutionPayloadBodyV1, error) {
if !features.Get().EnableOptionalEngineMethods {
return nil, errors.New("optional engine methods not enabled")
}
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetPayloadBodiesByRangeV1")
defer span.End()
result := make([]*pb.ExecutionPayloadBodyV1, 0)
err := s.rpcClient.CallContext(ctx, &result, GetPayloadBodiesByRangeV1, start, count)
for i, item := range result {
if item == nil {
result[i] = &pb.ExecutionPayloadBodyV1{
Transactions: make([][]byte, 0),
Withdrawals: make([]*pb.Withdrawal, 0),
}
}
}
return result, handleRPCError(err)
}
// ReconstructFullBlock takes in a blinded beacon block and reconstructs
// a beacon block with a full execution payload via the engine API.
func (s *Service) ReconstructFullBlock(
@@ -669,6 +754,9 @@ func handleRPCError(err error) error {
case -38003:
errInvalidPayloadAttributesCount.Inc()
return ErrInvalidPayloadAttributes
case -38004:
errRequestTooLargeCount.Inc()
return ErrRequestTooLarge
case -32000:
errServerErrorCount.Inc()
// Only -32000 status codes are data errors in the RPC specification.

View File

@@ -20,6 +20,7 @@ import (
"github.com/holiman/uint256"
"github.com/pkg/errors"
mocks "github.com/prysmaticlabs/prysm/v3/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v3/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
@@ -1314,7 +1315,7 @@ func fixtures() map[string]interface{} {
ReceiptsRoot: &common.Hash{'d'},
LogsBloom: &hexutil.Bytes{'e'},
PrevRandao: &common.Hash{'f'},
BaseFeePerGas: fmt.Sprintf("%s", "0x123"),
BaseFeePerGas: "0x123",
BlockHash: &common.Hash{'g'},
Transactions: []hexutil.Bytes{{'h'}},
Withdrawals: []*pb.Withdrawal{},
@@ -1323,7 +1324,7 @@ func fixtures() map[string]interface{} {
GasUsed: &hexUint,
Timestamp: &hexUint,
},
BlockValue: fmt.Sprintf("%s", "0x11ff"),
BlockValue: "0x11ff",
}
parent := bytesutil.PadTo([]byte("parentHash"), fieldparams.RootLength)
sha3Uncles := bytesutil.PadTo([]byte("sha3Uncles"), fieldparams.RootLength)
@@ -1817,3 +1818,505 @@ func newPayloadV2Setup(t *testing.T, status *pb.PayloadStatus, payload *pb.Execu
service.rpcClient = rpcClient
return service
}
func TestCapella_PayloadBodiesByHash(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableOptionalEngineMethods: true,
})
defer resetFn()
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 0)
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 0, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("single element response null works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 1)
executionPayloadBodies[0] = nil
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 1, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("empty, null, full works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 3)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
executionPayloadBodies[1] = nil
executionPayloadBodies[2] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 3, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("full works, single item", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 1)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 1, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("full works, multiple items", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 2)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
executionPayloadBodies[1] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 2,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 2, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("returning empty, null, empty should work properly", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// [A, B, C] but no B in the server means
// we get [Abody, null, Cbody].
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 3)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
executionPayloadBodies[1] = nil
executionPayloadBodies[2] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByHash(ctx, []common.Hash{})
require.NoError(t, err)
require.Equal(t, 3, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
}
func TestCapella_PayloadBodiesByRange(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
EnableOptionalEngineMethods: true,
})
defer resetFn()
t.Run("empty response works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 0)
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 0, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("single element response null works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 1)
executionPayloadBodies[0] = nil
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 1, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("empty, null, full works", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 3)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
executionPayloadBodies[1] = nil
executionPayloadBodies[2] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 3, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("full works, single item", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 1)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 1, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("full works, multiple items", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 2)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 1,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
executionPayloadBodies[1] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{hexutil.MustDecode("0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8ddc86")},
Withdrawals: []*pb.Withdrawal{{
Index: 2,
ValidatorIndex: 1,
Address: hexutil.MustDecode("0xcf8e0d4e9587369b2301d0790347320302cc0943"),
Amount: 1,
}},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 2, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
t.Run("returning empty, null, empty should work properly", func(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// [A, B, C] but no B in the server means
// we get [Abody, null, Cbody].
executionPayloadBodies := make([]*pb.ExecutionPayloadBodyV1, 3)
executionPayloadBodies[0] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
executionPayloadBodies[1] = nil
executionPayloadBodies[2] = &pb.ExecutionPayloadBodyV1{
Transactions: [][]byte{},
Withdrawals: []*pb.Withdrawal{},
}
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": executionPayloadBodies,
}
err := json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
ctx := context.Background()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
service := &Service{}
service.rpcClient = rpcClient
results, err := service.GetPayloadBodiesByRange(ctx, uint64(1), uint64(2))
require.NoError(t, err)
require.Equal(t, 3, len(results))
for _, item := range results {
require.NotNil(t, item)
}
})
}
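
The "empty, null, empty" case above passes because nil entries returned by the execution client are replaced with empty bodies before the results are handed back, so every requested slot yields a non-nil element. A minimal sketch of that substitution, using illustrative stand-in types rather than Prysm's own:

package main

import "fmt"

// payloadBody is a simplified stand-in for pb.ExecutionPayloadBodyV1.
type payloadBody struct {
    Transactions [][]byte
}

// fillNilBodies replaces null entries (blocks the execution client does not
// have) with empty bodies so callers always get one non-nil element per slot.
func fillNilBodies(bodies []*payloadBody) []*payloadBody {
    out := make([]*payloadBody, len(bodies))
    for i, b := range bodies {
        if b == nil {
            out[i] = &payloadBody{Transactions: [][]byte{}}
            continue
        }
        out[i] = b
    }
    return out
}

func main() {
    got := fillNilBodies([]*payloadBody{{}, nil, {}})
    for i, b := range got {
        fmt.Printf("body %d non-nil: %v\n", i, b != nil)
    }
}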

View File

@@ -34,4 +34,6 @@ var (
ErrInvalidBlockHashPayloadStatus = errors.New("payload status is INVALID_BLOCK_HASH")
// ErrNilResponse when the response is nil.
ErrNilResponse = errors.New("nil response")
// ErrRequestTooLarge when the request is too large.
ErrRequestTooLarge = errors.New("request too large")
)

View File

@@ -479,7 +479,7 @@ func (s *Service) requestBatchedHeadersAndLogs(ctx context.Context) error {
func (s *Service) retrieveBlockHashAndTime(ctx context.Context, blkNum *big.Int) ([32]byte, uint64, error) {
bHash, err := s.BlockHashByHeight(ctx, blkNum)
if err != nil {
return [32]byte{}, 0, errors.Wrap(err, "could not get eth1 block hash")
return [32]byte{}, 0, nil
}
if bHash == [32]byte{} {
return [32]byte{}, 0, errors.Wrap(err, "got empty block hash")

View File

@@ -71,4 +71,8 @@ var (
Name: "reconstructed_execution_payload_count",
Help: "Count the number of execution payloads that are reconstructed using JSON-RPC from payload headers",
})
errRequestTooLargeCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "execution_payload_bodies_count",
Help: "The number of requested payload bodies is too large",
})
)

View File

@@ -3,6 +3,7 @@ package execution
import (
"context"
"fmt"
"net/http"
"net/url"
"strings"
"time"
@@ -14,7 +15,6 @@ import (
contracts "github.com/prysmaticlabs/prysm/v3/contracts/deposit"
"github.com/prysmaticlabs/prysm/v3/io/logs"
"github.com/prysmaticlabs/prysm/v3/network"
"github.com/prysmaticlabs/prysm/v3/network/authorization"
)
func (s *Service) setupExecutionClientConnections(ctx context.Context, currEndpoint network.Endpoint) error {
@@ -34,7 +34,7 @@ func (s *Service) setupExecutionClientConnections(ctx context.Context, currEndpo
}
s.depositContractCaller = depositContractCaller
// Ensure we have the correct chain and deposit IDs.
//Ensure we have the correct chain and deposit IDs.
if err := ensureCorrectExecutionChain(ctx, fetcher); err != nil {
client.Close()
errStr := err.Error()
@@ -113,9 +113,21 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
if err != nil {
return nil, err
}
headers := http.Header{}
for _, h := range s.cfg.headers {
if h == "" {
continue
}
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
log.Warnf("Incorrect HTTP header flag format. Skipping %v", keyValue[0])
continue
}
headers.Set(keyValue[0], strings.Join(keyValue[1:], "="))
}
switch u.Scheme {
case "http", "https":
client, err = gethRPC.DialOptions(ctx, endpoint.Url, gethRPC.WithHTTPClient(endpoint.HttpClient()))
client, err = gethRPC.DialOptions(ctx, endpoint.Url, gethRPC.WithHTTPClient(endpoint.HttpClient()), gethRPC.WithHeaders(headers))
if err != nil {
return nil, err
}
@@ -127,13 +139,6 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
default:
return nil, fmt.Errorf("no known transport for URL scheme %q", u.Scheme)
}
if endpoint.Auth.Method != authorization.None {
header, err := endpoint.Auth.ToHeaderValue()
if err != nil {
return nil, err
}
client.SetHeader("Authorization", header)
}
for _, h := range s.cfg.headers {
if h != "" {
keyValue := strings.Split(h, "=")
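
The header-flag parsing added above splits each "Key=Value" flag on '=' and rejoins the remainder, so values that themselves contain '=' (for example bearer tokens) survive intact. A small self-contained sketch of the same idea, not the exact Prysm code:

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// headersFromFlags parses "Key=Value" strings into an http.Header, skipping
// empty or malformed entries, mirroring the loop in newRPCClientWithAuth.
func headersFromFlags(flags []string) http.Header {
    headers := http.Header{}
    for _, h := range flags {
        if h == "" {
            continue
        }
        keyValue := strings.Split(h, "=")
        if len(keyValue) < 2 {
            fmt.Printf("skipping malformed header flag %q\n", h)
            continue
        }
        // Rejoin everything after the first '=' so the value is preserved verbatim.
        headers.Set(keyValue[0], strings.Join(keyValue[1:], "="))
    }
    return headers
}

func main() {
    h := headersFromFlags([]string{"Authorization=Bearer abc=def", "Malformed"})
    fmt.Println(h.Get("Authorization")) // Bearer abc=def
}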

View File

@@ -38,6 +38,7 @@ type EngineClient struct {
TerminalBlockHash []byte
TerminalBlockHashExists bool
OverrideValidHash [32]byte
BlobsBundle *pb.BlobsBundle
BlockValue *big.Int
}
@@ -172,3 +173,8 @@ func (e *EngineClient) GetTerminalBlockHash(ctx context.Context, transitionTime
blk = parentBlk
}
}
// GetBlobsBundle --
func (e *EngineClient) GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error) {
return e.BlobsBundle, nil
}

View File

@@ -16,9 +16,9 @@ import (
// consider a block to be late, and thus a candidate to being reorged.
const orphanLateBlockFirstThreshold = 4
// processAttestationsThreshold is the number of seconds after which we
// ProcessAttestationsThreshold is the number of seconds after which we
// process attestations for the current slot
const processAttestationsThreshold = 10
const ProcessAttestationsThreshold = 10
// applyWeightChanges recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node.
@@ -148,7 +148,7 @@ func (n *Node) arrivedEarly(genesisTime uint64) (bool, error) {
// slot will have secs = 10 below.
func (n *Node) arrivedAfterOrphanCheck(genesisTime uint64) (bool, error) {
secs, err := slots.SecondsSinceSlotStart(n.slot, genesisTime, n.timestamp)
return secs >= processAttestationsThreshold, err
return secs >= ProcessAttestationsThreshold, err
}
// nodeTreeDump appends to the given list all the nodes descending from this one
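
The renamed ProcessAttestationsThreshold is compared against the number of seconds elapsed since the start of the block's slot. A worked example of that check, assuming mainnet's 12-second slots (the real computation lives in slots.SecondsSinceSlotStart):

package main

import "fmt"

const (
    secondsPerSlot               = 12 // assumed mainnet value, for illustration
    processAttestationsThreshold = 10
)

// secondsSinceSlotStart is a simplified version of slots.SecondsSinceSlotStart.
func secondsSinceSlotStart(slot, genesisTime, timestamp uint64) uint64 {
    return timestamp - (genesisTime + slot*secondsPerSlot)
}

func main() {
    genesis := uint64(1_600_000_000)
    slot := uint64(5)
    timestamp := genesis + slot*secondsPerSlot + 11 // block arrives 11s into slot 5
    secs := secondsSinceSlotStart(slot, genesis, timestamp)
    fmt.Println(secs, secs >= processAttestationsThreshold) // 11 true
}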

View File

@@ -294,7 +294,7 @@ func TestNode_TimeStampsChecks(t *testing.T) {
require.Equal(t, false, late)
// very late block
driftGenesisTime(f, 3, processAttestationsThreshold+1)
driftGenesisTime(f, 3, ProcessAttestationsThreshold+1)
root = [32]byte{'c'}
state, blkRoot, err = prepareForkchoiceState(ctx, 3, root, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)

View File

@@ -3,6 +3,7 @@ package doublylinkedtree
import (
"time"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
@@ -41,9 +42,11 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
if head == nil {
return
}
if head.slot != slots.CurrentSlot(f.store.genesisTime) {
return
}
// Do not reorg on epoch boundaries
if (head.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
return
@@ -90,6 +93,9 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// This function needs to be called only when proposing a block and all
// attestation processing has already happened.
func (f *ForkChoice) GetProposerHead() [32]byte {
if features.Get().DisableReorgLateBlocks {
return f.CachedHeadRoot()
}
head := f.store.headNode
if head == nil {
return [32]byte{}
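
The new guard in ShouldOverrideFCU skips the late-block reorg when the head sits in the last slot of an epoch, i.e. when (head.slot+1) % SlotsPerEpoch == 0. A quick illustration, assuming the mainnet value of 32 slots per epoch:

package main

import "fmt"

func main() {
    const slotsPerEpoch = 32 // assumed mainnet value
    for _, slot := range []uint64{30, 31, 32, 63} {
        lastOfEpoch := (slot+1)%slotsPerEpoch == 0
        fmt.Printf("slot %d: last slot of its epoch = %v\n", slot, lastOfEpoch)
    }
}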

View File

@@ -143,18 +143,19 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
"node2")), 0666))
require.NoError(t, set.Parse([]string{"test-command", "--" + cmd.ConfigFileFlag.Name, "flags_test.yaml"}))
comFlags := cmd.WrapFlags([]cli.Flag{
&cli.StringFlag{
Name: cmd.ConfigFileFlag.Name,
},
&cli.StringSliceFlag{
Name: cmd.BootstrapNode.Name,
},
})
command := &cli.Command{
Name: "test-command",
Flags: cmd.WrapFlags([]cli.Flag{
&cli.StringFlag{
Name: cmd.ConfigFileFlag.Name,
},
&cli.StringSliceFlag{
Name: cmd.BootstrapNode.Name,
},
}),
Name: "test-command",
Flags: comFlags,
Before: func(cliCtx *cli.Context) error {
return cmd.LoadFlagsFromConfig(cliCtx, cliCtx.Command.Flags)
return cmd.LoadFlagsFromConfig(cliCtx, comFlags)
},
Action: func(cliCtx *cli.Context) error {
//TODO: https://github.com/urfave/cli/issues/1197 right now does not set flag
@@ -165,7 +166,7 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
return nil
},
}
require.NoError(t, command.Run(context))
require.NoError(t, command.Run(context, context.Args().Slice()...))
require.NoError(t, os.Remove("flags_test.yaml"))
}

View File

@@ -86,7 +86,6 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/muxer/mplex:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/protocol/identify:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",

View File

@@ -141,6 +141,57 @@ func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *
}
}
// BroadcastBlob broadcasts a blob to the p2p network. The message is assumed to be
// broadcast to the current fork and to the input subnet.
func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastBlob")
defer span.End()
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastBlob(ctx, subnet, blob, forkDigest)
return nil
}
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar, forkDigest [4]byte) {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
// Ensure we have peers with this subnet.
s.subnetLocker(subnet).RLock()
hasPeer := s.hasPeerWithSubnet(blobSubnetToTopic(subnet, forkDigest))
s.subnetLocker(subnet).RUnlock()
if !hasPeer {
if err := func() error {
s.subnetLocker(subnet).Lock()
defer s.subnetLocker(subnet).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, blobSubnetToTopic(subnet, forkDigest), subnet, 1)
if err != nil {
return err
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
}
}
if err := s.broadcastObject(ctx, blobSidecar, blobSubnetToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecar")
tracing.AnnotateError(span, err)
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [4]byte) {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
@@ -232,3 +283,7 @@ func attestationToTopic(subnet uint64, forkDigest [4]byte) string {
func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
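
blobSubnetToTopic formats the gossip topic from the fork digest and subnet index. An illustrative reconstruction of the resulting topic string, assuming GossipProtocolAndDigest expands to "/eth2/%x/" (the authoritative constant lives in the p2p package):

package main

import "fmt"

// Assumed expansion of BlobSubnetTopicFormat; the real value is
// GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d".
const blobSubnetTopicFormat = "/eth2/%x/blob_sidecar_%d"

func main() {
    forkDigest := [4]byte{0xde, 0xad, 0xbe, 0xef}
    subnet := uint64(3)
    fmt.Printf(blobSubnetTopicFormat+"\n", forkDigest, subnet)
    // Output: /eth2/deadbeef/blob_sidecar_3
}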

View File

@@ -17,7 +17,8 @@ func (s *Service) forkWatcher() {
currEpoch := slots.ToEpoch(currSlot)
if currEpoch == params.BeaconConfig().AltairForkEpoch ||
currEpoch == params.BeaconConfig().BellatrixForkEpoch ||
currEpoch == params.BeaconConfig().CapellaForkEpoch {
currEpoch == params.BeaconConfig().CapellaForkEpoch ||
currEpoch == params.BeaconConfig().DenebForkEpoch {
// If we are in the fork epoch, we update our enr with
// the updated fork digest. This is repeated over
// the epoch, which might be slightly wasteful

View File

@@ -119,6 +119,9 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultProposerSlashingTopicParams(), nil
case strings.Contains(topic, GossipAttesterSlashingMessage):
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage):
// TODO(Deneb): Using the default block scoring. But this should be updated.
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
default:

View File

@@ -21,12 +21,16 @@ var gossipTopicMappings = map[string]proto.Message{
SyncContributionAndProofSubnetTopicFormat: &ethpb.SignedContributionAndProof{},
SyncCommitteeSubnetTopicFormat: &ethpb.SyncCommitteeMessage{},
BlsToExecutionChangeSubnetTopicFormat: &ethpb.SignedBLSToExecutionChange{},
BlobSubnetTopicFormat: &ethpb.SignedBlobSidecar{},
}
// GossipTopicMappings is a function to return the assigned data type
// versioned by epoch.
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if topic == BlockSubnetTopicFormat {
if epoch >= params.BeaconConfig().DenebForkEpoch {
return &ethpb.SignedBeaconBlockDeneb{}
}
if epoch >= params.BeaconConfig().CapellaForkEpoch {
return &ethpb.SignedBeaconBlockCapella{}
}
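
GossipTopicMappings resolves the block topic to the newest block type whose fork epoch has been reached, checking Deneb before Capella. A minimal sketch of that selection pattern with placeholder fork epochs (not real config values):

package main

import "fmt"

type epoch uint64

// blockTypeForEpoch mirrors the ordering of the checks above: newest fork first.
func blockTypeForEpoch(e epoch) string {
    const (
        capellaForkEpoch epoch = 100 // placeholder
        denebForkEpoch   epoch = 200 // placeholder
    )
    switch {
    case e >= denebForkEpoch:
        return "SignedBeaconBlockDeneb"
    case e >= capellaForkEpoch:
        return "SignedBeaconBlockCapella"
    default:
        return "earlier fork block type"
    }
}

func main() {
    fmt.Println(blockTypeForEpoch(250)) // SignedBeaconBlockDeneb
    fmt.Println(blockTypeForEpoch(150)) // SignedBeaconBlockCapella
}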

View File

@@ -35,6 +35,7 @@ type Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -57,12 +57,17 @@ func (s *Service) CanSubscribe(topic string) bool {
log.WithError(err).Error("Could not determine Capella fork digest")
return false
}
denebForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Deneb fork digest")
return false
}
switch parts[2] {
case fmt.Sprintf("%x", phase0ForkDigest):
case fmt.Sprintf("%x", altairForkDigest):
case fmt.Sprintf("%x", bellatrixForkDigest):
case fmt.Sprintf("%x", capellaForkDigest):
case fmt.Sprintf("%x", denebForkDigest):
default:
return false
}

View File

@@ -92,7 +92,7 @@ func TestService_CanSubscribe(t *testing.T) {
formatting := []interface{}{digest}
// Special case for attestation subnets which have a second formatting placeholder.
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat {
if topic == AttestationSubnetTopicFormat || topic == SyncCommitteeSubnetTopicFormat || topic == BlobSubnetTopicFormat {
formatting = append(formatting, 0 /* some subnet ID */)
}

View File

@@ -37,6 +37,9 @@ const PingMessageName = "/ping"
// MetadataMessageName specifies the name for the metadata message topic.
const MetadataMessageName = "/metadata"
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
const BlobSidecarsByRangeName = "/blob_sidecars_by_range"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -52,6 +55,15 @@ const (
// RPCMetaDataTopicV1 defines the v1 topic for the metadata rpc method.
RPCMetaDataTopicV1 = protocolPrefix + MetadataMessageName + SchemaVersionV1
// RPCBlobSidecarsByRootTopicV1 is a topic for requesting blob sidecars by their block root. New in deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// RPCBlobSidecarsByRangeTopicV1 is a topic for requesting blob sidecars
// in the slot range [start_slot, start_slot + count), leading up to the current head block as selected by fork choice.
// Protocol ID: /eth2/beacon_chain/req/blob_sidecars_by_range/1/ - New in deneb.
RPCBlobSidecarsByRangeTopicV1 = protocolPrefix + BlobSidecarsByRangeName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
RPCBlocksByRangeTopicV2 = protocolPrefix + BeaconBlocksByRangeMessageName + SchemaVersionV2
@@ -83,6 +95,10 @@ var RPCTopicMappings = map[string]interface{}{
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
// BlobSidecarsByRange v1 Message
RPCBlobSidecarsByRangeTopicV1: new(pb.BlobSidecarsByRangeRequest),
// BlobSidecarsByRoot v1 Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
}
// Maps all registered protocol prefixes.
@@ -99,6 +115,8 @@ var messageMapping = map[string]bool{
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobSidecarsByRangeName: true,
BlobSidecarsByRootName: true,
}
// Maps all the RPC messages which are to updated in altair.
@@ -113,6 +131,15 @@ var versionMapping = map[string]bool{
SchemaVersionV2: true,
}
var PreAltairV1SchemaMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
}
// VerifyTopicMapping verifies that the topic and its accompanying
// message type is correct.
func VerifyTopicMapping(topic string, msg interface{}) error {
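
Per the comments above, the new BlobSidecarsByRange protocol ID is built by concatenating the request prefix, the message name, and the schema version. A sketch of the resulting string, assuming the prefix "/eth2/beacon_chain/req" and version suffix "/1" described in the comments:

package main

import "fmt"

const (
    protocolPrefix          = "/eth2/beacon_chain/req" // assumed value of the unexported prefix
    blobSidecarsByRangeName = "/blob_sidecars_by_range"
    schemaVersionV1         = "/1"
)

func main() {
    topic := protocolPrefix + blobSidecarsByRangeName + schemaVersionV1
    fmt.Println(topic) // /eth2/beacon_chain/req/blob_sidecars_by_range/1
}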

View File

@@ -17,7 +17,6 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/libp2p/go-libp2p/p2p/protocol/identify"
"github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/async"
@@ -133,7 +132,6 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
}
s.host = h
s.host.RemoveStreamHandler(identify.IDDelta)
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub

View File

@@ -138,6 +138,11 @@ func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ *ethpb.Att
return nil
}
// BroadcastBlob -- fake.
func (p *FakeP2P) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil

View File

@@ -33,3 +33,9 @@ func (m *MockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ uin
m.BroadcastCalled = true
return nil
}
// BroadcastBlob records a broadcast occurred.
func (m *MockBroadcaster) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error {
m.BroadcastCalled = true
return nil
}

View File

@@ -51,7 +51,8 @@ func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(string) bool, network.StreamHandler) {}
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}

View File

@@ -170,6 +170,11 @@ func (p *TestP2P) BroadcastAttestation(_ context.Context, _ uint64, _ *ethpb.Att
return nil
}
func (p *TestP2P) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
p.BroadcastCalled = true
return nil
}
// BroadcastSyncCommitteeMessage broadcasts a sync committee message.
func (p *TestP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
p.BroadcastCalled = true

View File

@@ -28,6 +28,8 @@ const (
GossipContributionAndProofMessage = "sync_committee_contribution_and_proof"
// GossipBlsToExecutionChangeMessage is the name for the bls to execution change message type.
GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blob_sidecar"
// Topic Formats
//
@@ -49,4 +51,6 @@ const (
SyncContributionAndProofSubnetTopicFormat = GossipProtocolAndDigest + GossipContributionAndProofMessage
// BlsToExecutionChangeSubnetTopicFormat is the topic format for the bls to execution change subnet.
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
)

View File

@@ -53,6 +53,11 @@ func InitializeDataMaps() {
&ethpb.SignedBeaconBlockCapella{Block: &ethpb.BeaconBlockCapella{Body: &ethpb.BeaconBlockBodyCapella{}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockDeneb{Block: &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}},
)
},
}
// Reset our metadata map.
@@ -69,5 +74,8 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().CapellaForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
bytesutil.ToBytes4(params.BeaconConfig().DenebForkVersion): func() metadata.Metadata {
return wrapper.WrappedMetadataV1(&ethpb.MetaDataV1{})
},
}
}

View File

@@ -12,4 +12,5 @@ var (
ErrRateLimited = errors.New("rate limited")
ErrIODeadline = errors.New("i/o deadline exceeded")
ErrInvalidRequest = errors.New("invalid range, step or count")
ErrBlobLTMinRequest = errors.New("blob slot < minimum_request_epoch")
)

View File

@@ -7,6 +7,7 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v3/config/params"
eth "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
const rootLength = 32
@@ -31,6 +32,7 @@ func (b *SSZBytes) HashTreeRootWith(hh *ssz.Hasher) error {
// BeaconBlockByRootsReq specifies the block by roots request type.
type BeaconBlockByRootsReq [][rootLength]byte
type BlobSidecarsByRootReq []*eth.BlobIdentifier
// MarshalSSZTo marshals the block by roots request with the provided byte slice.
func (r *BeaconBlockByRootsReq) MarshalSSZTo(dst []byte) ([]byte, error) {

View File

@@ -46,6 +46,7 @@ pkg_deb(
package = "prysm-beacon-chain",
postinst = "postinst.sh",
preinst = "preinst.sh",
tags = ["no-remote"],
version_file = "//runtime:version_file",
visibility = ["//beacon-chain:__pkg__"],
)

View File

@@ -211,6 +211,14 @@ func handlePostSSZ(m *apimiddleware.ApiProxyMiddleware, endpoint apimiddleware.E
return true
}
func handleGetBlobsSidecarSSZ(m *apimiddleware.ApiProxyMiddleware, endpoint apimiddleware.Endpoint, w http.ResponseWriter, req *http.Request) (handled bool) {
config := sszConfig{
fileName: "blobs_sidecar.ssz",
responseJson: &SszResponseJson{},
}
return handleGetSSZ(m, endpoint, w, req, config)
}
func sszRequested(req *http.Request) (bool, error) {
accept := req.Header.Values("Accept")
if len(accept) == 0 {

View File

@@ -503,6 +503,13 @@ type capellaBlockResponseJson struct {
Finalized bool `json:"finalized"`
}
type denebBlockResponseJson struct {
Version string `json:"version"`
Data *SignedBeaconBlockDenebContainerJson `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type bellatrixBlindedBlockResponseJson struct {
Version string `json:"version" enum:"true"`
Data *SignedBlindedBeaconBlockBellatrixContainerJson `json:"data"`
@@ -517,6 +524,12 @@ type capellaBlindedBlockResponseJson struct {
Finalized bool `json:"finalized"`
}
type denebBlindedBlockResponseJson struct {
Version string `json:"version"`
Data *SignedBlindedBeaconBlockDenebContainerJson `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
}
func serializeV2Block(response interface{}) (apimiddleware.RunDefault, []byte, apimiddleware.ErrorJson) {
respContainer, ok := response.(*BlockV2ResponseJson)
if !ok {
@@ -565,6 +578,16 @@ func serializeV2Block(response interface{}) (apimiddleware.RunDefault, []byte, a
ExecutionOptimistic: respContainer.ExecutionOptimistic,
Finalized: respContainer.Finalized,
}
case strings.EqualFold(respContainer.Version, strings.ToLower(ethpbv2.Version_Deneb.String())):
actualRespContainer = &denebBlockResponseJson{
Version: respContainer.Version,
Data: &SignedBeaconBlockDenebContainerJson{
Message: respContainer.Data.DenebBlock,
Signature: respContainer.Data.Signature,
},
ExecutionOptimistic: respContainer.ExecutionOptimistic,
Finalized: respContainer.Finalized,
}
default:
return false, nil, apimiddleware.InternalServerError(fmt.Errorf("unsupported block version '%s'", respContainer.Version))
}
@@ -624,6 +647,15 @@ func serializeBlindedBlock(response interface{}) (apimiddleware.RunDefault, []by
ExecutionOptimistic: respContainer.ExecutionOptimistic,
Finalized: respContainer.Finalized,
}
case strings.EqualFold(respContainer.Version, strings.ToLower(ethpbv2.Version_Deneb.String())):
actualRespContainer = &denebBlindedBlockResponseJson{
Version: respContainer.Version,
Data: &SignedBlindedBeaconBlockDenebContainerJson{
Message: respContainer.Data.DenebBlock,
Signature: respContainer.Data.Signature,
},
ExecutionOptimistic: respContainer.ExecutionOptimistic,
}
default:
return false, nil, apimiddleware.InternalServerError(fmt.Errorf("unsupported block version '%s'", respContainer.Version))
}
@@ -655,6 +687,11 @@ type capellaStateResponseJson struct {
Data *BeaconStateCapellaJson `json:"data"`
}
type denebStateResponseJson struct {
Version string `json:"version"`
Data *BeaconStateDenebJson `json:"data"`
}
func serializeV2State(response interface{}) (apimiddleware.RunDefault, []byte, apimiddleware.ErrorJson) {
respContainer, ok := response.(*BeaconStateV2ResponseJson)
if !ok {
@@ -683,6 +720,11 @@ func serializeV2State(response interface{}) (apimiddleware.RunDefault, []byte, a
Version: respContainer.Version,
Data: respContainer.Data.CapellaState,
}
case strings.EqualFold(respContainer.Version, strings.ToLower(ethpbv2.Version_Deneb.String())):
actualRespContainer = &denebStateResponseJson{
Version: respContainer.Version,
Data: respContainer.Data.DenebState,
}
default:
return false, nil, apimiddleware.InternalServerError(fmt.Errorf("unsupported state version '%s'", respContainer.Version))
}
@@ -719,6 +761,40 @@ type bellatrixProduceBlindedBlockResponseJson struct {
Data *BlindedBeaconBlockBellatrixJson `json:"data"`
}
type tempBlobJson struct {
Data string `json:"data"`
}
type tempBlobsSidecarJson struct {
BeaconBlockRoot string `json:"beacon_block_root"`
BeaconBlockSlot string `json:"beacon_block_slot"`
Blobs []tempBlobJson `json:"blobs"`
AggregatedProof string `json:"kzg_aggregated_proof"`
}
// prepareBlobsResponse flattens the blobs list so that the data field of each blob becomes the blob content itself in the JSON response.
func prepareBlobsResponse(body []byte, responseContainer interface{}) (apimiddleware.RunDefault, apimiddleware.ErrorJson) {
tempContainer := &tempBlobsSidecarJson{}
if err := json.Unmarshal(body, tempContainer); err != nil {
return false, apimiddleware.InternalServerErrorWithMessage(err, "could not unmarshal response into temp container")
}
container, ok := responseContainer.(*blobsSidecarResponseJson)
if !ok {
return false, apimiddleware.InternalServerError(errors.New("container is not of the correct type"))
}
container.Data = &blobsSidecarJson{
BeaconBlockRoot: tempContainer.BeaconBlockRoot,
BeaconBlockSlot: tempContainer.BeaconBlockSlot,
Blobs: make([]string, len(tempContainer.Blobs)),
AggregatedProof: tempContainer.AggregatedProof,
}
for i, blob := range tempContainer.Blobs {
container.Data.Blobs[i] = blob.Data
}
return false, nil
}
type capellaProduceBlindedBlockResponseJson struct {
Version string `json:"version" enum:"true"`
Data *BlindedBeaconBlockCapellaJson `json:"data"`

View File

@@ -75,6 +75,7 @@ func (_ *BeaconEndpointFactory) Paths() []string {
"/eth/v1/validator/prepare_beacon_proposer",
"/eth/v1/validator/register_validator",
"/eth/v1/validator/liveness/{epoch}",
"/eth/v1/beacon/blobs_sidecars/{block_id}",
}
}
@@ -136,6 +137,12 @@ func (_ *BeaconEndpointFactory) Create(path string) (*apimiddleware.Endpoint, er
OnPreSerializeMiddlewareResponseIntoJson: serializeV2Block,
}
endpoint.CustomHandlers = []apimiddleware.CustomHandler{handleGetBeaconBlockSSZV2}
case "/eth/v1/beacon/blobs_sidecars/{block_id}":
endpoint.GetResponse = &blobsSidecarResponseJson{}
endpoint.CustomHandlers = []apimiddleware.CustomHandler{handleGetBlobsSidecarSSZ}
endpoint.Hooks = apimiddleware.HookCollection{
OnPreDeserializeGrpcResponseBodyIntoContainer: prepareBlobsResponse,
}
case "/eth/v1/beacon/blocks/{block_id}/root":
endpoint.GetResponse = &BlockRootResponseJson{}
case "/eth/v1/beacon/blocks/{block_id}/attestations":

View File

@@ -395,6 +395,7 @@ type SignedBeaconBlockContainerV2Json struct {
AltairBlock *BeaconBlockAltairJson `json:"altair_block"`
BellatrixBlock *BeaconBlockBellatrixJson `json:"bellatrix_block"`
CapellaBlock *BeaconBlockCapellaJson `json:"capella_block"`
DenebBlock *BeaconBlockDenebJson `json:"deneb_block"`
Signature string `json:"signature" hex:"true"`
}
@@ -403,6 +404,7 @@ type SignedBlindedBeaconBlockContainerJson struct {
AltairBlock *BeaconBlockAltairJson `json:"altair_block"`
BellatrixBlock *BlindedBeaconBlockBellatrixJson `json:"bellatrix_block"`
CapellaBlock *BlindedBeaconBlockCapellaJson `json:"capella_block"`
DenebBlock *BlindedBeaconBlockDenebJson `json:"deneb_block"`
Signature string `json:"signature" hex:"true"`
}
@@ -411,6 +413,7 @@ type BeaconBlockContainerV2Json struct {
AltairBlock *BeaconBlockAltairJson `json:"altair_block"`
BellatrixBlock *BeaconBlockBellatrixJson `json:"bellatrix_block"`
CapellaBlock *BeaconBlockCapellaJson `json:"capella_block"`
DenebBlock *BeaconBlockDenebJson `json:"deneb_block"`
}
type BlindedBeaconBlockContainerJson struct {
@@ -418,6 +421,7 @@ type BlindedBeaconBlockContainerJson struct {
AltairBlock *BeaconBlockAltairJson `json:"altair_block"`
BellatrixBlock *BlindedBeaconBlockBellatrixJson `json:"bellatrix_block"`
CapellaBlock *BlindedBeaconBlockCapellaJson `json:"capella_block"`
DenebBlock *BlindedBeaconBlockDenebJson `json:"deneb_block"`
}
type SignedBeaconBlockAltairContainerJson struct {
@@ -435,6 +439,11 @@ type SignedBeaconBlockCapellaContainerJson struct {
Signature string `json:"signature" hex:"true"`
}
type SignedBeaconBlockDenebContainerJson struct {
Message *BeaconBlockDenebJson `json:"message"`
Signature string `json:"signature" hex:"true"`
}
type SignedBlindedBeaconBlockBellatrixContainerJson struct {
Message *BlindedBeaconBlockBellatrixJson `json:"message"`
Signature string `json:"signature" hex:"true"`
@@ -445,6 +454,11 @@ type SignedBlindedBeaconBlockCapellaContainerJson struct {
Signature string `json:"signature" hex:"true"`
}
type SignedBlindedBeaconBlockDenebContainerJson struct {
Message *BlindedBeaconBlockDenebJson `json:"message"`
Signature string `json:"signature" hex:"true"`
}
type BeaconBlockAltairJson struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -469,6 +483,14 @@ type BeaconBlockCapellaJson struct {
Body *BeaconBlockBodyCapellaJson `json:"body"`
}
type BeaconBlockDenebJson struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
Body *BeaconBlockBodyDenebJson `json:"body"`
}
type BlindedBeaconBlockBellatrixJson struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
@@ -485,6 +507,14 @@ type BlindedBeaconBlockCapellaJson struct {
Body *BlindedBeaconBlockBodyCapellaJson `json:"body"`
}
type BlindedBeaconBlockDenebJson struct {
Slot string `json:"slot"`
ProposerIndex string `json:"proposer_index"`
ParentRoot string `json:"parent_root" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
Body *BlindedBeaconBlockBodyDenebJson `json:"body"`
}
type BeaconBlockBodyAltairJson struct {
RandaoReveal string `json:"randao_reveal" hex:"true"`
Eth1Data *Eth1DataJson `json:"eth1_data"`
@@ -524,6 +554,21 @@ type BeaconBlockBodyCapellaJson struct {
BLSToExecutionChanges []*SignedBLSToExecutionChangeJson `json:"bls_to_execution_changes"`
}
type BeaconBlockBodyDenebJson struct {
RandaoReveal string `json:"randao_reveal" hex:"true"`
Eth1Data *Eth1DataJson `json:"eth1_data"`
Graffiti string `json:"graffiti" hex:"true"`
ProposerSlashings []*ProposerSlashingJson `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingJson `json:"attester_slashings"`
Attestations []*AttestationJson `json:"attestations"`
Deposits []*DepositJson `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExitJson `json:"voluntary_exits"`
SyncAggregate *SyncAggregateJson `json:"sync_aggregate"`
ExecutionPayload *ExecutionPayloadDenebJson `json:"execution_payload"`
BLSToExecutionChanges []*SignedBLSToExecutionChangeJson `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments" hex:"true"`
}
type BlindedBeaconBlockBodyBellatrixJson struct {
RandaoReveal string `json:"randao_reveal" hex:"true"`
Eth1Data *Eth1DataJson `json:"eth1_data"`
@@ -551,6 +596,21 @@ type BlindedBeaconBlockBodyCapellaJson struct {
BLSToExecutionChanges []*SignedBLSToExecutionChangeJson `json:"bls_to_execution_changes"`
}
type BlindedBeaconBlockBodyDenebJson struct {
RandaoReveal string `json:"randao_reveal" hex:"true"`
Eth1Data *Eth1DataJson `json:"eth1_data"`
Graffiti string `json:"graffiti" hex:"true"`
ProposerSlashings []*ProposerSlashingJson `json:"proposer_slashings"`
AttesterSlashings []*AttesterSlashingJson `json:"attester_slashings"`
Attestations []*AttestationJson `json:"attestations"`
Deposits []*DepositJson `json:"deposits"`
VoluntaryExits []*SignedVoluntaryExitJson `json:"voluntary_exits"`
SyncAggregate *SyncAggregateJson `json:"sync_aggregate"`
ExecutionPayloadHeader *ExecutionPayloadHeaderDenebJson `json:"execution_payload_header"`
BLSToExecutionChanges []*SignedBLSToExecutionChangeJson `json:"bls_to_execution_changes"`
BlobKzgCommitments []string `json:"blob_kzg_commitments" hex:"true"`
}
type ExecutionPayloadJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
@@ -586,6 +646,25 @@ type ExecutionPayloadCapellaJson struct {
Withdrawals []*WithdrawalJson `json:"withdrawals"`
}
type ExecutionPayloadDenebJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
ReceiptsRoot string `json:"receipts_root" hex:"true"`
LogsBloom string `json:"logs_bloom" hex:"true"`
PrevRandao string `json:"prev_randao" hex:"true"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
TimeStamp string `json:"timestamp"`
ExtraData string `json:"extra_data" hex:"true"`
BaseFeePerGas string `json:"base_fee_per_gas" uint256:"true"`
ExcessDataGas string `json:"excess_data_gas" uint256:"true"`
BlockHash string `json:"block_hash" hex:"true"`
Transactions []string `json:"transactions" hex:"true"`
Withdrawals []*WithdrawalJson `json:"withdrawals"`
}
type ExecutionPayloadHeaderJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
@@ -621,6 +700,25 @@ type ExecutionPayloadHeaderCapellaJson struct {
WithdrawalsRoot string `json:"withdrawals_root" hex:"true"`
}
type ExecutionPayloadHeaderDenebJson struct {
ParentHash string `json:"parent_hash" hex:"true"`
FeeRecipient string `json:"fee_recipient" hex:"true"`
StateRoot string `json:"state_root" hex:"true"`
ReceiptsRoot string `json:"receipts_root" hex:"true"`
LogsBloom string `json:"logs_bloom" hex:"true"`
PrevRandao string `json:"prev_randao" hex:"true"`
BlockNumber string `json:"block_number"`
GasLimit string `json:"gas_limit"`
GasUsed string `json:"gas_used"`
TimeStamp string `json:"timestamp"`
ExtraData string `json:"extra_data" hex:"true"`
BaseFeePerGas string `json:"base_fee_per_gas" uint256:"true"`
ExcessDataGas string `json:"excess_data_gas" uint256:"true"`
BlockHash string `json:"block_hash" hex:"true"`
TransactionsRoot string `json:"transactions_root" hex:"true"`
WithdrawalsRoot string `json:"withdrawals_root" hex:"true"`
}
type SyncAggregateJson struct {
SyncCommitteeBits string `json:"sync_committee_bits" hex:"true"`
SyncCommitteeSignature string `json:"sync_committee_signature" hex:"true"`
@@ -877,11 +975,42 @@ type BeaconStateCapellaJson struct {
HistoricalSummaries []*HistoricalSummaryJson `json:"historical_summaries"`
}
type BeaconStateDenebJson struct {
GenesisTime string `json:"genesis_time"`
GenesisValidatorsRoot string `json:"genesis_validators_root" hex:"true"`
Slot string `json:"slot"`
Fork *ForkJson `json:"fork"`
LatestBlockHeader *BeaconBlockHeaderJson `json:"latest_block_header"`
BlockRoots []string `json:"block_roots" hex:"true"`
StateRoots []string `json:"state_roots" hex:"true"`
HistoricalRoots []string `json:"historical_roots" hex:"true"`
Eth1Data *Eth1DataJson `json:"eth1_data"`
Eth1DataVotes []*Eth1DataJson `json:"eth1_data_votes"`
Eth1DepositIndex string `json:"eth1_deposit_index"`
Validators []*ValidatorJson `json:"validators"`
Balances []string `json:"balances"`
RandaoMixes []string `json:"randao_mixes" hex:"true"`
Slashings []string `json:"slashings"`
PreviousEpochParticipation EpochParticipation `json:"previous_epoch_participation"`
CurrentEpochParticipation EpochParticipation `json:"current_epoch_participation"`
JustificationBits string `json:"justification_bits" hex:"true"`
PreviousJustifiedCheckpoint *CheckpointJson `json:"previous_justified_checkpoint"`
CurrentJustifiedCheckpoint *CheckpointJson `json:"current_justified_checkpoint"`
FinalizedCheckpoint *CheckpointJson `json:"finalized_checkpoint"`
InactivityScores []string `json:"inactivity_scores"`
CurrentSyncCommittee *SyncCommitteeJson `json:"current_sync_committee"`
NextSyncCommittee *SyncCommitteeJson `json:"next_sync_committee"`
LatestExecutionPayloadHeader *ExecutionPayloadHeaderDenebJson `json:"latest_execution_payload_header"`
NextWithdrawalIndex string `json:"next_withdrawal_index"`
NextWithdrawalValidatorIndex string `json:"next_withdrawal_validator_index"`
}
type BeaconStateContainerV2Json struct {
Phase0State *BeaconStateJson `json:"phase0_state"`
AltairState *BeaconStateAltairJson `json:"altair_state"`
BellatrixState *BeaconStateBellatrixJson `json:"bellatrix_state"`
CapellaState *BeaconStateCapellaJson `json:"capella_state"`
DenebState *BeaconStateDenebJson `json:"deneb_state"`
}
type ForkJson struct {
@@ -1185,3 +1314,14 @@ type EventErrorJson struct {
StatusCode int `json:"status_code"`
Message string `json:"message"`
}
type blobsSidecarJson struct {
BeaconBlockRoot string `json:"beacon_block_root" hex:"true"`
BeaconBlockSlot string `json:"beacon_block_slot"`
Blobs []string `json:"blobs" hex:"true"`
AggregatedProof string `json:"kzg_aggregated_proof" hex:"true"`
}
type blobsSidecarResponseJson struct {
Data *blobsSidecarJson `json:"data"`
}

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"blinded_blocks.go",
"blobs.go",
"blocks.go",
"config.go",
"log.go",

View File

@@ -0,0 +1,13 @@
package beacon
import (
"context"
"github.com/pkg/errors"
ethpbv1 "github.com/prysmaticlabs/prysm/v3/proto/eth/v1"
)
func (bs *Server) GetBlobsSidecar(ctx context.Context, req *ethpbv1.BlobsRequest) (*ethpbv1.BlobsResponse, error) {
// TODO: implement this with new blob type request
return nil, errors.New("not implemented")
}

View File

@@ -382,6 +382,14 @@ func (bs *Server) GetBlockV2(ctx context.Context, req *ethpbv2.BlockRequestV2) (
if !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
result, err = bs.getBlockDeneb(ctx, blk)
if result != nil {
return result, nil
}
// ErrUnsupportedGetter means that we have another block type
if !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
return nil, status.Errorf(codes.Internal, "Unknown block type %T", blk)
}
@@ -436,6 +444,14 @@ func (bs *Server) GetBlockSSZV2(ctx context.Context, req *ethpbv2.BlockRequestV2
if !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
result, err = bs.getSSZBlockDeneb(ctx, blk)
if result != nil {
return result, nil
}
// ErrUnsupportedGetter means that we have another block type
if !errors.Is(err, blocks.ErrUnsupportedGetter) {
return nil, status.Errorf(codes.Internal, "Could not get signed beacon block: %v", err)
}
return nil, status.Errorf(codes.Internal, "Unknown block type %T", blk)
}
@@ -817,6 +833,76 @@ func (bs *Server) getBlockCapella(ctx context.Context, blk interfaces.ReadOnlySi
}, nil
}
func (bs *Server) getBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.BlockResponseV2, error) {
denebBlk, err := blk.PbDenebBlock()
if err != nil {
// ErrUnsupportedGetter means that we have another block type
if errors.Is(err, blocks.ErrUnsupportedGetter) {
if blindedDenebBlk, err := blk.PbBlindedDenebBlock(); err == nil {
if blindedDenebBlk == nil {
return nil, errNilBlock
}
signedFullBlock, err := bs.ExecutionPayloadReconstructor.ReconstructFullBlock(ctx, blk)
if err != nil {
return nil, errors.Wrapf(err, "could not reconstruct full execution payload to create signed beacon block")
}
denebBlk, err = signedFullBlock.PbDenebBlock()
if err != nil {
return nil, errors.Wrapf(err, "could not get signed beacon block")
}
v2Blk, err := migration.V1Alpha1BeaconBlockDenebToV2(denebBlk.Block)
if err != nil {
return nil, errors.Wrapf(err, "could not convert beacon block")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrapf(err, "could not get block root")
}
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
sig := blk.Signature()
return &ethpbv2.BlockResponseV2{
Version: ethpbv2.Version_Deneb,
Data: &ethpbv2.SignedBeaconBlockContainer{
Message: &ethpbv2.SignedBeaconBlockContainer_DenebBlock{DenebBlock: v2Blk},
Signature: sig[:],
},
ExecutionOptimistic: isOptimistic,
}, nil
}
return nil, err
}
return nil, err
}
if denebBlk == nil {
return nil, errNilBlock
}
v2Blk, err := migration.V1Alpha1BeaconBlockDenebToV2(denebBlk.Block)
if err != nil {
return nil, errors.Wrapf(err, "could not convert beacon block")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrapf(err, "could not get block root")
}
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
sig := blk.Signature()
return &ethpbv2.BlockResponseV2{
Version: ethpbv2.Version_Deneb,
Data: &ethpbv2.SignedBeaconBlockContainer{
Message: &ethpbv2.SignedBeaconBlockContainer_DenebBlock{DenebBlock: v2Blk},
Signature: sig[:],
},
ExecutionOptimistic: isOptimistic,
}, nil
}
func getSSZBlockPhase0(blk interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.SSZContainer, error) {
phase0Blk, err := blk.PbPhase0Block()
if err != nil {
@@ -1012,6 +1098,82 @@ func (bs *Server) getSSZBlockCapella(ctx context.Context, blk interfaces.ReadOnl
return &ethpbv2.SSZContainer{Version: ethpbv2.Version_CAPELLA, ExecutionOptimistic: isOptimistic, Data: sszData}, nil
}
func (bs *Server) getSSZBlockDeneb(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.SSZContainer, error) {
denebBlk, err := blk.PbDenebBlock()
if err != nil {
// ErrUnsupportedGetter means that we have another block type
if errors.Is(err, blocks.ErrUnsupportedGetter) {
if blindedDenebBlk, err := blk.PbBlindedDenebBlock(); err == nil {
if blindedDenebBlk == nil {
return nil, errNilBlock
}
signedFullBlock, err := bs.ExecutionPayloadReconstructor.ReconstructFullBlock(ctx, blk)
if err != nil {
return nil, errors.Wrapf(err, "could not reconstruct full execution payload to create signed beacon block")
}
denebBlk, err = signedFullBlock.PbDenebBlock()
if err != nil {
return nil, errors.Wrapf(err, "could not get signed beacon block")
}
v2Blk, err := migration.V1Alpha1BeaconBlockDenebToV2(denebBlk.Block)
if err != nil {
return nil, errors.Wrapf(err, "could not convert signed beacon block")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrapf(err, "could not get block root")
}
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
sig := blk.Signature()
data := &ethpbv2.SignedBeaconBlockDeneb{
Message: v2Blk,
Signature: sig[:],
}
sszData, err := data.MarshalSSZ()
if err != nil {
return nil, errors.Wrapf(err, "could not marshal block into SSZ")
}
return &ethpbv2.SSZContainer{
Version: ethpbv2.Version_Deneb,
ExecutionOptimistic: isOptimistic,
Data: sszData,
}, nil
}
return nil, err
}
return nil, err
}
if denebBlk == nil {
return nil, errNilBlock
}
v2Blk, err := migration.V1Alpha1BeaconBlockDenebToV2(denebBlk.Block)
if err != nil {
return nil, errors.Wrapf(err, "could not convert signed beacon block")
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrapf(err, "could not get block root")
}
isOptimistic, err := bs.OptimisticModeFetcher.IsOptimisticForRoot(ctx, root)
if err != nil {
return nil, errors.Wrapf(err, "could not check if block is optimistic")
}
sig := blk.Signature()
data := &ethpbv2.SignedBeaconBlockDeneb{
Message: v2Blk,
Signature: sig[:],
}
sszData, err := data.MarshalSSZ()
if err != nil {
return nil, errors.Wrapf(err, "could not marshal block into SSZ")
}
return &ethpbv2.SSZContainer{Version: ethpbv2.Version_Deneb, ExecutionOptimistic: isOptimistic, Data: sszData}, nil
}
func (bs *Server) submitPhase0Block(ctx context.Context, phase0Blk *ethpbv1.BeaconBlock, sig []byte) error {
v1alpha1Blk, err := migration.V1ToV1Alpha1SignedBlock(&ethpbv1.SignedBeaconBlock{Block: phase0Blk, Signature: sig})
if err != nil {

View File

@@ -105,6 +105,8 @@ func TestGetSpec(t *testing.T) {
config.MaxWithdrawalsPerPayload = 74
config.MaxBlsToExecutionChanges = 75
config.MaxValidatorsPerWithdrawalsSweep = 76
config.DenebForkEpoch = 77
config.DenebForkVersion = []byte("DenebForkVersion")
var dbp [4]byte
copy(dbp[:], []byte{'0', '0', '0', '1'})
@@ -130,6 +132,9 @@ func TestGetSpec(t *testing.T) {
var dam [4]byte
copy(dam[:], []byte{'1', '0', '0', '0'})
config.DomainApplicationMask = dam
var dbs [4]byte
copy(dbs[:], []byte{'0', '0', '0', '8'})
config.DomainBlobSidecar = dbs
params.OverrideBeaconConfig(config)
@@ -137,7 +142,7 @@ func TestGetSpec(t *testing.T) {
resp, err := server.GetSpec(context.Background(), &emptypb.Empty{})
require.NoError(t, err)
assert.Equal(t, 105, len(resp.Data))
assert.Equal(t, 108, len(resp.Data))
for k, v := range resp.Data {
switch k {
case "CONFIG_NAME":
@@ -363,8 +368,14 @@ func TestGetSpec(t *testing.T) {
case "REORG_WEIGHT_THRESHOLD":
assert.Equal(t, "20", v)
case "SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY":
case "DENEB_FORK_EPOCH":
assert.Equal(t, "77", v)
case "DENEB_FORK_VERSION":
assert.Equal(t, "0x"+hex.EncodeToString([]byte("DenebForkVersion")), v)
case "DOMAIN_BLOB_SIDECAR":
assert.Equal(t, "0x30303038", v)
default:
t.Errorf("Incorrect key: %s", k)
t.Errorf("Unknown key: %s", k)
}
}
}
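
The expected "0x30303038" for DOMAIN_BLOB_SIDECAR follows from the test setting the domain to the ASCII bytes '0', '0', '0', '8', whose hex encoding is 30303038:

package main

import (
    "encoding/hex"
    "fmt"
)

func main() {
    var dbs [4]byte
    copy(dbs[:], []byte{'0', '0', '0', '8'})
    fmt.Println("0x" + hex.EncodeToString(dbs[:])) // 0x30303038
}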

View File

@@ -104,6 +104,19 @@ func (ds *Server) GetBeaconStateV2(ctx context.Context, req *ethpbv2.BeaconState
ExecutionOptimistic: isOptimistic,
Finalized: isFinalized,
}, nil
case version.Deneb:
protoState, err := migration.BeaconStateDenebToProto(beaconSt)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not convert state to proto: %v", err)
}
return &ethpbv2.BeaconStateResponseV2{
Version: ethpbv2.Version_Deneb,
Data: &ethpbv2.BeaconStateContainer{
State: &ethpbv2.BeaconStateContainer_DenebState{DenebState: protoState},
},
ExecutionOptimistic: isOptimistic,
}, nil
default:
return nil, status.Error(codes.Internal, "Unsupported state version")
}
@@ -133,6 +146,8 @@ func (ds *Server) GetBeaconStateSSZV2(ctx context.Context, req *ethpbv2.BeaconSt
ver = ethpbv2.Version_BELLATRIX
case version.Capella:
ver = ethpbv2.Version_CAPELLA
case version.Deneb:
ver = ethpbv2.Version_Deneb
default:
return nil, status.Error(codes.Internal, "Unsupported state version")
}

View File

@@ -15,6 +15,7 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/rpc/eth/helpers:go_default_library",

View File

@@ -4,6 +4,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/synccommittee"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
v1alpha1validator "github.com/prysmaticlabs/prysm/v3/beacon-chain/rpc/prysm/v1alpha1/validator"
@@ -24,6 +25,7 @@ type Server struct {
StateFetcher statefetcher.Fetcher
OptimisticModeFetcher blockchain.OptimisticModeFetcher
SyncCommitteePool synccommittee.Pool
BLSChangesPool blstoexec.PoolManager
V1Alpha1Server *v1alpha1validator.Server
ProposerSlotIndexCache *cache.ProposerPayloadIDsCache
}

View File

@@ -6,6 +6,7 @@ go_library(
"assignments.go",
"attestations.go",
"blocks.go",
"blstoexec.go",
"committees.go",
"config.go",
"log.go",
@@ -36,6 +37,7 @@ go_library(
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/p2p:go_default_library",
"//beacon-chain/state:go_default_library",

View File

@@ -122,7 +122,7 @@ func convertToBlockContainer(blk interfaces.ReadOnlySignedBeaconBlock, root [32]
}
ctr.Block = &ethpb.BeaconBlockContainer_BellatrixBlock{BellatrixBlock: rBlk}
}
case version.Capella:
case version.Capella, version.Deneb:
if blk.IsBlinded() {
rBlk, err := blk.PbBlindedCapellaBlock()
if err != nil {
@@ -266,6 +266,7 @@ func (bs *Server) listBlocksForGenesis(ctx context.Context, _ *ethpb.ListBlocksR
//
// This includes the head block slot and root as well as information about
// the most recent finalized and justified slots.
// DEPRECATED: This endpoint is superseded by the /eth/v1/beacon API endpoint
func (bs *Server) GetChainHead(ctx context.Context, _ *emptypb.Empty) (*ethpb.ChainHead, error) {
return bs.chainHeadRetrieval(ctx)
}

View File

@@ -0,0 +1,24 @@
package beacon
import (
"context"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// SubmitBLSToExecutionChange receives a withdrawal credential change object via
// RPC and injects it into the beacon node's operations pool.
// Submission into this pool does not guarantee inclusion into a beacon block. If the object passes validation
// the node MUST broadcast it.
func (bs *Server) SubmitBLSToExecutionChange(
ctx context.Context,
req *ethpb.SignedBLSToExecutionChange,
) (*ethpb.BLSToExecutionChangeResponse, error) {
bs.BLSChangesPool.InsertBLSToExecChange(req)
if err := bs.Broadcaster.Broadcast(ctx, req); err != nil {
return nil, status.Errorf(codes.Internal, "Could not broadcast SignedBLSToExecutionChange object: %v", err)
}
return nil, nil
}
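
The handler above always inserts the change into the local pool and then broadcasts it, returning an error only when the broadcast fails. A toy version of that flow with simplified stand-ins for blstoexec.PoolManager and p2p.Broadcaster:

package main

import (
    "context"
    "errors"
    "fmt"
)

type change struct{ ValidatorIndex uint64 }

type pool struct{ items []*change }

func (p *pool) Insert(c *change) { p.items = append(p.items, c) }

type broadcaster struct{ fail bool }

func (b *broadcaster) Broadcast(_ context.Context, c *change) error {
    if b.fail {
        return errors.New("no peers")
    }
    fmt.Printf("broadcast change for validator %d\n", c.ValidatorIndex)
    return nil
}

// submit mirrors the handler: pool the change first, then broadcast it.
func submit(ctx context.Context, p *pool, b *broadcaster, c *change) error {
    p.Insert(c)
    if err := b.Broadcast(ctx, c); err != nil {
        return fmt.Errorf("could not broadcast SignedBLSToExecutionChange: %w", err)
    }
    return nil
}

func main() {
    err := submit(context.Background(), &pool{}, &broadcaster{}, &change{ValidatorIndex: 7})
    fmt.Println("err:", err)
}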

View File

@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/blstoexec"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/slashings"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
@@ -40,6 +41,7 @@ type Server struct {
Broadcaster p2p.Broadcaster
AttestationsPool attestations.Pool
SlashingsPool slashings.PoolManager
BLSChangesPool blstoexec.PoolManager
ChainStartChan chan time.Time
ReceivedAttestationsBuffer chan *ethpb.Attestation
CollectedAttestationsBuffer chan []*ethpb.Attestation

View File

@@ -27,6 +27,7 @@ go_library(
"@com_github_ipfs_go_log_v2//:go_default_library",
"@com_github_libp2p_go_libp2p//core/network:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_bazel_rules_go//proto/wkt:empty_go_proto",
"@org_golang_google_grpc//codes:go_default_library",

View File

@@ -6,6 +6,7 @@ import (
"github.com/golang/protobuf/ptypes/empty"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/p2p"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"google.golang.org/grpc/codes"
@@ -92,7 +93,7 @@ func (ds *Server) getPeer(pid peer.ID) (*ethpb.DebugPeerResponse, error) {
aVersion = ""
}
peerInfo := &ethpb.DebugPeerResponse_PeerInfo{
Protocols: protocols,
Protocols: protocol.ConvertToStrings(protocols),
FaultCount: uint64(resp),
ProtocolVersion: pVersion,
AgentVersion: aVersion,

View File

@@ -45,13 +45,14 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/operations/attestations:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/operations/blstoexec:go_default_library",
"//beacon-chain/operations/synccommittee:go_default_library",
"//beacon-chain/operations/voluntaryexits:go_default_library",
"//beacon-chain/p2p:go_default_library",
@@ -62,6 +63,7 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blobs:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -85,6 +87,7 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_protolambda_go_kzg//eth:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",

View File

@@ -100,6 +100,17 @@ func sendVerifiedBlocks(stream ethpb.BeaconNodeValidator_StreamBlocksAltairServe
return nil
}
b.Block = &ethpb.StreamBlocksResponse_CapellaBlock{CapellaBlock: phBlk}
case version.Deneb:
pb, err := data.SignedBlock.Proto()
if err != nil {
return errors.Wrap(err, "could not get protobuf block")
}
phBlk, ok := pb.(*ethpb.SignedBeaconBlockDeneb)
if !ok {
log.Warn("Mismatch between version and block type, was expecting SignedBeaconBlockDeneb")
return nil
}
b.Block = &ethpb.StreamBlocksResponse_DenebBlock{DenebBlock: phBlk}
}
if err := stream.Send(b); err != nil {
@@ -149,6 +160,8 @@ func (vs *Server) sendBlocks(stream ethpb.BeaconNodeValidator_StreamBlocksAltair
b.Block = &ethpb.StreamBlocksResponse_BellatrixBlock{BellatrixBlock: p}
case *ethpb.SignedBeaconBlockCapella:
b.Block = &ethpb.StreamBlocksResponse_CapellaBlock{CapellaBlock: p}
case *ethpb.SignedBeaconBlockDeneb:
b.Block = &ethpb.StreamBlocksResponse_DenebBlock{DenebBlock: p}
default:
log.Errorf("Unknown block type %T", p)
}

View File

@@ -11,11 +11,13 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
emptypb "github.com/golang/protobuf/ptypes/empty"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
blockfeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/block"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition/interop"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
@@ -23,6 +25,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
@@ -50,7 +53,11 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
// process attestations and update head in forkchoice
vs.ForkFetcher.ForkChoicer().Lock()
vs.HeadUpdater.UpdateHead(ctx, vs.ForkFetcher.CurrentSlot())
headRoot := vs.ForkFetcher.ForkChoicer().CachedHeadRoot()
parentRoot := vs.ForkFetcher.ForkChoicer().GetProposerHead()
if parentRoot != headRoot {
blockchain.LateBlockAttemptedReorgCount.Inc()
}
vs.ForkFetcher.ForkChoicer().Unlock()
// An optimistic validator MUST NOT produce a block (i.e., sign across the DOMAIN_BEACON_PROPOSER domain).
@@ -117,7 +124,8 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
vs.setSyncAggregate(ctx, sBlk)
// Set execution data. New in Bellatrix.
if err := vs.setExecutionData(ctx, sBlk, head); err != nil {
blobs, err := vs.setExecutionData(ctx, sBlk, head)
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not set execution data: %v", err)
}
@@ -126,6 +134,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
sr, err := vs.computeStateRoot(ctx, sBlk)
if err != nil {
interop.WriteBlockToDisk("proposer", sBlk, true /*failed*/)
return nil, status.Errorf(codes.Internal, "Could not compute state root: %v", err)
}
sBlk.SetStateRoot(sr)
@@ -134,6 +143,38 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not convert block to proto: %v", err)
}
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().DenebForkEpoch {
blk, ok := pb.(*ethpb.BeaconBlockDeneb)
if !ok {
return nil, status.Errorf(codes.Internal, "Could not cast block to BeaconBlockDeneb")
}
br, err := blk.HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get block root: %v", err)
}
kzgs, err := sBlk.Block().Body().BlobKzgCommitments()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not get kzg commitments: %v", err)
}
validatorBlobs := make([]*ethpb.BlobSidecar, len(blk.Body.BlobKzgCommitments))
for i, b := range blobs {
validatorBlobs[i] = &ethpb.BlobSidecar{
BlockRoot: br[:],
Index: uint64(i),
Slot: blk.Slot,
BlockParentRoot: blk.ParentRoot,
ProposerIndex: blk.ProposerIndex,
Blob: b,
KzgCommitment: kzgs[i],
KzgProof: []byte{}, // TODO(TT): add proof
}
}
blkAndBlobs := &ethpb.BeaconBlockDenebAndBlobs{
Block: blk,
Blobs: validatorBlobs,
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Deneb{Deneb: blkAndBlobs}}, nil
}
if slots.ToEpoch(req.Slot) >= params.BeaconConfig().CapellaForkEpoch {
if sBlk.IsBlinded() {
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_BlindedCapella{BlindedCapella: pb.(*ethpb.BlindedBeaconBlockCapella)}}, nil
@@ -157,11 +198,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.ProposeBeaconBlock")
defer span.End()
blk, err := blocks.NewSignedBeaconBlock(req.Block)
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "Could not decode block: %v", err)
}
return vs.proposeGenericBeaconBlock(ctx, blk)
return vs.proposeGenericBeaconBlock(ctx, req)
}
// PrepareBeaconProposer caches and updates the fee recipient for the given proposer.
@@ -244,9 +281,15 @@ func (vs *Server) GetFeeRecipientByPubKey(ctx context.Context, request *ethpb.Fe
}, nil
}
func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) (*ethpb.ProposeResponse, error) {
func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, req *ethpb.GenericSignedBeaconBlock) (*ethpb.ProposeResponse, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.proposeGenericBeaconBlock")
defer span.End()
blk, err := blocks.NewSignedBeaconBlock(req.Block)
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "Could not decode block: %v", err)
}
root, err := blk.Block().HashTreeRoot()
if err != nil {
return nil, fmt.Errorf("could not tree hash block: %v", err)
@@ -264,16 +307,6 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
}
}
// Do not block proposal critical path with debug logging or block feed updates.
defer func() {
log.WithField("blockRoot", fmt.Sprintf("%#x", bytesutil.Trunc(root[:]))).Debugf(
"Block proposal received via RPC")
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
})
}()
// Broadcast the new block to the network.
blkPb, err := blk.Proto()
if err != nil {
@@ -290,6 +323,28 @@ func (vs *Server) proposeGenericBeaconBlock(ctx context.Context, blk interfaces.
return nil, fmt.Errorf("could not process beacon block: %v", err)
}
if blk.Version() >= version.Deneb {
b, ok := req.GetBlock().(*ethpb.GenericSignedBeaconBlock_Deneb)
if !ok {
return nil, status.Error(codes.Internal, "Could not cast block to Deneb")
}
for _, sidecar := range b.Deneb.Blobs {
if err := vs.P2P.BroadcastBlob(ctx, sidecar.Message.Index, sidecar); err != nil {
return nil, errors.Wrap(err, "could not broadcast blob sidecar")
}
}
}
// Do not block proposal critical path with debug logging or block feed updates.
defer func() {
log.WithField("blockRoot", fmt.Sprintf("%#x", bytesutil.Trunc(root[:]))).Debugf(
"Block proposal received via RPC")
vs.BlockNotifier.BlockFeed().Send(&feed.Event{
Type: blockfeed.ReceivedBlock,
Data: &blockfeed.ReceivedBlockData{SignedBlock: blk},
})
}()
return &ethpb.ProposeResponse{
BlockRoot: root[:],
}, nil
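Two Deneb-specific steps appear in this file: GetBeaconBlock builds one BlobSidecar per blob returned by setExecutionData, pairing each blob with the KZG commitment at the same index, and proposeGenericBeaconBlock broadcasts those sidecars individually before the block feed event fires. A small sketch of the sidecar construction follows, using simplified local types rather than the generated ethpb structs; the explicit length check is an added assumption, since the diff sizes the slice by commitment count while ranging over the blobs.
package main

import "fmt"

// Simplified stand-in for ethpb.BlobSidecar; field names follow the diff but
// the type here is illustrative only.
type blobSidecar struct {
	BlockRoot     []byte
	Index         uint64
	Slot          uint64
	Blob          []byte
	KzgCommitment []byte
}

// buildSidecars pairs each blob returned by the execution client with the KZG
// commitment at the same index, as the Deneb branch of GetBeaconBlock does.
func buildSidecars(blockRoot [32]byte, slot uint64, blobs, kzgs [][]byte) ([]*blobSidecar, error) {
	if len(blobs) != len(kzgs) {
		return nil, fmt.Errorf("blob/commitment count mismatch: %d vs %d", len(blobs), len(kzgs))
	}
	sidecars := make([]*blobSidecar, len(blobs))
	for i, b := range blobs {
		sidecars[i] = &blobSidecar{
			BlockRoot:     blockRoot[:],
			Index:         uint64(i),
			Slot:          slot,
			Blob:          b,
			KzgCommitment: kzgs[i],
		}
	}
	return sidecars, nil
}

func main() {
	scs, err := buildSidecars([32]byte{1}, 10, [][]byte{{0xaa}}, [][]byte{{0xbb}})
	fmt.Println(len(scs), err)
}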

View File

@@ -80,7 +80,7 @@ func (vs *Server) packAttestations(ctx context.Context, latestState state.Beacon
// filter separates attestation list into two groups: valid and invalid attestations.
// The first group passes all the required checks for an attestation to be considered for proposing,
// and attestations from the second group should be deleted.
func (a proposerAtts) filter(ctx context.Context, st state.BeaconState) (proposerAtts, proposerAtts) {
func (a proposerAtts) filter(ctx context.Context, st state.BeaconState) (proposerAtts, proposerAtts, error) {
validAtts := make([]*ethpb.Attestation, 0, len(a))
invalidAtts := make([]*ethpb.Attestation, 0, len(a))
var attestationProcessor func(context.Context, state.BeaconState, *ethpb.Attestation) (state.BeaconState, error)
@@ -98,7 +98,7 @@ func (a proposerAtts) filter(ctx context.Context, st state.BeaconState) (propose
}
} else {
// Exit early if there is an unknown state type.
return validAtts, invalidAtts
return validAtts, invalidAtts, errors.Errorf("unknown state type: %v", st.Version())
}
for _, att := range a {
@@ -108,7 +108,7 @@ func (a proposerAtts) filter(ctx context.Context, st state.BeaconState) (propose
}
invalidAtts = append(invalidAtts, att)
}
return validAtts, invalidAtts
return validAtts, invalidAtts, nil
}
// sortByProfitability orders attestations by highest slot and by highest aggregation bit count.
@@ -247,7 +247,10 @@ func (vs *Server) validateAndDeleteAttsInPool(ctx context.Context, st state.Beac
ctx, span := trace.StartSpan(ctx, "ProposerServer.validateAndDeleteAttsInPool")
defer span.End()
validAtts, invalidAtts := proposerAtts(atts).filter(ctx, st)
validAtts, invalidAtts, err := proposerAtts(atts).filter(ctx, st)
if err != nil {
return nil, err
}
if err := vs.deleteAttsInPool(ctx, invalidAtts); err != nil {
return nil, err
}
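The change to filter replaces a silent early return on an unknown state type with an explicit error, so validateAndDeleteAttsInPool can propagate the failure. A compact sketch of that (valid, invalid, error) shape follows; the att type and the processor callback are illustrative stand-ins for *ethpb.Attestation and the fork-specific attestation processors.
package main

import (
	"errors"
	"fmt"
)

// att stands in for *ethpb.Attestation; the process callback stands in for the
// fork-specific attestation processor chosen by state version.
type att struct{ slot uint64 }

// filterAtts splits attestations into valid and invalid groups and surfaces a
// missing processor (unknown state type) as an error instead of returning
// silently, matching the new (valid, invalid, error) shape of filter.
func filterAtts(atts []*att, process func(*att) error) ([]*att, []*att, error) {
	if process == nil {
		return nil, nil, errors.New("unknown state type: no attestation processor")
	}
	valid := make([]*att, 0, len(atts))
	invalid := make([]*att, 0, len(atts))
	for _, a := range atts {
		if err := process(a); err != nil {
			invalid = append(invalid, a)
			continue
		}
		valid = append(valid, a)
	}
	return valid, invalid, nil
}

func main() {
	valid, invalid, err := filterAtts([]*att{{slot: 1}, {slot: 2}}, func(a *att) error {
		if a.slot%2 == 0 {
			return errors.New("bad slot")
		}
		return nil
	})
	fmt.Println(len(valid), len(invalid), err)
}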

View File

@@ -38,11 +38,11 @@ var builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
const blockBuilderTimeout = 1 * time.Second
// Sets the execution data for the block. Execution data can come from the local EL client or a remote builder, depending on validator registration and circuit-breaker conditions.
func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) error {
func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, headState state.BeaconState) ([]*enginev1.Blob, error) {
idx := blk.Block().ProposerIndex()
slot := blk.Block().Slot()
if slots.ToEpoch(slot) < params.BeaconConfig().BellatrixForkEpoch {
return nil
return nil, nil
}
canUseBuilder, err := vs.canUseBuilder(ctx, slot, idx)
@@ -56,14 +56,14 @@ func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBea
} else {
switch {
case blk.Version() >= version.Capella:
localPayload, err := vs.getExecutionPayload(ctx, slot, idx, blk.Block().ParentRoot(), headState)
localPayload, _, err := vs.getExecutionPayload(ctx, slot, idx, blk.Block().ParentRoot(), headState)
if err != nil {
return errors.Wrap(err, "failed to get execution payload")
return nil, errors.Wrap(err, "failed to get execution payload")
}
// Compare payload values between local and builder. Default to the local value if it is higher.
localValue, err := localPayload.Value()
if err != nil {
return errors.Wrap(err, "failed to get local payload value")
return nil, errors.Wrap(err, "failed to get local payload value")
}
builderValue, err := builderPayload.Value()
if err != nil {
@@ -72,7 +72,7 @@ func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBea
withdrawalsMatched, err := matchingWithdrawalsRoot(localPayload, builderPayload)
if err != nil {
return errors.Wrap(err, "failed to match withdrawals root")
return nil, errors.Wrap(err, "failed to match withdrawals root")
}
// If we can't get the builder value, just use local block.
if builderValue.Cmp(localValue) > 0 && withdrawalsMatched { // Builder value is higher and withdrawals match.
@@ -80,31 +80,38 @@ func (vs *Server) setExecutionData(ctx context.Context, blk interfaces.SignedBea
if err := blk.SetExecution(builderPayload); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
} else {
return nil
return nil, nil
}
}
log.WithFields(logrus.Fields{
"localValue": localValue,
"builderValue": builderValue,
}).Warn("Proposer: using local execution payload because higher value")
return blk.SetExecution(localPayload)
return nil, blk.SetExecution(localPayload)
default: // Bellatrix case.
blk.SetBlinded(true)
if err := blk.SetExecution(builderPayload); err != nil {
log.WithError(err).Warn("Proposer: failed to set builder payload")
} else {
return nil
return nil, nil
}
}
}
}
executionData, err := vs.getExecutionPayload(ctx, slot, idx, blk.Block().ParentRoot(), headState)
executionData, blobsBundle, err := vs.getExecutionPayload(ctx, slot, idx, blk.Block().ParentRoot(), headState)
if err != nil {
return errors.Wrap(err, "failed to get execution payload")
return nil, errors.Wrap(err, "failed to get execution payload")
}
return blk.SetExecution(executionData)
if slots.ToEpoch(slot) >= params.BeaconConfig().DenebForkEpoch && len(blobsBundle.KzgCommitments) > 0 {
// TODO: check block hash matches blob bundle hash
if err := blk.SetBlobKzgCommitments(blobsBundle.KzgCommitments); err != nil {
return nil, errors.Wrap(err, "could not set blob kzg commitments")
}
return blobsBundle.Blobs, nil
}
return nil, blk.SetExecution(executionData)
}
// This function retrieves the payload header given the slot number and the validator index.
@@ -188,17 +195,17 @@ func (vs *Server) getPayloadHeaderFromBuilder(ctx context.Context, slot primitiv
"builderPubKey": fmt.Sprintf("%#x", bid.Pubkey()),
"blockHash": fmt.Sprintf("%#x", header.BlockHash()),
}).Info("Received header with bid")
return header, nil
}
// This function retrieves the full payload block using the input blinded block. The input must be a
// Bellatrix blinded block. The output block will contain the full payload. The original header block
// is returned if the block builder is not configured.
func (vs *Server) unblindBuilderBlock(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ReadOnlySignedBeaconBlock, error) {
func (vs *Server) unblindBuilderBlock(ctx context.Context, b interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, error) {
if err := consensusblocks.BeaconBlockIsNil(b); err != nil {
return nil, err
}
// No-op if the input block is not a blinded Bellatrix block.
if b.Version() != version.Bellatrix || !b.IsBlinded() {
return b, nil
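With this change setExecutionData reports the blobs alongside its error: after the Deneb epoch, a non-empty bundle has its KZG commitments copied into the block body and its blobs handed back to GetBeaconBlock, while earlier forks keep the old behaviour of only setting the payload. A simplified sketch follows; the block and blobsBundle types here are stand-ins, and the epoch check is collapsed into a boolean.
package main

import "fmt"

// Stand-ins for the consensus block and the engine blobs bundle.
type blobsBundle struct {
	KzgCommitments [][]byte
	Blobs          [][]byte
}

type block struct {
	execution      []byte
	kzgCommitments [][]byte
}

func (b *block) setExecution(p []byte) error            { b.execution = p; return nil }
func (b *block) setBlobKzgCommitments(c [][]byte) error { b.kzgCommitments = c; return nil }

// setExecutionData mirrors the end of the real function: after Deneb a
// non-empty bundle attaches commitments and returns blobs; otherwise the
// payload is set and no blobs are returned.
func setExecutionData(blk *block, payload []byte, bundle *blobsBundle, postDeneb bool) ([][]byte, error) {
	if postDeneb && bundle != nil && len(bundle.KzgCommitments) > 0 {
		if err := blk.setBlobKzgCommitments(bundle.KzgCommitments); err != nil {
			return nil, fmt.Errorf("could not set blob kzg commitments: %w", err)
		}
		return bundle.Blobs, nil
	}
	return nil, blk.setExecution(payload)
}

func main() {
	blobs, err := setExecutionData(&block{}, []byte{0x01}, &blobsBundle{
		KzgCommitments: [][]byte{{0xaa}},
		Blobs:          [][]byte{{0xbb}},
	}, true)
	fmt.Println(len(blobs), err)
}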

View File

@@ -71,7 +71,8 @@ func TestServer_setExecutionData(t *testing.T) {
t.Run("No builder configured. Use local block", func(t *testing.T) {
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
require.NoError(t, err)
require.NoError(t, vs.setExecutionData(context.Background(), blk, capellaTransitionState))
_, err = vs.setExecutionData(context.Background(), blk, capellaTransitionState)
require.NoError(t, err)
e, err := blk.Block().Body().Execution()
require.NoError(t, err)
require.Equal(t, uint64(1), e.BlockNumber()) // Local block
@@ -122,7 +123,8 @@ func TestServer_setExecutionData(t *testing.T) {
vs.ForkFetcher.ForkChoicer().SetGenesisTime(uint64(time.Now().Unix()))
vs.TimeFetcher = chain
vs.HeadFetcher = chain
require.NoError(t, vs.setExecutionData(context.Background(), blk, capellaTransitionState))
_, err = vs.setExecutionData(context.Background(), blk, capellaTransitionState)
require.NoError(t, err)
e, err := blk.Block().Body().Execution()
require.NoError(t, err)
require.Equal(t, uint64(1), e.BlockNumber()) // Local block because incorrect withdrawals
@@ -175,7 +177,8 @@ func TestServer_setExecutionData(t *testing.T) {
vs.ForkFetcher.ForkChoicer().SetGenesisTime(uint64(time.Now().Unix()))
vs.TimeFetcher = chain
vs.HeadFetcher = chain
require.NoError(t, vs.setExecutionData(context.Background(), blk, capellaTransitionState))
_, err = vs.setExecutionData(context.Background(), blk, capellaTransitionState)
require.NoError(t, err)
e, err := blk.Block().Body().Execution()
require.NoError(t, err)
require.Equal(t, uint64(2), e.BlockNumber()) // Builder block
@@ -184,7 +187,8 @@ func TestServer_setExecutionData(t *testing.T) {
blk, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockCapella())
require.NoError(t, err)
vs.ExecutionEngineCaller = &powtesting.EngineClient{PayloadIDBytes: id, ExecutionPayloadCapella: &v1.ExecutionPayloadCapella{BlockNumber: 3}, BlockValue: big.NewInt(3)}
require.NoError(t, vs.setExecutionData(context.Background(), blk, capellaTransitionState))
_, err = vs.setExecutionData(context.Background(), blk, capellaTransitionState)
require.NoError(t, err)
e, err := blk.Block().Body().Execution()
require.NoError(t, err)
require.Equal(t, uint64(3), e.BlockNumber()) // Local block
@@ -196,7 +200,8 @@ func TestServer_setExecutionData(t *testing.T) {
ErrGetHeader: errors.New("fault"),
}
vs.ExecutionEngineCaller = &powtesting.EngineClient{PayloadIDBytes: id, ExecutionPayloadCapella: &v1.ExecutionPayloadCapella{BlockNumber: 4}, BlockValue: big.NewInt(0)}
require.NoError(t, vs.setExecutionData(context.Background(), blk, capellaTransitionState))
_, err = vs.setExecutionData(context.Background(), blk, capellaTransitionState)
require.NoError(t, err)
e, err := blk.Block().Body().Execution()
require.NoError(t, err)
require.Equal(t, uint64(4), e.BlockNumber()) // Local block
@@ -367,10 +372,10 @@ func TestServer_getBuilderBlock(t *testing.T) {
tests := []struct {
name string
blk interfaces.ReadOnlySignedBeaconBlock
blk interfaces.SignedBeaconBlock
mock *builderTest.MockBuilderService
err string
returnedBlk interfaces.ReadOnlySignedBeaconBlock
returnedBlk interfaces.SignedBeaconBlock
}{
{
name: "nil block",
@@ -379,12 +384,12 @@ func TestServer_getBuilderBlock(t *testing.T) {
},
{
name: "old block version",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
}(),
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
@@ -392,7 +397,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
},
{
name: "not configured",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockBellatrix())
require.NoError(t, err)
return wb
@@ -400,7 +405,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
mock: &builderTest.MockBuilderService{
HasConfigured: false,
},
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockBellatrix())
require.NoError(t, err)
return wb
@@ -408,7 +413,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
},
{
name: "submit blind block error",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -425,7 +430,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
},
{
name: "head and payload root mismatch",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -437,7 +442,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
HasConfigured: true,
Payload: p,
},
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -450,7 +455,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
},
{
name: "can get payload",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -476,7 +481,7 @@ func TestServer_getBuilderBlock(t *testing.T) {
HasConfigured: true,
Payload: p,
},
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockBellatrix()
b.Block.Slot = 1
b.Block.ProposerIndex = 2

View File

@@ -35,7 +35,7 @@ func (vs *Server) setBlsToExecData(blk interfaces.SignedBeaconBlock, headState s
}
}
func (vs *Server) unblindBuilderBlockCapella(ctx context.Context, b interfaces.ReadOnlySignedBeaconBlock) (interfaces.ReadOnlySignedBeaconBlock, error) {
func (vs *Server) unblindBuilderBlockCapella(ctx context.Context, b interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, error) {
if err := consensusblocks.BeaconBlockIsNil(b); err != nil {
return nil, errors.Wrap(err, "block is nil")
}
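Several hunks in this and the surrounding files widen parameters from interfaces.ReadOnlySignedBeaconBlock to interfaces.SignedBeaconBlock, because unblinding has to mutate the block by swapping the payload header for the full payload. A toy illustration of that read-only/writable split follows; the interface and struct names are hypothetical, not Prysm's.
package main

import "fmt"

// Hypothetical split between a read-only view and a mutable block interface,
// echoing why unblindBuilderBlockCapella now takes the writable type.
type readOnlyBlock interface {
	IsBlinded() bool
}

type writableBlock interface {
	readOnlyBlock
	SetExecution(payload []byte) error
}

type blindedBlock struct {
	blinded bool
	payload []byte
}

func (b *blindedBlock) IsBlinded() bool             { return b.blinded }
func (b *blindedBlock) SetExecution(p []byte) error { b.payload = p; b.blinded = false; return nil }

// unblind needs the writable interface: it replaces the payload header with
// the full payload returned by the builder.
func unblind(b writableBlock, fullPayload []byte) error {
	if !b.IsBlinded() {
		return nil
	}
	return b.SetExecution(fullPayload)
}

func main() {
	blk := &blindedBlock{blinded: true}
	fmt.Println(unblind(blk, []byte{0x01}), blk.blinded)
}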

View File

@@ -22,10 +22,10 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
tests := []struct {
name string
blk interfaces.ReadOnlySignedBeaconBlock
blk interfaces.SignedBeaconBlock
mock *builderTest.MockBuilderService
err string
returnedBlk interfaces.ReadOnlySignedBeaconBlock
returnedBlk interfaces.SignedBeaconBlock
}{
{
name: "nil block",
@@ -34,12 +34,12 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
},
{
name: "old block version",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
}(),
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
return wb
@@ -47,7 +47,7 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
},
{
name: "not configured",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockBellatrix())
require.NoError(t, err)
return wb
@@ -55,7 +55,7 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
mock: &builderTest.MockBuilderService{
HasConfigured: false,
},
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
wb, err := blocks.NewSignedBeaconBlock(util.NewBlindedBeaconBlockBellatrix())
require.NoError(t, err)
return wb
@@ -63,7 +63,7 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
},
{
name: "submit blind block error",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockCapella()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -80,7 +80,7 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
},
{
name: "can get payload",
blk: func() interfaces.ReadOnlySignedBeaconBlock {
blk: func() interfaces.SignedBeaconBlock {
b := util.NewBlindedBeaconBlockCapella()
b.Block.Slot = 1
b.Block.ProposerIndex = 2
@@ -109,7 +109,7 @@ func TestServer_unblindBuilderCapellaBlock(t *testing.T) {
HasConfigured: true,
PayloadCapella: p,
},
returnedBlk: func() interfaces.ReadOnlySignedBeaconBlock {
returnedBlk: func() interfaces.SignedBeaconBlock {
b := util.NewBeaconBlockCapella()
b.Block.Slot = 1
b.Block.ProposerIndex = 2

View File

@@ -30,11 +30,16 @@ func getEmptyBlock(slot primitives.Slot) (interfaces.SignedBeaconBlock, error) {
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
default:
case slots.ToEpoch(slot) < params.BeaconConfig().DenebForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockCapella{Block: &ethpb.BeaconBlockCapella{Body: &ethpb.BeaconBlockBodyCapella{}}})
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
default:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockDeneb{Block: &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}})
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
}
return sBlk, err
}
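getEmptyBlock now distinguishes a Capella slot from a Deneb slot before allocating the empty container. The epoch switch reduces to the pattern below; the fork epochs are placeholder values, since the real ones come from params.BeaconConfig() and the Deneb epoch was not finalised when this diff was written.
package main

import "fmt"

// Placeholder fork epochs for illustration only.
const (
	capellaForkEpoch = 100
	denebForkEpoch   = 200
)

// emptyBlockKind mirrors the switch in getEmptyBlock: pick the container type
// for the proposal slot's epoch, falling through to Deneb as the default.
func emptyBlockKind(epoch uint64) string {
	switch {
	case epoch < capellaForkEpoch:
		return "pre-Capella (earlier cases elided in the hunk)"
	case epoch < denebForkEpoch:
		return "SignedBeaconBlockCapella"
	default:
		return "SignedBeaconBlockDeneb"
	}
}

func main() {
	fmt.Println(emptyBlockKind(150), emptyBlockKind(250))
}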

View File

@@ -40,9 +40,10 @@ var (
})
)
// This returns the execution payload of a given slot. The function has full awareness of pre and post merge.
// This returns the execution payload of a given slot.
// The function has full awareness of pre- and post-merge conditions.
// The payload is computed given the respective time of the merge.
func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot, vIdx primitives.ValidatorIndex, headRoot [32]byte, st state.BeaconState) (interfaces.ExecutionData, error) {
func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot, vIdx primitives.ValidatorIndex, headRoot [32]byte, st state.BeaconState) (interfaces.ExecutionData, *enginev1.BlobsBundle, error) {
proposerID, payloadId, ok := vs.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, headRoot)
feeRecipient := params.BeaconConfig().DefaultFeeRecipient
recipient, err := vs.BeaconDB.FeeRecipientByValidatorID(ctx, vIdx)
@@ -62,7 +63,7 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
"Please refer to our documentation for instructions")
}
default:
return nil, errors.Wrap(err, "could not get fee recipient in db")
return nil, nil, errors.Wrap(err, "could not get fee recipient in db")
}
if ok && proposerID == vIdx && payloadId != [8]byte{} { // Payload ID is cache hit. Return the cached payload ID.
@@ -73,10 +74,17 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
switch {
case err == nil:
warnIfFeeRecipientDiffers(payload, feeRecipient)
return payload, nil
if slots.ToEpoch(slot) >= params.BeaconConfig().DenebForkEpoch {
sc, err := vs.ExecutionEngineCaller.GetBlobsBundle(ctx, pid)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get blobs bundle from execution client")
}
return payload, sc, nil
}
return payload, nil, nil
case errors.Is(err, context.DeadlineExceeded):
default:
return nil, errors.Wrap(err, "could not get cached payload from execution client")
return nil, nil, errors.Wrap(err, "could not get cached payload from execution client")
}
}
@@ -84,53 +92,61 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
var hasTerminalBlock bool
mergeComplete, err := blocks.IsMergeTransitionComplete(st)
if err != nil {
return nil, err
return nil, nil, err
}
t, err := slots.ToTime(st.GenesisTime(), slot)
if err != nil {
return nil, err
return nil, nil, err
}
if mergeComplete {
header, err := st.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
return nil, nil, err
}
parentHash = header.BlockHash()
} else {
if activationEpochNotReached(slot) {
return consensusblocks.WrappedExecutionPayload(emptyPayload())
p, err := consensusblocks.WrappedExecutionPayload(emptyPayload())
if err != nil {
return nil, nil, err
}
return p, nil, nil
}
parentHash, hasTerminalBlock, err = vs.getTerminalBlockHashIfExists(ctx, uint64(t.Unix()))
if err != nil {
return nil, err
return nil, nil, err
}
if !hasTerminalBlock {
return consensusblocks.WrappedExecutionPayload(emptyPayload())
p, err := consensusblocks.WrappedExecutionPayload(emptyPayload())
if err != nil {
return nil, nil, err
}
return p, nil, nil
}
}
payloadIDCacheMiss.Inc()
random, err := helpers.RandaoMix(st, time.CurrentEpoch(st))
if err != nil {
return nil, err
return nil, nil, err
}
finalizedBlockHash := params.BeaconConfig().ZeroHash[:]
finalizedRoot := bytesutil.ToBytes32(st.FinalizedCheckpoint().Root)
if finalizedRoot != [32]byte{} { // finalized root could be zeros before the first finalized block.
finalizedBlock, err := vs.BeaconDB.Block(ctx, bytesutil.ToBytes32(st.FinalizedCheckpoint().Root))
if err != nil {
return nil, err
return nil, nil, err
}
if err := consensusblocks.BeaconBlockIsNil(finalizedBlock); err != nil {
return nil, err
return nil, nil, err
}
switch finalizedBlock.Version() {
case version.Phase0, version.Altair: // Blocks before Bellatrix don't have execution payloads. Use zeros as the hash.
default:
finalizedPayload, err := finalizedBlock.Block().Body().Execution()
if err != nil {
return nil, err
return nil, nil, err
}
finalizedBlockHash = finalizedPayload.BlockHash()
}
@@ -143,10 +159,10 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
}
var attr payloadattribute.Attributer
switch st.Version() {
case version.Capella:
case version.Capella, version.Deneb:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
return nil, err
return nil, nil, err
}
attr, err = payloadattribute.New(&enginev1.PayloadAttributesV2{
Timestamp: uint64(t.Unix()),
@@ -155,7 +171,7 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
Withdrawals: withdrawals,
})
if err != nil {
return nil, err
return nil, nil, err
}
case version.Bellatrix:
attr, err = payloadattribute.New(&enginev1.PayloadAttributes{
@@ -164,25 +180,32 @@ func (vs *Server) getExecutionPayload(ctx context.Context, slot primitives.Slot,
SuggestedFeeRecipient: feeRecipient.Bytes(),
})
if err != nil {
return nil, err
return nil, nil, err
}
default:
return nil, errors.New("unknown beacon state version")
return nil, nil, errors.New("unknown beacon state version")
}
payloadID, _, err := vs.ExecutionEngineCaller.ForkchoiceUpdated(ctx, f, attr)
if err != nil {
return nil, errors.Wrap(err, "could not prepare payload")
return nil, nil, errors.Wrap(err, "could not prepare payload")
}
if payloadID == nil {
return nil, fmt.Errorf("nil payload with block hash: %#x", parentHash)
return nil, nil, fmt.Errorf("nil payload with block hash: %#x", parentHash)
}
payload, err := vs.ExecutionEngineCaller.GetPayload(ctx, *payloadID, slot)
if err != nil {
return nil, err
return nil, nil, err
}
warnIfFeeRecipientDiffers(payload, feeRecipient)
return payload, nil
if slots.ToEpoch(slot) >= params.BeaconConfig().DenebForkEpoch {
sc, err := vs.ExecutionEngineCaller.GetBlobsBundle(ctx, *payloadID)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get blobs bundle from execution client")
}
return payload, sc, nil
}
return payload, nil, nil
}
// warnIfFeeRecipientDiffers logs a warning if the fee recipient in the included payload does not
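The larger structural change in this file is the third return value: getExecutionPayload now fetches the blobs bundle for the same payload ID whenever the slot is at or past the Deneb epoch, both on the cache-hit path and after ForkchoiceUpdated/GetPayload. A trimmed sketch of that flow follows; the engineCaller interface and its GetBlobsBundle method are stand-ins for Prysm's execution.EngineCaller, and the epoch check is reduced to a boolean.
package main

import (
	"context"
	"errors"
	"fmt"
)

// Trimmed stand-in for the execution engine client used in the diff.
type engineCaller interface {
	GetPayload(ctx context.Context, payloadID [8]byte) ([]byte, error)
	GetBlobsBundle(ctx context.Context, payloadID [8]byte) (*blobsBundle, error)
}

type blobsBundle struct {
	KzgCommitments [][]byte
	Blobs          [][]byte
}

// getExecutionPayload mirrors the new return shape: the payload plus, after
// the Deneb epoch, the blobs bundle fetched for the same payload ID.
func getExecutionPayload(ctx context.Context, ec engineCaller, id [8]byte, postDeneb bool) ([]byte, *blobsBundle, error) {
	payload, err := ec.GetPayload(ctx, id)
	if err != nil {
		return nil, nil, err
	}
	if !postDeneb {
		return payload, nil, nil
	}
	bundle, err := ec.GetBlobsBundle(ctx, id)
	if err != nil {
		return nil, nil, errors.New("could not get blobs bundle from execution client")
	}
	return payload, bundle, nil
}

// fakeEngine is a trivial in-memory implementation for the usage example.
type fakeEngine struct{}

func (fakeEngine) GetPayload(context.Context, [8]byte) ([]byte, error) { return []byte{0x01}, nil }
func (fakeEngine) GetBlobsBundle(context.Context, [8]byte) (*blobsBundle, error) {
	return &blobsBundle{Blobs: [][]byte{{0xbb}}}, nil
}

func main() {
	_, bundle, err := getExecutionPayload(context.Background(), fakeEngine{}, [8]byte{1}, true)
	fmt.Println(len(bundle.Blobs), err)
}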

View File

@@ -143,7 +143,7 @@ func TestServer_getExecutionPayload(t *testing.T) {
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(tt.st.Slot(), 100, [8]byte{100}, [32]byte{'a'})
_, err := vs.getExecutionPayload(context.Background(), tt.st.Slot(), tt.validatorIndx, [32]byte{'a'}, tt.st)
_, _, err := vs.getExecutionPayload(context.Background(), tt.st.Slot(), tt.validatorIndx, [32]byte{'a'}, tt.st)
if tt.errString != "" {
require.ErrorContains(t, tt.errString, err)
} else {
@@ -179,7 +179,7 @@ func TestServer_getExecutionPayloadContextTimeout(t *testing.T) {
}
vs.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nonTransitionSt.Slot(), 100, [8]byte{100}, [32]byte{'a'})
_, err = vs.getExecutionPayload(context.Background(), nonTransitionSt.Slot(), 100, [32]byte{'a'}, nonTransitionSt)
_, _, err = vs.getExecutionPayload(context.Background(), nonTransitionSt.Slot(), 100, [32]byte{'a'}, nonTransitionSt)
require.NoError(t, err)
}
@@ -224,7 +224,7 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
BeaconDB: beaconDB,
ProposerSlotIndexCache: cache.NewProposerPayloadIDsCache(),
}
gotPayload, err := vs.getExecutionPayload(context.Background(), transitionSt.Slot(), 0, [32]byte{}, transitionSt)
gotPayload, _, err := vs.getExecutionPayload(context.Background(), transitionSt.Slot(), 0, [32]byte{}, transitionSt)
require.NoError(t, err)
require.NotNil(t, gotPayload)
@@ -236,7 +236,7 @@ func TestServer_getExecutionPayload_UnexpectedFeeRecipient(t *testing.T) {
payload.FeeRecipient = evilRecipientAddress[:]
vs.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
gotPayload, err = vs.getExecutionPayload(context.Background(), transitionSt.Slot(), 0, [32]byte{}, transitionSt)
gotPayload, _, err = vs.getExecutionPayload(context.Background(), transitionSt.Slot(), 0, [32]byte{}, transitionSt)
require.NoError(t, err)
require.NotNil(t, gotPayload)

View File

@@ -63,6 +63,7 @@ type Server struct {
SlashingsPool slashings.PoolManager
ExitPool voluntaryexits.PoolManager
SyncCommitteePool synccommittee.Pool
BLSChangesPool blstoexec.PoolManager
BlockReceiver blockchain.BlockReceiver
MockEth1Votes bool
Eth1BlockFetcher execution.POWBlockFetcher
@@ -73,7 +74,6 @@ type Server struct {
BeaconDB db.HeadAccessDatabase
ExecutionEngineCaller execution.EngineCaller
BlockBuilder builder.BlockBuilder
BLSChangesPool blstoexec.PoolManager
}
// WaitForActivation checks if a validator public key exists in the active validator registry of the current

View File

@@ -98,6 +98,7 @@ type Config struct {
AttestationsPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingsPool slashings.PoolManager
BLSToExecPool blstoexec.PoolManager
SlashingChecker slasherservice.SlashingChecker
SyncCommitteeObjectPool synccommittee.Pool
BLSChangesPool blstoexec.PoolManager
@@ -218,12 +219,12 @@ func (s *Service) Start() {
SlashingsPool: s.cfg.SlashingsPool,
StateGen: s.cfg.StateGen,
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
BLSChangesPool: s.cfg.BLSChangesPool,
ReplayerBuilder: ch,
ExecutionEngineCaller: s.cfg.ExecutionEngineCaller,
BeaconDB: s.cfg.BeaconDB,
ProposerSlotIndexCache: s.cfg.ProposerIdsCache,
BlockBuilder: s.cfg.BlockBuilder,
BLSChangesPool: s.cfg.BLSChangesPool,
}
validatorServerV1 := &validator.Server{
HeadFetcher: s.cfg.HeadFetcher,
@@ -243,6 +244,7 @@ func (s *Service) Start() {
ReplayerBuilder: ch,
},
SyncCommitteePool: s.cfg.SyncCommitteeObjectPool,
BLSChangesPool: s.cfg.BLSChangesPool,
ProposerSlotIndexCache: s.cfg.ProposerIdsCache,
}
@@ -296,6 +298,7 @@ func (s *Service) Start() {
ReceivedAttestationsBuffer: make(chan *ethpbv1alpha1.Attestation, attestationBufferSize),
CollectedAttestationsBuffer: make(chan []*ethpbv1alpha1.Attestation, attestationBufferSize),
ReplayerBuilder: ch,
BLSChangesPool: s.cfg.BLSChangesPool,
}
beaconChainServerV1 := &beacon.Server{
CanonicalHistory: ch,

Some files were not shown because too many files have changed in this diff.