Compare commits

...

337 Commits

Author SHA1 Message Date
terence tsao
365c253cdd Make CI/CD green again 2023-03-11 12:59:50 -08:00
terence tsao
4f53fe0600 Make CI/CD green again 2023-03-11 12:36:38 -08:00
terence tsao
856ce6d82a Rm BlobsSidecar 2023-03-11 12:07:43 -08:00
terence tsao
6d2e597967 Revert pending block and blob changes 2023-03-11 10:38:36 -08:00
terence tsao
3919c214b9 Use blob by root for pending cache 2023-03-11 10:28:10 -08:00
terence tsao
38dee4077f Use block and blob cache before import 2023-03-10 09:46:10 -08:00
terence tsao
8cc1e67e6c Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-10 08:23:09 -08:00
terencechain
2f3deac8b0 Make eip4844 green again (#12093) 2023-03-10 07:32:09 -08:00
Preston Van Loon
5eaa152589 Bazel: fix remote cache uploads (#12108) 2023-03-10 00:24:37 +00:00
Potuz
6d3ff65635 increase attempted reorgs at the right spot (#12106) 2023-03-09 15:06:46 +00:00
Marius van der Wijden
83a294c1a5 go.mod: set a non-zero version for tx-fuzz (#12098)
* go.mod: set a non-zero version for tx-fuzz

* go.mod: use upstream tx-fuzz instead of fork

* run gazelle

* fix tests

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-03-09 09:23:45 +00:00
james-prysm
753e285fb6 Prysm V4: Remove Prysm Remote Signer (#11895)
* removing all prysm remote signer code

* fixing unit tests

* resolving more build issues

* resolving deepsource complaint

* fixing lint

* trying to fix bazel library

* trying testonly true

* removing assert and require from non test settings

* fixing bazel and tests

* removing more unused files related to remote signer

* fixing linting

* reverting some changes

* reverting a change that broke some code

* removing typo

* fixing unit test

* fixing mnemonic information
2023-03-08 21:21:12 -06:00
Potuz
525d3b05a6 Late block reorgs metrics and logs (#12097)
* Late block reorgs metrics and logs

* tests

* mixed thresholds

* use threshold variables

* use infof

---------

Co-authored-by: terencechain <terence@prysmaticlabs.com>
2023-03-09 01:51:50 +00:00
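On the metrics side of a change like #12097 above, the usual pattern is a Prometheus counter incremented at the point where a late-block reorg is actually attempted; moving that increment to the right place is what #12106, listed earlier in this log, appears to adjust. A hypothetical sketch only, with an invented metric name and helper rather than the ones the PR registers:

package metricsexample

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// attemptedReorgs counts late-block reorg attempts. Placeholder name only.
var attemptedReorgs = promauto.NewCounter(prometheus.CounterOpts{
	Name: "example_late_block_attempted_reorgs_total",
	Help: "Number of attempted late-block reorgs (illustrative metric).",
})

// onReorgAttempt would be called at the point where the reorg is attempted.
func onReorgAttempt() {
	attemptedReorgs.Inc()
}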
Nishant Das
4f38ba38b7 Update Libp2p (#12096)
* update deps

* add preston's fix

* fix build

* gaz

* test fixes
2023-03-08 23:52:51 +08:00
Potuz
eb0b5a6146 Do not call repeated times FCU (#12091)
* Do not call repeated times FCU

* cleanup

* only log if prune attestations fail
2023-03-08 09:36:46 -03:00
terencechain
aa44ba40ab Add blob gossip (#12007) 2023-03-07 20:05:43 -08:00
terencechain
8b268d646c Add block and blobs cache (#12088) 2023-03-07 15:32:36 -08:00
Preston Van Loon
4356cbc352 Update cross compile toolchains (#12069)
* Regenerate cross-toolchain configs

* Remove some extra whitespaces

* Run gazelle and add that note to the README

* Format numbered lists better in markdown

* gcloud docker command is deprecated, just use docker

* Add comment about docker credentials for gcr.io

* Update dockerfile, some remote executor config work

* gazelle

* Remove commented lines

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-07 20:09:46 +00:00
kasey
0893821e35 Disable conditional go 1.20 code until module is also at 1.20 (#12084)
* disable conditional go 1.20 code until we upgrade

* bazel decided this was unreachable and removed it!

---------

Co-authored-by: kasey <kasey@users.noreply.github.com>
2023-03-07 16:25:35 +00:00
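Version-conditional Go code of this kind normally sits behind file-level build constraints, so it can be parked until the module's own go directive moves to 1.20 (and, as the second bullet notes, Bazel may prune whichever branch its analysis considers unreachable). A hypothetical illustration with invented file, package, and function names, assuming errors.Join as the 1.20-only feature:

// join_go120.go: compiled only when building with Go 1.20 or newer.
//go:build go1.20

package buildtagexample

import "errors"

// joinErrs relies on errors.Join, which was added in Go 1.20.
func joinErrs(errs ...error) error { return errors.Join(errs...) }

A sibling file guarded by //go:build !go1.20 would carry the fallback implementation; flipping or removing these constraints is how such code gets disabled and later re-enabled.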
kasey
d6ecadb471 BlobsByRoot RPC (#12011)
* boilerplate for block&block-by-root->blob-by-root

* add db interface to service and use in handler

* rm unused requestBlockAndSidecarByRoot

* add test for base case of sidecar by root

* test blob slot < min req epoch, other fixes

* cleaning up test mess

* rm unused func

* add BroadcastBlob method stubs to fix build

* handler name consistent with path

* initialize blob queue for test

* lint & gaz

* update mock to satisfy interface

* fix wrong sig for mock

* clean up min req epoch, no underflow, + tests

---------

Co-authored-by: kasey <kasey@users.noreply.github.com>
2023-03-07 09:45:01 -06:00
Bret
c3346fefa7 Fix log messages (#12086)
These two log messages were appending a `d` to the hash. When compared to the other hash, they matched up until the `d`

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-03-07 07:16:39 +00:00
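The root cause in #12086 above is a format string rather than the hashes themselves: in Go, "%#xd" is parsed as the verb %#x followed by a literal d, so the printed hash picks up a trailing d. A minimal standalone sketch of the difference (variable names are illustrative):

package main

import "fmt"

func main() {
	root := []byte{0xde, 0xad, 0xbe, 0xef}

	// Buggy: "%#xd" is the verb "%#x" plus a literal 'd'.
	fmt.Printf("buggy: %#xd\n", root) // buggy: 0xdeadbeefd

	// Fixed: "%#x" prints only the 0x-prefixed hex value.
	fmt.Printf("fixed: %#x\n", root) // fixed: 0xdeadbeef
}

The same correction appears in the DownloadFinalizedData log lines in the diff further down.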
Nishant Das
d639a26bbe Fix E2E Flakes (#12074)
* fix flakes

* make it longer

* make it less to prevent triggering of other issues

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-07 04:15:51 +00:00
terence tsao
9a8bde448e Sync with develop 2023-03-06 16:16:18 -08:00
terence tsao
ce71b3b6b1 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-03-06 15:56:29 -08:00
Potuz
bb95d951cc Use Epoch boundary cache to retrieve balances (#12083)
* Use Epoch boundary cache to retrieve balances

* save boundary states before inserting to forkchoice

* move up last block save

* remove boundary checks on balances

* fix ordering

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-06 21:42:47 +00:00
terencechain
78d49fda13 Minor cleanup to forkchoice pkg (#12078)
* Minor cleanup to forkchoice pkg

* Rm implementation

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-06 17:01:36 +00:00
Nishant Das
3aaba7c065 Remove Feature Flag From Prater (#12082) 2023-03-06 09:56:53 +00:00
terencechain
b8a1bcdfe3 Circuit breaker: lower max builder epoch missed slots to 5 (#12076)
* Circuit breaker: lower max builder epoch missed slots to 5

* Change equality to >=

* Fix tests

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-05 19:10:41 +00:00
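A minimal sketch of the comparison the second bullet changes; the function and parameter names are invented, and only the lowered limit of 5 and the switch to >= come from the commit itself:

package circuitexample

// circuitBreakerTripped reports whether the validator should stop requesting
// payloads from the external builder and build blocks locally instead.
// With this change the default limit would be 5 missed slots per epoch.
func circuitBreakerTripped(missedSlotsThisEpoch, maxMissedSlots uint64) bool {
	// ">=" rather than ">", so the breaker trips exactly at the limit.
	return missedSlotsThisEpoch >= maxMissedSlots
}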
Potuz
93514de00f Proposer head v5 (#12075)
* Use ShouldOverrideFCU in regular sync

* fix build

* fix tests

* add feature flag gate in updatehead

* fix rpc tests

* fix grcp tests

* deepsource

* add locks and reuse isNewProposer

* flip flag

* Fix ticker for late blocks

* implement get_proposer_head in GetBeaconBlock

* fix unit tests
2023-03-04 20:19:23 -03:00
terencechain
86a883aa19 Add capella fork epoch for Goerli (#12073)
* Add capella fork epoch for Goerli

* update pr

---------

Co-authored-by: nisdas <nishdas93@gmail.com>
2023-03-04 09:43:20 +00:00
Preston Van Loon
c000e8fde5 Raise the max grpc message size to a very large value by default (#12072)
* Raise the max grpc message size to a very large value by default

* unused import

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-02 19:04:58 +00:00
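For context, these limits are ordinary grpc-go options on both the server and the dialing client. A hedged sketch of how such defaults are raised; the 256 MiB figure is a placeholder rather than the value the PR picks:

package rpcexample

import "google.golang.org/grpc"

// newServerAndDialOpts shows where the receive/send size caps live.
func newServerAndDialOpts() (*grpc.Server, []grpc.DialOption) {
	const maxMsgSize = 256 << 20 // placeholder: 256 MiB

	// Server side: accept and emit large messages.
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)

	// Client side: raise the per-call defaults when dialing.
	dialOpts := []grpc.DialOption{
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	}
	return srv, dialOpts
}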
dependabot[bot]
75e5887f07 Bump golang.org/x/net from 0.6.0 to 0.7.0 (#12063)
* Bump golang.org/x/net from 0.6.0 to 0.7.0

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.6.0 to 0.7.0.
- [Release notes](https://github.com/golang/net/releases)
- [Commits](https://github.com/golang/net/compare/v0.6.0...v0.7.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>

* gazelle

* go mod tidy

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Preston Van Loon <preston@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-02 17:37:16 +00:00
Radosław Kapka
4ca3c5b058 Add slot to proposal error logs (#12071)
* Add slot to proposal error logs

* remove one field
2023-03-02 16:33:39 +00:00
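Attaching the slot is a matter of one more structured field on the error log. A hypothetical logrus sketch; the helper name and message are invented, while the import paths match those used elsewhere in this diff:

package logexample

import (
	"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
	"github.com/sirupsen/logrus"
)

// logProposalFailure attaches the slot to the proposal error log so operators
// can tell which proposal failed.
func logProposalFailure(err error, slot primitives.Slot) {
	logrus.WithError(err).WithField("slot", slot).Error("Could not propose block")
}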
Radosław Kapka
25d06d41be Fix type name in field_trie_helpers error message (#12070)
(cherry picked from commit bc9330c329)
2023-03-02 14:33:22 +00:00
Potuz
0a87210514 Forkchoice external locks v2 (#12036)
* write locks

* fix forkchoice tests

* blockchain locks

* lock on IsOptimistic

* use forkchoice instead of chaininfo within savehead

* Use forkchoice HasNode to check if a block is consistent with finality

* interface fix

* Use forkchoice HasNode to check if a block is consistent with finality

* interface fix

* fix tests

* remove VerifyFinalizedBlkDescendant

* don't write lock wrappers

* fix validateBeaconBlock

* Terence's review and more missing locks

* add lock for InForkChoice

* lock head on fillMissingBlockPayload

* fix lock on IsOptimisticForRoot

* fix lock in fillMissingBlockPayloadId

* extra comments

* lock proposerBoost on spectests

* nishant's review

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-03-02 09:10:52 -03:00
Nishant Das
196798eacc Update Deps For Capella (#12067)
* update

* gzl

* Using zstd workaround from @tals, per github.com/bazelbuild/rules_go/issues/3411

* gaz

* hacky patch

---------

Co-authored-by: rkapka <rkapka@wp.pl>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-02 03:31:41 +00:00
Radosław Kapka
17fe935343 Deprecate --interop-genesis-state (#12008)
* Deprecate `--interop-genesis-state`

* better pattern

* fix errors

* test fix

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-03-01 19:39:17 +00:00
Radosław Kapka
ac4483417d Redesign voluntary exits pool (#11898)
Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-03-01 17:44:00 +01:00
Potuz
0d3fb0a32b lock head on fillMissingBlockPayload (#12068) 2023-03-01 13:09:10 +00:00
Nishant Das
3d337b07e1 Remove Ropsten Testnet Config (#12058)
* remove support for ropsten testnet

* add deprecated flag for ropsten

---------

Co-authored-by: P <1674920+peterbitfly@users.noreply.github.com>
Co-authored-by: Raul Jordan <raul@prysmaticlabs.com>
2023-03-01 13:26:32 +08:00
Raul Jordan
11b90e1f63 Store Blinded Beacon Blocks by Default for New Prysm Databases (#11591)
* fresh database checks

* gaz

* fix up tests

* fix test

* build

* bellatrix pass tests

* fix flags

* gaz

* fix up rpc test

* gaz

* building

* refactor, remove boolean blindness

* kv blocks

* fix up ensure coverage

* e2e default

* add missing case

* nishant feedback:

* log

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
2023-03-01 00:07:23 +00:00
Preston Van Loon
3c73bac798 Update protoc-gen-go-cast to suppress tool output (#12062)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-02-28 23:04:22 +00:00
Preston Van Loon
91fee5db17 Update distroless base images (#12061)
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
2023-02-28 22:50:37 +00:00
Preston Van Loon
155b0c161e Bazel: cleanup .bazelrc file (#12059)
* Reorganize bazelrc and intro new flags

* move cross compilation toolchain into its own bazelrc

* Restore build_tests_only
2023-02-28 21:36:43 +00:00
Nishant Das
a7010d817d Fix Scenario Test Failures (#12056)
* fix scenario failures

* fix up

* continue fixing
2023-02-28 23:21:15 +08:00
james-prysm
c0dd233a1c E2E: beacon api post attester duties (#11899)
* adding new post request util and attester duties test

* adding in status checks

* fixing error

* increasing size of request

* fixing gofmt
2023-02-28 13:35:02 +08:00
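The attester-duties call in the standard beacon API is a POST whose body is a JSON array of validator indices encoded as strings. A hedged sketch of such a request helper, independent of the E2E utilities this PR actually adds:

package apiexample

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// postAttesterDuties queries /eth/v1/validator/duties/attester/{epoch} on a
// beacon node. The body is a JSON array such as ["0","1","2"].
func postAttesterDuties(host string, epoch uint64, indices []string) (*http.Response, error) {
	body, err := json.Marshal(indices)
	if err != nil {
		return nil, err
	}
	url := fmt.Sprintf("%s/eth/v1/validator/duties/attester/%d", host, epoch)
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	// Callers should check resp.StatusCode (the PR adds such status checks)
	// and close resp.Body when finished.
	return resp, nil
}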
Preston Van Loon
c391fad258 Update rules docker to v0.25.0 (#12054)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2023-02-27 23:46:10 +00:00
Radosław Kapka
e92b546a36 Return proposer reward from ProcessSyncAggregate (#12047) 2023-02-27 19:04:23 +01:00
Patrice Vignola
765345ac3a Remove the gRPC fallback client from the validator REST API (#12051) 2023-02-27 12:46:34 +00:00
Potuz
ec13d52f03 Remove VerifyFinalizedBlkDescendant (#12046)
* Use forkchoice HasNode to check if a block is consistent with finality

* interface fix

* fix tests

* remove VerifyFinalizedBlkDescendant

* fix validateBeaconBlock
2023-02-25 13:39:13 -03:00
Manu NALEPA
08ebc99bc3 Add (lack of) REST implementation for GetFeeRecipientByPubKey (#11991)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2023-02-24 21:43:52 +00:00
terence tsao
5fa30bf73a Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-23 15:45:32 -08:00
terencechain
b93aba6126 Add validator signing decoupled blobs (#12015) 2023-02-23 13:26:45 -08:00
terence tsao
51ed80df69 Fix build 2023-02-23 09:22:38 -08:00
terencechain
a614c4ac8c Add broadcast blob method (#12016) 2023-02-23 07:52:58 -08:00
terencechain
7b777a10a5 EIP4844: update excessive data gas field and pass spec tests (#12032) 2023-02-22 10:19:32 -08:00
terence tsao
bf1ab9951f Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-22 06:59:31 -08:00
terencechain
5cea2b3855 Revert block gossip changes to before coupling (#12012) 2023-02-21 17:06:11 -08:00
terencechain
ab63757fe5 Add decoupled blob protobufs (#12002) 2023-02-16 11:10:55 -08:00
terencechain
736ed1e003 Fix spec tests after renamed EIP4844 to Deneb (#12005) 2023-02-16 11:10:37 -08:00
terence tsao
6ee9707fd7 Sync with develop 2023-02-16 08:49:00 -08:00
terence tsao
e40835b1a9 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-02-16 08:20:20 -08:00
terence tsao
216a420bbc Sync with master 2023-01-28 14:19:50 +01:00
terence tsao
4845abecb8 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-01-28 13:09:08 +01:00
James He
08a910e44f renaming to proto endpoint 2023-01-27 16:30:24 +01:00
terence tsao
67ba2c4fe3 Add slash 2023-01-27 15:36:10 +01:00
terence tsao
7efa501bdc Fix space 2023-01-27 15:35:57 +01:00
terence tsao
879a694ab3 Another nil check 2023-01-27 10:15:04 +01:00
terence tsao
4e3b1881ed Add check for nil block 2023-01-27 09:44:15 +01:00
Inphi
35d3707de7 sync: Fix BlobsSidecarByRange rate limiting (#11920) 2023-01-26 07:48:39 +01:00
terence tsao
1f468bd3f5 Rm field params 2023-01-25 10:49:06 +01:00
terence tsao
ac32098c86 Merge branch 'init-nil-withdrawals' into eip4844 2023-01-25 10:32:11 +01:00
terence tsao
e0af005c42 Initialize nil withdrawals at marshal / unmarshal 2023-01-25 10:29:44 +01:00
Francis Li
f08af1bdbf Fix eip4844 branch errors (#11901) 2023-01-24 11:52:05 +01:00
terence tsao
6d0420fde5 Merge branch 'eip4844' of github.com:prysmaticlabs/prysm into eip4844 2023-01-24 11:41:40 +01:00
terence tsao
5172e6e362 Rate limiter 2023-01-24 11:41:08 +01:00
Inphi
42df0f70b6 sync: Re-broadcast block and blobs sidecar (#11907) 2023-01-24 11:15:57 +01:00
terence tsao
dace0f6a2d Rename eip4844 to deneb 2023-01-19 17:30:36 -08:00
terence tsao
26a5878181 Write bad block and blob to disk 2023-01-18 14:02:09 -08:00
terence tsao
db6474a3e4 Sync with develop 2023-01-18 12:00:48 -08:00
terence tsao
b84851fd0d Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2023-01-18 08:38:13 -08:00
terence tsao
673702c100 Fix loop referencing for kzgs 2023-01-12 15:50:46 -08:00
terence tsao
520eb6baca Rm unused checks 2023-01-12 10:36:21 -08:00
terence tsao
e6047dc344 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2023-01-11 14:59:00 -08:00
terence tsao
d86a452b15 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-11 14:52:26 -08:00
terence tsao
67f9d0b9c4 Fix log 2023-01-11 08:08:33 -08:00
terence tsao
21cd055b84 Better logs 2023-01-11 07:40:19 -08:00
Kasey Kirkham
9f3bb623ec add capella to yaml "template" 2023-01-10 16:30:32 -06:00
Kasey Kirkham
b10a95097e capella state version detection bug fix 2023-01-10 15:54:27 -06:00
terence tsao
4561f5cacb Clean ups 2023-01-09 14:33:05 -08:00
terence tsao
50b672a4db Beacon api: get blobs 2023-01-09 11:40:06 -08:00
Potuz
ffbb73a59b Merge remote-tracking branch 'origin/develop' into capella 2023-01-09 12:07:00 -03:00
Potuz
649974f14d removed duplicated case 2023-01-09 09:54:15 -03:00
Potuz
9ec0bc0734 Merge remote-tracking branch 'origin/historical-summaries' into capella 2023-01-09 08:57:24 -03:00
terence tsao
9649e49658 Passing spec tests 2023-01-07 08:50:20 -08:00
terence tsao
49fdcb7347 Add spec tests 2023-01-07 08:33:48 -08:00
terence tsao
cd6ee956ed Merge branch 'historical-summaries' into eip4844 2023-01-07 08:03:15 -08:00
terence tsao
ef95fd33f8 Uncomment withdrawal stubs 2023-01-07 07:52:31 -08:00
terence tsao
1a488241b0 Sync with capella 2023-01-07 07:48:57 -08:00
terence tsao
5fdd3a3d66 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2023-01-07 07:19:55 -08:00
terence tsao
b6a32c050f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-07 07:03:58 -08:00
terence tsao
055e225093 Passing spec tests 2023-01-06 15:18:38 -08:00
terence tsao
144218cb1b Process historical roots test 2023-01-06 11:56:24 -08:00
Potuz
13b575a609 Merge remote-tracking branch 'origin/develop' into capella 2023-01-06 10:59:43 -03:00
terence tsao
b5a414eae9 Rm bad imports 2023-01-04 08:10:27 -08:00
terence tsao
b94b347ace Sync with develop 2023-01-04 07:37:25 -08:00
terence tsao
f5ee225819 Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2023-01-04 07:16:00 -08:00
terence tsao
9cb48be14f Add historical summaries to state and processing 2023-01-03 15:58:39 -08:00
terence tsao
85fa9951eb Fix span name 2022-12-24 23:40:30 +08:00
terence tsao
ec72575fc9 Close stream for by root rpc handler 2022-12-24 23:34:02 +08:00
terence tsao
d9d1bb6d3d Skip on empty 2022-12-24 11:37:58 +08:00
Potuz
ffcdc26618 Merge remote-tracking branch 'origin/develop' into capella 2022-12-23 13:03:11 -03:00
Potuz
96981a07b9 check signatures of BLS changes before capella 2022-12-22 16:10:00 -03:00
Potuz
6b2721b239 Check BLS_TO_EXECUTION_CHANGE as if they were from Capella
In the event we receive a BLS_TO_EXECUTION_CHANGE and our head state is
before Capella, verify its signature with the Capella fork version.
2022-12-22 11:34:57 -03:00
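The fork version only enters signature verification through the signing domain, so handling a pre-Capella head this way means substituting the Capella fork version when the domain is computed. A self-contained sketch of the spec's compute_domain; the helper name is mine, and Prysm's own signing package is organised differently:

package signingexample

import "crypto/sha256"

// computeDomain mirrors the spec: the 4-byte domain type is concatenated with
// the first 28 bytes of hash_tree_root(ForkData). For a BLS_TO_EXECUTION_CHANGE
// received before Capella activates, forkVersion would be the Capella fork
// version rather than the head state's fork version, per the commit above.
func computeDomain(domainType, forkVersion [4]byte, genesisValidatorsRoot [32]byte) [32]byte {
	// ForkData is a two-field SSZ container, so its hash tree root is
	// sha256(pad32(current_version) || genesis_validators_root).
	var versionChunk [32]byte
	copy(versionChunk[:], forkVersion[:])
	forkDataRoot := sha256.Sum256(append(versionChunk[:], genesisValidatorsRoot[:]...))

	var domain [32]byte
	copy(domain[:4], domainType[:])
	copy(domain[4:], forkDataRoot[:28])
	return domain
}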
terence tsao
c79151a574 Fix rate limiter 2022-12-22 22:10:34 +08:00
Inphi
4b20234801 Fix missing context in post-altair /v1 messages (#11807) 2022-12-22 06:46:15 +08:00
terence tsao
911048aa6d Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-21 13:01:40 +08:00
James He
255e9693ee fixing typo on comments 2022-12-20 22:42:31 -06:00
terence tsao
61c1216e3d Better handling for validate sync msg time 2022-12-21 10:08:44 +08:00
terence tsao
17e1eaf0f3 Set stream write deadline 2022-12-21 09:38:00 +08:00
terence tsao
9940943595 Clean up sync 2022-12-21 09:30:11 +08:00
terence tsao
9a0f941870 Change avgSidecarBlobsTransferBytes 2022-12-21 08:32:47 +08:00
terence tsao
5d0f54d332 Blobs rate limiter 2022-12-21 08:17:54 +08:00
rkapka
d602c94b7b fix 2022-12-20 20:28:53 +01:00
rkapka
6a5ecbd68f Implement getPoolBLSToExecutionChanges API endpoint
(cherry picked from commit cd25d922bc)

# Conflicts:
#	beacon-chain/node/node.go
#	beacon-chain/rpc/eth/beacon/pool.go
#	proto/eth/service/beacon_chain_service.pb.go
#	proto/migration/v1alpha1_to_v2.go
2022-12-20 17:31:21 +01:00
Potuz
29dfcab505 Fix BlockValue marshalling 2022-12-19 11:14:35 -03:00
Potuz
16e5c903cc Merge remote-tracking branch 'origin/develop' into capella 2022-12-19 10:28:05 -03:00
terencechain
66682cb4e5 Clean up block initizliation and remove set block (#11791) 2022-12-19 15:41:28 +08:00
terencechain
52faea8b7d Support 4844 container type queries in the beacon API + update spec tests (#11794) 2022-12-19 15:38:23 +08:00
terence tsao
8a78315682 Save blobs during by range sync 2022-12-18 07:44:30 +08:00
Potuz
cab42a4ae3 Take raw arrays for BLS changes 2022-12-16 18:02:35 -03:00
Potuz
a5bdd42bdd Merge remote-tracking branch 'origin/block-value' into capella 2022-12-16 12:47:29 -03:00
Potuz
a26197f919 take lists for bls changes endpoint 2022-12-16 12:47:18 -03:00
terence tsao
8b9cab457e Got block and blobs gossip working 2022-12-16 13:41:59 +08:00
terence tsao
080ce31395 Add block value to get payload v2 2022-12-15 17:17:58 +08:00
terence tsao
7866e8a196 Got blob syncing to work 2022-12-15 17:02:38 +08:00
Potuz
d5d17e00b3 Merge branch 'develop' into capella 2022-12-14 13:04:23 -03:00
rkapka
9c6a1331cf remove redeclared struct 2022-12-14 16:29:14 +01:00
terence tsao
d89c97634c Merge branch 'eip4844' of github.com:prysmaticlabs/prysm into eip4844 2022-12-14 10:26:47 +08:00
terence tsao
7e95ca3705 Add blobs to initial syncing path 2022-12-14 10:26:39 +08:00
Inphi
abd46b01b7 Fix non-empty kzg commitment in proposal (#11766) 2022-12-14 08:06:14 +08:00
Potuz
8629ac8417 only broadcast bls changes post-capella 2022-12-13 09:53:05 -03:00
terence tsao
304925aabf Add todos for 4844 sync 2022-12-13 16:18:09 +08:00
terence tsao
16d93e47a5 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-13 15:12:46 +08:00
rkapka
6dcb2bbf0d Use signed changes in middleware block
(cherry picked from commit e3c9e7bb5c)
2022-12-12 16:21:39 +01:00
Potuz
deb138959a fix validator client 2022-12-12 11:43:48 -03:00
Potuz
45e6f3bd00 fix build 2022-12-12 11:29:57 -03:00
Potuz
55a9e0d51a Merge branch 'develop' into capella 2022-12-12 11:15:39 -03:00
terence tsao
3ddae600fb Merge branch 'capella' of github.com:prysmaticlabs/prysm into capella 2022-12-09 19:35:39 -08:00
terence tsao
861ede8945 Fix subscriptions 2022-12-09 15:29:23 -08:00
terence tsao
93f11f9047 Change target / max blobs to 2 / 4 2022-12-09 10:40:20 -08:00
rkapka
56503110dd Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	testing/spectest/shared/common/forkchoice/service.go
2022-12-09 12:45:23 +01:00
rkapka
f67d35dffd single execution block type 2022-12-09 12:43:35 +01:00
terence tsao
efbca1b5b7 Add v3 engine apis 2022-12-08 16:45:05 -08:00
terence tsao
2de0ebaf8d Merge branch 'roberto-fix-auth' into eip4844 2022-12-08 13:52:11 -08:00
Roberto Bayardo
0815ef94a3 Merge branch 'develop' into roberto-fix-auth 2022-12-08 13:23:29 -08:00
Roberto Bayardo
092ffa99e5 update & fix code around setting auth header for latest geth 2022-12-08 13:14:09 -08:00
rkapka
b05b67b264 reorder checks 2022-12-08 19:48:02 +01:00
rkapka
a5c6518c20 deepsource 2022-12-08 19:48:02 +01:00
Radosław Kapka
da048395ce Merge branch 'develop' into recontruct-capella-blinded 2022-12-08 18:41:12 +01:00
rkapka
f31f7be310 fix engine mock 2022-12-08 18:39:59 +01:00
rkapka
e1a2267f86 Merge remote-tracking branch 'origin/capella' into capella 2022-12-08 18:36:44 +01:00
rkapka
3c9e4ee7f7 Merge branch 'recontruct-capella-blinded' into capella
# Conflicts:
#	beacon-chain/blockchain/pow_block.go
#	beacon-chain/execution/engine_client.go
#	beacon-chain/execution/engine_client_test.go
#	beacon-chain/execution/testing/mock_engine_client.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	beacon-chain/state/state-native/getters_withdrawal.go
#	consensus-types/blocks/factory.go
#	proto/engine/v1/json_marshal_unmarshal.go
#	proto/engine/v1/json_marshal_unmarshal_test.go
2022-12-08 18:31:25 +01:00
rkapka
9ba32c9acd single ExecutionBlockByHash function 2022-12-08 17:53:02 +01:00
rkapka
d23008452e fix failing tests 2022-12-08 17:29:51 +01:00
terencechain
f397cba1e0 Better proposal RPC (#11721) 2022-12-08 07:40:48 -08:00
terence tsao
3eecbb5b1a Fix enum for cli 2022-12-07 12:12:56 -08:00
rkapka
1583e93b48 bzl 2022-12-07 18:23:43 +01:00
rkapka
849457df81 deepsource
(cherry picked from commit 903cab75ee)

# Conflicts:
#	beacon-chain/execution/testing/mock_engine_client.go
2022-12-07 16:29:34 +01:00
rkapka
903cab75ee deepsource 2022-12-07 16:27:09 +01:00
rkapka
ee108d4aff add doc to interface
(cherry picked from commit a08baf4a14)
2022-12-07 16:22:14 +01:00
rkapka
49bcc58762 rename methods
(cherry picked from commit 8c56dfdd46)
2022-12-07 16:22:09 +01:00
rkapka
a08baf4a14 add doc to interface 2022-12-07 16:20:43 +01:00
rkapka
8c56dfdd46 rename methods 2022-12-07 16:20:31 +01:00
rkapka
dcdd9af9db remove unneeded test 2022-12-07 16:05:44 +01:00
rkapka
a464cf5c60 Merge branch 'reconstruct-capella-block' into capella
(cherry picked from commit b0601580ef)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/engine/v1/json_marshal_unmarshal.go
2022-12-07 15:21:58 +01:00
terence tsao
cc55c754dc Fix cli flag 2022-12-06 16:57:50 -08:00
terence tsao
2d483ab09f Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-06 16:48:44 -08:00
terence tsao
d64e10a337 Interop 2022-12-06 16:06:18 -08:00
terence tsao
1e9ee10674 Merge branch 'better-validator-rpcs' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 15:14:20 -08:00
terence tsao
3ac395b39e Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:42:49 -08:00
Justin Traglia
6e26a6f128 Replace LastWithdrawalValidatorIndex to updated name (#11725) 2022-12-06 14:40:10 -08:00
Justin Traglia
b512b92a8a Update withdrawal error message to reflect new field name (#11724) 2022-12-06 14:39:17 -08:00
terence tsao
5ff601a1b9 Sync with latest go-ethereum changes 2022-12-06 14:38:14 -08:00
terence tsao
5823054519 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:08:00 -08:00
terence tsao
3d196662bc Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-06 14:06:30 -08:00
rkapka
b0601580ef Merge branch 'reconstruct-capella-block' into capella 2022-12-06 22:16:59 +01:00
rkapka
c1f29ea651 remove logs 2022-12-06 22:11:35 +01:00
rkapka
881d1d435a logs 2022-12-06 21:46:41 +01:00
rkapka
d1aae0c941 Merge branch 'capella' into reconstruct-capella-block 2022-12-06 21:26:55 +01:00
terence tsao
468cc23876 Fix interop 2022-12-04 08:40:41 -08:00
terence tsao
d9646a9183 Add builder paths 2022-12-03 07:29:46 -08:00
terence tsao
279cee42f1 Refactor block proposal path 2022-12-02 16:13:01 -08:00
terence tsao
57bdb907cc Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-12-02 11:10:39 -08:00
rkapka
15d683c78f Merge branch 'capella' into reconstruct-capella-block 2022-12-02 16:43:56 +01:00
rkapka
bf6c8ced7d working 2022-12-02 16:37:24 +01:00
Potuz
78fb685027 Check BLS changes when requesting from the pool 2022-12-02 10:14:39 -03:00
terence tsao
a87536eba0 Fix minimal spec test 2022-12-01 15:26:18 -08:00
terence tsao
3f05395a00 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-12-01 14:52:49 -08:00
terence tsao
85fc57d41e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-12-01 14:45:31 -08:00
terence tsao
1e5976d5ce Mainnet spec tests passing 2022-12-01 14:41:34 -08:00
Potuz
98c0b23350 broadcast BLS changes 2022-12-01 11:26:56 -03:00
terence tsao
039a0fffba Merge branch 'develop' of github.com:prysmaticlabs/prysm into capella 2022-11-30 12:09:47 -08:00
Potuz
90ec640e7a Fix capella operations spectests 2022-11-30 15:35:55 -03:00
Potuz
10acd31d25 check on verify instead of sig 2022-11-30 15:20:23 -03:00
terence tsao
4224014fad Add 4844 block and state 2022-11-30 10:14:35 -08:00
Potuz
df1e8b33d8 BLS Change signature verification 2022-11-30 14:52:31 -03:00
rkapka
cdb4ee42cc not working 2022-11-30 18:22:52 +01:00
rkapka
d29baec77e proper error handling in BuildSignedBeaconBlockFromExecutionPayload 2022-11-30 15:32:57 +01:00
terence tsao
53c189da9b Merge branch 'update-eip4844-objs' into eip4844 2022-11-29 21:16:31 -08:00
terence tsao
277fbce61b Update eip4844 objects 2022-11-29 21:15:14 -08:00
terence tsao
0adc54b7ff Refactor get payload 2022-11-29 12:02:47 -08:00
Potuz
1cbd7e9888 withdraw by default 2022-11-29 13:52:34 -03:00
rkapka
0a9e1658dd move stuff to blinded_blocks.go 2022-11-29 17:36:52 +01:00
rkapka
31d4a4cd11 test other functions 2022-11-29 17:36:52 +01:00
rkapka
fbc4e73d31 refactor and test GetSSZBlockV2 2022-11-29 17:36:52 +01:00
rkapka
c1d4eaa79d refactor and test GetBlockV2 2022-11-29 17:36:52 +01:00
rkapka
760af6428e update ssz 2022-11-29 17:36:52 +01:00
terence tsao
dfa0ccf626 Fix attribute pb nil checks 2022-11-29 07:27:33 -08:00
terence tsao
7a142cf324 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-29 07:09:41 -08:00
rkapka
1a51fdbd58 update withdrawals proto 2022-11-29 14:23:11 +01:00
terencechain
368a99ec8d Fix nil attribute for capella branch (#11701) 2022-11-28 17:34:26 -08:00
terence tsao
1c7e734918 Fix some blockchain tests 2022-11-28 13:59:17 -08:00
rkapka
764d1325bf Merge remote-tracking branch 'origin/capella' into capella 2022-11-28 21:07:00 +01:00
rkapka
0cf30e9022 Merge branch '__develop' into capella 2022-11-28 21:06:20 +01:00
Potuz
227b20f368 fix nil block from stream 2022-11-28 16:53:11 -03:00
rkapka
d7d70bc25b support SSZ lol
(cherry picked from commit 52bc2c8d617ac3e1254c493fa053cdce4a1ebd63)
2022-11-28 20:19:24 +01:00
rkapka
82f6ddb693 add Capella version
(cherry picked from commit 5d6fd0bbe663e5dd16df5b2e773f68982bbcd24e)
2022-11-28 20:19:19 +01:00
rkapka
9e4e82d2c5 refactor GetBlindedBlockSSZ
(cherry picked from commit 97483c339f99b0d96bd81846a979383ffd2b0cda)

# Conflicts:
#	beacon-chain/rpc/eth/beacon/blocks.go
2022-11-28 20:19:15 +01:00
rkapka
9838369fe9 fix proto generation 2022-11-28 20:15:05 +01:00
rkapka
6085ad1bfa fix issues 2022-11-28 20:07:37 +01:00
rkapka
d3851b27df Merge branch '__develop' into capella
# Conflicts:
#	beacon-chain/rpc/apimiddleware/structs.go
#	beacon-chain/rpc/eth/beacon/blocks.go
#	proto/eth/v2/BUILD.bazel
#	proto/eth/v2/beacon_block.pb.go
#	proto/eth/v2/beacon_block.proto
#	proto/eth/v2/generated.ssz.go
#	proto/migration/v1alpha1_to_v2.go
#	proto/prysm/v1alpha1/beacon_chain.pb.go
#	proto/prysm/v1alpha1/beacon_chain.proto
2022-11-28 19:31:33 +01:00
Potuz
d6100dfdcb fix spectest 2022-11-28 12:38:14 -03:00
Potuz
c2144dac86 Add BLSToExecutionChangge endpoint 2022-11-27 23:00:16 -03:00
Potuz
a47ff569a8 Add Submit BLSChange endpoint 2022-11-27 20:50:21 -03:00
Potuz
f8be022ef2 Merge branch 'develop' into capella 2022-11-27 20:40:41 -03:00
Potuz
4f39e6b685 Implement REST block API endpoints 2022-11-27 16:20:30 -03:00
Potuz
c67b000633 add test 2022-11-27 13:07:50 -03:00
Potuz
02f7443586 Refactor Sync Committee Rewards 2022-11-27 09:04:03 -03:00
terence tsao
6275e7df4e Clean up execution engine 2022-11-25 17:20:28 -08:00
terencechain
1b6b52fda1 Add PayloadAttribute superset and use it for engine-api (#11691) 2022-11-25 17:12:55 -08:00
Potuz
5fa1fd84b9 Hook the BLSTOExecution Pool to the proposer 2022-11-25 09:33:17 -03:00
nisdas
bd0c9f9e8d fix 2022-11-25 09:06:59 -03:00
Potuz
2532bb370c Merge branch 'develop' into capella 2022-11-25 08:16:32 -03:00
nisdas
12efc6c2c1 make it reject 2022-11-24 22:21:12 +08:00
nisdas
a6cc9ac9c5 add sig validation 2022-11-24 22:19:55 +08:00
nisdas
031f5845a2 add gossip handler for bls change object 2022-11-24 21:22:18 +08:00
nisdas
b88559726c Merge branch 'develop' of https://github.com/prysmaticlabs/geth-sharding into capella 2022-11-24 20:05:41 +08:00
terence tsao
ca6ddf4490 Add and use send blocks and sidecars requests 2022-11-23 16:11:06 -08:00
terence tsao
3ebb2fce94 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-23 14:21:45 -08:00
nisdas
62f6b07cba fix gossip registration 2022-11-23 20:10:44 +08:00
terence tsao
f956f1ed6e Handle capella version for packing atts 2022-11-22 17:21:21 -08:00
terence tsao
1c0fa95053 Fix forkchoice test 2022-11-22 17:09:34 -08:00
terence tsao
04bf4a1060 Clean up 2022-11-22 15:38:24 -08:00
terence tsao
ae276fd371 Add mainnet spec tests 2022-11-22 10:53:47 -08:00
terence tsao
104bdaed12 Merge branch 'capella' of github.com:prysmaticlabs/prysm into eip4844 2022-11-22 09:43:43 -08:00
terence tsao
089a5d6ac2 Migrate from geth's kzg lib to go-kzg/eth (Thanks @roberto-bayardo) 2022-11-22 08:51:45 -08:00
Potuz
16b0820193 Merge branch 'develop' into capella 2022-11-22 13:43:03 -03:00
Potuz
4b02267e96 add more minimal fixes 2022-11-22 13:34:33 -03:00
Potuz
746584c453 fix missing minimal test 2022-11-22 13:34:33 -03:00
Potuz
b56daaaca2 Fix empty withdrawals slice 2022-11-22 10:38:42 -03:00
terence tsao
b7a6fe88ee Update block and sidecar gossip conditions 2022-11-21 14:38:44 -08:00
terence tsao
22d1c37b92 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-20 19:22:30 -08:00
terence tsao
78a393f825 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-18 13:48:40 -08:00
terence tsao
ac8290c1bf Port over shared kzg functions and updated trusted setup 2022-11-18 10:55:35 -08:00
terence tsao
5d0662b415 Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-18 10:00:32 -08:00
terence tsao
931e5e10c3 Fix mainnet fork transition tests 2022-11-16 08:49:06 -05:00
Potuz
c172f838b1 Mark capella fields as dirty 2022-11-15 09:58:34 -05:00
Potuz
c07ae29cd9 move MaxWithdrawalsPerPayload to fieldparams 2022-11-14 22:59:31 -05:00
Potuz
214c9bfd8b fix bls_to_execution_changes 2022-11-14 16:41:17 -05:00
Potuz
716140d64d add bls_to_execution_change tests 2022-11-14 16:26:39 -05:00
Potuz
088cb4ef59 fix expected_withdrawals 2022-11-14 14:50:41 -05:00
terence tsao
fa33e93a8e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-14 08:53:15 -05:00
Potuz
d1472fc351 Add withdrawals operations tests 2022-11-13 08:54:59 -03:00
terence tsao
5c8c0c31d8 Merge branch 'capella-withdrawal-minimal' into capella 2022-11-12 22:06:43 -08:00
terence tsao
7f3c00c7a2 Can build 2022-11-12 22:06:22 -08:00
terencechain
c180dab791 Merge branch 'develop' into capella 2022-11-12 18:28:18 -08:00
terence tsao
f24acc21c7 Fix bazel 2022-11-12 17:43:19 -08:00
terence tsao
40b637849d Fix miminal capella tests 2022-11-12 17:26:05 -08:00
terence tsao
e7db1685df Add mainnet capella tests 2022-11-12 17:13:26 -08:00
terence tsao
eccbfd1011 Add shared capella spec tests helpers 2022-11-12 17:13:16 -08:00
terence tsao
90211f6769 Fix prev epoch attested precompute 2022-11-12 17:12:29 -08:00
terence tsao
edc32ac18e Fix slashing quotient 2022-11-12 17:12:04 -08:00
terence tsao
fe68e020e3 Add selector with minimal withdrawal size 2022-11-12 16:18:16 -08:00
terence tsao
81e1e3544d Add mainnet ssz vectors 2022-11-12 15:50:07 -08:00
Potuz
09372d5c35 Revert "added mainnet ssz tests"
This reverts commit 078a89e4ca.
2022-11-12 18:40:28 -03:00
Potuz
078a89e4ca added mainnet ssz tests 2022-11-12 18:39:11 -03:00
Potuz
dbc6ae26a6 Add minimal support for capella spec tests
Fixed many issues about hashing
Added fork typing in the state replayer
2022-11-12 18:18:35 -03:00
Potuz
b6f429867a Merge branch 'develop' into capella 2022-11-12 16:35:20 -03:00
Potuz
09f50660ce Merge branch 'develop' into capella 2022-11-12 11:51:06 -03:00
Potuz
189825b495 fix withdrawal hashing 2022-11-11 23:21:41 -03:00
terence tsao
441cad58d4 Merge commit 'e03de47db7b782bdf7dc8d9b42749eb2a236cdea' into eip4844 2022-11-11 09:18:57 -08:00
terence tsao
1277d08f9e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-11 08:46:39 -08:00
terence tsao
e03de47db7 Ops, wrong branch 2022-11-11 08:45:22 -08:00
Potuz
764b7ff610 Don't build capella payload twice 2022-11-11 10:47:29 -03:00
terence tsao
307be7694e Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-09 16:44:34 -08:00
terence tsao
c76ae1ef39 Add beacon_block_and_blobs_sidecar gossip 2022-11-09 16:43:26 -08:00
Potuz
d499db7f0e Merge branch 'develop' into capella 2022-11-09 21:27:18 -03:00
terence tsao
a894b9f29a Merge branch 'develop' of github.com:prysmaticlabs/prysm into eip4844 2022-11-09 14:49:18 -08:00
terence tsao
902e6b3905 Update to use latest kzg library 2022-11-09 10:17:27 -08:00
Potuz
ed2d1c7bf9 Merge branch 'develop' into capella 2022-11-09 09:14:55 -03:00
nisdas
14b73cbd47 register flag 2022-11-09 08:48:36 +08:00
terence tsao
a39c7aa864 Add rpc blobs sidecars by range 2022-11-07 14:53:12 -08:00
terence tsao
170bc9c8ec Network changes 2022-11-07 14:43:46 -08:00
terence tsao
365c01fc29 Add tests 2022-11-07 06:58:24 -08:00
Potuz
3124785a08 Merge branch 'develop' into capella 2022-11-07 10:08:09 -03:00
Potuz
60e6306107 working withrawals initial commits 2022-11-06 16:53:57 -03:00
terence tsao
42ccb7830a Add Capella DB changes 2022-11-06 15:25:34 -03:00
Potuz
0bb03b9292 fix marshalling and engine calls 2022-11-06 15:24:22 -03:00
nisdas
ed6fbf1480 stupid bug 2022-11-07 00:18:01 +08:00
nisdas
477cec6021 wei it 2022-11-07 00:13:12 +08:00
nisdas
924500d111 add unmarshal 2022-11-06 23:57:37 +08:00
Potuz
0677504ef1 Revert "proposer changes"
This reverts commit ca2a7c4d9c.
2022-11-06 12:16:53 -03:00
Potuz
ca2a7c4d9c proposer changes 2022-11-06 12:14:17 -03:00
Potuz
28606629ad marshalling stub 2022-11-06 12:03:25 -03:00
Potuz
c817279464 fix capella payload 2022-11-06 11:14:39 -03:00
Potuz
009d6ed8ed proposer logic 2022-11-06 10:49:32 -03:00
Potuz
5cec1282a9 FCU two versions 2022-11-06 10:22:45 -03:00
Potuz
340170fd29 propose block V2 2022-11-06 09:42:20 -03:00
Potuz
7ed0cc139a marshalling first attempt 2022-11-06 07:47:43 -03:00
Potuz
2c822213eb rpc changes 2022-11-06 00:24:56 -03:00
terence tsao
0894b9591c Add Capella DB changes 2022-11-05 13:27:18 -07:00
terence tsao
f0ca45f9a2 Json marshal and unmarshal blob bundle 2022-11-05 12:45:14 -07:00
terence tsao
afc48c6485 State change and config changes for Capella 2022-11-05 12:45:01 -07:00
terence tsao
93dce8a0cb P2p changes for Capella fork 2022-11-05 12:44:46 -07:00
terence tsao
149ccdaf39 Add engine call for get payload 2022-11-05 11:00:13 -07:00
Potuz
c08bb39ffe add fork versions 2022-11-05 13:38:11 -03:00
Potuz
5083d8ab34 propose capella blocks 2022-11-05 12:38:27 -03:00
Potuz
7552a5dd07 capella fork logic 2022-11-05 07:33:20 -03:00
Potuz
c93d68f853 Capella state transition 2022-11-05 06:06:15 -03:00
terence tsao
2b74db2dce Add blob database methods 2022-11-04 13:46:01 -07:00
terence tsao
cc6c91415d Add validate blobs sidecar 2022-11-04 13:06:15 -07:00
terence tsao
6d7d7e0adc Add excessive blobs to execution payload 2022-11-04 09:48:39 -07:00
terence tsao
2105d777f0 Add blobs kzg to Capella beacon block 2022-11-04 09:33:01 -07:00
terence tsao
14338afbdb Update go.mod 2022-11-04 08:52:16 -07:00
Potuz
3e8aa4023d Fix config test and export method 2022-11-04 11:59:49 -03:00
Potuz
b443875e66 Implement get_expected_withdrawals 2022-11-04 11:45:45 -03:00
510 changed files with 27954 additions and 11289 deletions

178
.bazelrc
View File

@@ -1,9 +1,9 @@
# Print warnings for tests with inappropriate test size or timeout.
test --test_verbose_timeout_warnings
# Only build test targets when running bazel test //...
test --build_tests_only
test --test_output=errors
# Import bazelrc presets
import %workspace%/build/bazelrc/convenience.bazelrc
import %workspace%/build/bazelrc/correctness.bazelrc
import %workspace%/build/bazelrc/cross.bazelrc
import %workspace%/build/bazelrc/debug.bazelrc
import %workspace%/build/bazelrc/performance.bazelrc
# E2E run with debug gotag
test:e2e --define gotags=debug
@@ -11,21 +11,9 @@ test:e2e --define gotags=debug
# Clearly indicate that coverage is enabled to disable certain nogo checks.
coverage --define=coverage_enabled=1
# Fix for rules_docker. See: https://github.com/bazelbuild/rules_docker/issues/842
build --host_force_python=PY2
run --host_force_python=PY2
# Networking is blocked for tests by default, add "requires-network" tag to your test if networking
# is required within the sandbox. Network sandboxing only works on linux.
build --sandbox_default_allow_network=false
# Stamp binaries with git information
build --workspace_status_command=./hack/workspace_status.sh
# Prevent PATH changes from rebuilding when switching from IDE to command line.
build --incompatible_strict_action_env
run --incompatible_strict_action_env
build --define blst_disabled=false
run --define blst_disabled=false
@@ -68,42 +56,6 @@ build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
build:cgo_symbolizer -c dbg
build:cgo_symbolizer --define=gotags=cgosymbolizer_enabled
# multi-arch cross-compiling toolchain configs:
-----------------------------------------------
build:cross --crosstool_top=@prysm_toolchains//:multiarch_toolchain
build:cross --host_platform=@io_bazel_rules_go//go/toolchain:linux_amd64
build:cross --host_crosstool_top=@prysm_toolchains//:hostonly_toolchain
# linux_amd64 config for cross compiler toolchain, not strictly necessary since host/exec env is amd64
build:linux_amd64 --platforms=@io_bazel_rules_go//go/toolchain:linux_amd64_cgo
# osx_amd64 config for cross compiler toolchain
build:osx_amd64 --config=cross
build:osx_amd64 --platforms=@io_bazel_rules_go//go/toolchain:darwin_amd64_cgo
build:osx_amd64 --compiler=osxcross
# osx_arm64 config for cross compiler toolchain
build:osx_arm64 --config=cross
build:osx_arm64 --platforms=@io_bazel_rules_go//go/toolchain:darwin_arm64_cgo
build:osx_arm64 --compiler=osxcross
build:osx_arm64 --cpu=aarch64
# windows
build:windows_amd64 --config=cross
build:windows_amd64 --platforms=@io_bazel_rules_go//go/toolchain:windows_amd64_cgo
build:windows_amd64 --compiler=mingw-w64
# linux_arm64 conifg for cross compiler toolchain
build:linux_arm64 --config=cross
build:linux_arm64 --platforms=@io_bazel_rules_go//go/toolchain:linux_arm64_cgo
build:linux_arm64 --copt=-funsafe-math-optimizations
build:linux_arm64 --copt=-ftree-vectorize
build:linux_arm64 --copt=-fomit-frame-pointer
build:linux_arm64 --cpu=aarch64
build:linux_arm64 --compiler=clang
build:linux_arm64 --copt=-march=armv8-a
# toolchain build debug configs
#------------------------------
build:debug --sandbox_debug
@@ -111,123 +63,5 @@ build:debug --toolchain_resolution_debug
build:debug --verbose_failures
build:debug -s
# windows debug
build:windows_amd64_debug --config=windows_amd64
build:windows_amd64_debug --config=debug
# osx_amd64 debug config
build:osx_amd64_debug --config=debug
build:osx_amd64_debug --config=osx_amd64
# osx_arm64 debug config
build:osx_arm64_debug --config=debug
build:osx_arm64_debug --config=osx_arm64
# linux_arm64_debug
build:linux_arm64_debug --config=linux_arm64
build:linux_arm64_debug --config=debug
# linux_amd64_debug
build:linux_amd64_debug --config=linux_amd64
build:linux_amd64_debug --config=debug
# Docker Sandbox Configs
#-----------------------
# Note all docker sandbox configs must run from a linux x86_64 host
# build:docker-sandbox --experimental_docker_image=gcr.io/prysmaticlabs/rbe-worker:latest
build:docker-sandbox --spawn_strategy=docker --strategy=Javac=docker --genrule_strategy=docker
build:docker-sandbox --define=EXECUTOR=remote
build:docker-sandbox --experimental_docker_verbose
build:docker-sandbox --experimental_enable_docker_sandbox
build:docker-sandbox --crosstool_top=@rbe_ubuntu_clang//cc:toolchain
build:docker-sandbox --host_javabase=@rbe_ubuntu_clang//java:jdk
build:docker-sandbox --javabase=@rbe_ubuntu_clang//java:jdk
build:docker-sandbox --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:docker-sandbox --java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:docker-sandbox --extra_execution_platforms=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --host_platform=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --platforms=@rbe_ubuntu_clang//config:platform
build:docker-sandbox --extra_toolchains=@prysm_toolchains//:cc-toolchain-multiarch
# windows_amd64 docker sandbox build config
build:windows_amd64_docker --config=docker-sandbox --config=windows_amd64
build:windows_amd64_docker_debug --config=windows_amd64_docker --config=debug
# osx_amd64 docker sandbox build config
build:osx_amd64_docker --config=docker-sandbox --config=osx_amd64
build:osx_amd64_docker_debug --config=osx_amd64_docker --config=debug
# osx_arm64 docker sandbox build config
build:osx_arm64_docker --config=docker-sandbox --config=osx_arm64
build:osx_arm64_docker_debug --config=osx_arm64_docker --config=debug
# linux_arm64 docker sandbox build config
build:linux_arm64_docker --config=docker-sandbox --config=linux_arm64
build:linux_arm64_docker_debug --config=linux_arm64_docker --config=debug
# linux_amd64 docker sandbox build config
build:linux_amd64_docker --config=docker-sandbox --config=linux_amd64
build:linux_amd64_docker_debug --config=linux_amd64_docker --config=debug
# Remote Build Execution
#-----------------------
# Originally from https://github.com/bazelbuild/bazel-toolchains/blob/master/bazelrc/bazel-2.0.0.bazelrc
#
# Depending on how many machines are in the remote execution instance, setting
# this higher can make builds faster by allowing more jobs to run in parallel.
# Setting it too high can result in jobs that timeout, however, while waiting
# for a remote machine to execute them.
build:remote --jobs=50
# Set several flags related to specifying the platform, toolchain and java
# properties.
# These flags should only be used as is for the rbe-ubuntu16-04 container
# and need to be adapted to work with other toolchain containers.
build:remote --host_javabase=@rbe_ubuntu_clang//java:jdk
build:remote --javabase=@rbe_ubuntu_clang//java:jdk
build:remote --host_java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:remote --java_toolchain=@bazel_tools//tools/jdk:toolchain_hostjdk8
build:remote --crosstool_top=@rbe_ubuntu_clang//cc:toolchain
build:remote --action_env=BAZEL_DO_NOT_DETECT_CPP_TOOLCHAIN=1
# Platform flags:
# The toolchain container used for execution is defined in the target indicated
# by "extra_execution_platforms", "host_platform" and "platforms".
# More about platforms: https://docs.bazel.build/versions/master/platforms.html
build:remote --extra_toolchains=@rbe_ubuntu_clang//config:cc-toolchain
build:remote --extra_execution_platforms=@rbe_ubuntu_clang//config:platform
build:remote --host_platform=@rbe_ubuntu_clang//config:platform
build:remote --platforms=@rbe_ubuntu_clang//config:platform
# Starting with Bazel 0.27.0 strategies do not need to be explicitly
# defined. See https://github.com/bazelbuild/bazel/issues/7480
build:remote --define=EXECUTOR=remote
# Enable remote execution so actions are performed on the remote systems.
# build:remote --remote_executor=grpcs://remotebuildexecution.googleapis.com
# Enforce stricter environment rules, which eliminates some non-hermetic
# behavior and therefore improves both the remote cache hit rate and the
# correctness and repeatability of the build.
build:remote --incompatible_strict_action_env=true
# Set a higher timeout value, just in case.
build:remote --remote_timeout=3600
# Enable authentication. This will pick up application default credentials by
# default. You can use --google_credentials=some_file.json to use a service
# account credential instead.
# build:remote --google_default_credentials=true
# Enable build without the bytes
# See: https://github.com/bazelbuild/bazel/issues/6862
build:remote --experimental_remote_download_outputs=toplevel --experimental_inmemory_jdeps_files --experimental_inmemory_dotd_files
build:remote --remote_local_fallback
# Ignore GoStdLib with remote caching
build --modify_execution_info='GoStdlib.*=+no-remote-cache'
# Set bazel gotag
build --define gotags=bazel

View File

@@ -21,7 +21,7 @@ linters:
linters-settings:
gocognit:
# TODO: We should target for < 50
min-complexity: 65
min-complexity: 100
output:
print-issued-lines: true

View File

@@ -3,16 +3,6 @@ workspace(name = "prysm")
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
http_archive(
name = "bazel_toolchains",
sha256 = "8e0633dfb59f704594f19ae996a35650747adc621ada5e8b9fb588f808c89cb0",
strip_prefix = "bazel-toolchains-3.7.0",
urls = [
"https://mirror.bazel.build/github.com/bazelbuild/bazel-toolchains/releases/download/3.7.0/bazel-toolchains-3.7.0.tar.gz",
"https://github.com/bazelbuild/bazel-toolchains/releases/download/3.7.0/bazel-toolchains-3.7.0.tar.gz",
],
)
http_archive(
name = "com_grail_bazel_toolchain",
sha256 = "b210fc8e58782ef171f428bfc850ed7179bdd805543ebd1aa144b9c93489134f",
@@ -39,10 +29,6 @@ load("@prysm//tools/cross-toolchain:prysm_toolchains.bzl", "configure_prysm_tool
configure_prysm_toolchains()
load("@prysm//tools/cross-toolchain:rbe_toolchains_config.bzl", "rbe_toolchains_config")
rbe_toolchains_config()
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
@@ -76,9 +62,8 @@ http_archive(
http_archive(
name = "io_bazel_rules_docker",
sha256 = "1f4e59843b61981a96835dc4ac377ad4da9f8c334ebe5e0bb3f58f80c09735f4",
strip_prefix = "rules_docker-0.19.0",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.19.0/rules_docker-v0.19.0.tar.gz"],
sha256 = "b1e80761a8a8243d03ebca8845e9cc1ba6c82ce7c5179ce2b295cd36f7e394bf",
urls = ["https://github.com/bazelbuild/rules_docker/releases/download/v0.25.0/rules_docker-v0.25.0.tar.gz"],
)
http_archive(
@@ -122,32 +107,36 @@ load(
"container_pull",
)
# Pulled gcr.io/distroless/cc-debian11:latest on 2022-02-23
container_pull(
name = "cc_image_base",
digest = "sha256:41036fc7ed8df0f6addc18484cef0c94a85867508967789f947e11ffd5ff0cc8",
name = "cc_image_base_amd64",
digest = "sha256:2a0daf90a7deb78465bfca3ef2eee6e91ce0a5706059f05d79d799a51d339523",
registry = "gcr.io",
repository = "distroless/cc",
repository = "distroless/cc-debian11",
)
# Pulled gcr.io/distroless/cc-debian11:debug on 2022-02-23
container_pull(
name = "cc_debug_image_base",
digest = "sha256:6865ad48467c89c3c3524d4c426f52ad12d9ab7dec31fad31fae69da40eb6445",
name = "cc_debug_image_base_amd64",
digest = "sha256:7bd596f5f200588f13a69c268eea6ce428b222b67cd7428d6a7fef95e75c052a",
registry = "gcr.io",
repository = "distroless/cc",
repository = "distroless/cc-debian11",
)
# Pulled from gcr.io/distroless/base-debian11:latest on 2022-02-23
container_pull(
name = "go_image_base",
digest = "sha256:b9b124f955961599e72630654107a0cf04e08e6fa777fa250b8f840728abd770",
name = "go_image_base_amd64",
digest = "sha256:34e682800774ecbd0954b1663d90238505f1ba5543692dbc75feef7dd4839e90",
registry = "gcr.io",
repository = "distroless/base",
repository = "distroless/base-debian11",
)
# Pulled from gcr.io/distroless/base-debian11:debug on 2022-02-23
container_pull(
name = "go_debug_image_base",
digest = "sha256:65668d2b78d25df3d8ccf5a778d017fcaba513b52078c700083eaeef212b9979",
name = "go_debug_image_base_amd64",
digest = "sha256:0f503c6bfd207793bc416f20a35bf6b75d769a903c48f180ad73f60f7b60d7bd",
registry = "gcr.io",
repository = "distroless/base",
repository = "distroless/base-debian11",
)
container_pull(
@@ -282,9 +271,9 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "82b01a48b143fe0f2fb7fb5f5dd385c1f934335a12d7954f08b1d45d77427b5e",
strip_prefix = "eth2-networks-674f7a1d01d9c18345456eab76e3871b3df2126b",
url = "https://github.com/eth-clients/eth2-networks/archive/674f7a1d01d9c18345456eab76e3871b3df2126b.tar.gz",
sha256 = "2701e1e1a3ec10c673fe7dbdbbe6f02c8ae8c922aebbf6e720d8c72d5458aafe",
strip_prefix = "eth2-networks-7b4897888cebef23801540236f73123e21774954",
url = "https://github.com/eth-clients/eth2-networks/archive/7b4897888cebef23801540236f73123e21774954.tar.gz",
)
http_archive(

View File

@@ -103,8 +103,8 @@ func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, er
}
log.Printf("BeaconState slot=%d, Block slot=%d", s.Slot(), b.Block().Slot())
log.Printf("BeaconState htr=%#xd, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#xd, block htr=%#x", br, realBlockRoot)
log.Printf("BeaconState htr=%#x, Block state_root=%#x", sr, b.Block().StateRoot())
log.Printf("BeaconState latest_block_header htr=%#x, block htr=%#x", br, realBlockRoot)
return &OriginData{
st: s,
b: b,

View File

@@ -467,7 +467,7 @@ func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, e
version := bytesutil.ToBytes4(vSlice)
ofs = append(ofs, forks.ForkScheduleEntry{
Version: version,
Epoch: primitives.Epoch(uint64(epoch)),
Epoch: primitives.Epoch(epoch),
})
}
sort.Sort(ofs)

View File

@@ -45,6 +45,7 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/kv:go_default_library",

View File

@@ -33,7 +33,7 @@ type ChainInfoFetcher interface {
// HeadUpdater defines a common interface for methods in blockchain service
// which allow to update the head info
type HeadUpdater interface {
UpdateHead(context.Context) error
UpdateHead(context.Context, primitives.Slot)
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -84,7 +84,7 @@ type FinalizationFetcher interface {
FinalizedCheckpt() *ethpb.Checkpoint
CurrentJustifiedCheckpt() *ethpb.Checkpoint
PreviousJustifiedCheckpt() *ethpb.Checkpoint
VerifyFinalizedBlkDescendant(ctx context.Context, blockRoot [32]byte) error
InForkchoice([32]byte) bool
IsFinalized(ctx context.Context, blockRoot [32]byte) bool
}
@@ -96,24 +96,32 @@ type OptimisticModeFetcher interface {
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
cp := s.ForkChoicer().FinalizedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// PreviousJustifiedCheckpt returns the current justified checkpoint from chain store.
func (s *Service) PreviousJustifiedCheckpt() *ethpb.Checkpoint {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
cp := s.ForkChoicer().PreviousJustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// CurrentJustifiedCheckpt returns the current justified checkpoint from chain store.
func (s *Service) CurrentJustifiedCheckpt() *ethpb.Checkpoint {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
cp := s.ForkChoicer().JustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
// BestJustifiedCheckpt returns the best justified checkpoint from store.
func (s *Service) BestJustifiedCheckpt() *ethpb.Checkpoint {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
cp := s.ForkChoicer().BestJustifiedCheckpoint()
return &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
}
@@ -277,6 +285,8 @@ func (s *Service) CurrentFork() *ethpb.Fork {
// IsCanonical returns true if the input block root is part of the canonical chain.
func (s *Service) IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error) {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
// If the block has not been finalized, check fork choice store to see if the block is canonical
if s.cfg.ForkChoiceStore.HasNode(blockRoot) {
return s.cfg.ForkChoiceStore.IsCanonical(blockRoot), nil
@@ -289,6 +299,8 @@ func (s *Service) IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, er
// ChainHeads returns all possible chain heads (leaves of fork choice tree).
// Heads roots and heads slots are returned.
func (s *Service) ChainHeads() ([][32]byte, []primitives.Slot) {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
return s.cfg.ForkChoiceStore.Tips()
}
@@ -330,6 +342,8 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
headRoot := s.head.root
s.headLock.RUnlock()
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(headRoot)
if err == nil {
return optimistic, nil
@@ -345,6 +359,8 @@ func (s *Service) IsOptimistic(ctx context.Context) (bool, error) {
// IsFinalized returns true if the input root is finalized.
// It first checks latest finalized root then checks finalized root index in DB.
func (s *Service) IsFinalized(ctx context.Context, root [32]byte) bool {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
if s.ForkChoicer().FinalizedCheckpoint().Root == root {
return true
}
@@ -356,10 +372,21 @@ func (s *Service) IsFinalized(ctx context.Context, root [32]byte) bool {
return s.cfg.BeaconDB.IsFinalizedBlock(ctx, root)
}
// InForkchoice returns true if the given root is found in forkchoice
// This in particular means that the blockroot is a descendant of the
// finalized checkpoint
func (s *Service) InForkchoice(root [32]byte) bool {
s.ForkChoicer().RLock()
defer s.ForkChoicer().RUnlock()
return s.ForkChoicer().HasNode(root)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {
s.ForkChoicer().RLock()
optimistic, err := s.cfg.ForkChoiceStore.IsOptimistic(root)
s.ForkChoicer().RUnlock()
if err == nil {
return optimistic, nil
}

View File

@@ -26,7 +26,7 @@ var (
// errWSBlockNotFoundInEpoch is returned when a block is not found in the WS cache or DB within epoch.
errWSBlockNotFoundInEpoch = errors.New("weak subjectivity root not found in db within epoch")
// errNotDescendantOfFinalized is returned when a block is not a descendant of the finalized checkpoint
errNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
ErrNotDescendantOfFinalized = invalidBlock{error: errors.New("not descendant of finalized checkpoint")}
)
// An invalid block is the block that fails state transition based on the core protocol rules.


@@ -301,7 +301,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var attr payloadattribute.Attributer
switch st.Version() {
case version.Capella:
case version.Capella, version.Deneb:
withdrawals, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")


@@ -2,14 +2,22 @@ package blockchain
import (
"context"
"fmt"
"time"
"github.com/pkg/errors"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
)
func (s *Service) isNewProposer() bool {
_, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* root */)
func (s *Service) isNewProposer(slot primitives.Slot) bool {
_, _, ok := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(slot, [32]byte{} /* root */)
return ok
}
@@ -40,12 +48,18 @@ func (s *Service) getStateAndBlock(ctx context.Context, r [32]byte) (state.Beaco
return headState, newHeadBlock, nil
}
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot [32]byte) error {
// forkchoiceUpdateWithExecution is a wrapper around notifyForkchoiceUpdate. It decides whether a new call to FCU should be made.
func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot [32]byte, proposingSlot primitives.Slot) error {
isNewHead := s.isNewHead(newHeadRoot)
if !isNewHead && !s.isNewProposer() {
if !isNewHead {
return nil
}
isNewProposer := s.isNewProposer(proposingSlot)
if isNewProposer && !features.Get().DisableReorgLateBlocks {
if s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return nil
}
}
headState, headBlock, err := s.getStateAndBlock(ctx, newHeadRoot)
if err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
@@ -58,19 +72,56 @@ func (s *Service) forkchoiceUpdateWithExecution(ctx context.Context, newHeadRoot
headBlock: headBlock.Block(),
})
if err != nil {
return err
return errors.Wrap(err, "could not notify forkchoice update")
}
if isNewHead {
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
return err
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
}
// Only need to prune attestations from pool if the head has changed.
if err := s.pruneAttsFromPool(headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return nil
}
// shouldOverrideFCU checks whether the incoming block is still subject to being
// reorged or not by the next proposer.
func (s *Service) shouldOverrideFCU(newHeadRoot [32]byte, proposingSlot primitives.Slot) bool {
headWeight, err := s.ForkChoicer().Weight(newHeadRoot)
if err != nil {
log.WithError(err).WithField("root", fmt.Sprintf("%#x", newHeadRoot)).Warn("could not determine node weight")
}
currentSlot := s.CurrentSlot()
if proposingSlot == currentSlot {
proposerHead := s.ForkChoicer().GetProposerHead()
if proposerHead != newHeadRoot {
return true
}
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
params.BeaconConfig().SecondsPerSlot)
lateBlockFailedAttemptSecondThreshold.Inc()
} else {
if s.ForkChoicer().ShouldOverrideFCU() {
return true
}
secs, err := slots.SecondsSinceSlotStart(currentSlot,
uint64(s.genesisTime.Unix()), uint64(time.Now().Unix()))
if err != nil {
log.WithError(err).Error("could not compute seconds since slot start")
}
if secs >= doublylinkedtree.ProcessAttestationsThreshold {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", newHeadRoot),
"weight": headWeight,
}).Infof("Attempted late block reorg aborted due to attestations at %d seconds",
doublylinkedtree.ProcessAttestationsThreshold)
lateBlockFailedAttemptFirstThreshold.Inc()
}
}
return false
}
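For orientation, a rough, self-contained sketch of which of the two failure counters records an abandoned reorg attempt, as a function of how far into the late block's slot the decision happens. The 12-second slot and the 10-second attestation-counting threshold are assumptions for illustration; the real code reads them from params.BeaconConfig().SecondsPerSlot and doublylinkedtree.ProcessAttestationsThreshold:

package main

import "fmt"

// Assumed illustrative values; see the lead-in above.
const (
    secondsPerSlot               = uint64(12)
    processAttestationsThreshold = uint64(10)
)

// abortCounter sketches which counter would record an abandoned late-block
// reorg attempt, measured in seconds since the late block's slot started.
func abortCounter(secsIntoSlot uint64) string {
    switch {
    case secsIntoSlot >= secondsPerSlot:
        // The proposing slot has already started: an abandoned attempt is
        // recorded by the second-threshold counter.
        return "lateBlockFailedAttemptSecondThreshold"
    case secsIntoSlot >= processAttestationsThreshold:
        // Attestations for the late block have been counted: an abandoned
        // attempt is recorded by the first-threshold counter.
        return "lateBlockFailedAttemptFirstThreshold"
    default:
        return "no abort yet: the late block can still be reorged"
    }
}

func main() {
    fmt.Println(abortCounter(4))  // no abort yet: the late block can still be reorged
    fmt.Println(abortCounter(10)) // lateBlockFailedAttemptFirstThreshold
    fmt.Println(abortCounter(13)) // lateBlockFailedAttemptSecondThreshold
}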


@@ -3,6 +3,7 @@ package blockchain
import (
"context"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/cache"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
@@ -21,10 +22,10 @@ import (
func TestService_isNewProposer(t *testing.T) {
beaconDB := testDB.SetupDB(t)
service := setupBeaconChain(t, beaconDB)
require.Equal(t, false, service.isNewProposer())
require.Equal(t, false, service.isNewProposer(service.CurrentSlot()+1))
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
require.Equal(t, true, service.isNewProposer())
require.Equal(t, true, service.isNewProposer(service.CurrentSlot()+1))
}
func TestService_isNewHead(t *testing.T) {
@@ -75,7 +76,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
service.cfg.ProposerSlotIndexCache = cache.NewProposerPayloadIDsCache()
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, service.headRoot()))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, service.headRoot(), service.CurrentSlot()+1))
hookErr := "could not notify forkchoice update"
invalidStateErr := "could not get state summary: could not find block in DB"
require.LogsDoNotContain(t, hook, invalidStateErr)
@@ -83,7 +84,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
gb, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
require.NoError(t, service.saveInitSyncBlock(ctx, [32]byte{'a'}, gb))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, [32]byte{'a'}))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, [32]byte{'a'}, service.CurrentSlot()+1))
require.LogsContain(t, hook, invalidStateErr)
hook.Reset()
@@ -107,7 +108,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r1))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot()))
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
@@ -124,7 +125,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
state: st,
}
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(2, 1, [8]byte{1}, [32]byte{2})
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r1))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r1, service.CurrentSlot()+1))
require.LogsDoNotContain(t, hook, invalidStateErr)
require.LogsDoNotContain(t, hook, hookErr)
vId, payloadID, has := service.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(2, [32]byte{2})
@@ -134,7 +135,7 @@ func TestService_forkchoiceUpdateWithExecution_exceptionalCases(t *testing.T) {
// Test zero headRoot returns immediately.
headRoot := service.headRoot()
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, [32]byte{}))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, [32]byte{}, service.CurrentSlot()+1))
require.Equal(t, service.headRoot(), headRoot)
}
@@ -184,7 +185,52 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
// Set head to be the same but proposing next slot
service.head.root = r
service.head.block = sb
service.head.state = st
service.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(service.CurrentSlot()+1, 0, [8]byte{}, [32]byte{} /* root */)
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r))
require.NoError(t, service.forkchoiceUpdateWithExecution(ctx, r, service.CurrentSlot()+1))
}
func TestShouldOverrideFCU(t *testing.T) {
hook := logTest.NewGlobal()
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
WithProposerIdsCache(cache.NewProposerPayloadIDsCache()),
}
service, err := NewService(ctx, opts...)
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot) * time.Second))
require.NoError(t, err)
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
require.Equal(t, primitives.Slot(2), service.CurrentSlot())
require.Equal(t, true, service.shouldOverrideFCU(headRoot, 2))
require.LogsDoNotContain(t, hook, "12 seconds")
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 2))
require.LogsContain(t, hook, "12 seconds")
head, err := fcs.Head(ctx)
require.NoError(t, err)
require.Equal(t, headRoot, head)
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 29)
require.Equal(t, true, service.shouldOverrideFCU(parentRoot, 3))
require.LogsDoNotContain(t, hook, "10 seconds")
fcs.SetGenesisTime(uint64(time.Now().Unix()) - 24)
service.SetGenesisTime(time.Now().Add(-time.Duration(2*params.BeaconConfig().SecondsPerSlot+10) * time.Second))
require.Equal(t, false, service.shouldOverrideFCU(parentRoot, 3))
require.LogsContain(t, hook, "10 seconds")
}


@@ -52,6 +52,7 @@ type head struct {
// This saves head info to the local service cache, it also saves the
// new head root to the DB.
// Caller of the method MUST acquire a lock on forkchoice.
func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock interfaces.ReadOnlySignedBeaconBlock, headState state.BeaconState) error {
ctx, span := trace.StartSpan(ctx, "blockChain.saveHead")
defer span.End()
@@ -122,7 +123,7 @@ func (s *Service) saveHead(ctx context.Context, newHeadRoot [32]byte, headBlock
reorgDistance.Observe(float64(dis))
reorgDepth.Observe(float64(dep))
isOptimistic, err := s.IsOptimistic(ctx)
isOptimistic, err := s.ForkChoicer().IsOptimistic(newHeadRoot)
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}


@@ -60,7 +60,20 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
if b.Version() == version.Deneb {
k, err := b.Body().BlobKzgCommitments()
if err != nil {
return err
}
log = log.WithField("blobCount", len(k))
}
}
if b.Version() == version.Deneb {
k, err := b.Body().BlobKzgCommitments()
if err != nil {
return err
}
log = log.WithField("blobCount", len(k))
}
log.Info("Finished applying state transition")
return nil
@@ -96,6 +109,7 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"version": version.String(block.Version()),
}).Info("Synced new block")
}
return nil


@@ -111,6 +111,18 @@ var (
Name: "beacon_reorgs_total",
Help: "Count the number of times beacon chain has a reorg",
})
LateBlockAttemptedReorgCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_late_block_attempted_reorgs",
Help: "Count the number of times a proposer served by this beacon has attempted a late block reorg",
})
lateBlockFailedAttemptFirstThreshold = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_failed_reorg_attempts_first_threshold",
Help: "Count the number of times a proposer served by this beacon attempted a late block reorg but desisted in the first threshold",
})
lateBlockFailedAttemptSecondThreshold = promauto.NewCounter(prometheus.CounterOpts{
Name: "beacon_failed_reorg_attempts_second_threshold",
Help: "Count the number of times a proposer served by this beacon attempted a late block reorg but desisted in the second threshold",
})
saveOrphanedAttCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "saved_orphaned_att_total",
Help: "Count the number of times an orphaned attestation is saved",
@@ -158,10 +170,6 @@ var (
Name: "txs_per_slot_count",
Help: "Count the number of txs per slot",
})
missedPayloadIDFilledCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "missed_payload_id_filled_count",
Help: "",
})
onBlockProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "on_block_processing_milliseconds",
Help: "Total time in milliseconds to complete a call to onBlock()",


@@ -6,7 +6,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1/attestation"
@@ -37,7 +36,7 @@ import (
//
// # Update latest messages for attesting indices
// update_latest_messages(store, indexed_attestation.attesting_indices, attestation)
func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation) error {
func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation, disparity time.Duration) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onAttestation")
defer span.End()
@@ -63,7 +62,7 @@ func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation) error
genesisTime := uint64(s.genesisTime.Unix())
// Verify attestation target is from current epoch or previous epoch.
if err := verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Unix()), tgt); err != nil {
if err := verifyAttTargetEpoch(ctx, genesisTime, uint64(time.Now().Add(disparity).Unix()), tgt); err != nil {
return err
}
@@ -72,11 +71,11 @@ func (s *Service) OnAttestation(ctx context.Context, a *ethpb.Attestation) error
return errors.Wrap(err, "could not verify attestation beacon block")
}
// Note that LMG GHOST and FFG consistency check is ignored because it was performed in sync's validation pipeline:
// Note that LMD GHOST and FFG consistency check is ignored because it was performed in sync's validation pipeline:
// validate_aggregate_proof.go and validate_beacon_attestation.go
// Verify attestations can only affect the fork choice of subsequent slots.
if err := slots.VerifyTime(genesisTime, a.Data.Slot+1, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(genesisTime, a.Data.Slot+1, disparity); err != nil {
return err
}
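For illustration, a small sketch of the wider clock tolerance that the new disparity parameter is meant to carry when attestations are processed shortly before the slot boundary; the 500ms gossip clock disparity and the 2-second reorg window are assumptions taken from the network config and from the reorgLateBlockCountAttestations constant introduced elsewhere in this change:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Assumed values (see lead-in).
    maximumGossipClockDisparity := 500 * time.Millisecond
    reorgLateBlockCountAttestations := 2 * time.Second

    // When late-block reorgs are enabled, UpdateHead widens the tolerance so
    // that attestations for the upcoming slot already pass slots.VerifyTime
    // when processing starts 2 seconds before the slot boundary.
    disparity := maximumGossipClockDisparity + reorgLateBlockCountAttestations
    fmt.Println(disparity) // 2.5s
}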


@@ -8,7 +8,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition"
testDB "github.com/prysmaticlabs/prysm/v3/beacon-chain/db/testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state/stategen"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
@@ -118,7 +117,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := service.OnAttestation(ctx, tt.a)
err := service.OnAttestation(ctx, tt.a, 0)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {
@@ -155,7 +154,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.OnAttestation(ctx, att[0]))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
func TestStore_SaveCheckpointState(t *testing.T) {
@@ -324,86 +323,3 @@ func TestVerifyBeaconBlock_OK(t *testing.T) {
assert.NoError(t, service.verifyBeaconBlock(ctx, d), "Did not receive the wanted error")
}
func TestVerifyFinalizedConsistency_InconsistentRoot_DoublyLinkedTree(t *testing.T) {
ctx := context.Background()
beaconDB := testDB.SetupDB(t)
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b32)
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: 1}))
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b33)
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(r33[:])
require.ErrorContains(t, "Root and finalized store are not consistent", err)
}
func TestVerifyFinalizedConsistency_OK(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 33, [32]byte{'a'}, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
err = service.VerifyFinalizedConsistency(blkRoot[:])
require.NoError(t, err)
}
func TestVerifyFinalizedConsistency_IsCanonical(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsWithDB(t)
service, err := NewService(ctx, opts...)
require.NoError(t, err)
b32 := util.NewBeaconBlock()
b32.Block.Slot = 32
r32, err := b32.Block.HashTreeRoot()
require.NoError(t, err)
b33 := util.NewBeaconBlock()
b33.Block.Slot = 33
b33.Block.ParentRoot = r32[:]
r33, err := b33.Block.HashTreeRoot()
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, b32.Block.Slot, r32, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, b33.Block.Slot, r33, r32, params.BeaconConfig().ZeroHash, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
jc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r32}
bState, _ := util.DeterministicGenesisState(t, 10)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, bState, r32))
require.NoError(t, service.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(ctx, jc))
_, err = service.cfg.ForkChoiceStore.Head(ctx)
require.NoError(t, err)
err = service.VerifyFinalizedConsistency(r33[:])
require.NoError(t, err)
}


@@ -107,6 +107,11 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
return err
}
// Verify that the parent block is in forkchoice
if !s.ForkChoicer().HasNode(b.ParentRoot()) {
return ErrNotDescendantOfFinalized
}
// Save current justified and finalized epochs for future use.
currStoreJustifiedEpoch := s.ForkChoicer().JustifiedCheckpoint().Epoch
currStoreFinalizedEpoch := s.ForkChoicer().FinalizedCheckpoint().Epoch
@@ -209,7 +214,7 @@ func (s *Service) onBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
}
newBlockHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
if err := s.forkchoiceUpdateWithExecution(ctx, headRoot); err != nil {
if err := s.forkchoiceUpdateWithExecution(ctx, headRoot, s.CurrentSlot()+1); err != nil {
return err
}
@@ -442,12 +447,22 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
}
}
}
// Save boundary states that will be useful for forkchoice
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return err
}
}
// Also save the last post state, which is to be used as the pre state for the next batch.
lastBR := blockRoots[len(blks)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
// Insert all nodes but the last one to forkchoice
if err := s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes); err != nil {
return errors.Wrap(err, "could not insert batch to forkchoice")
}
// Insert the last block to forkchoice
lastBR := blockRoots[len(blks)-1]
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, preState, lastBR); err != nil {
return errors.Wrap(err, "could not insert last block in batch to forkchoice")
}
@@ -457,17 +472,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []interfaces.ReadOnlySi
return errors.Wrap(err, "could not set optimistic block to valid")
}
}
for r, st := range boundaries {
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
return err
}
}
// Also saves the last post state which to be used as pre state for the next batch.
lastB := blks[len(blks)-1]
if err := s.cfg.StateGen.SaveState(ctx, lastBR, preState); err != nil {
return err
}
arg := &notifyForkchoiceUpdateArg{
headState: preState,
headRoot: lastBR,
@@ -567,6 +572,7 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
// InsertSlashingsToForkChoiceStore inserts attester slashing indices to fork choice store.
// To call this function, it's caller's responsibility to ensure the slashing object is valid.
// This function requires a write lock on forkchoice.
func (s *Service) InsertSlashingsToForkChoiceStore(ctx context.Context, slashings []*ethpb.AttesterSlashing) {
for _, slashing := range slashings {
indices := blocks.SlashableAttesterIndices(slashing)
@@ -661,15 +667,15 @@ func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *ev
break
}
ticker := time.NewTicker(time.Second)
defer ticker.Stop()
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
for {
select {
case ti := <-ticker.C:
if err := s.fillMissingBlockPayloadId(ctx, ti); err != nil {
case <-ticker.C():
if err := s.fillMissingBlockPayloadId(ctx); err != nil {
log.WithError(err).Error("Could not fill missing payload ID")
}
case <-s.ctx.Done():
case <-ctx.Done():
log.Debug("Context closed, exiting routine")
return
}
@@ -677,36 +683,34 @@ func (s *Service) fillMissingPayloadIDRoutine(ctx context.Context, stateFeed *ev
}()
}
// Returns true if time `t` is halfway through the slot in sec.
func atHalfSlot(t time.Time) bool {
s := params.BeaconConfig().SecondsPerSlot
return uint64(t.Second())%s == s/2
}
func (s *Service) fillMissingBlockPayloadId(ctx context.Context, ti time.Time) error {
if !atHalfSlot(ti) {
return nil
}
if s.CurrentSlot() == s.cfg.ForkChoiceStore.HighestReceivedBlockSlot() {
// fillMissingBlockPayloadId is called 4 seconds into the slot and calls FCU if we are proposing the next slot
// and the payload ID cache has been missed
func (s *Service) fillMissingBlockPayloadId(ctx context.Context) error {
s.ForkChoicer().RLock()
highestReceivedSlot := s.cfg.ForkChoiceStore.HighestReceivedBlockSlot()
s.ForkChoicer().RUnlock()
if s.CurrentSlot() == highestReceivedSlot {
return nil
}
// Head root should be empty when retrieving proposer index for the next slot.
_, id, has := s.cfg.ProposerSlotIndexCache.GetProposerPayloadIDs(s.CurrentSlot()+1, [32]byte{} /* head root */)
// There exists proposer for next slot, but we haven't called fcu w/ payload attribute yet.
if has && id == [8]byte{} {
missedPayloadIDFilledCount.Inc()
headBlock, err := s.headBlock()
if err != nil {
return err
} else {
if _, err := s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: s.headState(ctx),
headRoot: s.headRoot(),
headBlock: headBlock.Block(),
}); err != nil {
return err
}
}
if !has || id != [8]byte{} {
return nil
}
return nil
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
return err
}
headState := s.headState(ctx)
headRoot := s.headRoot()
s.headLock.RUnlock()
_, err = s.notifyForkchoiceUpdate(ctx, &notifyForkchoiceUpdateArg{
headState: headState,
headRoot: headRoot,
headBlock: headBlock.Block(),
})
return err
}
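A small, self-contained sketch of the cadence this routine now follows; the genesis timestamp is only an example, and the assumption here is that slots.NewSlotTickerWithOffset ticks at genesis + N*secondsPerSlot + offset for every slot N:

package main

import (
    "fmt"
    "time"
)

func main() {
    secondsPerSlot := uint64(12)       // mainnet value, used for illustration
    attThreshold := secondsPerSlot / 3 // 4 seconds, as in the routine above
    genesis := time.Date(2020, 12, 1, 12, 0, 23, 0, time.UTC)

    // The fill-missing-payload-ID check fires 4 seconds into every slot.
    for slot := uint64(0); slot < 3; slot++ {
        fire := genesis.Add(time.Duration(slot*secondsPerSlot+attThreshold) * time.Second)
        fmt.Printf("slot %d: check at %s\n", slot, fire.Format(time.RFC3339))
    }
}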


@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
@@ -14,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v3/math"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"go.opencensus.io/trace"
@@ -68,13 +66,10 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.ReadOnlyBe
// during initial syncing. There's no risk given a state summary object is just
// a subset of the block object.
if !s.cfg.BeaconDB.HasStateSummary(ctx, parentRoot) && !s.cfg.BeaconDB.HasBlock(ctx, parentRoot) {
log.Errorf("requesting blockroot %#x", bytesutil.Trunc(parentRoot[:]))
return errors.New("could not reconstruct parent state")
}
if err := s.VerifyFinalizedBlkDescendant(ctx, parentRoot); err != nil {
return err
}
has, err := s.cfg.StateGen.HasState(ctx, parentRoot)
if err != nil {
return err
@@ -88,35 +83,6 @@ func (s *Service) verifyBlkPreState(ctx context.Context, b interfaces.ReadOnlyBe
return nil
}
// VerifyFinalizedBlkDescendant validates if input block root is a descendant of the
// current finalized block root.
func (s *Service) VerifyFinalizedBlkDescendant(ctx context.Context, root [32]byte) error {
ctx, span := trace.StartSpan(ctx, "blockChain.VerifyFinalizedBlkDescendant")
defer span.End()
finalized := s.ForkChoicer().FinalizedCheckpoint()
fRoot := s.ensureRootNotZeros(finalized.Root)
fSlot, err := slots.EpochStart(finalized.Epoch)
if err != nil {
return err
}
bFinalizedRoot, err := s.ancestor(ctx, root[:], fSlot)
if err != nil {
return errors.Wrap(err, "could not get finalized block root")
}
if bFinalizedRoot == nil {
return fmt.Errorf("no finalized block known for block %#x", bytesutil.Trunc(root[:]))
}
if !bytes.Equal(bFinalizedRoot, fRoot[:]) {
err := fmt.Errorf("block %#x is not a descendant of the current finalized block slot %d, %#x != %#x",
bytesutil.Trunc(root[:]), fSlot, bytesutil.Trunc(bFinalizedRoot),
bytesutil.Trunc(fRoot[:]))
tracing.AnnotateError(span, err)
return invalidBlock{error: err}
}
return nil
}
// verifyBlkFinalizedSlot validates input block is not less than or equal
// to current finalized slot.
func (s *Service) verifyBlkFinalizedSlot(b interfaces.ReadOnlyBeaconBlock) error {
@@ -272,7 +238,7 @@ func (s *Service) fillInForkChoiceMissingBlocks(ctx context.Context, blk interfa
return nil
}
if root != s.ensureRootNotZeros(finalized.Root) && !s.ForkChoicer().HasNode(root) {
return errNotDescendantOfFinalized
return ErrNotDescendantOfFinalized
}
return s.cfg.ForkChoiceStore.InsertChain(ctx, pendingNodes)
}


@@ -65,6 +65,10 @@ func TestStore_OnBlock(t *testing.T) {
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), validGenesisRoot))
ojc := &ethpb.Checkpoint{}
stfcs, root, err := prepareForkchoiceState(ctx, 0, validGenesisRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
roots, err := blockTree1(t, beaconDB, validGenesisRoot[:])
require.NoError(t, err)
random := util.NewBeaconBlock()
@@ -73,11 +77,16 @@ func TestStore_OnBlock(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, random)
randomParentRoot, err := random.Block.HashTreeRoot()
assert.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), randomParentRoot))
randomParentRoot2 := roots[1]
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Slot: st.Slot(), Root: randomParentRoot2}))
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st.Copy(), bytesutil.ToBytes32(randomParentRoot2)))
stfcs, root, err = prepareForkchoiceState(ctx, 2, bytesutil.ToBytes32(randomParentRoot2),
validGenesisRoot, [32]byte{'r'}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, stfcs, root))
tests := []struct {
name string
@@ -108,10 +117,11 @@ func TestStore_OnBlock(t *testing.T) {
blk: func() *ethpb.SignedBeaconBlock {
b := util.NewBeaconBlock()
b.Block.ParentRoot = randomParentRoot[:]
b.Block.Slot = 2
return b
}(),
s: st.Copy(),
wantErrString: "is not a descendant of the current finalized block",
wantErrString: "not descendant of finalized checkpoint",
},
{
name: "same slot as finalized block",
@@ -441,7 +451,7 @@ func TestFillForkChoiceMissingBlocks_FinalizedSibling(t *testing.T) {
err = service.fillInForkChoiceMissingBlocks(
context.Background(), wsb.Block(), beaconState.FinalizedCheckpoint(), beaconState.CurrentJustifiedCheckpoint())
require.Equal(t, errNotDescendantOfFinalized.Error(), err.Error())
require.Equal(t, ErrNotDescendantOfFinalized.Error(), err.Error())
}
// blockTree1 constructs the following tree:
@@ -707,87 +717,6 @@ func TestEnsureRootNotZeroHashes(t *testing.T) {
assert.Equal(t, root, r, "Did not get wanted justified root")
}
func TestVerifyBlkDescendant(t *testing.T) {
beaconDB := testDB.SetupDB(t)
ctx := context.Background()
fcs := doublylinkedtree.New()
opts := []Option{
WithDatabase(beaconDB),
WithStateGen(stategen.New(beaconDB, fcs)),
WithForkChoiceStore(fcs),
}
b := util.NewBeaconBlock()
b.Block.Slot = 32
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b)
b1 := util.NewBeaconBlock()
b1.Block.Slot = 32
b1.Block.Body.Graffiti = bytesutil.PadTo([]byte{'a'}, 32)
r1, err := b1.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, beaconDB, b1)
type args struct {
parentRoot [32]byte
finalizedRoot [32]byte
}
tests := []struct {
name string
args args
wantedErr string
invalidBlockRoot bool
}{
{
name: "could not get finalized block in block service cache",
args: args{
finalizedRoot: [32]byte{'a'},
},
wantedErr: "block not found in cache or db",
},
{
name: "could not get finalized block root in DB",
args: args{
finalizedRoot: r,
parentRoot: [32]byte{'a'},
},
wantedErr: "could not get finalized block root",
},
{
name: "is not descendant",
args: args{
finalizedRoot: r1,
parentRoot: r,
},
wantedErr: "is not a descendant of the current finalized block slot",
invalidBlockRoot: true,
},
{
name: "is descendant",
args: args{
finalizedRoot: r,
parentRoot: r,
},
},
}
for _, tt := range tests {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, service.ForkChoicer().UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: tt.args.finalizedRoot, Epoch: 1}))
err = service.VerifyFinalizedBlkDescendant(ctx, tt.args.parentRoot)
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
if tt.invalidBlockRoot {
require.Equal(t, true, IsInvalidBlock(err))
}
} else if err != nil {
assert.NoError(t, err)
}
}
}
func TestHandleEpochBoundary_UpdateFirstSlot(t *testing.T) {
ctx := context.Background()
opts := testServiceOptsNoDB()
@@ -2140,63 +2069,14 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, service.onBlock(ctx, wsb, root))
// Check that the head is still INVALID and the node is optimistic
// We use onBlockBatch here because the valid chain is missing in forkchoice
require.NoError(t, service.onBlockBatch(ctx, []interfaces.ReadOnlySignedBeaconBlock{wsb}, [][32]byte{root}))
// Check that the head is now VALID and the node is not optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.ForkChoicer().CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
optimistic, err = service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
// Import blocks 21--23
for i := 21; i < 24; i++ {
driftGenesisTime(service, int64(i), 0)
require.NoError(t, err)
b, err := util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), primitives.Slot(i))
require.NoError(t, err)
wsb, err := consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
err = service.onBlock(ctx, wsb, root)
require.NoError(t, err)
st, err = service.cfg.StateGen.StateByRoot(ctx, root)
require.NoError(t, err)
}
// Head should still be INVALID and the node is optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.ForkChoicer().CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, genesisRoot, bytesutil.ToBytes32(headRoot))
optimistic, err = service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, true, optimistic)
// Import block 24, it should justify Epoch 3 and become HEAD, the node
// recovers
driftGenesisTime(service, 24, 0)
b, err = util.GenerateFullBlockBellatrix(st, keys, util.DefaultBlockGenConfig(), 24)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
service.ForkChoicer().SetBalancesByRooter(service.cfg.StateGen.ActiveNonSlashedBalancesByRoot)
err = service.onBlock(ctx, wsb, root)
require.NoError(t, err)
require.Equal(t, root, service.ForkChoicer().CachedHeadRoot())
headRoot, err = service.HeadRoot(ctx)
require.NoError(t, err)
require.Equal(t, root, bytesutil.ToBytes32(headRoot))
sjc = service.CurrentJustifiedCheckpt()
require.Equal(t, primitives.Epoch(3), sjc.Epoch)
optimistic, err = service.IsOptimistic(ctx)
require.NoError(t, err)
require.Equal(t, false, optimistic)
@@ -2284,7 +2164,7 @@ func TestFillMissingBlockPayloadId_DiffSlotExitEarly(t *testing.T) {
service, err := NewService(ctx, opts...)
require.NoError(t, err)
require.NoError(t, service.fillMissingBlockPayloadId(ctx, time.Unix(int64(params.BeaconConfig().SecondsPerSlot/2), 0)))
require.NoError(t, service.fillMissingBlockPayloadId(ctx), 0)
}
// Helper function to simulate the block being on time or delayed for proposer


@@ -11,7 +11,9 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
@@ -19,6 +21,10 @@ import (
"go.opencensus.io/trace"
)
// reorgLateBlockCountAttestations is how long before the end of the slot we count
// attestations to decide whether we will reorg the incoming block
const reorgLateBlockCountAttestations = 2 * time.Second
// AttestationStateFetcher allows for retrieving a beacon state corresponding to the block
// root of an attestation's target checkpoint.
type AttestationStateFetcher interface {
@@ -29,7 +35,7 @@ type AttestationStateFetcher interface {
type AttestationReceiver interface {
AttestationStateFetcher
VerifyLmdFfgConsistency(ctx context.Context, att *ethpb.Attestation) error
VerifyFinalizedConsistency(root []byte) error
InForkchoice([32]byte) bool
}
// AttestationTargetState returns the pre state of attestation.
@@ -60,19 +66,6 @@ func (s *Service) VerifyLmdFfgConsistency(ctx context.Context, a *ethpb.Attestat
return nil
}
// VerifyFinalizedConsistency verifies input root is consistent with finalized store.
// When the input root is not consistent with the finalized store, we know it is not
// on the finalized checkpoint that leads to the current canonical chain and it should be rejected accordingly.
func (s *Service) VerifyFinalizedConsistency(root []byte) error {
// A canonical root implies the root has an ancestor that aligns with the finalized checkpoint.
// In this case, we could exit early to save on additional computation.
blockRoot := bytesutil.ToBytes32(root)
if !s.cfg.ForkChoiceStore.HasNode(blockRoot) {
return errors.New("Root and finalized store are not consistent")
}
return nil
}
// This routine processes fork choice attestations from the pool to account for validator votes and fork choice.
func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
// Wait for state to be initialized.
@@ -101,32 +94,40 @@ func (s *Service) spawnProcessAttestationsRoutine(stateFeed *event.Feed) {
}
st := slots.NewSlotTicker(s.genesisTime, params.BeaconConfig().SecondsPerSlot)
pat := slots.NewSlotTickerWithOffset(s.genesisTime, -reorgLateBlockCountAttestations, params.BeaconConfig().SecondsPerSlot)
for {
select {
case <-s.ctx.Done():
return
case <-pat.C():
s.ForkChoicer().Lock()
s.UpdateHead(s.ctx, s.CurrentSlot()+1)
s.ForkChoicer().Unlock()
case <-st.C():
s.ForkChoicer().Lock()
if err := s.ForkChoicer().NewSlot(s.ctx, s.CurrentSlot()); err != nil {
log.WithError(err).Error("Could not process new slot")
log.WithError(err).Error("could not process new slot")
}
if err := s.UpdateHead(s.ctx); err != nil {
log.WithError(err).Error("Could not process attestations and update head")
}
s.UpdateHead(s.ctx, s.CurrentSlot())
s.ForkChoicer().Unlock()
}
}
}()
}
// UpdateHead updates the canonical head of the chain based on information from fork-choice attestations and votes.
// It requires no external inputs.
func (s *Service) UpdateHead(ctx context.Context) error {
// Only one process can process attestations and update head at a time.
s.processAttestationsLock.Lock()
defer s.processAttestationsLock.Unlock()
// The caller of this function MUST hold a lock on forkchoice
func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot) {
start := time.Now()
s.processAttestations(ctx)
// This function is only called at 10 seconds or 0 seconds into the slot
disparity := params.BeaconNetworkConfig().MaximumGossipClockDisparity
if !features.Get().DisableReorgLateBlocks {
disparity += reorgLateBlockCountAttestations
}
s.processAttestations(ctx, disparity)
processAttsElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
start = time.Now()
@@ -144,21 +145,20 @@ func (s *Service) UpdateHead(ctx context.Context) error {
}).Debug("Head changed due to attestations")
}
s.headLock.RUnlock()
if err := s.forkchoiceUpdateWithExecution(ctx, newHeadRoot); err != nil {
return err
if err := s.forkchoiceUpdateWithExecution(s.ctx, newHeadRoot, proposingSlot); err != nil {
log.WithError(err).Error("could not update forkchoice")
}
return nil
}
// This processes fork choice attestations from the pool to account for validator votes and fork choice.
func (s *Service) processAttestations(ctx context.Context) {
func (s *Service) processAttestations(ctx context.Context, disparity time.Duration) {
atts := s.cfg.AttPool.ForkchoiceAttestations()
for _, a := range atts {
// Based on the spec, don't process the attestation until the subsequent slot.
// This delays consideration in the fork choice until their slot is in the past.
// https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/fork-choice.md#validate_on_attestation
nextSlot := a.Data.Slot + 1
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, params.BeaconNetworkConfig().MaximumGossipClockDisparity); err != nil {
if err := slots.VerifyTime(uint64(s.genesisTime.Unix()), nextSlot, disparity); err != nil {
continue
}
@@ -176,7 +176,7 @@ func (s *Service) processAttestations(ctx context.Context) {
continue
}
if err := s.receiveAttestationNoPubsub(ctx, a); err != nil {
if err := s.receiveAttestationNoPubsub(ctx, a, disparity); err != nil {
log.WithFields(logrus.Fields{
"slot": a.Data.Slot,
"committeeIndex": a.Data.CommitteeIndex,
@@ -193,11 +193,11 @@ func (s *Service) processAttestations(ctx context.Context) {
// 1. Validate attestation, update validator's latest vote
// 2. Apply fork choice to the processed attestation
// 3. Save latest head info
func (s *Service) receiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation) error {
func (s *Service) receiveAttestationNoPubsub(ctx context.Context, att *ethpb.Attestation, disparity time.Duration) error {
ctx, span := trace.StartSpan(ctx, "beacon-chain.blockchain.receiveAttestationNoPubsub")
defer span.End()
if err := s.OnAttestation(ctx, att); err != nil {
if err := s.OnAttestation(ctx, att, disparity); err != nil {
return errors.Wrap(err, "could not process attestation")
}
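For orientation, a self-contained timeline sketch of the two tickers set up in spawnProcessAttestationsRoutine, assuming mainnet's 12-second slots:

package main

import (
    "fmt"
    "time"
)

func main() {
    secondsPerSlot := 12 * time.Second
    reorgLateBlockCountAttestations := 2 * time.Second

    // `st` fires at the start of each slot; `pat` fires 2 seconds before the
    // end of the slot because of its negative offset.
    fmt.Printf("t=%v   st: ForkChoicer().NewSlot + UpdateHead(currentSlot)\n", time.Duration(0))
    fmt.Printf("t=%v  pat: UpdateHead(currentSlot+1), deciding a possible late-block reorg\n",
        secondsPerSlot-reorgLateBlockCountAttestations)
}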


@@ -120,7 +120,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(atts))
service.processAttestations(ctx)
service.processAttestations(ctx, 0)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))
require.LogsDoNotContain(t, hook, "Could not process attestation for fork choice")
}
@@ -183,10 +183,9 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
service.head.root = r // Old head
require.Equal(t, 1, len(service.cfg.AttPool.ForkchoiceAttestations()))
require.NoError(t, err, service.UpdateHead(ctx))
service.UpdateHead(ctx, 0)
require.Equal(t, tRoot, service.headRoot())
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, tRoot, service.head.root) // Validate head is the new one
}
func TestService_UpdateHead_NoAtts(t *testing.T) {
@@ -236,9 +235,8 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
require.Equal(t, 3, fcs.NodeCount())
require.Equal(t, 0, service.cfg.AttPool.ForkchoiceAttestationCount())
require.NoError(t, err, service.UpdateHead(ctx))
service.UpdateHead(ctx, 0)
require.Equal(t, r, service.headRoot())
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations())) // Validate att pool is empty
require.Equal(t, r, service.head.root) // Validate head is the new one
}


@@ -6,8 +6,10 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition/interop"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
@@ -45,10 +47,14 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
return err
}
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the new block.
if err := s.onBlock(ctx, blockCopy, blockRoot); err != nil {
err := errors.Wrap(err, "could not process block")
tracing.AnnotateError(span, err)
interop.WriteBlockToDisk("state_transition", blockCopy, true /*failed*/)
return err
}
@@ -63,11 +69,13 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
// Reports on block and fork choice metrics.
finalized := s.FinalizedCheckpt()
cp := s.ForkChoicer().FinalizedCheckpoint()
finalized := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
// Log block sync status.
justified := s.CurrentJustifiedCheckpt()
cp = s.ForkChoicer().JustifiedCheckpoint()
justified := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
if err := logBlockSyncStatus(blockCopy.Block(), blockRoot, justified, finalized, receivedTime, uint64(s.genesisTime.Unix())); err != nil {
log.WithError(err).Error("Unable to log block sync status")
}
@@ -90,6 +98,9 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the incoming newly received block batches, one by one.
if err := s.onBlockBatch(ctx, blocks, blkRoots); err != nil {
err := errors.Wrap(err, "could not process block in batch")
@@ -114,14 +125,15 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []interfaces.Rea
})
// Reports on blockCopy and fork choice metrics.
finalized := s.FinalizedCheckpt()
cp := s.ForkChoicer().FinalizedCheckpoint()
finalized := &ethpb.Checkpoint{Epoch: cp.Epoch, Root: bytesutil.SafeCopyBytes(cp.Root[:])}
reportSlotMetrics(blockCopy.Block().Slot(), s.HeadSlot(), s.CurrentSlot(), finalized)
}
if err := s.cfg.BeaconDB.SaveBlocks(ctx, s.getInitSyncBlocks()); err != nil {
return err
}
finalized := s.FinalizedCheckpt()
finalized := s.ForkChoicer().FinalizedCheckpoint()
if finalized == nil {
return errNilFinalizedInStore
}
@@ -142,6 +154,8 @@ func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
// ReceiveAttesterSlashing receives an attester slashing and inserts it to forkchoice
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) {
s.ForkChoicer().Lock()
defer s.ForkChoicer().Unlock()
s.InsertSlashingsToForkChoiceStore(ctx, []*ethpb.AttesterSlashing{slashing})
}
@@ -179,11 +193,12 @@ func (s *Service) handleBlockBLSToExecChanges(blk interfaces.ReadOnlyBeaconBlock
// This checks whether it's time to start saving hot state to DB.
// It's time when there have been `epochsSinceFinalitySaveHotStateDB` epochs of non-finality.
// Requires a read lock on forkchoice
func (s *Service) checkSaveHotStateDB(ctx context.Context) error {
currentEpoch := slots.ToEpoch(s.CurrentSlot())
// Prevent `sinceFinality` going underflow.
var sinceFinality primitives.Epoch
finalized := s.FinalizedCheckpt()
finalized := s.ForkChoicer().FinalizedCheckpoint()
if finalized == nil {
return errNilFinalizedInStore
}


@@ -97,7 +97,8 @@ func TestService_ReceiveBlock(t *testing.T) {
),
},
check: func(t *testing.T, s *Service) {
pending := s.cfg.ExitPool.PendingExits(genesis, 1, true /* no limit */)
pending, err := s.cfg.ExitPool.PendingExits()
require.NoError(t, err)
if len(pending) != 0 {
t.Errorf(
"Did not mark the correct number of exits. Got %d pending but wanted %d",


@@ -44,20 +44,19 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot primitives.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
processAttestationsLock sync.Mutex
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
nextEpochBoundarySlot primitives.Slot
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
}
// config options for the service.
@@ -200,6 +199,8 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
}
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
@@ -426,6 +427,8 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
s.originBlockRoot = genesisBlkRoot
s.cfg.StateGen.SaveFinalizedState(0 /*slot*/, genesisBlkRoot, genesisState)
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if err := s.cfg.ForkChoiceStore.InsertNode(ctx, genesisState, genesisBlkRoot); err != nil {
log.WithError(err).Fatal("Could not process genesis block for fork choice")
}
@@ -446,6 +449,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
// 1.) Check fork choice store.
// 2.) Check DB.
// Checking 1.) is ten times faster than checking 2.)
// This function requires a lock on forkchoice
func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
if s.cfg.ForkChoiceStore.HasNode(root) {
return true


@@ -57,6 +57,11 @@ type mockBroadcaster struct {
broadcastCalled bool
}
func (mb *mockBroadcaster) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
//TODO implement me
panic("implement me")
}
func (mb *mockBroadcaster) Broadcast(_ context.Context, _ proto.Message) error {
mb.broadcastCalled = true
return nil


@@ -20,11 +20,13 @@ go_library(
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",


@@ -19,11 +19,13 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v3/config/fieldparams"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
@@ -32,6 +34,7 @@ var ErrNilState = errors.New("nil state")
// ChainService defines the mock interface for testing
type ChainService struct {
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
@@ -389,11 +392,6 @@ func (_ *ChainService) HeadGenesisValidatorsRoot() [32]byte {
return [32]byte{}
}
// VerifyFinalizedBlkDescendant mocks VerifyBlkDescendant and always returns nil.
func (s *ChainService) VerifyFinalizedBlkDescendant(_ context.Context, _ [32]byte) error {
return s.VerifyBlkDescendantErr
}
// VerifyLmdFfgConsistency mocks VerifyLmdFfgConsistency and always returns nil.
func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attestation) error {
if !bytes.Equal(a.Data.BeaconBlockRoot, a.Data.Target.Root) {
@@ -402,14 +400,6 @@ func (_ *ChainService) VerifyLmdFfgConsistency(_ context.Context, a *ethpb.Attes
return nil
}
// VerifyFinalizedConsistency mocks VerifyFinalizedConsistency and always returns nil.
func (s *ChainService) VerifyFinalizedConsistency(r []byte) error {
if !bytes.Equal(r, s.FinalizedCheckPoint.Root) {
return errors.New("Root and finalized store are not consistent")
}
return nil
}
// ChainHeads mocks ChainHeads and always return nil.
func (_ *ChainService) ChainHeads() ([][32]byte, []primitives.Slot) {
return [][32]byte{
@@ -459,6 +449,11 @@ func (s *ChainService) IsOptimistic(_ context.Context) (bool, error) {
return s.Optimistic, nil
}
// InForkchoice mocks the same method in the chain service
func (s *ChainService) InForkchoice(_ [32]byte) bool {
return !s.NotFinalized
}
// IsOptimisticForRoot mocks the same method in the chain service.
func (s *ChainService) IsOptimisticForRoot(_ context.Context, root [32]byte) (bool, error) {
s.OptimisticCheckRootReceived = root
@@ -466,7 +461,17 @@ func (s *ChainService) IsOptimisticForRoot(_ context.Context, root [32]byte) (bo
}
// UpdateHead mocks the same method in the chain service.
func (s *ChainService) UpdateHead(_ context.Context) error { return nil }
func (s *ChainService) UpdateHead(ctx context.Context, slot primitives.Slot) {
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, slot, bytesutil.ToBytes32(s.Root), [32]byte{}, [32]byte{}, ojc, ojc)
if err != nil {
logrus.WithError(err).Error("could not update head")
}
err = s.ForkChoicer().InsertNode(ctx, st, root)
if err != nil {
logrus.WithError(err).Error("could not insert node to forkchoice")
}
}
// ReceiveAttesterSlashing mocks the same method in the chain service.
func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterSlashing) {}
@@ -475,3 +480,37 @@ func (s *ChainService) ReceiveAttesterSlashing(context.Context, *ethpb.AttesterS
func (s *ChainService) IsFinalized(_ context.Context, blockRoot [32]byte) bool {
return s.FinalizedRoots[blockRoot]
}
// prepareForkchoiceState prepares a beacon state with the given data to mock
// insert into forkchoice
func prepareForkchoiceState(
_ context.Context,
slot primitives.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, [32]byte, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
executionHeader := &enginev1.ExecutionPayloadHeader{
BlockHash: payloadHash[:],
}
base := &ethpb.BeaconStateBellatrix{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
BlockRoots: make([][]byte, 1),
CurrentJustifiedCheckpoint: justified,
FinalizedCheckpoint: finalized,
LatestExecutionPayloadHeader: executionHeader,
LatestBlockHeader: blockHeader,
}
base.BlockRoots[0] = append(base.BlockRoots[0], blockRoot[:]...)
st, err := state_native.InitializeFromProtoBellatrix(base)
return st, blockRoot, err
}


@@ -6,6 +6,7 @@ go_library(
"active_balance.go",
"active_balance_disabled.go", # keep
"attestation_data.go",
"blobs.go",
"checkpoint_state.go",
"committee.go",
"committee_disabled.go", # keep
@@ -40,6 +41,7 @@ go_library(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",

beacon-chain/cache/blobs.go (new file, 51 lines added)

@@ -0,0 +1,51 @@
package cache
import (
"sync"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v3/proto/engine/v1"
)
var (
// blobsCacheMiss tracks the number of blob requests that aren't present in the cache.
blobsCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "total_blobs_cache_miss",
Help: "The number of get requests that aren't present in the cache.",
})
errNoSidecar = errors.New("no sidecar")
)
// BlobsCache caches the blobs for the proposer through the block proposal life cycle.
type BlobsCache struct {
cache map[types.Slot][]*v1.Blob
sync.Mutex
}
// NewBlobsCache initializes an empty blobs cache.
func NewBlobsCache() *BlobsCache {
return &BlobsCache{
cache: map[types.Slot][]*v1.Blob{},
}
}
// Get returns the cached blobs for the given slot and removes the entry from the cache.
func (b *BlobsCache) Get(slot types.Slot) ([]*v1.Blob, error) {
b.Lock()
defer b.Unlock()
sc, ok := b.cache[slot]
if !ok {
blobsCacheMiss.Inc()
return nil, errNoSidecar
}
delete(b.cache, slot)
return sc, nil
}
// Put caches the blobs for the given slot.
func (b *BlobsCache) Put(slot types.Slot, blobs []*v1.Blob) {
b.Lock()
defer b.Unlock()
b.cache[slot] = blobs
}
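
For orientation, a minimal usage sketch (not part of the diff) of the cache above, using the types and v1 aliases imported in this file; note that Get evicts the entry it returns, so a second read for the same slot is a miss:

c := NewBlobsCache()
c.Put(types.Slot(5), []*v1.Blob{{}})
blobs, err := c.Get(types.Slot(5)) // returns the cached blobs and deletes the entry
_, _ = blobs, err
_, err = c.Get(types.Slot(5)) // misses: errNoSidecar is returned and blobsCacheMiss is incremented
_ = err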

View File

@@ -44,49 +44,51 @@ import (
// increase_balance(state, get_beacon_proposer_index(state), proposer_reward)
// else:
// decrease_balance(state, participant_index, participant_reward)
func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, error) {
s, votedKeys, err := processSyncAggregate(ctx, s, sync)
func ProcessSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (state.BeaconState, uint64, error) {
s, votedKeys, reward, err := processSyncAggregate(ctx, s, sync)
if err != nil {
return nil, errors.Wrap(err, "could not filter sync committee votes")
return nil, 0, errors.Wrap(err, "could not filter sync committee votes")
}
if err := VerifySyncCommitteeSig(s, votedKeys, sync.SyncCommitteeSignature); err != nil {
return nil, errors.Wrap(err, "could not verify sync committee signature")
return nil, 0, errors.Wrap(err, "could not verify sync committee signature")
}
return s, nil
return s, reward, nil
}
// processSyncAggregate applies all the logic in the spec function `process_sync_aggregate` except
// verifying the BLS signatures. It returns the modified beacons state and the list of validators'
// public keys that voted, for future signature verification.
// verifying the BLS signatures. It returns the modified beacon state, the list of validators'
// public keys that voted (for future signature verification) and the proposer reward for including
// sync aggregate messages.
func processSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.SyncAggregate) (
state.BeaconState,
[]bls.PublicKey,
uint64,
error) {
currentSyncCommittee, err := s.CurrentSyncCommittee()
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
if currentSyncCommittee == nil {
return nil, nil, errors.New("nil current sync committee in state")
return nil, nil, 0, errors.New("nil current sync committee in state")
}
committeeKeys := currentSyncCommittee.Pubkeys
if sync.SyncCommitteeBits.Len() > uint64(len(committeeKeys)) {
return nil, nil, errors.New("bits length exceeds committee length")
return nil, nil, 0, errors.New("bits length exceeds committee length")
}
votedKeys := make([]bls.PublicKey, 0, len(committeeKeys))
activeBalance, err := helpers.TotalActiveBalance(s)
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
proposerReward, participantReward, err := SyncRewards(activeBalance)
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, s)
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
earnedProposerReward := uint64(0)
@@ -94,29 +96,29 @@ func processSyncAggregate(ctx context.Context, s state.BeaconState, sync *ethpb.
vIdx, exists := s.ValidatorIndexByPubkey(bytesutil.ToBytes48(committeeKeys[i]))
// Impossible scenario.
if !exists {
return nil, nil, errors.New("validator public key does not exist in state")
return nil, nil, 0, errors.New("validator public key does not exist in state")
}
if sync.SyncCommitteeBits.BitAt(i) {
pubKey, err := bls.PublicKeyFromBytes(committeeKeys[i])
if err != nil {
return nil, nil, err
return nil, nil, 0, err
}
votedKeys = append(votedKeys, pubKey)
if err := helpers.IncreaseBalance(s, vIdx, participantReward); err != nil {
return nil, nil, err
return nil, nil, 0, err
}
earnedProposerReward += proposerReward
} else {
if err := helpers.DecreaseBalance(s, vIdx, participantReward); err != nil {
return nil, nil, err
return nil, nil, 0, err
}
}
}
if err := helpers.IncreaseBalance(s, proposerIndex, earnedProposerReward); err != nil {
return nil, nil, err
return nil, nil, 0, err
}
return s, votedKeys, err
return s, votedKeys, earnedProposerReward, err
}
// VerifySyncCommitteeSig verifies sync committee signature `syncSig` is valid with respect to public keys `syncKeys`.

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/testing/util"
"github.com/prysmaticlabs/prysm/v3/time/slots"
@@ -53,8 +54,10 @@ func TestProcessSyncCommittee_PerfectParticipation(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
beaconState, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
var reward uint64
beaconState, reward, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
assert.Equal(t, uint64(72192), reward)
// Use a non-sync committee index to compare profitability.
syncCommittee := make(map[primitives.ValidatorIndex]bool)
@@ -127,7 +130,7 @@ func TestProcessSyncCommittee_MixParticipation_BadSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.ErrorContains(t, "invalid sync committee signature", err)
}
@@ -164,7 +167,7 @@ func TestProcessSyncCommittee_MixParticipation_GoodSignature(t *testing.T) {
SyncCommitteeSignature: aggregatedSig,
}
_, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
_, _, err = altair.ProcessSyncAggregate(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
}
@@ -189,7 +192,7 @@ func TestProcessSyncCommittee_DontPrecompute(t *testing.T) {
SyncCommitteeBits: syncBits,
}
require.NoError(t, beaconState.UpdateBalancesAtIndex(idx, 0))
st, votedKeys, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
require.Equal(t, 511, len(votedKeys))
require.DeepEqual(t, committeeKeys[0], votedKeys[0].Marshal())
@@ -212,7 +215,7 @@ func TestProcessSyncCommittee_processSyncAggregate(t *testing.T) {
SyncCommitteeBits: syncBits,
}
st, votedKeys, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
st, votedKeys, _, err := altair.ProcessSyncAggregateEported(context.Background(), beaconState, syncAggregate)
require.NoError(t, err)
votedMap := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
for _, key := range votedKeys {

View File

@@ -116,7 +116,7 @@ func VerifyExitAndSignature(
return nil
}
// verifyExitConditions implements the spec defined validation for voluntary exits(excluding signatures).
// verifyExitConditions implements the spec defined validation for voluntary exits (excluding signatures).
//
// Spec pseudocode definition:
//

View File

@@ -133,6 +133,7 @@ func ValidatePayloadWhenMergeCompletes(st state.BeaconState, payload interfaces.
return err
}
if !bytes.Equal(payload.ParentHash(), header.BlockHash()) {
log.Errorf("parent Hash %#x, header Hash: %#x", bytesutil.Trunc(payload.ParentHash()), bytesutil.Trunc(header.BlockHash()))
return ErrInvalidPayloadBlockHash
}
return nil

View File

@@ -9,6 +9,110 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
// UpgradeToDeneb updates a generic state to return the version Deneb state.
func UpgradeToDeneb(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := state.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := state.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := state.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := state.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := state.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := state.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := state.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := state.HistoricalSummaries()
if err != nil {
return nil, err
}
s := &ethpb.BeaconStateDeneb{
GenesisTime: state.GenesisTime(),
GenesisValidatorsRoot: state.GenesisValidatorsRoot(),
Slot: state.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: state.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().DenebForkVersion,
Epoch: epoch,
},
LatestBlockHeader: state.LatestBlockHeader(),
BlockRoots: state.BlockRoots(),
StateRoots: state.StateRoots(),
HistoricalRoots: [][]byte{},
Eth1Data: state.Eth1Data(),
Eth1DataVotes: state.Eth1DataVotes(),
Eth1DepositIndex: state.Eth1DepositIndex(),
Validators: state.Validators(),
Balances: state.Balances(),
RandaoMixes: state.RandaoMixes(),
Slashings: state.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: state.JustificationBits(),
PreviousJustifiedCheckpoint: state.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: state.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: state.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
ExcessDataGas: make([]byte, 32),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
},
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
}
return state_native.InitializeFromProtoUnsafeDeneb(s)
}
// UpgradeToCapella updates a generic state to return the version Capella state.
func UpgradeToCapella(state state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(state)

View File

@@ -353,7 +353,7 @@ func ProcessRandaoMixesReset(state state.BeaconState) (state.BeaconState, error)
}
// ProcessHistoricalDataUpdate processes the updates to historical data during epoch processing.
// From Capella onward, per spec,state's historical summaries are updated instead of historical roots.
// From Capella onward, per spec, state's historical summaries are updated instead of historical roots.
func ProcessHistoricalDataUpdate(state state.BeaconState) (state.BeaconState, error) {
currentEpoch := time.CurrentEpoch(state)
nextEpoch := currentEpoch + 1

View File

@@ -81,6 +81,15 @@ func CanUpgradeToCapella(slot primitives.Slot) bool {
return epochStart && capellaEpoch
}
// CanUpgradeToDeneb returns true if the input `slot` can upgrade to Deneb.
// Spec code:
// If state.slot % SLOTS_PER_EPOCH == 0 and compute_epoch_at_slot(state.slot) == DENEB_FORK_EPOCH
func CanUpgradeToDeneb(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
denebEpoch := slots.ToEpoch(slot) == params.BeaconConfig().DenebForkEpoch
return epochStart && denebEpoch
}
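
A worked example of the check above, assuming SLOTS_PER_EPOCH = 32 and a hypothetical DenebForkEpoch = 100:
// CanUpgradeToDeneb(3200) == true  (3200 % 32 == 0 and 3200 / 32 == epoch 100)
// CanUpgradeToDeneb(3201) == false (not the first slot of an epoch)
// CanUpgradeToDeneb(3232) == false (epoch 101 != DenebForkEpoch)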
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -23,7 +23,6 @@ go_library(
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
@@ -46,6 +45,7 @@ go_library(
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_protolambda_go_kzg//eth:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],

View File

@@ -11,12 +11,12 @@ import (
)
// WriteBlockToDisk writes a block as SSZ, named with the given prefix, to a temp directory. Debug only!
func WriteBlockToDisk(block interfaces.ReadOnlySignedBeaconBlock, failed bool) {
func WriteBlockToDisk(prefix string, block interfaces.ReadOnlySignedBeaconBlock, failed bool) {
if !features.Get().WriteSSZStateTransitions {
return
}
filename := fmt.Sprintf("beacon_block_%d.ssz", block.Block().Slot())
filename := fmt.Sprintf(prefix+"_beacon_block_%d.ssz", block.Block().Slot())
if failed {
filename = "failed_" + filename
}
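
As an illustration of the new prefix parameter (the prefix and slot 42 are assumed values):
// WriteBlockToDisk("gossip", blk, false) -> gossip_beacon_block_42.ssz
// WriteBlockToDisk("gossip", blk, true)  -> failed_gossip_beacon_block_42.ssz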

View File

@@ -302,6 +302,14 @@ func ProcessSlots(ctx context.Context, state state.BeaconState, slot primitives.
return nil, err
}
}
if time.CanUpgradeToDeneb(state.Slot()) {
state, err = capella.UpgradeToDeneb(state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
}
}
if highestSlot < state.Slot() {

View File

@@ -6,14 +6,15 @@ import (
"fmt"
"github.com/pkg/errors"
"github.com/protolambda/go-kzg/eth"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/altair"
b "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/transition/interop"
v "github.com/prysmaticlabs/prysm/v3/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v3/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v3/runtime/version"
"go.opencensus.io/trace"
@@ -57,9 +58,6 @@ func ExecuteStateTransitionNoVerifyAnySig(
defer span.End()
var err error
interop.WriteBlockToDisk(signed, false /* Has the block failed */)
interop.WriteStateToDisk(st)
parentRoot := signed.Block().ParentRoot()
st, err = ProcessSlotsUsingNextSlotCache(ctx, st, parentRoot[:], signed.Block().Slot())
if err != nil {
@@ -256,7 +254,7 @@ func ProcessOperationsNoVerifyAttsSigs(
if err != nil {
return nil, err
}
case version.Altair, version.Bellatrix, version.Capella:
case version.Altair, version.Bellatrix, version.Capella, version.Deneb:
state, err = altairOperations(ctx, state, signedBeaconBlock)
if err != nil {
return nil, err
@@ -351,14 +349,53 @@ func ProcessBlockForStateRoot(
if err != nil {
return nil, errors.Wrap(err, "could not get sync aggregate from block")
}
state, err = altair.ProcessSyncAggregate(ctx, state, sa)
state, _, err = altair.ProcessSyncAggregate(ctx, state, sa)
if err != nil {
return nil, errors.Wrap(err, "process_sync_aggregate failed")
}
if signed.Block().Version() == version.Deneb {
err := ValidateBlobKzgs(ctx, signed.Block().Body())
if err != nil {
return nil, errors.Wrap(err, "could not validate blob kzgs")
}
}
return state, nil
}
// ValidateBlobKzgs validates the blob KZG commitments in the beacon block.
//
// Spec code:
// def process_blob_kzg_commitments(state: BeaconState, body: BeaconBlockBody):
//
// assert verify_kzg_commitments_against_transactions(body.execution_payload.transactions, body.blob_kzg_commitments)
func ValidateBlobKzgs(ctx context.Context, body interfaces.ReadOnlyBeaconBlockBody) error {
_, span := trace.StartSpan(ctx, "core.state.ValidateBlobKzgs")
defer span.End()
payload, err := body.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload from block")
}
blkKzgs, err := body.BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments from block")
}
kzgs := make(eth.KZGCommitmentSequenceImpl, len(blkKzgs))
for i := range blkKzgs {
kzgs[i] = bytesutil.ToBytes48(blkKzgs[i])
}
txs, err := payload.Transactions()
if err != nil {
return errors.Wrap(err, "could not get transactions from payload")
}
if err := eth.VerifyKZGCommitmentsAgainstTransactions(txs, kzgs); err != nil {
return err
}
return nil
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,

View File

@@ -178,7 +178,6 @@ func SlashValidator(
if err != nil {
return nil, errors.Wrap(err, "could not get proposer idx")
}
// In phase 0, the proposer is the whistleblower.
whistleBlowerIdx := proposerIdx
whistleblowerReward := validator.EffectiveBalance / params.BeaconConfig().WhistleBlowerRewardQuotient
proposerReward := whistleblowerReward / proposerRewardQuotient

View File

@@ -54,6 +54,7 @@ type ReadOnlyDatabase interface {
// Fee recipients operations.
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
BackfillBlockRoot(ctx context.Context) ([32]byte, error)

View File

@@ -18,7 +18,6 @@ go_library(
"log.go",
"migration.go",
"migration_archived_index.go",
"migration_blinded_beacon_blocks.go",
"migration_block_slot_index.go",
"migration_state_validators.go",
"schema.go",

View File

@@ -10,7 +10,6 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/consensus-types/interfaces"
@@ -272,6 +271,22 @@ func (s *Store) SaveBlock(ctx context.Context, signed interfaces.ReadOnlySignedB
return s.SaveBlocks(ctx, []interfaces.ReadOnlySignedBeaconBlock{signed})
}
// This function determines if we should save beacon blocks in the DB in blinded format by checking
// if a `saveBlindedBeaconBlocks` key exists in the database. Otherwise, we check whether the last
// block stored is blinded, and then write the `saveBlindedBeaconBlocks` key
// to the DB for future checks.
func (s *Store) shouldSaveBlinded(ctx context.Context) (bool, error) {
var saveBlinded bool
if err := s.db.View(func(tx *bolt.Tx) error {
metadataBkt := tx.Bucket(chainMetadataBucket)
saveBlinded = len(metadataBkt.Get(saveBlindedBeaconBlocksKey)) > 0
return nil
}); err != nil {
return false, err
}
return saveBlinded, nil
}
// SaveBlocks via bulk updates to the db.
func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySignedBeaconBlock) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveBlocks")
@@ -287,7 +302,7 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
if err != nil {
return err
}
enc, err := marshalBlock(ctx, blk)
enc, err := s.marshalBlock(ctx, blk)
if err != nil {
return err
}
@@ -296,6 +311,10 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
indicesByBucket := createBlockIndicesFromBlock(ctx, blk.Block())
indicesForBlocks[i] = indicesByBucket
}
saveBlinded, err := s.shouldSaveBlinded(ctx)
if err != nil {
return err
}
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(blocksBucket)
for i, blk := range blks {
@@ -305,7 +324,7 @@ func (s *Store) SaveBlocks(ctx context.Context, blks []interfaces.ReadOnlySigned
if err := updateValueForIndices(ctx, indicesForBlocks[i], blockRoots[i], tx); err != nil {
return errors.Wrap(err, "could not update DB indices")
}
if features.Get().EnableOnlyBlindedBeaconBlocks {
if saveBlinded {
blindedBlock, err := blk.ToBlinded()
if err != nil {
if !errors.Is(err, blocks.ErrUnsupportedVersion) {
@@ -799,6 +818,16 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
if err := rawBlock.UnmarshalSSZ(enc[len(capellaBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Capella block")
}
case hasDenebKey(enc):
rawBlock = &ethpb.SignedBeaconBlockDeneb{}
if err := rawBlock.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, err
}
case hasDenebBlindKey(enc):
rawBlock = &ethpb.SignedBlindedBeaconBlockDeneb{}
if err := rawBlock.UnmarshalSSZ(enc[len(denebBlindKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal blinded Deneb block")
}
default:
// Marshal block bytes to phase 0 beacon block.
rawBlock = &ethpb.SignedBeaconBlock{}
@@ -809,50 +838,75 @@ func unmarshalBlock(_ context.Context, enc []byte) (interfaces.ReadOnlySignedBea
return blocks.NewSignedBeaconBlock(rawBlock)
}
// marshal versioned beacon block from struct type down to bytes.
func marshalBlock(_ context.Context, blk interfaces.ReadOnlySignedBeaconBlock) ([]byte, error) {
func (s *Store) marshalBlock(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
shouldBlind, err := s.shouldSaveBlinded(ctx)
if err != nil {
return nil, err
}
if shouldBlind {
return marshalBlockBlinded(ctx, blk)
}
return marshalBlockFull(ctx, blk)
}
// Encodes a full beacon block to the DB with its associated key.
func marshalBlockFull(
_ context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
var encodedBlock []byte
var err error
blockToSave := blk
if features.Get().EnableOnlyBlindedBeaconBlocks {
blindedBlock, err := blk.ToBlinded()
switch {
case errors.Is(err, blocks.ErrUnsupportedVersion):
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal non-blinded block")
}
case err != nil:
return nil, errors.Wrap(err, "could not convert block to blinded format")
default:
encodedBlock, err = blindedBlock.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded block")
}
blockToSave = blindedBlock
}
} else {
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
encodedBlock, err = blk.MarshalSSZ()
if err != nil {
return nil, err
}
switch blockToSave.Version() {
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebKey, encodedBlock...)), nil
case version.Capella:
if blockToSave.IsBlinded() {
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
}
return snappy.Encode(nil, append(capellaKey, encodedBlock...)), nil
case version.Bellatrix:
if blockToSave.IsBlinded() {
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
}
return snappy.Encode(nil, append(bellatrixKey, encodedBlock...)), nil
case version.Altair:
return snappy.Encode(nil, append(altairKey, encodedBlock...)), nil
case version.Phase0:
return snappy.Encode(nil, encodedBlock), nil
default:
return nil, errors.New("Unknown block version")
return nil, errors.New("unknown block version")
}
}
// Encodes a blinded beacon block with its associated key.
// If the block does not support blinding, we then encode it as a full
// block with its associated key by calling marshalBlockFull.
func marshalBlockBlinded(
ctx context.Context,
blk interfaces.ReadOnlySignedBeaconBlock,
) ([]byte, error) {
blindedBlock, err := blk.ToBlinded()
if err != nil {
switch {
case errors.Is(err, blocks.ErrUnsupportedVersion):
return marshalBlockFull(ctx, blk)
default:
return nil, errors.Wrap(err, "could not convert block to blinded format")
}
}
encodedBlock, err := blindedBlock.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal blinded block")
}
switch blk.Version() {
case version.Deneb:
return snappy.Encode(nil, append(denebBlindKey, encodedBlock...)), nil
case version.Capella:
return snappy.Encode(nil, append(capellaBlindKey, encodedBlock...)), nil
case version.Bellatrix:
return snappy.Encode(nil, append(bellatrixBlindKey, encodedBlock...)), nil
default:
return nil, fmt.Errorf("unsupported block version: %v", blk.Version())
}
}

View File

@@ -1,7 +1,6 @@
package kv
import (
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
)
@@ -10,7 +9,4 @@ func init() {
if err := params.SetActive(params.MainnetTestConfig()); err != nil {
panic(err)
}
features.Init(&features.Flags{
EnableOnlyBlindedBeaconBlocks: true,
})
}

View File

@@ -37,3 +37,17 @@ func hasCapellaBlindKey(enc []byte) bool {
}
return bytes.Equal(enc[:len(capellaBlindKey)], capellaBlindKey)
}
func hasDenebKey(enc []byte) bool {
if len(denebKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(denebKey)], denebKey)
}
func hasDenebBlindKey(enc []byte) bool {
if len(denebBlindKey) >= len(enc) {
return false
}
return bytes.Equal(enc[:len(denebBlindKey)], denebBlindKey)
}
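
For context, a minimal sketch (not part of the diff) of the round trip these helpers support, using the denebKey defined in this change set and snappy as used elsewhere in the kv package; blk is an assumed signed Deneb block:

enc, _ := blk.MarshalSSZ()
stored := snappy.Encode(nil, append(denebKey, enc...)) // what marshalBlockFull writes for Deneb
decoded, _ := snappy.Decode(nil, stored)
if hasDenebKey(decoded) {
raw := decoded[len(denebKey):] // unmarshalBlock strips the fork key before UnmarshalSSZ
_ = raw
}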

View File

@@ -17,6 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v3/beacon-chain/db/iface"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v3/io/file"
bolt "go.etcd.io/bbolt"
)
@@ -128,6 +129,8 @@ var Buckets = [][]byte{
feeRecipientBucket,
registrationBucket,
blobsBucket,
}
// NewKVStore initializes a new boltDB key-value store at the directory
@@ -194,7 +197,8 @@ func NewKVStore(ctx context.Context, dirPath string) (*Store, error) {
if err = prometheus.Register(createBoltCollector(kv.db)); err != nil {
return nil, err
}
if err = kv.checkNeedsResync(); err != nil {
// Set up the type of block storage used, depending on whether or not this is a fresh database.
if err := kv.setupBlockStorageType(ctx); err != nil {
return nil, err
}
return kv, nil
@@ -229,21 +233,60 @@ func (s *Store) DatabasePath() string {
return s.databasePath
}
func (s *Store) checkNeedsResync() error {
return s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(migrationsBucket)
hasDisabledFeature := !features.Get().EnableOnlyBlindedBeaconBlocks
if hasDisabledFeature && bkt.Get(migrationBlindedBeaconBlocksKey) != nil {
return fmt.Errorf(
"you have disabled the flag %s, and your node must resync to ensure your "+
"database is compatible. If you do not want to resync, please re-enable the %s flag",
features.EnableOnlyBlindedBeaconBlocks.Name,
features.EnableOnlyBlindedBeaconBlocks.Name,
)
func (s *Store) setupBlockStorageType(ctx context.Context) error {
// We check whether we want to save blinded beacon blocks by looking for a key in the DB;
// otherwise, we check the last stored block and set that key in the DB if it is blinded.
headBlock, err := s.HeadBlock(ctx)
if err != nil {
return errors.Wrap(err, "could not get head block when setting up block storage type")
}
err = blocks.BeaconBlockIsNil(headBlock)
isNilBlk := err != nil
saveFull := features.Get().SaveFullExecutionPayloads
var saveBlinded bool
if err := s.db.Update(func(tx *bolt.Tx) error {
// If we have a key stating we wish to save blinded beacon blocks, then we set saveBlinded to true.
metadataBkt := tx.Bucket(chainMetadataBucket)
keyExists := len(metadataBkt.Get(saveBlindedBeaconBlocksKey)) > 0
if keyExists {
saveBlinded = true
return nil
}
// If the head block exists and is blinded, we update the key in the DB to
// say we wish to save all blocks as blinded.
if !isNilBlk && headBlock.IsBlinded() {
if err := metadataBkt.Put(saveBlindedBeaconBlocksKey, []byte{1}); err != nil {
return err
}
saveBlinded = true
}
if isNilBlk && !saveFull {
if err := metadataBkt.Put(saveBlindedBeaconBlocksKey, []byte{1}); err != nil {
return err
}
saveBlinded = true
}
return nil
})
}); err != nil {
return err
}
// If the user wants to save full execution payloads but their database is saving blinded blocks only,
// we return an error, as the node should not start.
if saveFull && saveBlinded {
return fmt.Errorf(
"cannot use the %s flag with this existing database, as it has already been initialized to only store "+
"execution payload headers (aka blinded beacon blocks). If you want to use this flag, you must re-sync your node with a fresh "+
"database. We recommend using checkpoint sync https://docs.prylabs.network/docs/prysm-usage/checkpoint-sync/",
features.SaveFullExecutionPayloads.Name,
)
}
if saveFull {
log.Warn("Saving full beacon blocks to the database. For greater disk space savings, we recommend resyncing from an empty database with " +
"checkpoint sync to save only blinded beacon blocks by default")
}
return nil
}
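
To summarize the storage-type decision implemented above (derived from this function and the tests that follow):
// fresh database, SaveFullExecutionPayloads unset -> store blinded blocks (key written)
// fresh database, SaveFullExecutionPayloads set   -> store full blocks
// existing database, key already present          -> keep storing blinded blocks
// existing database, head block is blinded        -> store blinded blocks (key written)
// existing database, full blocks and no key       -> keep storing full blocks
// existing blinded database, flag set             -> error at startup; a resync is required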
func createBuckets(tx *bolt.Tx, buckets ...[]byte) error {

View File

@@ -2,10 +2,14 @@ package kv
import (
"context"
"fmt"
"testing"
"github.com/prysmaticlabs/prysm/v3/config/features"
"github.com/prysmaticlabs/prysm/v3/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/testing/util"
bolt "go.etcd.io/bbolt"
)
@@ -19,16 +23,169 @@ func setupDB(t testing.TB) *Store {
return db
}
func Test_checkNeedsResync(t *testing.T) {
store := setupDB(t)
resetFn := features.InitWithReset(&features.Flags{
EnableOnlyBlindedBeaconBlocks: false,
func Test_setupBlockStorageType(t *testing.T) {
ctx := context.Background()
t.Run("fresh database with feature enabled to store full blocks should store full blocks", func(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
SaveFullExecutionPayloads: true,
})
defer resetFn()
store := setupDB(t)
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.BlockNumber = 1
wrappedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err := wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
require.NoError(t, store.SaveStateSummary(ctx, &ethpb.StateSummary{Root: root[:]}))
require.NoError(t, store.SaveHeadBlockRoot(ctx, root))
retrievedBlk, err := store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, false, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
})
t.Run("fresh database with default settings should store blinded", func(t *testing.T) {
resetFn := features.InitWithReset(&features.Flags{
SaveFullExecutionPayloads: false,
})
defer resetFn()
store := setupDB(t)
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.BlockNumber = 1
wrappedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err := wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
require.NoError(t, store.SaveStateSummary(ctx, &ethpb.StateSummary{Root: root[:]}))
require.NoError(t, store.SaveHeadBlockRoot(ctx, root))
retrievedBlk, err := store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
wantedBlk, err := wrappedBlock.ToBlinded()
require.NoError(t, err)
require.DeepEqual(t, wantedBlk, retrievedBlk)
})
t.Run("existing database with blinded blocks but no key in metadata bucket should continue storing blinded blocks", func(t *testing.T) {
store := setupDB(t)
require.NoError(t, store.db.Update(func(tx *bolt.Tx) error {
return tx.Bucket(chainMetadataBucket).Put(saveBlindedBeaconBlocksKey, []byte{1})
}))
blk := util.NewBlindedBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayloadHeader.BlockNumber = 1
wrappedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err := wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
require.NoError(t, store.SaveStateSummary(ctx, &ethpb.StateSummary{Root: root[:]}))
require.NoError(t, store.SaveHeadBlockRoot(ctx, root))
retrievedBlk, err := store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
// We then delete the key from the bucket.
require.NoError(t, store.db.Update(func(tx *bolt.Tx) error {
return tx.Bucket(chainMetadataBucket).Delete(saveBlindedBeaconBlocksKey)
}))
// Not a fresh database, has blinded blocks already and should continue being that way.
err = store.setupBlockStorageType(ctx)
require.NoError(t, err)
var shouldSaveBlinded bool
require.NoError(t, store.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(chainMetadataBucket)
shouldSaveBlinded = len(bkt.Get(saveBlindedBeaconBlocksKey)) > 0
return nil
}))
// Should have set the chain metadata bucket to save blinded
require.Equal(t, true, shouldSaveBlinded)
blkFull := util.NewBeaconBlockBellatrix()
blkFull.Block.Body.ExecutionPayload.BlockNumber = 2
wrappedBlock, err = blocks.NewSignedBeaconBlock(blkFull)
require.NoError(t, err)
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
wrappedBlinded, err := wrappedBlock.ToBlinded()
require.NoError(t, err)
require.DeepEqual(t, wrappedBlinded, retrievedBlk)
})
t.Run("existing database with full blocks type should continue storing full blocks", func(t *testing.T) {
store := setupDB(t)
require.NoError(t, store.db.Update(func(tx *bolt.Tx) error {
return tx.Bucket(chainMetadataBucket).Delete(saveBlindedBeaconBlocksKey)
}))
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.BlockNumber = 1
wrappedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err := wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
require.NoError(t, store.SaveStateSummary(ctx, &ethpb.StateSummary{Root: root[:]}))
require.NoError(t, store.SaveHeadBlockRoot(ctx, root))
retrievedBlk, err := store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, false, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
// Not a fresh database, has full blocks already and should continue being that way.
err = store.setupBlockStorageType(ctx)
require.NoError(t, err)
blk = util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.BlockNumber = 2
wrappedBlock, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err = wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
retrievedBlk, err = store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, false, retrievedBlk.IsBlinded())
require.DeepEqual(t, wrappedBlock, retrievedBlk)
})
t.Run("existing database with blinded blocks type should error if user enables full blocks feature flag", func(t *testing.T) {
store := setupDB(t)
blk := util.NewBeaconBlockBellatrix()
blk.Block.Body.ExecutionPayload.BlockNumber = 1
wrappedBlock, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
root, err := wrappedBlock.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, store.SaveBlock(ctx, wrappedBlock))
require.NoError(t, store.SaveStateSummary(ctx, &ethpb.StateSummary{Root: root[:]}))
require.NoError(t, store.SaveHeadBlockRoot(ctx, root))
retrievedBlk, err := store.Block(ctx, root)
require.NoError(t, err)
require.Equal(t, true, retrievedBlk.IsBlinded())
wantedBlk, err := wrappedBlock.ToBlinded()
require.NoError(t, err)
require.DeepEqual(t, wantedBlk, retrievedBlk)
// Trying to enable full blocks with a database that is already storing blinded blocks should error.
resetFn := features.InitWithReset(&features.Flags{
SaveFullExecutionPayloads: true,
})
defer resetFn()
err = store.setupBlockStorageType(ctx)
errMsg := "cannot use the %s flag with this existing database, as it has already been initialized"
require.ErrorContains(t, fmt.Sprintf(errMsg, features.SaveFullExecutionPayloads.Name), err)
})
defer resetFn()
require.NoError(t, store.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(migrationsBucket)
return bkt.Put(migrationBlindedBeaconBlocksKey, migrationCompleted)
}))
err := store.checkNeedsResync()
require.ErrorContains(t, "your node must resync", err)
}

View File

@@ -14,7 +14,6 @@ var migrations = []migration{
migrateArchivedIndex,
migrateBlockSlotIndex,
migrateStateValidators,
migrateBlindedBeaconBlocksEnabled,
}
// RunMigrations defined in the migrations array.

View File

@@ -1,27 +0,0 @@
package kv
import (
"bytes"
"context"
"github.com/prysmaticlabs/prysm/v3/config/features"
bolt "go.etcd.io/bbolt"
)
var migrationBlindedBeaconBlocksKey = []byte("blinded-beacon-blocks-enabled")
func migrateBlindedBeaconBlocksEnabled(ctx context.Context, db *bolt.DB) error {
if !features.Get().EnableOnlyBlindedBeaconBlocks {
return nil // Only write to the migrations bucket if the feature is enabled.
}
if updateErr := db.Update(func(tx *bolt.Tx) error {
mb := tx.Bucket(migrationsBucket)
if b := mb.Get(migrationBlindedBeaconBlocksKey); bytes.Equal(b, migrationCompleted) {
return nil // Migration already completed.
}
return mb.Put(migrationBlindedBeaconBlocksKey, migrationCompleted)
}); updateErr != nil {
return updateErr
}
return nil
}

View File

@@ -46,14 +46,18 @@ var (
finalizedCheckpointKey = []byte("finalized-checkpoint")
powchainDataKey = []byte("powchain-data")
lastValidatedCheckpointKey = []byte("last-validated-checkpoint")
blobsBucket = []byte("blobs")
// Below keys are used to identify objects are to be fork compatible.
// Objects that are only compatible with specific forks should be prefixed with such keys.
altairKey = []byte("altair")
bellatrixKey = []byte("merge")
bellatrixBlindKey = []byte("blind-bellatrix")
capellaKey = []byte("capella")
capellaBlindKey = []byte("blind-capella")
altairKey = []byte("altair")
bellatrixKey = []byte("merge")
bellatrixBlindKey = []byte("blind-bellatrix")
capellaKey = []byte("capella")
capellaBlindKey = []byte("blind-capella")
saveBlindedBeaconBlocksKey = []byte("save-blinded-beacon-blocks")
denebKey = []byte("deneb")
denebBlindKey = []byte("blind-deneb")
// block root included in the beacon state used by weak subjectivity initial sync
originCheckpointBlockRootKey = []byte("origin-checkpoint-block-root")

View File

@@ -473,6 +473,19 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
}
switch {
case hasDenebKey(enc):
protoState := &ethpb.BeaconStateDeneb{}
if err := protoState.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, errors.Wrap(err, "failed to unmarshal encoding for Deneb")
}
ok, err := s.isStateValidatorMigrationOver()
if err != nil {
return nil, err
}
if ok {
protoState.Validators = validatorEntries
}
return statenative.InitializeFromProtoUnsafeDeneb(protoState)
case hasCapellaKey(enc):
// Marshal state bytes to capella beacon state.
protoState := &ethpb.BeaconStateCapella{}
@@ -580,6 +593,19 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(capellaKey, rawObj...)), nil
case *ethpb.BeaconStateDeneb:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateDeneb)
if !ok {
return nil, errors.New("non valid inner state")
}
if rState == nil {
return nil, errors.New("nil state")
}
rawObj, err := rState.MarshalSSZ()
if err != nil {
return nil, err
}
return snappy.Encode(nil, append(denebKey, rawObj...)), nil
default:
return nil, errors.New("invalid inner state")
}

View File

@@ -129,9 +129,9 @@ go_test(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//:go_default_library",
"@com_github_ethereum_go_ethereum//accounts/abi/bind/backends:go_default_library",
"@com_github_ethereum_go_ethereum//beacon/engine:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//core/beacon:go_default_library",
"@com_github_ethereum_go_ethereum//core/types:go_default_library",
"@com_github_ethereum_go_ethereum//rpc:go_default_library",
"@com_github_holiman_uint256//:go_default_library",

View File

@@ -212,38 +212,6 @@ func TestService_logTtdStatus(t *testing.T) {
require.Equal(t, false, reached)
}
func TestService_logTtdStatus_NotSyncedClient(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := (*pb.ExecutionBlock)(nil) // Nil response when a client is not synced
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.NoError(t, err)
require.Equal(t, false, reached)
}
func emptyPayload() *pb.ExecutionPayload {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),

View File

@@ -34,6 +34,7 @@ const (
NewPayloadMethod = "engine_newPayloadV1"
// NewPayloadMethodV2 v2 request string for JSON-RPC.
NewPayloadMethodV2 = "engine_newPayloadV2"
// NewPayloadMethodV3 v3 request string for JSON-RPC.
NewPayloadMethodV3 = "engine_newPayloadV3"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -42,6 +43,9 @@ const (
GetPayloadMethod = "engine_getPayloadV1"
// GetPayloadMethodV2 v2 request string for JSON-RPC.
GetPayloadMethodV2 = "engine_getPayloadV2"
// GetPayloadMethodV3 v3 request string for JSON-RPC.
GetPayloadMethodV3 = "engine_getPayloadV3"
// GetBlobsBundleMethod v1 request string for JSON-RPC.
GetBlobsBundleMethod = "engine_getBlobsBundleV1"
// ExchangeTransitionConfigurationMethod v1 request string for JSON-RPC.
ExchangeTransitionConfigurationMethod = "engine_exchangeTransitionConfigurationV1"
// ExecutionBlockByHashMethod request string for JSON-RPC.
@@ -84,6 +88,7 @@ type EngineCaller interface {
) error
ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error)
GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error)
}
var EmptyBlockHash = errors.New("Block hash is empty 0x0000...")
@@ -121,6 +126,15 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
if err != nil {
return nil, handleRPCError(err)
}
case *pb.ExecutionPayloadDeneb:
payloadPb, ok := payload.Proto().(*pb.ExecutionPayloadDeneb)
if !ok {
return nil, errors.New("execution data must be a Deneb execution payload")
}
err := s.rpcClient.CallContext(ctx, result, NewPayloadMethodV3, payloadPb)
if err != nil {
return nil, handleRPCError(err)
}
default:
return nil, errors.New("unknown execution data type")
}
@@ -168,7 +182,7 @@ func (s *Service) ForkchoiceUpdated(
if err != nil {
return nil, nil, handleRPCError(err)
}
case version.Capella:
case version.Capella, version.Deneb:
a, err := attrs.PbV2()
if err != nil {
return nil, nil, err
@@ -210,6 +224,15 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
if slots.ToEpoch(slot) >= params.BeaconConfig().DenebForkEpoch {
result := &pb.ExecutionPayloadDenebWithValue{}
err := s.rpcClient.CallContext(ctx, result, GetPayloadMethodV3, pb.PayloadIDBytes(payloadId))
if err != nil {
return nil, handleRPCError(err)
}
return blocks.WrappedExecutionPayloadDeneb(result.Payload, big.NewInt(0).SetBytes(result.Value))
}
if slots.ToEpoch(slot) >= params.BeaconConfig().CapellaForkEpoch {
result := &pb.ExecutionPayloadCapellaWithValue{}
err := s.rpcClient.CallContext(ctx, result, GetPayloadMethodV2, pb.PayloadIDBytes(payloadId))
@@ -417,6 +440,19 @@ func (s *Service) ExecutionBlocksByHashes(ctx context.Context, hashes []common.H
return execBlks, nil
}
// GetBlobsBundle calls the engine_getBlobsBundleV1 method via JSON-RPC.
func (s *Service) GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsBundle")
defer span.End()
d := time.Now().Add(defaultEngineTimeout)
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
result := &pb.BlobsBundle{}
err := s.rpcClient.CallContext(ctx, result, GetBlobsBundleMethod, pb.PayloadIDBytes(payloadId))
return result, handleRPCError(err)
}
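
A caller-side sketch (not part of the diff): pairing the payload with its blobs bundle via the same payload ID. The helper name is illustrative, and the return types are inferred from the wrappers used in GetPayload above:

func getPayloadAndBlobs(ctx context.Context, s *Service, payloadId [8]byte, slot primitives.Slot) (interfaces.ExecutionData, *pb.BlobsBundle, error) {
payload, err := s.GetPayload(ctx, payloadId, slot)
if err != nil {
return nil, nil, err
}
bundle, err := s.GetBlobsBundle(ctx, payloadId)
if err != nil {
return nil, nil, err
}
return payload, bundle, nil // the caller caches the bundle and builds the block body from the payload
}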
// HeaderByHash returns the relevant header details for the provided block hash.
func (s *Service) HeaderByHash(ctx context.Context, hash common.Hash) (*types.HeaderInfo, error) {
var hdr *types.HeaderInfo

View File

@@ -11,9 +11,9 @@ import (
"reflect"
"testing"
"github.com/ethereum/go-ethereum/beacon/engine"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/beacon"
"github.com/ethereum/go-ethereum/core/types"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/execution"
@@ -23,10 +23,10 @@ import (
func FuzzForkChoiceResponse(f *testing.F) {
valHash := common.Hash([32]byte{0xFF, 0x01})
payloadID := beacon.PayloadID([8]byte{0x01, 0xFF, 0xAA, 0x00, 0xEE, 0xFE, 0x00, 0x00})
payloadID := engine.PayloadID([8]byte{0x01, 0xFF, 0xAA, 0x00, 0xEE, 0xFE, 0x00, 0x00})
valErr := "asjajshjahsaj"
seed := &beacon.ForkChoiceResponse{
PayloadStatus: beacon.PayloadStatusV1{
seed := &engine.ForkChoiceResponse{
PayloadStatus: engine.PayloadStatusV1{
Status: "INVALID_TERMINAL_BLOCK",
LatestValidHash: &valHash,
ValidationError: &valErr,
@@ -37,7 +37,7 @@ func FuzzForkChoiceResponse(f *testing.F) {
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &beacon.ForkChoiceResponse{}
gethResp := &engine.ForkChoiceResponse{}
prysmResp := &execution.ForkchoiceUpdatedResponse{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
@@ -49,14 +49,14 @@ func FuzzForkChoiceResponse(f *testing.F) {
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &beacon.ForkChoiceResponse{}
newGethResp := &engine.ForkChoiceResponse{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
if newGethResp.PayloadStatus.Status == "UNKNOWN" {
return
}
newGethResp2 := &beacon.ForkChoiceResponse{}
newGethResp2 := &engine.ForkChoiceResponse{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)
@@ -75,7 +75,7 @@ func FuzzForkChoiceResponse(f *testing.F) {
func FuzzExchangeTransitionConfiguration(f *testing.F) {
valHash := common.Hash([32]byte{0xFF, 0x01})
ttd := hexutil.Big(*big.NewInt(math.MaxInt))
seed := &beacon.TransitionConfigurationV1{
seed := &engine.TransitionConfigurationV1{
TerminalTotalDifficulty: &ttd,
TerminalBlockHash: valHash,
TerminalBlockNumber: hexutil.Uint64(math.MaxUint64),
@@ -85,7 +85,7 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &beacon.TransitionConfigurationV1{}
gethResp := &engine.TransitionConfigurationV1{}
prysmResp := &pb.TransitionConfiguration{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
@@ -103,11 +103,11 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
if gethErr != nil {
t.Errorf("%s %s", gethResp.TerminalTotalDifficulty.String(), prysmResp.TerminalTotalDifficulty)
}
newGethResp := &beacon.TransitionConfigurationV1{}
newGethResp := &engine.TransitionConfigurationV1{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &beacon.TransitionConfigurationV1{}
newGethResp2 := &engine.TransitionConfigurationV1{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)
})
@@ -115,7 +115,7 @@ func FuzzExchangeTransitionConfiguration(f *testing.F) {
func FuzzExecutionPayload(f *testing.F) {
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
execData := &beacon.ExecutableData{
execData := &engine.ExecutableData{
ParentHash: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
FeeRecipient: common.Address([20]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF}),
StateRoot: common.Hash([32]byte{0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01, 0xFF, 0x01}),
@@ -135,7 +135,7 @@ func FuzzExecutionPayload(f *testing.F) {
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &beacon.ExecutableData{}
gethResp := &engine.ExecutableData{}
prysmResp := &pb.ExecutionPayload{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
@@ -147,10 +147,10 @@ func FuzzExecutionPayload(f *testing.F) {
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, "geth and prysm unmarshaller return inconsistent errors")
newGethResp := &beacon.ExecutableData{}
newGethResp := &engine.ExecutableData{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &beacon.ExecutableData{}
newGethResp2 := &engine.ExecutableData{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)

View File

@@ -479,7 +479,7 @@ func (s *Service) requestBatchedHeadersAndLogs(ctx context.Context) error {
func (s *Service) retrieveBlockHashAndTime(ctx context.Context, blkNum *big.Int) ([32]byte, uint64, error) {
bHash, err := s.BlockHashByHeight(ctx, blkNum)
if err != nil {
return [32]byte{}, 0, errors.Wrap(err, "could not get eth1 block hash")
return [32]byte{}, 0, nil
}
if bHash == [32]byte{} {
return [32]byte{}, 0, errors.Wrap(err, "got empty block hash")

View File

@@ -3,6 +3,7 @@ package execution
import (
"context"
"fmt"
"net/http"
"net/url"
"strings"
"time"
@@ -14,7 +15,6 @@ import (
contracts "github.com/prysmaticlabs/prysm/v3/contracts/deposit"
"github.com/prysmaticlabs/prysm/v3/io/logs"
"github.com/prysmaticlabs/prysm/v3/network"
"github.com/prysmaticlabs/prysm/v3/network/authorization"
)
func (s *Service) setupExecutionClientConnections(ctx context.Context, currEndpoint network.Endpoint) error {
@@ -34,7 +34,7 @@ func (s *Service) setupExecutionClientConnections(ctx context.Context, currEndpo
}
s.depositContractCaller = depositContractCaller
// Ensure we have the correct chain and deposit IDs.
//Ensure we have the correct chain and deposit IDs.
if err := ensureCorrectExecutionChain(ctx, fetcher); err != nil {
client.Close()
errStr := err.Error()
@@ -113,9 +113,21 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
if err != nil {
return nil, err
}
headers := http.Header{}
for _, h := range s.cfg.headers {
if h == "" {
continue
}
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
log.Warnf("Incorrect HTTP header flag format. Skipping %v", keyValue[0])
continue
}
headers.Set(keyValue[0], strings.Join(keyValue[1:], "="))
}
switch u.Scheme {
case "http", "https":
client, err = gethRPC.DialOptions(ctx, endpoint.Url, gethRPC.WithHTTPClient(endpoint.HttpClient()))
client, err = gethRPC.DialOptions(ctx, endpoint.Url, gethRPC.WithHTTPClient(endpoint.HttpClient()), gethRPC.WithHeaders(headers))
if err != nil {
return nil, err
}
@@ -127,13 +139,6 @@ func (s *Service) newRPCClientWithAuth(ctx context.Context, endpoint network.End
default:
return nil, fmt.Errorf("no known transport for URL scheme %q", u.Scheme)
}
if endpoint.Auth.Method != authorization.None {
header, err := endpoint.Auth.ToHeaderValue()
if err != nil {
return nil, err
}
client.SetHeader("Authorization", header)
}
for _, h := range s.cfg.headers {
if h != "" {
keyValue := strings.Split(h, "=")
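
To illustrate the header parsing added above (the sample values are made up): only the first '=' separates key from value, so values containing '=' survive intact and malformed entries are skipped with a warning:

headers := http.Header{}
for _, h := range []string{"Authorization=Basic Zm9v", "X-Custom=a=b", "malformed"} {
kv := strings.Split(h, "=")
if len(kv) < 2 {
continue // the real code logs a warning and skips the entry
}
headers.Set(kv[0], strings.Join(kv[1:], "="))
}
// headers now holds Authorization: "Basic Zm9v" and X-Custom: "a=b".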

View File

@@ -38,6 +38,7 @@ type EngineClient struct {
TerminalBlockHash []byte
TerminalBlockHashExists bool
OverrideValidHash [32]byte
BlobsBundle *pb.BlobsBundle
BlockValue *big.Int
}
@@ -172,3 +173,8 @@ func (e *EngineClient) GetTerminalBlockHash(ctx context.Context, transitionTime
blk = parentBlk
}
}
// GetBlobsBundle --
func (e *EngineClient) GetBlobsBundle(ctx context.Context, payloadId [8]byte) (*pb.BlobsBundle, error) {
return e.BlobsBundle, nil
}
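
A test-side sketch (assumed usage, not part of the diff) of the new mock field; the mock simply echoes whatever bundle the test configures:

e := &EngineClient{BlobsBundle: &pb.BlobsBundle{}}
bundle, err := e.GetBlobsBundle(context.Background(), [8]byte{})
_, _ = bundle, err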

View File

@@ -46,8 +46,6 @@ func New() *ForkChoice {
// NodeCount returns the current number of nodes in the Store.
func (f *ForkChoice) NodeCount() int {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
return len(f.store.nodeByRoot)
}
@@ -58,16 +56,9 @@ func (f *ForkChoice) Head(
) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Head")
defer span.End()
f.votesLock.Lock()
defer f.votesLock.Unlock()
calledHeadCount.Inc()
// Using the write lock here because subsequent calls to `updateBalances`, `applyProposerBoostScore`,
// `applyWeightChanges`, `updateBestDescendant`, and `head` require write operations on nodes.
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
if err := f.updateBalances(); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update balances")
}
@@ -94,8 +85,6 @@ func (f *ForkChoice) Head(
func (f *ForkChoice) ProcessAttestation(ctx context.Context, validatorIndices []uint64, blockRoot [32]byte, targetEpoch primitives.Epoch) {
_, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.ProcessAttestation")
defer span.End()
f.votesLock.Lock()
defer f.votesLock.Unlock()
for _, index := range validatorIndices {
// Validator indices will grow the vote cache.
@@ -161,7 +150,6 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
// updateCheckpoints update the checkpoints when inserting a new node.
func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkpoint) error {
f.store.checkpointsLock.Lock()
if jc.Epoch > f.store.justifiedCheckpoint.Epoch {
if jc.Epoch > f.store.bestJustifiedCheckpoint.Epoch {
f.store.bestJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch,
@@ -175,7 +163,6 @@ func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkp
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch,
Root: root}
if err := f.updateJustifiedBalances(ctx, root); err != nil {
f.store.checkpointsLock.Unlock()
return errors.Wrap(err, "could not update justified balances")
}
} else {
@@ -186,25 +173,18 @@ func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkp
}
jSlot, err := slots.EpochStart(currentJcp.Epoch)
if err != nil {
f.store.checkpointsLock.Unlock()
return err
}
jcRoot := bytesutil.ToBytes32(jc.Root)
// Releasing here the checkpoints lock because
// AncestorRoot acquires a lock on nodes and that can
// cause a double lock.
f.store.checkpointsLock.Unlock()
root, err := f.AncestorRoot(ctx, jcRoot, jSlot)
if err != nil {
return err
}
f.store.checkpointsLock.Lock()
if root == currentRoot {
f.store.prevJustifiedCheckpoint = f.store.justifiedCheckpoint
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch,
Root: jcRoot}
if err := f.updateJustifiedBalances(ctx, jcRoot); err != nil {
f.store.checkpointsLock.Unlock()
return errors.Wrap(err, "could not update justified balances")
}
}
@@ -214,14 +194,12 @@ func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkp
jcRoot := bytesutil.ToBytes32(jc.Root)
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch, Root: jcRoot}
if err := f.updateJustifiedBalances(ctx, jcRoot); err != nil {
f.store.checkpointsLock.Unlock()
return errors.Wrap(err, "could not update justified balances")
}
}
}
// Update finalization
if fc.Epoch <= f.store.finalizedCheckpoint.Epoch {
f.store.checkpointsLock.Unlock()
return nil
}
f.store.finalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: fc.Epoch,
@@ -231,29 +209,21 @@ func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkp
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch,
Root: root}
if err := f.updateJustifiedBalances(ctx, root); err != nil {
f.store.checkpointsLock.Unlock()
return errors.Wrap(err, "could not update justified balances")
}
}
f.store.checkpointsLock.Unlock()
return f.store.prune(ctx)
}
// HasNode returns true if the node exists in fork choice store,
// false otherwise.
func (f *ForkChoice) HasNode(root [32]byte) bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
_, ok := f.store.nodeByRoot[root]
return ok
}
// IsCanonical returns true if the given root is part of the canonical chain.
func (f *ForkChoice) IsCanonical(root [32]byte) bool {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return false
@@ -273,9 +243,6 @@ func (f *ForkChoice) IsCanonical(root [32]byte) bool {
// IsOptimistic returns true if the given root has been optimistically synced.
func (f *ForkChoice) IsOptimistic(root [32]byte) (bool, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
if f.store.allTipsAreInvalid {
return true, nil
}
@@ -293,9 +260,6 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot primi
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.AncestorRoot")
defer span.End()
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return [32]byte{}, errors.Wrap(ErrNilNode, "could not determine ancestor root")
@@ -317,12 +281,8 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot primi
}
// updateBalances updates the balances that directly voted for each block taking into account the
// validators' latest votes. This function requires a lock in Store.nodesLock
// and votesLock
// validators' latest votes.
func (f *ForkChoice) updateBalances() error {
// lock checkpoints for the justified balances
f.store.checkpointsLock.RLock()
defer f.store.checkpointsLock.RUnlock()
newBalances := f.justifiedBalances
for index, vote := range f.votes {
@@ -367,7 +327,6 @@ func (f *ForkChoice) updateBalances() error {
return errors.Wrap(ErrNilNode, "could not update balances")
}
if currentNode.balance < oldBalance {
f.store.proposerBoostLock.RLock()
log.WithFields(logrus.Fields{
"nodeRoot": fmt.Sprintf("%#x", bytesutil.Trunc(vote.currentRoot[:])),
"oldBalance": oldBalance,
@@ -377,7 +336,6 @@ func (f *ForkChoice) updateBalances() error {
"previousProposerBoostRoot": fmt.Sprintf("%#x", bytesutil.Trunc(f.store.previousProposerBoostRoot[:])),
"previousProposerBoostScore": f.store.previousProposerBoostScore,
}).Warning("node with invalid balance, setting it to zero")
f.store.proposerBoostLock.RUnlock()
currentNode.balance = 0
} else {
currentNode.balance -= oldBalance
@@ -405,8 +363,6 @@ func (f *ForkChoice) ProposerBoost() [fieldparams.RootLength]byte {
// SetOptimisticToValid sets the node with the given root as a fully validated node
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [fieldparams.RootLength]byte) error {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
return errors.Wrap(ErrNilNode, "could not set node to valid")
@@ -416,29 +372,21 @@ func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [fieldparams
// BestJustifiedCheckpoint of fork choice store.
func (f *ForkChoice) BestJustifiedCheckpoint() *forkchoicetypes.Checkpoint {
f.store.checkpointsLock.RLock()
defer f.store.checkpointsLock.RUnlock()
return f.store.bestJustifiedCheckpoint
}
// PreviousJustifiedCheckpoint of fork choice store.
func (f *ForkChoice) PreviousJustifiedCheckpoint() *forkchoicetypes.Checkpoint {
f.store.checkpointsLock.RLock()
defer f.store.checkpointsLock.RUnlock()
return f.store.prevJustifiedCheckpoint
}
// JustifiedCheckpoint of fork choice store.
func (f *ForkChoice) JustifiedCheckpoint() *forkchoicetypes.Checkpoint {
f.store.checkpointsLock.RLock()
defer f.store.checkpointsLock.RUnlock()
return f.store.justifiedCheckpoint
}
// FinalizedCheckpoint of fork choice store.
func (f *ForkChoice) FinalizedCheckpoint() *forkchoicetypes.Checkpoint {
f.store.checkpointsLock.RLock()
defer f.store.checkpointsLock.RUnlock()
return f.store.finalizedCheckpoint
}
@@ -451,11 +399,6 @@ func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, parentRoo
// store-tracked list. Votes from these validators are not accounted for
// in forkchoice.
func (f *ForkChoice) InsertSlashedIndex(_ context.Context, index primitives.ValidatorIndex) {
f.votesLock.RLock()
defer f.votesLock.RUnlock()
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
// return early if the index was already included:
if f.store.slashedIndices[index] {
return
@@ -489,8 +432,6 @@ func (f *ForkChoice) UpdateJustifiedCheckpoint(ctx context.Context, jc *forkchoi
if jc == nil {
return errInvalidNilCheckpoint
}
f.store.checkpointsLock.Lock()
defer f.store.checkpointsLock.Unlock()
f.store.prevJustifiedCheckpoint = f.store.justifiedCheckpoint
f.store.justifiedCheckpoint = jc
f.store.bestJustifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: jc.Epoch, Root: jc.Root}
@@ -505,8 +446,6 @@ func (f *ForkChoice) UpdateFinalizedCheckpoint(fc *forkchoicetypes.Checkpoint) e
if fc == nil {
return errInvalidNilCheckpoint
}
f.store.checkpointsLock.Lock()
defer f.store.checkpointsLock.Unlock()
f.store.finalizedCheckpoint = fc
return nil
}
@@ -516,9 +455,6 @@ func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byt
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.CommonAncestorRoot")
defer span.End()
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
n1, ok := f.store.nodeByRoot[r1]
if !ok || n1 == nil {
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
@@ -559,7 +495,7 @@ func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byt
}
}
// InsertOptimisticChain inserts all nodes corresponding to blocks in the slice
// InsertChain inserts all nodes corresponding to blocks in the slice
// `blocks`. This slice must be ordered from child to parent. It includes all
// blocks **except** the first one (that is the one with the highest slot
// number). All blocks are assumed to be a strict chain
@@ -601,8 +537,6 @@ func (f *ForkChoice) SetOriginRoot(root [32]byte) {
// CachedHeadRoot returns the last cached head root
func (f *ForkChoice) CachedHeadRoot() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
node := f.store.headNode
if node == nil {
return [32]byte{}
@@ -612,8 +546,6 @@ func (f *ForkChoice) CachedHeadRoot() [32]byte {
// FinalizedPayloadBlockHash returns the hash of the payload at the finalized checkpoint
func (f *ForkChoice) FinalizedPayloadBlockHash() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
root := f.FinalizedCheckpoint().Root
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
@@ -625,8 +557,6 @@ func (f *ForkChoice) FinalizedPayloadBlockHash() [32]byte {
// JustifiedPayloadBlockHash returns the hash of the payload at the justified checkpoint
func (f *ForkChoice) JustifiedPayloadBlockHash() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
root := f.JustifiedCheckpoint().Root
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
@@ -692,8 +622,6 @@ func (f *ForkChoice) SetBalancesByRooter(handler forkchoice.BalancesByRooter) {
// Weight returns the weight of the given root if found on the store
func (f *ForkChoice) Weight(root [32]byte) (uint64, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
n, ok := f.store.nodeByRoot[root]
if !ok || n == nil {
return 0, ErrNilNode
@@ -702,7 +630,6 @@ func (f *ForkChoice) Weight(root [32]byte) (uint64, error) {
}
// updateJustifiedBalances updates the validators balances on the justified checkpoint pointed by root.
// This function requires a lock on checkpointsLock being held by the caller.
func (f *ForkChoice) updateJustifiedBalances(ctx context.Context, root [32]byte) error {
balances, err := f.balancesByRoot(ctx, root)
if err != nil {


@@ -205,7 +205,6 @@ func TestForkChoice_IsCanonicalReorg(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, blkRoot))
f.store.nodesLock.Lock()
f.store.nodeByRoot[[32]byte{'3'}].balance = 10
require.NoError(t, f.store.treeRootNode.applyWeightChanges(ctx))
require.Equal(t, uint64(10), f.store.nodeByRoot[[32]byte{'1'}].weight)
@@ -213,7 +212,6 @@ func TestForkChoice_IsCanonicalReorg(t *testing.T) {
require.NoError(t, f.store.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
require.DeepEqual(t, [32]byte{'3'}, f.store.treeRootNode.bestDescendant.root)
f.store.nodesLock.Unlock()
r1 := [32]byte{'1'}
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 1, Root: r1}


@@ -16,13 +16,12 @@ import (
// consider a block to be late, and thus a candidate to being reorged.
const orphanLateBlockFirstThreshold = 4
// processAttestationsThreshold is the number of seconds after which we
// ProcessAttestationsThreshold is the number of seconds after which we
// process attestations for the current slot
const processAttestationsThreshold = 10
const ProcessAttestationsThreshold = 10
// applyWeightChanges recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node. This function requires a lock
// in Store.nodesLock
// using the current balance stored in each node.
func (n *Node) applyWeightChanges(ctx context.Context) error {
// Recursively calling the children to sum their weights.
childrenWeight := uint64(0)
@@ -43,7 +42,7 @@ func (n *Node) applyWeightChanges(ctx context.Context) error {
}
// updateBestDescendant updates the best descendant of this node and its
// children. This function assumes the caller has a lock on Store.nodesLock
// children.
func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finalizedEpoch, currentEpoch primitives.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
@@ -149,7 +148,7 @@ func (n *Node) arrivedEarly(genesisTime uint64) (bool, error) {
// slot will have secs = 10 below.
func (n *Node) arrivedAfterOrphanCheck(genesisTime uint64) (bool, error) {
secs, err := slots.SecondsSinceSlotStart(n.slot, genesisTime, n.timestamp)
return secs >= processAttestationsThreshold, err
return secs >= ProcessAttestationsThreshold, err
}
// nodeTreeDump appends to the given list all the nodes descending from this one


@@ -27,8 +27,6 @@ func TestNode_ApplyWeightChanges_PositiveChange(t *testing.T) {
// The updated balances of each node is 100
s := f.store
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
@@ -55,8 +53,6 @@ func TestNode_ApplyWeightChanges_NegativeChange(t *testing.T) {
// The updated balances of each node is 100
s := f.store
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
s.nodeByRoot[indexToHash(1)].weight = 400
s.nodeByRoot[indexToHash(2)].weight = 400
s.nodeByRoot[indexToHash(3)].weight = 400
@@ -298,7 +294,7 @@ func TestNode_TimeStampsChecks(t *testing.T) {
require.Equal(t, false, late)
// very late block
driftGenesisTime(f, 3, processAttestationsThreshold+1)
driftGenesisTime(f, 3, ProcessAttestationsThreshold+1)
root = [32]byte{'c'}
state, blkRoot, err = prepareForkchoiceState(ctx, 3, root, [32]byte{'b'}, [32]byte{'C'}, 0, 0)
require.NoError(t, err)


@@ -42,11 +42,9 @@ func (f *ForkChoice) NewSlot(ctx context.Context, slot primitives.Slot) error {
}
// Update store.justified_checkpoint if a better checkpoint on the store.finalized_checkpoint chain
f.store.checkpointsLock.RLock()
bjcp := f.store.bestJustifiedCheckpoint
jcp := f.store.justifiedCheckpoint
fcp := f.store.finalizedCheckpoint
f.store.checkpointsLock.RUnlock()
if bjcp.Epoch > jcp.Epoch {
finalizedSlot, err := slots.EpochStart(fcp.Epoch)
if err != nil {
@@ -62,13 +60,11 @@ func (f *ForkChoice) NewSlot(ctx context.Context, slot primitives.Slot) error {
return err
}
if r == fcp.Root {
f.store.checkpointsLock.Lock()
f.store.prevJustifiedCheckpoint = jcp
f.store.justifiedCheckpoint = bjcp
if err := f.updateJustifiedBalances(ctx, bjcp.Root); err != nil {
log.Error("could not update justified balances")
}
f.store.checkpointsLock.Unlock()
}
}
if !features.Get().DisablePullTips {


@@ -8,34 +8,28 @@ import (
)
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, payloadHash [32]byte) ([][32]byte, error) {
s.nodesLock.RLock()
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok {
node, ok = s.nodeByRoot[parentRoot]
if !ok || node == nil {
s.nodesLock.RUnlock()
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
}
// return early if the parent is LVH
if node.payloadHash == payloadHash {
s.nodesLock.RUnlock()
return invalidRoots, nil
}
} else {
if node == nil {
s.nodesLock.RUnlock()
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
}
if node.parent.root != parentRoot {
s.nodesLock.RUnlock()
return invalidRoots, errInvalidParentRoot
}
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != payloadHash; firstInvalid = firstInvalid.parent {
if ctx.Err() != nil {
s.nodesLock.RUnlock()
return invalidRoots, ctx.Err()
}
}
@@ -44,20 +38,16 @@ func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, pa
if firstInvalid.parent == nil {
// return early if the invalid node was not imported
if node.root == parentRoot {
s.nodesLock.RUnlock()
return invalidRoots, nil
}
firstInvalid = node
}
s.nodesLock.RUnlock()
return s.removeNode(ctx, firstInvalid)
}
// removeNode removes the node with the given root and all of its children
// from the Fork Choice Store.
func (s *Store) removeNode(ctx context.Context, node *Node) ([][32]byte, error) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
invalidRoots := make([][32]byte, 0)
if node == nil {
@@ -96,7 +86,6 @@ func (s *Store) removeNodeAndChildren(ctx context.Context, node *Node, invalidRo
}
}
invalidRoots = append(invalidRoots, node.root)
s.proposerBoostLock.Lock()
if node.root == s.proposerBoostRoot {
s.proposerBoostRoot = [32]byte{}
}
@@ -104,7 +93,6 @@ func (s *Store) removeNodeAndChildren(ctx context.Context, node *Node, invalidRo
s.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
s.previousProposerBoostScore = 0
}
s.proposerBoostLock.Unlock()
delete(s.nodeByRoot, node.root)
delete(s.nodeByPayload, node.payloadHash)
return invalidRoots, nil


@@ -237,19 +237,15 @@ func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{'c'}
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'b'}
f.store.proposerBoostLock.Unlock()
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'A'})
require.NoError(t, err)
f.store.proposerBoostLock.RLock()
require.Equal(t, uint64(0), f.store.previousProposerBoostScore)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
f.store.proposerBoostLock.RUnlock()
}
// This is a regression test (10565)


@@ -10,23 +10,14 @@ import (
// resetBoostedProposerRoot sets the value of the proposer boosted root to zeros.
func (f *ForkChoice) resetBoostedProposerRoot(_ context.Context) error {
f.store.proposerBoostLock.Lock()
f.store.proposerBoostRoot = [32]byte{}
f.store.proposerBoostLock.Unlock()
return nil
}
// applyProposerBoostScore applies the current proposer boost scores to the
// relevant nodes. This function requires a lock in Store.nodesLock.
// relevant nodes.
func (f *ForkChoice) applyProposerBoostScore() error {
s := f.store
s.proposerBoostLock.Lock()
defer s.proposerBoostLock.Unlock()
// acquire checkpoints lock for the justified balances
s.checkpointsLock.RLock()
defer s.checkpointsLock.RUnlock()
proposerScore := uint64(0)
if s.previousProposerBoostRoot != params.BeaconConfig().ZeroHash {
previousNode, ok := s.nodeByRoot[s.previousProposerBoostRoot]
@@ -53,7 +44,5 @@ func (f *ForkChoice) applyProposerBoostScore() error {
// ProposerBoost of fork choice store.
func (s *Store) proposerBoost() [fieldparams.RootLength]byte {
s.proposerBoostLock.RLock()
defer s.proposerBoostLock.RUnlock()
return s.proposerBoostRoot
}


@@ -34,9 +34,6 @@ const orphanLateBlockProposingEarly = 2
func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
override = false
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
// We only need to override FCU if our current head is from the current
// slot. This differs from the spec implementation in that we assume
// that we will call this function in the previous slot to proposing.
@@ -44,9 +41,11 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
if head == nil {
return
}
if head.slot != slots.CurrentSlot(f.store.genesisTime) {
return
}
// Do not reorg on epoch boundaries
if (head.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
return
@@ -61,9 +60,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
return
}
// Only reorg if we have been finalizing
f.store.checkpointsLock.RLock()
finalizedEpoch := f.store.finalizedCheckpoint.Epoch
f.store.checkpointsLock.RUnlock()
if slots.ToEpoch(head.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
return
}
@@ -95,9 +92,6 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// This function needs to be called only when proposing a block and all
// attestation processing has already happened.
func (f *ForkChoice) GetProposerHead() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
head := f.store.headNode
if head == nil {
return [32]byte{}
@@ -121,9 +115,7 @@ func (f *ForkChoice) GetProposerHead() [32]byte {
return head.root
}
// Only reorg if we have been finalizing
f.store.checkpointsLock.RLock()
finalizedEpoch := f.store.finalizedCheckpoint.Epoch
f.store.checkpointsLock.RUnlock()
if slots.ToEpoch(head.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
return head.root
}


@@ -14,12 +14,10 @@ import (
)
// head starts from justified root and then follows the best descendant links
// to find the best block for head. This function assumes a lock on s.nodesLock
// to find the best block for head.
func (s *Store) head(ctx context.Context) ([32]byte, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.head")
defer span.End()
s.checkpointsLock.RLock()
defer s.checkpointsLock.RUnlock()
if err := ctx.Err(); err != nil {
return [32]byte{}, err
@@ -71,9 +69,6 @@ func (s *Store) insert(ctx context.Context,
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.insert")
defer span.End()
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
// Return if the block has been inserted into Store before.
if n, ok := s.nodeByRoot[root]; ok {
return n, nil
@@ -115,16 +110,12 @@ func (s *Store) insert(ctx context.Context,
currentSlot := slots.CurrentSlot(s.genesisTime)
boostThreshold := params.BeaconConfig().SecondsPerSlot / params.BeaconConfig().IntervalsPerSlot
if currentSlot == slot && secondsIntoSlot < boostThreshold {
s.proposerBoostLock.Lock()
s.proposerBoostRoot = root
s.proposerBoostLock.Unlock()
}
// Update best descendants
s.checkpointsLock.RLock()
jEpoch := s.justifiedCheckpoint.Epoch
fEpoch := s.finalizedCheckpoint.Epoch
s.checkpointsLock.RUnlock()
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
return n, err
}
@@ -147,7 +138,7 @@ func (s *Store) insert(ctx context.Context,
// pruneFinalizedNodeByRootMap prunes the `nodeByRoot` map
// starting from `node` down to the finalized Node or to a leaf of the Fork
// choice store. This method assumes a lock on nodesLock.
// choice store.
func (s *Store) pruneFinalizedNodeByRootMap(ctx context.Context, node, finalizedNode *Node) error {
if ctx.Err() != nil {
return ctx.Err()
@@ -173,12 +164,8 @@ func (s *Store) prune(ctx context.Context) error {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Prune")
defer span.End()
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
s.checkpointsLock.RLock()
finalizedRoot := s.finalizedCheckpoint.Root
finalizedEpoch := s.finalizedCheckpoint.Epoch
s.checkpointsLock.RUnlock()
finalizedNode, ok := s.nodeByRoot[finalizedRoot]
if !ok || finalizedNode == nil {
return errors.WithMessage(errUnknownFinalizedRoot, fmt.Sprintf("%#x", finalizedRoot))
@@ -222,9 +209,6 @@ func (s *Store) tips() ([][32]byte, []primitives.Slot) {
var roots [][32]byte
var slots []primitives.Slot
s.nodesLock.RLock()
defer s.nodesLock.RUnlock()
for root, node := range s.nodeByRoot {
if len(node.children) == 0 {
roots = append(roots, root)
@@ -236,28 +220,14 @@ func (s *Store) tips() ([][32]byte, []primitives.Slot) {
// HighestReceivedBlockSlot returns the highest slot received by the forkchoice
func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
if f.store.highestReceivedNode == nil {
return 0
}
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockRoot returns the highest slot root received by the forkchoice
func (f *ForkChoice) HighestReceivedBlockRoot() [32]byte {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
if f.store.highestReceivedNode == nil {
return [32]byte{}
}
return f.store.highestReceivedNode.root
}
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch
func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
f.store.nodesLock.RLock()
defer f.store.nodesLock.RUnlock()
count := uint64(0)
lowerBound := slots.CurrentSlot(f.store.genesisTime)
var err error


@@ -320,26 +320,6 @@ func TestStore_PruneMapsNodes(t *testing.T) {
}
func TestForkChoice_HighestReceivedBlockSlotRoot(t *testing.T) {
f := setup(1, 1)
s := f.store
_, err := s.insert(context.Background(), 100, [32]byte{'A'}, [32]byte{}, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.Equal(t, primitives.Slot(100), s.highestReceivedNode.slot)
require.Equal(t, primitives.Slot(100), f.HighestReceivedBlockSlot())
require.Equal(t, [32]byte{'A'}, f.HighestReceivedBlockRoot())
_, err = s.insert(context.Background(), 1000, [32]byte{'B'}, [32]byte{}, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.Equal(t, primitives.Slot(1000), s.highestReceivedNode.slot)
require.Equal(t, primitives.Slot(1000), f.HighestReceivedBlockSlot())
require.Equal(t, [32]byte{'B'}, f.HighestReceivedBlockRoot())
_, err = s.insert(context.Background(), 500, [32]byte{'C'}, [32]byte{}, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.Equal(t, primitives.Slot(1000), s.highestReceivedNode.slot)
require.Equal(t, primitives.Slot(1000), f.HighestReceivedBlockSlot())
require.Equal(t, [32]byte{'B'}, f.HighestReceivedBlockRoot())
}
func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
f := setup(1, 1)
s := f.store


@@ -11,12 +11,12 @@ import (
// ForkChoice defines the overall fork choice store which includes all block nodes, validator's latest votes and balances.
type ForkChoice struct {
sync.RWMutex
store *Store
votes []Vote // tracks individual validator's last vote.
votesLock sync.RWMutex
votes []Vote // tracks individual validator's last vote.
balances []uint64 // tracks individual validator's balances last accounted in votes.
justifiedBalances []uint64 // tracks individual validator's last justified balances.
numActiveValidators uint64 // tracks the total number of active validators. Requires a checkpoints lock to read/write
numActiveValidators uint64 // tracks the total number of active validators.
balancesByRoot forkchoice.BalancesByRooter // handler to obtain balances for the state with a given root
}
@@ -31,16 +31,13 @@ type Store struct {
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch. Requires a checkpoints lock to read/write
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
slashedIndices map[primitives.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
nodesLock sync.RWMutex
proposerBoostLock sync.RWMutex
checkpointsLock sync.RWMutex
genesisTime uint64
highestReceivedNode *Node // The highest slot node.
receivedBlocksLastEpoch [fieldparams.SlotsPerEpoch]primitives.Slot // Using `highestReceivedSlot`. The slot of blocks received in the last epoch.


@@ -16,9 +16,6 @@ import (
)
func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch primitives.Epoch) error {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
return errors.Wrap(ErrNilNode, "could not set unrealized justified epoch")
@@ -31,9 +28,6 @@ func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch primitives.Epoc
}
func (s *Store) setUnrealizedFinalizedEpoch(root [32]byte, epoch primitives.Epoch) error {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
return errors.Wrap(ErrNilNode, "could not set unrealized finalized epoch")
@@ -48,10 +42,6 @@ func (s *Store) setUnrealizedFinalizedEpoch(root [32]byte, epoch primitives.Epoc
// updateUnrealizedCheckpoints "realizes" the unrealized justified and finalized
// epochs stored within nodes. It should be called at the beginning of each epoch.
func (f *ForkChoice) updateUnrealizedCheckpoints(ctx context.Context) error {
f.store.nodesLock.Lock()
defer f.store.nodesLock.Unlock()
f.store.checkpointsLock.Lock()
defer f.store.checkpointsLock.Unlock()
for _, node := range f.store.nodeByRoot {
node.justifiedEpoch = node.unrealizedJustifiedEpoch
node.finalizedEpoch = node.unrealizedFinalizedEpoch
@@ -81,16 +71,9 @@ func (f *ForkChoice) updateUnrealizedCheckpoints(ctx context.Context) error {
}
func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Checkpoint) (*ethpb.Checkpoint, *ethpb.Checkpoint) {
s.nodesLock.Lock()
defer s.nodesLock.Unlock()
if node.parent == nil { // Nothing to do if the parent is nil.
return jc, fc
}
s.checkpointsLock.Lock()
defer s.checkpointsLock.Unlock()
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.genesisTime))
stateSlot := state.Slot()
stateEpoch := slots.ToEpoch(stateSlot)


@@ -23,16 +23,12 @@ func TestStore_SetUnrealizedEpochs(t *testing.T) {
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.store.nodesLock.RLock()
require.Equal(t, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'b'}].unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'b'}].unrealizedFinalizedEpoch)
f.store.nodesLock.RUnlock()
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 2))
f.store.nodesLock.RLock()
require.Equal(t, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'b'}].unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'b'}].unrealizedFinalizedEpoch)
f.store.nodesLock.RUnlock()
require.ErrorIs(t, errInvalidUnrealizedJustifiedEpoch, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 0))
require.ErrorIs(t, errInvalidUnrealizedFinalizedEpoch, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 0))


@@ -124,7 +124,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// /
// 5 <- head, justified epoch = 2
//
// We set this node's slot to be 64 so that when prunning below we do not prune its child
// We set this node's slot to be 64 so that when pruning below we do not prune its child
state, blkRoot, err = prepareForkchoiceState(context.Background(), 2*params.BeaconConfig().SlotsPerEpoch, indexToHash(5), indexToHash(4), params.BeaconConfig().ZeroHash, 2, 2)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))


@@ -16,6 +16,10 @@ type BalancesByRooter func(context.Context, [32]byte) ([]uint64, error)
// ForkChoicer represents the full fork choice interface composed of all the sub-interfaces.
type ForkChoicer interface {
Lock()
Unlock()
RLock()
RUnlock()
HeadRetriever // to compute head.
BlockProcessor // to track new block for fork choice.
AttestationProcessor // to track new attestation for fork choice.
@@ -26,9 +30,8 @@ type ForkChoicer interface {
// HeadRetriever retrieves head root and optimistic info of the current chain.
type HeadRetriever interface {
Head(context.Context) ([32]byte, error)
GetProposerHead() [32]byte
CachedHeadRoot() [32]byte
Tips() ([][32]byte, []primitives.Slot)
IsOptimistic(root [32]byte) (bool, error)
}
// BlockProcessor processes the block that's used for accounting fork choice.
@@ -40,7 +43,6 @@ type BlockProcessor interface {
// AttestationProcessor processes the attestation that's used for accounting fork choice.
type AttestationProcessor interface {
ProcessAttestation(context.Context, []uint64, [32]byte, primitives.Epoch)
InsertSlashedIndex(context.Context, primitives.ValidatorIndex)
}
// Getter returns fork choice related information.
@@ -58,10 +60,12 @@ type Getter interface {
BestJustifiedCheckpoint() *forkchoicetypes.Checkpoint
NodeCount() int
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockRoot() [32]byte
ReceivedBlocksLastEpoch() (uint64, error)
ForkChoiceDump(context.Context) (*v1.ForkChoiceDump, error)
Weight(root [32]byte) (uint64, error)
Tips() ([][32]byte, []primitives.Slot)
IsOptimistic(root [32]byte) (bool, error)
ShouldOverrideFCU() bool
}
// Setter allows to set forkchoice information
@@ -74,4 +78,5 @@ type Setter interface {
SetOriginRoot([32]byte)
NewSlot(context.Context, primitives.Slot) error
SetBalancesByRooter(BalancesByRooter)
InsertSlashedIndex(context.Context, primitives.ValidatorIndex)
}
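
The interface hunk above adds coarse-grained Lock/Unlock/RLock/RUnlock methods to ForkChoicer, while the earlier hunks in this diff delete the per-field nodesLock, checkpointsLock, proposerBoostLock and votesLock acquisitions inside the doubly linked store. The net effect is that synchronization moves to the caller. Below is a minimal caller-side sketch of that convention; the blockchain package name, the helper names and the forkchoice import path are assumptions for illustration, not code from this diff.

package blockchain // hypothetical consumer; the package name is illustrative

import (
	"context"

	"github.com/prysmaticlabs/prysm/v3/beacon-chain/forkchoice"
)

// updateHead sketches the write path: Head recomputes votes, balances and
// weights, so the caller takes the ForkChoicer's write lock around it.
func updateHead(ctx context.Context, fc forkchoice.ForkChoicer) ([32]byte, error) {
	fc.Lock()
	defer fc.Unlock()
	return fc.Head(ctx)
}

// readHead sketches the read path: queries that do not mutate the store only
// need the read lock.
func readHead(fc forkchoice.ForkChoicer) [32]byte {
	fc.RLock()
	defer fc.RUnlock()
	return fc.CachedHeadRoot()
}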


@@ -7,16 +7,8 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)
// ProposerBoostRootArgs to call the BoostProposerRoot function.
type ProposerBoostRootArgs struct {
BlockRoot [32]byte
BlockSlot primitives.Slot
CurrentSlot primitives.Slot
SecondsIntoSlot uint64
}
// Checkpoint is an array version of ethpb.Checkpoint. It is used internally in
// forkchoice, while the slice version is used in the interface to legagy code
// forkchoice, while the slice version is used in the interface to legacy code
// in other packages
type Checkpoint struct {
Epoch primitives.Epoch


@@ -129,12 +129,11 @@ func configureInteropConfig(cliCtx *cli.Context) error {
if cliCtx.IsSet(cmd.ChainConfigFileFlag.Name) {
return nil
}
genStateIsSet := cliCtx.IsSet(flags.InteropGenesisStateFlag.Name)
genTimeIsSet := cliCtx.IsSet(flags.InteropGenesisTimeFlag.Name)
numValsIsSet := cliCtx.IsSet(flags.InteropNumValidatorsFlag.Name)
votesIsSet := cliCtx.IsSet(flags.InteropMockEth1DataVotesFlag.Name)
if genStateIsSet || genTimeIsSet || numValsIsSet || votesIsSet {
if genTimeIsSet || numValsIsSet || votesIsSet {
if err := params.SetActive(params.InteropConfig().Copy()); err != nil {
return err
}


@@ -143,18 +143,19 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
"node2")), 0666))
require.NoError(t, set.Parse([]string{"test-command", "--" + cmd.ConfigFileFlag.Name, "flags_test.yaml"}))
comFlags := cmd.WrapFlags([]cli.Flag{
&cli.StringFlag{
Name: cmd.ConfigFileFlag.Name,
},
&cli.StringSliceFlag{
Name: cmd.BootstrapNode.Name,
},
})
command := &cli.Command{
Name: "test-command",
Flags: cmd.WrapFlags([]cli.Flag{
&cli.StringFlag{
Name: cmd.ConfigFileFlag.Name,
},
&cli.StringSliceFlag{
Name: cmd.BootstrapNode.Name,
},
}),
Name: "test-command",
Flags: comFlags,
Before: func(cliCtx *cli.Context) error {
return cmd.LoadFlagsFromConfig(cliCtx, cliCtx.Command.Flags)
return cmd.LoadFlagsFromConfig(cliCtx, comFlags)
},
Action: func(cliCtx *cli.Context) error {
//TODO: https://github.com/urfave/cli/issues/1197 right now does not set flag
@@ -165,7 +166,7 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
return nil
},
}
require.NoError(t, command.Run(context))
require.NoError(t, command.Run(context, context.Args().Slice()...))
require.NoError(t, os.Remove("flags_test.yaml"))
}
@@ -219,17 +220,6 @@ func TestConfigureInterop(t *testing.T) {
},
"interop",
},
{
"genesis state set",
func() *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.String(flags.InteropGenesisStateFlag.Name, "", "")
assert.NoError(t, set.Set(flags.InteropGenesisStateFlag.Name, "/path/"))
return cli.NewContext(&app, set, nil)
},
"interop",
},
}
for _, tt := range tests {


@@ -7,7 +7,6 @@ import (
"bytes"
"context"
"fmt"
"math"
"os"
"os/signal"
"path/filepath"
@@ -67,9 +66,6 @@ import (
const testSkipPowFlag = "test-skip-pow"
// 128MB max message size when enabling debug endpoints.
const debugGrpcMaxMsgSize = 1 << 27
// Used as a struct to keep cli flag options for configuring services
// for the beacon node. We keep this as a separate struct to not pollute the actual BeaconNode
// struct, as it is merely used to pass down configuration options into the appropriate services.
@@ -762,10 +758,9 @@ func (b *BeaconNode) registerRPCService() error {
}
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := b.cliCtx.String(flags.InteropGenesisStateFlag.Name)
var depositFetcher depositcache.DepositFetcher
var chainStartFetcher execution.ChainStartFetcher
if genesisValidators > 0 || genesisStatePath != "" {
if genesisValidators > 0 {
var interopService *interopcoldstart.Service
if err := b.services.FetchService(&interopService); err != nil {
return err
@@ -787,9 +782,6 @@ func (b *BeaconNode) registerRPCService() error {
maxMsgSize := b.cliCtx.Int(cmd.GrpcMaxCallRecvMsgSizeFlag.Name)
enableDebugRPCEndpoints := b.cliCtx.Bool(flags.EnableDebugRPCEndpoints.Name)
if enableDebugRPCEndpoints {
maxMsgSize = int(math.Max(float64(maxMsgSize), debugGrpcMaxMsgSize))
}
p2pService := b.fetchP2P()
rpcService := rpc.NewService(b.ctx, &rpc.Config{
@@ -881,9 +873,6 @@ func (b *BeaconNode) registerGRPCGateway() error {
maxCallSize := b.cliCtx.Uint64(cmd.GrpcMaxCallRecvMsgSizeFlag.Name)
httpModules := b.cliCtx.String(flags.HTTPModules.Name)
timeout := b.cliCtx.Int(cmd.ApiTimeoutFlag.Name)
if enableDebugRPCEndpoints {
maxCallSize = uint64(math.Max(float64(maxCallSize), debugGrpcMaxMsgSize))
}
gatewayConfig := gateway.DefaultConfig(enableDebugRPCEndpoints, httpModules)
muxs := make([]*apigateway.PbMux, 0)
@@ -917,15 +906,13 @@ func (b *BeaconNode) registerGRPCGateway() error {
func (b *BeaconNode) registerDeterminsticGenesisService() error {
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
genesisStatePath := b.cliCtx.String(flags.InteropGenesisStateFlag.Name)
if genesisValidators > 0 || genesisStatePath != "" {
if genesisValidators > 0 {
svc := interopcoldstart.NewService(b.ctx, &interopcoldstart.Config{
GenesisTime: genesisTime,
NumValidators: genesisValidators,
BeaconDB: b.db,
DepositCache: b.depositCache,
GenesisPath: genesisStatePath,
})
svc.Start()


@@ -114,7 +114,7 @@ func TestNodeStart_Ok_registerDeterministicGenesisService(t *testing.T) {
genesisBytes, err := genesisState.MarshalSSZ()
require.NoError(t, err)
require.NoError(t, os.WriteFile("genesis_ssz.json", genesisBytes, 0666))
set.String(flags.InteropGenesisStateFlag.Name, "genesis_ssz.json", "")
set.String("genesis-state", "genesis_ssz.json", "")
ctx := cli.NewContext(&app, set, nil)
node, err := New(ctx, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),


@@ -4,33 +4,40 @@ go_library(
name = "go_default_library",
srcs = [
"doc.go",
"service.go",
"pool.go",
],
importpath = "github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits",
visibility = [
"//beacon-chain:__subpackages__",
],
deps = [
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/doubly-linked-list:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
size = "small",
srcs = ["service_test.go"],
srcs = ["pool_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
"//time/slots:go_default_library",
],
)


@@ -1,8 +1,6 @@
package mock
import (
"context"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
@@ -14,12 +12,17 @@ type PoolMock struct {
}
// PendingExits --
func (m *PoolMock) PendingExits(_ state.ReadOnlyBeaconState, _ primitives.Slot, _ bool) []*eth.SignedVoluntaryExit {
return m.Exits
func (m *PoolMock) PendingExits() ([]*eth.SignedVoluntaryExit, error) {
return m.Exits, nil
}
// ExitsForInclusion --
func (m *PoolMock) ExitsForInclusion(_ state.ReadOnlyBeaconState, _ primitives.Slot) ([]*eth.SignedVoluntaryExit, error) {
return m.Exits, nil
}
// InsertVoluntaryExit --
func (m *PoolMock) InsertVoluntaryExit(_ context.Context, _ state.ReadOnlyBeaconState, exit *eth.SignedVoluntaryExit) {
func (m *PoolMock) InsertVoluntaryExit(exit *eth.SignedVoluntaryExit) {
m.Exits = append(m.Exits, exit)
}


@@ -0,0 +1,139 @@
package voluntaryexits
import (
"math"
"sync"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
doublylinkedlist "github.com/prysmaticlabs/prysm/v3/container/doubly-linked-list"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"github.com/sirupsen/logrus"
)
// PoolManager maintains pending and seen voluntary exits.
// This pool is used by proposers to insert voluntary exits into new blocks.
type PoolManager interface {
PendingExits() ([]*ethpb.SignedVoluntaryExit, error)
ExitsForInclusion(state state.ReadOnlyBeaconState, slot types.Slot) ([]*ethpb.SignedVoluntaryExit, error)
InsertVoluntaryExit(exit *ethpb.SignedVoluntaryExit)
MarkIncluded(exit *ethpb.SignedVoluntaryExit)
}
// Pool is a concrete implementation of PoolManager.
type Pool struct {
lock sync.RWMutex
pending doublylinkedlist.List[*ethpb.SignedVoluntaryExit]
m map[types.ValidatorIndex]*doublylinkedlist.Node[*ethpb.SignedVoluntaryExit]
}
// NewPool returns an initialized pool.
func NewPool() *Pool {
return &Pool{
pending: doublylinkedlist.List[*ethpb.SignedVoluntaryExit]{},
m: make(map[types.ValidatorIndex]*doublylinkedlist.Node[*ethpb.SignedVoluntaryExit]),
}
}
// PendingExits returns all objects from the pool.
func (p *Pool) PendingExits() ([]*ethpb.SignedVoluntaryExit, error) {
p.lock.RLock()
defer p.lock.RUnlock()
result := make([]*ethpb.SignedVoluntaryExit, p.pending.Len())
node := p.pending.First()
var err error
for i := 0; node != nil; i++ {
result[i], err = node.Value()
if err != nil {
return nil, err
}
node, err = node.Next()
if err != nil {
return nil, err
}
}
return result, nil
}
// ExitsForInclusion returns objects that are ready for inclusion at the given slot. This method will not
// return more than the block enforced MaxVoluntaryExits.
func (p *Pool) ExitsForInclusion(state state.ReadOnlyBeaconState, slot types.Slot) ([]*ethpb.SignedVoluntaryExit, error) {
p.lock.RLock()
length := int(math.Min(float64(params.BeaconConfig().MaxVoluntaryExits), float64(p.pending.Len())))
result := make([]*ethpb.SignedVoluntaryExit, 0, length)
node := p.pending.First()
for node != nil && len(result) < length {
exit, err := node.Value()
if err != nil {
p.lock.RUnlock()
return nil, err
}
if exit.Exit.Epoch > slots.ToEpoch(slot) {
node, err = node.Next()
if err != nil {
p.lock.RUnlock()
return nil, err
}
continue
}
validator, err := state.ValidatorAtIndexReadOnly(exit.Exit.ValidatorIndex)
if err != nil {
logrus.WithError(err).Warningf("could not get validator at index %d", exit.Exit.ValidatorIndex)
node, err = node.Next()
if err != nil {
p.lock.RUnlock()
return nil, err
}
continue
}
if err = blocks.VerifyExitAndSignature(validator, state.Slot(), state.Fork(), exit, state.GenesisValidatorsRoot()); err != nil {
logrus.WithError(err).Warning("removing invalid exit from pool")
p.lock.RUnlock()
// MarkIncluded removes the invalid exit from the pool
p.MarkIncluded(exit)
p.lock.RLock()
} else {
result = append(result, exit)
}
node, err = node.Next()
if err != nil {
p.lock.RUnlock()
return nil, err
}
}
p.lock.RUnlock()
return result, nil
}
// InsertVoluntaryExit into the pool.
func (p *Pool) InsertVoluntaryExit(exit *ethpb.SignedVoluntaryExit) {
p.lock.Lock()
defer p.lock.Unlock()
_, exists := p.m[exit.Exit.ValidatorIndex]
if exists {
return
}
p.pending.Append(doublylinkedlist.NewNode(exit))
p.m[exit.Exit.ValidatorIndex] = p.pending.Last()
}
// MarkIncluded is used when an exit has been included in a beacon block. Every block seen by this
// node should call this method to include the exit. This will remove the exit from the pool.
func (p *Pool) MarkIncluded(exit *ethpb.SignedVoluntaryExit) {
p.lock.Lock()
defer p.lock.Unlock()
node := p.m[exit.Exit.ValidatorIndex]
if node == nil {
return
}
delete(p.m, exit.Exit.ValidatorIndex)
p.pending.Remove(node)
}
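
The new pool above replaces the slice-based implementation (deleted further down in this diff) with a doubly linked list keyed by validator index, and it moves state and slot validation out of insertion and into ExitsForInclusion. A hypothetical end-to-end sketch of the reworked API follows; the example package, proposeWithExits and its arguments are illustrative names, not code from this diff.

package example // hypothetical consumer of the new pool API

import (
	"github.com/prysmaticlabs/prysm/v3/beacon-chain/operations/voluntaryexits"
	"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
	types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
	ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
)

// proposeWithExits shows the intended flow: handlers insert exits without a
// context or state argument, proposers request a capped, validated batch, and
// block processing marks included exits so they are dropped from the pool.
func proposeWithExits(p voluntaryexits.PoolManager, head state.ReadOnlyBeaconState, slot types.Slot, incoming *ethpb.SignedVoluntaryExit) ([]*ethpb.SignedVoluntaryExit, error) {
	// Insertion is a no-op if an exit for the same validator index is already pending.
	p.InsertVoluntaryExit(incoming)

	// Verifies each exit's signature against the head state and returns at most
	// MaxVoluntaryExits entries whose epoch has been reached.
	exits, err := p.ExitsForInclusion(head, slot)
	if err != nil {
		return nil, err
	}

	// Once the block carrying these exits has been processed, remove them from the pool.
	for _, e := range exits {
		p.MarkIncluded(e)
	}
	return exits, nil
}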


@@ -0,0 +1,327 @@
package voluntaryexits
import (
"testing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/config/params"
types "github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v3/crypto/bls"
"github.com/prysmaticlabs/prysm/v3/crypto/bls/common"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/assert"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"github.com/prysmaticlabs/prysm/v3/time/slots"
)
func TestPendingExits(t *testing.T) {
t.Run("empty pool", func(t *testing.T) {
pool := NewPool()
changes, err := pool.PendingExits()
require.NoError(t, err)
assert.Equal(t, 0, len(changes))
})
t.Run("non-empty pool", func(t *testing.T) {
pool := NewPool()
pool.InsertVoluntaryExit(&ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 0,
ValidatorIndex: 0,
},
})
pool.InsertVoluntaryExit(&ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 0,
ValidatorIndex: 1,
},
})
changes, err := pool.PendingExits()
require.NoError(t, err)
assert.Equal(t, 2, len(changes))
})
}
func TestExitsForInclusion(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
}
stateSlot := types.Slot(uint64(params.BeaconConfig().ShardCommitteePeriod) * uint64(params.BeaconConfig().SlotsPerEpoch))
spb.Slot = stateSlot
numValidators := 2 * params.BeaconConfig().MaxVoluntaryExits
validators := make([]*ethpb.Validator, numValidators)
exits := make([]*ethpb.VoluntaryExit, numValidators)
privKeys := make([]common.SecretKey, numValidators)
for i := range validators {
v := &ethpb.Validator{}
if i == len(validators)-2 {
// exit for this validator is invalid
v.ExitEpoch = 0
} else {
v.ExitEpoch = params.BeaconConfig().FarFutureEpoch
}
priv, err := bls.RandKey()
require.NoError(t, err)
privKeys[i] = priv
pubkey := priv.PublicKey().Marshal()
v.PublicKey = pubkey
message := &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(i),
}
// exit for future slot
if i == len(validators)-1 {
message.Epoch = slots.ToEpoch(stateSlot) + 1
}
validators[i] = v
exits[i] = message
}
spb.Validators = validators
st, err := state_native.InitializeFromProtoCapella(spb)
require.NoError(t, err)
signedExits := make([]*ethpb.SignedVoluntaryExit, numValidators)
for i, message := range exits {
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainVoluntaryExit, privKeys[i])
require.NoError(t, err)
signed := &ethpb.SignedVoluntaryExit{
Exit: message,
Signature: signature,
}
signedExits[i] = signed
}
t.Run("empty pool", func(t *testing.T) {
pool := NewPool()
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, 0, len(exits))
})
t.Run("less than MaxVoluntaryExits in pool", func(t *testing.T) {
pool := NewPool()
for i := uint64(0); i < params.BeaconConfig().MaxVoluntaryExits-1; i++ {
pool.InsertVoluntaryExit(signedExits[i])
}
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, int(params.BeaconConfig().MaxVoluntaryExits)-1, len(exits))
})
t.Run("MaxVoluntaryExits in pool", func(t *testing.T) {
pool := NewPool()
for i := uint64(0); i < params.BeaconConfig().MaxVoluntaryExits; i++ {
pool.InsertVoluntaryExit(signedExits[i])
}
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, int(params.BeaconConfig().MaxVoluntaryExits), len(exits))
})
t.Run("more than MaxVoluntaryExits in pool", func(t *testing.T) {
pool := NewPool()
for i := uint64(0); i < numValidators; i++ {
pool.InsertVoluntaryExit(signedExits[i])
}
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, int(params.BeaconConfig().MaxVoluntaryExits), len(exits))
for _, ch := range exits {
assert.NotEqual(t, types.ValidatorIndex(params.BeaconConfig().MaxVoluntaryExits), ch.Exit.ValidatorIndex)
}
})
t.Run("exit for future epoch not returned", func(t *testing.T) {
pool := NewPool()
pool.InsertVoluntaryExit(signedExits[len(signedExits)-1])
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, 0, len(exits))
})
t.Run("invalid exit not returned", func(t *testing.T) {
pool := NewPool()
pool.InsertVoluntaryExit(signedExits[len(signedExits)-2])
exits, err := pool.ExitsForInclusion(st, stateSlot)
require.NoError(t, err)
assert.Equal(t, 0, len(exits))
})
}
func TestInsertExit(t *testing.T) {
t.Run("empty pool", func(t *testing.T) {
pool := NewPool()
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
},
}
pool.InsertVoluntaryExit(exit)
require.Equal(t, 1, pool.pending.Len())
require.Equal(t, 1, len(pool.m))
n, ok := pool.m[0]
require.Equal(t, true, ok)
v, err := n.Value()
require.NoError(t, err)
assert.DeepEqual(t, exit, v)
})
t.Run("item in pool", func(t *testing.T) {
pool := NewPool()
old := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
},
}
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(1),
},
}
pool.InsertVoluntaryExit(old)
pool.InsertVoluntaryExit(exit)
require.Equal(t, 2, pool.pending.Len())
require.Equal(t, 2, len(pool.m))
n, ok := pool.m[0]
require.Equal(t, true, ok)
v, err := n.Value()
require.NoError(t, err)
assert.DeepEqual(t, old, v)
n, ok = pool.m[1]
require.Equal(t, true, ok)
v, err = n.Value()
require.NoError(t, err)
assert.DeepEqual(t, exit, v)
})
t.Run("validator index already exists", func(t *testing.T) {
pool := NewPool()
old := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
},
Signature: []byte("old"),
}
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
},
Signature: []byte("exit"),
}
pool.InsertVoluntaryExit(old)
pool.InsertVoluntaryExit(exit)
assert.Equal(t, 1, pool.pending.Len())
require.Equal(t, 1, len(pool.m))
n, ok := pool.m[0]
require.Equal(t, true, ok)
v, err := n.Value()
require.NoError(t, err)
assert.DeepEqual(t, old, v)
})
}
func TestMarkIncluded(t *testing.T) {
t.Run("one element in pool", func(t *testing.T) {
pool := NewPool()
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
}}
pool.InsertVoluntaryExit(exit)
pool.MarkIncluded(exit)
assert.Equal(t, 0, pool.pending.Len())
_, ok := pool.m[0]
assert.Equal(t, false, ok)
})
t.Run("first of multiple elements", func(t *testing.T) {
pool := NewPool()
first := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
}}
second := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(1),
}}
third := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(2),
}}
pool.InsertVoluntaryExit(first)
pool.InsertVoluntaryExit(second)
pool.InsertVoluntaryExit(third)
pool.MarkIncluded(first)
require.Equal(t, 2, pool.pending.Len())
_, ok := pool.m[0]
assert.Equal(t, false, ok)
})
t.Run("last of multiple elements", func(t *testing.T) {
pool := NewPool()
first := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
}}
second := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(1),
}}
third := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(2),
}}
pool.InsertVoluntaryExit(first)
pool.InsertVoluntaryExit(second)
pool.InsertVoluntaryExit(third)
pool.MarkIncluded(third)
require.Equal(t, 2, pool.pending.Len())
_, ok := pool.m[2]
assert.Equal(t, false, ok)
})
t.Run("in the middle of multiple elements", func(t *testing.T) {
pool := NewPool()
first := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
}}
second := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(1),
}}
third := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(2),
}}
pool.InsertVoluntaryExit(first)
pool.InsertVoluntaryExit(second)
pool.InsertVoluntaryExit(third)
pool.MarkIncluded(second)
require.Equal(t, 2, pool.pending.Len())
_, ok := pool.m[1]
assert.Equal(t, false, ok)
})
t.Run("not in pool", func(t *testing.T) {
pool := NewPool()
first := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(0),
}}
second := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(1),
}}
exit := &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
ValidatorIndex: types.ValidatorIndex(2),
}}
pool.InsertVoluntaryExit(first)
pool.InsertVoluntaryExit(second)
pool.MarkIncluded(exit)
require.Equal(t, 2, pool.pending.Len())
_, ok := pool.m[0]
require.Equal(t, true, ok)
assert.NotNil(t, pool.m[0])
_, ok = pool.m[1]
require.Equal(t, true, ok)
assert.NotNil(t, pool.m[1])
})
}


@@ -1,125 +0,0 @@
package voluntaryexits
import (
"context"
"sort"
"sync"
"github.com/prysmaticlabs/prysm/v3/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/time/slots"
"go.opencensus.io/trace"
)
// PoolManager maintains pending and seen voluntary exits.
// This pool is used by proposers to insert voluntary exits into new blocks.
type PoolManager interface {
PendingExits(state state.ReadOnlyBeaconState, slot primitives.Slot, noLimit bool) []*ethpb.SignedVoluntaryExit
InsertVoluntaryExit(ctx context.Context, state state.ReadOnlyBeaconState, exit *ethpb.SignedVoluntaryExit)
MarkIncluded(exit *ethpb.SignedVoluntaryExit)
}
// Pool is a concrete implementation of PoolManager.
type Pool struct {
lock sync.RWMutex
pending []*ethpb.SignedVoluntaryExit
}
// NewPool accepts a head fetcher (for reading the validator set) and returns an initialized
// voluntary exit pool.
func NewPool() *Pool {
return &Pool{
pending: make([]*ethpb.SignedVoluntaryExit, 0),
}
}
// PendingExits returns exits that are ready for inclusion at the given slot. This method will not
// return more than the block enforced MaxVoluntaryExits.
func (p *Pool) PendingExits(state state.ReadOnlyBeaconState, slot primitives.Slot, noLimit bool) []*ethpb.SignedVoluntaryExit {
p.lock.RLock()
defer p.lock.RUnlock()
// Allocate pending slice with a capacity of min(len(p.pending), maxVoluntaryExits) since the
// array cannot exceed the max and is typically less than the max value.
maxExits := params.BeaconConfig().MaxVoluntaryExits
if noLimit {
maxExits = uint64(len(p.pending))
}
pending := make([]*ethpb.SignedVoluntaryExit, 0, maxExits)
for _, e := range p.pending {
if e.Exit.Epoch > slots.ToEpoch(slot) {
continue
}
if v, err := state.ValidatorAtIndexReadOnly(e.Exit.ValidatorIndex); err == nil &&
v.ExitEpoch() == params.BeaconConfig().FarFutureEpoch {
pending = append(pending, e)
if uint64(len(pending)) == maxExits {
break
}
}
}
return pending
}
// InsertVoluntaryExit into the pool. This method is a no-op if the pending exit already exists,
// or the validator is already exited.
func (p *Pool) InsertVoluntaryExit(ctx context.Context, state state.ReadOnlyBeaconState, exit *ethpb.SignedVoluntaryExit) {
ctx, span := trace.StartSpan(ctx, "exitPool.InsertVoluntaryExit")
defer span.End()
p.lock.Lock()
defer p.lock.Unlock()
// Prevent malformed messages from being inserted.
if exit == nil || exit.Exit == nil {
return
}
existsInPending, index := existsInList(p.pending, exit.Exit.ValidatorIndex)
// If the item exists in the pending list and includes a more favorable, earlier
// exit epoch, we replace it in the pending list. If it exists but the prior condition is false,
// we simply return.
if existsInPending {
if exit.Exit.Epoch < p.pending[index].Exit.Epoch {
p.pending[index] = exit
}
return
}
// Has the validator been exited already?
if v, err := state.ValidatorAtIndexReadOnly(exit.Exit.ValidatorIndex); err != nil ||
v.ExitEpoch() != params.BeaconConfig().FarFutureEpoch {
return
}
// Insert into pending list and sort.
p.pending = append(p.pending, exit)
sort.Slice(p.pending, func(i, j int) bool {
return p.pending[i].Exit.ValidatorIndex < p.pending[j].Exit.ValidatorIndex
})
}
// MarkIncluded is used when an exit has been included in a beacon block. Every block seen by this
// node should call this method to include the exit. This will remove the exit from
// the pending exits slice.
func (p *Pool) MarkIncluded(exit *ethpb.SignedVoluntaryExit) {
p.lock.Lock()
defer p.lock.Unlock()
exists, index := existsInList(p.pending, exit.Exit.ValidatorIndex)
if exists {
// Exit we want is present at p.pending[index], so we remove it.
p.pending = append(p.pending[:index], p.pending[index+1:]...)
}
}
// Binary search to check if the index exists in the list of pending exits.
func existsInList(pending []*ethpb.SignedVoluntaryExit, searchingFor primitives.ValidatorIndex) (bool, int) {
i := sort.Search(len(pending), func(j int) bool {
return pending[j].Exit.ValidatorIndex >= searchingFor
})
if i < len(pending) && pending[i].Exit.ValidatorIndex == searchingFor {
return true, i
}
return false, -1
}


@@ -1,530 +0,0 @@
package voluntaryexits
import (
"context"
"reflect"
"testing"
state_native "github.com/prysmaticlabs/prysm/v3/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v3/config/params"
"github.com/prysmaticlabs/prysm/v3/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v3/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v3/testing/require"
"google.golang.org/protobuf/proto"
)
func TestPool_InsertVoluntaryExit(t *testing.T) {
type fields struct {
pending []*ethpb.SignedVoluntaryExit
}
type args struct {
exit *ethpb.SignedVoluntaryExit
}
tests := []struct {
name string
fields fields
args args
want []*ethpb.SignedVoluntaryExit
}{
{
name: "Prevent inserting nil exit",
fields: fields{
pending: make([]*ethpb.SignedVoluntaryExit, 0),
},
args: args{
exit: nil,
},
want: []*ethpb.SignedVoluntaryExit{},
},
{
name: "Prevent inserting malformed exit",
fields: fields{
pending: make([]*ethpb.SignedVoluntaryExit, 0),
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: nil,
},
},
want: []*ethpb.SignedVoluntaryExit{},
},
{
name: "Empty list",
fields: fields{
pending: make([]*ethpb.SignedVoluntaryExit, 0),
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
{
name: "Duplicate identical exit",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
{
name: "Duplicate exit in pending list",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
{
name: "Duplicate validator index",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 20,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
{
name: "Duplicate received with more favorable exit epoch",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 1,
},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 4,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 4,
ValidatorIndex: 1,
},
},
},
},
{
name: "Exit for already exited validator",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 2,
},
},
},
want: []*ethpb.SignedVoluntaryExit{},
},
{
name: "Maintains sorted order",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 0,
},
},
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 2,
},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{
Epoch: 10,
ValidatorIndex: 1,
},
},
},
want: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 0,
},
},
{
Exit: &ethpb.VoluntaryExit{
Epoch: 10,
ValidatorIndex: 1,
},
},
{
Exit: &ethpb.VoluntaryExit{
Epoch: 12,
ValidatorIndex: 2,
},
},
},
},
}
ctx := context.Background()
validators := []*ethpb.Validator{
{ // 0
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{ // 1
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
{ // 2 - Already exited.
ExitEpoch: 15,
},
{ // 3
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := &Pool{
pending: tt.fields.pending,
}
s, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{Validators: validators})
require.NoError(t, err)
p.InsertVoluntaryExit(ctx, s, tt.args.exit)
if len(p.pending) != len(tt.want) {
t.Fatalf("Mismatched lengths of pending list. Got %d, wanted %d.", len(p.pending), len(tt.want))
}
for i := range p.pending {
if !proto.Equal(p.pending[i], tt.want[i]) {
t.Errorf("Pending exit at index %d does not match expected. Got=%v wanted=%v", i, p.pending[i], tt.want[i])
}
}
})
}
}
func TestPool_MarkIncluded(t *testing.T) {
type fields struct {
pending []*ethpb.SignedVoluntaryExit
}
type args struct {
exit *ethpb.SignedVoluntaryExit
}
tests := []struct {
name string
fields fields
args args
want fields
}{
{
name: "Removes from pending list",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 1},
},
{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 2},
},
{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 3},
},
},
},
args: args{
exit: &ethpb.SignedVoluntaryExit{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 2},
},
},
want: fields{
pending: []*ethpb.SignedVoluntaryExit{
{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 1},
},
{
Exit: &ethpb.VoluntaryExit{ValidatorIndex: 3},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := &Pool{
pending: tt.fields.pending,
}
p.MarkIncluded(tt.args.exit)
if len(p.pending) != len(tt.want.pending) {
t.Fatalf("Mismatched lengths of pending list. Got %d, wanted %d.", len(p.pending), len(tt.want.pending))
}
for i := range p.pending {
if !proto.Equal(p.pending[i], tt.want.pending[i]) {
t.Errorf("Pending exit at index %d does not match expected. Got=%v wanted=%v", i, p.pending[i], tt.want.pending[i])
}
}
})
}
}
func TestPool_PendingExits(t *testing.T) {
type fields struct {
pending []*ethpb.SignedVoluntaryExit
noLimit bool
}
type args struct {
slot primitives.Slot
}
tests := []struct {
name string
fields fields
args args
want []*ethpb.SignedVoluntaryExit
}{
{
name: "Empty list",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{},
},
args: args{
slot: 100000,
},
want: []*ethpb.SignedVoluntaryExit{},
},
{
name: "All eligible",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
},
},
args: args{
slot: 1000000,
},
want: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
},
},
{
name: "All eligible, above max",
fields: fields{
noLimit: true,
pending: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
{Exit: &ethpb.VoluntaryExit{Epoch: 5}},
{Exit: &ethpb.VoluntaryExit{Epoch: 6}},
{Exit: &ethpb.VoluntaryExit{Epoch: 7}},
{Exit: &ethpb.VoluntaryExit{Epoch: 8}},
{Exit: &ethpb.VoluntaryExit{Epoch: 9}},
{Exit: &ethpb.VoluntaryExit{Epoch: 10}},
{Exit: &ethpb.VoluntaryExit{Epoch: 11}},
{Exit: &ethpb.VoluntaryExit{Epoch: 12}},
{Exit: &ethpb.VoluntaryExit{Epoch: 13}},
{Exit: &ethpb.VoluntaryExit{Epoch: 14}},
{Exit: &ethpb.VoluntaryExit{Epoch: 15}},
{Exit: &ethpb.VoluntaryExit{Epoch: 16}},
{Exit: &ethpb.VoluntaryExit{Epoch: 17}},
{Exit: &ethpb.VoluntaryExit{Epoch: 18}},
{Exit: &ethpb.VoluntaryExit{Epoch: 19}},
},
},
args: args{
slot: 1000000,
},
want: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
{Exit: &ethpb.VoluntaryExit{Epoch: 5}},
{Exit: &ethpb.VoluntaryExit{Epoch: 6}},
{Exit: &ethpb.VoluntaryExit{Epoch: 7}},
{Exit: &ethpb.VoluntaryExit{Epoch: 8}},
{Exit: &ethpb.VoluntaryExit{Epoch: 9}},
{Exit: &ethpb.VoluntaryExit{Epoch: 10}},
{Exit: &ethpb.VoluntaryExit{Epoch: 11}},
{Exit: &ethpb.VoluntaryExit{Epoch: 12}},
{Exit: &ethpb.VoluntaryExit{Epoch: 13}},
{Exit: &ethpb.VoluntaryExit{Epoch: 14}},
{Exit: &ethpb.VoluntaryExit{Epoch: 15}},
{Exit: &ethpb.VoluntaryExit{Epoch: 16}},
{Exit: &ethpb.VoluntaryExit{Epoch: 17}},
{Exit: &ethpb.VoluntaryExit{Epoch: 18}},
{Exit: &ethpb.VoluntaryExit{Epoch: 19}},
},
},
{
name: "All eligible, block max",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
{Exit: &ethpb.VoluntaryExit{Epoch: 5}},
{Exit: &ethpb.VoluntaryExit{Epoch: 6}},
{Exit: &ethpb.VoluntaryExit{Epoch: 7}},
{Exit: &ethpb.VoluntaryExit{Epoch: 8}},
{Exit: &ethpb.VoluntaryExit{Epoch: 9}},
{Exit: &ethpb.VoluntaryExit{Epoch: 10}},
{Exit: &ethpb.VoluntaryExit{Epoch: 11}},
{Exit: &ethpb.VoluntaryExit{Epoch: 12}},
{Exit: &ethpb.VoluntaryExit{Epoch: 13}},
{Exit: &ethpb.VoluntaryExit{Epoch: 14}},
{Exit: &ethpb.VoluntaryExit{Epoch: 15}},
{Exit: &ethpb.VoluntaryExit{Epoch: 16}},
{Exit: &ethpb.VoluntaryExit{Epoch: 17}},
{Exit: &ethpb.VoluntaryExit{Epoch: 18}},
{Exit: &ethpb.VoluntaryExit{Epoch: 19}},
},
},
args: args{
slot: 1000000,
},
want: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
{Exit: &ethpb.VoluntaryExit{Epoch: 5}},
{Exit: &ethpb.VoluntaryExit{Epoch: 6}},
{Exit: &ethpb.VoluntaryExit{Epoch: 7}},
{Exit: &ethpb.VoluntaryExit{Epoch: 8}},
{Exit: &ethpb.VoluntaryExit{Epoch: 9}},
{Exit: &ethpb.VoluntaryExit{Epoch: 10}},
{Exit: &ethpb.VoluntaryExit{Epoch: 11}},
{Exit: &ethpb.VoluntaryExit{Epoch: 12}},
{Exit: &ethpb.VoluntaryExit{Epoch: 13}},
{Exit: &ethpb.VoluntaryExit{Epoch: 14}},
{Exit: &ethpb.VoluntaryExit{Epoch: 15}},
},
},
{
name: "Some eligible",
fields: fields{
pending: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 3}},
{Exit: &ethpb.VoluntaryExit{Epoch: 4}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
},
},
args: args{
slot: 2 * params.BeaconConfig().SlotsPerEpoch,
},
want: []*ethpb.SignedVoluntaryExit{
{Exit: &ethpb.VoluntaryExit{Epoch: 0}},
{Exit: &ethpb.VoluntaryExit{Epoch: 2}},
{Exit: &ethpb.VoluntaryExit{Epoch: 1}},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
p := &Pool{
pending: tt.fields.pending,
}
s, err := state_native.InitializeFromProtoUnsafePhase0(&ethpb.BeaconState{Validators: []*ethpb.Validator{{ExitEpoch: params.BeaconConfig().FarFutureEpoch}}})
require.NoError(t, err)
if got := p.PendingExits(s, tt.args.slot, tt.fields.noLimit); !reflect.DeepEqual(got, tt.want) {
t.Errorf("PendingExits() = %v, want %v", got, tt.want)
}
})
}
}


@@ -86,7 +86,6 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/muxer/mplex:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/protocol/identify:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",


@@ -141,6 +141,57 @@ func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *
}
}
// BroadcastBlob broadcasts a blob sidecar to the p2p network. The message is assumed to be
// broadcast for the current fork and to the input subnet.
func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastBlob")
defer span.End()
forkDigest, err := s.currentForkDigest()
if err != nil {
err := errors.Wrap(err, "could not retrieve fork digest")
tracing.AnnotateError(span, err)
return err
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastBlob(ctx, subnet, blob, forkDigest)
return nil
}
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar, forkDigest [4]byte) {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
// Ensure we have peers with this subnet.
s.subnetLocker(subnet).RLock()
hasPeer := s.hasPeerWithSubnet(blobSubnetToTopic(subnet, forkDigest))
s.subnetLocker(subnet).RUnlock()
if !hasPeer {
if err := func() error {
s.subnetLocker(subnet).Lock()
defer s.subnetLocker(subnet).Unlock()
ok, err := s.FindPeersWithSubnet(ctx, blobSubnetToTopic(subnet, forkDigest), subnet, 1)
if err != nil {
return err
}
if ok {
return nil
}
return errors.New("failed to find peers for subnet")
}(); err != nil {
log.WithError(err).Error("Failed to find peers")
}
}
if err := s.broadcastObject(ctx, blobSidecar, blobSubnetToTopic(subnet, forkDigest)); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecar")
tracing.AnnotateError(span, err)
}
}
func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage, forkDigest [4]byte) {
ctx, span := trace.StartSpan(ctx, "p2p.broadcastSyncCommittee")
defer span.End()
@@ -232,3 +283,7 @@ func attestationToTopic(subnet uint64, forkDigest [4]byte) string {
func syncCommitteeToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(SyncCommitteeSubnetTopicFormat, forkDigest, subnet)
}
func blobSubnetToTopic(subnet uint64, forkDigest [4]byte) string {
return fmt.Sprintf(BlobSubnetTopicFormat, forkDigest, subnet)
}
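// Hypothetical illustration (not part of this diff): the topic string produced for a blob
// subnet. Assuming GossipProtocolAndDigest is "/eth2/%x/", a fork digest of 0x01020304 and
// subnet 3 yield "/eth2/01020304/blobs_sidecar_3", which is the topic BroadcastBlob publishes
// on and hasPeerWithSubnet checks for.
func exampleBlobTopic() string {
	return blobSubnetToTopic(3, [4]byte{0x01, 0x02, 0x03, 0x04})
}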


@@ -17,7 +17,8 @@ func (s *Service) forkWatcher() {
currEpoch := slots.ToEpoch(currSlot)
if currEpoch == params.BeaconConfig().AltairForkEpoch ||
currEpoch == params.BeaconConfig().BellatrixForkEpoch ||
currEpoch == params.BeaconConfig().CapellaForkEpoch {
currEpoch == params.BeaconConfig().CapellaForkEpoch ||
currEpoch == params.BeaconConfig().DenebForkEpoch {
// If we are in the fork epoch, we update our enr with
// the updated fork digest. This is done repeatedly
// over the epoch, which might be slightly wasteful


@@ -119,6 +119,9 @@ func (s *Service) topicScoreParams(topic string) (*pubsub.TopicScoreParams, erro
return defaultProposerSlashingTopicParams(), nil
case strings.Contains(topic, GossipAttesterSlashingMessage):
return defaultAttesterSlashingTopicParams(), nil
case strings.Contains(topic, GossipBlobSidecarMessage):
// TODO(Deneb): Using the default block scoring for now; this should be updated.
return defaultBlockTopicParams(), nil
case strings.Contains(topic, GossipBlsToExecutionChangeMessage):
return defaultBlsToExecutionChangeTopicParams(), nil
default:


@@ -21,12 +21,16 @@ var gossipTopicMappings = map[string]proto.Message{
SyncContributionAndProofSubnetTopicFormat: &ethpb.SignedContributionAndProof{},
SyncCommitteeSubnetTopicFormat: &ethpb.SyncCommitteeMessage{},
BlsToExecutionChangeSubnetTopicFormat: &ethpb.SignedBLSToExecutionChange{},
BlobSubnetTopicFormat: &ethpb.SignedBlobSidecar{},
}
// GossipTopicMappings is a function to return the assigned data type
// versioned by epoch.
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
if topic == BlockSubnetTopicFormat {
if epoch >= params.BeaconConfig().DenebForkEpoch {
return &ethpb.SignedBeaconBlockDeneb{}
}
if epoch >= params.BeaconConfig().CapellaForkEpoch {
return &ethpb.SignedBeaconBlockCapella{}
}
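// Hypothetical usage sketch (not part of this diff): at or after DenebForkEpoch the block gossip
// topic resolves to the Deneb block type added above, so incoming messages on that topic can be
// unmarshaled into it.
func denebBlockGossipType() proto.Message {
	return GossipTopicMappings(BlockSubnetTopicFormat, params.BeaconConfig().DenebForkEpoch)
}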


@@ -35,6 +35,7 @@ type Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.


@@ -57,12 +57,17 @@ func (s *Service) CanSubscribe(topic string) bool {
log.WithError(err).Error("Could not determine Capella fork digest")
return false
}
denebForkDigest, err := forks.ForkDigestFromEpoch(params.BeaconConfig().DenebForkEpoch, s.genesisValidatorsRoot)
if err != nil {
log.WithError(err).Error("Could not determine Capella fork digest")
return false
}
switch parts[2] {
case fmt.Sprintf("%x", phase0ForkDigest):
case fmt.Sprintf("%x", altairForkDigest):
case fmt.Sprintf("%x", bellatrixForkDigest):
case fmt.Sprintf("%x", capellaForkDigest):
case fmt.Sprintf("%x", denebForkDigest):
default:
return false
}


@@ -37,6 +37,14 @@ const PingMessageName = "/ping"
// MetadataMessageName specifies the name for the metadata message topic.
const MetadataMessageName = "/metadata"
// BlobsSidecarsByRangeMessageName specifies the name for the blobs sidecars by range message topic.
const BlobsSidecarsByRangeMessageName = "/blobs_sidecars_by_range"
// BeaconBlockAndBlobsSidecarByRootName is a topic for fetching beacon blocks and sidecar blobs by root.
const BeaconBlockAndBlobsSidecarByRootName = "/beacon_block_and_blobs_sidecar_by_root"
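// BlobSidecarsByRootName specifies the name for the blob sidecars by root message topic.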
const BlobSidecarsByRootName = "/blob_sidecars_by_root"
const (
// V1 RPC Topics
// RPCStatusTopicV1 defines the v1 topic for the status rpc method.
@@ -51,6 +59,11 @@ const (
RPCPingTopicV1 = protocolPrefix + PingMessageName + SchemaVersionV1
// RPCMetaDataTopicV1 defines the v1 topic for the metadata rpc method.
RPCMetaDataTopicV1 = protocolPrefix + MetadataMessageName + SchemaVersionV1
// RPCBlobsSidecarsByRangeTopicV1 defines the v1 topic for the blobs sidecars by range rpc method.
RPCBlobsSidecarsByRangeTopicV1 = protocolPrefix + BlobsSidecarsByRangeMessageName + SchemaVersionV1
// RPCBlobSidecarsByRootTopicV1 defines the v1 topic for requesting blob sidecars by their block root. New in Deneb.
// /eth2/beacon_chain/req/blob_sidecars_by_root/1/
RPCBlobSidecarsByRootTopicV1 = protocolPrefix + BlobSidecarsByRootName + SchemaVersionV1
// V2 RPC Topics
// RPCBlocksByRangeTopicV2 defines v2 the topic for the blocks by range rpc method.
@@ -83,6 +96,10 @@ var RPCTopicMappings = map[string]interface{}{
// RPC Metadata Message
RPCMetaDataTopicV1: new(interface{}),
RPCMetaDataTopicV2: new(interface{}),
// RPC Blobs Sidecars By Range Message
RPCBlobsSidecarsByRangeTopicV1: new(pb.BlobsSidecarsByRangeRequest),
// RPC Blobs Sidecars By Root Message
RPCBlobSidecarsByRootTopicV1: new(p2ptypes.BlobSidecarsByRootReq),
}
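// Hypothetical sketch (not part of this diff): verifying that the new Deneb blob
// sidecars-by-root topic is wired to the expected request type, assuming VerifyTopicMapping
// (declared further below) checks the registered message type for a topic.
func checkBlobSidecarsByRootTopic() error {
	return VerifyTopicMapping(RPCBlobSidecarsByRootTopicV1, new(p2ptypes.BlobSidecarsByRootReq))
}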
// Maps all registered protocol prefixes.
@@ -93,12 +110,15 @@ var protocolMapping = map[string]bool{
// Maps all the protocol message names for the different rpc
// topics.
var messageMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobsSidecarsByRangeMessageName: true,
BeaconBlockAndBlobsSidecarByRootName: true,
BlobSidecarsByRootName: true,
}
// Maps all the RPC messages which are to updated in altair.
@@ -113,6 +133,18 @@ var versionMapping = map[string]bool{
SchemaVersionV2: true,
}
var PreAltairV1SchemaMapping = map[string]bool{
StatusMessageName: true,
GoodbyeMessageName: true,
BeaconBlocksByRangeMessageName: true,
BeaconBlocksByRootsMessageName: true,
PingMessageName: true,
MetadataMessageName: true,
BlobsSidecarsByRangeMessageName: false,
BeaconBlockAndBlobsSidecarByRootName: false,
BlobSidecarsByRootName: false,
}
// VerifyTopicMapping verifies that the topic and its accompanying
// message type is correct.
func VerifyTopicMapping(topic string, msg interface{}) error {


@@ -17,7 +17,6 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/libp2p/go-libp2p/p2p/protocol/identify"
"github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v3/async"
@@ -133,7 +132,6 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
}
s.host = h
s.host.RemoveStreamHandler(identify.IDDelta)
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub


@@ -138,6 +138,11 @@ func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ *ethpb.Att
return nil
}
// BroadcastBlob -- fake.
func (p *FakeP2P) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil


@@ -33,3 +33,9 @@ func (m *MockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ uin
m.BroadcastCalled = true
return nil
}
// BroadcastBlob records a broadcast occurred.
func (m *MockBroadcaster) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error {
m.BroadcastCalled = true
return nil
}


@@ -51,7 +51,8 @@ func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(string) bool, network.StreamHandler) {}
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}


@@ -170,6 +170,11 @@ func (p *TestP2P) BroadcastAttestation(_ context.Context, _ uint64, _ *ethpb.Att
return nil
}
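// BroadcastBlob broadcasts a blob sidecar.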
func (p *TestP2P) BroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar) error {
p.BroadcastCalled = true
return nil
}
// BroadcastSyncCommitteeMessage broadcasts a sync committee message.
func (p *TestP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
p.BroadcastCalled = true


@@ -28,6 +28,8 @@ const (
GossipContributionAndProofMessage = "sync_committee_contribution_and_proof"
// GossipBlsToExecutionChangeMessage is the name for the bls to execution change message type.
GossipBlsToExecutionChangeMessage = "bls_to_execution_change"
// GossipBlobSidecarMessage is the name for the blob sidecar message type.
GossipBlobSidecarMessage = "blobs_sidecar"
// Topic Formats
//
@@ -49,4 +51,6 @@ const (
SyncContributionAndProofSubnetTopicFormat = GossipProtocolAndDigest + GossipContributionAndProofMessage
// BlsToExecutionChangeSubnetTopicFormat is the topic format for the bls to execution change subnet.
BlsToExecutionChangeSubnetTopicFormat = GossipProtocolAndDigest + GossipBlsToExecutionChangeMessage
// BlobSubnetTopicFormat is the topic format for the blob subnet.
BlobSubnetTopicFormat = GossipProtocolAndDigest + GossipBlobSidecarMessage + "_%d"
)

Some files were not shown because too many files have changed in this diff.