Compare commits

...

76 Commits

Author SHA1 Message Date
nisdas
501ad49f58 handle quic 2024-04-03 14:26:52 +08:00
nisdas
875c654f55 Merge branch 'quic' of https://github.com/prysmaticlabs/geth-sharding into quic 2024-04-03 13:58:49 +08:00
Manu NALEPA
c3ccf8d536 Logging: Display the number of TCP/QUIC connected peers. 2024-04-02 11:36:00 +02:00
Manu NALEPA
649e2ea9fa Status: Implement {Inbound,Outbound}Connected{TCP,QUIC}. 2024-04-02 11:36:00 +02:00
Manu NALEPA
f3b54dbdd3 p2p: Add QUIC support. 2024-04-02 11:36:00 +02:00
Manu NALEPA
5cd86812c7 p2p Start: Fix error message. 2024-04-02 11:36:00 +02:00
Manu NALEPA
6b7a7a1c10 multiAddressBuilderWithID: Add the ability to build QUIC multiaddr 2024-04-02 11:36:00 +02:00
Manu NALEPA
4efb331924 tcp ==> libp2ptcp 2024-04-02 11:36:00 +02:00
Manu NALEPA
46497793b9 NewService: Bubbles up errors. 2024-04-02 11:36:00 +02:00
Manu NALEPA
1a0bd725fa buildOptions: Improve log. 2024-04-02 11:36:00 +02:00
Manu NALEPA
ebbdc78b2b MultiAddressBuilder{WithID}: Refactor. 2024-04-02 11:36:00 +02:00
Manu NALEPA
a1bb70e50a ensurePeerConnections: Remove capital letter in error message. 2024-04-02 11:36:00 +02:00
Manu NALEPA
b95bdf87fd beacon-blocks-by-range: Add --network option. 2024-04-02 11:36:00 +02:00
Manu NALEPA
3102a79a5d (Unrelated) DoppelGanger: Improve message. 2024-04-02 11:36:00 +02:00
cui
2cc3f69a3f using slices.Contains (#13835) 2024-04-01 21:26:52 +00:00
cui
a861489a83 using slices.ContainsFunc (#13838) 2024-04-01 21:15:38 +00:00
cui
0e1c585f7d using slices.Contains (#13837) 2024-04-01 21:10:46 +00:00
cui
9df20e616c using slices.IndexFunc (#13834) 2024-04-01 20:04:40 +00:00
kasey
53fdd2d062 allow other pkgs to check for blobs in pruning cache (#13788)
* allow other pkgs to check for blobs in pruning cache

* address deepsource complaints

* custom error to simplify test setup

* add AllAvailable method

* make storage summary slot field private

* unit test and off-by-one fix

* remove comment with copy of tested function

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-01 14:19:51 +00:00
Sammy Rosso
2b4bb5d890 Fixed spelling mistakes in comments (#13833) 2024-04-01 11:12:20 +00:00
Nishant Das
38f208d70d Reject Empty Bundles (#13798)
* reject it

* test

* add test case
2024-04-01 04:37:36 +00:00
Nishant Das
65b90abdda Maximize Peer Capacity When Syncing (#13820)
* maximize it

* fix it

* lint

* add test

* Update beacon-chain/sync/initial-sync/blocks_fetcher.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* logs

* kasey's review

* kasey's review

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-03-30 14:54:11 +00:00
kasey
f3b49d4eaf Repair idx 13486 (#13831)
* Revert "Modify the algorithm of `updateFinalizedBlockRoots` (#13486)"

This reverts commit 32fb183392.

* migration to fix index corruption from pr 13486

* bail as soon as we see 10 epochs without the bug

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-29 17:29:39 +00:00
Manu NALEPA
37332e2e49 Logging: Display the number of TCP/QUIC connected peers. 2024-03-29 11:50:00 +01:00
Manu NALEPA
123b034d0b Status: Implement {Inbound,Outbound}Connected{TCP,QUIC}. 2024-03-29 11:50:00 +01:00
Manu NALEPA
7f449660f0 p2p: Add QUIC support. 2024-03-29 11:49:56 +01:00
Manu NALEPA
a2d0702aa5 p2p Start: Fix error message. 2024-03-29 11:31:43 +01:00
Manu NALEPA
c9ccc1cb2c multiAddressBuilderWithID: Add the ability to build QUIC multiaddr 2024-03-29 11:31:43 +01:00
Manu NALEPA
a9687a5f35 tcp ==> libp2ptcp 2024-03-29 11:31:43 +01:00
Manu NALEPA
3996a5b5be NewService: Bubbles up errors. 2024-03-29 11:31:43 +01:00
Manu NALEPA
e01decfc4f buildOptions: Improve log. 2024-03-29 11:31:43 +01:00
Manu NALEPA
8ea340026b MultiAddressBuilder{WithID}: Refactor. 2024-03-29 11:31:43 +01:00
Manu NALEPA
9e382c4ff4 ensurePeerConnections: Remove capital letter in error message. 2024-03-29 11:31:43 +01:00
Manu NALEPA
54b2ee8435 aaaa 2024-03-29 11:31:43 +01:00
Lorenzo
5b1da7353c feat(direct peers): configure static peers to be direct peers in pubsub options (#13773) 2024-03-29 04:00:40 +00:00
terence
9f17e65860 Fill in missing debug logs for blob p2p IGNORE/REJECT (#13825) 2024-03-28 16:11:55 +00:00
Manu NALEPA
da51fae94c (Unrelated) DoppelGanger: Improve message. 2024-03-28 13:34:13 +01:00
Manu NALEPA
9b2d53b0d1 Bump libp2p to v0.33.1 (#13784)
* Run bare `//:gazelle -- update-repos`.

--> Removes some blank lines.

* Update libp2p to `v0.33.1`.
2024-03-28 08:38:46 +00:00
terence
d6f9196707 Change goodbye message from rate limited peer to debug verbosity (#13819) 2024-03-28 04:14:37 +00:00
Potuz
1b0e09369e Add metrics to track pending attestations (#13815) 2024-03-27 18:20:53 +00:00
Potuz
12482eeb40 Remove check for duplicates in pending attestation queue (#13814)
* Remove check for duplicates in pending attestation queue

The current queue will only save one unaggregated attestation for a pending block, because we wrap the object into a SignedAggregateAttestationAndProof with a zeroed aggregator.

* fix tests
2024-03-27 16:39:45 +00:00
Joel Rousseau
acc307b959 Command-line interface for visualizing min/max span bucket (#13748)
* add max/min span visualisation tool cli

* go mod tidy

* lint imports

* remove typo

* fix epoch table value

* fix deepsource

* add dep to bazel

* fix dep import order

* change command name from span to slasher-span-display

* change command args style using - instead of _

* sed s/CONFIGURATION/SLASHER PARAMS//

* change double neg to double pos condition

* remove unused anonymous func

* better function naming

* add range condition

* [deepsource] Fix Empty slice literal used to declare a variable
    GO-W1027

* correct typo

* do not show incorrect epochs due to round robin

* fix import

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-03-27 16:15:39 +00:00
Afanti
c1d75c295a chore: enhance comment and more readable (#13792) 2024-03-27 14:43:55 +00:00
Potuz
fad118cb04 Simplify ValidateAttestationTime (#13813)
ValidateClock in ValidateAttestationTime is useless.

The check is that attSlot is not greater than currentSlot + 128 slots.

Later there's a check that the attSlot start time is not greater than the
current slot start time + clockDisparity.

If attSlot were greater than currentSlot + 128 slots, then the second check
would fail anyway.

The latter check already guarantees that attSlot cannot be larger than
currentSlot, therefore it may never happen that attEpoch > currentEpoch. We just
need to check for Deneb that attEpoch >= currentEpoch - 1.

This also removes some duplicated variables, like the attestation epoch being
computed twice.
2024-03-27 14:17:16 +00:00
kasey
cdd1d819df Refactor batch verifier for sharing across packages (#13812)
* refactor batch verifier to share with pending queue

* unit test for batch verifier

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-27 12:36:17 +00:00
terence
97edffaff5 Add bid value metrics (#13804) 2024-03-26 14:58:41 +00:00
Radosław Kapka
6de7df6b9d Get genesis only once (#13796) 2024-03-26 03:26:34 +00:00
Sammy Rosso
14d7416c16 Spec test coverage report hack (#13718)
* Spec test report hack

* no export

* fix shell complaint

* shell fix?

* shell again?

* chmod +x ./hack/spectest-report.sh

* Review + improvements

* Remove unwanted change

* Add exclusion list

* Fix path + add eip6110 to exclusion

* Fix bazel path nonsense

* Add extra detail about specific test

* Cleanup exclusion list

* Add fail conditions

* Add mkdir

* Shorten filename + mkdir only if new

* Fix names

* Add to exclusion list

* Add report to .gitignore

* Back to stupid names

* Add Bazel flags option

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-25 16:10:32 +00:00
Radosław Kapka
6782df917a Utilize next slot cache in block rewards rpc (#13684)
* Utilize next slot cache in block rewards rpc

* msg fix

* tests
2024-03-25 08:56:20 +00:00
Bharath Vedartham
3d2230223f create the log file along with its parent directory if not present (#12675)
* Remove Feature Flag From Prater (#12082)

* Use Epoch boundary cache to retrieve balances (#12083)

* Use Epoch boundary cache to retrieve balances

* save boundary states before inserting to forkchoice

* move up last block save

* remove boundary checks on balances

* fix ordering

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* create the log file along with its parent directory if not present

* only give ReadWritePermissions to the log file

* use io/file package to create the parent directories

* fix ci related issues

* add regression tests

* run gazelle

* fix tests

* remove print statements

* gazelle

* Remove failing test for MkdirAll, this failure is not expected

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-22 15:32:08 +00:00
Preston Van Loon
b008a6422d Add tarball support for docker images (#13790) 2024-03-22 15:31:29 +00:00
Fredrik Svantes
d19365507f Set default LocalBlockValueBoost to 10 (#13772)
* Set default LocalBlockValueBoost to 10

* Update base.go

* Update mainnet_config.go
2024-03-22 13:18:20 +00:00
kasey
c05e39a668 fix handling of goodbye messages for limited peers (#13785)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-22 13:06:16 +00:00
Radosław Kapka
63c2b3563a Optimize GetDuties VC action (#13789)
* wait groups

* errgroup

* tests

* bzl

* review
2024-03-22 09:50:19 +00:00
Justin Traglia
a6e86c6731 Rename payloadattribute Timestamps to Timestamp (#13523)
Co-authored-by: terence <terence@prysmaticlabs.com>
2024-03-21 21:11:01 +00:00
Radosław Kapka
32fb183392 Modify the algorithm of updateFinalizedBlockRoots (#13486)
* rename error var

* new algo

* replay_test

* add comment

* review

* fill out parent root

* handle edge cases

* review
2024-03-21 21:09:56 +00:00
carrychair
cade09ba0b chore: fix some typos (#13726)
Signed-off-by: carrychair <linghuchong404@gmail.com>
2024-03-21 21:00:21 +00:00
Potuz
f85ddfe265 Log the slot and blockroot when we deadline waiting for blobs (#13774) 2024-03-21 20:29:23 +00:00
terence
3b97094ea4 Log da block root in hex (#13787) 2024-03-21 20:26:17 +00:00
Nishant Das
acdbf7c491 expand it (#13770) 2024-03-21 19:57:22 +00:00
Potuz
1cc1effd75 Revert "pass justified=finalized in Prater (#13695)" (#13709)
This reverts commit 102518e106.
2024-03-21 17:42:40 +00:00
james-prysm
f7f1d249f2 Fix get validator endpoint for empty query parameters (#13780)
* fix handlers for get validators

* removing log
2024-03-21 14:00:07 +00:00
kasey
02abb3e3c0 add log message if in da check at slot end (#13776)
* add log message if in da check at slot end

* don't bother logging late da check start

* break up defer with a var, too dense all together

* pass slot instead of block ref

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-20 19:31:09 +00:00
james-prysm
2255c8b287 setting missing beacon API (#13778) 2024-03-20 17:59:15 +00:00
terence
27ecf448a7 Add da waited time to sync block log (#13775) 2024-03-20 14:53:02 +00:00
james-prysm
e243f04e44 validator client on rest mode has an inappropriate context deadline for events (#13771)
* addressing errors on events endpoint

* reverting timeout on get health

* fixing linting

* fixing more linting

* Update validator/client/beacon-api/beacon_api_validator_client.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* Update beacon-chain/rpc/eth/events/events.go

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>

* reverting change and removing line on context done which creates a superfluous response.WriteHeader error

* gofmt

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-20 13:19:05 +00:00
Manu NALEPA
fca1adbad7 Re-design TestStartDiscV5_DiscoverPeersWithSubnets test (#13766)
* `Test_AttSubnets`: Factorize.

* `filterPeerForAttSubnet`: `O(n)` ==> `O(1)`

* `FindPeersWithSubnet`: Optimize.

* `TestStartDiscV5_DiscoverPeersWithSubnets`: Complete re-design.

* `broadcastAttestation`: Use `log.WithFields`.

* `filterPeer`: Refactor comments.

* Make deepsource happy.

* `TestStartDiscV5_FindPeersWithSubnet`: Add context cancellation.

Add some notes on `FindPeersWithSubnet` about
this limitation as well.
2024-03-20 03:36:00 +00:00
Radosław Kapka
b692722ddf Optimize SubmitAggregateSelectionProof VC action (#13711)
* Optimize `SubscribeCommitteeSubnets` VC action

* test fixes

* remove newline

* Optimize `SubmitAggregateSelectionProof`

* mock

* bzl gzl

* test fixes
2024-03-19 14:09:07 +00:00
Nishant Das
c4f6020677 add mplex timeout (#13745) 2024-03-19 13:37:23 +00:00
Chanh Le
d779e65d4e chore(kzg): Additional tests for KZG commitments (#13758)
* add a test explaining kzgRootIndex

* minor

* minor
2024-03-19 09:08:02 +00:00
terence
357211b7d9 Update spec test to official 1.4.0 (#13761) 2024-03-18 23:39:03 +00:00
Potuz
2dd48343a2 Set default fee recipient if tracked val fails (#13768) 2024-03-18 19:35:34 +00:00
james-prysm
7f931bf65b Keymanager APIs - get,post,delete graffiti (#13474)
* wip

* adding set and delete graffiti

* fixing mock

* fixing mock linting and putting in scaffolds for unit tests

* adding some tests

* gaz

* adding tests

* updating missing unit test

* fixing unit test

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update validator/client/propose.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/rpc/handlers_keymanager.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/propose.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek's feedback

* fixing tests

* using wrapper for graffiti

* fixing linting

* wip

* fixing setting proposer settings

* more partial fixes to tests

* gaz

* fixing tests and setting logic

* changing keymanager

* fixing tests and making graffiti optional in the proposer file

* remove unneeded lines

* reverting unintended changes

* Update validator/client/propose.go

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* addressing feedback

* removing uneeded line

* fixing bad merge resolution

* gofmt

* gaz

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-03-18 15:03:08 +00:00
Nishant Das
fda4589251 Rewrite Pruning Implementation To Handle EIP 7045 (#13762)
* make it very big

* use new pruning implementation

* handle pre deneb

* revert cache change

* less verbose

* gaz

* Update beacon-chain/operations/attestations/prune_expired.go

Co-authored-by: Potuz <potuz@prysmaticlabs.com>

* gofmt

* be safer

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2024-03-18 12:57:21 +00:00
kasey
34593d34d4 allow blob by root within da period (#13757)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-18 03:15:17 +00:00
Potuz
4d18e590ed Rename mispelled variable (#13759) 2024-03-17 20:47:19 +00:00
190 changed files with 4698 additions and 2052 deletions

.gitignore vendored
View File

@@ -41,3 +41,6 @@ jwt.hex
# manual testing
tmp
# spectest coverage reports
report.txt

View File

@@ -2,7 +2,7 @@
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.3.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.3.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.3.0)
[![Consensus_Spec_Version 1.4.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.4.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)

View File

@@ -130,9 +130,9 @@ aspect_bazel_lib_register_toolchains()
http_archive(
name = "rules_oci",
sha256 = "c71c25ed333a4909d2dd77e0b16c39e9912525a98c7fa85144282be8d04ef54c",
strip_prefix = "rules_oci-1.3.4",
url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.3.4/rules_oci-v1.3.4.tar.gz",
sha256 = "4a276e9566c03491649eef63f27c2816cc222f41ccdebd97d2c5159e84917c3b",
strip_prefix = "rules_oci-1.7.4",
url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.7.4/rules_oci-v1.7.4.tar.gz",
)
load("@rules_oci//oci:dependencies.bzl", "rules_oci_dependencies")
@@ -243,9 +243,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0-beta.7"
consensus_spec_test_version = "v1.4.0-beta.7-hotfix"
consensus_spec_version = "v1.4.0"
bls_test_version = "v0.1.1"
@@ -262,7 +260,7 @@ filegroup(
)
""",
sha256 = "c282c0f86f23f3d2e0f71f5975769a4077e62a7e3c7382a16bd26a7e589811a0",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -278,7 +276,7 @@ filegroup(
)
""",
sha256 = "4649c35aa3b8eb0cfdc81bee7c05649f90ef36bede5b0513e1f2e8baf37d6033",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -294,7 +292,7 @@ filegroup(
)
""",
sha256 = "c5a03f724f757456ffaabd2a899992a71d2baf45ee4db65ca3518f2b7ee928c8",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_test_version,
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
http_archive(
@@ -308,7 +306,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "049c29267310e6b88280f4f834a75866c2f5b9036fa97acb9d9c6db8f64d9118",
sha256 = "cd1c9d97baccbdde1d2454a7dceb8c6c61192a3b581eee12ffc94969f2db8453",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -88,7 +88,7 @@ func TestToggle(t *testing.T) {
}
}
func TestToogleMultipleTimes(t *testing.T) {
func TestToggleMultipleTimes(t *testing.T) {
t.Parallel()
v := New()
@@ -101,16 +101,16 @@ func TestToogleMultipleTimes(t *testing.T) {
expected := i%2 != 0
if v.IsSet() != expected {
t.Fatalf("AtomicBool.Toogle() doesn't work after %d calls, expected: %v, got %v", i, expected, v.IsSet())
t.Fatalf("AtomicBool.Toggle() doesn't work after %d calls, expected: %v, got %v", i, expected, v.IsSet())
}
if pre == v.IsSet() {
t.Fatalf("AtomicBool.Toogle() returned wrong value at the %dth calls, expected: %v, got %v", i, !v.IsSet(), pre)
t.Fatalf("AtomicBool.Toggle() returned wrong value at the %dth calls, expected: %v, got %v", i, !v.IsSet(), pre)
}
}
}
func TestToogleAfterOverflow(t *testing.T) {
func TestToggleAfterOverflow(t *testing.T) {
t.Parallel()
var value int32 = math.MaxInt32
@@ -122,7 +122,7 @@ func TestToogleAfterOverflow(t *testing.T) {
v.Toggle()
expected := math.MaxInt32%2 == 0
if v.IsSet() != expected {
t.Fatalf("AtomicBool.Toogle() doesn't work after overflow, expected: %v, got %v", expected, v.IsSet())
t.Fatalf("AtomicBool.Toggle() doesn't work after overflow, expected: %v, got %v", expected, v.IsSet())
}
// make sure overflow happened
@@ -135,7 +135,7 @@ func TestToogleAfterOverflow(t *testing.T) {
v.Toggle()
expected = !expected
if v.IsSet() != expected {
t.Fatalf("AtomicBool.Toogle() doesn't work after the second call after overflow, expected: %v, got %v", expected, v.IsSet())
t.Fatalf("AtomicBool.Toggle() doesn't work after the second call after overflow, expected: %v, got %v", expected, v.IsSet())
}
}

View File

@@ -20,6 +20,7 @@ package event
import (
"errors"
"reflect"
"slices"
"sync"
)
@@ -219,12 +220,9 @@ type caseList []reflect.SelectCase
// find returns the index of a case containing the given channel.
func (cs caseList) find(channel interface{}) int {
for i, cas := range cs {
if cas.Chan.Interface() == channel {
return i
}
}
return -1
return slices.IndexFunc(cs, func(selectCase reflect.SelectCase) bool {
return selectCase.Chan.Interface() == channel
})
}
// delete removes the given case from cs.

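Illustrative aside (not part of the diff): the refactor above leans on the Go 1.21 standard-library slices package, whose IndexFunc returns -1 when no element matches — the same contract as the hand-rolled loop it replaces. A minimal standalone sketch:

package main

import (
	"fmt"
	"slices"
)

func main() {
	cs := []string{"a", "b", "c"}
	// IndexFunc returns the index of the first element satisfying the
	// predicate, or -1 if none does.
	fmt.Println(slices.IndexFunc(cs, func(s string) bool { return s == "b" })) // 1
	fmt.Println(slices.IndexFunc(cs, func(s string) bool { return s == "z" })) // -1
}
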
View File

@@ -63,7 +63,7 @@ func Scatter(inputLen int, sFunc func(int, int, *sync.RWMutex) (interface{}, err
return results, nil
}
// calculateChunkSize calculates a suitable chunk size for the purposes of parallelisation.
// calculateChunkSize calculates a suitable chunk size for the purposes of parallelization.
func calculateChunkSize(items int) int {
// Start with a simple even split
chunkSize := items / runtime.GOMAXPROCS(0)

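Illustrative aside: only the opening line of calculateChunkSize is visible in the hunk above, so the sketch below fills in an assumed lower-bound guard; it shows the even-split idea of dividing work across runtime.GOMAXPROCS(0) processors.

package main

import (
	"fmt"
	"runtime"
)

// chunkSize sketches the even-split heuristic: divide items across available
// processors, with an assumed guard so a chunk is never smaller than one item.
func chunkSize(items int) int {
	size := items / runtime.GOMAXPROCS(0)
	if size < 1 {
		return 1
	}
	return size
}

func main() {
	fmt.Println(chunkSize(1000)) // e.g. 125 when GOMAXPROCS is 8
}
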
View File

@@ -2,7 +2,7 @@
# This script serves as a wrapper around bazel to limit the scope of environment variables that
# may change the action output. Using this script should result in a higher cache hit ratio for
# cached actions with a more heremtic build.
# cached actions with a more hermetic build.
env -i \
PATH=/usr/bin:/bin \

View File

@@ -82,19 +82,20 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"deposits": len(block.Body().Deposits()),
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
"deposits": len(block.Body().Deposits()),
}
log.WithFields(lf).Debug("Synced new block")
} else {

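Illustrative aside: the new dataAvailabilityWaitedTime entry above rides on logrus structured logging. A minimal standalone sketch of the WithFields pattern (field values are made up):

package main

import (
	"time"

	"github.com/sirupsen/logrus"
)

func main() {
	logrus.SetLevel(logrus.DebugLevel)
	// WithFields attaches structured key/value pairs to a single entry, so
	// log processors can filter on a field like dataAvailabilityWaitedTime.
	logrus.WithFields(logrus.Fields{
		"slot":                       123,
		"dataAvailabilityWaitedTime": 250 * time.Millisecond,
	}).Debug("Synced new block")
}
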
View File

@@ -6,6 +6,7 @@ import (
"time"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
@@ -558,6 +559,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// The gossip handler for blobs writes the index of each verified blob referencing the given
// root to the channel returned by blobNotifiers.forRoot.
nc := s.blobNotifiers.forRoot(root)
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
// Avoid logging if DA check is called after next slot start.
if nextSlot.After(time.Now()) {
nst := time.AfterFunc(time.Until(nextSlot), func() {
if len(missing) == 0 {
return
}
log.WithFields(daCheckLogFields(root, signed.Block().Slot(), expected, len(missing))).
Error("Still waiting for DA check at slot end.")
})
defer nst.Stop()
}
for {
select {
case idx := <-nc:
@@ -571,11 +586,20 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
s.blobNotifiers.delete(root)
return nil
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")
return errors.Wrapf(ctx.Err(), "context deadline waiting for blob sidecars slot: %d, BlockRoot: %#x", block.Slot(), root)
}
}
}
func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
return logrus.Fields{
"slot": slot,
"root": fmt.Sprintf("%#x", root),
"blobsExpected": expected,
"blobsWaiting": missing,
}
}
// lateBlockTasks is called 4 seconds into the slot and performs tasks
// related to late blocks. It emits a MissedSlot state feed event.
// It calls FCU and sets the right attributes if we are proposing next slot

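Illustrative aside: the hunk above arms a timer for the next-slot boundary and cancels it with a deferred Stop if the DA wait finishes first. A standalone sketch of that pattern (durations are placeholders):

package main

import (
	"fmt"
	"time"
)

func main() {
	deadline := 50 * time.Millisecond // stands in for time.Until(nextSlot)
	// AfterFunc fires the warning exactly once, at the slot boundary.
	t := time.AfterFunc(deadline, func() {
		fmt.Println("Still waiting for DA check at slot end.")
	})
	// Stop cancels the pending log if the wait completes before the boundary.
	defer t.Stop()

	time.Sleep(100 * time.Millisecond) // simulated blob wait that overruns the slot
}
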
View File

@@ -60,7 +60,7 @@ func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcu
// logNonCanonicalBlockReceived prints a message informing that the received
// block is not the head of the chain. It requires the caller holds a lock on
// Foprkchoice.
// Forkchoice.
func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
if err != nil {

View File

@@ -290,18 +290,10 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
if params.BeaconConfig().ConfigName != params.PraterName {
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
} else {
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
}
if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")

View File

@@ -804,7 +804,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
depositTrie, err := trie.GenerateTrieFromItems(trieItems, params.BeaconConfig().DepositContractTreeDepth)
assert.NoError(t, err)
// Perform this in a non-sensical ordering
// Perform this in a nonsensical ordering
require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 10, [32]byte{}, 0))
require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0))
require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 3, [32]byte{}, 0))

View File

@@ -784,7 +784,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
depositTrie, err := trie.GenerateTrieFromItems(trieItems, params.BeaconConfig().DepositContractTreeDepth)
assert.NoError(t, err)
// Perform this in a non-sensical ordering
// Perform this in a nonsensical ordering
err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
require.NoError(t, err)
err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)

View File

@@ -12,7 +12,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
var (
@@ -133,9 +132,6 @@ func ComputeSubnetFromCommitteeAndSlot(activeValCount uint64, comIdx primitives.
//
// In the attestation must be within the range of 95 to 102 in the example above.
func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(attSlot, uint64(genesisTime.Unix())); err != nil {
return err
}
attTime, err := slots.ToTime(uint64(genesisTime.Unix()), attSlot)
if err != nil {
return err
@@ -182,24 +178,15 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
}
// EIP-7045: Starting in Deneb, allow any attestations from the current or previous epoch.
currentEpoch := slots.ToEpoch(currentSlot)
prevEpoch, err := currentEpoch.SafeSub(1)
if err != nil {
log.WithError(err).Debug("Ignoring underflow for a deneb attestation inclusion check in epoch 0")
prevEpoch = 0
}
attSlotEpoch := slots.ToEpoch(attSlot)
if attSlotEpoch != currentEpoch && attSlotEpoch != prevEpoch {
if attEpoch+1 < currentEpoch {
attError = fmt.Errorf(
"attestation epoch %d not within current epoch %d or previous epoch %d",
attSlot/params.BeaconConfig().SlotsPerEpoch,
"attestation epoch %d not within current epoch %d or previous epoch",
attEpoch,
currentEpoch,
prevEpoch,
)
return errors.Join(ErrTooLate, attError)
}
return nil
}
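
Illustrative aside: the simplified rule above rejects attestations with attEpoch+1 < currentEpoch, which keeps the current and previous epoch (EIP-7045) and drops the SafeSub underflow handling at epoch 0. A standalone sketch, using uint64 in place of primitives.Epoch:

package main

import "fmt"

// tooOldForInclusion mirrors the rejection above: an attestation is too old
// once the chain is more than one epoch past it (EIP-7045 keeps the current
// and previous epoch). The future-dated side is bounded by earlier checks,
// and writing the comparison with +1 on the left cannot underflow.
func tooOldForInclusion(attEpoch, currentEpoch uint64) bool {
	return attEpoch+1 < currentEpoch
}

func main() {
	fmt.Println(tooOldForInclusion(8, 15))  // true: rejected with ErrTooLate
	fmt.Println(tooOldForInclusion(14, 15)) // false: previous epoch still valid
	fmt.Println(tooOldForInclusion(0, 0))   // false: no underflow at genesis
}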

View File

@@ -197,7 +197,7 @@ func Test_ValidateAttestationTime(t *testing.T) {
-500 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second,
).Add(200 * time.Millisecond),
},
wantedErr: "attestation epoch 8 not within current epoch 15 or previous epoch 14",
wantedErr: "attestation epoch 8 not within current epoch 15 or previous epoch",
},
{
name: "attestation.slot is well beyond current slot",
@@ -205,7 +205,7 @@ func Test_ValidateAttestationTime(t *testing.T) {
attSlot: 1 << 32,
genesisTime: prysmTime.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
},
wantedErr: "which exceeds max allowed value relative to the local clock",
wantedErr: "attestation slot 4294967296 not within attestation propagation range of 0 to 15 (current slot)",
},
}
for _, tt := range tests {

View File

@@ -22,7 +22,7 @@ var balanceCache = cache.NewEffectiveBalanceCache()
// """
// Return the combined effective balance of the ``indices``.
// ``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
// Math safe up to ~10B ETH, afterwhich this overflows uint64.
// Math safe up to ~10B ETH, after which this overflows uint64.
// """
// return Gwei(max(EFFECTIVE_BALANCE_INCREMENT, sum([state.validators[index].effective_balance for index in indices])))
func TotalBalance(state state.ReadOnlyValidators, indices []primitives.ValidatorIndex) uint64 {

View File

@@ -59,7 +59,7 @@ func ComputeDomainAndSign(st state.ReadOnlyBeaconState, epoch primitives.Epoch,
return ComputeDomainAndSignWithoutState(st.Fork(), epoch, domain, st.GenesisValidatorsRoot(), obj, key)
}
// ComputeDomainAndSignWithoutState offers the same functionalit as ComputeDomainAndSign without the need to provide a BeaconState.
// ComputeDomainAndSignWithoutState offers the same functionality as ComputeDomainAndSign without the need to provide a BeaconState.
// This is particularly helpful for signing values in tests.
func ComputeDomainAndSignWithoutState(fork *ethpb.Fork, epoch primitives.Epoch, domain [4]byte, vr []byte, obj fssz.HashRoot, key bls.SecretKey) ([]byte, error) {
// EIP-7044: Beginning in Deneb, fix the fork version to Capella for signed exits.

View File

@@ -4,6 +4,7 @@ go_library(
name = "go_default_library",
srcs = [
"blob.go",
"cache.go",
"ephemeral.go",
"log.go",
"metrics.go",
@@ -33,6 +34,7 @@ go_test(
name = "go_default_test",
srcs = [
"blob_test.go",
"cache_test.go",
"pruner_test.go",
],
embed = [":go_default_library"],

View File

@@ -1,6 +1,7 @@
package filesystem
import (
"context"
"fmt"
"os"
"path"
@@ -103,12 +104,29 @@ func (bs *BlobStorage) WarmCache() {
return
}
go func() {
if err := bs.pruner.prune(0); err != nil {
start := time.Now()
if err := bs.pruner.warmCache(); err != nil {
log.WithError(err).Error("Error encountered while warming up blob pruner cache")
}
log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
}()
}
// ErrBlobStorageSummarizerUnavailable is a sentinel error returned when there is no pruner/cache available.
// This should be used by code that optionally uses the summarizer to optimize rpc requests. Being able to
// fallback when there is no summarizer allows client code to avoid test complexity where the summarizer doesn't matter.
var ErrBlobStorageSummarizerUnavailable = errors.New("BlobStorage not initialized with a pruner or cache")
// WaitForSummarizer blocks until the BlobStorageSummarizer is ready to use.
// BlobStorageSummarizer is not ready immediately on node startup because it needs to sample the blob filesystem to
// determine which blobs are available.
func (bs *BlobStorage) WaitForSummarizer(ctx context.Context) (BlobStorageSummarizer, error) {
if bs.pruner == nil {
return nil, ErrBlobStorageSummarizerUnavailable
}
return bs.pruner.waitForCache(ctx)
}
// Save saves blobs given a list of sidecars.
func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
startTime := time.Now()

View File

@@ -0,0 +1,119 @@
package filesystem
import (
"sync"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
type blobIndexMask [fieldparams.MaxBlobsPerBlock]bool
// BlobStorageSummary represents cached information about the BlobSidecars on disk for each root the cache knows about.
type BlobStorageSummary struct {
slot primitives.Slot
mask blobIndexMask
}
// HasIndex returns true if the BlobSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
if idx >= fieldparams.MaxBlobsPerBlock {
return false
}
return s.mask[idx]
}
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
if count > fieldparams.MaxBlobsPerBlock {
return false
}
for i := 0; i < count; i++ {
if !s.mask[i] {
return false
}
}
return true
}
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
Summary(root [32]byte) BlobStorageSummary
}
type blobStorageCache struct {
mu sync.RWMutex
nBlobs float64
cache map[string]BlobStorageSummary
}
var _ BlobStorageSummarizer = &blobStorageCache{}
func newBlobStorageCache() *blobStorageCache {
return &blobStorageCache{
cache: make(map[string]BlobStorageSummary, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
// Summary returns the BlobStorageSummary for `root`. The BlobStorageSummary can be used to check for the presence of
// BlobSidecars based on Index.
func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
k := rootString(root)
s.mu.RLock()
defer s.mu.RUnlock()
return s.cache[k]
}
func (s *blobStorageCache) ensure(key string, slot primitives.Slot, idx uint64) error {
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]
v.slot = slot
if !v.mask[idx] {
s.updateMetrics(1)
}
v.mask[idx] = true
s.cache[key] = v
return nil
}
func (s *blobStorageCache) slot(key string) (primitives.Slot, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
v, ok := s.cache[key]
if !ok {
return 0, false
}
return v.slot, ok
}
func (s *blobStorageCache) evict(key string) {
var deleted float64
s.mu.Lock()
v, ok := s.cache[key]
if ok {
for i := range v.mask {
if v.mask[i] {
deleted += 1
}
}
}
delete(s.cache, key)
s.mu.Unlock()
if deleted > 0 {
s.updateMetrics(-deleted)
}
}
func (s *blobStorageCache) updateMetrics(delta float64) {
s.nBlobs += delta
blobDiskCount.Set(s.nBlobs)
blobDiskSize.Set(s.nBlobs * bytesPerSidecar)
}
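
Illustrative aside: a standalone sketch of the AllAvailable contract defined above, with a local stand-in type (maxBlobsPerBlock is assumed to be 6, the Deneb value of fieldparams.MaxBlobsPerBlock):

package main

import "fmt"

const maxBlobsPerBlock = 6 // assumed Deneb value of fieldparams.MaxBlobsPerBlock

type summary struct{ mask [maxBlobsPerBlock]bool }

// allAvailable mirrors BlobStorageSummary.AllAvailable: true iff blobs at
// indices 0..count-1 are all present; a count beyond the max never matches.
func (s summary) allAvailable(count int) bool {
	if count > maxBlobsPerBlock {
		return false
	}
	for i := 0; i < count; i++ {
		if !s.mask[i] {
			return false
		}
	}
	return true
}

func main() {
	var s summary
	s.mask[0], s.mask[1] = true, true
	fmt.Println(s.allAvailable(2)) // true: indices 0 and 1 on disk
	fmt.Println(s.allAvailable(3)) // false: index 2 missing
	fmt.Println(s.allAvailable(0)) // true: a block with no blobs needs nothing
}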

View File

@@ -0,0 +1,150 @@
package filesystem
import (
"testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestSlotByRoot_Summary(t *testing.T) {
var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
firstSet[0] = true
lastSet[len(lastSet)-1] = true
oneSet[1] = true
for i := range allSet {
allSet[i] = true
}
cases := []struct {
name string
root [32]byte
expected *blobIndexMask
}{
{
name: "not found",
},
{
name: "none set",
expected: &noneSet,
},
{
name: "index 1 set",
expected: &oneSet,
},
{
name: "all set",
expected: &allSet,
},
{
name: "first set",
expected: &firstSet,
},
{
name: "last set",
expected: &lastSet,
},
}
sc := newBlobStorageCache()
for _, c := range cases {
if c.expected != nil {
key := rootString(bytesutil.ToBytes32([]byte(c.name)))
sc.cache[key] = BlobStorageSummary{slot: 0, mask: *c.expected}
}
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
key := bytesutil.ToBytes32([]byte(c.name))
sum := sc.Summary(key)
for i := range c.expected {
ui := uint64(i)
if c.expected == nil {
require.Equal(t, false, sum.HasIndex(ui))
} else {
require.Equal(t, c.expected[i], sum.HasIndex(ui))
}
}
})
}
}
func TestAllAvailable(t *testing.T) {
idxUpTo := func(u int) []int {
r := make([]int, u)
for i := range r {
r[i] = i
}
return r
}
require.DeepEqual(t, []int{}, idxUpTo(0))
require.DeepEqual(t, []int{0}, idxUpTo(1))
require.DeepEqual(t, []int{0, 1, 2, 3, 4, 5}, idxUpTo(6))
cases := []struct {
name string
idxSet []int
count int
aa bool
}{
{
// If there are no blobs committed, then all the committed blobs are available.
name: "none in idx, 0 arg",
count: 0,
aa: true,
},
{
name: "none in idx, 1 arg",
count: 1,
aa: false,
},
{
name: "first in idx, 1 arg",
idxSet: []int{0},
count: 1,
aa: true,
},
{
name: "second in idx, 1 arg",
idxSet: []int{1},
count: 1,
aa: false,
},
{
name: "first missing, 2 arg",
idxSet: []int{1},
count: 2,
aa: false,
},
{
name: "all missing, 1 arg",
count: 6,
aa: false,
},
{
name: "out of bound is safe",
count: fieldparams.MaxBlobsPerBlock + 1,
aa: false,
},
{
name: "max present",
count: fieldparams.MaxBlobsPerBlock,
idxSet: idxUpTo(fieldparams.MaxBlobsPerBlock),
aa: true,
},
{
name: "one present",
count: 1,
idxSet: idxUpTo(1),
aa: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
var mask blobIndexMask
for _, idx := range c.idxSet {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.aa, sum.AllAvailable(c.count))
})
}
}

View File

@@ -1,6 +1,7 @@
package filesystem
import (
"context"
"encoding/binary"
"io"
"path"
@@ -12,7 +13,6 @@ import (
"time"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -32,7 +32,8 @@ type blobPruner struct {
sync.Mutex
prunedBefore atomic.Uint64
windowSize primitives.Slot
slotMap *slotForRoot
cache *blobStorageCache
cacheWarmed chan struct{}
fs afero.Fs
}
@@ -41,13 +42,14 @@ func newBlobPruner(fs afero.Fs, retain primitives.Epoch) (*blobPruner, error) {
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
return &blobPruner{fs: fs, windowSize: r, slotMap: newSlotForRoot()}, nil
cw := make(chan struct{})
return &blobPruner{fs: fs, windowSize: r, cache: newBlobStorageCache(), cacheWarmed: cw}, nil
}
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
// of root->slot mappings and decide when to evict old blobs based on the age of present blobs.
func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) error {
if err := p.slotMap.ensure(rootString(root), latest, idx); err != nil {
if err := p.cache.ensure(rootString(root), latest, idx); err != nil {
return err
}
pruned := uint64(windowMin(latest, p.windowSize))
@@ -62,7 +64,7 @@ func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) e
return nil
}
func windowMin(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
func windowMin(latest, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
@@ -71,6 +73,23 @@ func windowMin(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
return latest - offset
}
func (p *blobPruner) warmCache() error {
if err := p.prune(0); err != nil {
return err
}
close(p.cacheWarmed)
return nil
}
func (p *blobPruner) waitForCache(ctx context.Context) (*blobStorageCache, error) {
select {
case <-p.cacheWarmed:
return p.cache, nil
case <-ctx.Done():
return nil, ctx.Err()
}
}
// Prune prunes blobs in the base directory based on the retention epoch.
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
@@ -122,7 +141,7 @@ func shouldRetain(slot, pruneBefore primitives.Slot) bool {
func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int, error) {
root := rootFromDir(dir)
slot, slotCached := p.slotMap.slot(root)
slot, slotCached := p.cache.slot(root)
// Return early if the slot is cached and doesn't need pruning.
if slotCached && shouldRetain(slot, pruneBefore) {
return 0, nil
@@ -151,7 +170,7 @@ func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int,
if err != nil {
return 0, errors.Wrapf(err, "index could not be determined for blob file %s", scFiles[i])
}
if err := p.slotMap.ensure(root, slot, idx); err != nil {
if err := p.cache.ensure(root, slot, idx); err != nil {
return 0, errors.Wrapf(err, "could not update prune cache for blob file %s", scFiles[i])
}
}
@@ -179,7 +198,7 @@ func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int,
return removed, errors.Wrapf(err, "unable to remove blob directory %s", dir)
}
p.slotMap.evict(rootFromDir(dir))
p.cache.evict(rootFromDir(dir))
return len(scFiles), nil
}
@@ -269,71 +288,3 @@ func filterSsz(s string) bool {
func filterPart(s string) bool {
return filepath.Ext(s) == dotPartExt
}
func newSlotForRoot() *slotForRoot {
return &slotForRoot{
cache: make(map[string]*slotCacheEntry, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
type slotCacheEntry struct {
slot primitives.Slot
mask [fieldparams.MaxBlobsPerBlock]bool
}
type slotForRoot struct {
sync.RWMutex
nBlobs float64
cache map[string]*slotCacheEntry
}
func (s *slotForRoot) updateMetrics(delta float64) {
s.nBlobs += delta
blobDiskCount.Set(s.nBlobs)
blobDiskSize.Set(s.nBlobs * bytesPerSidecar)
}
func (s *slotForRoot) ensure(key string, slot primitives.Slot, idx uint64) error {
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
s.Lock()
defer s.Unlock()
v, ok := s.cache[key]
if !ok {
v = &slotCacheEntry{}
}
v.slot = slot
if !v.mask[idx] {
s.updateMetrics(1)
}
v.mask[idx] = true
s.cache[key] = v
return nil
}
func (s *slotForRoot) slot(key string) (primitives.Slot, bool) {
s.RLock()
defer s.RUnlock()
v, ok := s.cache[key]
if !ok {
return 0, false
}
return v.slot, ok
}
func (s *slotForRoot) evict(key string) {
s.Lock()
defer s.Unlock()
v, ok := s.cache[key]
var deleted float64
if ok {
for i := range v.mask {
if v.mask[i] {
deleted += 1
}
}
s.updateMetrics(-deleted)
}
delete(s.cache, key)
}
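
Illustrative aside: warmCache/waitForCache above use the closed-channel broadcast idiom — close the channel once when ready, and every waiter's select unblocks. A standalone sketch with placeholder names:

package main

import (
	"context"
	"fmt"
	"time"
)

type warmer struct{ ready chan struct{} }

// warm performs the startup scan, then closes ready so all waiters unblock.
func (w *warmer) warm() {
	// ... scan the filesystem here ...
	close(w.ready)
}

// wait blocks until the cache is warm or the caller's context expires.
func (w *warmer) wait(ctx context.Context) error {
	select {
	case <-w.ready:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	w := &warmer{ready: make(chan struct{})}
	go func() { time.Sleep(10 * time.Millisecond); w.warm() }()
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(w.wait(ctx)) // <nil> once warm-up completes
}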

View File

@@ -28,7 +28,7 @@ func TestTryPruneDir_CachedNotExpired(t *testing.T) {
root := fmt.Sprintf("%#x", sc.BlockRoot())
// This slot is right on the edge of what would need to be pruned, so by adding it to the cache and
// skipping any other test setup, we can be certain the hot cache path never touches the filesystem.
require.NoError(t, pr.slotMap.ensure(root, sc.Slot(), 0))
require.NoError(t, pr.cache.ensure(root, sc.Slot(), 0))
pruned, err := pr.tryPruneDir(root, pr.windowSize)
require.NoError(t, err)
require.Equal(t, 0, pruned)
@@ -45,7 +45,7 @@ func TestTryPruneDir_CachedExpired(t *testing.T) {
require.NoError(t, err)
root := fmt.Sprintf("%#x", sc.BlockRoot())
require.NoError(t, fs.Mkdir(root, directoryPermissions)) // make empty directory
require.NoError(t, pr.slotMap.ensure(root, sc.Slot(), 0))
require.NoError(t, pr.cache.ensure(root, sc.Slot(), 0))
pruned, err := pr.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 0, pruned)
@@ -63,7 +63,7 @@ func TestTryPruneDir_CachedExpired(t *testing.T) {
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, cok := bs.pruner.slotMap.slot(root)
cs, cok := bs.pruner.cache.slot(root)
require.Equal(t, true, cok)
require.Equal(t, slot, cs)
@@ -95,12 +95,12 @@ func TestTryPruneDir_SlotFromFile(t *testing.T) {
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, ok := bs.pruner.slotMap.slot(root)
cs, ok := bs.pruner.cache.slot(root)
require.Equal(t, true, ok)
require.Equal(t, slot, cs)
// evict it from the cache so that we trigger the file read path
bs.pruner.slotMap.evict(root)
_, ok = bs.pruner.slotMap.slot(root)
bs.pruner.cache.evict(root)
_, ok = bs.pruner.cache.slot(root)
require.Equal(t, false, ok)
// ensure that we see the saved files in the filesystem
@@ -119,7 +119,7 @@ func TestTryPruneDir_SlotFromFile(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
// Set slot equal to the window size, so it should be retained.
var slot primitives.Slot = bs.pruner.windowSize
slot := bs.pruner.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
@@ -129,8 +129,8 @@ func TestTryPruneDir_SlotFromFile(t *testing.T) {
// Evict slot mapping from the cache so that we trigger the file read path.
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
bs.pruner.slotMap.evict(root)
_, ok := bs.pruner.slotMap.slot(root)
bs.pruner.cache.evict(root)
_, ok := bs.pruner.cache.slot(root)
require.Equal(t, false, ok)
// Ensure that we see the saved files in the filesystem.
@@ -243,10 +243,8 @@ func TestListDir(t *testing.T) {
}
blobWithSszAndTmp := dirFiles{name: "0x1234567890", isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
fsLayout.children = append(fsLayout.children, notABlob)
fsLayout.children = append(fsLayout.children, childlessBlob)
fsLayout.children = append(fsLayout.children, blobWithSsz)
fsLayout.children = append(fsLayout.children, blobWithSszAndTmp)
fsLayout.children = append(fsLayout.children,
notABlob, childlessBlob, blobWithSsz, blobWithSszAndTmp)
topChildren := make([]string, len(fsLayout.children))
for i := range fsLayout.children {
@@ -282,10 +280,7 @@ func TestListDir(t *testing.T) {
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
if s == notABlob.name {
return true
}
return false
return s == notABlob.name
},
},
{

View File

@@ -118,7 +118,7 @@ type HeadAccessDatabase interface {
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.
type SlasherDatabase interface {
io.Closer
SaveLastEpochsWrittenForValidators(
SaveLastEpochWrittenForValidators(
ctx context.Context, epochByValidator map[primitives.ValidatorIndex]primitives.Epoch,
) error
SaveAttestationRecordsForValidators(

View File

@@ -20,6 +20,7 @@ go_library(
"migration.go",
"migration_archived_index.go",
"migration_block_slot_index.go",
"migration_finalized_parent.go",
"migration_state_validators.go",
"schema.go",
"state.go",

View File

@@ -14,6 +14,7 @@ var migrations = []migration{
migrateArchivedIndex,
migrateBlockSlotIndex,
migrateStateValidators,
migrateFinalizedParent,
}
// RunMigrations defined in the migrations array.

View File

@@ -0,0 +1,87 @@
package kv
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
)
var migrationFinalizedParent = []byte("parent_bug_32fb183")
func migrateFinalizedParent(ctx context.Context, db *bolt.DB) error {
if updateErr := db.Update(func(tx *bolt.Tx) error {
mb := tx.Bucket(migrationsBucket)
if b := mb.Get(migrationFinalizedParent); bytes.Equal(b, migrationCompleted) {
return nil // Migration already completed.
}
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
if bkt == nil {
return fmt.Errorf("unable to read %s bucket for migration", finalizedBlockRootsIndexBucket)
}
bb := tx.Bucket(blocksBucket)
if bb == nil {
return fmt.Errorf("unable to read %s bucket for migration", blocksBucket)
}
c := bkt.Cursor()
var slotsWithoutBug primitives.Slot
maxBugSearch := params.BeaconConfig().SlotsPerEpoch * 10
for k, v := c.Last(); k != nil; k, v = c.Prev() {
// check if context is cancelled in between
if ctx.Err() != nil {
return ctx.Err()
}
idxEntry := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, v, idxEntry); err != nil {
return errors.Wrapf(err, "unable to decode finalized block root container for root=%#x", k)
}
// Not one of the corrupt values
if !bytes.Equal(idxEntry.ParentRoot, k) {
slotsWithoutBug += 1
if slotsWithoutBug > maxBugSearch {
break
}
continue
}
slotsWithoutBug = 0
log.WithField("root", fmt.Sprintf("%#x", k)).Debug("found index entry with incorrect parent root")
// Look up full block to get the correct parent root.
encBlk := bb.Get(k)
if encBlk == nil {
return errors.Wrapf(ErrNotFound, "could not find block for corrupt finalized index entry %#x", k)
}
blk, err := unmarshalBlock(ctx, encBlk)
if err != nil {
return errors.Wrapf(err, "unable to decode block for root=%#x", k)
}
// Replace parent root in the index with the correct value and write it back.
pr := blk.Block().ParentRoot()
idxEntry.ParentRoot = pr[:]
idxEnc, err := encode(ctx, idxEntry)
if err != nil {
return errors.Wrapf(err, "failed to encode finalized index entry for root=%#x", k)
}
if err := bkt.Put(k, idxEnc); err != nil {
return errors.Wrapf(err, "failed to update finalized index entry for root=%#x", k)
}
log.WithField("root", fmt.Sprintf("%#x", k)).
WithField("parentRoot", fmt.Sprintf("%#x", idxEntry.ParentRoot)).
Debug("updated corrupt index entry with correct parent")
}
// Mark migration complete.
return mb.Put(migrationFinalizedParent, migrationCompleted)
}); updateErr != nil {
log.WithError(updateErr).Errorf("could not run finalized parent root index repair migration")
return updateErr
}
return nil
}
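
Illustrative aside: the repair above follows the one-shot migration idiom — check a marker key, do the work, and write the marker in the same transaction, so a crash simply retries on the next start. A minimal sketch with go.etcd.io/bbolt (bucket and key names are placeholders):

package main

import (
	"bytes"
	"log"

	bolt "go.etcd.io/bbolt"
)

var (
	migrationsBucket = []byte("migrations")
	markerKey        = []byte("example_migration")
	completed        = []byte{1}
)

// runOnce executes work at most once per database, guarded by a marker key.
func runOnce(db *bolt.DB, work func(tx *bolt.Tx) error) error {
	return db.Update(func(tx *bolt.Tx) error {
		mb, err := tx.CreateBucketIfNotExists(migrationsBucket)
		if err != nil {
			return err
		}
		if bytes.Equal(mb.Get(markerKey), completed) {
			return nil // already migrated; no-op on later startups
		}
		if err := work(tx); err != nil {
			return err // marker unwritten, so the migration retries next start
		}
		return mb.Put(markerKey, completed)
	})
}

func main() {
	db, err := bolt.Open("example.db", 0o600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	log.Println(runOnce(db, func(tx *bolt.Tx) error { return nil }))
}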

View File

@@ -70,12 +70,12 @@ func (s *Store) LastEpochWrittenForValidators(
return attestedEpochs, err
}
// SaveLastEpochsWrittenForValidators updates the latest epoch a slice
// of validator indices has attested to.
func (s *Store) SaveLastEpochsWrittenForValidators(
// SaveLastEpochWrittenForValidators saves the latest epoch
// that each validator has attested to in the provided map.
func (s *Store) SaveLastEpochWrittenForValidators(
ctx context.Context, epochByValIndex map[primitives.ValidatorIndex]primitives.Epoch,
) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveLastEpochsWrittenForValidators")
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveLastEpochWrittenForValidators")
defer span.End()
const batchSize = 10000
@@ -157,7 +157,7 @@ func (s *Store) CheckAttesterDoubleVotes(
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
encEpoch := encodeTargetEpoch(attToProcess.IndexedAttestation.Data.Target.Epoch)
localDoubleVotes := []*slashertypes.AttesterDoubleVote{}
localDoubleVotes := make([]*slashertypes.AttesterDoubleVote, 0)
for _, valIdx := range attToProcess.IndexedAttestation.AttestingIndices {
// Check if there is signing root in the database for this combination
@@ -166,7 +166,7 @@ func (s *Store) CheckAttesterDoubleVotes(
validatorEpochKey := append(encEpoch, encIdx...)
attRecordsKey := signingRootsBkt.Get(validatorEpochKey)
// An attestation record key is comprised of a signing root (32 bytes).
// An attestation record key consists of a signing root (32 bytes).
if len(attRecordsKey) < attestationRecordKeySize {
// If there is no signing root for this combination,
// then there is no double vote. We can continue to the next validator.
@@ -697,7 +697,7 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
}
// Encode attestation record to bytes.
// The output encoded attestation record consists in the signing root concatened with the compressed attestation record.
// The output encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil {
return []byte{}, errors.New("nil proposal record")
@@ -716,7 +716,7 @@ func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byt
}
// Decode attestation record from bytes.
// The input encoded attestation record consists in the signing root concatened with the compressed attestation record.
// The input encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func decodeAttestationRecord(encoded []byte) (*slashertypes.IndexedAttestationWrapper, error) {
if len(encoded) < rootSize {
return nil, fmt.Errorf("wrong length for encoded attestation record, want minimum %d, got %d", rootSize, len(encoded))

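Illustrative aside: the encoding described in the corrected comments above is a plain concatenation — a fixed 32-byte signing root followed by the compressed record. A standalone sketch of that framing and its length check (the compression codec is out of scope here):

package main

import (
	"errors"
	"fmt"
)

const rootSize = 32

// encodeRecord frames a record as: 32-byte signing root || compressed payload.
func encodeRecord(root [rootSize]byte, payload []byte) []byte {
	return append(root[:], payload...)
}

// decodeRecord splits the frame apart, rejecting inputs shorter than the
// signing root, as the decoder above does.
func decodeRecord(enc []byte) ([rootSize]byte, []byte, error) {
	var root [rootSize]byte
	if len(enc) < rootSize {
		return root, nil, errors.New("encoded record shorter than signing root")
	}
	copy(root[:], enc[:rootSize])
	return root, enc[rootSize:], nil
}

func main() {
	enc := encodeRecord([rootSize]byte{1}, []byte("compressed-record"))
	root, payload, err := decodeRecord(enc)
	fmt.Println(root[0], string(payload), err) // 1 compressed-record <nil>
}
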
View File

@@ -89,7 +89,7 @@ func TestStore_LastEpochWrittenForValidators(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, len(attestedEpochs))
err = beaconDB.SaveLastEpochsWrittenForValidators(ctx, epochsByValidator)
err = beaconDB.SaveLastEpochWrittenForValidators(ctx, epochsByValidator)
require.NoError(t, err)
retrievedEpochs, err := beaconDB.LastEpochWrittenForValidators(ctx, indices)

View File

@@ -10,7 +10,7 @@ import (
)
// TestCleanup ensures that the cleanup function unregisters the prometheus.Collection
// also tests the interchangability of the explicit prometheus Register/Unregister
// also tests the interchangeability of the explicit prometheus Register/Unregister
// and the implicit methods within the collector implementation
func TestCleanup(t *testing.T) {
ctx := context.Background()
@@ -32,11 +32,11 @@ func TestCleanup(t *testing.T) {
assert.Equal(t, true, unregistered, "prometheus.Unregister failed to unregister PowchainCollector on final cleanup")
}
// TestCancelation tests that canceling the context passed into
// TestCancellation tests that canceling the context passed into
// NewPowchainCollector cleans everything up as expected. This
// does come at the cost of an extra channel cluttering up
// PowchainCollector, just for this test.
func TestCancelation(t *testing.T) {
func TestCancellation(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
pc, err := NewPowchainCollector(ctx)
assert.NoError(t, err, "Unexpected error calling NewPowchainCollector")

View File

@@ -707,6 +707,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
MetaDataDir: cliCtx.String(cmd.P2PMetadata.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: cliCtx.Uint(cmd.P2PMaxPeers.Name),

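Illustrative aside: the new QUICPort option above feeds a UDP-based listen address. A sketch of how TCP and QUIC listen addresses differ as multiaddrs (github.com/multiformats/go-multiaddr; recent releases recognize the quic-v1 component):

package main

import (
	"fmt"
	"log"

	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	// TCP transport: stream-oriented, layered directly on an ip4 component.
	tcp, err := ma.NewMultiaddr("/ip4/0.0.0.0/tcp/13000")
	if err != nil {
		log.Fatal(err)
	}
	// QUIC runs over UDP, so the multiaddr stacks quic-v1 on a udp component.
	quic, err := ma.NewMultiaddr("/ip4/0.0.0.0/udp/13000/quic-v1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(tcp, quic)
}
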
View File

@@ -49,6 +49,7 @@ go_test(
"//beacon-chain/operations/attestations/kv:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// pruneAttsPool prunes attestations pool on every slot interval.
@@ -66,7 +67,18 @@ func (s *Service) pruneExpiredAtts() {
// Return true if the input slot has expired.
// Expired is defined as one epoch behind the current time.
func (s *Service) expired(slot primitives.Slot) bool {
func (s *Service) expired(providedSlot primitives.Slot) bool {
providedEpoch := slots.ToEpoch(providedSlot)
currSlot := slots.CurrentSlot(s.genesisTime)
currEpoch := slots.ToEpoch(currSlot)
if currEpoch < params.BeaconConfig().DenebForkEpoch {
return s.expiredPreDeneb(providedSlot)
}
return providedEpoch+1 < currEpoch
}
// Handles expiration of attestations before Deneb.
func (s *Service) expiredPreDeneb(slot primitives.Slot) bool {
expirationSlot := slot + params.BeaconConfig().SlotsPerEpoch
expirationTime := s.genesisTime + uint64(expirationSlot.Mul(params.BeaconConfig().SecondsPerSlot))
currentTime := uint64(prysmTime.Now().Unix())

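The new expired logic splits on the Deneb fork: after Deneb an attestation expires once its epoch is more than one epoch behind the current epoch, while before Deneb expiry is a wall-clock check one epoch's worth of seconds past the attestation slot. A runnable sketch of both rules, assuming mainnet-like parameters and that the pre-Deneb comparison is expirationTime <= now:

package main

import "fmt"

// Assumed mainnet-like parameters, for illustration only.
const (
	slotsPerEpoch  = 32
	secondsPerSlot = 12
)

// expiredPostDeneb mirrors the new rule: an attestation is expired once its
// epoch is more than one epoch behind the current epoch.
func expiredPostDeneb(attSlot, currentSlot uint64) bool {
	return attSlot/slotsPerEpoch+1 < currentSlot/slotsPerEpoch
}

// expiredPreDeneb mirrors the old wall-clock rule, assuming the comparison is
// expirationTime <= now: expired once one epoch's worth of seconds has
// elapsed past the attestation slot.
func expiredPreDeneb(attSlot, genesisTime, nowUnix uint64) bool {
	expirationSlot := attSlot + slotsPerEpoch
	expirationTime := genesisTime + expirationSlot*secondsPerSlot
	return expirationTime <= nowUnix
}

func main() {
	// Post-Deneb: slot 64 (epoch 2) is still fresh at slot 96 (epoch 3)...
	fmt.Println(expiredPostDeneb(64, 96)) // false
	// ...but slot 32 (epoch 1) has expired.
	fmt.Println(expiredPostDeneb(32, 96)) // true
	// Pre-Deneb: slot 0 expires one epoch (384s) after genesis.
	fmt.Println(expiredPreDeneb(0, 1_000_000, 1_000_000+384)) // true
}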

@@ -9,6 +9,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/async"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -127,3 +128,22 @@ func TestPruneExpired_Expired(t *testing.T) {
assert.Equal(t, true, s.expired(0), "Should be expired")
assert.Equal(t, false, s.expired(1), "Should not be expired")
}
func TestPruneExpired_ExpiredDeneb(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.DenebForkEpoch = 3
params.OverrideBeaconConfig(cfg)
s, err := NewService(context.Background(), &Config{Pool: NewPool()})
require.NoError(t, err)
// Rewind 4 epochs + 10 seconds worth of time.
s.genesisTime = uint64(prysmTime.Now().Unix()) - (4*uint64(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) + 10)
secondEpochStart := primitives.Slot(2 * uint64(params.BeaconConfig().SlotsPerEpoch))
thirdEpochStart := primitives.Slot(3 * uint64(params.BeaconConfig().SlotsPerEpoch))
assert.Equal(t, true, s.expired(secondEpochStart), "Should be expired")
assert.Equal(t, false, s.expired(thirdEpochStart), "Should not be expired")
}


@@ -90,10 +90,12 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peerstore:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/quic:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_mplex//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//pb:go_default_library",
"@com_github_libp2p_go_mplex//:go_default_library",
"@com_github_multiformats_go_multiaddr//:go_default_library",
"@com_github_multiformats_go_multiaddr//net:go_default_library",
"@com_github_pkg_errors//:go_default_library",


@@ -15,6 +15,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"google.golang.org/protobuf/proto"
)
@@ -68,7 +69,7 @@ func (s *Service) BroadcastAttestation(ctx context.Context, subnet uint64, att *
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastAttestation(ctx, subnet, att, forkDigest)
go s.internalBroadcastAttestation(ctx, subnet, att, forkDigest)
return nil
}
@@ -94,8 +95,8 @@ func (s *Service) BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint
return nil
}
func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastAttestation")
func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastAttestation")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.
@@ -137,7 +138,10 @@ func (s *Service) broadcastAttestation(ctx context.Context, subnet uint64, att *
// acceptable threshold, we exit early and do not broadcast it.
currSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
if att.Data.Slot+params.BeaconConfig().SlotsPerEpoch < currSlot {
log.Warnf("Attestation is too old to broadcast, discarding it. Current Slot: %d , Attestation Slot: %d", currSlot, att.Data.Slot)
log.WithFields(logrus.Fields{
"attestationSlot": att.Data.Slot,
"currentSlot": currSlot,
}).Warning("Attestation is too old to broadcast, discarding it")
return
}
@@ -218,13 +222,13 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
}
// Non-blocking broadcast, with attempts to discover a subnet peer if none available.
go s.broadcastBlob(ctx, subnet, blob, forkDigest)
go s.internalBroadcastBlob(ctx, subnet, blob, forkDigest)
return nil
}
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastBlob")
func (s *Service) internalBroadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.internalBroadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.

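Besides the internalBroadcast* renames and the structured log fields, the hunk keeps the guard that drops attestations more than one epoch behind the head before publishing. A small sketch of that staleness check, with slotsPerEpoch assumed at the mainnet value:

package main

import "fmt"

// slotsPerEpoch is assumed at the mainnet value, for illustration.
const slotsPerEpoch = 32

// tooOldToBroadcast mirrors the guard in internalBroadcastAttestation: an
// attestation more than one epoch behind the current slot is dropped rather
// than published.
func tooOldToBroadcast(attSlot, currentSlot uint64) bool {
	return attSlot+slotsPerEpoch < currentSlot
}

func main() {
	fmt.Println(tooOldToBroadcast(100, 131)) // false: still within one epoch
	fmt.Println(tooOldToBroadcast(100, 133)) // true: older than one epoch
}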

@@ -24,6 +24,7 @@ type Config struct {
PrivateKey string
DataDir string
MetaDataDir string
QUICPort uint
TCPPort uint
UDPPort uint
MaxPeers uint


@@ -39,6 +39,11 @@ const (
udp6
)
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
@@ -100,14 +105,15 @@ func (s *Service) RefreshENR() {
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
iterator := s.dv5Listener.RandomNodes()
iterator = enode.Filter(iterator, s.filterPeer)
iterator := enode.Filter(s.dv5Listener.RandomNodes(), s.filterPeer)
defer iterator.Close()
for {
// Exit if service's context is canceled
// Exit if service's context is canceled.
if s.ctx.Err() != nil {
break
}
if s.isPeerAtLimit(false /* inbound */) {
// Pause the main loop for a period to stop looking
// for new peers.
@@ -115,16 +121,22 @@ func (s *Service) listenForNewNodes() {
time.Sleep(pollingPeriod)
continue
}
exists := iterator.Next()
if !exists {
if exists := iterator.Next(); !exists {
break
}
node := iterator.Node()
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
continue
}
if peerInfo == nil {
continue
}
// Make sure that peer is not dialed too often, for each connection attempt there's a backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
go func(info *peer.AddrInfo) {
@@ -167,8 +179,7 @@ func (s *Service) createListener(
// Listen to all network interfaces
// for both ip protocols.
networkVersion := "udp"
conn, err := net.ListenUDP(networkVersion, udpAddr)
conn, err := net.ListenUDP("udp", udpAddr)
if err != nil {
return nil, errors.Wrap(err, "could not listen to UDP")
}
@@ -178,6 +189,7 @@ func (s *Service) createListener(
ipAddr,
int(s.cfg.UDPPort),
int(s.cfg.TCPPort),
int(s.cfg.QUICPort),
)
if err != nil {
return nil, errors.Wrap(err, "could not create local node")
@@ -209,7 +221,7 @@ func (s *Service) createListener(
func (s *Service) createLocalNode(
privKey *ecdsa.PrivateKey,
ipAddr net.IP,
udpPort, tcpPort int,
udpPort, tcpPort, quicPort int,
) (*enode.LocalNode, error) {
db, err := enode.OpenDB("")
if err != nil {
@@ -220,9 +232,13 @@ func (s *Service) createLocalNode(
ipEntry := enr.IP(ipAddr)
udpEntry := enr.UDP(udpPort)
tcpEntry := enr.TCP(tcpPort)
quicEntry := quicProtocol(quicPort)
localNode.Set(ipEntry)
localNode.Set(udpEntry)
localNode.Set(tcpEntry)
localNode.Set(quicEntry)
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -277,58 +293,68 @@ func (s *Service) startDiscoveryV5(
// filterPeer validates each node that we retrieve from our dht. We
// try to ascertain that the peer can be a valid protocol peer.
// Validity Conditions:
// 1. The local node is still actively looking for peers to
// connect to.
// 2. Peer has a valid IP and TCP port set in their enr.
// 3. Peer hasn't been marked as 'bad'
// 4. Peer is not currently active or connected.
// 5. Peer is ready to receive incoming connections.
// 6. Peer's fork digest in their ENR matches that of
// 1. Peer has a valid IP and a (QUIC and/or TCP) port set in their enr.
// 2. Peer hasn't been marked as 'bad'.
// 3. Peer is not currently active or connected.
// 4. Peer is ready to receive incoming connections.
// 5. Peer's fork digest in their ENR matches that of
// our localnodes.
func (s *Service) filterPeer(node *enode.Node) bool {
// Ignore nil node entries passed in.
if node == nil {
return false
}
// ignore nodes with no ip address stored.
// Ignore nodes with no IP address stored.
if node.IP() == nil {
return false
}
// do not dial nodes with their tcp ports not set
if err := node.Record().Load(enr.WithEntry("tcp", new(enr.TCP))); err != nil {
if !enr.IsNotFound(err) {
log.WithError(err).Debug("Could not retrieve tcp port")
}
return false
}
peerData, multiAddr, err := convertToAddrInfo(node)
peerData, multiAddrs, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
return false
}
if len(multiAddrs) == 0 {
return false
}
// Ignore bad nodes.
if s.peers.IsBad(peerData.ID) {
return false
}
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
return false
}
// Ignore nodes that are already connected.
if s.host.Network().Connectedness(peerData.ID) == network.Connected {
return false
}
// Ignore nodes that are not ready to receive incoming connections.
if !s.peers.IsReadyToDial(peerData.ID) {
return false
}
// Ignore nodes that don't match our fork digest.
nodeENR := node.Record()
// Decide whether or not to connect to a peer whose fork ENR data
// does not match that of our local node.
if s.genesisValidatorsRoot != nil {
if err := s.compareForkENR(nodeENR); err != nil {
log.WithError(err).Trace("Fork ENR mismatches between peer and local node")
return false
}
}
// If the peer has 2 multiaddrs, favor the QUIC address, which is in first position.
multiAddr := multiAddrs[0]
// Add peer to peer handler.
s.peers.Add(nodeENR, peerData.ID, multiAddr, network.DirUnknown)
return true
}
@@ -369,11 +395,11 @@ func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
if err != nil {
return nil, errors.Wrapf(err, "Could not get enode from string")
}
addr, err := convertToSingleMultiAddr(enodeAddr)
nodeAddrs, err := convertToMultiAddrs(enodeAddr)
if err != nil {
return nil, errors.Wrapf(err, "Could not get multiaddr")
}
allAddrs = append(allAddrs, addr)
allAddrs = append(allAddrs, nodeAddrs...)
}
return allAddrs, nil
}
@@ -408,45 +434,141 @@ func parseGenericAddrs(addrs []string) (enodeString, multiAddrString []string) {
}
func convertToMultiAddr(nodes []*enode.Node) []ma.Multiaddr {
var multiAddrs []ma.Multiaddr
// Expect each node to have a TCP and a QUIC address.
multiAddrs := make([]ma.Multiaddr, 0, 2*len(nodes))
for _, node := range nodes {
// ignore nodes with no ip address stored
// Skip nodes with no ip address stored.
if node.IP() == nil {
continue
}
multiAddr, err := convertToSingleMultiAddr(node)
// Get up to two multiaddrs (TCP and QUIC) for each node.
nodeMultiAddrs, err := convertToMultiAddrs(node)
if err != nil {
log.WithError(err).Error("Could not convert to multiAddr")
log.WithError(err).Errorf("Could not convert to multiAddr node %s", node)
continue
}
multiAddrs = append(multiAddrs, multiAddr)
multiAddrs = append(multiAddrs, nodeMultiAddrs...)
}
return multiAddrs
}
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, ma.Multiaddr, error) {
multiAddr, err := convertToSingleMultiAddr(node)
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, []ma.Multiaddr, error) {
multiAddrs, err := convertToMultiAddrs(node)
if err != nil {
return nil, nil, err
}
info, err := peer.AddrInfoFromP2pAddr(multiAddr)
if err != nil {
return nil, nil, err
if len(multiAddrs) == 0 {
return nil, nil, nil
}
return info, multiAddr, nil
infos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert to peer info: %v", multiAddrs)
}
if len(infos) > 1 {
return nil, nil, errors.Errorf("infos contains %v elements, expected not more than 1", len(infos))
}
if len(infos) == 0 {
return nil, multiAddrs, nil
}
return &infos[0], multiAddrs, nil
}
func convertToSingleMultiAddr(node *enode.Node) (ma.Multiaddr, error) {
// convertToMultiAddrs converts an enode.Node to a list of multiaddrs.
// If the node has both a QUIC and a TCP port set in its ENR, then
// the multiaddr corresponding to the QUIC port is added first, followed
// by the multiaddr corresponding to the TCP port.
func convertToMultiAddrs(node *enode.Node) ([]ma.Multiaddr, error) {
multiaddrs := make([]ma.Multiaddr, 0, 2)
// Retrieve the node public key.
pubkey := node.Pubkey()
assertedKey, err := ecdsaprysm.ConvertToInterfacePubkey(pubkey)
if err != nil {
return nil, errors.Wrap(err, "could not get pubkey")
}
// Compute the node ID from the public key.
id, err := peer.IDFromPublicKey(assertedKey)
if err != nil {
return nil, errors.Wrap(err, "could not get peer id")
}
return multiAddressBuilderWithID(node.IP().String(), "tcp", uint(node.TCP()), id)
// If the QUIC entry is present in the ENR, build the corresponding multiaddress.
port, ok, err := getPort(node, quic)
if err != nil {
return nil, errors.Wrap(err, "could not get QUIC port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), quic, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build QUIC address")
}
multiaddrs = append(multiaddrs, addr)
}
// If the TCP entry is present in the ENR, build the corresponding multiaddress.
port, ok, err = getPort(node, tcp)
if err != nil {
return nil, errors.Wrap(err, "could not get TCP port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), tcp, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build TCP address")
}
multiaddrs = append(multiaddrs, addr)
}
return multiaddrs, nil
}
// getPort retrieves the port for a given node and protocol, as well as a boolean
// indicating whether the port was found, and an error.
func getPort(node *enode.Node, protocol internetProtocol) (uint, bool, error) {
var (
port uint
err error
)
switch protocol {
case tcp:
var entry enr.TCP
err = node.Load(&entry)
port = uint(entry)
case udp:
var entry enr.UDP
err = node.Load(&entry)
port = uint(entry)
case quic:
var entry quicProtocol
err = node.Load(&entry)
port = uint(entry)
default:
return 0, false, errors.Errorf("invalid protocol: %v", protocol)
}
if enr.IsNotFound(err) {
return port, false, nil
}
if err != nil {
return 0, false, errors.Wrap(err, "could not get port")
}
return port, true, nil
}
func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
@@ -464,14 +586,14 @@ func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
var ip4 enr.IPv4
var ip6 enr.IPv6
if node.Load(&ip4) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip4).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip4), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv4 address")
}
addresses = append(addresses, address)
}
if node.Load(&ip6) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip6).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip6), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv6 address")
}

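The discovery changes read up to two ports per ENR (the custom "quic" entry plus the standard "tcp" one) and emit the QUIC multiaddr first. A toy model of that lookup and ordering, using a plain map in place of the real enr.Record loading:

package main

import "fmt"

// record is a toy stand-in for an ENR: protocol name -> advertised port. The
// real getPort loads enr.TCP, enr.UDP, or the custom quicProtocol entry and
// treats a missing entry as a soft miss rather than an error.
type record map[string]uint

func getPort(r record, protocol string) (port uint, ok bool) {
	port, ok = r[protocol]
	return port, ok
}

// toMultiAddrs mirrors convertToMultiAddrs ordering: the QUIC address, when
// present, comes before the TCP one.
func toMultiAddrs(ip string, r record) []string {
	addrs := make([]string, 0, 2)
	if port, ok := getPort(r, "quic"); ok {
		addrs = append(addrs, fmt.Sprintf("/ip4/%s/udp/%d/quic-v1", ip, port))
	}
	if port, ok := getPort(r, "tcp"); ok {
		addrs = append(addrs, fmt.Sprintf("/ip4/%s/tcp/%d", ip, port))
	}
	return addrs
}

func main() {
	fmt.Println(toMultiAddrs("1.2.3.4", record{"quic": 13000, "tcp": 13000}))
	// [/ip4/1.2.3.4/udp/13000/quic-v1 /ip4/1.2.3.4/tcp/13000]
}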

@@ -166,8 +166,9 @@ func TestCreateLocalNode(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
// Define ports.
const (
udpPort = 2000
tcpPort = 3000
udpPort = 2000
tcpPort = 3000
quicPort = 3000
)
// Create a private key.
@@ -180,7 +181,7 @@ func TestCreateLocalNode(t *testing.T) {
cfg: tt.cfg,
}
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort)
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort, quicPort)
if tt.expectedError {
require.NotNil(t, err)
return
@@ -237,7 +238,7 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
node, err := s.createLocalNode(pkey, addr, 0, 0)
node, err := s.createLocalNode(pkey, addr, 0, 0, 0)
require.NoError(t, err)
multiAddr := convertToMultiAddr([]*enode.Node{node.Node()})
assert.Equal(t, 0, len(multiAddr), "Invalid ip address converted successfully")
@@ -248,8 +249,9 @@ func TestMultiAddrConversion_OK(t *testing.T) {
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
cfg: &Config{
TCPPort: 0,
UDPPort: 0,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),


@@ -28,7 +28,8 @@ import (
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
port := 2000
const port = 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
@@ -53,7 +54,7 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -98,13 +99,14 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
addrs := make([]ma.Multiaddr, 0)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := convertToMultiAddrs(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
@@ -114,10 +116,11 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
const port = 2000
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.TraceLevel)
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
@@ -138,7 +141,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -188,13 +191,13 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
addrs := make([]ma.Multiaddr, 0, len(nodes))
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := convertToMultiAddrs(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
if len(addrs) == 0 {


@@ -1,6 +1,7 @@
package p2p
import (
"net"
"strconv"
"strings"
@@ -12,32 +13,32 @@ import (
var log = logrus.WithField("prefix", "p2p")
func logIPAddr(id peer.ID, addrs ...ma.Multiaddr) {
var correctAddr ma.Multiaddr
for _, addr := range addrs {
if strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/") {
correctAddr = addr
break
if !(strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/")) {
continue
}
}
if correctAddr != nil {
log.WithField(
"multiAddr",
correctAddr.String()+"/p2p/"+id.String(),
addr.String()+"/p2p/"+id.String(),
).Info("Node started p2p server")
}
}
func logExternalIPAddr(id peer.ID, addr string, port uint) {
func logExternalIPAddr(id peer.ID, addr string, tcpPort, quicPort uint) {
if addr != "" {
multiAddr, err := MultiAddressBuilder(addr, port)
multiAddrs, err := MultiAddressBuilder(net.ParseIP(addr), tcpPort, quicPort)
if err != nil {
log.WithError(err).Error("Could not create multiaddress")
return
}
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
for _, multiAddr := range multiAddrs {
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
}
}
}

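logExternalIPAddr now announces one external multiaddr per transport instead of a single TCP address, logging each one. A sketch of the strings it ends up producing (the IP, ports, and peer ID are made up):

package main

import "fmt"

func main() {
	// Made-up external IP, ports, and peer ID, for illustration only.
	const (
		ip     = "203.0.113.7"
		peerID = "16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs"
	)
	tcpPort, quicPort := uint(13000), uint(13000)

	// One announced multiaddr per transport, each logged separately.
	addrs := []string{
		fmt.Sprintf("/ip4/%s/tcp/%d", ip, tcpPort),
		fmt.Sprintf("/ip4/%s/udp/%d/quic-v1", ip, quicPort),
	}
	for _, a := range addrs {
		fmt.Println(a + "/p2p/" + peerID) // "Node started external p2p server"
	}
}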

@@ -4,45 +4,69 @@ import (
"crypto/ecdsa"
"fmt"
"net"
"time"
"github.com/libp2p/go-libp2p"
mplex "github.com/libp2p/go-libp2p-mplex"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
gomplex "github.com/libp2p/go-mplex"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/features"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
type internetProtocol string
const (
udp = "udp"
tcp = "tcp"
quic = "quic"
)
// MultiAddressBuilder takes in an IP address plus TCP and QUIC ports to produce the corresponding multiaddrs.
func MultiAddressBuilder(ipAddr string, port uint) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func MultiAddressBuilder(ip net.IP, tcpPort, quicPort uint) ([]ma.Multiaddr, error) {
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
// Example: /ip4/1.2.3.4/udp/5678/quic-v1
multiAddrQUIC, err := ma.NewMultiaddr(fmt.Sprintf("/%s/%s/udp/%d/quic-v1", ipType, ip, quicPort))
if err != nil {
return nil, errors.Wrapf(err, "cannot produce QUIC multiaddr format from %s:%d", ip, tcpPort)
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/tcp/%d", ipAddr, port))
// Example: /ip4/1.2.3.4/tcp/5678
multiaddrStr := fmt.Sprintf("/%s/%s/tcp/%d", ipType, ip, tcpPort)
multiAddrTCP, err := ma.NewMultiaddr(multiaddrStr)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce TCP multiaddr format from %s:%d", ip, tcpPort)
}
return []ma.Multiaddr{multiAddrTCP, multiAddrQUIC}, nil
}
// buildOptions for the libp2p host.
func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Option, error) {
cfg := s.cfg
listen, err := MultiAddressBuilder(ip.String(), cfg.TCPPort)
multiaddrs, err := MultiAddressBuilder(ip, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip.String(), cfg.TCPPort)
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip, cfg.TCPPort)
}
if cfg.LocalIP != "" {
if net.ParseIP(cfg.LocalIP) == nil {
localIP := net.ParseIP(cfg.LocalIP)
if localIP == nil {
return nil, errors.Wrapf(err, "invalid local ip provided: %s:%d", cfg.LocalIP, cfg.TCPPort)
}
listen, err = MultiAddressBuilder(cfg.LocalIP, cfg.TCPPort)
multiaddrs, err = MultiAddressBuilder(localIP, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", cfg.LocalIP, cfg.TCPPort)
}
@@ -56,14 +80,15 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
return nil, errors.Wrapf(err, "cannot get ID from public key: %s", ifaceKey.GetPublic().Type().String())
}
log.Infof("Running node with peer id of %s ", id.String())
log.WithField("peerId", id).Info("Running node with")
options := []libp2p.Option{
privKeyOption(priKey),
libp2p.ListenAddrs(listen),
libp2p.ListenAddrs(multiaddrs...),
libp2p.UserAgent(version.BuildData()),
libp2p.ConnectionGater(s),
libp2p.Transport(tcp.NewTCPTransport),
libp2p.Transport(libp2pquic.NewTransport),
libp2p.Transport(libp2ptcp.NewTCPTransport),
libp2p.DefaultMuxers,
libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
libp2p.Security(noise.ID, noise.New),
@@ -73,23 +98,26 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
if cfg.EnableUPnP {
options = append(options, libp2p.NATPortMap()) // Allow to use UPnP
}
if cfg.RelayNodeAddr != "" {
options = append(options, libp2p.AddrsFactory(withRelayAddrs(cfg.RelayNodeAddr)))
} else {
// Disable relay if it has not been set.
options = append(options, libp2p.DisableRelay())
}
if cfg.HostAddress != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := MultiAddressBuilder(cfg.HostAddress, cfg.TCPPort)
externalMultiaddrs, err := MultiAddressBuilder(net.ParseIP(cfg.HostAddress), cfg.TCPPort, cfg.QUICPort)
if err != nil {
log.WithError(err).Error("Unable to create external multiaddress")
} else {
addrs = append(addrs, external)
addrs = append(addrs, externalMultiaddrs...)
}
return addrs
}))
}
if cfg.HostDNS != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := ma.NewMultiaddr(fmt.Sprintf("/dns4/%s/tcp/%d", cfg.HostDNS, cfg.TCPPort))
@@ -105,21 +133,47 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
if features.Get().DisableResourceManager {
options = append(options, libp2p.ResourceManager(&network.NullResourceManager{}))
}
return options, nil
}
func multiAddressBuilderWithID(ipAddr, protocol string, port uint, id peer.ID) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func extractIpType(ip net.IP) (string, error) {
if ip.To4() != nil {
return "ip4", nil
}
if id.String() == "" {
return nil, errors.New("empty peer id given")
if ip.To16() != nil {
return "ip6", nil
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
return "", errors.Errorf("provided IP address is neither IPv4 nor IPv6: %s", ip)
}
func multiAddressBuilderWithID(ip net.IP, protocol internetProtocol, port uint, id peer.ID) (ma.Multiaddr, error) {
var multiaddrStr string
if id == "" {
return nil, errors.Errorf("empty peer id given: %s", id)
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
switch protocol {
case udp, tcp:
// Example with UDP: /ip4/1.2.3.4/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
// Example with TCP: /ip6/1.2.3.4/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/%s/%d/p2p/%s", ipType, ip, protocol, port, id)
case quic:
// Example: /ip4/1.2.3.4/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/udp/%d/quic-v1/p2p/%s", ipType, ip, port, id)
default:
return nil, errors.Errorf("unsupported protocol: %s", protocol)
}
return ma.NewMultiaddr(multiaddrStr)
}
// Adds a private key to the libp2p option if the option was provided.
@@ -135,3 +189,8 @@ func privKeyOption(privkey *ecdsa.PrivateKey) libp2p.Option {
return cfg.Apply(libp2p.Identity(ifaceKey))
}
}
// Configures stream timeouts on mplex.
func configureMplex() {
gomplex.ResetStreamTimeout = 5 * time.Second
}

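multiAddressBuilderWithID formats TCP and UDP addresses as before, but QUIC addresses ride on UDP and gain a quic-v1 component. A sketch of that format switch with illustrative inputs (the real builder derives the peer ID from the node's secp256k1 key and validates the IP first):

package main

import "fmt"

// buildP2PAddr sketches the protocol switch in multiAddressBuilderWithID.
func buildP2PAddr(ipType, ip, protocol string, port uint, peerID string) (string, error) {
	switch protocol {
	case "tcp", "udp":
		return fmt.Sprintf("/%s/%s/%s/%d/p2p/%s", ipType, ip, protocol, port, peerID), nil
	case "quic":
		// QUIC rides on UDP and carries the extra quic-v1 component.
		return fmt.Sprintf("/%s/%s/udp/%d/quic-v1/p2p/%s", ipType, ip, port, peerID), nil
	default:
		return "", fmt.Errorf("unsupported protocol: %s", protocol)
	}
}

func main() {
	addr, err := buildP2PAddr("ip4", "192.168.0.1", "quic", 5678, "16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs")
	fmt.Println(addr, err)
	// /ip4/192.168.0.1/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs <nil>
}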

@@ -13,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -88,30 +89,34 @@ func TestIPV6Support(t *testing.T) {
lNode := enode.NewLocalNode(db, key)
mockIPV6 := net.IP{0xff, 0x02, 0xAA, 0, 0x1F, 0, 0x2E, 0, 0, 0x36, 0x45, 0, 0, 0, 0, 0x02}
lNode.Set(enr.IP(mockIPV6))
ma, err := convertToSingleMultiAddr(lNode.Node())
mas, err := convertToMultiAddrs(lNode.Node())
if err != nil {
t.Fatal(err)
}
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
for _, ma := range mas {
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
}
if p.Name == "ip6" {
ipv6Exists = true
}
}
if p.Name == "ip6" {
ipv6Exists = true
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
func TestDefaultMultiplexers(t *testing.T) {
var cfg libp2p.Config
_ = cfg
p2pCfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
}
svc := &Service{cfg: p2pCfg}
@@ -127,5 +132,57 @@ func TestDefaultMultiplexers(t *testing.T) {
assert.Equal(t, protocol.ID("/yamux/1.0.0"), cfg.Muxers[0].ID)
assert.Equal(t, protocol.ID("/mplex/6.7.0"), cfg.Muxers[1].ID)
}
func TestMultiAddressBuilderWithID(t *testing.T) {
testCases := []struct {
name string
ip net.IP
protocol internetProtocol
port uint
id string
expectedMultiaddrStr string
}{
{
name: "UDP",
ip: net.IPv4(192, 168, 0, 1),
protocol: udp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "TCP",
ip: net.IPv4(192, 168, 0, 1),
protocol: tcp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "QUIC",
ip: net.IPv4(192, 168, 0, 1),
protocol: quic,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
id, err := hex.DecodeString(tt.id)
require.NoError(t, err)
actualMultiaddr, err := multiAddressBuilderWithID(tt.ip, tt.protocol, tt.port, peer.ID(id))
require.NoError(t, err)
actualMultiaddrStr := actualMultiaddr.String()
require.Equal(t, tt.expectedMultiaddrStr, actualMultiaddrStr)
})
}
}


@@ -22,7 +22,7 @@ func TestGossipParameters(t *testing.T) {
pms := pubsubGossipParam()
assert.Equal(t, gossipSubMcacheLen, pms.HistoryLength, "gossipSubMcacheLen")
assert.Equal(t, gossipSubMcacheGossip, pms.HistoryGossip, "gossipSubMcacheGossip")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Milliseconds()/pms.HeartbeatInterval.Milliseconds()), "gossipSubSeenTtl")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Seconds()), "gossipSubSeenTtl")
}
func TestFanoutParameters(t *testing.T) {


@@ -1,7 +1,7 @@
// Package peers provides information about peers at the Ethereum consensus protocol level.
//
// "Protocol level" is the level above the network level, so this layer never sees or interacts with
// (for example) hosts that are uncontactable due to being down, firewalled, etc. Instead, this works
// (for example) hosts that are unreachable due to being down, firewalled, etc. Instead, this works
// with peers that are contactable but may or may not be of the correct fork version, not currently
// required due to the number of current connections, etc.
//
@@ -26,6 +26,7 @@ import (
"context"
"math"
"sort"
"strings"
"time"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -59,8 +60,8 @@ const (
)
const (
// ColocationLimit restricts how many peer identities we can see from a single ip or ipv6 subnet.
ColocationLimit = 5
// CollocationLimit restricts how many peer identities we can see from a single ip or ipv6 subnet.
CollocationLimit = 5
// Additional buffer beyond current peer limit, from which we can store the relevant peer statuses.
maxLimitBuffer = 150
@@ -449,6 +450,32 @@ func (p *Status) InboundConnected() []peer.ID {
return peers
}
// InboundConnectedTCP returns the current batch of inbound peers that are connected using TCP.
func (p *Status) InboundConnectedTCP() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), "tcp") {
peers = append(peers, pid)
}
}
return peers
}
// InboundConnectedQUIC returns the current batch of inbound peers that are connected using QUIC.
func (p *Status) InboundConnectedQUIC() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), "quic") {
peers = append(peers, pid)
}
}
return peers
}
// Outbound returns the current batch of outbound peers.
func (p *Status) Outbound() []peer.ID {
p.store.RLock()
@@ -475,7 +502,33 @@ func (p *Status) OutboundConnected() []peer.ID {
return peers
}
// Active returns the peers that are connecting or connected.
// OutboundConnectedTCP returns the current batch of outbound peers that are connected using TCP.
func (p *Status) OutboundConnectedTCP() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), "tcp") {
peers = append(peers, pid)
}
}
return peers
}
// OutboundConnectedQUIC returns the current batch of outbound peers that are connected using QUIC.
func (p *Status) OutboundConnectedQUIC() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), "quic") {
peers = append(peers, pid)
}
}
return peers
}
// Active returns the peers that are active (connecting or connected).
func (p *Status) Active() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
@@ -514,7 +567,7 @@ func (p *Status) Disconnected() []peer.ID {
return peers
}
// Inactive returns the peers that are disconnecting or disconnected.
// Inactive returns the peers that are inactive (disconnecting or disconnected).
func (p *Status) Inactive() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
@@ -548,7 +601,7 @@ func (p *Status) Prune() {
p.store.Lock()
defer p.store.Unlock()
// Default to old method if flag isnt enabled.
// Default to old method if flag isn't enabled.
if !features.Get().EnablePeerScorer {
p.deprecatedPrune()
return
@@ -961,7 +1014,7 @@ func (p *Status) isfromBadIP(pid peer.ID) bool {
return true
}
if val, ok := p.ipTracker[ip.String()]; ok {
if val > ColocationLimit {
if val > CollocationLimit {
return true
}
}
@@ -1012,7 +1065,7 @@ func (p *Status) tallyIPTracker() {
}
func sameIP(firstAddr, secondAddr ma.Multiaddr) bool {
// Exit early if we do get nil multiaddresses
// Exit early if we do get nil multi-addresses
if firstAddr == nil || secondAddr == nil {
return false
}

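The new {Inbound,Outbound}Connected{TCP,QUIC} accessors classify a connected peer's transport by substring-matching its stored multiaddr. A self-contained sketch of that filter over a toy peer store:

package main

import (
	"fmt"
	"strings"
)

// peerInfo is a toy stand-in for the peer store's per-peer data.
type peerInfo struct {
	id        string
	addr      string // last known multiaddr, e.g. "/ip4/1.2.3.4/udp/13000/quic-v1"
	connected bool
	inbound   bool
}

// connectedBy mirrors the new accessors: keep connected peers in one
// direction whose multiaddr string mentions the transport ("tcp" or "quic").
func connectedBy(all []peerInfo, inbound bool, transport string) []string {
	ids := make([]string, 0)
	for _, p := range all {
		if p.connected && p.inbound == inbound && strings.Contains(p.addr, transport) {
			ids = append(ids, p.id)
		}
	}
	return ids
}

func main() {
	peers := []peerInfo{
		{id: "A", addr: "/ip4/1.2.3.4/tcp/13000", connected: true, inbound: true},
		{id: "B", addr: "/ip4/5.6.7.8/udp/13000/quic-v1", connected: true, inbound: true},
	}
	fmt.Println(connectedBy(peers, true, "quic")) // [B]
	fmt.Println(connectedBy(peers, true, "tcp"))  // [A]
}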

@@ -565,7 +565,7 @@ func TestPeerIPTracker(t *testing.T) {
badIP := "211.227.218.116"
var badPeers []peer.ID
for i := 0; i < peers.ColocationLimit+10; i++ {
for i := 0; i < peers.CollocationLimit+10; i++ {
port := strconv.Itoa(3000 + i)
addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
if err != nil {
@@ -1111,6 +1111,74 @@ func TestInbound(t *testing.T) {
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnecting)
result := p.InboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnectedTCP(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrTCP, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
addrQUIC, err := ma.NewMultiaddr("/ip4/192.168.1.3/udp/13000/quic-v1")
require.NoError(t, err)
inboundTCP := createPeer(t, p, addrTCP, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addrQUIC, network.DirInbound, peers.PeerConnected)
result := p.InboundConnectedTCP()
require.Equal(t, 1, len(result))
assert.Equal(t, inboundTCP.String(), result[0].String())
}
func TestInboundConnectedQUIC(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrQUIC, err := ma.NewMultiaddr("/ip4/192.168.1.3/udp/13000/quic-v1")
require.NoError(t, err)
addrTCP, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inboundQUIC := createPeer(t, p, addrQUIC, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addrTCP, network.DirInbound, peers.PeerConnected)
result := p.InboundConnectedQUIC()
require.Equal(t, 1, len(result))
assert.Equal(t, inboundQUIC.String(), result[0].String())
}
func TestOutbound(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
@@ -1130,6 +1198,74 @@ func TestOutbound(t *testing.T) {
assert.Equal(t, outbound.String(), result[0].String())
}
func TestOutboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
outbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnecting)
result := p.OutboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, outbound.String(), result[0].String())
}
func TestOutboundConnectedTCP(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrTCP, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
addrQUIC, err := ma.NewMultiaddr("/ip4/192.168.1.3/udp/13000/quic-v1")
require.NoError(t, err)
outboundTCP := createPeer(t, p, addrTCP, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addrQUIC, network.DirOutbound, peers.PeerConnected)
result := p.OutboundConnectedTCP()
require.Equal(t, 1, len(result))
assert.Equal(t, outboundTCP.String(), result[0].String())
}
func TestOutboundConnectedQUIC(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrQUIC, err := ma.NewMultiaddr("/ip4/192.168.1.3/udp/13000/quic-v1")
require.NoError(t, err)
addrTCP, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
outboundQUIC := createPeer(t, p, addrQUIC, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addrTCP, network.DirOutbound, peers.PeerConnected)
result := p.OutboundConnectedQUIC()
require.Equal(t, 1, len(result))
assert.Equal(t, outboundQUIC.String(), result[0].String())
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
// Set up some peers with different states


@@ -3,6 +3,7 @@ package p2p
import (
"context"
"encoding/hex"
"fmt"
"strings"
"time"
@@ -25,7 +26,7 @@ const (
// gossip parameters
gossipSubMcacheLen = 6 // number of windows to retain full messages in cache for `IWANT` responses
gossipSubMcacheGossip = 3 // number of windows to gossip about
gossipSubSeenTTL = 550 // number of heartbeat intervals to retain message IDs
gossipSubSeenTTL = 768 // number of seconds to retain message IDs (2 epochs)
// fanout ttl
gossipSubFanoutTTL = 60000000000 // TTL for fanout maps for topics we are not subscribed to but have published to, in nano seconds
@@ -130,7 +131,7 @@ func (s *Service) peerInspector(peerMap map[peer.ID]*pubsub.PeerScoreSnapshot) {
}
}
// Creates a list of pubsub options to configure out router with.
// pubsubOptions creates a list of options to configure our router with.
func (s *Service) pubsubOptions() []pubsub.Option {
psOpts := []pubsub.Option{
pubsub.WithMessageSignaturePolicy(pubsub.StrictNoSign),
@@ -147,9 +148,35 @@ func (s *Service) pubsubOptions() []pubsub.Option {
pubsub.WithGossipSubParams(pubsubGossipParam()),
pubsub.WithRawTracer(gossipTracer{host: s.host}),
}
if len(s.cfg.StaticPeers) > 0 {
directPeersAddrInfos, err := parsePeersEnr(s.cfg.StaticPeers)
if err != nil {
log.WithError(err).Error("Could not add direct peer option")
return psOpts
}
psOpts = append(psOpts, pubsub.WithDirectPeers(directPeersAddrInfos))
}
return psOpts
}
// parsePeersEnr takes a list of raw ENRs and converts them into a list of AddrInfos.
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %v", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %v", err)
}
return directAddrInfos, nil
}
// creates a custom gossipsub parameter set.
func pubsubGossipParam() pubsub.GossipSubParams {
gParams := pubsub.DefaultGossipSubParams()
@@ -165,7 +192,8 @@ func pubsubGossipParam() pubsub.GossipSubParams {
// to configure our message id time-cache rather than instantiating
// it with a router instance.
func setPubSubParameters() {
pubsub.TimeCacheDuration = 550 * gossipSubHeartbeatInterval
seenTtl := 2 * time.Second * time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
pubsub.TimeCacheDuration = seenTtl
}
// convert from libp2p's internal schema to a compatible prysm protobuf format.

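Both gossipSubSeenTTL and setPubSubParameters now express the seen-message cache as two epochs of wall-clock time rather than a count of heartbeat intervals. A quick check of that arithmetic under mainnet-like parameters (32 slots per epoch, 12 seconds per slot, both assumed here):

package main

import (
	"fmt"
	"time"
)

// Assumed mainnet-like parameters, for illustration only.
const (
	slotsPerEpoch  = 32
	secondsPerSlot = 12
)

func main() {
	// setPubSubParameters now sizes the message-ID time cache as two epochs
	// of wall-clock time rather than a multiple of the heartbeat interval.
	seenTTL := 2 * time.Second * time.Duration(slotsPerEpoch*secondsPerSlot)
	fmt.Println(seenTTL)                // 12m48s
	fmt.Println(int(seenTTL.Seconds())) // 768, matching gossipSubSeenTTL
}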

@@ -125,29 +125,33 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
return nil, errors.Wrapf(err, "failed to build p2p options")
}
// Sets mplex timeouts
configureMplex()
h, err := libp2p.New(opts...)
if err != nil {
log.WithError(err).Error("Failed to create p2p host")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p host")
}
s.host = h
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub
// object.
psOpts := s.pubsubOptions()
// Set the pubsub global parameters that we require.
setPubSubParameters()
// Reinitialize them in the event we are running a custom config.
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
gs, err := pubsub.NewGossipSub(s.ctx, s.host, psOpts...)
if err != nil {
log.WithError(err).Error("Failed to start pubsub")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p pubsub")
}
s.pubsub = gs
s.peers = peers.NewStatus(ctx, &peers.StatusConfig{
@@ -212,7 +216,7 @@ func (s *Service) Start() {
if len(s.cfg.StaticPeers) > 0 {
addrs, err := PeersFromStringAddrs(s.cfg.StaticPeers)
if err != nil {
log.WithError(err).Error("Could not connect to static peer")
log.WithError(err).Error("could not convert ENR to multiaddr")
}
// Set trusted peers for those that are provided as static addresses.
pids := peerIdsFromMultiAddrs(addrs)
@@ -231,11 +235,19 @@ func (s *Service) Start() {
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, 1*time.Minute, func() {
inboundQUICCount := len(s.peers.InboundConnectedQUIC())
inboundTCPCount := len(s.peers.InboundConnectedTCP())
outboundQUICCount := len(s.peers.OutboundConnectedQUIC())
outboundTCPCount := len(s.peers.OutboundConnectedTCP())
total := inboundQUICCount + inboundTCPCount + outboundQUICCount + outboundTCPCount
log.WithFields(logrus.Fields{
"inbound": len(s.peers.InboundConnected()),
"outbound": len(s.peers.OutboundConnected()),
"activePeers": len(s.peers.Active()),
}).Info("Peer summary")
"inboundQUIC": inboundQUICCount,
"inboundTCP": inboundTCPCount,
"outboundQUIC": outboundQUICCount,
"outboundTCP": outboundTCPCount,
"total": total,
}).Info("Connected peers")
})
multiAddrs := s.host.Network().ListenAddresses()
@@ -243,9 +255,10 @@ func (s *Service) Start() {
p2pHostAddress := s.cfg.HostAddress
p2pTCPPort := s.cfg.TCPPort
p2pQUICPort := s.cfg.QUICPort
if p2pHostAddress != "" {
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort)
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort, p2pQUICPort)
verifyConnectivity(p2pHostAddress, p2pTCPPort, "tcp")
}

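The periodic peer summary is now broken down by direction and transport, with a grand total. A sketch of the structured log it produces, using made-up counts (logrus is the logging library the file already uses):

package main

import "github.com/sirupsen/logrus"

func main() {
	// Made-up counts, for illustration only.
	inboundQUIC, inboundTCP, outboundQUIC, outboundTCP := 3, 12, 5, 20

	logrus.WithFields(logrus.Fields{
		"inboundQUIC":  inboundQUIC,
		"inboundTCP":   inboundTCP,
		"outboundQUIC": outboundQUIC,
		"outboundTCP":  outboundTCP,
		"total":        inboundQUIC + inboundTCP + outboundQUIC + outboundTCP,
	}).Info("Connected peers")
}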

@@ -102,8 +102,9 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
ClockWaiter: cs,
}
s, err := NewService(context.Background(), cfg)
@@ -147,8 +148,9 @@ func TestService_Start_NoDiscoverFlag(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
NoDiscovery: true, // <-- no s.dv5Listener is created
ClockWaiter: cs,


@@ -46,9 +46,13 @@ const syncLockerVal = 100
const blobSubnetLockerVal = 110
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then we try to connect
// with those peers. This method will block until the required amount of
// peers are found, the method only exits in the event of context timeouts.
// subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either:
// - the required amount of peers are found, or
// - the context is terminated.
// In some edge cases, this method may hang indefinitely even though peers
// are actually found. In such a case, the user should cancel the context
// and re-run the method.
func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
index uint64, threshold int) (bool, error) {
ctx, span := trace.StartSpan(ctx, "p2p.FindPeersWithSubnet")
@@ -73,9 +77,9 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
return false, errors.New("no subnet exists for provided topic")
}
currNum := len(s.pubsub.ListPeers(topic))
wg := new(sync.WaitGroup)
for {
currNum := len(s.pubsub.ListPeers(topic))
if currNum >= threshold {
break
}
@@ -89,6 +93,11 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
if err != nil {
continue
}
if info == nil {
continue
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
@@ -99,7 +108,6 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
}
// Wait for all dials to be completed.
wg.Wait()
currNum = len(s.pubsub.ListPeers(topic))
}
return true, nil
}
@@ -110,18 +118,13 @@ func (s *Service) filterPeerForAttSubnet(index uint64) func(node *enode.Node) bo
if !s.filterPeer(node) {
return false
}
subnets, err := attSubnets(node.Record())
if err != nil {
return false
}
indExists := false
for _, comIdx := range subnets {
if comIdx == index {
indExists = true
break
}
}
return indExists
return subnets[index]
}
}
@@ -205,8 +208,10 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
subs := []uint64{}
for i := uint64(0); i < params.BeaconConfig().SubnetsPerNode; i++ {
subnetsPerNode := params.BeaconConfig().SubnetsPerNode
subs := make([]uint64, 0, subnetsPerNode)
for i := uint64(0); i < subnetsPerNode; i++ {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
return nil, err
@@ -281,19 +286,20 @@ func initializeSyncCommSubnets(node *enode.LocalNode) *enode.LocalNode {
// Reads the attestation subnets entry from a node's ENR and determines
// the committee indices of the attestation subnets the node is subscribed to.
func attSubnets(record *enr.Record) ([]uint64, error) {
func attSubnets(record *enr.Record) (map[uint64]bool, error) {
bitV, err := attBitvector(record)
if err != nil {
return nil, err
}
committeeIdxs := make(map[uint64]bool)
// lint:ignore uintcast -- subnet count can be safely cast to int.
if len(bitV) != byteCount(int(attestationSubnetCount)) {
return []uint64{}, errors.Errorf("invalid bitvector provided, it has a size of %d", len(bitV))
return committeeIdxs, errors.Errorf("invalid bitvector provided, it has a size of %d", len(bitV))
}
var committeeIdxs []uint64
for i := uint64(0); i < attestationSubnetCount; i++ {
if bitV.BitAt(i) {
committeeIdxs = append(committeeIdxs, i)
committeeIdxs[i] = true
}
}
return committeeIdxs, nil

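attSubnets now returns a set keyed by subnet index, so filterPeerForAttSubnet can test membership directly instead of scanning a slice. A stdlib-only sketch of the bitvector-to-set conversion, assuming the mainnet count of 64 attestation subnets and little-endian bit order within each byte (as in go-bitfield):

package main

import "fmt"

// attestationSubnetCount is assumed at the mainnet value of 64.
const attestationSubnetCount = 64

// attSubnetsFromBitvector mirrors the new map-based return: O(1) membership
// checks replace the old slice scan. Bit i lives in byte i/8 at position i%8.
func attSubnetsFromBitvector(bitV []byte) (map[uint64]bool, error) {
	subnets := make(map[uint64]bool)
	if len(bitV) != (attestationSubnetCount+7)/8 {
		return subnets, fmt.Errorf("invalid bitvector size: %d", len(bitV))
	}
	for i := uint64(0); i < attestationSubnetCount; i++ {
		if bitV[i/8]&(1<<(i%8)) != 0 {
			subnets[i] = true
		}
	}
	return subnets, nil
}

func main() {
	bitV := make([]byte, 8)
	bitV[0] = 0b00000010 // subscribe to subnet 1 only
	subnets, err := attSubnetsFromBitvector(bitV)
	fmt.Println(subnets[1], subnets[2], err) // true false <nil>
}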

@@ -3,49 +3,46 @@ package p2p
import (
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"reflect"
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/startup"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
// This test needs to be entirely rewritten and should be done in a follow up PR from #7885.
t.Skip("This test is now failing after PR 7885 due to false positive")
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 4
flags.Init(gFlags)
// Reset config.
defer flags.Init(new(flags.GlobalFlags))
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
s := &Service{
cfg: &Config{UDPPort: uint(port)},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootListener, err := s.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
// Topology of this test:
//
//
// Node 1 (subscribed to subnet 1) --\
// |
// Node 2 (subscribed to subnet 2) --+--> BootNode (not subscribed to any subnet) <------- Node 0 (not subscribed to any subnet)
// |
// Node 3 (subscribed to subnet 3) --/
//
// The purpose of this test is to ensure that the "Node 0" (connected only to the boot node) is able to
// find and connect to a node already subscribed to a specific subnet.
// In our case: node i is subscribed to subnet i, with i = 1, 2, 3.
// Define the genesis validators root, to ensure everybody is on the same network.
const genesisValidatorRootStr = "0xdeadbeefcafecafedeadbeefcafecafedeadbeefcafecafedeadbeefcafecafe"
genesisValidatorsRoot, err := hex.DecodeString(genesisValidatorRootStr[2:])
require.NoError(t, err)
// Create a context.
ctx := context.Background()
bootNode := bootListener.Self()
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
@@ -53,111 +50,152 @@ func TestStartDiscV5_DiscoverPeersWithSubnets(t *testing.T) {
pollingPeriod = currentPeriod
}()
var listeners []*discover.UDPv5
// Create flags.
params.SetupTestConfigCleanup(t)
gFlags := new(flags.GlobalFlags)
gFlags.MinimumPeersPerSubnet = 1
flags.Init(gFlags)
params.BeaconNetworkConfig().MinimumPeersInSubnetSearch = 1
// Reset config.
defer flags.Init(new(flags.GlobalFlags))
// First, generate a bootstrap node.
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
bootNodeForkDigest, err := bootNodeService.currentForkDigest()
require.NoError(t, err)
bootListener, err := bootNodeService.createListener(ipAddr, pkey)
require.NoError(t, err)
defer bootListener.Close()
bootNodeENR := bootListener.Self().String()
// Create 3 nodes, each subscribed to a different subnet.
// Each node is connected to the bootstrap node.
services := make([]*Service, 0, 3)
for i := 1; i <= 3; i++ {
port = 3000 + i
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
subnet := uint64(i)
service, err := NewService(ctx, &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
UDPPort: uint(port),
}
ipAddr, pkey := createAddrAndPrivKey(t)
s = &Service{
cfg: cfg,
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
listener, err := s.startDiscoveryV5(ipAddr, pkey)
assert.NoError(t, err, "Could not start discovery for node")
UDPPort: uint(2000 + i),
TCPPort: uint(3000 + i),
QUICPort: uint(3000 + i),
})
require.NoError(t, err)
service.genesisTime = genesisTime
service.genesisValidatorsRoot = genesisValidatorsRoot
nodeForkDigest, err := service.currentForkDigest()
require.NoError(t, err)
require.Equal(t, true, nodeForkDigest == bootNodeForkDigest, "fork digest of the node doesn't match the boot node")
// Start the service.
service.Start()
// Set the ENR `attnets`, used by Prysm to filter peers by subnet.
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(uint64(i), true)
bitV.SetBitAt(subnet, true)
entry := enr.WithEntry(attSubnetEnrKey, &bitV)
listener.LocalNode().Set(entry)
listeners = append(listeners, listener)
service.dv5Listener.LocalNode().Set(entry)
// Join and subscribe to the subnet, needed by libp2p.
topic, err := service.pubsub.Join(fmt.Sprintf(AttestationSubnetTopicFormat, bootNodeForkDigest, subnet) + "/ssz_snappy")
require.NoError(t, err)
_, err = topic.Subscribe()
require.NoError(t, err)
// Keep track of the service.
services = append(services, service)
}
// Stop the services.
defer func() {
// Close down all peers.
for _, listener := range listeners {
listener.Close()
for _, service := range services {
err := service.Stop()
require.NoError(t, err)
}
}()
// Make one service on port 4001.
port = 4001
gs := startup.NewClockSynchronizer()
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNode.String()},
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
UDPPort: uint(port),
ClockWaiter: gs,
UDPPort: 2010,
TCPPort: 3010,
QUICPort: 3010,
}
s, err = NewService(context.Background(), cfg)
service, err := NewService(ctx, cfg)
require.NoError(t, err)
exitRoutine := make(chan bool)
go func() {
s.Start()
<-exitRoutine
service.genesisTime = genesisTime
service.genesisValidatorsRoot = genesisValidatorsRoot
service.Start()
defer func() {
err := service.Stop()
require.NoError(t, err)
}()
time.Sleep(50 * time.Millisecond)
// Set the clock so the service (which waits on the ClockWaiter) can proceed.
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
// Wait for the nodes' local routing tables to be populated with the other nodes.
time.Sleep(6 * discoveryWaitTime)
// Look up 3 different subnets.
exists := make([]bool, 0, 3)
for i := 1; i <= 3; i++ {
subnet := uint64(i)
topic := fmt.Sprintf(AttestationSubnetTopicFormat, bootNodeForkDigest, subnet)
exist := false
// This for loop is used to ensure we don't get stuck in `FindPeersWithSubnet`.
// Read the documentation of `FindPeersWithSubnet` for more details.
for j := 0; j < 3; j++ {
ctxWithTimeOut, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
exist, err = service.FindPeersWithSubnet(ctxWithTimeOut, topic, subnet, 1)
require.NoError(t, err)
if exist {
break
}
}
require.NoError(t, err)
exists = append(exists, exist)
// look up 3 different subnets
ctx := context.Background()
exists, err := s.FindPeersWithSubnet(ctx, "", 1, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
exists2, err := s.FindPeersWithSubnet(ctx, "", 2, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
exists3, err := s.FindPeersWithSubnet(ctx, "", 3, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
if !exists || !exists2 || !exists3 {
t.Fatal("Peer with subnet doesn't exist")
}
// Update ENR of a peer.
testService := &Service{
dv5Listener: listeners[0],
metaData: wrapper.WrappedMetadataV0(&pb.MetaDataV0{
Attnets: bitfield.NewBitvector64(),
}),
// Check if all peers are found.
for _, exist := range exists {
require.Equal(t, true, exist, "Peer with subnet doesn't exist")
}
cache.SubnetIDs.AddAttesterSubnetID(0, 10)
testService.RefreshENR()
time.Sleep(2 * time.Second)
exists, err = s.FindPeersWithSubnet(ctx, "", 2, flags.Get().MinimumPeersPerSubnet)
require.NoError(t, err)
assert.Equal(t, true, exists, "Peer with subnet doesn't exist")
assert.NoError(t, s.Stop())
exitRoutine <- true
}
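As context for the `attnets` ENR entry set in the test above: it is a 64-bit bitvector in which bit i marks a subscription to attestation subnet i. A minimal standalone sketch of writing and reading such a bitvector with the same go-bitfield package (illustrative only, not code from this PR):

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// Mark a subscription to subnet 3, as each test node does for its own subnet.
	bitV := bitfield.NewBitvector64()
	bitV.SetBitAt(3, true)
	for i := uint64(0); i < bitV.Len(); i++ {
		if bitV.BitAt(i) {
			fmt.Println("subscribed to subnet", i)
		}
	}
}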
func Test_AttSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
record func(t *testing.T) *enr.Record
record func(localNode *enode.LocalNode) *enr.Record
want []uint64
wantErr bool
errContains string
}{
{
name: "valid record",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
localNode = initializeAttSubnets(localNode)
return localNode.Node().Record()
},
@@ -166,14 +204,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "too small subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, []byte{})
localNode.Set(entry)
return localNode.Node().Record()
@@ -184,14 +215,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "half sized subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, 4))
localNode.Set(entry)
return localNode.Node().Record()
@@ -202,14 +226,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "too large subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+1))
localNode.Set(entry)
return localNode.Node().Record()
@@ -220,14 +237,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "very large subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
entry := enr.WithEntry(attSubnetEnrKey, make([]byte, byteCount(int(attestationSubnetCount))+100))
localNode.Set(entry)
return localNode.Node().Record()
@@ -238,14 +248,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "single subnet",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
bitV.SetBitAt(0, true)
entry := enr.WithEntry(attSubnetEnrKey, bitV.Bytes())
@@ -257,17 +260,10 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "multiple subnets",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
for i := uint64(0); i < bitV.Len(); i++ {
// skip 2 subnets
// Keep only odd subnets.
if (i+1)%2 == 0 {
continue
}
@@ -285,14 +281,7 @@ func Test_AttSubnets(t *testing.T) {
},
{
name: "all subnets",
record: func(t *testing.T) *enr.Record {
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record: func(localNode *enode.LocalNode) *enr.Record {
bitV := bitfield.NewBitvector64()
for i := uint64(0); i < bitV.Len(); i++ {
bitV.SetBitAt(i, true)
@@ -309,16 +298,35 @@ func Test_AttSubnets(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
got, err := attSubnets(tt.record(t))
db, err := enode.OpenDB("")
assert.NoError(t, err)
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
record := tt.record(localNode)
got, err := attSubnets(record)
if (err != nil) != tt.wantErr {
t.Errorf("syncSubnets() error = %v, wantErr %v", err, tt.wantErr)
return
}
if tt.wantErr {
assert.ErrorContains(t, tt.errContains, err)
}
if !reflect.DeepEqual(got, tt.want) {
t.Errorf("syncSubnets() got = %v, want %v", got, tt.want)
want := make(map[uint64]bool, len(tt.want))
for _, subnet := range tt.want {
want[subnet] = true
}
if !reflect.DeepEqual(got, want) {
t.Errorf("syncSubnets() got = %v, want %v", got, want)
}
})
}


@@ -50,7 +50,7 @@ func ensurePeerConnections(ctx context.Context, h host.Host, peers *peers.Status
c := h.Network().ConnsToPeer(p.ID)
if len(c) == 0 {
if err := connectWithTimeout(ctx, h, p); err != nil {
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("Failed to reconnect to peer")
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("failed to reconnect to peer")
continue
}
}


@@ -25,7 +25,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
],
)


@@ -8,6 +8,7 @@ import (
time2 "time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
@@ -23,7 +24,6 @@ import (
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -124,86 +124,93 @@ func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
// stalling while waiting for the first response chunk.
// After that we send a keepalive dummy message every SECONDS_PER_SLOT
// to prevent anyone (e.g. proxy servers) from closing connections.
sendKeepalive(w, flusher)
if err := sendKeepalive(w, flusher); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
keepaliveTicker := time2.NewTicker(time2.Duration(params.BeaconConfig().SecondsPerSlot) * time2.Second)
for {
select {
case event := <-opsChan:
handleBlockOperationEvents(w, flusher, topicsMap, event)
if err := handleBlockOperationEvents(w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case event := <-stateChan:
s.handleStateEvents(ctx, w, flusher, topicsMap, event)
if err := s.handleStateEvents(ctx, w, flusher, topicsMap, event); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-keepaliveTicker.C:
sendKeepalive(w, flusher)
if err := sendKeepalive(w, flusher); err != nil {
httputil.HandleError(w, err.Error(), http.StatusInternalServerError)
return
}
case <-ctx.Done():
return
}
}
}
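For context on the keepalive: in the SSE wire format, a line beginning with ':' is a comment that clients discard, so periodically writing ":\n\n" keeps idle connections (and intermediaries such as proxies) from timing out. A minimal sketch of the pattern, with hypothetical names and an arbitrary 12-second interval standing in for SECONDS_PER_SLOT — not code from this PR:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func stream(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "text/event-stream")
	ticker := time.NewTicker(12 * time.Second) // stand-in for SECONDS_PER_SLOT
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			// ':' starts an SSE comment; clients ignore it.
			if _, err := fmt.Fprint(w, ":\n\n"); err != nil {
				return
			}
			flusher.Flush()
		case <-r.Context().Done():
			return
		}
	}
}

func main() {
	http.HandleFunc("/events", stream)
	_ = http.ListenAndServe(":8080", nil)
}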
func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) {
func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case operation.AggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return
return nil
}
attData, ok := event.Data.(*operation.AggregatedAttReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(attData.Attestation.Aggregate)
send(w, flusher, AttestationTopic, att)
return send(w, flusher, AttestationTopic, att)
case operation.UnaggregatedAttReceived:
if _, ok := requestedTopics[AttestationTopic]; !ok {
return
return nil
}
attData, ok := event.Data.(*operation.UnAggregatedAttReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttestationTopic)
}
att := structs.AttFromConsensus(attData.Attestation)
send(w, flusher, AttestationTopic, att)
return send(w, flusher, AttestationTopic, att)
case operation.ExitReceived:
if _, ok := requestedTopics[VoluntaryExitTopic]; !ok {
return
return nil
}
exitData, ok := event.Data.(*operation.ExitReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, VoluntaryExitTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, VoluntaryExitTopic)
}
exit := structs.SignedExitFromConsensus(exitData.Exit)
send(w, flusher, VoluntaryExitTopic, exit)
return send(w, flusher, VoluntaryExitTopic, exit)
case operation.SyncCommitteeContributionReceived:
if _, ok := requestedTopics[SyncCommitteeContributionTopic]; !ok {
return
return nil
}
contributionData, ok := event.Data.(*operation.SyncCommitteeContributionReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, SyncCommitteeContributionTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, SyncCommitteeContributionTopic)
}
contribution := structs.SignedContributionAndProofFromConsensus(contributionData.Contribution)
send(w, flusher, SyncCommitteeContributionTopic, contribution)
return send(w, flusher, SyncCommitteeContributionTopic, contribution)
case operation.BLSToExecutionChangeReceived:
if _, ok := requestedTopics[BLSToExecutionChangeTopic]; !ok {
return
return nil
}
changeData, ok := event.Data.(*operation.BLSToExecutionChangeReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BLSToExecutionChangeTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BLSToExecutionChangeTopic)
}
send(w, flusher, BLSToExecutionChangeTopic, structs.SignedBLSChangeFromConsensus(changeData.Change))
return send(w, flusher, BLSToExecutionChangeTopic, structs.SignedBLSChangeFromConsensus(changeData.Change))
case operation.BlobSidecarReceived:
if _, ok := requestedTopics[BlobSidecarTopic]; !ok {
return
return nil
}
blobData, ok := event.Data.(*operation.BlobSidecarReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BlobSidecarTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BlobSidecarTopic)
}
versionedHash := blockchain.ConvertKzgCommitmentToVersionedHash(blobData.Blob.KzgCommitment)
blobEvent := &structs.BlobSidecarEvent{
@@ -213,38 +220,36 @@ func handleBlockOperationEvents(w http.ResponseWriter, flusher http.Flusher, req
VersionedHash: versionedHash.String(),
KzgCommitment: hexutil.Encode(blobData.Blob.KzgCommitment),
}
send(w, flusher, BlobSidecarTopic, blobEvent)
return send(w, flusher, BlobSidecarTopic, blobEvent)
case operation.AttesterSlashingReceived:
if _, ok := requestedTopics[AttesterSlashingTopic]; !ok {
return
return nil
}
attesterSlashingData, ok := event.Data.(*operation.AttesterSlashingReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, AttesterSlashingTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, AttesterSlashingTopic)
}
send(w, flusher, AttesterSlashingTopic, structs.AttesterSlashingFromConsensus(attesterSlashingData.AttesterSlashing))
return send(w, flusher, AttesterSlashingTopic, structs.AttesterSlashingFromConsensus(attesterSlashingData.AttesterSlashing))
case operation.ProposerSlashingReceived:
if _, ok := requestedTopics[ProposerSlashingTopic]; !ok {
return
return nil
}
proposerSlashingData, ok := event.Data.(*operation.ProposerSlashingReceivedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, ProposerSlashingTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, ProposerSlashingTopic)
}
send(w, flusher, ProposerSlashingTopic, structs.ProposerSlashingFromConsensus(proposerSlashingData.ProposerSlashing))
return send(w, flusher, ProposerSlashingTopic, structs.ProposerSlashingFromConsensus(proposerSlashingData.ProposerSlashing))
}
return nil
}
func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) {
func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, flusher http.Flusher, requestedTopics map[string]bool, event *feed.Event) error {
switch event.Type {
case statefeed.NewHead:
if _, ok := requestedTopics[HeadTopic]; ok {
headData, ok := event.Data.(*ethpb.EventHead)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, HeadTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, HeadTopic)
}
head := &structs.HeadEvent{
Slot: fmt.Sprintf("%d", headData.Slot),
@@ -255,23 +260,22 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
PreviousDutyDependentRoot: hexutil.Encode(headData.PreviousDutyDependentRoot),
CurrentDutyDependentRoot: hexutil.Encode(headData.CurrentDutyDependentRoot),
}
send(w, flusher, HeadTopic, head)
return send(w, flusher, HeadTopic, head)
}
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
s.sendPayloadAttributes(ctx, w, flusher)
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.MissedSlot:
if _, ok := requestedTopics[PayloadAttributesTopic]; ok {
s.sendPayloadAttributes(ctx, w, flusher)
return s.sendPayloadAttributes(ctx, w, flusher)
}
case statefeed.FinalizedCheckpoint:
if _, ok := requestedTopics[FinalizedCheckpointTopic]; !ok {
return
return nil
}
checkpointData, ok := event.Data.(*ethpb.EventFinalizedCheckpoint)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, FinalizedCheckpointTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, FinalizedCheckpointTopic)
}
checkpoint := &structs.FinalizedCheckpointEvent{
Block: hexutil.Encode(checkpointData.Block),
@@ -279,15 +283,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
Epoch: fmt.Sprintf("%d", checkpointData.Epoch),
ExecutionOptimistic: checkpointData.ExecutionOptimistic,
}
send(w, flusher, FinalizedCheckpointTopic, checkpoint)
return send(w, flusher, FinalizedCheckpointTopic, checkpoint)
case statefeed.LightClientFinalityUpdate:
if _, ok := requestedTopics[LightClientFinalityUpdateTopic]; !ok {
return
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientFinalityUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, LightClientFinalityUpdateTopic)
}
var finalityBranch []string
@@ -318,15 +321,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientFinalityUpdateTopic, update)
return send(w, flusher, LightClientFinalityUpdateTopic, update)
case statefeed.LightClientOptimisticUpdate:
if _, ok := requestedTopics[LightClientOptimisticUpdateTopic]; !ok {
return
return nil
}
updateData, ok := event.Data.(*ethpbv2.LightClientOptimisticUpdateWithVersion)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, LightClientOptimisticUpdateTopic)
}
update := &structs.LightClientOptimisticUpdateEvent{
Version: version.String(int(updateData.Version)),
@@ -345,15 +347,14 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
SignatureSlot: fmt.Sprintf("%d", updateData.Data.SignatureSlot),
},
}
send(w, flusher, LightClientOptimisticUpdateTopic, update)
return send(w, flusher, LightClientOptimisticUpdateTopic, update)
case statefeed.Reorg:
if _, ok := requestedTopics[ChainReorgTopic]; !ok {
return
return nil
}
reorgData, ok := event.Data.(*ethpb.EventChainReorg)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, ChainReorgTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, ChainReorgTopic)
}
reorg := &structs.ChainReorgEvent{
Slot: fmt.Sprintf("%d", reorgData.Slot),
@@ -365,78 +366,69 @@ func (s *Server) handleStateEvents(ctx context.Context, w http.ResponseWriter, f
Epoch: fmt.Sprintf("%d", reorgData.Epoch),
ExecutionOptimistic: reorgData.ExecutionOptimistic,
}
send(w, flusher, ChainReorgTopic, reorg)
return send(w, flusher, ChainReorgTopic, reorg)
case statefeed.BlockProcessed:
if _, ok := requestedTopics[BlockTopic]; !ok {
return
return nil
}
blkData, ok := event.Data.(*statefeed.BlockProcessedData)
if !ok {
write(w, flusher, topicDataMismatch, event.Data, BlockTopic)
return
return write(w, flusher, topicDataMismatch, event.Data, BlockTopic)
}
blockRoot, err := blkData.SignedBlock.Block().HashTreeRoot()
if err != nil {
write(w, flusher, "Could not get block root: "+err.Error())
return
return write(w, flusher, "Could not get block root: "+err.Error())
}
blk := &structs.BlockEvent{
Slot: fmt.Sprintf("%d", blkData.Slot),
Block: hexutil.Encode(blockRoot[:]),
ExecutionOptimistic: blkData.Optimistic,
}
send(w, flusher, BlockTopic, blk)
return send(w, flusher, BlockTopic, blk)
}
return nil
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on state at N_{current_slot}, while the remaining fields are based on the state at N_{current_slot + 1}.
func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWriter, flusher http.Flusher) {
func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWriter, flusher http.Flusher) error {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
write(w, flusher, "Could not get head root: "+err.Error())
return
return write(w, flusher, "Could not get head root: "+err.Error())
}
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
write(w, flusher, "Could not get head state: "+err.Error())
return
return write(w, flusher, "Could not get head state: "+err.Error())
}
// advance the head state
headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
if err != nil {
write(w, flusher, "Could not advance head state: "+err.Error())
return
return write(w, flusher, "Could not advance head state: "+err.Error())
}
headBlock, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
write(w, flusher, "Could not get head block: "+err.Error())
return
return write(w, flusher, "Could not get head block: "+err.Error())
}
headPayload, err := headBlock.Block().Body().Execution()
if err != nil {
write(w, flusher, "Could not get execution payload: "+err.Error())
return
return write(w, flusher, "Could not get execution payload: "+err.Error())
}
t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
if err != nil {
write(w, flusher, "Could not get head state slot time: "+err.Error())
return
return write(w, flusher, "Could not get head state slot time: "+err.Error())
}
prevRando, err := helpers.RandaoMix(headState, time.CurrentEpoch(headState))
if err != nil {
write(w, flusher, "Could not get head state randao mix: "+err.Error())
return
return write(w, flusher, "Could not get head state randao mix: "+err.Error())
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
if err != nil {
write(w, flusher, "Could not get head state proposer index: "+err.Error())
return
return write(w, flusher, "Could not get head state proposer index: "+err.Error())
}
var attributes interface{}
@@ -450,8 +442,7 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Capella:
withdrawals, err := headState.ExpectedWithdrawals()
if err != nil {
write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
}
attributes = &structs.PayloadAttributesV2{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -462,13 +453,11 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
case version.Deneb:
withdrawals, err := headState.ExpectedWithdrawals()
if err != nil {
write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
return
return write(w, flusher, "Could not get head state expected withdrawals: "+err.Error())
}
parentRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
write(w, flusher, "Could not get head block root: "+err.Error())
return
return write(w, flusher, "Could not get head block root: "+err.Error())
}
attributes = &structs.PayloadAttributesV3{
Timestamp: fmt.Sprintf("%d", t.Unix()),
@@ -478,14 +467,12 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
}
default:
write(w, flusher, "Payload version %s is not supported", version.String(headState.Version()))
return
return write(w, flusher, "Payload version %s is not supported", version.String(headState.Version()))
}
attributesBytes, err := json.Marshal(attributes)
if err != nil {
write(w, flusher, err.Error())
return
return write(w, flusher, err.Error())
}
eventData := structs.PayloadAttributesEventData{
ProposerIndex: fmt.Sprintf("%d", proposerIndex),
@@ -497,32 +484,31 @@ func (s *Server) sendPayloadAttributes(ctx context.Context, w http.ResponseWrite
}
eventDataBytes, err := json.Marshal(eventData)
if err != nil {
write(w, flusher, err.Error())
return
return write(w, flusher, err.Error())
}
send(w, flusher, PayloadAttributesTopic, &structs.PayloadAttributesEvent{
return send(w, flusher, PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
})
}
func send(w http.ResponseWriter, flusher http.Flusher, name string, data interface{}) {
func send(w http.ResponseWriter, flusher http.Flusher, name string, data interface{}) error {
j, err := json.Marshal(data)
if err != nil {
write(w, flusher, "Could not marshal event to JSON: "+err.Error())
return
return write(w, flusher, "Could not marshal event to JSON: "+err.Error())
}
write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
return write(w, flusher, "event: %s\ndata: %s\n\n", name, string(j))
}
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) {
write(w, flusher, ":\n\n")
func sendKeepalive(w http.ResponseWriter, flusher http.Flusher) error {
return write(w, flusher, ":\n\n")
}
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) {
func write(w http.ResponseWriter, flusher http.Flusher, format string, a ...any) error {
_, err := fmt.Fprintf(w, format, a...)
if err != nil {
log.WithError(err).Error("Could not write to response writer")
return errors.Wrap(err, "could not write to response writer")
}
flusher.Flush()
return nil
}
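For reference, a successful send produces one SSE frame on the wire shaped like the following (topic name and payload are illustrative):

event: head
data: {"slot":"42","block":"0x..."}

The trailing blank line terminates the event; sendKeepalive emits only the comment line ':' followed by a blank line.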


@@ -15,7 +15,9 @@ go_library(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -35,7 +37,10 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["handlers_test.go"],
srcs = [
"handlers_test.go",
"service_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
@@ -43,6 +48,8 @@ go_test(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",


@@ -18,6 +18,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
mockstategen "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen/mock"
@@ -192,6 +193,7 @@ func BlockRewardTestSetup(t *testing.T, forkName string) (state.BeaconState, int
}
func TestBlockRewards(t *testing.T) {
db := dbutil.SetupDB(t)
phase0block, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
t.Run("phase 0", func(t *testing.T) {
@@ -227,7 +229,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -260,7 +265,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -293,7 +301,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -326,7 +337,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -715,7 +729,9 @@ func TestSyncCommiteeRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: dbutil.SetupDB(t)},
}
t.Run("ok - filtered vals", func(t *testing.T) {


@@ -8,7 +8,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
coreblocks "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -26,6 +28,7 @@ type BlockRewardsFetcher interface {
// BlockRewardService implements BlockRewardsFetcher and can be declared to access the underlying functions
type BlockRewardService struct {
Replayer stategen.ReplayerBuilder
DB db.HeadAccessDatabase
}
// GetBlockRewardsData returns the BlockRewards object which is used for the BlockRewardsResponse and ProduceBlockV3.
@@ -124,6 +127,22 @@ func (rs *BlockRewardService) GetStateForRewards(ctx context.Context, blk interf
// We want to run several block processing functions that update the proposer's balance.
// This will allow us to calculate proposer rewards for each operation (atts, slashings etc).
// To do this, we replay the state up to the block's slot, but before processing the block.
// Try getting the state from the next slot cache first.
_, prevSlotRoots, err := rs.DB.BlockRootsBySlot(ctx, slots.PrevSlot(blk.Slot()))
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get roots for previous slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
}
for _, r := range prevSlotRoots {
s := transition.NextSlotState(r[:], blk.Slot())
if s != nil {
return s, nil
}
}
st, err := rs.Replayer.ReplayerForSlot(slots.PrevSlot(blk.Slot())).ReplayToSlot(ctx, blk.Slot())
if err != nil {
return nil, &httputil.DefaultJsonError{


@@ -0,0 +1,46 @@
package rewards
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestGetStateForRewards_NextSlotCacheHit(t *testing.T) {
ctx := context.Background()
db := dbutil.SetupDB(t)
st, err := util.NewBeaconStateDeneb()
require.NoError(t, err)
b := util.HydrateSignedBeaconBlockDeneb(util.NewBeaconBlockDeneb())
parent, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, parent))
r, err := parent.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, transition.UpdateNextSlotCache(ctx, r[:], st))
s := &BlockRewardService{
Replayer: nil, // setting to nil because replayer must not be invoked
DB: db,
}
b = util.HydrateSignedBeaconBlockDeneb(util.NewBeaconBlockDeneb())
sbb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
sbb.SetSlot(parent.Block().Slot() + 1)
result, err := s.GetStateForRewards(ctx, sbb.Block())
require.NoError(t, err)
_, lcs := transition.LastCachedState()
expected, err := lcs.HashTreeRoot(ctx)
require.NoError(t, err)
actual, err := result.HashTreeRoot(ctx)
require.NoError(t, err)
assert.DeepEqual(t, expected, actual)
}


@@ -4,10 +4,10 @@ go_library(
name = "go_default_library",
srcs = [
"aggregator.go",
"duties.go",
"attester.go",
"blocks.go",
"construct_generic_block.go",
"duties.go",
"exit.go",
"log.go",
"proposer.go",
@@ -179,10 +179,10 @@ go_test(
timeout = "moderate",
srcs = [
"aggregator_test.go",
"duties_test.go",
"attester_test.go",
"blocks_test.go",
"construct_generic_block_test.go",
"duties_test.go",
"exit_test.go",
"proposer_altair_test.go",
"proposer_attestations_test.go",
@@ -201,6 +201,7 @@ go_test(
"status_mainnet_test.go",
"status_test.go",
"sync_committee_test.go",
"unblinder_test.go",
"validator_test.go",
],
embed = [":go_default_library"],


@@ -27,11 +27,20 @@ import (
"go.opencensus.io/trace"
)
// builderGetPayloadMissCount tracks the number of misses when validator tries to get a payload from builder
var builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "builder_get_payload_miss_count",
Help: "The number of get payload misses for validator requests to builder",
})
var (
builderValueGweiGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "builder_value_gwei",
Help: "Builder payload value in gwei",
})
localValueGweiGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "local_value_gwei",
Help: "Local payload value in gwei",
})
builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "builder_get_payload_miss_count",
Help: "The number of get payload misses for validator requests to builder",
})
)
// emptyTransactionsRoot represents the returned value of ssz.TransactionsRoot([][]byte{}) and
// can be used as a constant to avoid recomputing this value in every call.
@@ -92,6 +101,8 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
"builderBoostFactor": builderBoostFactor,
}).Warn("Proposer: both local boost and builder boost are using non default values")
}
builderValueGweiGauge.Set(float64(builderValueGwei))
localValueGweiGauge.Set(float64(localValueGwei))
// If we can't get the builder value, just use local block.
if higherValueBuilder && withdrawalsMatched { // Builder value is higher and withdrawals match.


@@ -8,6 +8,7 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
@@ -39,6 +40,12 @@ var (
})
)
func setFeeRecipientIfBurnAddress(val *cache.TrackedValidator) {
if val.FeeRecipient == primitives.ExecutionAddress([20]byte{}) && val.Index == 0 {
val.FeeRecipient = primitives.ExecutionAddress(params.BeaconConfig().DefaultFeeRecipient)
}
}
// This returns the local execution payload of a given slot. The function has full awareness of pre and post merge.
func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBeaconBlock, st state.BeaconState) (interfaces.ExecutionData, bool, error) {
ctx, span := trace.StartSpan(ctx, "ProposerServer.getLocalPayload")
@@ -62,6 +69,7 @@ func (vs *Server) getLocalPayload(ctx context.Context, blk interfaces.ReadOnlyBe
if !tracked {
logrus.WithFields(logFields).Warn("could not find tracked proposer index")
}
setFeeRecipientIfBurnAddress(&val)
var err error
if ok && payloadId != [8]byte{} {


@@ -383,3 +383,16 @@ func TestServer_getTerminalBlockHashIfExists(t *testing.T) {
})
}
}
func TestSetFeeRecipientIfBurnAddress(t *testing.T) {
val := &cache.TrackedValidator{Index: 1}
cfg := params.BeaconConfig().Copy()
cfg.DefaultFeeRecipient = common.Address([20]byte{'a'})
params.OverrideBeaconConfig(cfg)
require.NotEqual(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
setFeeRecipientIfBurnAddress(val)
require.NotEqual(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
val.Index = 0
setFeeRecipientIfBurnAddress(val)
require.Equal(t, common.Address(val.FeeRecipient), params.BeaconConfig().DefaultFeeRecipient)
}


@@ -13,18 +13,25 @@ import (
)
func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.BlobsBundle) ([]*ethpb.BlobSidecar, error) {
if block.Version() < version.Deneb || bundle == nil {
if block.Version() < version.Deneb {
return nil, nil
}
header, err := block.Header()
if err != nil {
return nil, err
}
body := block.Block().Body()
blockCommitments, err := body.BlobKzgCommitments()
if err != nil {
return nil, err
}
if len(blockCommitments) == 0 {
return nil, nil
}
// Builders must provide a blob bundle for blocks that carry commitments.
if bundle == nil {
return nil, errors.New("no valid bundle provided")
}
header, err := block.Header()
if err != nil {
return nil, err
}
// Ensure there are equal counts of blobs/commitments/proofs.
if len(bundle.KzgCommitments) != len(bundle.Blobs) {


@@ -0,0 +1,34 @@
package validator
import (
"testing"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
)
func TestUnblinder_UnblindBlobSidecars_InvalidBundle(t *testing.T) {
wBlock, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
Body: &ethpb.BeaconBlockBodyDeneb{},
},
Signature: nil,
})
assert.NoError(t, err)
_, err = unblindBlobsSidecars(wBlock, nil)
assert.NoError(t, err)
wBlock, err = consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
Body: &ethpb.BeaconBlockBodyDeneb{
BlobKzgCommitments: [][]byte{[]byte("a"), []byte("b")},
},
},
Signature: nil,
})
assert.NoError(t, err)
_, err = unblindBlobsSidecars(wBlock, nil)
assert.ErrorContains(t, "no valid bundle provided", err)
}


@@ -215,7 +215,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
BlobStorage: s.cfg.BlobStorage,
}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch, DB: s.cfg.BeaconDB}
coreService := &core.Service{
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,


@@ -19,6 +19,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher",
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl:__subpackages__",
"//testing/slasher/simulator:__subpackages__",
],
deps = [
@@ -27,6 +28,7 @@ go_library(
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/startup:go_default_library",
@@ -45,6 +47,7 @@ go_library(
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_exp//maps:go_default_library",
],
)


@@ -208,8 +208,8 @@ func (m *MinSpanChunksSlice) CheckSlashable(
}
if existingAttWrapper == nil {
// This case should normally not happen. If this happen, it means we previously
// recorded in our min/max DB an distance corresponding to an attestaiton, but WITHOUT
// This case should normally not happen. If this happens, it means we previously
// recorded in our min/max DB a distance corresponding to an attestation, but WITHOUT
// recording the attestation itself. As a consequence, we say there is no surrounding vote,
// but we log an error.
fields := logrus.Fields{
@@ -287,8 +287,8 @@ func (m *MaxSpanChunksSlice) CheckSlashable(
}
if existingAttWrapper == nil {
// This case should normally not happen. If this happen, it means we previously
// recorded in our min/max DB an distance corresponding to an attestaiton, but WITHOUT
// This case should normally not happen. If this happens, it means we previously
// recorded in our min/max DB a distance corresponding to an attestation, but WITHOUT
// recording the attestation itself. As a consequence, we say there is no surrounded vote,
// but we log an error.
fields := logrus.Fields{


@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
"golang.org/x/exp/maps"
)
// Takes in a list of indexed attestation wrappers and returns any
@@ -131,7 +132,7 @@ func (s *Service) checkSurroundVotes(
}
// Update the latest updated epoch for all validators involved to the current chunk.
indexes := s.params.validatorIndexesInChunk(validatorChunkIndex)
indexes := s.params.ValidatorIndexesInChunk(validatorChunkIndex)
for _, index := range indexes {
s.latestEpochUpdatedForValidator[index] = currentEpoch
}
@@ -272,44 +273,20 @@ func (s *Service) updatedChunkByChunkIndex(
// minFirstEpochToUpdate is set to the smallest first epoch to update for all validators in the chunk
// corresponding to the `validatorChunkIndex`.
var minFirstEpochToUpdate *primitives.Epoch
var (
minFirstEpochToUpdate *primitives.Epoch
neededChunkIndexesMap map[uint64]bool
neededChunkIndexesMap := map[uint64]bool{}
err error
)
validatorIndexes := s.params.ValidatorIndexesInChunk(validatorChunkIndex)
validatorIndexes := s.params.validatorIndexesInChunk(validatorChunkIndex)
for _, validatorIndex := range validatorIndexes {
// Retrieve the first epoch to write for the validator index.
isAnEpochToUpdate, firstEpochToUpdate, err := s.firstEpochToUpdate(validatorIndex, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get first epoch to write for validator index %d with current epoch %d", validatorIndex, currentEpoch)
}
if !isAnEpochToUpdate {
// If there is no epoch to write, skip.
continue
}
// If, for this validator index, the chunk corresponding to the first epoch to write
// (and all following epochs until the current epoch) are already flagged as needed,
// skip.
if minFirstEpochToUpdate != nil && *minFirstEpochToUpdate <= firstEpochToUpdate {
continue
}
minFirstEpochToUpdate = &firstEpochToUpdate
// Add new needed chunk indexes to the map.
for i := firstEpochToUpdate; i <= currentEpoch; i++ {
chunkIndex := s.params.chunkIndex(i)
neededChunkIndexesMap[chunkIndex] = true
}
if neededChunkIndexesMap, err = s.findNeededChunkIndexes(validatorIndexes, currentEpoch, minFirstEpochToUpdate); err != nil {
return nil, errors.Wrap(err, "could not find the needed chunk indexed")
}
// Get the list of needed chunk indexes.
neededChunkIndexes := make([]uint64, 0, len(neededChunkIndexesMap))
for chunkIndex := range neededChunkIndexesMap {
neededChunkIndexes = append(neededChunkIndexes, chunkIndex)
}
// Transform the map of needed chunk indexes to a slice.
neededChunkIndexes := maps.Keys(neededChunkIndexesMap)
// Retrieve needed chunks from the database.
chunkByChunkIndex, err := s.loadChunksFromDisk(ctx, validatorChunkIndex, chunkKind, neededChunkIndexes)
@@ -332,7 +309,7 @@ func (s *Service) updatedChunkByChunkIndex(
epochToUpdate := firstEpochToUpdate
for epochToUpdate <= currentEpoch {
// Get the chunk index for the ecpoh to write.
// Get the chunk index for the epoch to write.
chunkIndex := s.params.chunkIndex(epochToUpdate)
// Get the chunk corresponding to the chunk index from the `chunkByChunkIndex` map.
@@ -363,6 +340,45 @@ func (s *Service) updatedChunkByChunkIndex(
return chunkByChunkIndex, nil
}
// findNeededChunkIndexes returns a map of the needed chunk indexes.
// It loops over the validator indexes and finds the first epoch to update for each validator index.
func (s *Service) findNeededChunkIndexes(
validatorIndexes []primitives.ValidatorIndex,
currentEpoch primitives.Epoch,
minFirstEpochToUpdate *primitives.Epoch,
) (map[uint64]bool, error) {
neededChunkIndexesMap := map[uint64]bool{}
for _, validatorIndex := range validatorIndexes {
// Retrieve the first epoch to write for the validator index.
isAnEpochToUpdate, firstEpochToUpdate, err := s.firstEpochToUpdate(validatorIndex, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get first epoch to write for validator index %d with current epoch %d", validatorIndex, currentEpoch)
}
if !isAnEpochToUpdate {
// If there is no epoch to write, skip.
continue
}
// If, for this validator index, the chunk corresponding to the first epoch to write
// (and all following epochs until the current epoch) are already flagged as needed,
// skip.
if minFirstEpochToUpdate != nil && *minFirstEpochToUpdate <= firstEpochToUpdate {
continue
}
minFirstEpochToUpdate = &firstEpochToUpdate
// Add new needed chunk indexes to the map.
for i := firstEpochToUpdate; i <= currentEpoch; i++ {
chunkIndex := s.params.chunkIndex(i)
neededChunkIndexesMap[chunkIndex] = true
}
}
return neededChunkIndexesMap, nil
}
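The map-to-slice conversion used above comes from golang.org/x/exp/maps; a minimal sketch of the call (note that key order is unspecified):

package main

import (
	"fmt"

	"golang.org/x/exp/maps"
)

func main() {
	needed := map[uint64]bool{1: true, 4: true, 9: true}
	idxs := maps.Keys(needed) // e.g. [4 1 9]; order is not guaranteed
	fmt.Println(idxs)
}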
// firstEpochToUpdate, given a validator index and the current epoch, returns a boolean indicating
// if there is an epoch to write. If it is the case, it returns the first epoch to write.
func (s *Service) firstEpochToUpdate(validatorIndex primitives.ValidatorIndex, currentEpoch primitives.Epoch) (bool, primitives.Epoch, error) {


@@ -1059,7 +1059,7 @@ func Test_updatedChunkByChunkIndex(t *testing.T) {
// Initialize the slasher database.
slasherDB := dbtest.SetupSlasherDB(t)
// Intialize the slasher service.
// Initialize the slasher service.
service := &Service{
params: &Parameters{
chunkSize: tt.chunkSize,
@@ -1502,7 +1502,7 @@ func runAttestationsBenchmark(b *testing.B, s *Service, numAtts, numValidators u
func Benchmark_checkSurroundVotes(b *testing.B) {
const (
// Approximatively the number of Holesky active validators on 2024-02-16
// Approximately the number of Holesky active validators on 2024-02-16
// This number is both a multiple of 32 (the number of slots per epoch) and 256 (the number of validators per chunk)
validatorsCount = 1_638_400
slotsPerEpoch = 32
@@ -1526,7 +1526,7 @@ func Benchmark_checkSurroundVotes(b *testing.B) {
// So for 1_638_400 validators with 32 slots per epoch, we would have 48_000 attestation wrappers per slot.
// With 256 validators per chunk, we would have only 188 modified chunks.
//
// In this benchmark, we use the worst case scenario where attestating validators are evenly splitted across all validators chunks.
// In this benchmark, we use the worst case scenario where attesting validators are evenly split across all validators chunks.
// We also suppose that only one chunk per validator chunk index is modified.
// For one given validator index, multiple chunk indexes could be modified.
//


@@ -2,8 +2,11 @@ package slasher
import (
"bytes"
"context"
"fmt"
"strconv"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/slasherkv"
slashertypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -76,7 +79,7 @@ func (s *Service) filterAttestations(
continue
}
// If an attestations's target epoch is in the future, we defer processing for later.
// If an attestation's target epoch is in the future, we defer processing for later.
if attWrapper.IndexedAttestation.Data.Target.Epoch > currentEpoch {
validInFuture = append(validInFuture, attWrapper)
continue
@@ -159,3 +162,93 @@ func isDoubleProposal(incomingSigningRoot, existingSigningRoot [32]byte) bool {
}
return incomingSigningRoot != existingSigningRoot
}
type GetChunkFromDatabaseFilters struct {
ChunkKind slashertypes.ChunkKind
ValidatorIndex primitives.ValidatorIndex
SourceEpoch primitives.Epoch
IsDisplayAllValidatorsInChunk bool
IsDisplayAllEpochsInChunk bool
}
// GetChunkFromDatabase is a utility function that retrieves a chunk from the
// slasher database.
func GetChunkFromDatabase(
ctx context.Context,
dbPath string,
filters GetChunkFromDatabaseFilters,
params *Parameters,
) (lastEpochForValidatorIndex primitives.Epoch, chunkIndex, validatorChunkIndex uint64, chunk Chunker, err error) {
// init store
d, err := slasherkv.NewKVStore(ctx, dbPath)
if err != nil {
return lastEpochForValidatorIndex, chunkIndex, validatorChunkIndex, chunk, fmt.Errorf("could not open database at path %s: %w", dbPath, err)
}
defer closeDB(d)
// init service
s := Service{
params: params,
serviceCfg: &ServiceConfig{
Database: d,
},
}
// variables
validatorIndex := filters.ValidatorIndex
sourceEpoch := filters.SourceEpoch
chunkKind := filters.ChunkKind
validatorChunkIndex = s.params.validatorChunkIndex(validatorIndex)
chunkIndex = s.params.chunkIndex(sourceEpoch)
// before getting the chunk, we need to verify that the requested epoch is in the database
lastEpochForValidator, err := s.serviceCfg.Database.LastEpochWrittenForValidators(ctx, []primitives.ValidatorIndex{validatorIndex})
if err != nil {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get last epoch written for validator %d: %w", validatorIndex, err)
}
if len(lastEpochForValidator) == 0 {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get information at epoch %d for validator %d: there's no record found in slasher database",
sourceEpoch, validatorIndex,
)
}
lastEpochForValidatorIndex = lastEpochForValidator[0].Epoch
// if the requested epoch is within range, we can proceed to get the chunk; otherwise, return an error
atBestSmallestEpoch := lastEpochForValidatorIndex.Sub(uint64(params.historyLength))
if sourceEpoch < atBestSmallestEpoch || sourceEpoch > lastEpochForValidatorIndex {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("requested epoch %d is outside the slasher history length %d, data can be provided within the epoch range [%d:%d] for validator %d",
sourceEpoch, params.historyLength, atBestSmallestEpoch, lastEpochForValidatorIndex, validatorIndex,
)
}
// fetch chunk from DB
chunk, err = s.getChunkFromDatabase(ctx, chunkKind, validatorChunkIndex, chunkIndex)
if err != nil {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get chunk at index %d: %w", chunkIndex, err)
}
return lastEpochForValidatorIndex, chunkIndex, validatorChunkIndex, chunk, nil
}
func closeDB(d *slasherkv.Store) {
if err := d.Close(); err != nil {
log.WithError(err).Error("could not close database")
}
}
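A hypothetical call site for the new helper (the database path and filter values below are illustrative, not from this PR):

package main

import (
	"context"
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher"
	slashertypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
)

func main() {
	// Illustrative filter values only.
	filters := slasher.GetChunkFromDatabaseFilters{
		ChunkKind:      slashertypes.MinSpan,
		ValidatorIndex: 42,
		SourceEpoch:    100,
	}
	lastEpoch, chunkIdx, valChunkIdx, chunk, err := slasher.GetChunkFromDatabase(
		context.Background(), "/path/to/slasher.db", filters, slasher.DefaultParams(),
	)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(lastEpoch, chunkIdx, valChunkIdx, chunk != nil)
}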


@@ -16,6 +16,21 @@ type Parameters struct {
historyLength primitives.Epoch // H - defines how many epochs we keep of min or max spans.
}
// ChunkSize returns the chunk size.
func (p *Parameters) ChunkSize() uint64 {
return p.chunkSize
}
// ValidatorChunkSize returns the validator chunk size.
func (p *Parameters) ValidatorChunkSize() uint64 {
return p.validatorChunkSize
}
// HistoryLength returns the history length.
func (p *Parameters) HistoryLength() primitives.Epoch {
return p.historyLength
}
// DefaultParams defines default values for slasher's important parameters, defined
// based on optimization analysis for best and worst case scenarios for
// slasher's performance.
@@ -32,7 +47,15 @@ func DefaultParams() *Parameters {
}
}
// Validator min and max spans are split into chunks of length C = chunkSize.
func NewParams(chunkSize, validatorChunkSize uint64, historyLength primitives.Epoch) *Parameters {
return &Parameters{
chunkSize: chunkSize,
validatorChunkSize: validatorChunkSize,
historyLength: historyLength,
}
}
// ChunkIndex Validator min and max spans are split into chunks of length C = chunkSize.
// That is, if we are keeping N epochs worth of attesting history, finding what
// chunk a certain epoch, e, falls into can be computed as (e % N) / C. For example,
// if we are keeping 6 epochs worth of data, and we have chunks of size 2, then epoch
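A standalone worked example of that formula, (e % N) / C, with N = 6 epochs of history and chunk size C = 2:

package main

import "fmt"

func chunkIndex(epoch, historyLength, chunkSize uint64) uint64 {
	return (epoch % historyLength) / chunkSize
}

func main() {
	// Epochs 0-1 land in chunk 0, 2-3 in chunk 1, 4-5 in chunk 2,
	// and epoch 6 wraps around to chunk 0 again.
	for e := uint64(0); e <= 6; e++ {
		fmt.Printf("epoch %d -> chunk %d\n", e, chunkIndex(e, 6, 2))
	}
}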
@@ -139,9 +162,9 @@ func (p *Parameters) flatSliceID(validatorChunkIndex, chunkIndex uint64) []byte
return ssz.MarshalUint64(make([]byte, 0), uint64(width.Mul(validatorChunkIndex).Add(chunkIndex)))
}
// Given a validator chunk index, we determine all of the validator
// ValidatorIndexesInChunk Given a validator chunk index, we determine all the validator
// indices that will belong in that chunk.
func (p *Parameters) validatorIndexesInChunk(validatorChunkIndex uint64) []primitives.ValidatorIndex {
func (p *Parameters) ValidatorIndexesInChunk(validatorChunkIndex uint64) []primitives.ValidatorIndex {
validatorIndices := make([]primitives.ValidatorIndex, 0)
low := validatorChunkIndex * p.validatorChunkSize
high := (validatorChunkIndex + 1) * p.validatorChunkSize


@@ -468,7 +468,7 @@ func TestParams_validatorIndicesInChunk(t *testing.T) {
c := &Parameters{
validatorChunkSize: tt.fields.validatorChunkSize,
}
if got := c.validatorIndexesInChunk(tt.validatorChunkIdx); !reflect.DeepEqual(got, tt.want) {
if got := c.ValidatorIndexesInChunk(tt.validatorChunkIdx); !reflect.DeepEqual(got, tt.want) {
t.Errorf("validatorIndicesInChunk() = %v, want %v", got, tt.want)
}
})


@@ -161,7 +161,7 @@ func (s *Service) Stop() error {
ctx, innerCancel := context.WithTimeout(context.Background(), shutdownTimeout)
defer innerCancel()
log.Info("Flushing last epoch written for each validator to disk, please wait")
if err := s.serviceCfg.Database.SaveLastEpochsWrittenForValidators(
if err := s.serviceCfg.Database.SaveLastEpochWrittenForValidators(
ctx, s.latestEpochUpdatedForValidator,
); err != nil {
log.Error(err)


@@ -4,7 +4,10 @@ go_library(
name = "go_default_library",
srcs = ["types.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types",
visibility = ["//beacon-chain:__subpackages__"],
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl:__subpackages__",
],
deps = [
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",


@@ -14,6 +14,18 @@ const (
MaxSpan
)
// String returns the string representation of the chunk kind.
func (c ChunkKind) String() string {
switch c {
case MinSpan:
return "minspan"
case MaxSpan:
return "maxspan"
default:
return "unknown"
}
}
// IndexedAttestationWrapper contains an indexed attestation with its
// data root to reduce duplicated computation.
type IndexedAttestationWrapper struct {


@@ -45,7 +45,7 @@ func (w *p2pWorker) run(ctx context.Context) {
func (w *p2pWorker) handleBlocks(ctx context.Context, b batch) batch {
cs := w.c.CurrentSlot()
blobRetentionStart, err := sync.BlobsByRangeMinStartSlot(cs)
blobRetentionStart, err := sync.BlobRPCMinValidSlot(cs)
if err != nil {
return b.withRetryableError(errors.Wrap(err, "configuration issue, could not compute minimum blob retention slot"))
}


@@ -327,40 +327,3 @@ func TestTestcaseSetup_BlocksAndBlobs(t *testing.T) {
require.Equal(t, true, found != nil)
}
}
func TestRoundTripDenebSave(t *testing.T) {
ctx := context.Background()
cfg := params.BeaconConfig()
repositionFutureEpochs(cfg)
undo, err := params.SetActiveWithUndo(cfg)
require.NoError(t, err)
defer func() {
require.NoError(t, undo())
}()
parentRoot := [32]byte{}
c := blobsTestCase{}
chain, clock := defaultMockChain(t)
c.chain = chain
c.clock = clock
oldest, err := slots.EpochStart(blobMinReqEpoch(c.chain.FinalizedCheckPoint.Epoch, slots.ToEpoch(c.clock.CurrentSlot())))
require.NoError(t, err)
maxBlobs := fieldparams.MaxBlobsPerBlock
block, bsc := generateTestBlockWithSidecars(t, parentRoot, oldest, maxBlobs)
require.Equal(t, len(block.Block.Body.BlobKzgCommitments), len(bsc))
require.Equal(t, maxBlobs, len(bsc))
for i := range bsc {
require.DeepEqual(t, block.Block.Body.BlobKzgCommitments[i], bsc[i].KzgCommitment)
}
d := db.SetupDB(t)
util.SaveBlock(t, ctx, d, block)
root, err := block.Block.HashTreeRoot()
require.NoError(t, err)
dbBlock, err := d.Block(ctx, root)
require.NoError(t, err)
comms, err := dbBlock.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, maxBlobs, len(comms))
for i := range bsc {
require.DeepEqual(t, comms[i], bsc[i].KzgCommitment)
}
}

View File

@@ -12,14 +12,12 @@ go_library(
"log.go",
"round_robin.go",
"service.go",
"verification.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/sync/initial-sync",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//async/abool:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/feed/block:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/transition:go_default_library",
@@ -41,7 +39,6 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//container/leaky-bucket:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime:go_default_library",

View File

@@ -312,7 +312,7 @@ func (f *blocksFetcher) handleRequest(ctx context.Context, start primitives.Slot
response.bwb, response.pid, response.err = f.fetchBlocksFromPeer(ctx, start, count, peers)
if response.err == nil {
bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.pid)
bwb, err := f.fetchBlobsFromPeer(ctx, response.bwb, response.pid, peers)
if err != nil {
response.err = err
}
@@ -336,6 +336,11 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
Count: count,
Step: 1,
}
bestPeers := f.hasSufficientBandwidth(peers, req.Count)
// We append the best peers to the front so that higher capacity
// peers are dialed first.
peers = append(bestPeers, peers...)
peers = dedupPeers(peers)
for i := 0; i < len(peers); i++ {
p := peers[i]
blocks, err := f.requestBlocks(ctx, req, p)
@@ -472,13 +477,13 @@ func missingCommitError(root [32]byte, slot primitives.Slot, missing [][]byte) e
}
// fetchBlobsFromPeer fetches blobs for the given blocks, trying the provided peer first and falling back to other peers with sufficient bandwidth.
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.BlockWithROBlobs, pid peer.ID) ([]blocks2.BlockWithROBlobs, error) {
func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.BlockWithROBlobs, pid peer.ID, peers []peer.ID) ([]blocks2.BlockWithROBlobs, error) {
ctx, span := trace.StartSpan(ctx, "initialsync.fetchBlobsFromPeer")
defer span.End()
if slots.ToEpoch(f.clock.CurrentSlot()) < params.BeaconConfig().DenebForkEpoch {
return bwb, nil
}
blobWindowStart, err := prysmsync.BlobsByRangeMinStartSlot(f.clock.CurrentSlot())
blobWindowStart, err := prysmsync.BlobRPCMinValidSlot(f.clock.CurrentSlot())
if err != nil {
return nil, err
}
@@ -487,13 +492,30 @@ func (f *blocksFetcher) fetchBlobsFromPeer(ctx context.Context, bwb []blocks2.Bl
if req == nil {
return bwb, nil
}
// Request blobs from the same peer that gave us the blob batch.
blobs, err := f.requestBlobs(ctx, req, pid)
if err != nil {
return nil, errors.Wrap(err, "could not request blobs by range")
peers = f.filterPeers(ctx, peers, peersPercentagePerRequest)
// We dial the initial peer first to ensure that we get the desired set of blobs.
wantedPeers := append([]peer.ID{pid}, peers...)
bestPeers := f.hasSufficientBandwidth(wantedPeers, req.Count)
// We append the best peers to the front so that higher capacity
// peers are dialed first. If all of them fail, we fall back to the
// initial peer we wanted to request blobs from.
peers = append(bestPeers, pid)
for i := 0; i < len(peers); i++ {
p := peers[i]
blobs, err := f.requestBlobs(ctx, req, p)
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Could not request blobs by range from peer")
continue
}
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p)
robs, err := verifyAndPopulateBlobs(bwb, blobs, blobWindowStart)
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Invalid BeaconBlobsByRange response")
continue
}
return robs, err
}
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(pid)
return verifyAndPopulateBlobs(bwb, blobs, blobWindowStart)
return nil, errNoPeersAvailable
}
// requestBlocks is a wrapper for handling BeaconBlocksByRangeRequest requests/streams.
@@ -606,6 +628,18 @@ func (f *blocksFetcher) waitForBandwidth(pid peer.ID, count uint64) error {
return nil
}
func (f *blocksFetcher) hasSufficientBandwidth(peers []peer.ID, count uint64) []peer.ID {
filteredPeers := []peer.ID{}
for _, p := range peers {
if uint64(f.rateLimiter.Remaining(p.String())) < count {
continue
}
copiedP := p
filteredPeers = append(filteredPeers, copiedP)
}
return filteredPeers
}
// Determine how long it will take for us to have the required number of blocks allowed by our rate limiter.
// We do this by calculating the duration till the rate limiter can request these blocks without exceeding
// the provided bandwidth limits per peer.
@@ -626,3 +660,18 @@ func timeToWait(wanted, rem, capacity int64, timeTillEmpty time.Duration) time.D
expectedTime := int64(timeTillEmpty) * blocksNeeded / currentNumBlks
return time.Duration(expectedTime)
}
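The arithmetic above scales the refill horizon by the fraction of still-accruable tokens we need. A worked sketch, assuming the elided definitions are blocksNeeded = wanted - rem and currentNumBlks = capacity - rem:

package main

import (
	"fmt"
	"time"
)

// timeToWaitSketch reconstructs the elided arithmetic under the assumptions
// stated above; a different guess would only change the scaling base.
func timeToWaitSketch(wanted, rem, capacity int64, timeTillEmpty time.Duration) time.Duration {
	if rem >= wanted || capacity <= rem {
		return 0 // already enough tokens, or nothing more can accrue
	}
	blocksNeeded := wanted - rem
	currentNumBlks := capacity - rem
	return time.Duration(int64(timeTillEmpty) * blocksNeeded / currentNumBlks)
}

func main() {
	// Needing 64 of 320 accruable tokens with a 10s refill horizon waits 2s.
	fmt.Println(timeToWaitSketch(64, 0, 320, 10*time.Second)) // 2s
}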
// dedupPeers deduplicates the provided peer list, preserving order.
func dedupPeers(peers []peer.ID) []peer.ID {
newPeerList := make([]peer.ID, 0, len(peers))
peerExists := make(map[peer.ID]bool)
for i := range peers {
if peerExists[peers[i]] {
continue
}
newPeerList = append(newPeerList, peers[i])
peerExists[peers[i]] = true
}
return newPeerList
}
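hasSufficientBandwidth and dedupPeers together implement the peer-ordering trick used in fetchBlocksFromPeer and fetchBlobsFromPeer: copy the high-capacity peers to the front, keep the original list as a fallback, and let the first occurrence of each peer win. A self-contained sketch of that ordering, using strings in place of peer.IDs:

package main

import "fmt"

// prioritize fronts the peers that satisfy hasCapacity, appends the original
// list as a fallback, and deduplicates while preserving order.
func prioritize(peers []string, hasCapacity func(string) bool) []string {
	best := make([]string, 0, len(peers))
	for _, p := range peers {
		if hasCapacity(p) {
			best = append(best, p)
		}
	}
	merged := append(best, peers...)
	seen := make(map[string]bool, len(merged))
	out := make([]string, 0, len(merged))
	for _, p := range merged {
		if !seen[p] {
			seen[p] = true
			out = append(out, p)
		}
	}
	return out
}

func main() {
	peers := []string{"a", "b", "c", "d"}
	fast := map[string]bool{"c": true, "d": true}
	// High-capacity peers are tried first; the rest remain as fallbacks.
	fmt.Println(prioritize(peers, func(p string) bool { return fast[p] })) // [c d a b]
}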

View File

@@ -12,6 +12,7 @@ import (
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
dbtest "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
p2pm "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
@@ -1166,3 +1167,22 @@ func TestBatchLimit(t *testing.T) {
assert.Equal(t, params.BeaconConfig().MaxRequestBlocksDeneb, uint64(maxBatchLimit()))
}
func TestBlockFetcher_HasSufficientBandwidth(t *testing.T) {
bf := newBlocksFetcher(context.Background(), &blocksFetcherConfig{})
currCap := bf.rateLimiter.Capacity()
wantedAmt := currCap - 100
bf.rateLimiter.Add(peer.ID("a").String(), wantedAmt)
bf.rateLimiter.Add(peer.ID("c").String(), wantedAmt)
bf.rateLimiter.Add(peer.ID("f").String(), wantedAmt)
bf.rateLimiter.Add(peer.ID("d").String(), wantedAmt)
receivedPeers := bf.hasSufficientBandwidth([]peer.ID{"a", "b", "c", "d", "e", "f"}, 110)
for _, p := range receivedPeers {
switch p {
case "a", "c", "f", "d":
t.Errorf("peer has exceeded capacity: %s", p)
}
}
assert.Equal(t, 2, len(receivedPeers))
}

View File

@@ -280,7 +280,7 @@ func (f *blocksFetcher) findForkWithPeer(ctx context.Context, pid peer.ID, slot
}
// We need to fetch the blobs for the given alt-chain if any exist, so that we can try to verify and import
// the blocks.
bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid)
bwb, err := f.fetchBlobsFromPeer(ctx, altBlocks, pid, []peer.ID{pid})
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findForkWithPeer")
}
@@ -302,7 +302,7 @@ func (f *blocksFetcher) findAncestor(ctx context.Context, pid peer.ID, b interfa
if err != nil {
return nil, errors.Wrap(err, "received invalid blocks in findAncestor")
}
bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid)
bwb, err = f.fetchBlobsFromPeer(ctx, bwb, pid, []peer.ID{pid})
if err != nil {
return nil, errors.Wrap(err, "unable to retrieve blobs for blocks found in findAncestor")
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -167,7 +168,7 @@ func (s *Service) processFetchedDataRegSync(
if len(bwb) == 0 {
return
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
batchFields := logrus.Fields{
"firstSlot": data.bwb[0].Block.Block().Slot(),
@@ -326,7 +327,7 @@ func (s *Service) processBatchedBlocks(ctx context.Context, genesis time.Time,
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb {

View File

@@ -340,7 +340,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
if len(sidecars) != len(req) {
continue
}
bv := newBlobBatchVerifier(s.newBlobVerifier)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
current := s.clock.CurrentSlot()
if err := avs.Persist(current, sidecars...); err != nil {
@@ -362,3 +362,9 @@ func shufflePeers(pids []peer.ID) {
pids[i], pids[j] = pids[j], pids[i]
})
}
func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return ini.NewBlobVerifier(b, reqs)
}
}
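newBlobVerifierFromInitializer adapts a verification.Initializer into the NewBlobVerifier callback consumed by the batch verifier. A hypothetical wiring sketch, mirroring the call sites shown in this file (ini would typically come from an initializer waiter):

// Hypothetical wiring: adapt the initializer, then hand the callback to a
// batch verifier configured with init-sync requirements.
nv := newBlobVerifierFromInitializer(ini)
bv := verification.NewBlobBatchVerifier(nv, verification.InitsyncSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)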

View File

@@ -150,6 +150,10 @@ var (
Help: "Time to verify gossiped blob sidecars",
},
)
pendingAttCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "gossip_pending_attestations_total",
Help: "increased when receiving a new pending attestation",
})
// Sync committee verification performance.
syncMessagesForUnknownBlocks = promauto.NewCounter(

View File

@@ -1,6 +1,7 @@
package sync
import (
"bytes"
"context"
"encoding/hex"
"sync"
@@ -157,7 +158,8 @@ func (s *Service) processAttestations(ctx context.Context, attestations []*ethpb
// This defines how pending attestations are saved in the map. The key is the
// root of the missing block. The value is the list of pending attestations
// that voted for that block root.
// that voted for that block root. The caller of this function is responsible
// for not sending repeated attestations to the pending queue.
func (s *Service) savePendingAtt(att *ethpb.SignedAggregateAttestationAndProof) {
root := bytesutil.ToBytes32(att.Message.Aggregate.Data.BeaconBlockRoot)
@@ -175,20 +177,37 @@ func (s *Service) savePendingAtt(att *ethpb.SignedAggregateAttestationAndProof)
_, ok := s.blkRootToPendingAtts[root]
if !ok {
pendingAttCount.Inc()
s.blkRootToPendingAtts[root] = []*ethpb.SignedAggregateAttestationAndProof{att}
return
}
// Skip if the attestation from the same aggregator already exists in the pending queue.
// Skip if an equivalent attestation already exists in
// the pending queue.
for _, a := range s.blkRootToPendingAtts[root] {
if a.Message.AggregatorIndex == att.Message.AggregatorIndex {
if attsAreEqual(att, a) {
return
}
}
pendingAttCount.Inc()
s.blkRootToPendingAtts[root] = append(s.blkRootToPendingAtts[root], att)
}
func attsAreEqual(a, b *ethpb.SignedAggregateAttestationAndProof) bool {
if a.Signature != nil {
return b.Signature != nil && a.Message.AggregatorIndex == b.Message.AggregatorIndex
}
if b.Signature != nil {
return false
}
if a.Message.Aggregate.Data.Slot != b.Message.Aggregate.Data.Slot {
return false
}
if a.Message.Aggregate.Data.CommitteeIndex != b.Message.Aggregate.Data.CommitteeIndex {
return false
}
return bytes.Equal(a.Message.Aggregate.AggregationBits, b.Message.Aggregate.AggregationBits)
}
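attsAreEqual splits on whether the aggregates are signed: two signed aggregates match on aggregator index alone, two unsigned ones must agree on slot, committee index and aggregation bits, and a signed/unsigned pair never matches. A simplified sketch over hypothetical local types, just to pin down that decision table:

// agg is a hypothetical stand-in for ethpb.SignedAggregateAttestationAndProof.
type agg struct {
	signed          bool
	aggregatorIndex uint64
	slot            uint64
	committeeIndex  uint64
	bits            string // stands in for the aggregation bitfield
}

// equal mirrors the dedup rule used by savePendingAtt.
func equal(a, b agg) bool {
	if a.signed {
		return b.signed && a.aggregatorIndex == b.aggregatorIndex
	}
	if b.signed {
		return false
	}
	return a.slot == b.slot && a.committeeIndex == b.committeeIndex && a.bits == b.bits
}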
// This validates that the pending attestations in the queue are still valid.
// If not valid, a node will remove them from the queue in place. The validity
// check specifies the pending attestation could not fall one epoch behind

View File

@@ -399,7 +399,7 @@ func TestValidatePendingAtts_CanPruneOldAtts(t *testing.T) {
assert.Equal(t, 0, len(s.blkRootToPendingAtts), "Did not delete block keys")
}
func TestValidatePendingAtts_NoDuplicatingAggregatorIndex(t *testing.T) {
func TestValidatePendingAtts_NoDuplicatingAtts(t *testing.T) {
s := &Service{
blkRootToPendingAtts: make(map[[32]byte][]*ethpb.SignedAggregateAttestationAndProof),
}
@@ -420,7 +420,7 @@ func TestValidatePendingAtts_NoDuplicatingAggregatorIndex(t *testing.T) {
Message: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 2,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 3, BeaconBlockRoot: r2[:]}}}})
Data: &ethpb.AttestationData{Slot: 2, BeaconBlockRoot: r2[:]}}}})
assert.Equal(t, 1, len(s.blkRootToPendingAtts[r1]), "Did not save pending atts")
assert.Equal(t, 1, len(s.blkRootToPendingAtts[r2]), "Did not save pending atts")

View File

@@ -646,7 +646,7 @@ func TestService_BatchRootRequest(t *testing.T) {
b4Root, err := b4.Block.HashTreeRoot()
require.NoError(t, err)
// Send in duplicated roots to also test deduplicaton.
// Send in duplicated roots to also test deduplication.
sentRoots := p2ptypes.BeaconBlockByRootsReq{b2Root, b2Root, b3Root, b3Root, b4Root, b5Root}
expectedRoots := p2ptypes.BeaconBlockByRootsReq{b2Root, b3Root, b4Root, b5Root}

View File

@@ -10,6 +10,7 @@ import (
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/protocol"
"github.com/multiformats/go-multiaddr"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
@@ -142,7 +143,30 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// it successfully writes a response. We don't blindly call
// Close here because we may have only written a partial
// response.
// About the special case for quic-v1, please see:
// https://github.com/quic-go/quic-go/issues/3291
defer func() {
isQuic := false
multiaddr.ForEach(stream.Conn().RemoteMultiaddr(), func(c multiaddr.Component) bool {
pCode := c.Protocol().Code
if pCode == multiaddr.P_QUIC || pCode == multiaddr.P_QUIC_V1 {
isQuic = true
return false
}
return true
})
// We special-case QUIC connections: unlike yamux, where a reset after a successful close is a no-op,
// abruptly terminating a QUIC stream can cause the remote peer to drop the inbound data. For that
// reason, we only reset QUIC streams when closing them returns an error.
if isQuic {
if err := stream.Close(); err != nil {
_err := stream.Reset()
_ = _err
}
return
}
_err := stream.Reset()
_ = _err
}()

View File

@@ -151,14 +151,14 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
if len(sidecars) != len(request) {
return fmt.Errorf("received %d blob sidecars, expected %d for RPC", len(sidecars), len(request))
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueSidecarRequirements)
for _, sidecar := range sidecars {
if err := verify.BlobAlignsWithBlock(sidecar, RoBlock); err != nil {
return err
}
log.WithFields(blobFields(sidecar)).Debug("Received blob sidecar RPC")
}
vscs, err := verification.BlobSidecarSliceNoop(sidecars)
vscs, err := bv.VerifiedROBlobs(ctx, RoBlock, sidecars)
if err != nil {
return err
}

View File

@@ -123,10 +123,10 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
return nil
}
// BlobsByRangeMinStartSlot returns the lowest slot that we should expect peers to respect as the
// BlobRPCMinValidSlot returns the lowest slot that we should expect peers to respect as the
// start slot in a BlobSidecarsByRange request. This can be used to validate incoming requests and
// to avoid pestering peers with requests for blobs that are outside the retention window.
func BlobsByRangeMinStartSlot(current primitives.Slot) (primitives.Slot, error) {
func BlobRPCMinValidSlot(current primitives.Slot) (primitives.Slot, error) {
// Avoid overflow if we're running on a config where deneb is set to far future epoch.
if params.BeaconConfig().DenebForkEpoch == math.MaxUint64 {
return primitives.Slot(math.MaxUint64), nil
@@ -176,9 +176,9 @@ func validateBlobsByRange(r *pb.BlobSidecarsByRangeRequest, current primitives.S
// [max(current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH), current_epoch]
// where current_epoch is defined by the current wall-clock time,
// and clients MUST support serving requests of blobs on this range.
minStartSlot, err := BlobsByRangeMinStartSlot(current)
minStartSlot, err := BlobRPCMinValidSlot(current)
if err != nil {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "BlobsByRangeMinStartSlot error")
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "BlobRPCMinValidSlot error")
}
if rp.start > maxStart {
return rangeParams{}, errors.Wrap(p2ptypes.ErrInvalidRequest, "start > maxStart")
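Consistent with the TestBlobRPCMinValidSlot cases added further down, the minimum valid slot is the epoch-start of max(DENEB_FORK_EPOCH, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS). A hedged sketch of that window computation (epoch arithmetic only; the real function converts the result to a slot with slots.EpochStart):

// retentionStartEpoch: blobs are served from the Deneb fork epoch until the
// request window has fully moved past it; afterwards the window trails the
// current epoch by minEpochs.
func retentionStartEpoch(currentEpoch, denebEpoch, minEpochs uint64) uint64 {
	if currentEpoch < denebEpoch+minEpochs {
		return denebEpoch
	}
	return currentEpoch - minEpochs
}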

View File

@@ -178,7 +178,7 @@ func TestBlobsByRangeValidation(t *testing.T) {
and clients MUST support serving requests of blobs on this range.
*/
defaultCurrent := denebSlot + 100 + minReqSlots
defaultMinStart, err := BlobsByRangeMinStartSlot(defaultCurrent)
defaultMinStart, err := BlobRPCMinValidSlot(defaultCurrent)
require.NoError(t, err)
cases := []struct {
name string
@@ -285,3 +285,67 @@ func TestBlobsByRangeValidation(t *testing.T) {
})
}
}
func TestBlobRPCMinValidSlot(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
current func(t *testing.T) types.Slot
expected types.Slot
err error
}{
{
name: "before deneb",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch - 1)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "equal to deneb",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "after deneb, before expiry starts",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch + params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot,
},
{
name: "expiry starts one epoch after deneb + MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS",
current: func(t *testing.T) types.Slot {
st, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch + params.BeaconConfig().MinEpochsForBlobsSidecarsRequest + 1)
// note: we no longer need to deal with deneb fork epoch being far future
require.NoError(t, err)
return st
},
expected: denebSlot + params.BeaconConfig().SlotsPerEpoch,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
current := c.current(t)
got, err := BlobRPCMinValidSlot(current)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.expected, got)
})
}
}

View File

@@ -13,30 +13,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
func blobMinReqEpoch(finalized, current primitives.Epoch) primitives.Epoch {
// max(finalized_epoch, current_epoch - MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS, DENEB_FORK_EPOCH)
denebFork := params.BeaconConfig().DenebForkEpoch
var reqWindow primitives.Epoch
if current > params.BeaconConfig().MinEpochsForBlobsSidecarsRequest {
reqWindow = current - params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
if finalized >= reqWindow && finalized > denebFork {
return finalized
}
if reqWindow >= finalized && reqWindow > denebFork {
return reqWindow
}
return denebFork
}
// blobSidecarByRootRPCHandler handles the /eth2/beacon_chain/req/blob_sidecars_by_root/1/ RPC request.
// spec: https://github.com/ethereum/consensus-specs/blob/a7e45db9ac2b60a33e144444969ad3ac0aae3d4c/specs/deneb/p2p-interface.md#blobsidecarsbyroot-v1
func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface{}, stream libp2pcore.Stream) error {
@@ -65,7 +47,13 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
if len(blobIdents) > batchSize {
ticker = time.NewTicker(time.Second)
}
minReqEpoch := blobMinReqEpoch(s.cfg.chain.FinalizedCheckpt().Epoch, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
// Compute the oldest slot we'll allow a peer to request, based on the current slot.
cs := s.cfg.clock.CurrentSlot()
minReqSlot, err := BlobRPCMinValidSlot(cs)
if err != nil {
return errors.Wrapf(err, "unexpected error computing min valid blob request slot, current_slot=%d", cs)
}
for i := range blobIdents {
if err := ctx.Err(); err != nil {
@@ -95,12 +83,15 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
// If any root in the request content references a block earlier than minimum_request_epoch,
// peers MAY respond with error code 3: ResourceUnavailable or not include the blob in the response.
if slots.ToEpoch(sc.Slot()) < minReqEpoch {
// note: we are deviating from the spec to allow requests for blobs that are before minimum_request_epoch,
// up to the beginning of the retention period.
if sc.Slot() < minReqSlot {
s.writeErrorResponseToStream(responseCodeResourceUnavailable, types.ErrBlobLTMinRequest.Error(), stream)
log.WithError(types.ErrBlobLTMinRequest).
Debugf("requested blob for block %#x before minimum_request_epoch", blobIdents[i].BlockRoot)
return types.ErrBlobLTMinRequest
}
SetStreamWriteDeadline(stream, defaultWriteDuration)
if chunkErr := WriteBlobSidecarChunk(stream, s.cfg.chain, s.cfg.p2p.Encoding(), sc); chunkErr != nil {
log.WithError(chunkErr).Debug("Could not send a chunked response")

View File

@@ -19,7 +19,7 @@ import (
)
func (c *blobsTestCase) defaultOldestSlotByRoot(t *testing.T) types.Slot {
oldest, err := slots.EpochStart(blobMinReqEpoch(c.chain.FinalizedCheckPoint.Epoch, slots.ToEpoch(c.clock.CurrentSlot())))
oldest, err := BlobRPCMinValidSlot(c.clock.CurrentSlot())
require.NoError(t, err)
return oldest
}
@@ -259,71 +259,3 @@ func TestBlobsByRootOK(t *testing.T) {
})
}
}
func TestBlobsByRootMinReqEpoch(t *testing.T) {
winMin := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
cases := []struct {
name string
finalized types.Epoch
current types.Epoch
deneb types.Epoch
expected types.Epoch
}{
{
name: "testnet genesis",
deneb: 100,
current: 0,
finalized: 0,
expected: 100,
},
{
name: "underflow averted",
deneb: 100,
current: winMin - 1,
finalized: 0,
expected: 100,
},
{
name: "underflow averted - finalized is higher",
deneb: 100,
current: winMin - 1,
finalized: winMin - 2,
expected: winMin - 2,
},
{
name: "underflow averted - genesis at deneb",
deneb: 0,
current: winMin - 1,
finalized: 0,
expected: 0,
},
{
name: "max is finalized",
deneb: 100,
current: 99 + winMin,
finalized: 101,
expected: 101,
},
{
name: "reqWindow > finalized, reqWindow < deneb",
deneb: 100,
current: 99 + winMin,
finalized: 98,
expected: 100,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cfg := params.BeaconConfig()
repositionFutureEpochs(cfg)
cfg.DenebForkEpoch = c.deneb
undo, err := params.SetActiveWithUndo(cfg)
require.NoError(t, err)
defer func() {
require.NoError(t, undo())
}()
ep := blobMinReqEpoch(c.finalized, c.current)
require.Equal(t, c.expected, ep)
})
}
}

View File

@@ -42,9 +42,10 @@ func (s *Service) goodbyeRPCHandler(_ context.Context, msg interface{}, stream l
return fmt.Errorf("wrong message type for goodbye, got %T, wanted *uint64", msg)
}
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
log.WithError(err).Debug("Goodbye message from rate-limited peer.")
} else {
s.rateLimiter.add(stream, 1)
}
s.rateLimiter.add(stream, 1)
log := log.WithField("Reason", goodbyeMessage(*m))
log.WithField("peer", stream.Conn().RemotePeer()).Trace("Peer has sent a goodbye message")
s.cfg.p2p.Peers().SetNextValidTime(stream.Conn().RemotePeer(), goodByeBackoff(*m))

View File

@@ -219,7 +219,7 @@ func TestSyncService_StopCleanly(t *testing.T) {
require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))
// Both pubsub and rpc topcis should be unsubscribed.
// Both pubsub and rpc topics should be unsubscribed.
require.NoError(t, r.Stop())
// Sleep to allow pubsub topics to be deregistered.

View File

@@ -3,6 +3,7 @@ load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"batch.go",
"blob.go",
"cache.go",
"error.go",
@@ -45,6 +46,7 @@ go_library(
go_test(
name = "go_default_test",
srcs = [
"batch_test.go",
"blob_test.go",
"cache_test.go",
"initializer_test.go",
@@ -69,5 +71,6 @@ go_test(
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
],
)

View File

@@ -1,12 +1,10 @@
package initialsync
package verification
import (
"context"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
)
@@ -20,21 +18,17 @@ var (
ErrBatchBlockRootMismatch = errors.New("Sidecar block header root does not match signed block")
)
func newBlobVerifierFromInitializer(ini *verification.Initializer) verification.NewBlobVerifier {
return func(b blocks.ROBlob, reqs []verification.Requirement) verification.BlobVerifier {
return ini.NewBlobVerifier(b, reqs)
}
}
func newBlobBatchVerifier(newVerifier verification.NewBlobVerifier) *BlobBatchVerifier {
// NewBlobBatchVerifier initializes a blob batch verifier. It requires the caller to correctly specify
// verification Requirements and to also pass in a NewBlobVerifier, which is a callback function that
// returns a new BlobVerifier for handling a single blob in the batch.
func NewBlobBatchVerifier(newVerifier NewBlobVerifier, reqs []Requirement) *BlobBatchVerifier {
return &BlobBatchVerifier{
verifyKzg: kzg.Verify,
newVerifier: newVerifier,
reqs: reqs,
}
}
type kzgVerifier func(b ...blocks.ROBlob) error
// BlobBatchVerifier solves problems that come from verifying batches of blobs from RPC.
// First: we only update forkchoice after the entire batch has completed, so the n+1 elements in the batch
// won't be in forkchoice yet.
@@ -42,18 +36,17 @@ type kzgVerifier func(b ...blocks.ROBlob) error
// method to BlobVerifier to verify the kzg commitments of all blob sidecars for a block together, then using the cached
// result of the batch verification when verifying the individual blobs.
type BlobBatchVerifier struct {
verifyKzg kzgVerifier
newVerifier verification.NewBlobVerifier
verifyKzg roblobCommitmentVerifier
newVerifier NewBlobVerifier
reqs []Requirement
}
var _ das.BlobBatchVerifier = &BlobBatchVerifier{}
// VerifiedROBlobs satisfies the das.BlobBatchVerifier interface, used by das.AvailabilityStore.
func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.ROBlock, scs []blocks.ROBlob) ([]blocks.VerifiedROBlob, error) {
if len(scs) == 0 {
return nil, nil
}
// We assume the proposer was validated wrt the block in batch block processing before performing the DA check.
// We assume the proposer is validated wrt the block in batch block processing before performing the DA check.
// So at this stage we just need to make sure the value being signed and signature bytes match the block.
for i := range scs {
if blk.Signature() != bytesutil.ToBytes96(scs[i].SignedBlockHeader.Signature) {
@@ -71,7 +64,7 @@ func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.
}
vs := make([]blocks.VerifiedROBlob, len(scs))
for i := range scs {
vb, err := batch.verifyOneBlob(ctx, scs[i])
vb, err := batch.verifyOneBlob(scs[i])
if err != nil {
return nil, err
}
@@ -80,13 +73,13 @@ func (batch *BlobBatchVerifier) VerifiedROBlobs(ctx context.Context, blk blocks.
return vs, nil
}
func (batch *BlobBatchVerifier) verifyOneBlob(ctx context.Context, sc blocks.ROBlob) (blocks.VerifiedROBlob, error) {
func (batch *BlobBatchVerifier) verifyOneBlob(sc blocks.ROBlob) (blocks.VerifiedROBlob, error) {
vb := blocks.VerifiedROBlob{}
bv := batch.newVerifier(sc, verification.InitsyncSidecarRequirements)
bv := batch.newVerifier(sc, batch.reqs)
// We can satisfy the following 2 requirements immediately because VerifiedROBlobs always verifies commitments
// and block signature for all blobs in the batch before calling verifyOneBlob.
bv.SatisfyRequirement(verification.RequireSidecarKzgProofVerified)
bv.SatisfyRequirement(verification.RequireValidProposerSignature)
bv.SatisfyRequirement(RequireSidecarKzgProofVerified)
bv.SatisfyRequirement(RequireValidProposerSignature)
if err := bv.BlobIndexInBounds(); err != nil {
return vb, err

View File

@@ -0,0 +1,189 @@
package verification
import (
"context"
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/stretchr/testify/require"
)
func TestBatchVerifier(t *testing.T) {
ctx := context.Background()
mockCV := func(err error) roblobCommitmentVerifier {
return func(...blocks.ROBlob) error {
return err
}
}
var invCmtErr = errors.New("mock invalid commitment")
type vbcbt func() (blocks.VerifiedROBlob, error)
vbcb := func(bl blocks.ROBlob, err error) vbcbt {
return func() (blocks.VerifiedROBlob, error) {
return blocks.VerifiedROBlob{ROBlob: bl}, err
}
}
cases := []struct {
name string
nv func() NewBlobVerifier
cv roblobCommitmentVerifier
bandb func(t *testing.T, n int) (blocks.ROBlock, []blocks.ROBlob)
err error
nblobs int
reqs []Requirement
}{
{
name: "no blobs",
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
return util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
},
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: vbcb(bl, nil)}
}
},
nblobs: 0,
},
{
name: "happy path",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: vbcb(bl, nil)}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
return util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
},
nblobs: 3,
},
{
name: "partial batch",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: vbcb(bl, nil)}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
// Add extra blobs to the block that we won't return
blk, blbs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb+3)
return blk, blbs[0:3]
},
nblobs: 3,
},
{
name: "invalid commitment",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: func() (blocks.VerifiedROBlob, error) {
t.Fatal("Batch verifier should stop before this point")
return blocks.VerifiedROBlob{}, nil
}}
}
},
cv: mockCV(invCmtErr),
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
return util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
},
err: invCmtErr,
nblobs: 1,
},
{
name: "signature mismatch",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: func() (blocks.VerifiedROBlob, error) {
t.Fatal("Batch verifier should stop before this point")
return blocks.VerifiedROBlob{}, nil
}}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
blk, blbs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
blbs[0].SignedBlockHeader.Signature = []byte("wrong")
return blk, blbs
},
err: ErrBatchSignatureMismatch,
nblobs: 2,
},
{
name: "root mismatch",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{cbVerifiedROBlob: func() (blocks.VerifiedROBlob, error) {
t.Fatal("Batch verifier should stop before this point")
return blocks.VerifiedROBlob{}, nil
}}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
blk, blbs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
wr, err := blocks.NewROBlobWithRoot(blbs[0].BlobSidecar, bytesutil.ToBytes32([]byte("wrong")))
require.NoError(t, err)
blbs[0] = wr
return blk, blbs
},
err: ErrBatchBlockRootMismatch,
nblobs: 1,
},
{
name: "idx oob",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{
ErrBlobIndexInBounds: ErrBlobIndexInvalid,
cbVerifiedROBlob: func() (blocks.VerifiedROBlob, error) {
t.Fatal("Batch verifier should stop before this point")
return blocks.VerifiedROBlob{}, nil
}}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
return util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
},
nblobs: 1,
err: ErrBlobIndexInvalid,
},
{
name: "inclusion proof invalid",
nv: func() NewBlobVerifier {
return func(bl blocks.ROBlob, reqs []Requirement) BlobVerifier {
return &MockBlobVerifier{
ErrSidecarInclusionProven: ErrSidecarInclusionProofInvalid,
cbVerifiedROBlob: func() (blocks.VerifiedROBlob, error) {
t.Fatal("Batch verifier should stop before this point")
return blocks.VerifiedROBlob{}, nil
}}
}
},
bandb: func(t *testing.T, nb int) (blocks.ROBlock, []blocks.ROBlob) {
return util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, nb)
},
nblobs: 1,
err: ErrSidecarInclusionProofInvalid,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
blk, blbs := c.bandb(t, c.nblobs)
reqs := c.reqs
if reqs == nil {
reqs = InitsyncSidecarRequirements
}
bbv := NewBlobBatchVerifier(c.nv(), reqs)
if c.cv == nil {
bbv.verifyKzg = mockCV(nil)
} else {
bbv.verifyKzg = c.cv
}
vb, err := bbv.VerifiedROBlobs(ctx, blk, blbs)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.nblobs, len(vb))
})
}
}

View File

@@ -70,6 +70,9 @@ var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).exc
// BackfillSidecarRequirements is the same as InitsyncSidecarRequirements.
var BackfillSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// PendingQueueSidecarRequirements is the same as InitsyncSidecarRequirements, used by the pending blocks queue.
var PendingQueueSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
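PendingQueueSidecarRequirements follows the package's composition pattern: start from an existing requirement slice and call excluding to drop checks that the new context satisfies some other way. A hypothetical derivation in the same style (the excluded requirement here is illustrative, not a real Prysm list):

// Hypothetical: same checks as gossip, minus the parent-seen requirement
// that a batch context would satisfy out of band.
var customSidecarRequirements = requirementList(GossipSidecarRequirements).excluding(
	RequireSidecarParentSeen,
)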
var (
ErrBlobInvalid = errors.New("blob failed verification")
// ErrBlobIndexInvalid means RequireBlobIndexInBounds failed.
@@ -123,7 +126,7 @@ func (bv *ROBlobVerifier) VerifiedROBlob() (blocks.VerifiedROBlob, error) {
// For example, when batch syncing, forkchoice is only updated at the end of the batch. So the checks that use
// forkchoice, like descends from finalized or parent seen, would necessarily fail. Allowing the caller to
// assert the requirement has been satisfied ensures we have an easy way to audit which piece of code is satisfying
// a requireent outside of this package.
// a requirement outside of this package.
func (bv *ROBlobVerifier) SatisfyRequirement(req Requirement) {
bv.recordResult(req, nil)
}
@@ -160,6 +163,7 @@ func (bv *ROBlobVerifier) NotFromFutureSlot() (err error) {
earliestStart := bv.clock.SlotStart(bv.blob.Slot()).Add(-1 * params.BeaconConfig().MaximumGossipClockDisparityDuration())
// If the system time is still before earliestStart, we consider the blob from a future slot and return an error.
if bv.clock.Now().Before(earliestStart) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is too far in the future")
return ErrFromFutureSlot
}
return nil
@@ -176,6 +180,7 @@ func (bv *ROBlobVerifier) SlotAboveFinalized() (err error) {
return errors.Wrapf(ErrSlotNotAfterFinalized, "error computing epoch start slot for finalized checkpoint (%d) %s", fcp.Epoch, err.Error())
}
if bv.blob.Slot() <= fSlot {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is not after finalized checkpoint")
return ErrSlotNotAfterFinalized
}
return nil
@@ -190,15 +195,15 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
// First check if there is a cached verification that can be reused.
seen, err := bv.sc.SignatureVerified(sd)
if seen {
blobVerifiationProposerSignatureCache.WithLabelValues("hit-valid").Inc()
blobVerificationProposerSignatureCache.WithLabelValues("hit-valid").Inc()
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("reusing failed proposer signature validation from cache")
blobVerifiationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
blobVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return ErrInvalidProposerSignature
}
return nil
}
blobVerifiationProposerSignatureCache.WithLabelValues("miss").Inc()
blobVerificationProposerSignatureCache.WithLabelValues("miss").Inc()
// Retrieve the parent state to fallback to full verification.
parent, err := bv.parentState(ctx)
@@ -225,6 +230,7 @@ func (bv *ROBlobVerifier) SidecarParentSeen(parentSeen func([32]byte) bool) (err
if bv.fc.HasNode(bv.blob.ParentRoot()) {
return nil
}
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root has not been seen")
return ErrSidecarParentNotSeen
}
@@ -233,6 +239,7 @@ func (bv *ROBlobVerifier) SidecarParentSeen(parentSeen func([32]byte) bool) (err
func (bv *ROBlobVerifier) SidecarParentValid(badParent func([32]byte) bool) (err error) {
defer bv.recordResult(RequireSidecarParentValid, &err)
if badParent != nil && badParent(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root is invalid")
return ErrSidecarParentInvalid
}
return nil
@@ -258,6 +265,7 @@ func (bv *ROBlobVerifier) SidecarParentSlotLower() (err error) {
func (bv *ROBlobVerifier) SidecarDescendsFromFinalized() (err error) {
defer bv.recordResult(RequireSidecarDescendsFromFinalized, &err)
if !bv.fc.HasNode(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root not in forkchoice")
return ErrSidecarNotFinalizedDescendent
}
return nil

View File

@@ -6,7 +6,7 @@ import (
)
var (
blobVerifiationProposerSignatureCache = promauto.NewCounterVec(
blobVerificationProposerSignatureCache = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "blob_verification_proposer_signature_cache",
Help: "BlobSidecar proposer signature cache result.",

Some files were not shown because too many files have changed in this diff.