Compare commits

...

69 Commits

Author SHA1 Message Date
Kasey Kirkham
7d11b27c85 mock fixes 2024-04-24 23:58:14 -05:00
Kasey Kirkham
ac94da8706 pruning tests 2024-04-24 23:26:04 -05:00
Kasey Kirkham
6df85c72a6 naming 2024-04-24 19:28:22 -05:00
Kasey Kirkham
442a28c2f9 handle errors identifying blobs 2024-04-24 14:53:24 -05:00
Kasey Kirkham
43cb6919ea assume blocking cache warm and other cleanups 2024-04-24 13:39:17 -05:00
Kasey Kirkham
9ed237faec improve naming, implement pruning for new layout 2024-04-24 09:25:58 -05:00
Kasey Kirkham
7635a4654c tests and bugfixes 2024-04-24 08:45:41 -05:00
Kasey Kirkham
9062c9c05e updating migration tests 2024-04-23 11:40:31 -05:00
Kasey Kirkham
d132d74b6b wip 2024-04-23 07:56:47 -05:00
Kasey Kirkham
1c4dd7e21c aggressive refactoring in progress 2024-04-19 17:49:25 -05:00
Kasey Kirkham
546d8a7f00 wip 2024-04-18 17:58:37 -05:00
Kasey Kirkham
a4d54488c7 start populating slot in blob namer 2024-04-17 20:45:06 -05:00
Kasey Kirkham
dd4cb07455 new blob storage scheme avoiding large base dir 2024-04-17 14:46:12 -05:00
Kasey Kirkham
4179582a72 use [32]byte keys in the filesystem cache 2024-04-17 14:32:51 -05:00
kasey
219301339c Don't return error that can be internally handled (#13887)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-17 18:28:01 +00:00
Radosław Kapka
aec349f75a Upgrade the Beacon API e2e evaluator (#13868)
* GET

* POST

* Revert "Auxiliary commit to revert individual files from 615feb104004d6a945ededf5862ae38325fc7ec2"

This reverts commit 55cf071c684019f3d6124179154c10b2277fda49.

* comment fix

* deepsource
2024-04-15 05:56:47 +00:00
terence
5f909caedf Remove unused IsViableForCheckpoint (#13879) 2024-04-14 16:38:25 +00:00
Radosław Kapka
ba6dff3adb Return syncing status when node is optimistic (#13875) 2024-04-12 10:24:30 +00:00
Radosław Kapka
8cd05f098b Use read only validators in Beacon API (#13873) 2024-04-12 07:19:40 +00:00
Radosław Kapka
425f5387fa Handle overflow in retention period calculation (#13874) 2024-04-12 07:17:12 +00:00
Nishant Das
f2ce115ade Revert Peer Log Changes (#13872) 2024-04-12 06:49:01 +00:00
kasey
090a3e1ded Fix bug from PR 13827 (#13871)
* fix AS cache bug, tighten ro constructors

* additional coverage on AS cache filter

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-11 23:07:44 +00:00
kasey
c0acb7d352 Backfill throttling (#13855)
* add a sleep between retries as a simple throttle

* unit test

* deepsource

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-11 15:22:29 +00:00
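For reference, the change described above amounts to sleeping between retry attempts. The following is an illustrative sketch only, not the code from this PR; the names (fetchWithThrottle, attempts, throttleInterval, fetch) are hypothetical:

package example

import (
	"context"
	"errors"
	"time"
)

// fetchWithThrottle retries fetch up to attempts times, sleeping between
// tries as a simple throttle, and aborting early if ctx is cancelled.
func fetchWithThrottle(ctx context.Context, attempts int, throttleInterval time.Duration, fetch func(context.Context) error) error {
	err := errors.New("no attempts made")
	for i := 0; i < attempts; i++ {
		if err = fetch(ctx); err == nil {
			return nil
		}
		select {
		case <-time.After(throttleInterval): // simple throttle between retries
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err
}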
Radosław Kapka
0d6070e6fc Use retention period when fetching blobs (#13869) 2024-04-11 14:06:00 +00:00
Manu NALEPA
bd00f851f0 e2e: Expected log Running node with peerId= -> Running node with. (#13861)
Rationale:
The `FindFollowingTextInFile` helper seems to have trouble with `logrus` fields.
2024-04-09 07:51:56 +00:00
Nishant Das
1a0c07deec Extend Broadcast Window For Attestations (#13858)
* fix it

* make check better
2024-04-08 04:49:20 +00:00
kasey
04f231a400 Initsync skip local blobs (#13827)
* wip - init-sync skip available blob req

* satisfy deep source

* gaz

* don't need to sort blobs; simplify blobRequest stack

* wip debug log to watch blob skip behavior

* unit tests for new blob req generator

* refactor to reduce blob req func count

* log when WaitForSummarizer fails

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-05 19:09:43 +00:00
Manu NALEPA
be1bfcce63 P2P: Add QUIC support (#13786)
* (Unrelated) DoppelGanger: Improve message.

* `beacon-blocks-by-range`: Add `--network` option.

* `ensurePeerConnections`: Remove capital letter in error message.

* `MultiAddressBuilder{WithID}`: Refactor.

* `buildOptions`: Improve log.

* `NewService`: Bubbles up errors.

* `tcp` ==> `libp2ptcp`

* `multiAddressBuilderWithID`: Add the ability to build QUIC multiaddr

* `p2p Start`: Fix error message.

* `p2p`: Add QUIC support.

* Status: Implement `{Inbound,Outbound}Connected{TCP,QUIC}`.

* Logging: Display the number of TCP/QUIC connected peers.

* P2P: Implement `{Inbound,Outbound}ConnectedWithProtocol`.

* Hide QUIC protocol behind the `--enable-quic` feature flag.

* `e2e`: Add `--enable-quic` flag.

* Add `--enable-quic` in `devModeFlag`.

* `convertToMultiAddrs` ==> `retrieveMultiAddrsFromNode`.

* `convertToAddrInfo`: Ensure `len(infos) == 1`.
2024-04-04 12:21:35 +00:00
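For context, QUIC runs over UDP, so a QUIC listen address differs from a TCP one only in its transport segment. A minimal sketch (not prysm code) using the go-multiaddr package that the libp2p stack is built on; the port is illustrative:

package main

import (
	"fmt"

	ma "github.com/multiformats/go-multiaddr"
)

func main() {
	tcp, err := ma.NewMultiaddr("/ip4/0.0.0.0/tcp/13000")
	if err != nil {
		panic(err)
	}
	quic, err := ma.NewMultiaddr("/ip4/0.0.0.0/udp/13000/quic-v1")
	if err != nil {
		panic(err)
	}
	fmt.Println(tcp, quic) // a node can advertise both transports side by side
}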
Manu NALEPA
8cf5d79852 Remove the Goerli/Prater support. (#13846) 2024-04-03 19:19:17 +00:00
redistay
f7912e7c20 chore: fix some comments (#13843)
Signed-off-by: redistay <wujunjing@outlook.com>
2024-04-02 22:19:15 +00:00
terence
caa8be5dd1 Beacon-api: broadcast blobs in the event of seen block (#13830)
* Beacon-api: broadcast blobs in the event of seen block

* Fix parameters

* Fix test

* Check forkchoice

* Ran go format

* Revert "Ran go format"

This reverts commit 091e77e81d6e2b9861fecc27c0bad1898033f9a3.

* James feedback

* Radek's feedback

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Fix bad tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-04-02 18:12:58 +00:00
cui
0c15a30a34 using slices.Index (#13836) 2024-04-02 16:30:05 +00:00
cui
7bce1c0714 using slices.IndexFunc (#13839) 2024-04-02 16:06:27 +00:00
Radosław Kapka
d1084cbe48 Send correct state root with finalization event (#13842) 2024-04-02 15:32:32 +00:00
cui
2cc3f69a3f using slices.Contains (#13835) 2024-04-01 21:26:52 +00:00
cui
a861489a83 using slices.ContainsFunc (#13838) 2024-04-01 21:15:38 +00:00
cui
0e1c585f7d using slices.Contains (#13837) 2024-04-01 21:10:46 +00:00
cui
9df20e616c using slices.IndexFunc (#13834) 2024-04-01 20:04:40 +00:00
kasey
53fdd2d062 allow other pkgs to check for blobs in pruning cache (#13788)
* allow other pkgs to check for blobs in pruning cache

* address deepsource complaints

* custom error to simplify test setup

* add AllAvailable method

* make storage summary slot field private

* unit test and off-by-one fix

* remove comment with copy of tested function

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-04-01 14:19:51 +00:00
Sammy Rosso
2b4bb5d890 Fixed spelling mistakes in comments (#13833) 2024-04-01 11:12:20 +00:00
Nishant Das
38f208d70d Reject Empty Bundles (#13798)
* reject it

* test

* add test case
2024-04-01 04:37:36 +00:00
Nishant Das
65b90abdda Maximize Peer Capacity When Syncing (#13820)
* maximize it

* fix it

* lint

* add test

* Update beacon-chain/sync/initial-sync/blocks_fetcher.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* logs

* kasey's review

* kasey's review

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-03-30 14:54:11 +00:00
kasey
f3b49d4eaf Repair idx 13486 (#13831)
* Revert "Modify the algorithm of `updateFinalizedBlockRoots` (#13486)"

This reverts commit 32fb183392.

* migration to fix index corruption from pr 13486

* bail as soon as we see 10 epochs without the bug

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-29 17:29:39 +00:00
Lorenzo
5b1da7353c feat(direct peers): configure static peers to be direct peers in pubsub options (#13773) 2024-03-29 04:00:40 +00:00
terence
9f17e65860 Fill in missing debug logs for blob p2p IGNORE/REJECT (#13825) 2024-03-28 16:11:55 +00:00
Manu NALEPA
9b2d53b0d1 Bump libp2p to v0.33.1 (#13784)
* Run bare `//:gazelle -- update-repos`.

--> Removes some blank lines.

* Update libp2p to `v0.33.1`.
2024-03-28 08:38:46 +00:00
terence
d6f9196707 Change goodbye message from rate limited peer to debug verbosity (#13819) 2024-03-28 04:14:37 +00:00
Potuz
1b0e09369e Add metrics to track pending attestations (#13815) 2024-03-27 18:20:53 +00:00
Potuz
12482eeb40 Remove check for duplicates in pending attestation queue (#13814)
* Remove check for duplicates in pending attestation queue

The current queue will only save 1 unaggregated attestation for a pending block because we wrap the object into a SignedAggregatedAttestationAndProof with a zeroed aggregator.

* fix tests
2024-03-27 16:39:45 +00:00
Joel Rousseau
acc307b959 Command-line interface for visualizing min/max span bucket (#13748)
* add max/min span visualisation tool cli

* go mod tidy

* lint imports

* remove typo

* fix epoch table value

* fix deepsource

* add dep to bazel

* fix dep import order

* change command name from span to slasher-span-display

* change command args style using - instead of _

* sed s/CONFIGURATION/SLASHER PARAMS//

* change double neg to double pos condition

* remove unused anonymous func

* better function naming

* add range condition

* [deepsource] Fix Empty slice literal used to declare a variable
    GO-W1027

* correct typo

* do not show incorrect epochs due to round robin

* fix import

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-03-27 16:15:39 +00:00
Afanti
c1d75c295a chore: enhance comment and more readable (#13792) 2024-03-27 14:43:55 +00:00
Potuz
fad118cb04 Simplify ValidateAttestationTime (#13813)
The ValidateClock call in ValidateAttestationTime is redundant.

That call checks that attSlot is not greater than currentSlot + 128 slots.

Later there is a check that the attSlot start time is not greater than the
current slot start time + clockDisparity.

If attSlot were greater than currentSlot + 128 slots, the second check would
fail anyway.

The latter check already guarantees that attSlot cannot be larger than
currentSlot, therefore attEpoch > currentEpoch can never happen. For Deneb we
just need to check that attEpoch >= currentEpoch - 1.

Also removes some duplicated variables, like the attestation epoch being computed
twice.
2024-03-27 14:17:16 +00:00
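The resulting bound fits in one comparison. This is an illustrative sketch of the check described above, not the prysm source (the real version lives in ValidateAttestationTime and uses prysm's epoch types):

package example

// withinDenebInclusionWindow reports whether an attestation from attEpoch may
// still be considered at currentEpoch (current or previous epoch only).
// Writing the bound as attEpoch+1 >= currentEpoch avoids computing
// currentEpoch-1, which would underflow at epoch 0.
func withinDenebInclusionWindow(attEpoch, currentEpoch uint64) bool {
	return attEpoch+1 >= currentEpoch
}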
kasey
cdd1d819df Refactor batch verifier for sharing across packages (#13812)
* refactor batch verifier to share with pending queue

* unit test for batch verifier

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-27 12:36:17 +00:00
terence
97edffaff5 Add bid value metrics (#13804) 2024-03-26 14:58:41 +00:00
Radosław Kapka
6de7df6b9d Get genesis only once (#13796) 2024-03-26 03:26:34 +00:00
Sammy Rosso
14d7416c16 Spec test coverage report hack (#13718)
* Spec test report hack

* no export

* fix shell complaint

* shell fix?

* shell again?

* chmod +x ./hack/spectest-report.sh

* Review + improvements

* Remove unwanted change

* Add exclusion list

* Fix path + add eip6110 to exclusion

* Fix bazel path nonsense

* Add extra detail about specific test

* Cleanup exclusion list

* Add fail conditions

* Add mkdir

* Shorten filename + mkdir only if new

* Fix names

* Add to exclusion list

* Add report to .gitignore

* Back to stupid names

* Add Bazel flags option

---------

Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-25 16:10:32 +00:00
Radosław Kapka
6782df917a Utilize next slot cache in block rewards rpc (#13684)
* Utilize next slot cache in block rewards rpc

* msg fix

* tests
2024-03-25 08:56:20 +00:00
Bharath Vedartham
3d2230223f create the log file along with its parent directory if not present (#12675)
* Remove Feature Flag From Prater (#12082)

* Use Epoch boundary cache to retrieve balances (#12083)

* Use Epoch boundary cache to retrieve balances

* save boundary states before inserting to forkchoice

* move up last block save

* remove boundary checks on balances

* fix ordering

---------

Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>

* create the log file along with its parent directory if not present

* only give ReadWritePermissions to the log file

* use io/file package to create the parent directories

* fix ci related issues

* add regression tests

* run gazelle

* fix tests

* remove print statements

* gazelle

* Remove failing test for MkdirAll, this failure is not expected

---------

Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
Co-authored-by: prylabs-bulldozer[bot] <58059840+prylabs-bulldozer[bot]@users.noreply.github.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-03-22 15:32:08 +00:00
Preston Van Loon
b008a6422d Add tarball support for docker images (#13790) 2024-03-22 15:31:29 +00:00
Fredrik Svantes
d19365507f Set default LocalBlockValueBoost to 10 (#13772)
* Set default LocalBlockValueBoost to 10

* Update base.go

* Update mainnet_config.go
2024-03-22 13:18:20 +00:00
kasey
c05e39a668 fix handling of goodbye messages for limited peers (#13785)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-03-22 13:06:16 +00:00
Radosław Kapka
63c2b3563a Optimize GetDuties VC action (#13789)
* wait groups

* errgroup

* tests

* bzl

* review
2024-03-22 09:50:19 +00:00
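Illustrative only (not the prysm code): the waitgroup-to-errgroup progression the bullets above describe typically ends up shaped like this, using golang.org/x/sync/errgroup; the duty-fetching function names are hypothetical stand-ins:

package example

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// Hypothetical stand-ins for the concurrent requests GetDuties makes.
func fetchAttesterDuties(ctx context.Context) error { return nil }
func fetchProposerDuties(ctx context.Context) error { return nil }

// getDuties issues both requests concurrently. Wait returns the first
// non-nil error, and the derived context is cancelled as soon as any call
// fails, letting the sibling request stop early.
func getDuties(ctx context.Context) error {
	g, gctx := errgroup.WithContext(ctx)
	g.Go(func() error { return fetchAttesterDuties(gctx) })
	g.Go(func() error { return fetchProposerDuties(gctx) })
	return g.Wait()
}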
Justin Traglia
a6e86c6731 Rename payloadattribute Timestamps to Timestamp (#13523)
Co-authored-by: terence <terence@prysmaticlabs.com>
2024-03-21 21:11:01 +00:00
Radosław Kapka
32fb183392 Modify the algorithm of updateFinalizedBlockRoots (#13486)
* rename error var

* new algo

* replay_test

* add comment

* review

* fill out parent root

* handle edge cases

* review
2024-03-21 21:09:56 +00:00
carrychair
cade09ba0b chore: fix some typos (#13726)
Signed-off-by: carrychair <linghuchong404@gmail.com>
2024-03-21 21:00:21 +00:00
Potuz
f85ddfe265 Log the slot and blockroot when we deadline waiting for blobs (#13774) 2024-03-21 20:29:23 +00:00
terence
3b97094ea4 Log da block root in hex (#13787) 2024-03-21 20:26:17 +00:00
Nishant Das
acdbf7c491 expand it (#13770) 2024-03-21 19:57:22 +00:00
Potuz
1cc1effd75 Revert "pass justified=finalized in Prater (#13695)" (#13709)
This reverts commit 102518e106.
2024-03-21 17:42:40 +00:00
216 changed files with 6298 additions and 2769 deletions

.gitignore

@@ -41,3 +41,6 @@ jwt.hex
 # manual testing
 tmp
+# spectest coverage reports
+report.txt


@@ -130,9 +130,9 @@ aspect_bazel_lib_register_toolchains()
 http_archive(
     name = "rules_oci",
-    sha256 = "c71c25ed333a4909d2dd77e0b16c39e9912525a98c7fa85144282be8d04ef54c",
-    strip_prefix = "rules_oci-1.3.4",
-    url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.3.4/rules_oci-v1.3.4.tar.gz",
+    sha256 = "4a276e9566c03491649eef63f27c2816cc222f41ccdebd97d2c5159e84917c3b",
+    strip_prefix = "rules_oci-1.7.4",
+    url = "https://github.com/bazel-contrib/rules_oci/releases/download/v1.7.4/rules_oci-v1.7.4.tar.gz",
 )
 
 load("@rules_oci//oci:dependencies.bzl", "rules_oci_dependencies")
@@ -342,22 +342,6 @@ filegroup(
url = "https://github.com/eth-clients/eth2-networks/archive/934c948e69205dcf2deb87e4ae6cc140c335f94d.tar.gz",
)
http_archive(
name = "goerli_testnet",
build_file_content = """
filegroup(
name = "configs",
srcs = [
"prater/config.yaml",
],
visibility = ["//visibility:public"],
)
""",
sha256 = "43fc0f55ddff7b511713e2de07aa22846a67432df997296fb4fc09cd8ed1dcdb",
strip_prefix = "goerli-6522ac6684693740cd4ddcc2a0662e03702aa4a1",
url = "https://github.com/eth-clients/goerli/archive/6522ac6684693740cd4ddcc2a0662e03702aa4a1.tar.gz",
)
http_archive(
name = "holesky_testnet",
build_file_content = """


@@ -88,7 +88,7 @@ func TestToggle(t *testing.T) {
 	}
 }
 
-func TestToogleMultipleTimes(t *testing.T) {
+func TestToggleMultipleTimes(t *testing.T) {
 	t.Parallel()
 
 	v := New()
@@ -101,16 +101,16 @@ func TestToogleMultipleTimes(t *testing.T) {
 		expected := i%2 != 0
 		if v.IsSet() != expected {
-			t.Fatalf("AtomicBool.Toogle() doesn't work after %d calls, expected: %v, got %v", i, expected, v.IsSet())
+			t.Fatalf("AtomicBool.Toggle() doesn't work after %d calls, expected: %v, got %v", i, expected, v.IsSet())
 		}
 
 		if pre == v.IsSet() {
-			t.Fatalf("AtomicBool.Toogle() returned wrong value at the %dth calls, expected: %v, got %v", i, !v.IsSet(), pre)
+			t.Fatalf("AtomicBool.Toggle() returned wrong value at the %dth calls, expected: %v, got %v", i, !v.IsSet(), pre)
 		}
 	}
 }
 
-func TestToogleAfterOverflow(t *testing.T) {
+func TestToggleAfterOverflow(t *testing.T) {
 	t.Parallel()
 
 	var value int32 = math.MaxInt32
@@ -122,7 +122,7 @@ func TestToogleAfterOverflow(t *testing.T) {
 	v.Toggle()
 	expected := math.MaxInt32%2 == 0
 	if v.IsSet() != expected {
-		t.Fatalf("AtomicBool.Toogle() doesn't work after overflow, expected: %v, got %v", expected, v.IsSet())
+		t.Fatalf("AtomicBool.Toggle() doesn't work after overflow, expected: %v, got %v", expected, v.IsSet())
 	}
 
 	// make sure overflow happened
@@ -135,7 +135,7 @@ func TestToogleAfterOverflow(t *testing.T) {
 	v.Toggle()
 	expected = !expected
 	if v.IsSet() != expected {
-		t.Fatalf("AtomicBool.Toogle() doesn't work after the second call after overflow, expected: %v, got %v", expected, v.IsSet())
+		t.Fatalf("AtomicBool.Toggle() doesn't work after the second call after overflow, expected: %v, got %v", expected, v.IsSet())
 	}
 }


@@ -20,6 +20,7 @@ package event
 import (
 	"errors"
 	"reflect"
+	"slices"
 	"sync"
 )
@@ -219,12 +220,9 @@ type caseList []reflect.SelectCase
 // find returns the index of a case containing the given channel.
 func (cs caseList) find(channel interface{}) int {
-	for i, cas := range cs {
-		if cas.Chan.Interface() == channel {
-			return i
-		}
-	}
-	return -1
+	return slices.IndexFunc(cs, func(selectCase reflect.SelectCase) bool {
+		return selectCase.Chan.Interface() == channel
+	})
 }
 
 // delete removes the given case from cs.
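For reference, slices.IndexFunc is part of the Go 1.21+ standard library and has the same contract as the loop it replaces here: it returns the index of the first element satisfying the predicate, or -1 if none does. A small standalone illustration:

package main

import (
	"fmt"
	"slices"
)

func main() {
	xs := []int{2, 4, 6}
	fmt.Println(slices.IndexFunc(xs, func(v int) bool { return v > 3 })) // 1
	fmt.Println(slices.IndexFunc(xs, func(v int) bool { return v > 9 })) // -1, like find's miss case
}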


@@ -63,7 +63,7 @@ func Scatter(inputLen int, sFunc func(int, int, *sync.RWMutex) (interface{}, err
 	return results, nil
 }
 
-// calculateChunkSize calculates a suitable chunk size for the purposes of parallelisation.
+// calculateChunkSize calculates a suitable chunk size for the purposes of parallelization.
 func calculateChunkSize(items int) int {
 	// Start with a simple even split
 	chunkSize := items / runtime.GOMAXPROCS(0)


@@ -2,7 +2,7 @@
 # This script serves as a wrapper around bazel to limit the scope of environment variables that
 # may change the action output. Using this script should result in a higher cache hit ratio for
-# cached actions with a more heremtic build.
+# cached actions with a more hermetic build.
 
 env -i \
     PATH=/usr/bin:/bin \

@@ -11,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
f "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -399,14 +398,6 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// IsViableForCheckpoint returns whether the given checkpoint is a checkpoint in any
// chain known to forkchoice
func (s *Service) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.IsViableForCheckpoint(cp)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {


@@ -61,7 +61,7 @@ func TestService_headNextSyncCommitteeIndices(t *testing.T) {
 	indices, err := c.headNextSyncCommitteeIndices(context.Background(), 0, primitives.Slot(slot))
 	require.NoError(t, err)
-	// NextSyncCommittee should be be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
+	// NextSyncCommittee should be empty after `ProcessSyncCommitteeUpdates`. Validator should get indices.
 	require.NotEqual(t, 0, len(indices))
 }


@@ -586,7 +586,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 			s.blobNotifiers.delete(root)
 			return nil
 		case <-ctx.Done():
-			return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")
+			return errors.Wrapf(ctx.Err(), "context deadline waiting for blob sidecars slot: %d, BlockRoot: %#x", block.Slot(), root)
 		}
 	}
 }
@@ -594,7 +594,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
 func daCheckLogFields(root [32]byte, slot primitives.Slot, expected, missing int) logrus.Fields {
 	return logrus.Fields{
 		"slot":          slot,
-		"root":          root,
+		"root":          fmt.Sprintf("%#x", root),
 		"blobsExpected": expected,
 		"blobsWaiting":  missing,
 	}


@@ -60,7 +60,7 @@ func (s *Service) getFCUArgsEarlyBlock(cfg *postBlockProcessConfig, fcuArgs *fcu
 // logNonCanonicalBlockReceived prints a message informing that the received
 // block is not the head of the chain. It requires the caller holds a lock on
-// Foprkchoice.
+// Forkchoice.
 func (s *Service) logNonCanonicalBlockReceived(blockRoot [32]byte, headRoot [32]byte) {
 	receivedWeight, err := s.cfg.ForkChoiceStore.Weight(blockRoot)
 	if err != nil {


@@ -2115,7 +2115,7 @@ func TestMissingIndices(t *testing.T) {
 	for _, c := range cases {
 		bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
 		t.Run(c.name, func(t *testing.T) {
-			require.NoError(t, bm.CreateFakeIndices(c.root, c.present...))
+			require.NoError(t, bm.CreateFakeIndices(c.root, 0, c.present...))
 			missing, err := missingIndices(bs, c.root, c.expected)
 			if c.err != nil {
 				require.ErrorIs(t, err, c.err)


@@ -170,7 +170,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
 	// Send finalized events and finalized deposits in the background
 	if newFinalized {
 		finalized := s.cfg.ForkChoiceStore.FinalizedCheckpoint()
-		go s.sendNewFinalizedEvent(blockCopy, postState)
+		go s.sendNewFinalizedEvent(ctx, postState)
 		depCtx, cancel := context.WithTimeout(context.Background(), depositDeadline)
 		go func() {
 			s.insertFinalizedDeposits(depCtx, finalized.Root)
@@ -443,7 +443,7 @@ func (s *Service) updateFinalizationOnBlock(ctx context.Context, preState, postS
 // sendNewFinalizedEvent sends a new finalization checkpoint event over the
 // event feed. It needs to be called on the background
-func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBlock, postState state.BeaconState) {
+func (s *Service) sendNewFinalizedEvent(ctx context.Context, postState state.BeaconState) {
 	isValidPayload := false
 	s.headLock.RLock()
 	if s.head != nil {
@@ -451,8 +451,17 @@ func (s *Service) sendNewFinalizedEvent(signed interfaces.ReadOnlySignedBeaconBl
 	}
 	s.headLock.RUnlock()
 
+	blk, err := s.cfg.BeaconDB.Block(ctx, bytesutil.ToBytes32(postState.FinalizedCheckpoint().Root))
+	if err != nil {
+		log.WithError(err).Error("Could not retrieve block for finalized checkpoint root. Finalized event will not be emitted")
+		return
+	}
+	if blk == nil || blk.IsNil() || blk.Block() == nil || blk.Block().IsNil() {
+		log.WithError(err).Error("Block retrieved for finalized checkpoint root is nil. Finalized event will not be emitted")
+		return
+	}
+	stateRoot := blk.Block().StateRoot()
 	// Send an event regarding the new finalized checkpoint over a common event feed.
-	stateRoot := signed.Block().StateRoot()
 	s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
 		Type: statefeed.FinalizedCheckpoint,
 		Data: &ethpbv1.EventFinalizedCheckpoint{


@@ -8,12 +8,14 @@ import (
blockchainTesting "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -378,3 +380,38 @@ func TestHandleBlockBLSToExecutionChanges(t *testing.T) {
require.Equal(t, false, pool.ValidatorExists(idx))
})
}
func Test_sendNewFinalizedEvent(t *testing.T) {
s, _ := minimalTestService(t)
notifier := &blockchainTesting.MockStateNotifier{RecordEvents: true}
s.cfg.StateNotifier = notifier
finalizedSt, err := util.NewBeaconState()
require.NoError(t, err)
finalizedStRoot, err := finalizedSt.HashTreeRoot(s.ctx)
require.NoError(t, err)
b := util.NewBeaconBlock()
b.Block.StateRoot = finalizedStRoot[:]
sbb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
sbbRoot, err := sbb.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, s.cfg.BeaconDB.SaveBlock(s.ctx, sbb))
st, err := util.NewBeaconState()
require.NoError(t, err)
require.NoError(t, st.SetFinalizedCheckpoint(&ethpb.Checkpoint{
Epoch: 123,
Root: sbbRoot[:],
}))
s.sendNewFinalizedEvent(s.ctx, st)
require.Equal(t, 1, len(notifier.ReceivedEvents()))
e := notifier.ReceivedEvents()[0]
assert.Equal(t, statefeed.FinalizedCheckpoint, int(e.Type))
fc, ok := e.Data.(*ethpbv1.EventFinalizedCheckpoint)
require.Equal(t, true, ok, "event has wrong data type")
assert.Equal(t, primitives.Epoch(123), fc.Epoch)
assert.DeepEqual(t, sbbRoot[:], fc.Block)
assert.DeepEqual(t, finalizedStRoot[:], fc.State)
assert.Equal(t, false, fc.ExecutionOptimistic)
}


@@ -290,18 +290,10 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
 	fRoot := s.ensureRootNotZeros(bytesutil.ToBytes32(finalized.Root))
 	s.cfg.ForkChoiceStore.Lock()
 	defer s.cfg.ForkChoiceStore.Unlock()
-	if params.BeaconConfig().ConfigName != params.PraterName {
-		if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
-			Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
-			return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
-		}
-	} else {
-		if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
-			Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
-			return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
-		}
+	if err := s.cfg.ForkChoiceStore.UpdateJustifiedCheckpoint(s.ctx, &forkchoicetypes.Checkpoint{Epoch: justified.Epoch,
+		Root: bytesutil.ToBytes32(justified.Root)}); err != nil {
+		return errors.Wrap(err, "could not update forkchoice's justified checkpoint")
 	}
 	if err := s.cfg.ForkChoiceStore.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Epoch: finalized.Epoch,
 		Root: bytesutil.ToBytes32(finalized.Root)}); err != nil {
 		return errors.Wrap(err, "could not update forkchoice's finalized checkpoint")


@@ -804,7 +804,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
 	depositTrie, err := trie.GenerateTrieFromItems(trieItems, params.BeaconConfig().DepositContractTreeDepth)
 	assert.NoError(t, err)
 
-	// Perform this in a non-sensical ordering
+	// Perform this in a nonsensical ordering
 	require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 10, [32]byte{}, 0))
 	require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0))
 	require.NoError(t, dc.InsertFinalizedDeposits(context.Background(), 3, [32]byte{}, 0))


@@ -784,7 +784,7 @@ func TestFinalizedDeposits_ReturnsTrieCorrectly(t *testing.T) {
 	depositTrie, err := trie.GenerateTrieFromItems(trieItems, params.BeaconConfig().DepositContractTreeDepth)
 	assert.NoError(t, err)
 
-	// Perform this in a non-sensical ordering
+	// Perform this in a nonsensical ordering
 	err = dc.InsertFinalizedDeposits(context.Background(), 1, [32]byte{}, 0)
 	require.NoError(t, err)
 	err = dc.InsertFinalizedDeposits(context.Background(), 2, [32]byte{}, 0)


@@ -12,7 +12,6 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
var (
@@ -133,9 +132,6 @@ func ComputeSubnetFromCommitteeAndSlot(activeValCount uint64, comIdx primitives.
//
// In the attestation must be within the range of 95 to 102 in the example above.
func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clockDisparity time.Duration) error {
if err := slots.ValidateClock(attSlot, uint64(genesisTime.Unix())); err != nil {
return err
}
attTime, err := slots.ToTime(uint64(genesisTime.Unix()), attSlot)
if err != nil {
return err
@@ -182,24 +178,15 @@ func ValidateAttestationTime(attSlot primitives.Slot, genesisTime time.Time, clo
}
// EIP-7045: Starting in Deneb, allow any attestations from the current or previous epoch.
currentEpoch := slots.ToEpoch(currentSlot)
prevEpoch, err := currentEpoch.SafeSub(1)
if err != nil {
log.WithError(err).Debug("Ignoring underflow for a deneb attestation inclusion check in epoch 0")
prevEpoch = 0
}
attSlotEpoch := slots.ToEpoch(attSlot)
if attSlotEpoch != currentEpoch && attSlotEpoch != prevEpoch {
if attEpoch+1 < currentEpoch {
attError = fmt.Errorf(
"attestation epoch %d not within current epoch %d or previous epoch %d",
attSlot/params.BeaconConfig().SlotsPerEpoch,
"attestation epoch %d not within current epoch %d or previous epoch",
attEpoch,
currentEpoch,
prevEpoch,
)
return errors.Join(ErrTooLate, attError)
}
return nil
}


@@ -197,7 +197,7 @@ func Test_ValidateAttestationTime(t *testing.T) {
 				-500 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second,
 				).Add(200 * time.Millisecond),
 			},
-			wantedErr: "attestation epoch 8 not within current epoch 15 or previous epoch 14",
+			wantedErr: "attestation epoch 8 not within current epoch 15 or previous epoch",
 		},
 		{
 			name: "attestation.slot is well beyond current slot",
@@ -205,7 +205,7 @@ func Test_ValidateAttestationTime(t *testing.T) {
 			attSlot:     1 << 32,
 			genesisTime: prysmTime.Now().Add(-15 * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second),
 			},
-			wantedErr: "which exceeds max allowed value relative to the local clock",
+			wantedErr: "attestation slot 4294967296 not within attestation propagation range of 0 to 15 (current slot)",
 		},
 	}
 	for _, tt := range tests {


@@ -22,7 +22,7 @@ var balanceCache = cache.NewEffectiveBalanceCache()
// """
// Return the combined effective balance of the ``indices``.
// ``EFFECTIVE_BALANCE_INCREMENT`` Gwei minimum to avoid divisions by zero.
// Math safe up to ~10B ETH, afterwhich this overflows uint64.
// Math safe up to ~10B ETH, after which this overflows uint64.
// """
// return Gwei(max(EFFECTIVE_BALANCE_INCREMENT, sum([state.validators[index].effective_balance for index in indices])))
func TotalBalance(state state.ReadOnlyValidators, indices []primitives.ValidatorIndex) uint64 {


@@ -59,7 +59,7 @@ func ComputeDomainAndSign(st state.ReadOnlyBeaconState, epoch primitives.Epoch,
 	return ComputeDomainAndSignWithoutState(st.Fork(), epoch, domain, st.GenesisValidatorsRoot(), obj, key)
 }
 
-// ComputeDomainAndSignWithoutState offers the same functionalit as ComputeDomainAndSign without the need to provide a BeaconState.
+// ComputeDomainAndSignWithoutState offers the same functionality as ComputeDomainAndSign without the need to provide a BeaconState.
 // This is particularly helpful for signing values in tests.
 func ComputeDomainAndSignWithoutState(fork *ethpb.Fork, epoch primitives.Epoch, domain [4]byte, vr []byte, obj fssz.HashRoot, key bls.SecretKey) ([]byte, error) {
 	// EIP-7044: Beginning in Deneb, fix the fork version to Capella for signed exits.


@@ -94,6 +94,8 @@ func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current pri
 	entry := s.cache.ensure(key)
 	defer s.cache.delete(key)
 	root := b.Root()
+	entry.setDiskSummary(s.store.Summary(root))
+
 	// Verify we have all the expected sidecars, and fail fast if any are missing or inconsistent.
 	// We don't try to salvage problematic batches because this indicates a misbehaving peer and we'd rather
 	// ignore their response and decrease their peer score.

@@ -4,6 +4,7 @@ import (
"bytes"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -59,7 +60,12 @@ func (c *cache) delete(key cacheKey) {
// cacheEntry holds a fixed-length cache of BlobSidecars.
type cacheEntry struct {
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
scs [fieldparams.MaxBlobsPerBlock]*blocks.ROBlob
diskSummary filesystem.BlobStorageSummary
}
func (e *cacheEntry) setDiskSummary(sum filesystem.BlobStorageSummary) {
e.diskSummary = sum
}
// stash adds an item to the in-memory cache of BlobSidecars.
@@ -81,9 +87,17 @@ func (e *cacheEntry) stash(sc *blocks.ROBlob) error {
// the cache do not match those found in the block. If err is nil, then all expected
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
// filter only returns blobs that need to be checked. Blobs already available on disk will be excluded.
func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROBlob, error) {
scs := make([]blocks.ROBlob, kc.count())
if e.diskSummary.AllAvailable(kc.count()) {
return nil, nil
}
scs := make([]blocks.ROBlob, 0, kc.count())
for i := uint64(0); i < fieldparams.MaxBlobsPerBlock; i++ {
// We already have this blob, we don't need to write it or validate it.
if e.diskSummary.HasIndex(i) {
continue
}
if kc[i] == nil {
if e.scs[i] != nil {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, no block commitment", root, i, e.scs[i].KzgCommitment)
@@ -97,7 +111,7 @@ func (e *cacheEntry) filter(root [32]byte, kc safeCommitmentArray) ([]blocks.ROB
if !bytes.Equal(kc[i], e.scs[i].KzgCommitment) {
return nil, errors.Wrapf(errCommitmentMismatch, "root=%#x, index=%#x, commitment=%#x, block commitment=%#x", root, i, e.scs[i].KzgCommitment, kc[i])
}
scs[i] = *e.scs[i]
scs = append(scs, *e.scs[i])
}
return scs, nil
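The net effect of the filter above: an index still needs a DA check only if the block carries a commitment for it and the sidecar is not already on disk. A simplified standalone sketch of that rule (not the prysm types):

package example

// needCheck mirrors the filtering rule above on plain slices: an index still
// requires verification only if the block commits to it and it is not on
// disk. hasCommitment and onDisk are assumed to have equal length.
func needCheck(hasCommitment, onDisk []bool) []int {
	var need []int
	for i := range hasCommitment {
		if hasCommitment[i] && !onDisk[i] {
			need = append(need, i)
		}
	}
	return need
}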


@@ -3,9 +3,14 @@ package das
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestCacheEnsureDelete(t *testing.T) {
@@ -23,3 +28,145 @@ func TestCacheEnsureDelete(t *testing.T) {
var nilEntry *cacheEntry
require.Equal(t, nilEntry, c.entries[k])
}
type filterTestCaseSetupFunc func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
func filterTestCaseSetup(slot primitives.Slot, nBlobs int, onDisk []int, numExpected int) filterTestCaseSetupFunc {
return func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
blk, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, nBlobs)
commits, err := commitmentsToCheck(blk, blk.Block().Slot())
require.NoError(t, err)
entry := &cacheEntry{}
if len(onDisk) > 0 {
od := map[[32]byte][]int{blk.Root(): onDisk}
sumz := filesystem.NewMockBlobStorageSummarizer(t, od)
sum := sumz.Summary(blk.Root())
entry.setDiskSummary(sum)
}
expected := make([]blocks.ROBlob, 0, nBlobs)
for i := 0; i < commits.count(); i++ {
if entry.diskSummary.HasIndex(uint64(i)) {
continue
}
// If we aren't telling the cache a blob is on disk, add it to the expected list and stash.
expected = append(expected, blobs[i])
require.NoError(t, entry.stash(&blobs[i]))
}
require.Equal(t, numExpected, len(expected))
return entry, commits, expected
}
}
func TestFilterDiskSummary(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
setup filterTestCaseSetupFunc
}{
{
name: "full blobs, all on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{0, 1, 2, 3, 4, 5}, 0),
},
{
name: "full blobs, first on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{0}, 5),
},
{
name: "full blobs, middle on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{2}, 5),
},
{
name: "full blobs, last on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{5}, 5),
},
{
name: "full blobs, none on disk",
setup: filterTestCaseSetup(denebSlot, 6, []int{}, 6),
},
{
name: "one commitment, on disk",
setup: filterTestCaseSetup(denebSlot, 1, []int{0}, 0),
},
{
name: "one commitment, not on disk",
setup: filterTestCaseSetup(denebSlot, 1, []int{}, 1),
},
{
name: "two commitments, first on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{0}, 1),
},
{
name: "two commitments, last on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{1}, 1),
},
{
name: "two commitments, none on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{}, 2),
},
{
name: "two commitments, all on disk",
setup: filterTestCaseSetup(denebSlot, 2, []int{0, 1}, 0),
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
}
}
func TestFilter(t *testing.T) {
denebSlot, err := slots.EpochStart(params.BeaconConfig().DenebForkEpoch)
require.NoError(t, err)
cases := []struct {
name string
setup func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob)
err error
}{
{
name: "commitments mismatch - extra sidecar",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
commits[5] = nil
return entry, commits, expected
},
err: errCommitmentMismatch,
},
{
name: "sidecar missing",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
entry.scs[5] = nil
return entry, commits, expected
},
err: errMissingSidecar,
},
{
name: "commitments mismatch - different bytes",
setup: func(t *testing.T) (*cacheEntry, safeCommitmentArray, []blocks.ROBlob) {
entry, commits, expected := filterTestCaseSetup(denebSlot, 6, []int{0, 1}, 4)(t)
entry.scs[5].KzgCommitment = []byte("nope")
return entry, commits, expected
},
err: errCommitmentMismatch,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
entry, commits, expected := c.setup(t)
// first (root) argument doesn't matter, it is just for logs
got, err := entry.filter([32]byte{}, commits)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, len(expected), len(got))
})
}
}


@@ -4,23 +4,28 @@ go_library(
name = "go_default_library",
srcs = [
"blob.go",
"ephemeral.go",
"cache.go",
"iteration.go",
"layout.go",
"log.go",
"metrics.go",
"mock.go",
"pruner.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/file:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
@@ -33,17 +38,25 @@ go_test(
name = "go_default_test",
srcs = [
"blob_test.go",
"cache_test.go",
"iteration_test.go",
"layout_test.go",
"migration_test.go",
"pruner_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/db:go_default_library",
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
],


@@ -2,10 +2,8 @@ package filesystem
 import (
 	"fmt"
-	"os"
+	"math"
 	"path"
-	"strconv"
-	"strings"
 	"time"
 
 	"github.com/pkg/errors"
@@ -14,7 +12,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/io/file"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/logging"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
@@ -27,12 +24,7 @@ var (
 	errNoBasePath = errors.New("BlobStorage base path not specified in init")
 )
 
-const (
-	sszExt  = "ssz"
-	partExt = "part"
-
-	directoryPermissions = 0700
-)
+const directoryPermissions = 0700
 
 // BlobStorageOption is a functional option for configuring a BlobStorage.
 type BlobStorageOption func(*BlobStorage) error
@@ -61,6 +53,13 @@ func WithSaveFsync(fsync bool) BlobStorageOption {
 	}
 }
 
+func WithFs(fs afero.Fs) BlobStorageOption {
+	return func(b *BlobStorage) error {
+		b.fs = fs
+		return nil
+	}
+}
+
 // NewBlobStorage creates a new instance of the BlobStorage object. Note that the implementation of BlobStorage may
 // attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
 // initialized once per beacon node.
@@ -71,19 +70,24 @@ func NewBlobStorage(opts ...BlobStorageOption) (*BlobStorage, error) {
 			return nil, errors.Wrap(err, "failed to create blob storage")
 		}
 	}
-	if b.base == "" {
-		return nil, errNoBasePath
+	// Allow tests to set up a different fs using WithFs.
+	if b.fs == nil {
+		if b.base == "" {
+			return nil, errNoBasePath
+		}
+		b.base = path.Clean(b.base)
+		if err := file.MkdirAll(b.base); err != nil {
+			return nil, errors.Wrapf(err, "failed to create blob storage at %s", b.base)
+		}
+		b.fs = afero.NewBasePathFs(afero.NewOsFs(), b.base)
 	}
-	b.base = path.Clean(b.base)
-	if err := file.MkdirAll(b.base); err != nil {
-		return nil, errors.Wrapf(err, "failed to create blob storage at %s", b.base)
-	}
-	b.fs = afero.NewBasePathFs(afero.NewOsFs(), b.base)
-	pruner, err := newBlobPruner(b.fs, b.retentionEpochs)
+	b.cache = newBlobStorageCache()
+	pruner := newBlobPruner(b.retentionEpochs)
+	layout, err := newPeriodicEpochLayout(b.fs, b.cache, pruner)
 	if err != nil {
 		return nil, err
 	}
-	b.pruner = pruner
+	b.layout = layout
 	return b, nil
 }
@@ -94,26 +98,29 @@ type BlobStorage struct {
 	fsync  bool
 	fs     afero.Fs
-	pruner *blobPruner
+	layout runtimeLayout
+	cache  *blobStorageCache
 }
 
 // WarmCache runs the prune routine with an expiration of slot of 0, so nothing will be pruned, but the pruner's cache
 // will be populated at node startup, avoiding a costly cold prune (~4s in syscalls) during syncing.
 func (bs *BlobStorage) WarmCache() {
-	if bs.pruner == nil {
-		return
-	}
-	go func() {
-		if err := bs.pruner.prune(0); err != nil {
-			log.WithError(err).Error("Error encountered while warming up blob pruner cache")
-		}
-	}()
+	start := time.Now()
+	if err := warmCache(bs.layout, bs.cache); err != nil {
+		log.WithError(err).Error("Error encountered while warming up blob filesystem cache.")
+	}
+	log.WithField("elapsed", time.Since(start)).Info("Blob filesystem cache warm-up complete.")
+	from := &flatRootLayout{fs: bs.fs}
+	if err := migrateLayout(bs.fs, from, bs.layout, bs.cache); err != nil {
+		log.WithError(err).Error("Error encountered while migrating legacy blob storage scheme.")
+	}
 }
 
 // Save saves blobs given a list of sidecars.
 func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
 	startTime := time.Now()
-	fname := namerForSidecar(sidecar)
-	sszPath := fname.path()
+	ident := identForSidecar(sidecar)
+	sszPath := bs.layout.sszPath(ident)
 	exists, err := afero.Exists(bs.fs, sszPath)
 	if err != nil {
 		return err
@@ -122,10 +129,9 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
 		log.WithFields(logging.BlobFields(sidecar.ROBlob)).Debug("Ignoring a duplicate blob sidecar save attempt")
 		return nil
 	}
-	if bs.pruner != nil {
-		if err := bs.pruner.notify(sidecar.BlockRoot(), sidecar.Slot(), sidecar.Index); err != nil {
-			return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", sidecar.BlockRoot())
-		}
+	if err := bs.layout.notify(ident); err != nil {
+		return errors.Wrapf(err, "problem maintaining pruning cache/metrics for sidecar with root=%#x", sidecar.BlockRoot())
 	}
 
 	// Serialize the ethpb.BlobSidecar to binary data using SSZ.
@@ -136,10 +142,10 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
 		return errSidecarEmptySSZData
 	}
 
-	if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
+	if err := bs.fs.MkdirAll(bs.layout.dir(ident), directoryPermissions); err != nil {
 		return err
 	}
-	partPath := fname.partPath(fmt.Sprintf("%p", sidecarData))
+	partPath := bs.layout.partPath(ident, fmt.Sprintf("%p", sidecarData))
 
 	partialMoved := false
 	// Ensure the partial file is deleted.
@@ -204,67 +210,37 @@ func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
 // value is always a VerifiedROBlob.
 func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, error) {
 	startTime := time.Now()
-	expected := blobNamer{root: root, index: idx}
-	encoded, err := afero.ReadFile(bs.fs, expected.path())
-	var v blocks.VerifiedROBlob
+	ident, err := bs.layout.ident(root, idx)
 	if err != nil {
-		return v, err
-	}
-	s := &ethpb.BlobSidecar{}
-	if err := s.UnmarshalSSZ(encoded); err != nil {
-		return v, err
-	}
-	ro, err := blocks.NewROBlobWithRoot(s, root)
-	if err != nil {
-		return blocks.VerifiedROBlob{}, err
+		return verification.VerifiedROBlobError(err)
 	}
 	defer func() {
 		blobFetchLatency.Observe(float64(time.Since(startTime).Milliseconds()))
 	}()
-	return verification.BlobSidecarNoop(ro)
+	return verification.VerifiedROBlobFromDisk(bs.fs, root, bs.layout.sszPath(ident))
 }
 
 // Remove removes all blobs for a given root.
 func (bs *BlobStorage) Remove(root [32]byte) error {
-	rootDir := blobNamer{root: root}.dir()
-	return bs.fs.RemoveAll(rootDir)
+	dirIdent, err := bs.layout.dirIdent(root)
+	if err != nil {
+		return err
+	}
+	_, err = bs.layout.remove(dirIdent)
+	return err
 }
 
 // Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
 // This value can be compared to the commitments observed in a block to determine which indices need to be found
 // on the network to confirm data availability.
 func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]bool, error) {
-	var mask [fieldparams.MaxBlobsPerBlock]bool
-	rootDir := blobNamer{root: root}.dir()
-	entries, err := afero.ReadDir(bs.fs, rootDir)
-	if err != nil {
-		if os.IsNotExist(err) {
-			return mask, nil
-		}
-		return mask, err
-	}
-	for i := range entries {
-		if entries[i].IsDir() {
-			continue
-		}
-		name := entries[i].Name()
-		if !strings.HasSuffix(name, sszExt) {
-			continue
-		}
-		parts := strings.Split(name, ".")
-		if len(parts) != 2 {
-			continue
-		}
-		u, err := strconv.ParseUint(parts[0], 10, 64)
-		if err != nil {
-			return mask, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
-		}
-		if u >= fieldparams.MaxBlobsPerBlock {
-			return mask, errIndexOutOfBounds
-		}
-		mask[u] = true
-	}
-	return mask, nil
+	return bs.Summary(root).mask, nil
+}
+
+// Summary returns the BlobStorageSummary from the layout.
+// Internally, this is a cached representation of the directory listing for the given root.
+func (bs *BlobStorage) Summary(root [32]byte) BlobStorageSummary {
+	return bs.layout.summary(root)
 }
// Clear deletes all files on the filesystem.
@@ -281,27 +257,11 @@ func (bs *BlobStorage) Clear() error {
 	return nil
 }
 
-type blobNamer struct {
-	root  [32]byte
-	index uint64
-}
-
-func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
-	return blobNamer{root: sc.BlockRoot(), index: sc.Index}
-}
-
-func (p blobNamer) dir() string {
-	return rootString(p.root)
-}
-
-func (p blobNamer) partPath(entropy string) string {
-	return path.Join(p.dir(), fmt.Sprintf("%s-%d.%s", entropy, p.index, partExt))
-}
-
-func (p blobNamer) path() string {
-	return path.Join(p.dir(), fmt.Sprintf("%d.%s", p.index, sszExt))
-}
-
-func rootString(root [32]byte) string {
-	return fmt.Sprintf("%#x", root)
-}
+// WithinRetentionPeriod checks if the requested epoch is within the blob retention period.
+func (bs *BlobStorage) WithinRetentionPeriod(requested, current primitives.Epoch) bool {
+	if requested > math.MaxUint64-bs.retentionEpochs {
+		// If there is an overflow, then the retention period was set to an extremely large number.
+		return true
+	}
+	return requested+bs.retentionEpochs >= current
+}
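The overflow guard above works because requested+retentionEpochs is computed in uint64 arithmetic: if retentionEpochs is enormous (e.g. set near MaxUint64 to effectively disable pruning), the sum would wrap around and the comparison would wrongly report the epoch as out of retention. Checking requested > math.MaxUint64-retentionEpochs first detects exactly the cases where the addition would overflow, and treats the period as unbounded. Worked example: with retentionEpochs = 2^64-1, MaxUint64-retentionEpochs is 0, so any requested >= 1 trips the guard and the function returns true without ever computing the wrapping sum.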


@@ -2,33 +2,33 @@ package filesystem
import (
"bytes"
"math"
"os"
"path"
"sync"
"testing"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
 func TestBlobStorage_SaveBlobData(t *testing.T) {
 	_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, fieldparams.MaxBlobsPerBlock)
-	testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
-	require.NoError(t, err)
+	testSidecars := verification.FakeVerifySliceForTest(t, sidecars)
 
 	t.Run("no error for duplicate", func(t *testing.T) {
-		fs, bs, err := NewEphemeralBlobStorageWithFs(t)
-		require.NoError(t, err)
+		fs, bs := NewEphemeralBlobStorageAndFs(t)
 		existingSidecar := testSidecars[0]
 
-		blobPath := namerForSidecar(existingSidecar).path()
+		blobPath := bs.layout.sszPath(identForSidecar(existingSidecar))
 		// Serialize the existing BlobSidecar to binary data.
 		existingSidecarData, err := ssz.MarshalSSZ(existingSidecar)
 		require.NoError(t, err)
@@ -85,7 +85,7 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
 		require.NoError(t, bs.Remove(expected.BlockRoot()))
 		_, err = bs.Get(expected.BlockRoot(), expected.Index)
-		require.ErrorContains(t, "file does not exist", err)
+		require.Equal(t, true, db.IsNotFound(err))
 	})
 
 	t.Run("clear", func(t *testing.T) {
@@ -126,15 +126,13 @@ func TestBlobStorage_SaveBlobData(t *testing.T) {
 	})
 }
 
 // pollUntil polls a condition function until it returns true or a timeout is reached.
 func TestBlobIndicesBounds(t *testing.T) {
-	fs, bs, err := NewEphemeralBlobStorageWithFs(t)
-	require.NoError(t, err)
+	fs := afero.NewMemMapFs()
 	root := [32]byte{}
 
 	okIdx := uint64(fieldparams.MaxBlobsPerBlock - 1)
-	writeFakeSSZ(t, fs, root, okIdx)
+	writeFakeSSZ(t, fs, root, 0, okIdx)
+	bs := NewEphemeralBlobStorageUsingFs(t, fs)
 	indices, err := bs.Indices(root)
 	require.NoError(t, err)
 	var expected [fieldparams.MaxBlobsPerBlock]bool
@@ -144,25 +142,27 @@ func TestBlobIndicesBounds(t *testing.T) {
 	}
 
 	oobIdx := uint64(fieldparams.MaxBlobsPerBlock)
-	writeFakeSSZ(t, fs, root, oobIdx)
-	_, err = bs.Indices(root)
-	require.ErrorIs(t, err, errIndexOutOfBounds)
+	writeFakeSSZ(t, fs, root, 0, oobIdx)
+	// This now fails at cache warmup time.
+	require.ErrorIs(t, err, warmCache(bs.layout, bs.cache))
 }
 
-func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, idx uint64) {
-	namer := blobNamer{root: root, index: idx}
-	require.NoError(t, fs.MkdirAll(namer.dir(), 0700))
-	fh, err := fs.Create(namer.path())
+func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, slot primitives.Slot, idx uint64) {
+	epoch := slots.ToEpoch(slot)
+	namer := newBlobIdent(root, epoch, idx)
+	layout := periodicEpochLayout{}
+	require.NoError(t, fs.MkdirAll(layout.dir(namer), 0700))
+	fh, err := fs.Create(layout.sszPath(namer))
 	require.NoError(t, err)
 	_, err = fh.Write([]byte("derp"))
 	require.NoError(t, err)
 	require.NoError(t, fh.Close())
 }
/*
func TestBlobStoragePrune(t *testing.T) {
currentSlot := primitives.Slot(200000)
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
fs, bs := NewEphemeralBlobStorageAndFs(t)
t.Run("PruneOne", func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 300, fieldparams.MaxBlobsPerBlock)
@@ -172,10 +172,15 @@ func TestBlobStoragePrune(t *testing.T) {
for _, sidecar := range testSidecars {
require.NoError(t, bs.Save(sidecar))
}
ident := identForSidecar(testSidecars[0])
beforeFolders, err := afero.ReadDir(fs, ident.groupDir())
require.NoError(t, err)
require.Equal(t, 1, len(beforeFolders))
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
remainingFolders, err := afero.ReadDir(fs, ident.groupDir())
require.NoError(t, err)
require.Equal(t, 0, len(remainingFolders))
})
@@ -183,6 +188,7 @@ func TestBlobStoragePrune(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 299, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
ident := identForSidecar(testSidecars[0])
for _, sidecar := range testSidecars[4:] {
require.NoError(t, bs.Save(sidecar))
@@ -190,57 +196,46 @@ func TestBlobStoragePrune(t *testing.T) {
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
remainingFolders, err := afero.ReadDir(fs, ident.groupDir())
require.NoError(t, err)
require.Equal(t, 0, len(remainingFolders))
})
t.Run("PruneMany", func(t *testing.T) {
blockQty := 10
slot := primitives.Slot(1)
for j := 0; j <= blockQty; j++ {
root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, fieldparams.MaxBlobsPerBlock)
pruneBefore := currentSlot - bs.pruner.windowSize
increment := primitives.Slot(10000)
slots := []primitives.Slot{
pruneBefore - increment,
pruneBefore - (2 * increment),
pruneBefore,
pruneBefore + increment,
pruneBefore + (2 * increment),
}
namers := make([]blobIdent, len(slots))
for i, s := range slots {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, s, 1)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(testSidecars[0]))
slot += 10000
namers[i] = identForSidecar(testSidecars[0])
}
require.NoError(t, bs.pruner.prune(currentSlot-bs.pruner.windowSize))
remainingFolders, err := afero.ReadDir(fs, ".")
require.NoError(t, err)
require.Equal(t, 4, len(remainingFolders))
// first 2 subdirs should be removed
for _, nmr := range namers[0:2] {
entries, err := listDir(fs, nmr.dir())
require.Equal(t, 0, len(entries))
require.ErrorIs(t, err, os.ErrNotExist)
}
// the rest should still be there
for _, nmr := range namers[2:] {
entries, err := listDir(fs, nmr.dir())
require.NoError(t, err)
require.Equal(t, 1, len(entries))
}
})
}
func BenchmarkPruning(b *testing.B) {
var t *testing.T
_, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
blockQty := 10000
currentSlot := primitives.Slot(150000)
slot := primitives.Slot(0)
for j := 0; j <= blockQty; j++ {
root := bytesutil.ToBytes32(bytesutil.ToBytes(uint64(slot), 32))
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, root, slot, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(testSidecars[0]))
slot += 100
}
b.ReportAllocs()
b.ResetTimer()
for i := 0; i < b.N; i++ {
err := bs.pruner.prune(currentSlot)
require.NoError(b, err)
}
}
*/
func TestNewBlobStorage(t *testing.T) {
_, err := NewBlobStorage()
@@ -248,3 +243,50 @@ func TestNewBlobStorage(t *testing.T) {
_, err = NewBlobStorage(WithBasePath(path.Join(t.TempDir(), "good")))
require.NoError(t, err)
}
func TestConfig_WithinRetentionPeriod(t *testing.T) {
retention := primitives.Epoch(16)
storage := &BlobStorage{retentionEpochs: retention}
cases := []struct {
name string
requested primitives.Epoch
current primitives.Epoch
within bool
}{
{
name: "before",
requested: 0,
current: retention + 1,
within: false,
},
{
name: "same",
requested: 0,
current: 0,
within: true,
},
{
name: "boundary",
requested: 0,
current: retention,
within: true,
},
{
name: "one less",
requested: retention - 1,
current: retention,
within: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
require.Equal(t, c.within, storage.WithinRetentionPeriod(c.requested, c.current))
})
}
t.Run("overflow", func(t *testing.T) {
storage := &BlobStorage{retentionEpochs: math.MaxUint64}
require.Equal(t, true, storage.WithinRetentionPeriod(1, 1))
})
}

View File

@@ -0,0 +1,151 @@
package filesystem
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// bytesPerSidecar is the size of an ssz-encoded deneb BlobSidecar:
// index (8) + blob (131072) + kzg_commitment (48) + kzg_proof (48) +
// signed_block_header (112 + 96) + inclusion proof (17 * 32) = 131928 bytes.
const bytesPerSidecar = 131928
// blobIndexMask is a bitmask representing the set of blob indices that are currently set.
type blobIndexMask [fieldparams.MaxBlobsPerBlock]bool
// BlobStorageSummary represents the cached information about the BlobSidecars on disk for a single block root.
type BlobStorageSummary struct {
epoch primitives.Epoch
mask blobIndexMask
}
// HasIndex returns true if the BlobSidecar at the given index is available in the filesystem.
func (s BlobStorageSummary) HasIndex(idx uint64) bool {
// Protect from panic, but assume callers are sophisticated enough to not need an error telling them they have an invalid idx.
if idx >= fieldparams.MaxBlobsPerBlock {
return false
}
return s.mask[idx]
}
// AllAvailable returns true if we have all blobs for all indices from 0 to count-1.
func (s BlobStorageSummary) AllAvailable(count int) bool {
if count > fieldparams.MaxBlobsPerBlock {
return false
}
for i := 0; i < count; i++ {
if !s.mask[i] {
return false
}
}
return true
}
// BlobStorageSummarizer can be used to receive a summary of metadata about blobs on disk for a given root.
// The BlobStorageSummary can be used to check which indices (if any) are available for a given block by root.
type BlobStorageSummarizer interface {
Summary(root [32]byte) BlobStorageSummary
}
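// As a usage sketch (illustrative only, not part of the package API): a caller
// holding a BlobStorageSummarizer can compute which sidecar indices still need
// to be fetched for a block with `count` expected commitments.
func missingIndices(sum BlobStorageSummarizer, root [32]byte, count int) []uint64 {
	s := sum.Summary(root)
	missing := make([]uint64, 0, count)
	for i := 0; i < count; i++ {
		if !s.HasIndex(uint64(i)) {
			missing = append(missing, uint64(i))
		}
	}
	return missing
}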
type blobStorageCache struct {
mu sync.RWMutex
nBlobs float64
cache map[[32]byte]BlobStorageSummary
}
var _ BlobStorageSummarizer = &blobStorageCache{}
func newBlobStorageCache() *blobStorageCache {
return &blobStorageCache{
cache: make(map[[32]byte]BlobStorageSummary),
}
}
// Summary returns the BlobStorageSummary for `root`. The BlobStorageSummary can be used to check for the presence of
// BlobSidecars based on Index.
func (s *blobStorageCache) Summary(root [32]byte) BlobStorageSummary {
s.mu.RLock()
defer s.mu.RUnlock()
return s.cache[root]
}
func (s *blobStorageCache) ensure(key [32]byte, epoch primitives.Epoch, idx uint64) error {
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
s.mu.Lock()
defer s.mu.Unlock()
v := s.cache[key]
v.epoch = epoch
if !v.mask[idx] {
s.updateMetrics(1)
}
v.mask[idx] = true
s.cache[key] = v
return nil
}
func (s *blobStorageCache) epoch(key [32]byte) (primitives.Epoch, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
v, ok := s.cache[key]
if !ok {
return 0, false
}
return v.epoch, ok
}
func (s *blobStorageCache) get(key [32]byte) (BlobStorageSummary, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
v, ok := s.cache[key]
return v, ok
}
func (s *blobStorageCache) identForIdx(key [32]byte, idx uint64) (blobIdent, error) {
v, ok := s.get(key)
if !ok || !v.HasIndex(idx) {
return blobIdent{}, db.ErrNotFound
}
return blobIdent{
root: key,
index: idx,
epoch: v.epoch,
}, nil
}
func (s *blobStorageCache) identForRoot(key [32]byte) (blobIdent, error) {
v, ok := s.get(key)
if !ok {
return blobIdent{}, db.ErrNotFound
}
return blobIdent{
root: key,
epoch: v.epoch,
}, nil
}
func (s *blobStorageCache) evict(key [32]byte) int {
deleted := 0
s.mu.Lock()
v, ok := s.cache[key]
if ok {
for i := range v.mask {
if v.mask[i] {
deleted += 1
}
}
}
delete(s.cache, key)
s.mu.Unlock()
if deleted > 0 {
s.updateMetrics(-float64(deleted))
}
return deleted
}
func (s *blobStorageCache) updateMetrics(delta float64) {
s.nBlobs += delta
blobDiskCount.Set(s.nBlobs)
blobDiskSize.Set(s.nBlobs * bytesPerSidecar)
}
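// Lifecycle sketch with hypothetical values, showing how the layout code above
// feeds and drains the cache.
func cacheLifecycleExample() {
	c := newBlobStorageCache()
	root := [32]byte{0x01}
	// Record that sidecar index 0 for this root, at epoch 5330, is on disk.
	if err := c.ensure(root, primitives.Epoch(5330), 0); err != nil {
		panic(err) // only possible for an out-of-bounds index
	}
	sum := c.Summary(root)
	_ = sum.HasIndex(0)     // true
	_ = sum.AllAvailable(1) // true: index 0 covers a count of 1
	_ = c.evict(root)       // drops the entry; returns 1, the number of set indices
}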

View File

@@ -0,0 +1,150 @@
package filesystem
import (
"testing"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestSlotByRoot_Summary(t *testing.T) {
var noneSet, allSet, firstSet, lastSet, oneSet blobIndexMask
firstSet[0] = true
lastSet[len(lastSet)-1] = true
oneSet[1] = true
for i := range allSet {
allSet[i] = true
}
cases := []struct {
name string
root [32]byte
expected *blobIndexMask
}{
{
name: "not found",
},
{
name: "none set",
expected: &noneSet,
},
{
name: "index 1 set",
expected: &oneSet,
},
{
name: "all set",
expected: &allSet,
},
{
name: "first set",
expected: &firstSet,
},
{
name: "last set",
expected: &lastSet,
},
}
sc := newBlobStorageCache()
for _, c := range cases {
if c.expected != nil {
key := bytesutil.ToBytes32([]byte(c.name))
sc.cache[key] = BlobStorageSummary{epoch: 0, mask: *c.expected}
}
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
key := bytesutil.ToBytes32([]byte(c.name))
sum := sc.Summary(key)
for i := range c.expected {
ui := uint64(i)
if c.expected == nil {
require.Equal(t, false, sum.HasIndex(ui))
} else {
require.Equal(t, c.expected[i], sum.HasIndex(ui))
}
}
})
}
}
func TestAllAvailable(t *testing.T) {
idxUpTo := func(u int) []int {
r := make([]int, u)
for i := range r {
r[i] = i
}
return r
}
require.DeepEqual(t, []int{}, idxUpTo(0))
require.DeepEqual(t, []int{0}, idxUpTo(1))
require.DeepEqual(t, []int{0, 1, 2, 3, 4, 5}, idxUpTo(6))
cases := []struct {
name string
idxSet []int
count int
aa bool
}{
{
// If there are no blobs committed, then all the committed blobs are available.
name: "none in idx, 0 arg",
count: 0,
aa: true,
},
{
name: "none in idx, 1 arg",
count: 1,
aa: false,
},
{
name: "first in idx, 1 arg",
idxSet: []int{0},
count: 1,
aa: true,
},
{
name: "second in idx, 1 arg",
idxSet: []int{1},
count: 1,
aa: false,
},
{
name: "first missing, 2 arg",
idxSet: []int{1},
count: 2,
aa: false,
},
{
name: "all missing, 1 arg",
count: 6,
aa: false,
},
{
name: "out of bound is safe",
count: fieldparams.MaxBlobsPerBlock + 1,
aa: false,
},
{
name: "max present",
count: fieldparams.MaxBlobsPerBlock,
idxSet: idxUpTo(fieldparams.MaxBlobsPerBlock),
aa: true,
},
{
name: "one present",
count: 1,
idxSet: idxUpTo(1),
aa: true,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
var mask blobIndexMask
for _, idx := range c.idxSet {
mask[idx] = true
}
sum := BlobStorageSummary{mask: mask}
require.Equal(t, c.aa, sum.AllAvailable(c.count))
})
}
}

View File

@@ -1,63 +0,0 @@
package filesystem
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/spf13/afero"
)
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
t.Fatal("test setup issue", err)
}
return &BlobStorage{fs: fs, pruner: pruner}
}
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageWithFs(t testing.TB) (afero.Fs, *BlobStorage, error) {
fs := afero.NewMemMapFs()
pruner, err := newBlobPruner(fs, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
if err != nil {
t.Fatal("test setup issue", err)
}
return fs, &BlobStorage{fs: fs, pruner: pruner}, nil
}
type BlobMocker struct {
fs afero.Fs
bs *BlobStorage
}
// CreateFakeIndices creates empty blob sidecar files at the expected path for the given
// root and indices to influence the result of Indices().
func (bm *BlobMocker) CreateFakeIndices(root [32]byte, indices ...uint64) error {
for i := range indices {
n := blobNamer{root: root, index: indices[i]}
if err := bm.fs.MkdirAll(n.dir(), directoryPermissions); err != nil {
return err
}
f, err := bm.fs.Create(n.path())
if err != nil {
return err
}
if err := f.Close(); err != nil {
return err
}
}
return nil
}
// NewEphemeralBlobStorageWithMocker returns a *BlobMocker value in addition to the BlobStorage value.
// BlobMocker encapsulates blob path construction to avoid leaking implementation details.
func NewEphemeralBlobStorageWithMocker(_ testing.TB) (*BlobMocker, *BlobStorage) {
fs := afero.NewMemMapFs()
bs := &BlobStorage{fs: fs}
return &BlobMocker{fs: fs, bs: bs}, bs
}

View File

@@ -0,0 +1,328 @@
package filesystem
import (
"encoding/binary"
"fmt"
"io"
"path"
"path/filepath"
"strconv"
"strings"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
var errIdentFailure = errors.New("failed to determine blob metadata, ignoring sub-path")
type identificationError struct {
err error
path string
ident blobIdent
}
func (ide *identificationError) Error() string {
return fmt.Sprintf("%s path=%s, err=%s", errIdentFailure.Error(), ide.path, ide.err.Error())
}
func (ide *identificationError) Unwrap() error {
return ide.err
}
func (ide *identificationError) Is(err error) bool {
return err == errIdentFailure
}
func (ide *identificationError) LogFields() logrus.Fields {
fields := ide.ident.logFields()
fields["path"] = ide.path
return fields
}
func newIdentificationError(path string, ident blobIdent, err error) *identificationError {
return &identificationError{path: path, ident: ident, err: err}
}
func listDir(fs afero.Fs, dir string) ([]string, error) {
top, err := fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
// re the -1 param: "If n <= 0, Readdirnames returns all the names from the directory in a single slice"
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
return dirs, nil
}
type layoutLevel struct {
populateIdent identPopulator
filter func(string) bool
}
type identPopulator func(blobIdent, string) (blobIdent, error)
type identIterator struct {
fs afero.Fs
path string
child *identIterator
ident blobIdent
levels []layoutLevel
entries []string
offset int
}
func (iter *identIterator) next() (blobIdent, error) {
if iter.child != nil {
next, err := iter.child.next()
if err == nil {
return next, nil
}
if err != io.EOF {
return blobIdent{}, err
}
}
return iter.advanceChild()
}
func (iter *identIterator) advanceChild() (blobIdent, error) {
defer func() {
iter.offset += 1
}()
for i := iter.offset; i < len(iter.entries); i++ {
iter.offset = i
nextPath := filepath.Join(iter.path, iter.entries[iter.offset])
nextLevel := iter.levels[0]
if !nextLevel.filter(nextPath) {
continue
}
ident, err := nextLevel.populateIdent(iter.ident, nextPath)
if err != nil {
return ident, newIdentificationError(nextPath, ident, err)
}
// if we're at the leaf level, we can return the updated ident.
if len(iter.levels) == 1 {
return ident, nil
}
entries, err := listDir(iter.fs, nextPath)
if err != nil {
return blobIdent{}, err
}
if len(entries) == 0 {
continue
}
iter.child = &identIterator{
fs: iter.fs,
path: nextPath,
ident: ident,
levels: iter.levels[1:],
entries: entries,
}
return iter.child.next()
}
return blobIdent{}, io.EOF
}
func populateNoop(namer blobIdent, dir string) (blobIdent, error) {
return namer, nil
}
func populateEpoch(namer blobIdent, dir string) (blobIdent, error) {
epoch, err := epochFromPath(dir)
if err != nil {
return namer, err
}
namer.epoch = epoch
return namer, nil
}
func populateRoot(namer blobIdent, dir string) (blobIdent, error) {
root, err := rootFromPath(dir)
if err != nil {
return namer, err
}
namer.root = root
return namer, nil
}
func populateIndex(namer blobIdent, fname string) (blobIdent, error) {
idx, err := idxFromPath(fname)
if err != nil {
return namer, err
}
namer.index = idx
return namer, nil
}
// readSlotOncePerRoot lazily reads the slot from at most one sidecar file per root
// directory: all sidecars under the same root share a slot, so the first read is
// cached and reused for the remaining indices.
type readSlotOncePerRoot struct {
fs afero.Fs
lastRoot [32]byte
epoch primitives.Epoch
}
func (l *readSlotOncePerRoot) populateIdent(ident blobIdent, fname string) (blobIdent, error) {
ident, err := populateIndex(ident, fname)
if err != nil {
return ident, err
}
if ident.root != l.lastRoot {
slot, err := slotFromFile(fname, l.fs)
if err != nil {
return ident, err
}
l.lastRoot = ident.root
l.epoch = slots.ToEpoch(slot)
}
ident.epoch = l.epoch
return ident, nil
}
func epochFromPath(p string) (primitives.Epoch, error) {
subdir := filepath.Base(p)
epoch, err := strconv.ParseUint(subdir, 10, 64)
if err != nil {
return 0, errors.Wrapf(errInvalidDirectoryLayout,
"failed to decode epoch as uint, err=%s, dir=%s", err.Error(), p)
}
return primitives.Epoch(epoch), nil
}
func periodFromPath(p string) (uint64, error) {
subdir := filepath.Base(p)
period, err := strconv.ParseUint(subdir, 10, 64)
if err != nil {
return 0, errors.Wrapf(errInvalidDirectoryLayout,
"failed to decode period from path as uint, err=%s, dir=%s", err.Error(), p)
}
return period, nil
}
func rootFromPath(p string) ([32]byte, error) {
subdir := filepath.Base(p)
root, err := stringToRoot(subdir)
if err != nil {
return root, errors.Wrapf(err, "invalid directory, could not parse subdir as root %s", p)
}
return root, nil
}
func idxFromPath(p string) (uint64, error) {
p = path.Base(p)
if !isSszFile(p) {
return 0, errors.Wrap(errNotBlobSSZ, "does not have .ssz extension")
}
parts := strings.Split(p, ".")
if len(parts) != 2 {
return 0, errors.Wrap(errNotBlobSSZ, "unexpected filename structure (want <index>.ssz)")
}
idx, err := strconv.ParseUint(parts[0], 10, 64)
if err != nil {
return 0, err
}
if idx >= fieldparams.MaxBlobsPerBlock {
return 0, errors.Wrapf(errIndexOutOfBounds, "index=%d", idx)
}
return idx, nil
}
// Read slot from marshaled BlobSidecar data in the given file. See slotFromBlob for details.
func slotFromFile(name string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(name)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the slot from ssz-encoded BlobSidecar data at a fixed offset
// (8 + 131072 + 48 + 48 = 131176 bytes), the combined size of the fields that
// precede the slot within the serialized sidecar and its SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
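// For reference, the offset above decomposes as follows (a sketch of the deneb
// BlobSidecar wire layout; every field before the slot is fixed-size, so the
// slot always lands at the same byte):
const (
	sidecarIndexSize      = 8      // index: uint64
	sidecarBlobSize       = 131072 // blob: 4096 field elements * 32 bytes
	sidecarCommitmentSize = 48     // kzg_commitment: compressed G1 point
	sidecarProofSize      = 48     // kzg_proof: compressed G1 point
	// slot is the first field of the header inside signed_block_header.
	sidecarSlotOffset = sidecarIndexSize + sidecarBlobSize + sidecarCommitmentSize + sidecarProofSize // 131176
)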
func filterNoop(_ string) bool {
return true
}
func isRootDir(p string) bool {
dir := filepath.Base(p)
return len(dir) == rootStringLen && strings.HasPrefix(dir, "0x")
}
func isSszFile(s string) bool {
return filepath.Ext(s) == "."+sszExt
}
func isBeforeEpoch(before primitives.Epoch) func(string) bool {
if before == 0 {
return filterNoop
}
return func(p string) bool {
epoch, err := epochFromPath(p)
if err != nil {
return false
}
return epoch < before
}
}
func isBeforePeriod(before primitives.Epoch) func(string) bool {
if before == 0 {
return filterNoop
}
beforePeriod := periodForEpoch(before)
if before%params.BeaconConfig().MinEpochsForBlobsSidecarsRequest != 0 {
// Include the period containing `before` itself, unless `before` is the first
// epoch of its period, in which case every earlier period is strictly older.
beforePeriod += 1
}
return func(p string) bool {
period, err := periodFromPath(p)
if err != nil {
return false
}
return primitives.Epoch(period) < beforePeriod
}
}
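// Worked boundary examples for isBeforePeriod, assuming the mainnet period
// length of 4096 epochs:
//   before = 4096: 4096 is the first epoch of period 1, so only period 0 can
//     hold strictly older epochs (periods < 1 pass the filter).
//   before = 5000: period 1 spans epochs 4096-8191 and contains epochs below
//     5000, so the cutoff is bumped to periods < 2 and the per-epoch filter
//     decides within period 1.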
func rootToString(root [32]byte) string {
return fmt.Sprintf("%#x", root)
}
func stringToRoot(str string) ([32]byte, error) {
if len(str) != rootStringLen {
return [32]byte{}, errors.Wrapf(errInvalidRootString, "incorrect len for input=%s", str)
}
slice, err := hexutil.Decode(str)
if err != nil {
return [32]byte{}, errors.Wrapf(errInvalidRootString, "input=%s", str)
}
return bytesutil.ToBytes32(slice), nil
}
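// Round-trip sketch (illustrative, not part of the API): rootToString always
// produces the 66-character 0x-prefixed form that stringToRoot and isRootDir expect.
func rootRoundTripExample() error {
	root := [32]byte{0xff, 0xff}
	s := rootToString(root) // 66 chars, e.g. "0xffff00...00"
	parsed, err := stringToRoot(s)
	if err != nil {
		return err
	}
	if parsed != root {
		return errors.New("round trip mismatch")
	}
	return nil
}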

View File

@@ -0,0 +1,242 @@
package filesystem
import (
"bytes"
"fmt"
"math"
"os"
"path"
"sort"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/spf13/afero"
)
func TestRootFromDir(t *testing.T) {
cases := []struct {
name string
dir string
err error
root [32]byte
}{
{
name: "happy path",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
root: [32]byte{255, 255, 135, 94, 29, 152, 92, 92, 203, 33, 72, 148, 152, 63, 36, 40,
237, 178, 113, 240, 248, 123, 104, 186, 112, 16, 228, 169, 157, 243, 181, 203},
},
{
name: "too short",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5c",
err: errInvalidRootString,
},
{
name: "too log",
dir: "0xffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cbb",
err: errInvalidRootString,
},
{
name: "missing prefix",
dir: "ffff875e1d985c5ccb214894983f2428edb271f0f87b68ba7010e4a99df3b5cb",
err: errInvalidRootString,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
root, err := stringToRoot(c.dir)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.root, root)
})
}
}
func TestSlotFromFile(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := verification.FakeVerifyForTest(t, sidecars[0])
require.NoError(t, bs.Save(sc))
namer := identForSidecar(sc)
sszPath := bs.layout.sszPath(namer)
slot, err := slotFromFile(sszPath, fs)
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
type dirFiles struct {
name string
isDir bool
children []dirFiles
}
func (df dirFiles) reify(t *testing.T, fs afero.Fs, base string) {
fullPath := path.Join(base, df.name)
if df.isDir {
if df.name != "" {
require.NoError(t, fs.Mkdir(fullPath, directoryPermissions))
}
for _, c := range df.children {
c.reify(t, fs, fullPath)
}
} else {
fp, err := fs.Create(fullPath)
require.NoError(t, err)
_, err = fp.WriteString("derp")
require.NoError(t, err)
}
}
func (df dirFiles) childNames() []string {
cn := make([]string, len(df.children))
for i := range df.children {
cn[i] = df.children[i].name
}
return cn
}
func TestListDir(t *testing.T) {
fs := afero.NewMemMapFs()
rootStrs := []string{
"0x0023dc5d063c7c1b37016bb54963c6ff4bfe5dfdf6dac29e7ceeb2b8fa81ed7a",
"0xff30526cd634a5af3a09cc9bff67f33a621fc5b975750bb4432f74df077554b4",
"0x23f5f795aaeb78c01fadaf3d06da2e99bd4b3622ae4dfea61b05b7d9adb119c2",
}
// parent directory
tree := dirFiles{isDir: true}
// break out each subdir for easier assertions
notABlob := dirFiles{name: "notABlob", isDir: true}
childlessBlob := dirFiles{name: rootStrs[0], isDir: true}
blobWithSsz := dirFiles{name: rootStrs[1], isDir: true,
children: []dirFiles{{name: "1.ssz"}, {name: "2.ssz"}},
}
blobWithSszAndTmp := dirFiles{name: rootStrs[2], isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
tree.children = append(tree.children,
notABlob, childlessBlob, blobWithSsz, blobWithSszAndTmp)
topChildren := make([]string, len(tree.children))
for i := range tree.children {
topChildren[i] = tree.children[i].name
}
var filter = func(entries []string, filt func(string) bool) []string {
filtered := make([]string, 0, len(entries))
for i := range entries {
if filt(entries[i]) {
filtered = append(filtered, entries[i])
}
}
return filtered
}
tree.reify(t, fs, "")
cases := []struct {
name string
dirPath string
expected []string
filter func(string) bool
err error
}{
{
name: "non-existent",
dirPath: "derp",
expected: []string{},
err: os.ErrNotExist,
},
{
name: "empty",
dirPath: childlessBlob.name,
expected: []string{},
},
{
name: "top",
dirPath: ".",
expected: topChildren,
},
{
name: "custom filter: only notABlob",
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
return s == notABlob.name
},
},
{
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: isRootDir,
},
{
name: "ssz filter",
dirPath: blobWithSsz.name,
expected: blobWithSsz.childNames(),
filter: isSszFile,
},
{
name: "ssz mixed filter",
dirPath: blobWithSszAndTmp.name,
expected: []string{"5.ssz"},
filter: isSszFile,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
result, err := listDir(fs, c.dirPath)
if c.filter != nil {
result = filter(result, c.filter)
}
if c.err != nil {
require.ErrorIs(t, err, c.err)
require.Equal(t, 0, len(result))
} else {
require.NoError(t, err)
sort.Strings(c.expected)
sort.Strings(result)
require.DeepEqual(t, c.expected, result)
}
})
}
}
func TestSlotFromBlob(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := sidecars[0]
enc, err := sc.MarshalSSZ()
require.NoError(t, err)
slot, err := slotFromBlob(bytes.NewReader(enc))
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}

View File

@@ -0,0 +1,330 @@
package filesystem
import (
"fmt"
"io"
"path"
"path/filepath"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const (
rootPrefixLen = 4
// Full root in directory will be 66 chars, eg:
// >>> len('0x0002fb4db510b8618b04dc82d023793739c26346a8b02eb73482e24b0fec0555') == 66
rootStringLen = 66
sszExt = "ssz"
partExt = "part"
periodicEpochBaseDir = "by-epoch"
hexPrefixBaseDir = "by-hex-prefix"
)
var (
errMigrationFailure = errors.New("unable to migrate blob directory between old and new layout")
errCacheWarmFailed = errors.New("failed to warm blob filesystem cache")
errPruneFailed = errors.New("failed to prune root")
errInvalidRootString = errors.New("Could not parse hex string as a [32]byte")
errInvalidDirectoryLayout = errors.New("Could not parse blob directory path")
)
type migratableLayout interface {
dir(n blobIdent) string
sszPath(n blobIdent) string
partPath(n blobIdent, entropy string) string
iterateIdents(before primitives.Epoch) (*identIterator, error)
}
type runtimeLayout interface {
migratableLayout
ident(root [32]byte, idx uint64) (blobIdent, error)
dirIdent(root [32]byte) (blobIdent, error)
summary(root [32]byte) BlobStorageSummary
notify(ident blobIdent) error
pruneBefore(before primitives.Epoch) (*pruneSummary, error)
remove(ident blobIdent) (int, error)
}
func warmCache(l runtimeLayout, cache *blobStorageCache) error {
iter, err := l.iterateIdents(0)
if err != nil {
return errors.Wrap(errCacheWarmFailed, err.Error())
}
for ident, err := iter.next(); err != io.EOF; ident, err = iter.next() {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to cache blob data for path")
}
continue
}
if err != nil {
return errors.Wrapf(errCacheWarmFailed, "failed to populate blob data cache err=%s", err.Error())
}
if err := cache.ensure(ident.root, ident.epoch, ident.index); err != nil {
return errors.Wrapf(errCacheWarmFailed, "failed to write cache entry for %s, err=%s", l.sszPath(ident), err.Error())
}
}
return nil
}
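// Startup wiring sketch (hypothetical; the migration tests later in this
// package follow the same order): migrate any legacy flat-layout blobs first,
// then warm the cache from the final layout.
//
//	from := &flatRootLayout{fs: fs}
//	if err := migrateLayout(fs, from, layout, cache); err != nil { ... }
//	if err := warmCache(layout, cache); err != nil { ... }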
func migrateLayout(fs afero.Fs, from, to migratableLayout, cache *blobStorageCache) error {
start := time.Now()
iter, err := from.iterateIdents(0)
if err != nil {
return errors.Wrapf(errMigrationFailure, "failed to iterate legacy structure while migrating blobs, err=%s", err.Error())
}
lastMoved := ""
parentDirs := make(map[string]bool) // this map should have < 65k keys by design
moved := 0
for ident, err := iter.next(); err != io.EOF; ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to migrate blob path")
}
continue
}
return errors.Wrapf(errMigrationFailure, "failed to iterate legacy structure while migrating blobs, err=%s", err.Error())
}
src := from.dir(ident)
target := to.dir(ident)
if src != lastMoved {
targetParent := filepath.Dir(target)
if targetParent != "" && targetParent != "." && !parentDirs[targetParent] {
if err := fs.MkdirAll(targetParent, directoryPermissions); err != nil {
return errors.Wrapf(errMigrationFailure, "failed to make enclosing path before moving %s to %s", src, target)
}
parentDirs[targetParent] = true
}
if err := fs.Rename(src, target); err != nil {
return errors.Wrapf(errMigrationFailure, "could not rename %s to %s", src, target)
}
moved += 1
lastMoved = src
}
if err := cache.ensure(ident.root, ident.epoch, ident.index); err != nil {
return errors.Wrapf(errMigrationFailure, "could not cache path %s, err=%s", to.sszPath(ident), err.Error())
}
}
if moved > 0 {
log.WithField("dirsMoved", moved).WithField("elapsed", time.Since(start)).
Info("Blob filesystem migration complete.")
}
return nil
}
type blobIdent struct {
root [32]byte
epoch primitives.Epoch
index uint64
}
func newBlobIdent(root [32]byte, epoch primitives.Epoch, index uint64) blobIdent {
return blobIdent{root: root, epoch: epoch, index: index}
}
func identForSidecar(sc blocks.VerifiedROBlob) blobIdent {
return newBlobIdent(sc.BlockRoot(), slots.ToEpoch(sc.Slot()), sc.Index)
}
func (n blobIdent) sszFname() string {
return fmt.Sprintf("%d.%s", n.index, sszExt)
}
func (n blobIdent) partFname(entropy string) string {
return fmt.Sprintf("%s-%d.%s", entropy, n.index, partExt)
}
func (n blobIdent) logFields() logrus.Fields {
return logrus.Fields{
"root": fmt.Sprintf("%#x", n.root),
"epoch": n.epoch,
"index": n.index,
}
}
type pruneSummary struct {
blobsPruned int
failedRemovals []string
}
func (s pruneSummary) LogFields() logrus.Fields {
return logrus.Fields{
"blobsPruned": s.blobsPruned,
"failedRemovals": len(s.failedRemovals),
}
}
func newPeriodicEpochLayout(fs afero.Fs, cache *blobStorageCache, pruner *blobPruner) (*periodicEpochLayout, error) {
l := &periodicEpochLayout{fs: fs, cache: cache, pruner: pruner}
if err := l.initialize(); err != nil {
return nil, err
}
return l, nil
}
var _ migratableLayout = &flatRootLayout{}
var _ runtimeLayout = &periodicEpochLayout{}
type periodicEpochLayout struct {
fs afero.Fs
cache *blobStorageCache
pruner *blobPruner
}
func (l *periodicEpochLayout) notify(ident blobIdent) error {
if err := l.cache.ensure(ident.root, ident.epoch, ident.index); err != nil {
return err
}
l.pruner.notify(ident.epoch, l)
return nil
}
func (l *periodicEpochLayout) initialize() error {
return l.fs.MkdirAll(periodicEpochBaseDir, directoryPermissions)
}
// If before == 0, it won't be used as a filter and all idents will be returned.
func (l *periodicEpochLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
// Iterate the base directory, whose immediate children are directories named by period.
entries, err := listDir(l.fs, periodicEpochBaseDir)
if err != nil {
return nil, errors.Wrapf(err, "failed to list %s", periodicEpochBaseDir)
}
return &identIterator{
fs: l.fs,
path: periodicEpochBaseDir,
levels: []layoutLevel{
{populateIdent: populateNoop, filter: isBeforePeriod(before)},
{populateIdent: populateEpoch, filter: isBeforeEpoch(before)},
{populateIdent: populateRoot, filter: isRootDir}, // extract root from path
{populateIdent: populateIndex, filter: isSszFile}, // extract index from filename
},
entries: entries,
}, nil
}
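// The directory tree walked by iterateIdents, level by level; the sample path
// mirrors the migration tests in this package:
//
//	by-epoch/            <- base dir; entries are period directories
//	  4096/              <- level 0: period     (filter: isBeforePeriod)
//	    16777216/        <- level 1: epoch      (filter: isBeforeEpoch)
//	      0x2326...c19f/ <- level 2: block root (filter: isRootDir)
//	        0.ssz        <- level 3: index file (filter: isSszFile)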
func (l *periodicEpochLayout) ident(root [32]byte, idx uint64) (blobIdent, error) {
return l.cache.identForIdx(root, idx)
}
func (l *periodicEpochLayout) dirIdent(root [32]byte) (blobIdent, error) {
return l.cache.identForRoot(root)
}
func (l *periodicEpochLayout) summary(root [32]byte) BlobStorageSummary {
return l.cache.Summary(root)
}
func (l *periodicEpochLayout) dir(n blobIdent) string {
return filepath.Join(l.epochDir(n.epoch), rootToString(n.root))
}
func (l *periodicEpochLayout) epochDir(epoch primitives.Epoch) string {
return filepath.Join(periodicEpochBaseDir, fmt.Sprintf("%d", periodForEpoch(epoch)), fmt.Sprintf("%d", epoch))
}
func periodForEpoch(epoch primitives.Epoch) primitives.Epoch {
return epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
}
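// Illustrative mapping with the mainnet MinEpochsForBlobsSidecarsRequest of
// 4096 (the same values exercised by the migration tests in this package):
//
//	periodForEpoch(1234)     == 0    -> by-epoch/0/1234/<root>/<index>.ssz
//	periodForEpoch(5330)     == 1    -> by-epoch/1/5330/<root>/<index>.ssz
//	periodForEpoch(16777216) == 4096 -> by-epoch/4096/16777216/<root>/<index>.ssz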
func (l *periodicEpochLayout) sszPath(n blobIdent) string {
return filepath.Join(l.dir(n), n.sszFname())
}
func (l *periodicEpochLayout) partPath(n blobIdent, entropy string) string {
return path.Join(l.dir(n), n.partFname(entropy))
}
func (l *periodicEpochLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
sums := make(map[primitives.Epoch]*pruneSummary)
iter, err := l.iterateIdents(before)
if err != nil {
return nil, errors.Wrap(errPruneFailed, err.Error())
}
rollup := &pruneSummary{}
for ident, err := iter.next(); err != io.EOF; ident, err = iter.next() {
if err != nil {
if errors.Is(err, errIdentFailure) {
idf := &identificationError{}
if errors.As(err, &idf) {
log.WithFields(idf.LogFields()).WithError(err).Error("Failed to prune blob path due to identification errors")
}
continue
}
log.WithError(err).Error("encountered unhandled error during pruning")
return nil, errors.Wrap(errPruneFailed, err.Error())
}
_, ok := sums[ident.epoch]
if !ok {
sums[ident.epoch] = &pruneSummary{}
}
s := sums[ident.epoch]
removed, err := l.remove(ident)
if err != nil {
s.failedRemovals = append(s.failedRemovals, l.dir(ident))
log.WithError(err).WithField("root", fmt.Sprintf("%#x", ident.root)).Error("Failed to delete blob directory for root")
}
s.blobsPruned += removed
}
// Roll up summaries and clean up per-epoch directories.
for epoch, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
rmdir := l.epochDir(epoch)
if len(sum.failedRemovals) == 0 {
if err := l.fs.Remove(rmdir); err != nil {
log.WithField("dir", rmdir).WithError(err).Error("Failed to remove epoch directory while pruning")
}
} else {
log.WithField("dir", rmdir).WithField("numFailed", len(sum.failedRemovals)).Error("Unable to remove epoch directory due to pruning failures")
}
}
return rollup, nil
}
func (l *periodicEpochLayout) remove(ident blobIdent) (int, error) {
removed := l.cache.evict(ident.root)
if err := l.fs.RemoveAll(l.dir(ident)); err != nil {
return removed, err
}
return removed, nil
}
type flatRootLayout struct {
fs afero.Fs
}
func (l *flatRootLayout) iterateIdents(_ primitives.Epoch) (*identIterator, error) {
entries, err := listDir(l.fs, ".")
if err != nil {
return nil, errors.Wrapf(err, "could not list root directory")
}
slotAndIndex := &readSlotOncePerRoot{fs: l.fs}
return &identIterator{
fs: l.fs,
levels: []layoutLevel{
{populateIdent: populateRoot, filter: isRootDir},
{populateIdent: slotAndIndex.populateIdent, filter: isSszFile}},
entries: entries,
}, nil
}
func (l *flatRootLayout) dir(n blobIdent) string {
return rootToString(n.root)
}
func (l *flatRootLayout) sszPath(n blobIdent) string {
return path.Join(l.dir(n), n.sszFname())
}
func (l *flatRootLayout) partPath(n blobIdent, entropy string) string {
return path.Join(l.dir(n), n.partFname(entropy))
}
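// Traversal sketch over the legacy flat layout, mirroring the loop shape used
// by warmCache and migrateLayout above (illustrative, not part of the API):
func printIdents(fs afero.Fs) error {
	l := &flatRootLayout{fs: fs}
	iter, err := l.iterateIdents(0) // 0 disables the before-epoch filter
	if err != nil {
		return err
	}
	for ident, err := iter.next(); err != io.EOF; ident, err = iter.next() {
		if err != nil {
			if errors.Is(err, errIdentFailure) {
				continue // skip unparseable paths, as the callers above do
			}
			return err
		}
		fmt.Printf("%#x index=%d epoch=%d\n", ident.root, ident.index, ident.epoch)
	}
	return nil
}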

View File

@@ -0,0 +1,52 @@
package filesystem
import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
type mockLayout struct {
pruneBeforeFunc func(primitives.Epoch) (*pruneSummary, error)
}
func (m *mockLayout) dir(n blobIdent) string {
return ""
}
func (m *mockLayout) sszPath(n blobIdent) string {
return ""
}
func (m *mockLayout) partPath(n blobIdent, entropy string) string {
return ""
}
func (m *mockLayout) iterateIdents(before primitives.Epoch) (*identIterator, error) {
return nil, nil
}
func (m *mockLayout) ident(root [32]byte, idx uint64) (blobIdent, error) {
return blobIdent{}, nil
}
func (m *mockLayout) dirIdent(root [32]byte) (blobIdent, error) {
return blobIdent{}, nil
}
func (m *mockLayout) summary(root [32]byte) BlobStorageSummary {
return BlobStorageSummary{}
}
func (m *mockLayout) notify(ident blobIdent) error {
return nil
}
func (m *mockLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
return m.pruneBeforeFunc(before)
}
func (m *mockLayout) remove(ident blobIdent) (int, error) {
return 0, nil
}
var _ runtimeLayout = &mockLayout{}

View File

@@ -0,0 +1,194 @@
package filesystem
import (
"os"
"path/filepath"
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
func testSetupPaths(t *testing.T, fs afero.Fs, paths []migrateBeforeAfter) {
for _, ba := range paths {
slot, err := slots.EpochStart(ba.epoch)
require.NoError(t, err)
slot += ba.slotOffset
_, sc := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
scb, err := sc[0].MarshalSSZ()
require.NoError(t, err)
p := ba.before
dir := filepath.Dir(p)
require.NoError(t, fs.MkdirAll(dir, directoryPermissions))
require.NoError(t, afero.WriteFile(fs, p, scb, 0666))
_, err = fs.Stat(ba.before)
require.NoError(t, err)
}
}
func testAssertNewPaths(t *testing.T, fs afero.Fs, bs *BlobStorage, paths []migrateBeforeAfter) {
for _, ba := range paths {
if ba.before != ba.after {
_, err := fs.Stat(ba.before)
require.ErrorIs(t, err, os.ErrNotExist)
dir := filepath.Dir(ba.before)
_, err = listDir(fs, dir)
require.ErrorIs(t, err, os.ErrNotExist)
}
_, err := fs.Stat(ba.after)
require.NoError(t, err)
root, err := stringToRoot(ba.root)
require.NoError(t, err)
namer, err := bs.layout.ident(root, ba.index)
require.NoError(t, err)
path := bs.layout.sszPath(namer)
require.Equal(t, ba.after, path)
}
}
type migrateBeforeAfter struct {
before string
after string
epoch primitives.Epoch
slotOffset primitives.Slot
index uint64
root string
}
func TestPeriodicEpochMigrator(t *testing.T) {
cases := []struct {
name string
plan []migrateBeforeAfter
err error
}{
{
name: "happy path",
plan: []migrateBeforeAfter{
{
before: "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
epoch: 1234,
slotOffset: 0,
root: "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b",
index: 0,
after: periodicEpochBaseDir + "/0/1234/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
},
{
before: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
root: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86",
index: 0,
epoch: 5330,
slotOffset: 0,
after: periodicEpochBaseDir + "/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
},
{
before: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
root: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86",
index: 1,
epoch: 5330,
slotOffset: 31,
after: periodicEpochBaseDir + "/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
},
{
before: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
root: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c",
index: 0,
epoch: 16777216,
slotOffset: 16,
after: periodicEpochBaseDir + "/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
},
},
{
name: "mix old and new",
plan: []migrateBeforeAfter{
{
before: "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
root: "0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b",
index: 0,
epoch: 1234,
slotOffset: 0,
after: periodicEpochBaseDir + "/0/1234/0x0125e54c64c925018c9296965a5b622d9f5ab626c10917860dcfb6aa09a0a00b/0.ssz",
},
{
before: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
root: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86",
index: 0,
epoch: 5330,
slotOffset: 0,
after: periodicEpochBaseDir + "/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/0.ssz",
},
{
before: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
root: "0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86",
index: 1,
epoch: 5330,
slotOffset: 31,
after: periodicEpochBaseDir + "/1/5330/0x0127dba6fd30fdbb47e73e861d5c6e602b38ac3ddc945bb6a2fc4e10761e9a86/1.ssz",
},
{
before: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
root: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c",
index: 0,
epoch: 16777216,
slotOffset: 16,
after: periodicEpochBaseDir + "/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
{
before: periodicEpochBaseDir + "/4096/16777217/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
root: "0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba",
index: 0,
epoch: 16777217,
slotOffset: 16,
after: periodicEpochBaseDir + "/4096/16777217/0x42eabe3d2c125410cd226de6f2825fb7575ab896c3f52e43de1fa29e4c809aba/0.ssz",
},
{
before: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
root: "0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c",
index: 0,
epoch: 16777216,
slotOffset: 16,
after: periodicEpochBaseDir + "/4096/16777216/0x0232521756a0b965eab2c2245d7ad85feaeaf5f427cd14d1a7531f9d555b415c/0.ssz",
},
{
before: periodicEpochBaseDir + "/4096/16777216/0x2326de064f828c564740da17fc247b30d7e7300da24b0aae39a0c91791acc19f/0.ssz",
root: "0x2326de064f828c564740da17fc247b30d7e7300da24b0aae39a0c91791acc19f",
index: 0,
epoch: 16777216,
slotOffset: 31,
after: periodicEpochBaseDir + "/4096/16777216/0x2326de064f828c564740da17fc247b30d7e7300da24b0aae39a0c91791acc19f/0.ssz",
},
{
before: periodicEpochBaseDir + "/2/11235/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
root: "0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d",
index: 1,
epoch: 11235,
slotOffset: 0,
after: periodicEpochBaseDir + "/2/11235/0x666cea5034e22bd3b849cb33914cad59afd88ee08e4d5bc0e997411c945fbc1d/1.ssz",
},
},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
from := &flatRootLayout{fs: fs}
cache := newBlobStorageCache()
pruner := newBlobPruner(bs.retentionEpochs)
to, err := newPeriodicEpochLayout(fs, cache, pruner)
require.NoError(t, err)
testSetupPaths(t, fs, c.plan)
err = migrateLayout(fs, from, to, cache)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.NoError(t, warmCache(bs.layout, bs.cache))
testAssertNewPaths(t, fs, bs, c.plan)
})
}
}

View File

@@ -0,0 +1,73 @@
package filesystem
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(t testing.TB) *BlobStorage {
return NewEphemeralBlobStorageUsingFs(t, afero.NewMemMapFs())
}
// NewEphemeralBlobStorageAndFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage api.
func NewEphemeralBlobStorageAndFs(t testing.TB) (afero.Fs, *BlobStorage) {
fs := afero.NewMemMapFs()
bs := NewEphemeralBlobStorageUsingFs(t, fs)
return fs, bs
}
func NewEphemeralBlobStorageUsingFs(t testing.TB, fs afero.Fs) *BlobStorage {
opts := []BlobStorageOption{
WithBlobRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithFs(fs),
}
bs, err := NewBlobStorage(opts...)
if err != nil {
t.Fatalf("error initializing test BlobStorage, err=%s", err.Error())
}
bs.WarmCache()
return bs
}
type BlobMocker struct {
fs afero.Fs
bs *BlobStorage
}
// CreateFakeIndices creates empty blob sidecar files at the expected path for the given
// root and indices to influence the result of Indices().
func (bm *BlobMocker) CreateFakeIndices(root [32]byte, slot primitives.Slot, indices ...uint64) error {
for i := range indices {
if err := bm.bs.layout.notify(newBlobIdent(root, slots.ToEpoch(slot), indices[i])); err != nil {
return err
}
}
return nil
}
// NewEphemeralBlobStorageWithMocker returns a *BlobMocker value in addition to the BlobStorage value.
// BlobMocker encapsulates blob path construction to avoid leaking implementation details.
func NewEphemeralBlobStorageWithMocker(t testing.TB) (*BlobMocker, *BlobStorage) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
return &BlobMocker{fs: fs, bs: bs}, bs
}
func NewMockBlobStorageSummarizer(t *testing.T, set map[[32]byte][]int) BlobStorageSummarizer {
c := newBlobStorageCache()
for k, v := range set {
for i := range v {
if err := c.ensure(k, 0, uint64(v[i])); err != nil {
t.Fatal(err)
}
}
}
return c
}
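// Usage sketch for tests (the root value is arbitrary): pretend indices 0 and 1
// for a root are already on disk, then assert on the resulting summary.
func exampleMockSummarizerUsage(t *testing.T) {
	root := [32]byte{0xfa}
	sum := NewMockBlobStorageSummarizer(t, map[[32]byte][]int{root: {0, 1}})
	if !sum.Summary(root).AllAvailable(2) {
		t.Fatal("expected indices 0 and 1 to be available")
	}
}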

View File

@@ -1,27 +1,16 @@
package filesystem
import (
"encoding/binary"
"io"
"path"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
const retentionBuffer primitives.Epoch = 2
const bytesPerSidecar = 131928
var (
errPruningFailures = errors.New("blobs could not be pruned for some roots")
@@ -29,311 +18,46 @@ var (
)
type blobPruner struct {
sync.Mutex
prunedBefore atomic.Uint64
windowSize primitives.Slot
slotMap *slotForRoot
fs afero.Fs
mu sync.Mutex
prunedBefore atomic.Uint64
retentionPeriod primitives.Epoch
}
func newBlobPruner(fs afero.Fs, retain primitives.Epoch) (*blobPruner, error) {
r, err := slots.EpochStart(retain + retentionBuffer)
if err != nil {
return nil, errors.Wrap(err, "could not set retentionSlots")
}
return &blobPruner{fs: fs, windowSize: r, slotMap: newSlotForRoot()}, nil
func newBlobPruner(retain primitives.Epoch) *blobPruner {
p := &blobPruner{retentionPeriod: retain + retentionBuffer}
return p
}
// notify updates the pruner's view of root->blob mappings. This allows the pruner to build a cache
// of root->slot mappings and decide when to evict old blobs based on the age of present blobs.
func (p *blobPruner) notify(root [32]byte, latest primitives.Slot, idx uint64) error {
if err := p.slotMap.ensure(rootString(root), latest, idx); err != nil {
return err
}
pruned := uint64(windowMin(latest, p.windowSize))
if p.prunedBefore.Swap(pruned) == pruned {
return nil
func (p *blobPruner) notify(latest primitives.Epoch, layout runtimeLayout) chan struct{} {
done := make(chan struct{})
floor := periodFloor(latest, p.retentionPeriod)
if primitives.Epoch(p.prunedBefore.Swap(uint64(floor))) >= floor {
// A previous call already pruned at or beyond this floor; skip and signal completion immediately.
close(done)
return done
}
go func() {
if err := p.prune(primitives.Slot(pruned)); err != nil {
log.WithError(err).Errorf("Failed to prune blobs from slot %d", latest)
p.mu.Lock()
start := time.Now()
defer p.mu.Unlock()
sum, err := layout.pruneBefore(floor)
if err != nil {
if sum == nil {
sum = &pruneSummary{}
}
log.WithError(err).WithFields(sum.LogFields()).Warn("Encountered errors during blob pruning.")
}
log.WithFields(logrus.Fields{
"upToEpoch": floor,
"duration": time.Since(start).String(),
"filesRemoved": sum.blobsPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(sum.blobsPruned))
close(done)
}()
return nil
return done
}
func windowMin(latest primitives.Slot, offset primitives.Slot) primitives.Slot {
// Safely compute the first slot in the epoch for the latest slot
latest = latest - latest%params.BeaconConfig().SlotsPerEpoch
if latest < offset {
func periodFloor(latest, period primitives.Epoch) primitives.Epoch {
if latest < period {
return 0
}
return latest - offset
}
// Prune prunes blobs in the base directory based on the retention epoch.
// It deletes blobs older than currentEpoch - (retentionEpochs+bufferEpochs).
// This is so that we keep a slight buffer and blobs are deleted after n+2 epochs.
func (p *blobPruner) prune(pruneBefore primitives.Slot) error {
p.Lock()
defer p.Unlock()
start := time.Now()
totalPruned, totalErr := 0, 0
// Customize logging/metrics behavior for the initial cache warmup when slot=0.
// We'll never see a prune request for slot 0, unless this is the initial call to warm up the cache.
if pruneBefore == 0 {
defer func() {
log.WithField("duration", time.Since(start).String()).Debug("Warmed up pruner cache")
}()
} else {
defer func() {
log.WithFields(logrus.Fields{
"upToEpoch": slots.ToEpoch(pruneBefore),
"duration": time.Since(start).String(),
"filesRemoved": totalPruned,
}).Debug("Pruned old blobs")
blobsPrunedCounter.Add(float64(totalPruned))
}()
}
entries, err := listDir(p.fs, ".")
if err != nil {
return errors.Wrap(err, "unable to list root blobs directory")
}
dirs := filter(entries, filterRoot)
for _, dir := range dirs {
pruned, err := p.tryPruneDir(dir, pruneBefore)
if err != nil {
totalErr += 1
log.WithError(err).WithField("directory", dir).Error("Unable to prune directory")
}
totalPruned += pruned
}
if totalErr > 0 {
return errors.Wrapf(errPruningFailures, "pruning failed for %d root directories", totalErr)
}
return nil
}
func shouldRetain(slot, pruneBefore primitives.Slot) bool {
return slot >= pruneBefore
}
func (p *blobPruner) tryPruneDir(dir string, pruneBefore primitives.Slot) (int, error) {
root := rootFromDir(dir)
slot, slotCached := p.slotMap.slot(root)
// Return early if the slot is cached and doesn't need pruning.
if slotCached && shouldRetain(slot, pruneBefore) {
return 0, nil
}
// entries will include things that aren't ssz files, like dangling .part files. We need these to
// completely clean up the directory.
entries, err := listDir(p.fs, dir)
if err != nil {
return 0, errors.Wrapf(err, "failed to list blobs in directory %s", dir)
}
// scFiles filters the dir listing down to the ssz encoded BlobSidecar files. This allows us to peek
// at the first one in the list to figure out the slot.
scFiles := filter(entries, filterSsz)
if len(scFiles) == 0 {
log.WithField("dir", dir).Warn("Pruner ignoring directory with no blob files")
return 0, nil
}
if !slotCached {
slot, err = slotFromFile(path.Join(dir, scFiles[0]), p.fs)
if err != nil {
return 0, errors.Wrapf(err, "slot could not be read from blob file %s", scFiles[0])
}
for i := range scFiles {
idx, err := idxFromPath(scFiles[i])
if err != nil {
return 0, errors.Wrapf(err, "index could not be determined for blob file %s", scFiles[i])
}
if err := p.slotMap.ensure(root, slot, idx); err != nil {
return 0, errors.Wrapf(err, "could not update prune cache for blob file %s", scFiles[i])
}
}
if shouldRetain(slot, pruneBefore) {
return 0, nil
}
}
removed := 0
for _, fname := range entries {
fullName := path.Join(dir, fname)
if err := p.fs.Remove(fullName); err != nil {
return removed, errors.Wrapf(err, "unable to remove %s", fullName)
}
// Don't count other files that happen to be in the dir, like dangling .part files.
if filterSsz(fname) {
removed += 1
}
// Log a warning whenever we clean up a .part file
if filterPart(fullName) {
log.WithField("file", fullName).Warn("Deleting abandoned blob .part file")
}
}
if err := p.fs.Remove(dir); err != nil {
return removed, errors.Wrapf(err, "unable to remove blob directory %s", dir)
}
p.slotMap.evict(rootFromDir(dir))
return len(scFiles), nil
}
func idxFromPath(fname string) (uint64, error) {
fname = path.Base(fname)
if filepath.Ext(fname) != dotSszExt {
return 0, errors.Wrap(errNotBlobSSZ, "does not have .ssz extension")
}
parts := strings.Split(fname, ".")
if len(parts) != 2 {
return 0, errors.Wrap(errNotBlobSSZ, "unexpected filename structure (want <index>.ssz)")
}
return strconv.ParseUint(parts[0], 10, 64)
}
func rootFromDir(dir string) string {
return filepath.Base(dir) // end of the path should be the blob directory, named by hex encoding of root
}
// Read slot from marshaled BlobSidecar data in the given file. See slotFromBlob for details.
func slotFromFile(file string, fs afero.Fs) (primitives.Slot, error) {
f, err := fs.Open(file)
if err != nil {
return 0, err
}
defer func() {
if err := f.Close(); err != nil {
log.WithError(err).Errorf("Could not close blob file")
}
}()
return slotFromBlob(f)
}
// slotFromBlob reads the slot from ssz-encoded BlobSidecar data at a fixed offset
// (8 + 131072 + 48 + 48 = 131176 bytes), the combined size of the fields that
// precede the slot within the serialized sidecar and its SignedBeaconBlockHeader.
func slotFromBlob(at io.ReaderAt) (primitives.Slot, error) {
b := make([]byte, 8)
_, err := at.ReadAt(b, 131176)
if err != nil {
return 0, err
}
rawSlot := binary.LittleEndian.Uint64(b)
return primitives.Slot(rawSlot), nil
}
func listDir(fs afero.Fs, dir string) ([]string, error) {
top, err := fs.Open(dir)
if err != nil {
return nil, errors.Wrap(err, "failed to open directory descriptor")
}
defer func() {
if err := top.Close(); err != nil {
log.WithError(err).Errorf("Could not close file %s", dir)
}
}()
// re the -1 param: "If n <= 0, Readdirnames returns all the names from the directory in a single slice"
dirs, err := top.Readdirnames(-1)
if err != nil {
return nil, errors.Wrap(err, "failed to read directory listing")
}
return dirs, nil
}
func filter(entries []string, filt func(string) bool) []string {
filtered := make([]string, 0, len(entries))
for i := range entries {
if filt(entries[i]) {
filtered = append(filtered, entries[i])
}
}
return filtered
}
func filterRoot(s string) bool {
return strings.HasPrefix(s, "0x")
}
var dotSszExt = "." + sszExt
var dotPartExt = "." + partExt
func filterSsz(s string) bool {
return filepath.Ext(s) == dotSszExt
}
func filterPart(s string) bool {
return filepath.Ext(s) == dotPartExt
}
func newSlotForRoot() *slotForRoot {
return &slotForRoot{
cache: make(map[string]*slotCacheEntry, params.BeaconConfig().MinEpochsForBlobsSidecarsRequest*fieldparams.SlotsPerEpoch),
}
}
type slotCacheEntry struct {
slot primitives.Slot
mask [fieldparams.MaxBlobsPerBlock]bool
}
type slotForRoot struct {
sync.RWMutex
nBlobs float64
cache map[string]*slotCacheEntry
}
func (s *slotForRoot) updateMetrics(delta float64) {
s.nBlobs += delta
blobDiskCount.Set(s.nBlobs)
blobDiskSize.Set(s.nBlobs * bytesPerSidecar)
}
func (s *slotForRoot) ensure(key string, slot primitives.Slot, idx uint64) error {
if idx >= fieldparams.MaxBlobsPerBlock {
return errIndexOutOfBounds
}
s.Lock()
defer s.Unlock()
v, ok := s.cache[key]
if !ok {
v = &slotCacheEntry{}
}
v.slot = slot
if !v.mask[idx] {
s.updateMetrics(1)
}
v.mask[idx] = true
s.cache[key] = v
return nil
}
func (s *slotForRoot) slot(key string) (primitives.Slot, bool) {
s.RLock()
defer s.RUnlock()
v, ok := s.cache[key]
if !ok {
return 0, false
}
return v.slot, ok
}
func (s *slotForRoot) evict(key string) {
s.Lock()
defer s.Unlock()
v, ok := s.cache[key]
var deleted float64
if ok {
for i := range v.mask {
if v.mask[i] {
deleted += 1
}
}
s.updateMetrics(-deleted)
}
delete(s.cache, key)
return latest - period
}

View File

@@ -1,327 +1,196 @@
package filesystem
import (
"bytes"
"fmt"
"math"
"encoding/binary"
"os"
"path"
"sort"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/spf13/afero"
)
func TestTryPruneDir_CachedNotExpired(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
slot := pr.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, fieldparams.MaxBlobsPerBlock)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
root := fmt.Sprintf("%#x", sc.BlockRoot())
// This slot is right on the edge of what would need to be pruned, so by adding it to the cache and
// skipping any other test setup, we can be certain the hot cache path never touches the filesystem.
require.NoError(t, pr.slotMap.ensure(root, sc.Slot(), 0))
pruned, err := pr.tryPruneDir(root, pr.windowSize)
require.NoError(t, err)
require.Equal(t, 0, pruned)
type prunerScenario struct {
name string
prunedBefore primitives.Epoch
retentionPeriod primitives.Epoch
latest primitives.Epoch
expected pruneExpectation
}
func TestTryPruneDir_CachedExpired(t *testing.T) {
t.Run("empty directory", func(t *testing.T) {
fs := afero.NewMemMapFs()
pr, err := newBlobPruner(fs, 0)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
root := fmt.Sprintf("%#x", sc.BlockRoot())
require.NoError(t, fs.Mkdir(root, directoryPermissions)) // make empty directory
require.NoError(t, pr.slotMap.ensure(root, sc.Slot(), 0))
pruned, err := pr.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 0, pruned)
})
t.Run("blobs to delete", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, cok := bs.pruner.slotMap.slot(root)
require.Equal(t, true, cok)
require.Equal(t, slot, cs)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, root)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
type pruneExpectation struct {
called bool
arg primitives.Epoch
summary *pruneSummary
err error
}
func TestTryPruneDir_SlotFromFile(t *testing.T) {
t.Run("expired blobs deleted", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
var slot primitives.Slot = 0
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// check that the root->slot is cached
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
cs, ok := bs.pruner.slotMap.slot(root)
require.Equal(t, true, ok)
require.Equal(t, slot, cs)
// evict it from the cache so that we trigger the file read path
bs.pruner.slotMap.evict(root)
_, ok = bs.pruner.slotMap.slot(root)
require.Equal(t, false, ok)
// ensure that we see the saved files in the filesystem
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
pruned, err := bs.pruner.tryPruneDir(root, slot+1)
require.NoError(t, err)
require.Equal(t, 2, pruned)
files, err = listDir(fs, root)
require.ErrorIs(t, err, os.ErrNotExist)
require.Equal(t, 0, len(files))
})
t.Run("not expired, intact", func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
// Set slot equal to the window size, so it should be retained.
var slot primitives.Slot = bs.pruner.windowSize
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 2)
scs, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
require.NoError(t, bs.Save(scs[0]))
require.NoError(t, bs.Save(scs[1]))
// Evict slot mapping from the cache so that we trigger the file read path.
root := fmt.Sprintf("%#x", scs[0].BlockRoot())
bs.pruner.slotMap.evict(root)
_, ok := bs.pruner.slotMap.slot(root)
require.Equal(t, false, ok)
// Ensure that we see the saved files in the filesystem.
files, err := listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
// This should use the slotFromFile code (simulating restart).
// Setting pruneBefore == slot, so that the slot will be outside the window (at the boundary).
pruned, err := bs.pruner.tryPruneDir(root, slot)
require.NoError(t, err)
require.Equal(t, 0, pruned)
// Ensure files are still present.
files, err = listDir(fs, root)
require.NoError(t, err)
require.Equal(t, 2, len(files))
})
func (e *pruneExpectation) record(before primitives.Epoch) (*pruneSummary, error) {
e.called = true
e.arg = before
if e.summary == nil {
e.summary = &pruneSummary{}
}
return e.summary, e.err
}
func TestSlotFromBlob(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc := sidecars[0]
enc, err := sc.MarshalSSZ()
require.NoError(t, err)
slot, err := slotFromBlob(bytes.NewReader(enc))
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
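For context, slotFromBlob (exercised above but not shown in this hunk) is expected to read the slot straight out of the fixed-size SSZ layout of a deneb BlobSidecar rather than decoding the whole object. A minimal sketch under that assumption (imports io and encoding/binary; the offset 8 (index) + 131072 (blob) + 48 (kzg_commitment) + 48 (kzg_proof) = 131176 is illustrative, not authoritative):

// slotFromBlobSketch reads the little-endian slot of the embedded
// SignedBeaconBlockHeader from an SSZ-encoded BlobSidecar.
func slotFromBlobSketch(r io.ReaderAt) (primitives.Slot, error) {
	b := make([]byte, 8)
	// The slot is the first field of the embedded SignedBeaconBlockHeader,
	// which starts right after the fixed-size fields preceding it.
	if _, err := r.ReadAt(b, 131176); err != nil {
		return 0, err
	}
	return primitives.Slot(binary.LittleEndian.Uint64(b)), nil
}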
func TestSlotFromFile(t *testing.T) {
cases := []struct {
slot primitives.Slot
}{
{slot: 0},
{slot: 2},
{slot: 1123581321},
{slot: math.MaxUint64},
}
for _, c := range cases {
t.Run(fmt.Sprintf("slot %d", c.slot), func(t *testing.T) {
fs, bs, err := NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, c.slot, 1)
sc, err := verification.BlobSidecarNoop(sidecars[0])
require.NoError(t, err)
require.NoError(t, bs.Save(sc))
fname := namerForSidecar(sc)
sszPath := fname.path()
slot, err := slotFromFile(sszPath, fs)
require.NoError(t, err)
require.Equal(t, c.slot, slot)
})
}
}
type dirFiles struct {
name string
isDir bool
children []dirFiles
}
func (df dirFiles) reify(t *testing.T, fs afero.Fs, base string) {
fullPath := path.Join(base, df.name)
if df.isDir {
if df.name != "" {
require.NoError(t, fs.Mkdir(fullPath, directoryPermissions))
}
for _, c := range df.children {
c.reify(t, fs, fullPath)
}
} else {
fp, err := fs.Create(fullPath)
require.NoError(t, err)
_, err = fp.WriteString("derp")
require.NoError(t, err)
}
}
func (df dirFiles) childNames() []string {
cn := make([]string, len(df.children))
for i := range df.children {
cn[i] = df.children[i].name
}
return cn
}
func TestListDir(t *testing.T) {
fs := afero.NewMemMapFs()
// parent directory
fsLayout := dirFiles{isDir: true}
// break out each subdir for easier assertions
notABlob := dirFiles{name: "notABlob", isDir: true}
childlessBlob := dirFiles{name: "0x0987654321", isDir: true}
blobWithSsz := dirFiles{name: "0x1123581321", isDir: true,
children: []dirFiles{{name: "1.ssz"}, {name: "2.ssz"}},
}
blobWithSszAndTmp := dirFiles{name: "0x1234567890", isDir: true,
children: []dirFiles{{name: "5.ssz"}, {name: "0.part"}}}
fsLayout.children = append(fsLayout.children, notABlob)
fsLayout.children = append(fsLayout.children, childlessBlob)
fsLayout.children = append(fsLayout.children, blobWithSsz)
fsLayout.children = append(fsLayout.children, blobWithSszAndTmp)
topChildren := make([]string, len(fsLayout.children))
for i := range fsLayout.children {
topChildren[i] = fsLayout.children[i].name
}
fsLayout.reify(t, fs, "")
cases := []struct {
name string
dirPath string
expected []string
filter func(string) bool
err error
}{
{
name: "non-existent",
dirPath: "derp",
expected: []string{},
err: os.ErrNotExist,
},
{
name: "empty",
dirPath: childlessBlob.name,
expected: []string{},
},
{
name: "top",
dirPath: ".",
expected: topChildren,
},
{
name: "custom filter: only notABlob",
dirPath: ".",
expected: []string{notABlob.name},
filter: func(s string) bool {
return s == notABlob.name
},
},
{
name: "root filter",
dirPath: ".",
expected: []string{childlessBlob.name, blobWithSsz.name, blobWithSszAndTmp.name},
filter: filterRoot,
},
{
name: "ssz filter",
dirPath: blobWithSsz.name,
expected: blobWithSsz.childNames(),
filter: filterSsz,
},
{
name: "ssz mixed filter",
dirPath: blobWithSszAndTmp.name,
expected: []string{"5.ssz"},
filter: filterSsz,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
result, err := listDir(fs, c.dirPath)
if c.filter != nil {
result = filter(result, c.filter)
}
if c.err != nil {
require.ErrorIs(t, err, c.err)
require.Equal(t, 0, len(result))
} else {
require.NoError(t, err)
sort.Strings(c.expected)
sort.Strings(result)
require.DeepEqual(t, c.expected, result)
}
})
}
}
func TestPrunerNotify(t *testing.T) {
defaultRetention := params.BeaconConfig().MinEpochsForBlobsSidecarsRequest
cases := []prunerScenario{
{
name: "last epoch of period",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: defaultRetention + 11235,
expected: pruneExpectation{called: false},
},
{
name: "within period",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: 11235 + defaultRetention - 1,
expected: pruneExpectation{called: false},
},
{
name: "triggers",
retentionPeriod: defaultRetention,
prunedBefore: 11235,
latest: 11235 + 1 + defaultRetention,
expected: pruneExpectation{called: true, arg: 11235 + 1},
},
{
name: "from zero - before first period",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention - 1,
expected: pruneExpectation{called: false},
},
{
name: "from zero - at boundary",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention,
expected: pruneExpectation{called: false},
},
{
name: "from zero - triggers",
retentionPeriod: defaultRetention,
prunedBefore: 0,
latest: defaultRetention + 1,
expected: pruneExpectation{called: true, arg: 1},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
actual := &pruneExpectation{}
l := &mockLayout{pruneBeforeFunc: actual.record}
pruner := &blobPruner{retentionPeriod: c.retentionPeriod}
pruner.prunedBefore.Store(uint64(c.prunedBefore))
done := pruner.notify(c.latest, l)
<-done
require.Equal(t, c.expected.called, actual.called)
require.Equal(t, c.expected.arg, actual.arg)
})
}
}
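The mockLayout double used above is referenced but not defined in this hunk; a minimal sketch of what the test needs it to provide, assuming the layout dependency of notify is an interface with a pruneBefore method:

// mockLayout satisfies the layout dependency of blobPruner.notify by
// delegating pruneBefore to an injected function, letting tests record calls.
type mockLayout struct {
	pruneBeforeFunc func(before primitives.Epoch) (*pruneSummary, error)
}

func (m *mockLayout) pruneBefore(before primitives.Epoch) (*pruneSummary, error) {
	return m.pruneBeforeFunc(before)
}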
func testSetupBlobIdentPaths(t *testing.T, fs afero.Fs, bs *BlobStorage, idents []testIdent) []blobIdent {
created := make([]blobIdent, len(idents))
for i, id := range idents {
slot, err := slots.EpochStart(id.epoch)
require.NoError(t, err)
slot += id.offset
_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, slot, 1)
sc := verification.FakeVerifyForTest(t, scs[0])
require.NoError(t, bs.Save(sc))
ident := identForSidecar(sc)
_, err = fs.Stat(bs.layout.sszPath(ident))
require.NoError(t, err)
created[i] = ident
}
return created
}
func testAssertBlobsPruned(t *testing.T, fs afero.Fs, bs *BlobStorage, pruned, remain []blobIdent) {
for _, id := range pruned {
_, err := fs.Stat(bs.layout.sszPath(id))
require.Equal(t, true, os.IsNotExist(err))
}
for _, id := range remain {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NoError(t, err)
}
}
type testIdent struct {
blobIdent
offset primitives.Slot
}
func testRoots(n int) [][32]byte {
roots := make([][32]byte, n)
for i := range roots {
binary.LittleEndian.PutUint32(roots[i][:], uint32(1+i))
}
return roots
}
func TestLayoutPruneBefore(t *testing.T) {
roots := testRoots(10)
cases := []struct {
name string
pruned []testIdent
remain []testIdent
pruneBefore primitives.Epoch
err error
sum pruneSummary
}{
{
name: "none pruned",
pruneBefore: 1,
pruned: []testIdent{},
remain: []testIdent{
{offset: 1, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 1, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 0}},
},
},
{
name: "expected pruned before epoch",
pruneBefore: 3,
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 1, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[1], epoch: 1, index: 5}},
{offset: 0, blobIdent: blobIdent{root: roots[2], epoch: 2, index: 0}},
{offset: 31, blobIdent: blobIdent{root: roots[3], epoch: 2, index: 3}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[4], epoch: 3, index: 2}}, // boundary
{offset: 31, blobIdent: blobIdent{root: roots[5], epoch: 3, index: 0}}, // boundary
{offset: 0, blobIdent: blobIdent{root: roots[6], epoch: 4, index: 1}},
{offset: 31, blobIdent: blobIdent{root: roots[7], epoch: 4, index: 5}},
},
sum: pruneSummary{blobsPruned: 4},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t)
pruned := testSetupBlobIdentPaths(t, fs, bs, c.pruned)
remain := testSetupBlobIdentPaths(t, fs, bs, c.remain)
sum, err := bs.layout.pruneBefore(c.pruneBefore)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
testAssertBlobsPruned(t, fs, bs, pruned, remain)
require.Equal(t, c.sum.blobsPruned, sum.blobsPruned)
require.Equal(t, len(c.pruned), sum.blobsPruned)
require.Equal(t, len(c.sum.failedRemovals), len(sum.failedRemovals))
})
}
}

View File

@@ -118,7 +118,7 @@ type HeadAccessDatabase interface {
// SlasherDatabase interface for persisting data related to detecting slashable offenses on Ethereum.
type SlasherDatabase interface {
io.Closer
SaveLastEpochsWrittenForValidators(
SaveLastEpochWrittenForValidators(
ctx context.Context, epochByValidator map[primitives.ValidatorIndex]primitives.Epoch,
) error
SaveAttestationRecordsForValidators(

View File

@@ -20,6 +20,7 @@ go_library(
"migration.go",
"migration_archived_index.go",
"migration_block_slot_index.go",
"migration_finalized_parent.go",
"migration_state_validators.go",
"schema.go",
"state.go",

View File

@@ -14,6 +14,7 @@ var migrations = []migration{
migrateArchivedIndex,
migrateBlockSlotIndex,
migrateStateValidators,
migrateFinalizedParent,
}
// RunMigrations defined in the migrations array.

View File

@@ -0,0 +1,87 @@
package kv
import (
"bytes"
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
bolt "go.etcd.io/bbolt"
)
var migrationFinalizedParent = []byte("parent_bug_32fb183")
func migrateFinalizedParent(ctx context.Context, db *bolt.DB) error {
if updateErr := db.Update(func(tx *bolt.Tx) error {
mb := tx.Bucket(migrationsBucket)
if b := mb.Get(migrationFinalizedParent); bytes.Equal(b, migrationCompleted) {
return nil // Migration already completed.
}
bkt := tx.Bucket(finalizedBlockRootsIndexBucket)
if bkt == nil {
return fmt.Errorf("unable to read %s bucket for migration", finalizedBlockRootsIndexBucket)
}
bb := tx.Bucket(blocksBucket)
if bb == nil {
return fmt.Errorf("unable to read %s bucket for migration", blocksBucket)
}
c := bkt.Cursor()
var slotsWithoutBug primitives.Slot
maxBugSearch := params.BeaconConfig().SlotsPerEpoch * 10
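// Scanning backwards from the newest entry, stop once maxBugSearch
// consecutive healthy entries have been seen, on the assumption that
// older entries are unaffected.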
for k, v := c.Last(); k != nil; k, v = c.Prev() {
// check if context is cancelled in between
if ctx.Err() != nil {
return ctx.Err()
}
idxEntry := &ethpb.FinalizedBlockRootContainer{}
if err := decode(ctx, v, idxEntry); err != nil {
return errors.Wrapf(err, "unable to decode finalized block root container for root=%#x", k)
}
// Not one of the corrupt values
if !bytes.Equal(idxEntry.ParentRoot, k) {
slotsWithoutBug += 1
if slotsWithoutBug > maxBugSearch {
break
}
continue
}
slotsWithoutBug = 0
log.WithField("root", fmt.Sprintf("%#x", k)).Debug("found index entry with incorrect parent root")
// Look up full block to get the correct parent root.
encBlk := bb.Get(k)
if encBlk == nil {
return errors.Wrapf(ErrNotFound, "could not find block for corrupt finalized index entry %#x", k)
}
blk, err := unmarshalBlock(ctx, encBlk)
if err != nil {
return errors.Wrapf(err, "unable to decode block for root=%#x", k)
}
// Replace parent root in the index with the correct value and write it back.
pr := blk.Block().ParentRoot()
idxEntry.ParentRoot = pr[:]
idxEnc, err := encode(ctx, idxEntry)
if err != nil {
return errors.Wrapf(err, "failed to encode finalized index entry for root=%#x", k)
}
if err := bkt.Put(k, idxEnc); err != nil {
return errors.Wrapf(err, "failed to update finalized index entry for root=%#x", k)
}
log.WithField("root", fmt.Sprintf("%#x", k)).
WithField("parentRoot", fmt.Sprintf("%#x", idxEntry.ParentRoot)).
Debug("updated corrupt index entry with correct parent")
}
// Mark migration complete.
return mb.Put(migrationFinalizedParent, migrationCompleted)
}); updateErr != nil {
log.WithError(updateErr).Error("could not run finalized parent root index repair migration")
return updateErr
}
return nil
}
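For illustration, the corruption this migration repairs can be reproduced in a couple of lines (hypothetical key value; assumes the same ethpb and bytesutil packages used elsewhere in the codebase):

// A corrupt finalized-index entry stores its own key as its parent root.
key := bytesutil.PadTo([]byte("block-root"), 32)
entry := &ethpb.FinalizedBlockRootContainer{ParentRoot: key}
// bytes.Equal(entry.ParentRoot, key) is true here, so the migration fetches
// the full block and rewrites ParentRoot with blk.Block().ParentRoot().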

View File

@@ -70,12 +70,12 @@ func (s *Store) LastEpochWrittenForValidators(
return attestedEpochs, err
}
// SaveLastEpochsWrittenForValidators updates the latest epoch a slice
// of validator indices has attested to.
func (s *Store) SaveLastEpochsWrittenForValidators(
// SaveLastEpochWrittenForValidators saves the latest epoch
// that each validator has attested to in the provided map.
func (s *Store) SaveLastEpochWrittenForValidators(
ctx context.Context, epochByValIndex map[primitives.ValidatorIndex]primitives.Epoch,
) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveLastEpochsWrittenForValidators")
ctx, span := trace.StartSpan(ctx, "BeaconDB.SaveLastEpochWrittenForValidators")
defer span.End()
const batchSize = 10000
@@ -157,7 +157,7 @@ func (s *Store) CheckAttesterDoubleVotes(
attRecordsBkt := tx.Bucket(attestationRecordsBucket)
encEpoch := encodeTargetEpoch(attToProcess.IndexedAttestation.Data.Target.Epoch)
localDoubleVotes := []*slashertypes.AttesterDoubleVote{}
localDoubleVotes := make([]*slashertypes.AttesterDoubleVote, 0)
for _, valIdx := range attToProcess.IndexedAttestation.AttestingIndices {
// Check if there is signing root in the database for this combination
@@ -166,7 +166,7 @@ func (s *Store) CheckAttesterDoubleVotes(
validatorEpochKey := append(encEpoch, encIdx...)
attRecordsKey := signingRootsBkt.Get(validatorEpochKey)
// An attestation record key is comprised of a signing root (32 bytes).
// An attestation record key consists of a signing root (32 bytes).
if len(attRecordsKey) < attestationRecordKeySize {
// If there is no signing root for this combination,
// then there is no double vote. We can continue to the next validator.
@@ -697,7 +697,7 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
}
// Encode attestation record to bytes.
// The output encoded attestation record consists in the signing root concatened with the compressed attestation record.
// The output encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil {
return []byte{}, errors.New("nil proposal record")
@@ -716,7 +716,7 @@ func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byt
}
// Decode attestation record from bytes.
// The input encoded attestation record consists in the signing root concatened with the compressed attestation record.
// The input encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func decodeAttestationRecord(encoded []byte) (*slashertypes.IndexedAttestationWrapper, error) {
if len(encoded) < rootSize {
return nil, fmt.Errorf("wrong length for encoded attestation record, want minimum %d, got %d", rootSize, len(encoded))

View File

@@ -89,7 +89,7 @@ func TestStore_LastEpochWrittenForValidators(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 0, len(attestedEpochs))
err = beaconDB.SaveLastEpochsWrittenForValidators(ctx, epochsByValidator)
err = beaconDB.SaveLastEpochWrittenForValidators(ctx, epochsByValidator)
require.NoError(t, err)
retrievedEpochs, err := beaconDB.LastEpochWrittenForValidators(ctx, indices)

View File

@@ -10,7 +10,7 @@ import (
)
// TestCleanup ensures that the cleanup function unregisters the prometheus.Collection
// also tests the interchangability of the explicit prometheus Register/Unregister
// also tests the interchangeability of the explicit prometheus Register/Unregister
// and the implicit methods within the collector implementation
func TestCleanup(t *testing.T) {
ctx := context.Background()
@@ -32,11 +32,11 @@ func TestCleanup(t *testing.T) {
assert.Equal(t, true, unregistered, "prometheus.Unregister failed to unregister PowchainCollector on final cleanup")
}
// TestCancelation tests that canceling the context passed into
// TestCancellation tests that canceling the context passed into
// NewPowchainCollector cleans everything up as expected. This
// does come at the cost of an extra channel cluttering up
// PowchainCollector, just for this test.
func TestCancelation(t *testing.T) {
func TestCancellation(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
pc, err := NewPowchainCollector(ctx)
assert.NoError(t, err, "Unexpected error calling NewPowchainCollector")

View File

@@ -707,6 +707,7 @@ func (b *BeaconNode) registerP2P(cliCtx *cli.Context) error {
PrivateKey: cliCtx.String(cmd.P2PPrivKey.Name),
StaticPeerID: cliCtx.Bool(cmd.P2PStaticID.Name),
MetaDataDir: cliCtx.String(cmd.P2PMetadata.Name),
QUICPort: cliCtx.Uint(cmd.P2PQUICPort.Name),
TCPPort: cliCtx.Uint(cmd.P2PTCPPort.Name),
UDPPort: cliCtx.Uint(cmd.P2PUDPPort.Name),
MaxPeers: cliCtx.Uint(cmd.P2PMaxPeers.Name),

View File

@@ -217,9 +217,9 @@ func Test_hasNetworkFlag(t *testing.T) {
want bool
}{
{
name: "Prater testnet",
networkName: features.PraterTestnet.Name,
networkValue: "prater",
name: "Holesky testnet",
networkName: features.HoleskyTestnet.Name,
networkValue: "holesky",
want: true,
},
{

View File

@@ -90,6 +90,7 @@ go_library(
"@com_github_libp2p_go_libp2p//core/peerstore:go_default_library",
"@com_github_libp2p_go_libp2p//core/protocol:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/security/noise:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/quic:go_default_library",
"@com_github_libp2p_go_libp2p//p2p/transport/tcp:go_default_library",
"@com_github_libp2p_go_libp2p_mplex//:go_default_library",
"@com_github_libp2p_go_libp2p_pubsub//:go_default_library",

View File

@@ -10,6 +10,7 @@ import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
@@ -137,11 +138,11 @@ func (s *Service) internalBroadcastAttestation(ctx context.Context, subnet uint6
// In the event our attestation is outdated and beyond the
// acceptable threshold, we exit early and do not broadcast it.
currSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
if att.Data.Slot+params.BeaconConfig().SlotsPerEpoch < currSlot {
if err := helpers.ValidateAttestationTime(att.Data.Slot, s.genesisTime, params.BeaconConfig().MaximumGossipClockDisparityDuration()); err != nil {
log.WithFields(logrus.Fields{
"attestationSlot": att.Data.Slot,
"currentSlot": currSlot,
}).Warning("Attestation is too old to broadcast, discarding it")
}).WithError(err).Warning("Attestation is too old to broadcast, discarding it")
return
}

View File

@@ -24,6 +24,7 @@ type Config struct {
PrivateKey string
DataDir string
MetaDataDir string
QUICPort uint
TCPPort uint
UDPPort uint
MaxPeers uint

View File

@@ -15,6 +15,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
@@ -39,6 +40,11 @@ const (
udp6
)
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
@@ -100,14 +106,15 @@ func (s *Service) RefreshENR() {
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
iterator := s.dv5Listener.RandomNodes()
iterator = enode.Filter(iterator, s.filterPeer)
iterator := enode.Filter(s.dv5Listener.RandomNodes(), s.filterPeer)
defer iterator.Close()
for {
// Exit if service's context is canceled
// Exit if service's context is canceled.
if s.ctx.Err() != nil {
break
}
if s.isPeerAtLimit(false /* inbound */) {
// Pause the main loop for a period to stop looking
// for new peers.
@@ -115,16 +122,22 @@ func (s *Service) listenForNewNodes() {
time.Sleep(pollingPeriod)
continue
}
exists := iterator.Next()
if !exists {
if exists := iterator.Next(); !exists {
break
}
node := iterator.Node()
peerInfo, _, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Error("Could not convert to peer info")
continue
}
if peerInfo == nil {
continue
}
// Make sure the peer is not dialed too often; each connection attempt has a backoff period.
s.Peers().RandomizeBackOff(peerInfo.ID)
go func(info *peer.AddrInfo) {
@@ -167,8 +180,7 @@ func (s *Service) createListener(
// Listen to all network interfaces
// for both ip protocols.
networkVersion := "udp"
conn, err := net.ListenUDP(networkVersion, udpAddr)
conn, err := net.ListenUDP("udp", udpAddr)
if err != nil {
return nil, errors.Wrap(err, "could not listen to UDP")
}
@@ -178,6 +190,7 @@ func (s *Service) createListener(
ipAddr,
int(s.cfg.UDPPort),
int(s.cfg.TCPPort),
int(s.cfg.QUICPort),
)
if err != nil {
return nil, errors.Wrap(err, "could not create local node")
@@ -209,7 +222,7 @@ func (s *Service) createListener(
func (s *Service) createLocalNode(
privKey *ecdsa.PrivateKey,
ipAddr net.IP,
udpPort, tcpPort int,
udpPort, tcpPort, quicPort int,
) (*enode.LocalNode, error) {
db, err := enode.OpenDB("")
if err != nil {
@@ -218,11 +231,19 @@ func (s *Service) createLocalNode(
localNode := enode.NewLocalNode(db, privKey)
ipEntry := enr.IP(ipAddr)
udpEntry := enr.UDP(udpPort)
tcpEntry := enr.TCP(tcpPort)
localNode.Set(ipEntry)
udpEntry := enr.UDP(udpPort)
localNode.Set(udpEntry)
tcpEntry := enr.TCP(tcpPort)
localNode.Set(tcpEntry)
if features.Get().EnableQUIC {
quicEntry := quicProtocol(quicPort)
localNode.Set(quicEntry)
}
localNode.SetFallbackIP(ipAddr)
localNode.SetFallbackUDP(udpPort)
@@ -277,7 +298,7 @@ func (s *Service) startDiscoveryV5(
// filterPeer validates each node that we retrieve from our dht. We
// try to ascertain that the peer can be a valid protocol peer.
// Validity Conditions:
// 1. Peer has a valid IP and TCP port set in their enr.
// 1. Peer has a valid IP and a (QUIC and/or TCP) port set in their enr.
// 2. Peer hasn't been marked as 'bad'.
// 3. Peer is not currently active or connected.
// 4. Peer is ready to receive incoming connections.
@@ -294,17 +315,13 @@ func (s *Service) filterPeer(node *enode.Node) bool {
return false
}
// Ignore nodes with their TCP ports not set.
if err := node.Record().Load(enr.WithEntry("tcp", new(enr.TCP))); err != nil {
if !enr.IsNotFound(err) {
log.WithError(err).Debug("Could not retrieve tcp port")
}
peerData, multiAddrs, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
return false
}
peerData, multiAddr, err := convertToAddrInfo(node)
if err != nil {
log.WithError(err).Debug("Could not convert to peer data")
if peerData == nil || len(multiAddrs) == 0 {
return false
}
@@ -337,6 +354,9 @@ func (s *Service) filterPeer(node *enode.Node) bool {
}
}
// If the peer has 2 multiaddrs, favor the QUIC address, which is in first position.
multiAddr := multiAddrs[0]
// Add peer to peer handler.
s.peers.Add(nodeENR, peerData.ID, multiAddr, network.DirUnknown)
@@ -380,11 +400,11 @@ func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
if err != nil {
return nil, errors.Wrapf(err, "Could not get enode from string")
}
addr, err := convertToSingleMultiAddr(enodeAddr)
nodeAddrs, err := retrieveMultiAddrsFromNode(enodeAddr)
if err != nil {
return nil, errors.Wrapf(err, "Could not get multiaddr")
}
allAddrs = append(allAddrs, addr)
allAddrs = append(allAddrs, nodeAddrs...)
}
return allAddrs, nil
}
@@ -419,45 +439,139 @@ func parseGenericAddrs(addrs []string) (enodeString, multiAddrString []string) {
}
func convertToMultiAddr(nodes []*enode.Node) []ma.Multiaddr {
var multiAddrs []ma.Multiaddr
// Expect each node to have a TCP and a QUIC address.
multiAddrs := make([]ma.Multiaddr, 0, 2*len(nodes))
for _, node := range nodes {
// ignore nodes with no ip address stored
// Skip nodes with no ip address stored.
if node.IP() == nil {
continue
}
multiAddr, err := convertToSingleMultiAddr(node)
// Get up to two multiaddrs (TCP and QUIC) for each node.
nodeMultiAddrs, err := retrieveMultiAddrsFromNode(node)
if err != nil {
log.WithError(err).Error("Could not convert to multiAddr")
log.WithError(err).Errorf("Could not convert node %s to multiaddr", node)
continue
}
multiAddrs = append(multiAddrs, multiAddr)
multiAddrs = append(multiAddrs, nodeMultiAddrs...)
}
return multiAddrs
}
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, ma.Multiaddr, error) {
multiAddr, err := convertToSingleMultiAddr(node)
func convertToAddrInfo(node *enode.Node) (*peer.AddrInfo, []ma.Multiaddr, error) {
multiAddrs, err := retrieveMultiAddrsFromNode(node)
if err != nil {
return nil, nil, err
}
info, err := peer.AddrInfoFromP2pAddr(multiAddr)
if err != nil {
return nil, nil, err
if len(multiAddrs) == 0 {
return nil, nil, nil
}
return info, multiAddr, nil
infos, err := peer.AddrInfosFromP2pAddrs(multiAddrs...)
if err != nil {
return nil, nil, errors.Wrapf(err, "could not convert to peer info: %v", multiAddrs)
}
if len(infos) != 1 {
return nil, nil, errors.Errorf("infos contains %v elements, expected exactly 1", len(infos))
}
return &infos[0], multiAddrs, nil
}
func convertToSingleMultiAddr(node *enode.Node) (ma.Multiaddr, error) {
// retrieveMultiAddrsFromNode converts an enode.Node to a list of multiaddrs.
// If the node has both a QUIC and a TCP port set in its ENR, then
// the multiaddr corresponding to the QUIC port is added first, followed
// by the multiaddr corresponding to the TCP port.
func retrieveMultiAddrsFromNode(node *enode.Node) ([]ma.Multiaddr, error) {
multiaddrs := make([]ma.Multiaddr, 0, 2)
// Retrieve the node public key.
pubkey := node.Pubkey()
assertedKey, err := ecdsaprysm.ConvertToInterfacePubkey(pubkey)
if err != nil {
return nil, errors.Wrap(err, "could not get pubkey")
}
// Compute the node ID from the public key.
id, err := peer.IDFromPublicKey(assertedKey)
if err != nil {
return nil, errors.Wrap(err, "could not get peer id")
}
return multiAddressBuilderWithID(node.IP().String(), "tcp", uint(node.TCP()), id)
if features.Get().EnableQUIC {
// If the QUIC entry is present in the ENR, build the corresponding multiaddress.
port, ok, err := getPort(node, quic)
if err != nil {
return nil, errors.Wrap(err, "could not get QUIC port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), quic, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build QUIC address")
}
multiaddrs = append(multiaddrs, addr)
}
}
// If the TCP entry is present in the ENR, build the corresponding multiaddress.
port, ok, err := getPort(node, tcp)
if err != nil {
return nil, errors.Wrap(err, "could not get TCP port")
}
if ok {
addr, err := multiAddressBuilderWithID(node.IP(), tcp, port, id)
if err != nil {
return nil, errors.Wrap(err, "could not build TCP address")
}
multiaddrs = append(multiaddrs, addr)
}
return multiaddrs, nil
}
// getPort retrieves the port for a given node and protocol, along with a
// boolean indicating whether the port was found, and an error.
func getPort(node *enode.Node, protocol internetProtocol) (uint, bool, error) {
var (
port uint
err error
)
switch protocol {
case tcp:
var entry enr.TCP
err = node.Load(&entry)
port = uint(entry)
case udp:
var entry enr.UDP
err = node.Load(&entry)
port = uint(entry)
case quic:
var entry quicProtocol
err = node.Load(&entry)
port = uint(entry)
default:
return 0, false, errors.Errorf("invalid protocol: %v", protocol)
}
if enr.IsNotFound(err) {
return port, false, nil
}
if err != nil {
return 0, false, errors.Wrap(err, "could not get port")
}
return port, true, nil
}
func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
@@ -475,14 +589,14 @@ func convertToUdpMultiAddr(node *enode.Node) ([]ma.Multiaddr, error) {
var ip4 enr.IPv4
var ip6 enr.IPv6
if node.Load(&ip4) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip4).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip4), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv4 address")
}
addresses = append(addresses, address)
}
if node.Load(&ip6) == nil {
address, ipErr := multiAddressBuilderWithID(net.IP(ip6).String(), "udp", uint(node.UDP()), id)
address, ipErr := multiAddressBuilderWithID(net.IP(ip6), udp, uint(node.UDP()), id)
if ipErr != nil {
return nil, errors.Wrap(ipErr, "could not build IPv6 address")
}

View File

@@ -166,8 +166,9 @@ func TestCreateLocalNode(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
// Define ports.
const (
udpPort = 2000
tcpPort = 3000
udpPort = 2000
tcpPort = 3000
quicPort = 3000
)
// Create a private key.
@@ -180,7 +181,7 @@ func TestCreateLocalNode(t *testing.T) {
cfg: tt.cfg,
}
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort)
localNode, err := service.createLocalNode(privKey, address, udpPort, tcpPort, quicPort)
if tt.expectedError {
require.NotNil(t, err)
return
@@ -237,7 +238,7 @@ func TestMultiAddrsConversion_InvalidIPAddr(t *testing.T) {
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
node, err := s.createLocalNode(pkey, addr, 0, 0)
node, err := s.createLocalNode(pkey, addr, 0, 0, 0)
require.NoError(t, err)
multiAddr := convertToMultiAddr([]*enode.Node{node.Node()})
assert.Equal(t, 0, len(multiAddr), "Invalid ip address converted successfully")
@@ -248,8 +249,9 @@ func TestMultiAddrConversion_OK(t *testing.T) {
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
cfg: &Config{
TCPPort: 0,
UDPPort: 0,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
},
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),

View File

@@ -28,7 +28,8 @@ import (
)
func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
port := 2000
const port = 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, fieldparams.RootLength)
@@ -53,7 +54,7 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -98,13 +99,14 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
addrs := make([]ma.Multiaddr, 0)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
@@ -114,10 +116,11 @@ func TestStartDiscv5_DifferentForkDigests(t *testing.T) {
}
func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
const port = 2000
params.SetupTestConfigCleanup(t)
hook := logTest.NewGlobal()
logrus.SetLevel(logrus.TraceLevel)
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
genesisTime := time.Now()
genesisValidatorsRoot := make([]byte, 32)
@@ -138,7 +141,7 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
var listeners []*discover.UDPv5
for i := 1; i <= 5; i++ {
port = 3000 + i
port := 3000 + i
cfg.UDPPort = uint(port)
ipAddr, pkey := createAddrAndPrivKey(t)
@@ -188,13 +191,13 @@ func TestStartDiscv5_SameForkDigests_DifferentNextForkData(t *testing.T) {
s.genesisTime = genesisTime
s.genesisValidatorsRoot = make([]byte, 32)
s.dv5Listener = lastListener
var addrs []ma.Multiaddr
addrs := make([]ma.Multiaddr, 0, len(nodes))
for _, n := range nodes {
if s.filterPeer(n) {
addr, err := convertToSingleMultiAddr(n)
for _, node := range nodes {
if s.filterPeer(node) {
nodeAddrs, err := retrieveMultiAddrsFromNode(node)
require.NoError(t, err)
addrs = append(addrs, addr)
addrs = append(addrs, nodeAddrs...)
}
}
if len(addrs) == 0 {

View File

@@ -1,6 +1,7 @@
package p2p
import (
"net"
"strconv"
"strings"
@@ -12,32 +13,32 @@ import (
var log = logrus.WithField("prefix", "p2p")
func logIPAddr(id peer.ID, addrs ...ma.Multiaddr) {
var correctAddr ma.Multiaddr
for _, addr := range addrs {
if strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/") {
correctAddr = addr
break
if !(strings.Contains(addr.String(), "/ip4/") || strings.Contains(addr.String(), "/ip6/")) {
continue
}
}
if correctAddr != nil {
log.WithField(
"multiAddr",
correctAddr.String()+"/p2p/"+id.String(),
addr.String()+"/p2p/"+id.String(),
).Info("Node started p2p server")
}
}
func logExternalIPAddr(id peer.ID, addr string, port uint) {
func logExternalIPAddr(id peer.ID, addr string, tcpPort, quicPort uint) {
if addr != "" {
multiAddr, err := MultiAddressBuilder(addr, port)
multiAddrs, err := MultiAddressBuilder(net.ParseIP(addr), tcpPort, quicPort)
if err != nil {
log.WithError(err).Error("Could not create multiaddress")
return
}
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
for _, multiAddr := range multiAddrs {
log.WithField(
"multiAddr",
multiAddr.String()+"/p2p/"+id.String(),
).Info("Node started external p2p server")
}
}
}

View File

@@ -11,40 +11,68 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/p2p/security/noise"
"github.com/libp2p/go-libp2p/p2p/transport/tcp"
libp2pquic "github.com/libp2p/go-libp2p/p2p/transport/quic"
libp2ptcp "github.com/libp2p/go-libp2p/p2p/transport/tcp"
gomplex "github.com/libp2p/go-mplex"
ma "github.com/multiformats/go-multiaddr"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/features"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
type internetProtocol string
const (
udp = "udp"
tcp = "tcp"
quic = "quic"
)
// MultiAddressBuilder takes in an ip address string and port to produce a go multiaddr format.
func MultiAddressBuilder(ipAddr string, port uint) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func MultiAddressBuilder(ip net.IP, tcpPort, quicPort uint) ([]ma.Multiaddr, error) {
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/tcp/%d", ipAddr, port))
// Example: /ip4/1.2.3.4/tcp/5678
multiaddrStr := fmt.Sprintf("/%s/%s/tcp/%d", ipType, ip, tcpPort)
multiAddrTCP, err := ma.NewMultiaddr(multiaddrStr)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce TCP multiaddr format from %s:%d", ip, tcpPort)
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/tcp/%d", ipAddr, port))
multiaddrs := []ma.Multiaddr{multiAddrTCP}
if features.Get().EnableQUIC {
// Example: /ip4/1.2.3.4/udp/5678/quic-v1
multiAddrQUIC, err := ma.NewMultiaddr(fmt.Sprintf("/%s/%s/udp/%d/quic-v1", ipType, ip, quicPort))
if err != nil {
return nil, errors.Wrapf(err, "cannot produce QUIC multiaddr format from %s:%d", ip, tcpPort)
}
multiaddrs = append(multiaddrs, multiAddrQUIC)
}
return multiaddrs, nil
}
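As a usage sketch (illustrative IP and ports; assumes the EnableQUIC feature flag is on), the builder returns the TCP multiaddr first, followed by the QUIC one:

// Sketch: with QUIC enabled, two listen addresses come back.
addrs, err := MultiAddressBuilder(net.ParseIP("192.168.0.1"), 13000, 13000)
if err != nil {
	log.WithError(err).Error("could not build multiaddrs")
}
// addrs[0]: /ip4/192.168.0.1/tcp/13000
// addrs[1]: /ip4/192.168.0.1/udp/13000/quic-v1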
// buildOptions for the libp2p host.
func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Option, error) {
cfg := s.cfg
listen, err := MultiAddressBuilder(ip.String(), cfg.TCPPort)
multiaddrs, err := MultiAddressBuilder(ip, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip.String(), cfg.TCPPort)
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", ip, cfg.TCPPort)
}
if cfg.LocalIP != "" {
if net.ParseIP(cfg.LocalIP) == nil {
localIP := net.ParseIP(cfg.LocalIP)
if localIP == nil {
return nil, errors.Wrapf(err, "invalid local ip provided: %s:%d", cfg.LocalIP, cfg.TCPPort)
}
listen, err = MultiAddressBuilder(cfg.LocalIP, cfg.TCPPort)
multiaddrs, err = MultiAddressBuilder(localIP, cfg.TCPPort, cfg.QUICPort)
if err != nil {
return nil, errors.Wrapf(err, "cannot produce multiaddr format from %s:%d", cfg.LocalIP, cfg.TCPPort)
}
@@ -62,36 +90,43 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
options := []libp2p.Option{
privKeyOption(priKey),
libp2p.ListenAddrs(listen),
libp2p.ListenAddrs(multiaddrs...),
libp2p.UserAgent(version.BuildData()),
libp2p.ConnectionGater(s),
libp2p.Transport(tcp.NewTCPTransport),
libp2p.Transport(libp2ptcp.NewTCPTransport),
libp2p.DefaultMuxers,
libp2p.Muxer("/mplex/6.7.0", mplex.DefaultTransport),
libp2p.Security(noise.ID, noise.New),
libp2p.Ping(false), // Disable Ping Service.
}
if features.Get().EnableQUIC {
options = append(options, libp2p.Transport(libp2pquic.NewTransport))
}
if cfg.EnableUPnP {
options = append(options, libp2p.NATPortMap()) // Allow to use UPnP
}
if cfg.RelayNodeAddr != "" {
options = append(options, libp2p.AddrsFactory(withRelayAddrs(cfg.RelayNodeAddr)))
} else {
// Disable relay if it has not been set.
options = append(options, libp2p.DisableRelay())
}
if cfg.HostAddress != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := MultiAddressBuilder(cfg.HostAddress, cfg.TCPPort)
externalMultiaddrs, err := MultiAddressBuilder(net.ParseIP(cfg.HostAddress), cfg.TCPPort, cfg.QUICPort)
if err != nil {
log.WithError(err).Error("Unable to create external multiaddress")
} else {
addrs = append(addrs, external)
addrs = append(addrs, externalMultiaddrs...)
}
return addrs
}))
}
if cfg.HostDNS != "" {
options = append(options, libp2p.AddrsFactory(func(addrs []ma.Multiaddr) []ma.Multiaddr {
external, err := ma.NewMultiaddr(fmt.Sprintf("/dns4/%s/tcp/%d", cfg.HostDNS, cfg.TCPPort))
@@ -107,21 +142,47 @@ func (s *Service) buildOptions(ip net.IP, priKey *ecdsa.PrivateKey) ([]libp2p.Op
if features.Get().DisableResourceManager {
options = append(options, libp2p.ResourceManager(&network.NullResourceManager{}))
}
return options, nil
}
func multiAddressBuilderWithID(ipAddr, protocol string, port uint, id peer.ID) (ma.Multiaddr, error) {
parsedIP := net.ParseIP(ipAddr)
if parsedIP.To4() == nil && parsedIP.To16() == nil {
return nil, errors.Errorf("invalid ip address provided: %s", ipAddr)
func extractIpType(ip net.IP) (string, error) {
if ip.To4() != nil {
return "ip4", nil
}
if id.String() == "" {
return nil, errors.New("empty peer id given")
if ip.To16() != nil {
return "ip6", nil
}
if parsedIP.To4() != nil {
return ma.NewMultiaddr(fmt.Sprintf("/ip4/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
return "", errors.Errorf("provided IP address is neither IPv4 nor IPv6: %s", ip)
}
func multiAddressBuilderWithID(ip net.IP, protocol internetProtocol, port uint, id peer.ID) (ma.Multiaddr, error) {
var multiaddrStr string
if id == "" {
return nil, errors.Errorf("empty peer id given: %s", id)
}
return ma.NewMultiaddr(fmt.Sprintf("/ip6/%s/%s/%d/p2p/%s", ipAddr, protocol, port, id.String()))
ipType, err := extractIpType(ip)
if err != nil {
return nil, errors.Wrap(err, "unable to determine IP type")
}
switch protocol {
case udp, tcp:
// Example with UDP: /ip4/1.2.3.4/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
// Example with TCP: /ip6/1.2.3.4/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/%s/%d/p2p/%s", ipType, ip, protocol, port, id)
case quic:
// Example: /ip4/1.2.3.4/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs
multiaddrStr = fmt.Sprintf("/%s/%s/udp/%d/quic-v1/p2p/%s", ipType, ip, port, id)
default:
return nil, errors.Errorf("unsupported protocol: %s", protocol)
}
return ma.NewMultiaddr(multiaddrStr)
}
// Adds a private key to the libp2p option if the option was provided.

View File

@@ -13,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -88,30 +89,34 @@ func TestIPV6Support(t *testing.T) {
lNode := enode.NewLocalNode(db, key)
mockIPV6 := net.IP{0xff, 0x02, 0xAA, 0, 0x1F, 0, 0x2E, 0, 0, 0x36, 0x45, 0, 0, 0, 0, 0x02}
lNode.Set(enr.IP(mockIPV6))
ma, err := convertToSingleMultiAddr(lNode.Node())
mas, err := retrieveMultiAddrsFromNode(lNode.Node())
if err != nil {
t.Fatal(err)
}
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
for _, ma := range mas {
ipv6Exists := false
for _, p := range ma.Protocols() {
if p.Name == "ip4" {
t.Error("Got ip4 address instead of ip6")
}
if p.Name == "ip6" {
ipv6Exists = true
}
}
if p.Name == "ip6" {
ipv6Exists = true
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
if !ipv6Exists {
t.Error("Multiaddress did not have ipv6 protocol")
}
}
func TestDefaultMultiplexers(t *testing.T) {
var cfg libp2p.Config
_ = cfg
p2pCfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
}
svc := &Service{cfg: p2pCfg}
@@ -127,5 +132,57 @@ func TestDefaultMultiplexers(t *testing.T) {
assert.Equal(t, protocol.ID("/yamux/1.0.0"), cfg.Muxers[0].ID)
assert.Equal(t, protocol.ID("/mplex/6.7.0"), cfg.Muxers[1].ID)
}
func TestMultiAddressBuilderWithID(t *testing.T) {
testCases := []struct {
name string
ip net.IP
protocol internetProtocol
port uint
id string
expectedMultiaddrStr string
}{
{
name: "UDP",
ip: net.IPv4(192, 168, 0, 1),
protocol: udp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "TCP",
ip: net.IPv4(192, 168, 0, 1),
protocol: tcp,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/tcp/5678/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
{
name: "QUIC",
ip: net.IPv4(192, 168, 0, 1),
protocol: quic,
port: 5678,
id: "0025080212210204fb1ebb1aa467527d34306a4794a5171d6516405e720b909b7f816d63aef96a",
expectedMultiaddrStr: "/ip4/192.168.0.1/udp/5678/quic-v1/p2p/16Uiu2HAkum7hhuMpWqFj3yNLcmQBGmThmqw2ohaCRThXQuKU9ohs",
},
}
for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
id, err := hex.DecodeString(tt.id)
require.NoError(t, err)
actualMultiaddr, err := multiAddressBuilderWithID(tt.ip, tt.protocol, tt.port, peer.ID(id))
require.NoError(t, err)
actualMultiaddrStr := actualMultiaddr.String()
require.Equal(t, tt.expectedMultiaddrStr, actualMultiaddrStr)
})
}
}

View File

@@ -22,7 +22,7 @@ func TestGossipParameters(t *testing.T) {
pms := pubsubGossipParam()
assert.Equal(t, gossipSubMcacheLen, pms.HistoryLength, "gossipSubMcacheLen")
assert.Equal(t, gossipSubMcacheGossip, pms.HistoryGossip, "gossipSubMcacheGossip")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Milliseconds()/pms.HeartbeatInterval.Milliseconds()), "gossipSubSeenTtl")
assert.Equal(t, gossipSubSeenTTL, int(pubsub.TimeCacheDuration.Seconds()), "gossipSubSeenTtl")
}
func TestFanoutParameters(t *testing.T) {

View File

@@ -1,7 +1,7 @@
// Package peers provides information about peers at the Ethereum consensus protocol level.
//
// "Protocol level" is the level above the network level, so this layer never sees or interacts with
// (for example) hosts that are uncontactable due to being down, firewalled, etc. Instead, this works
// (for example) hosts that are unreachable due to being down, firewalled, etc. Instead, this works
// with peers that are contactable but may or may not be of the correct fork version, not currently
// required due to the number of current connections, etc.
//
@@ -26,6 +26,7 @@ import (
"context"
"math"
"sort"
"strings"
"time"
"github.com/ethereum/go-ethereum/p2p/enr"
@@ -59,8 +60,8 @@ const (
)
const (
// ColocationLimit restricts how many peer identities we can see from a single ip or ipv6 subnet.
ColocationLimit = 5
// CollocationLimit restricts how many peer identities we can see from a single ip or ipv6 subnet.
CollocationLimit = 5
// Additional buffer beyond current peer limit, from which we can store the relevant peer statuses.
maxLimitBuffer = 150
@@ -76,6 +77,13 @@ const (
MaxBackOffDuration = 5000
)
type InternetProtocol string
const (
TCP = "tcp"
QUIC = "quic"
)
// Status is the structure holding the peer status information.
type Status struct {
ctx context.Context
@@ -449,6 +457,19 @@ func (p *Status) InboundConnected() []peer.ID {
return peers
}
// InboundConnectedWithProtocol returns the current batch of inbound peers that are connected with a given protocol.
func (p *Status) InboundConnectedWithProtocol(protocol InternetProtocol) []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
return peers
}
// Outbound returns the current batch of outbound peers.
func (p *Status) Outbound() []peer.ID {
p.store.RLock()
@@ -475,7 +496,20 @@ func (p *Status) OutboundConnected() []peer.ID {
return peers
}
// Active returns the peers that are connecting or connected.
// OutboundConnectedWithProtocol returns the current batch of outbound peers that are connected with a given protocol.
func (p *Status) OutboundConnectedWithProtocol(protocol InternetProtocol) []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
return peers
}
// Active returns the peers that are active (connecting or connected).
func (p *Status) Active() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
@@ -514,7 +548,7 @@ func (p *Status) Disconnected() []peer.ID {
return peers
}
// Inactive returns the peers that are disconnecting or disconnected.
// Inactive returns the peers that are inactive (disconnecting or disconnected).
func (p *Status) Inactive() []peer.ID {
p.store.RLock()
defer p.store.RUnlock()
@@ -548,7 +582,7 @@ func (p *Status) Prune() {
p.store.Lock()
defer p.store.Unlock()
// Default to old method if flag isnt enabled.
// Default to old method if flag isn't enabled.
if !features.Get().EnablePeerScorer {
p.deprecatedPrune()
return
@@ -961,7 +995,7 @@ func (p *Status) isfromBadIP(pid peer.ID) bool {
return true
}
if val, ok := p.ipTracker[ip.String()]; ok {
if val > ColocationLimit {
if val > CollocationLimit {
return true
}
}
@@ -1012,7 +1046,7 @@ func (p *Status) tallyIPTracker() {
}
func sameIP(firstAddr, secondAddr ma.Multiaddr) bool {
// Exit early if we do get nil multiaddresses
// Exit early if we do get nil multi-addresses
if firstAddr == nil || secondAddr == nil {
return false
}

View File

@@ -565,7 +565,7 @@ func TestPeerIPTracker(t *testing.T) {
badIP := "211.227.218.116"
var badPeers []peer.ID
for i := 0; i < peers.ColocationLimit+10; i++ {
for i := 0; i < peers.CollocationLimit+10; i++ {
port := strconv.Itoa(3000 + i)
addr, err := ma.NewMultiaddr("/ip4/" + badIP + "/tcp/" + port)
if err != nil {
@@ -1111,6 +1111,87 @@ func TestInbound(t *testing.T) {
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnecting)
result := p.InboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, inbound.String(), result[0].String())
}
func TestInboundConnectedWithProtocol(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrsTCP := []string{
"/ip4/127.0.0.1/tcp/33333",
"/ip4/127.0.0.2/tcp/44444",
}
addrsQUIC := []string{
"/ip4/192.168.1.3/udp/13000/quic-v1",
"/ip4/192.168.1.4/udp/14000/quic-v1",
"/ip4/192.168.1.5/udp/14000/quic-v1",
}
expectedTCP := make(map[string]bool, len(addrsTCP))
for _, addr := range addrsTCP {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
expectedQUIC := make(map[string]bool, len(addrsQUIC))
for _, addr := range addrsQUIC {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
// TCP
// ---
actualTCP := p.InboundConnectedWithProtocol(peers.TCP)
require.Equal(t, len(expectedTCP), len(actualTCP))
for _, actualPeer := range actualTCP {
_, ok := expectedTCP[actualPeer.String()]
require.Equal(t, true, ok)
}
// QUIC
// ----
actualQUIC := p.InboundConnectedWithProtocol(peers.QUIC)
require.Equal(t, len(expectedQUIC), len(actualQUIC))
for _, actualPeer := range actualQUIC {
_, ok := expectedQUIC[actualPeer.String()]
require.Equal(t, true, ok)
}
}
func TestOutbound(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
@@ -1130,6 +1211,87 @@ func TestOutbound(t *testing.T) {
assert.Equal(t, outbound.String(), result[0].String())
}
func TestOutboundConnected(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnecting)
result := p.OutboundConnected()
require.Equal(t, 1, len(result))
assert.Equal(t, inbound.String(), result[0].String())
}
func TestOutboundConnectedWithProtocol(t *testing.T) {
p := peers.NewStatus(context.Background(), &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
BadResponsesScorerConfig: &scorers.BadResponsesScorerConfig{
Threshold: 0,
},
},
})
addrsTCP := []string{
"/ip4/127.0.0.1/tcp/33333",
"/ip4/127.0.0.2/tcp/44444",
}
addrsQUIC := []string{
"/ip4/192.168.1.3/udp/13000/quic-v1",
"/ip4/192.168.1.4/udp/14000/quic-v1",
"/ip4/192.168.1.5/udp/14000/quic-v1",
}
expectedTCP := make(map[string]bool, len(addrsTCP))
for _, addr := range addrsTCP {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
expectedQUIC := make(map[string]bool, len(addrsQUIC))
for _, addr := range addrsQUIC {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
// TCP
// ---
actualTCP := p.OutboundConnectedWithProtocol(peers.TCP)
require.Equal(t, len(expectedTCP), len(actualTCP))
for _, actualPeer := range actualTCP {
_, ok := expectedTCP[actualPeer.String()]
require.Equal(t, true, ok)
}
// QUIC
// ----
actualQUIC := p.OutboundConnectedWithProtocol(peers.QUIC)
require.Equal(t, len(expectedQUIC), len(actualQUIC))
for _, actualPeer := range actualQUIC {
_, ok := expectedQUIC[actualPeer.String()]
require.Equal(t, true, ok)
}
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
// Set up some peers with different states

View File

@@ -3,6 +3,7 @@ package p2p
import (
"context"
"encoding/hex"
"fmt"
"strings"
"time"
@@ -25,7 +26,7 @@ const (
// gossip parameters
gossipSubMcacheLen = 6 // number of windows to retain full messages in cache for `IWANT` responses
gossipSubMcacheGossip = 3 // number of windows to gossip about
gossipSubSeenTTL = 550 // number of heartbeat intervals to retain message IDs
gossipSubSeenTTL = 768 // number of seconds to retain message IDs (2 epochs)
// fanout ttl
gossipSubFanoutTTL = 60000000000 // TTL for fanout maps for topics we are not subscribed to but have published to, in nanoseconds
@@ -130,7 +131,7 @@ func (s *Service) peerInspector(peerMap map[peer.ID]*pubsub.PeerScoreSnapshot) {
}
}
// Creates a list of pubsub options to configure out router with.
// pubsubOptions creates a list of options to configure our router with.
func (s *Service) pubsubOptions() []pubsub.Option {
psOpts := []pubsub.Option{
pubsub.WithMessageSignaturePolicy(pubsub.StrictNoSign),
@@ -147,9 +148,35 @@ func (s *Service) pubsubOptions() []pubsub.Option {
pubsub.WithGossipSubParams(pubsubGossipParam()),
pubsub.WithRawTracer(gossipTracer{host: s.host}),
}
if len(s.cfg.StaticPeers) > 0 {
directPeersAddrInfos, err := parsePeersEnr(s.cfg.StaticPeers)
if err != nil {
log.WithError(err).Error("Could not add direct peer option")
return psOpts
}
psOpts = append(psOpts, pubsub.WithDirectPeers(directPeersAddrInfos))
}
return psOpts
}
// parsePeersEnr takes a list of raw ENRs and converts them into a list of AddrInfos.
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %v", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %v", err)
}
return directAddrInfos, nil
}
// creates a custom gossipsub parameter set.
func pubsubGossipParam() pubsub.GossipSubParams {
gParams := pubsub.DefaultGossipSubParams()
@@ -165,7 +192,8 @@ func pubsubGossipParam() pubsub.GossipSubParams {
// to configure our message id time-cache rather than instantiating
// it with a router instance.
func setPubSubParameters() {
pubsub.TimeCacheDuration = 550 * gossipSubHeartbeatInterval
seenTtl := 2 * time.Second * time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot))
pubsub.TimeCacheDuration = seenTtl
}
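With mainnet parameters this evaluates to the same 768 seconds as the gossipSubSeenTTL constant above; a quick sketch of the arithmetic:

// 2 epochs * 32 slots/epoch * 12 s/slot = 768 s on mainnet.
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch   // 32
secondsPerSlot := params.BeaconConfig().SecondsPerSlot // 12
seenTtl := 2 * time.Second * time.Duration(slotsPerEpoch.Mul(secondsPerSlot))
// seenTtl == 768 * time.Second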
// convert from libp2p's internal schema to a compatible prysm protobuf format.

View File

@@ -24,6 +24,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
prysmnetwork "github.com/prysmaticlabs/prysm/v5/network"
@@ -124,31 +125,34 @@ func NewService(ctx context.Context, cfg *Config) (*Service, error) {
if err != nil {
return nil, errors.Wrapf(err, "failed to build p2p options")
}
// Sets mplex timeouts
configureMplex()
h, err := libp2p.New(opts...)
if err != nil {
log.WithError(err).Error("Failed to create p2p host")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p host")
}
s.host = h
// Gossipsub registration is done before we add in any new peers
// due to libp2p's gossipsub implementation not taking into
// account previously added peers when creating the gossipsub
// object.
psOpts := s.pubsubOptions()
// Set the pubsub global parameters that we require.
setPubSubParameters()
// Reinitialize them in the event we are running a custom config.
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
gs, err := pubsub.NewGossipSub(s.ctx, s.host, psOpts...)
if err != nil {
log.WithError(err).Error("Failed to start pubsub")
return nil, err
return nil, errors.Wrapf(err, "failed to create p2p pubsub")
}
s.pubsub = gs
s.peers = peers.NewStatus(ctx, &peers.StatusConfig{
@@ -213,7 +217,7 @@ func (s *Service) Start() {
if len(s.cfg.StaticPeers) > 0 {
addrs, err := PeersFromStringAddrs(s.cfg.StaticPeers)
if err != nil {
log.WithError(err).Error("Could not connect to static peer")
log.WithError(err).Error("could not convert ENR to multiaddr")
}
// Set trusted peers for those that are provided as static addresses.
pids := peerIdsFromMultiAddrs(addrs)
@@ -232,11 +236,24 @@ func (s *Service) Start() {
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, 1*time.Minute, func() {
log.WithFields(logrus.Fields{
"inbound": len(s.peers.InboundConnected()),
"outbound": len(s.peers.OutboundConnected()),
"activePeers": len(s.peers.Active()),
}).Info("Peer summary")
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
outboundQUICCount := len(s.peers.OutboundConnectedWithProtocol(peers.QUIC))
outboundTCPCount := len(s.peers.OutboundConnectedWithProtocol(peers.TCP))
total := inboundQUICCount + inboundTCPCount + outboundQUICCount + outboundTCPCount
fields := logrus.Fields{
"inboundTCP": inboundTCPCount,
"outboundTCP": outboundTCPCount,
"total": total,
}
if features.Get().EnableQUIC {
fields["inboundQUIC"] = inboundQUICCount
fields["outboundQUIC"] = outboundQUICCount
}
log.WithFields(fields).Info("Connected peers")
})
multiAddrs := s.host.Network().ListenAddresses()
@@ -244,9 +261,10 @@ func (s *Service) Start() {
p2pHostAddress := s.cfg.HostAddress
p2pTCPPort := s.cfg.TCPPort
p2pQUICPort := s.cfg.QUICPort
if p2pHostAddress != "" {
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort)
logExternalIPAddr(s.host.ID(), p2pHostAddress, p2pTCPPort, p2pQUICPort)
verifyConnectivity(p2pHostAddress, p2pTCPPort, "tcp")
}

View File

@@ -102,8 +102,9 @@ func TestService_Start_OnlyStartsOnce(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
ClockWaiter: cs,
}
s, err := NewService(context.Background(), cfg)
@@ -147,8 +148,9 @@ func TestService_Start_NoDiscoverFlag(t *testing.T) {
cs := startup.NewClockSynchronizer()
cfg := &Config{
TCPPort: 2000,
UDPPort: 2000,
TCPPort: 3000,
QUICPort: 3000,
StateNotifier: &mock.MockStateNotifier{},
NoDiscovery: true, // <-- no s.dv5Listener is created
ClockWaiter: cs,

View File

@@ -93,6 +93,11 @@ func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
if err != nil {
continue
}
if info == nil {
continue
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {

View File

@@ -66,7 +66,7 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
genesisTime := time.Now()
bootNodeService := &Service{
cfg: &Config{TCPPort: 2000, UDPPort: 3000},
cfg: &Config{UDPPort: 2000, TCPPort: 3000, QUICPort: 3000},
genesisTime: genesisTime,
genesisValidatorsRoot: genesisValidatorsRoot,
}
@@ -89,8 +89,9 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
service, err := NewService(ctx, &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
TCPPort: uint(2000 + i),
UDPPort: uint(3000 + i),
UDPPort: uint(2000 + i),
TCPPort: uint(3000 + i),
QUICPort: uint(3000 + i),
})
require.NoError(t, err)
@@ -133,8 +134,9 @@ func TestStartDiscV5_FindPeersWithSubnet(t *testing.T) {
cfg := &Config{
Discv5BootStrapAddrs: []string{bootNodeENR},
MaxPeers: 30,
TCPPort: 2010,
UDPPort: 3010,
UDPPort: 2010,
TCPPort: 3010,
QUICPort: 3010,
}
service, err := NewService(ctx, cfg)

View File

@@ -50,7 +50,7 @@ func ensurePeerConnections(ctx context.Context, h host.Host, peers *peers.Status
c := h.Network().ConnsToPeer(p.ID)
if len(c) == 0 {
if err := connectWithTimeout(ctx, h, p); err != nil {
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("Failed to reconnect to peer")
log.WithField("peer", p.ID).WithField("addrs", p.Addrs).WithError(err).Errorf("failed to reconnect to peer")
continue
}
}

View File

@@ -37,6 +37,7 @@ go_library(
"//beacon-chain/rpc/eth/helpers:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/validator:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//beacon-chain/state/stategen:go_default_library",
@@ -121,6 +122,7 @@ go_test(
"@com_github_gorilla_mux//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_stretchr_testify//mock:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -21,6 +21,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/prysm/v1alpha1/validator"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
@@ -32,6 +33,7 @@ import (
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
)
@@ -42,7 +44,8 @@ const (
)
var (
errNilBlock = errors.New("nil block")
errNilBlock = errors.New("nil block")
errEquivocatedBlock = errors.New("block is equivocated")
)
type handled bool
@@ -1254,6 +1257,16 @@ func (s *Server) publishBlockSSZ(ctx context.Context, w http.ResponseWriter, r *
},
}
if err = s.validateBroadcast(ctx, r, genericBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(genericBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, genericBlock.GetDeneb().Blobs, genericBlock.GetDeneb().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
@@ -1383,6 +1396,16 @@ func (s *Server) publishBlock(ctx context.Context, w http.ResponseWriter, r *htt
consensusBlock, err = denebBlockContents.ToGeneric()
if err == nil {
if err = s.validateBroadcast(ctx, r, consensusBlock); err != nil {
if errors.Is(err, errEquivocatedBlock) {
b, err := blocks.NewSignedBeaconBlock(consensusBlock)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if err := s.broadcastSeenBlockSidecars(ctx, b, consensusBlock.GetDeneb().Blobs, consensusBlock.GetDeneb().KzgProofs); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecars")
}
}
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
@@ -1547,7 +1570,7 @@ func (s *Server) validateConsensus(ctx context.Context, blk interfaces.ReadOnlyS
func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error {
if s.ForkchoiceFetcher.HighestReceivedBlockSlot() == blk.Slot() {
return fmt.Errorf("block for slot %d already exists in fork choice", blk.Slot())
return errors.Wrapf(errEquivocatedBlock, "block for slot %d already exists in fork choice", blk.Slot())
}
return nil
}
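Wrapping the errEquivocatedBlock sentinel with errors.Wrapf is what lets the publish handlers above branch with errors.Is while still reporting the slot. A self-contained sketch of the pattern (names illustrative):
package main
import (
"fmt"
"github.com/pkg/errors"
)
var errEquivocated = errors.New("block is equivocated")
func validate(alreadySeen bool) error {
    if alreadySeen {
        // wrap the sentinel so callers can test identity and still read the slot
        return errors.Wrapf(errEquivocated, "block for slot %d already exists in fork choice", 42)
    }
    return nil
}
func main() {
    err := validate(true)
    fmt.Println(errors.Is(err, errEquivocated)) // true
    fmt.Println(err) // block for slot 42 already exists in fork choice: block is equivocated
}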
@@ -2072,3 +2095,37 @@ func (s *Server) GetDepositSnapshot(w http.ResponseWriter, r *http.Request) {
},
)
}
// Broadcast blob sidecars even if the block of the same slot has been imported.
// To ensure safety, we will only broadcast blob sidecars if the header references the same block that was previously seen.
// Otherwise, a proposer could get slashed through a different blob sidecar header reference.
func (s *Server) broadcastSeenBlockSidecars(
ctx context.Context,
b interfaces.SignedBeaconBlock,
blobs [][]byte,
kzgProofs [][]byte) error {
scs, err := validator.BuildBlobSidecars(b, blobs, kzgProofs)
if err != nil {
return err
}
for _, sc := range scs {
r, err := sc.SignedBlockHeader.Header.HashTreeRoot()
if err != nil {
log.WithError(err).Error("Failed to hash block header for blob sidecar")
continue
}
if !s.FinalizationFetcher.InForkchoice(r) {
log.WithField("root", fmt.Sprintf("%#x", r)).Debug("Block header not in forkchoice, skipping blob sidecar broadcast")
continue
}
if err := s.Broadcaster.BroadcastBlob(ctx, sc.Index, sc); err != nil {
log.WithError(err).Error("Failed to broadcast blob sidecar for index ", sc.Index)
}
log.WithFields(logrus.Fields{
"index": sc.Index,
"slot": sc.SignedBlockHeader.Header.Slot,
"kzgCommitment": fmt.Sprintf("%#x", sc.KzgCommitment),
}).Info("Broadcasted blob sidecar for already seen block")
}
return nil
}

View File

@@ -13,6 +13,8 @@ import (
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
mockp2p "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/testing"
logTest "github.com/sirupsen/logrus/hooks/test"
"go.uber.org/mock/gomock"
"github.com/gorilla/mux"
@@ -2472,7 +2474,9 @@ func TestValidateEquivocation(t *testing.T) {
require.NoError(t, err)
blk.SetSlot(st.Slot())
assert.ErrorContains(t, "already exists", server.validateEquivocation(blk.Block()))
err = server.validateEquivocation(blk.Block())
assert.ErrorContains(t, "already exists", err)
require.ErrorIs(t, err, errEquivocatedBlock)
})
}
@@ -3630,3 +3634,27 @@ func TestGetDepositSnapshot(t *testing.T) {
assert.Equal(t, finalized, len(resp.Finalized))
})
}
func TestServer_broadcastBlobSidecars(t *testing.T) {
hook := logTest.NewGlobal()
blockToPropose := util.NewBeaconBlockContentsDeneb()
blockToPropose.Blobs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.KzgProofs = [][]byte{{0x01}, {0x02}, {0x03}}
blockToPropose.Block.Block.Body.BlobKzgCommitments = [][]byte{bytesutil.PadTo([]byte("kc"), 48), bytesutil.PadTo([]byte("kc1"), 48), bytesutil.PadTo([]byte("kc2"), 48)}
d := &eth.GenericSignedBeaconBlock_Deneb{Deneb: blockToPropose}
b := &eth.GenericSignedBeaconBlock{Block: d}
server := &Server{
Broadcaster: &mockp2p.MockBroadcaster{},
FinalizationFetcher: &chainMock.ChainService{NotFinalized: true},
}
blk, err := blocks.NewSignedBeaconBlock(b.Block)
require.NoError(t, err)
require.NoError(t, server.broadcastSeenBlockSidecars(context.Background(), blk, b.GetDeneb().Blobs, b.GetDeneb().KzgProofs))
require.LogsDoNotContain(t, hook, "Broadcasted blob sidecar for already seen block")
server.FinalizationFetcher = &chainMock.ChainService{NotFinalized: false}
require.NoError(t, server.broadcastSeenBlockSidecars(context.Background(), blk, b.GetDeneb().Blobs, b.GetDeneb().KzgProofs))
require.LogsContain(t, hook, "Broadcasted blob sidecar for already seen block")
}

View File

@@ -369,16 +369,7 @@ func decodeIds(w http.ResponseWriter, st state.BeaconState, rawIds []string, ign
func valsFromIds(w http.ResponseWriter, st state.BeaconState, ids []primitives.ValidatorIndex) ([]state.ReadOnlyValidator, bool) {
var vals []state.ReadOnlyValidator
if len(ids) == 0 {
allVals := st.Validators()
vals = make([]state.ReadOnlyValidator, len(allVals))
for i, val := range allVals {
readOnlyVal, err := statenative.NewValidator(val)
if err != nil {
httputil.HandleError(w, "Could not convert validator: "+err.Error(), http.StatusInternalServerError)
return nil, false
}
vals[i] = readOnlyVal
}
vals = st.ValidatorsReadOnly()
} else {
vals = make([]state.ReadOnlyValidator, 0, len(ids))
for _, id := range ids {

View File

@@ -38,8 +38,7 @@ func TestBlobs(t *testing.T) {
denebBlock, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 123, 4)
require.NoError(t, db.SaveBlock(context.Background(), denebBlock))
bs := filesystem.NewEphemeralBlobStorage(t)
testSidecars, err := verification.BlobSidecarSliceNoop(blobs)
require.NoError(t, err)
testSidecars := verification.FakeVerifySliceForTest(t, blobs)
for i := range testSidecars {
require.NoError(t, bs.Save(testSidecars[i]))
}

View File

@@ -108,7 +108,7 @@ func (*Server) GetVersion(w http.ResponseWriter, r *http.Request) {
// GetHealth returns node health status in http status codes. Useful for load balancers.
func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "node.GetHealth")
ctx, span := trace.StartSpan(r.Context(), "node.GetHealth")
defer span.End()
rawSyncingStatus, syncingStatus, ok := shared.UintFromQuery(w, r, "syncing_status", false)
@@ -119,10 +119,14 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
return
}
if s.SyncChecker.Synced() {
optimistic, err := s.OptimisticModeFetcher.IsOptimistic(ctx)
if err != nil {
httputil.HandleError(w, "Could not check optimistic status: "+err.Error(), http.StatusInternalServerError)
return
}
if s.SyncChecker.Synced() && !optimistic {
return
}
if s.SyncChecker.Syncing() || s.SyncChecker.Initialized() {
if s.SyncChecker.Syncing() || optimistic {
if rawSyncingStatus != "" {
w.WriteHeader(intSyncingStatus)
} else {
@@ -130,5 +134,6 @@ func (s *Server) GetHealth(w http.ResponseWriter, r *http.Request) {
}
return
}
w.WriteHeader(http.StatusServiceUnavailable)
}
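Distilled, the updated handler maps node state to HTTP codes as follows (a sketch of the branching above, not the actual handler; a syncing_status query parameter overrides the 206):
func healthStatus(synced, syncing, optimistic bool) int {
    switch {
    case synced && !optimistic:
        return http.StatusOK // 200
    case syncing || optimistic:
        return http.StatusPartialContent // 206 unless syncing_status overrides it
    default:
        return http.StatusServiceUnavailable // 503
    }
}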

View File

@@ -91,8 +91,10 @@ func TestGetVersion(t *testing.T) {
func TestGetHealth(t *testing.T) {
checker := &syncmock.Sync{}
optimisticFetcher := &mock.ChainService{Optimistic: false}
s := &Server{
SyncChecker: checker,
SyncChecker: checker,
OptimisticModeFetcher: optimisticFetcher,
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
@@ -101,25 +103,30 @@ func TestGetHealth(t *testing.T) {
s.GetHealth(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
checker.IsInitialized = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPartialContent, writer.Code)
checker.IsSyncing = true
checker.IsSynced = false
request = httptest.NewRequest(http.MethodGet, fmt.Sprintf("http://example.com/eth/v1/node/health?syncing_status=%d", http.StatusPaymentRequired), nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPaymentRequired, writer.Code)
checker.IsSyncing = false
checker.IsSynced = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
checker.IsSyncing = false
checker.IsSynced = true
optimisticFetcher.Optimistic = true
request = httptest.NewRequest(http.MethodGet, "http://example.com/eth/v1/node/health", nil)
writer = httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetHealth(writer, request)
assert.Equal(t, http.StatusPartialContent, writer.Code)
}
func TestGetIdentity(t *testing.T) {

View File

@@ -15,7 +15,9 @@ go_library(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -35,7 +37,10 @@ go_library(
go_test(
name = "go_default_test",
srcs = ["handlers_test.go"],
srcs = [
"handlers_test.go",
"service_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
@@ -43,6 +48,8 @@ go_test(
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/db/testing:go_default_library",
"//beacon-chain/rpc/testutil:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/stategen/mock:go_default_library",

View File

@@ -18,6 +18,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/testutil"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
mockstategen "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen/mock"
@@ -192,6 +193,7 @@ func BlockRewardTestSetup(t *testing.T, forkName string) (state.BeaconState, int
}
func TestBlockRewards(t *testing.T) {
db := dbutil.SetupDB(t)
phase0block, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
t.Run("phase 0", func(t *testing.T) {
@@ -227,7 +229,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -260,7 +265,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -293,7 +301,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -326,7 +337,10 @@ func TestBlockRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: db,
},
}
url := "http://only.the.slot.number.at.the.end.is.important/2"
@@ -715,7 +729,9 @@ func TestSyncCommiteeRewards(t *testing.T) {
}},
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
BlockRewardFetcher: &BlockRewardService{Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st))},
BlockRewardFetcher: &BlockRewardService{
Replayer: mockstategen.NewReplayerBuilder(mockstategen.WithMockState(st)),
DB: dbutil.SetupDB(t)},
}
t.Run("ok - filtered vals", func(t *testing.T) {

View File

@@ -8,7 +8,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
coreblocks "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -26,6 +28,7 @@ type BlockRewardsFetcher interface {
// BlockRewardService implements BlockRewardsFetcher and can be declared to access the underlying functions
type BlockRewardService struct {
Replayer stategen.ReplayerBuilder
DB db.HeadAccessDatabase
}
// GetBlockRewardsData returns the BlockRewards object which is used for the BlockRewardsResponse and ProduceBlockV3.
@@ -124,6 +127,22 @@ func (rs *BlockRewardService) GetStateForRewards(ctx context.Context, blk interf
// We want to run several block processing functions that update the proposer's balance.
// This will allow us to calculate proposer rewards for each operation (atts, slashings etc).
// To do this, we replay the state up to the block's slot, but before processing the block.
// Try getting the state from the next slot cache first.
_, prevSlotRoots, err := rs.DB.BlockRootsBySlot(ctx, slots.PrevSlot(blk.Slot()))
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get roots for previous slot: " + err.Error(),
Code: http.StatusInternalServerError,
}
}
for _, r := range prevSlotRoots {
s := transition.NextSlotState(r[:], blk.Slot())
if s != nil {
return s, nil
}
}
st, err := rs.Replayer.ReplayerForSlot(slots.PrevSlot(blk.Slot())).ReplayToSlot(ctx, blk.Slot())
if err != nil {
return nil, &httputil.DefaultJsonError{

View File

@@ -0,0 +1,46 @@
package rewards
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
dbutil "github.com/prysmaticlabs/prysm/v5/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestGetStateForRewards_NextSlotCacheHit(t *testing.T) {
ctx := context.Background()
db := dbutil.SetupDB(t)
st, err := util.NewBeaconStateDeneb()
require.NoError(t, err)
b := util.HydrateSignedBeaconBlockDeneb(util.NewBeaconBlockDeneb())
parent, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, db.SaveBlock(ctx, parent))
r, err := parent.Block().HashTreeRoot()
require.NoError(t, err)
require.NoError(t, transition.UpdateNextSlotCache(ctx, r[:], st))
s := &BlockRewardService{
Replayer: nil, // setting to nil because replayer must not be invoked
DB: db,
}
b = util.HydrateSignedBeaconBlockDeneb(util.NewBeaconBlockDeneb())
sbb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
sbb.SetSlot(parent.Block().Slot() + 1)
result, err := s.GetStateForRewards(ctx, sbb.Block())
require.NoError(t, err)
_, lcs := transition.LastCachedState()
expected, err := lcs.HashTreeRoot(ctx)
require.NoError(t, err)
actual, err := result.HashTreeRoot(ctx)
require.NoError(t, err)
assert.DeepEqual(t, expected, actual)
}

View File

@@ -223,7 +223,7 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, indices []uint64
return nil, &core.RpcError{Err: errors.Wrap(err, "failed to retrieve block from db"), Reason: core.Internal}
}
// if block is not in the retention window return 200 w/ empty list
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
if !p.BlobStorage.WithinRetentionPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(p.GenesisTimeFetcher.CurrentSlot())) {
return make([]*blocks.VerifiedROBlob, 0), nil
}
commitments, err := b.Block().Body().BlobKzgCommitments()

View File

@@ -164,10 +164,8 @@ func TestGetBlob(t *testing.T) {
db := testDB.SetupDB(t)
denebBlock, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 123, 4)
require.NoError(t, db.SaveBlock(context.Background(), denebBlock))
_, bs, err := filesystem.NewEphemeralBlobStorageWithFs(t)
require.NoError(t, err)
testSidecars, err := verification.BlobSidecarSliceNoop(blobs)
require.NoError(t, err)
_, bs := filesystem.NewEphemeralBlobStorageAndFs(t)
testSidecars := verification.FakeVerifySliceForTest(t, blobs)
for i := range testSidecars {
require.NoError(t, bs.Save(testSidecars[i]))
}

View File

@@ -4,10 +4,10 @@ go_library(
name = "go_default_library",
srcs = [
"aggregator.go",
"duties.go",
"attester.go",
"blocks.go",
"construct_generic_block.go",
"duties.go",
"exit.go",
"log.go",
"proposer.go",
@@ -179,10 +179,10 @@ go_test(
timeout = "moderate",
srcs = [
"aggregator_test.go",
"duties_test.go",
"attester_test.go",
"blocks_test.go",
"construct_generic_block_test.go",
"duties_test.go",
"exit_test.go",
"proposer_altair_test.go",
"proposer_attestations_test.go",
@@ -201,6 +201,7 @@ go_test(
"status_mainnet_test.go",
"status_test.go",
"sync_committee_test.go",
"unblinder_test.go",
"validator_test.go",
],
embed = [":go_default_library"],

View File

@@ -341,7 +341,7 @@ func (vs *Server) handleUnblindedBlock(block interfaces.SignedBeaconBlock, req *
if dbBlockContents == nil {
return nil, nil
}
return buildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
return BuildBlobSidecars(block, dbBlockContents.Blobs, dbBlockContents.KzgProofs)
}
// broadcastReceiveBlock broadcasts a block and handles its reception.

View File

@@ -27,11 +27,20 @@ import (
"go.opencensus.io/trace"
)
// builderGetPayloadMissCount tracks the number of misses when validator tries to get a payload from builder
var builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "builder_get_payload_miss_count",
Help: "The number of get payload misses for validator requests to builder",
})
var (
builderValueGweiGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "builder_value_gwei",
Help: "Builder payload value in gwei",
})
localValueGweiGauge = promauto.NewGauge(prometheus.GaugeOpts{
Name: "local_value_gwei",
Help: "Local payload value in gwei",
})
builderGetPayloadMissCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "builder_get_payload_miss_count",
Help: "The number of get payload misses for validator requests to builder",
})
)
// emptyTransactionsRoot represents the returned value of ssz.TransactionsRoot([][]byte{}) and
// can be used as a constant to avoid recomputing this value in every call.
@@ -92,6 +101,8 @@ func setExecutionData(ctx context.Context, blk interfaces.SignedBeaconBlock, loc
"builderBoostFactor": builderBoostFactor,
}).Warn("Proposer: both local boost and builder boost are using non default values")
}
builderValueGweiGauge.Set(float64(builderValueGwei))
localValueGweiGauge.Set(float64(localValueGwei))
// If we can't get the builder value, just use local block.
if higherValueBuilder && withdrawalsMatched { // Builder value is higher and withdrawals match.

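With both gauges registered, the builder bid premium can be watched directly in Prometheus; a one-line PromQL sketch over the metric names above:
builder_value_gwei - local_value_gwei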
View File

@@ -56,8 +56,8 @@ func (c *blobsBundleCache) prune(minSlot primitives.Slot) {
}
}
// buildBlobSidecars given a block, builds the blob sidecars for the block.
func buildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgProofs [][]byte) ([]*ethpb.BlobSidecar, error) {
// BuildBlobSidecars given a block, builds the blob sidecars for the block.
func BuildBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, kzgProofs [][]byte) ([]*ethpb.BlobSidecar, error) {
if blk.Version() < version.Deneb {
return nil, nil // No blobs before deneb.
}

View File

@@ -51,7 +51,7 @@ func TestServer_buildBlobSidecars(t *testing.T) {
require.NoError(t, blk.SetBlobKzgCommitments(kzgCommitments))
proof, err := hexutil.Decode("0xb4021b0de10f743893d4f71e1bf830c019e832958efd6795baf2f83b8699a9eccc5dc99015d8d4d8ec370d0cc333c06a")
require.NoError(t, err)
scs, err := buildBlobSidecars(blk, [][]byte{
scs, err := BuildBlobSidecars(blk, [][]byte{
make([]byte, fieldparams.BlobLength), make([]byte, fieldparams.BlobLength),
}, [][]byte{
proof, proof,

View File

@@ -13,18 +13,25 @@ import (
)
func unblindBlobsSidecars(block interfaces.SignedBeaconBlock, bundle *enginev1.BlobsBundle) ([]*ethpb.BlobSidecar, error) {
if block.Version() < version.Deneb || bundle == nil {
if block.Version() < version.Deneb {
return nil, nil
}
header, err := block.Header()
if err != nil {
return nil, err
}
body := block.Block().Body()
blockCommitments, err := body.BlobKzgCommitments()
if err != nil {
return nil, err
}
if len(blockCommitments) == 0 {
return nil, nil
}
// Do not allow builders to omit blob bundles for blocks which carry commitments.
if bundle == nil {
return nil, errors.New("no valid bundle provided")
}
header, err := block.Header()
if err != nil {
return nil, err
}
// Ensure there are equal counts of blobs/commitments/proofs.
if len(bundle.KzgCommitments) != len(bundle.Blobs) {

View File

@@ -0,0 +1,34 @@
package validator
import (
"testing"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
)
func TestUnblinder_UnblindBlobSidecars_InvalidBundle(t *testing.T) {
wBlock, err := consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
Body: &ethpb.BeaconBlockBodyDeneb{},
},
Signature: nil,
})
assert.NoError(t, err)
_, err = unblindBlobsSidecars(wBlock, nil)
assert.NoError(t, err)
wBlock, err = consensusblocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockDeneb{
Block: &ethpb.BeaconBlockDeneb{
Body: &ethpb.BeaconBlockBodyDeneb{
BlobKzgCommitments: [][]byte{[]byte("a"), []byte("b")},
},
},
Signature: nil,
})
assert.NoError(t, err)
_, err = unblindBlobsSidecars(wBlock, nil)
assert.ErrorContains(t, "no valid bundle provided", err)
}

View File

@@ -215,7 +215,7 @@ func NewService(ctx context.Context, cfg *Config) *Service {
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,
BlobStorage: s.cfg.BlobStorage,
}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch}
rewardFetcher := &rewards.BlockRewardService{Replayer: ch, DB: s.cfg.BeaconDB}
coreService := &core.Service{
HeadFetcher: s.cfg.HeadFetcher,
GenesisTimeFetcher: s.cfg.GenesisTimeFetcher,

View File

@@ -19,6 +19,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher",
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl:__subpackages__",
"//testing/slasher/simulator:__subpackages__",
],
deps = [
@@ -27,6 +28,7 @@ go_library(
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/operations/slashings:go_default_library",
"//beacon-chain/slasher/types:go_default_library",
"//beacon-chain/startup:go_default_library",
@@ -45,6 +47,7 @@ go_library(
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_exp//maps:go_default_library",
],
)

View File

@@ -208,8 +208,8 @@ func (m *MinSpanChunksSlice) CheckSlashable(
}
if existingAttWrapper == nil {
// This case should normally not happen. If this happen, it means we previously
// recorded in our min/max DB an distance corresponding to an attestaiton, but WITHOUT
// This case should normally not happen. If this happens, it means we previously
// recorded in our min/max DB a distance corresponding to an attestation, but WITHOUT
// recording the attestation itself. As a consequence, we say there is no surrounding vote,
// but we log an error.
fields := logrus.Fields{
@@ -287,8 +287,8 @@ func (m *MaxSpanChunksSlice) CheckSlashable(
}
if existingAttWrapper == nil {
// This case should normally not happen. If this happen, it means we previously
// recorded in our min/max DB an distance corresponding to an attestaiton, but WITHOUT
// This case should normally not happen. If this happens, it means we previously
// recorded in our min/max DB a distance corresponding to an attestation, but WITHOUT
// recording the attestation itself. As a consequence, we say there is no surrounded vote,
// but we log an error.
fields := logrus.Fields{

View File

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"go.opencensus.io/trace"
"golang.org/x/exp/maps"
)
// Takes in a list of indexed attestation wrappers and returns any
@@ -131,7 +132,7 @@ func (s *Service) checkSurroundVotes(
}
// Update the latest updated epoch for all validators involved to the current chunk.
indexes := s.params.validatorIndexesInChunk(validatorChunkIndex)
indexes := s.params.ValidatorIndexesInChunk(validatorChunkIndex)
for _, index := range indexes {
s.latestEpochUpdatedForValidator[index] = currentEpoch
}
@@ -272,44 +273,20 @@ func (s *Service) updatedChunkByChunkIndex(
// minFirstEpochToUpdate is set to the smallest first epoch to update for all validators in the chunk
// corresponding to the `validatorChunkIndex`.
var minFirstEpochToUpdate *primitives.Epoch
var (
minFirstEpochToUpdate *primitives.Epoch
neededChunkIndexesMap map[uint64]bool
neededChunkIndexesMap := map[uint64]bool{}
err error
)
validatorIndexes := s.params.ValidatorIndexesInChunk(validatorChunkIndex)
validatorIndexes := s.params.validatorIndexesInChunk(validatorChunkIndex)
for _, validatorIndex := range validatorIndexes {
// Retrieve the first epoch to write for the validator index.
isAnEpochToUpdate, firstEpochToUpdate, err := s.firstEpochToUpdate(validatorIndex, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get first epoch to write for validator index %d with current epoch %d", validatorIndex, currentEpoch)
}
if !isAnEpochToUpdate {
// If there is no epoch to write, skip.
continue
}
// If, for this validator index, the chunk corresponding to the first epoch to write
// (and all following epochs until the current epoch) are already flagged as needed,
// skip.
if minFirstEpochToUpdate != nil && *minFirstEpochToUpdate <= firstEpochToUpdate {
continue
}
minFirstEpochToUpdate = &firstEpochToUpdate
// Add new needed chunk indexes to the map.
for i := firstEpochToUpdate; i <= currentEpoch; i++ {
chunkIndex := s.params.chunkIndex(i)
neededChunkIndexesMap[chunkIndex] = true
}
if neededChunkIndexesMap, err = s.findNeededChunkIndexes(validatorIndexes, currentEpoch, minFirstEpochToUpdate); err != nil {
return nil, errors.Wrap(err, "could not find the needed chunk indexes")
}
// Get the list of needed chunk indexes.
neededChunkIndexes := make([]uint64, 0, len(neededChunkIndexesMap))
for chunkIndex := range neededChunkIndexesMap {
neededChunkIndexes = append(neededChunkIndexes, chunkIndex)
}
// Transform the map of needed chunk indexes to a slice.
neededChunkIndexes := maps.Keys(neededChunkIndexesMap)
// Retrieve needed chunks from the database.
chunkByChunkIndex, err := s.loadChunksFromDisk(ctx, validatorChunkIndex, chunkKind, neededChunkIndexes)
@@ -332,7 +309,7 @@ func (s *Service) updatedChunkByChunkIndex(
epochToUpdate := firstEpochToUpdate
for epochToUpdate <= currentEpoch {
// Get the chunk index for the ecpoh to write.
// Get the chunk index for the epoch to write.
chunkIndex := s.params.chunkIndex(epochToUpdate)
// Get the chunk corresponding to the chunk index from the `chunkByChunkIndex` map.
@@ -363,6 +340,45 @@ func (s *Service) updatedChunkByChunkIndex(
return chunkByChunkIndex, nil
}
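The refactor replaces the hand-rolled key-collection loop with maps.Keys from golang.org/x/exp/maps. A minimal sketch of that call; key order is unspecified, which is acceptable here because the indexes only describe a set of chunks to fetch:
import "golang.org/x/exp/maps"
needed := map[uint64]bool{3: true, 1: true, 7: true}
chunkIndexes := maps.Keys(needed) // []uint64{1, 3, 7} in some unspecified order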
// findNeededChunkIndexes returns a map of needed chunk indexes.
// It loops over the validator indexes and finds the first epoch to update for each validator index.
func (s *Service) findNeededChunkIndexes(
validatorIndexes []primitives.ValidatorIndex,
currentEpoch primitives.Epoch,
minFirstEpochToUpdate *primitives.Epoch,
) (map[uint64]bool, error) {
neededChunkIndexesMap := map[uint64]bool{}
for _, validatorIndex := range validatorIndexes {
// Retrieve the first epoch to write for the validator index.
isAnEpochToUpdate, firstEpochToUpdate, err := s.firstEpochToUpdate(validatorIndex, currentEpoch)
if err != nil {
return nil, errors.Wrapf(err, "could not get first epoch to write for validator index %d with current epoch %d", validatorIndex, currentEpoch)
}
if !isAnEpochToUpdate {
// If there is no epoch to write, skip.
continue
}
// If, for this validator index, the chunk corresponding to the first epoch to write
// (and all following epochs until the current epoch) are already flagged as needed,
// skip.
if minFirstEpochToUpdate != nil && *minFirstEpochToUpdate <= firstEpochToUpdate {
continue
}
minFirstEpochToUpdate = &firstEpochToUpdate
// Add new needed chunk indexes to the map.
for i := firstEpochToUpdate; i <= currentEpoch; i++ {
chunkIndex := s.params.chunkIndex(i)
neededChunkIndexesMap[chunkIndex] = true
}
}
return neededChunkIndexesMap, nil
}
// firstEpochToUpdate, given a validator index and the current epoch, returns a boolean indicating
// if there is an epoch to write. If it is the case, it returns the first epoch to write.
func (s *Service) firstEpochToUpdate(validatorIndex primitives.ValidatorIndex, currentEpoch primitives.Epoch) (bool, primitives.Epoch, error) {

View File

@@ -1059,7 +1059,7 @@ func Test_updatedChunkByChunkIndex(t *testing.T) {
// Initialize the slasher database.
slasherDB := dbtest.SetupSlasherDB(t)
// Intialize the slasher service.
// Initialize the slasher service.
service := &Service{
params: &Parameters{
chunkSize: tt.chunkSize,
@@ -1502,7 +1502,7 @@ func runAttestationsBenchmark(b *testing.B, s *Service, numAtts, numValidators u
func Benchmark_checkSurroundVotes(b *testing.B) {
const (
// Approximatively the number of Holesky active validators on 2024-02-16
// Approximately the number of Holesky active validators on 2024-02-16
// This number is both a multiple of 32 (the number of slots per epoch) and 256 (the number of validators per chunk)
validatorsCount = 1_638_400
slotsPerEpoch = 32
@@ -1526,7 +1526,7 @@ func Benchmark_checkSurroundVotes(b *testing.B) {
// So for 1_638_400 validators with 32 slots per epoch, we would have 48_000 attestation wrappers per slot.
// With 256 validators per chunk, we would have only 188 modified chunks.
//
// In this benchmark, we use the worst case scenario where attestating validators are evenly splitted across all validators chunks.
// In this benchmark, we use the worst case scenario where attesting validators are evenly split across all validators chunks.
// We also suppose that only one chunk per validator chunk index is modified.
// For one given validator index, multiple chunk indexes could be modified.
//

View File

@@ -135,7 +135,7 @@ With 1_048_576 validators, we need 4096 * 2MB = 8GB
Storing both MIN and MAX spans for 1_048_576 validators takes 16GB.
Each chunk is stored snappy-compressed in the database.
If all validators attest ideally, a MIN SPAN chunk will contain only `2`s, and and MAX SPAN chunk will contain only `0`s.
If all validators attest ideally, a MIN SPAN chunk will contain only `2`s, and a MAX SPAN chunk will contain only `0`s.
This will compress very well, and will let us store a lot of data in a small amount of space.
*/

View File

@@ -2,8 +2,11 @@ package slasher
import (
"bytes"
"context"
"fmt"
"strconv"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/slasherkv"
slashertypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
@@ -76,7 +79,7 @@ func (s *Service) filterAttestations(
continue
}
// If an attestations's target epoch is in the future, we defer processing for later.
// If an attestation's target epoch is in the future, we defer processing for later.
if attWrapper.IndexedAttestation.Data.Target.Epoch > currentEpoch {
validInFuture = append(validInFuture, attWrapper)
continue
@@ -159,3 +162,93 @@ func isDoubleProposal(incomingSigningRoot, existingSigningRoot [32]byte) bool {
}
return incomingSigningRoot != existingSigningRoot
}
type GetChunkFromDatabaseFilters struct {
ChunkKind slashertypes.ChunkKind
ValidatorIndex primitives.ValidatorIndex
SourceEpoch primitives.Epoch
IsDisplayAllValidatorsInChunk bool
IsDisplayAllEpochsInChunk bool
}
// GetChunkFromDatabase is a utility function that retrieves a chunk from the
// slasher database.
func GetChunkFromDatabase(
ctx context.Context,
dbPath string,
filters GetChunkFromDatabaseFilters,
params *Parameters,
) (lastEpochForValidatorIndex primitives.Epoch, chunkIndex, validatorChunkIndex uint64, chunk Chunker, err error) {
// init store
d, err := slasherkv.NewKVStore(ctx, dbPath)
if err != nil {
return lastEpochForValidatorIndex, chunkIndex, validatorChunkIndex, chunk, fmt.Errorf("could not open database at path %s: %w", dbPath, err)
}
defer closeDB(d)
// init service
s := Service{
params: params,
serviceCfg: &ServiceConfig{
Database: d,
},
}
// variables
validatorIndex := filters.ValidatorIndex
sourceEpoch := filters.SourceEpoch
chunkKind := filters.ChunkKind
validatorChunkIndex = s.params.validatorChunkIndex(validatorIndex)
chunkIndex = s.params.chunkIndex(sourceEpoch)
// before getting the chunk, we need to verify that the requested epoch is in the database
lastEpochForValidator, err := s.serviceCfg.Database.LastEpochWrittenForValidators(ctx, []primitives.ValidatorIndex{validatorIndex})
if err != nil {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get last epoch written for validator %d: %w", validatorIndex, err)
}
if len(lastEpochForValidator) == 0 {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get information at epoch %d for validator %d: there's no record found in slasher database",
sourceEpoch, validatorIndex,
)
}
lastEpochForValidatorIndex = lastEpochForValidator[0].Epoch
// if the epoch requested is within the range, we can proceed to get the chunk, otherwise return error
atBestSmallestEpoch := lastEpochForValidatorIndex.Sub(uint64(params.historyLength))
if sourceEpoch < atBestSmallestEpoch || sourceEpoch > lastEpochForValidatorIndex {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("requested epoch %d is outside the slasher history length %d, data can be provided within the epoch range [%d:%d] for validator %d",
sourceEpoch, params.historyLength, atBestSmallestEpoch, lastEpochForValidatorIndex, validatorIndex,
)
}
// fetch chunk from DB
chunk, err = s.getChunkFromDatabase(ctx, chunkKind, validatorChunkIndex, chunkIndex)
if err != nil {
return lastEpochForValidatorIndex,
chunkIndex,
validatorChunkIndex,
chunk,
fmt.Errorf("could not get chunk at index %d: %w", chunkIndex, err)
}
return lastEpochForValidatorIndex, chunkIndex, validatorChunkIndex, chunk, nil
}
func closeDB(d *slasherkv.Store) {
if err := d.Close(); err != nil {
log.WithError(err).Error("could not close database")
}
}
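A hedged usage sketch for the new helper, as a caller such as prysmctl might invoke it (the filter values and dbPath are illustrative, not from this diff):
filters := slasher.GetChunkFromDatabaseFilters{
    ChunkKind:      slashertypes.MinSpan,
    ValidatorIndex: primitives.ValidatorIndex(12345),
    SourceEpoch:    primitives.Epoch(100),
}
lastEpoch, chunkIndex, validatorChunkIndex, chunk, err := slasher.GetChunkFromDatabase(ctx, dbPath, filters, slasher.DefaultParams())
if err != nil {
    log.Fatal(err)
}
_ = lastEpoch // latest epoch written for the validator, bounds the valid SourceEpoch range
_, _, _ = chunkIndex, validatorChunkIndex, chunk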

View File

@@ -16,6 +16,21 @@ type Parameters struct {
historyLength primitives.Epoch // H - defines how many epochs we keep of min or max spans.
}
// ChunkSize returns the chunk size.
func (p *Parameters) ChunkSize() uint64 {
return p.chunkSize
}
// ValidatorChunkSize returns the validator chunk size.
func (p *Parameters) ValidatorChunkSize() uint64 {
return p.validatorChunkSize
}
// HistoryLength returns the history length.
func (p *Parameters) HistoryLength() primitives.Epoch {
return p.historyLength
}
// DefaultParams defines default values for slasher's important parameters, defined
// based on optimization analysis for best and worst case scenarios for
// slasher's performance.
@@ -32,7 +47,15 @@ func DefaultParams() *Parameters {
}
}
// Validator min and max spans are split into chunks of length C = chunkSize.
func NewParams(chunkSize, validatorChunkSize uint64, historyLength primitives.Epoch) *Parameters {
return &Parameters{
chunkSize: chunkSize,
validatorChunkSize: validatorChunkSize,
historyLength: historyLength,
}
}
// ChunkIndex Validator min and max spans are split into chunks of length C = chunkSize.
// That is, if we are keeping N epochs worth of attesting history, finding what
// chunk a certain epoch, e, falls into can be computed as (e % N) / C. For example,
// if we are keeping 6 epochs worth of data, and we have chunks of size 2, then epoch
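Continuing the arithmetic in the comment above: with history length N = 6 and chunk size C = 2, epoch 4 falls into chunk index (4 % 6) / 2 = 2. In-package (chunkIndex is unexported), that is:
p := NewParams(2, 256, 6) // chunkSize=2, validatorChunkSize=256, historyLength=6
fmt.Println(p.chunkIndex(primitives.Epoch(4))) // 2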
@@ -139,9 +162,9 @@ func (p *Parameters) flatSliceID(validatorChunkIndex, chunkIndex uint64) []byte
return ssz.MarshalUint64(make([]byte, 0), uint64(width.Mul(validatorChunkIndex).Add(chunkIndex)))
}
// Given a validator chunk index, we determine all of the validator
// ValidatorIndexesInChunk Given a validator chunk index, we determine all the validator
// indices that will belong in that chunk.
func (p *Parameters) validatorIndexesInChunk(validatorChunkIndex uint64) []primitives.ValidatorIndex {
func (p *Parameters) ValidatorIndexesInChunk(validatorChunkIndex uint64) []primitives.ValidatorIndex {
validatorIndices := make([]primitives.ValidatorIndex, 0)
low := validatorChunkIndex * p.validatorChunkSize
high := (validatorChunkIndex + 1) * p.validatorChunkSize

View File

@@ -468,7 +468,7 @@ func TestParams_validatorIndicesInChunk(t *testing.T) {
c := &Parameters{
validatorChunkSize: tt.fields.validatorChunkSize,
}
if got := c.validatorIndexesInChunk(tt.validatorChunkIdx); !reflect.DeepEqual(got, tt.want) {
if got := c.ValidatorIndexesInChunk(tt.validatorChunkIdx); !reflect.DeepEqual(got, tt.want) {
t.Errorf("validatorIndicesInChunk() = %v, want %v", got, tt.want)
}
})

View File

@@ -161,7 +161,7 @@ func (s *Service) Stop() error {
ctx, innerCancel := context.WithTimeout(context.Background(), shutdownTimeout)
defer innerCancel()
log.Info("Flushing last epoch written for each validator to disk, please wait")
if err := s.serviceCfg.Database.SaveLastEpochsWrittenForValidators(
if err := s.serviceCfg.Database.SaveLastEpochWrittenForValidators(
ctx, s.latestEpochUpdatedForValidator,
); err != nil {
log.Error(err)

View File

@@ -4,7 +4,10 @@ go_library(
name = "go_default_library",
srcs = ["types.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types",
visibility = ["//beacon-chain:__subpackages__"],
visibility = [
"//beacon-chain:__subpackages__",
"//cmd/prysmctl:__subpackages__",
],
deps = [
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -14,6 +14,18 @@ const (
MaxSpan
)
// String returns the string representation of the chunk kind.
func (c ChunkKind) String() string {
switch c {
case MinSpan:
return "minspan"
case MaxSpan:
return "maxspan"
default:
return "unknown"
}
}
// IndexedAttestationWrapper contains an indexed attestation with its
// data root to reduce duplicated computation.
type IndexedAttestationWrapper struct {

View File

@@ -118,6 +118,7 @@ type ReadOnlyValidator interface {
// ReadOnlyValidators defines a struct which only has read access to validators methods.
type ReadOnlyValidators interface {
Validators() []*ethpb.Validator
ValidatorsReadOnly() []ReadOnlyValidator
ValidatorAtIndex(idx primitives.ValidatorIndex) (*ethpb.Validator, error)
ValidatorAtIndexReadOnly(idx primitives.ValidatorIndex) (ReadOnlyValidator, error)
ValidatorIndexByPubkey(key [fieldparams.BLSPubkeyLength]byte) (primitives.ValidatorIndex, bool)

View File

@@ -21,6 +21,14 @@ func (b *BeaconState) Validators() []*ethpb.Validator {
return b.validatorsVal()
}
// ValidatorsReadOnly returns read-only validators participating in consensus on the beacon chain.
func (b *BeaconState) ValidatorsReadOnly() []state.ReadOnlyValidator {
b.lock.RLock()
defer b.lock.RUnlock()
return b.validatorsReadOnlyVal()
}
func (b *BeaconState) validatorsVal() []*ethpb.Validator {
var v []*ethpb.Validator
if features.Get().EnableExperimentalState {
@@ -46,6 +54,35 @@ func (b *BeaconState) validatorsVal() []*ethpb.Validator {
return res
}
func (b *BeaconState) validatorsReadOnlyVal() []state.ReadOnlyValidator {
var v []*ethpb.Validator
if features.Get().EnableExperimentalState {
if b.validatorsMultiValue == nil {
return nil
}
v = b.validatorsMultiValue.Value(b)
} else {
if b.validators == nil {
return nil
}
v = b.validators
}
res := make([]state.ReadOnlyValidator, len(v))
var err error
for i := 0; i < len(res); i++ {
val := v[i]
if val == nil {
continue
}
res[i], err = NewValidator(val)
if err != nil {
continue
}
}
return res
}
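This read-only accessor skips the per-validator deep copy that Validators() performs, which is what the updated Beacon API handler (valsFromIds) above relies on. A minimal consumer sketch, assuming st exposes the new method:
for _, v := range st.ValidatorsReadOnly() {
    if v == nil {
        continue // entries whose conversion failed are left nil
    }
    _ = v.EffectiveBalance() // reads go through the lightweight wrapper, no copy
}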
// references of validators participating in consensus on the beacon chain.
// This assumes that a lock is already held on BeaconState. This does not
// copy fully and instead just copies the reference.

View File

@@ -1,6 +1,7 @@
package backfill
import (
"context"
"fmt"
"sort"
"time"
@@ -55,6 +56,8 @@ const (
batchEndSequence
)
var retryDelay = time.Second
type batchId string
type batch struct {
@@ -62,6 +65,7 @@ type batch struct {
scheduled time.Time
seq int // sequence identifier, ie how many times has the sequence() method served this batch
retries int
retryAfter time.Time
begin primitives.Slot
end primitives.Slot // half-open interval, [begin, end), ie >= start, < end.
results verifiedROBlocks
@@ -74,7 +78,7 @@ type batch struct {
}
func (b batch) logFields() logrus.Fields {
return map[string]interface{}{
f := map[string]interface{}{
"batchId": b.id(),
"state": b.state.String(),
"scheduled": b.scheduled.String(),
@@ -86,6 +90,10 @@ func (b batch) logFields() logrus.Fields {
"blockPid": b.blockPid,
"blobPid": b.blobPid,
}
if b.retries > 0 {
f["retryAfter"] = b.retryAfter.String()
}
return f
}
func (b batch) replaces(r batch) bool {
@@ -153,7 +161,8 @@ func (b batch) withState(s batchState) batch {
switch b.state {
case batchErrRetryable:
b.retries += 1
log.WithFields(b.logFields()).Info("Sequencing batch for retry")
b.retryAfter = time.Now().Add(retryDelay)
log.WithFields(b.logFields()).Info("Sequencing batch for retry after delay")
case batchInit, batchNil:
b.firstScheduled = b.scheduled
}
@@ -190,8 +199,32 @@ func (b batch) availabilityStore() das.AvailabilityStore {
return b.bs.store
}
var batchBlockUntil = func(ctx context.Context, untilRetry time.Duration, b batch) error {
log.WithFields(b.logFields()).WithField("untilRetry", untilRetry.String()).
Debug("Sleeping for retry backoff delay")
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(untilRetry):
return nil
}
}
func (b batch) waitUntilReady(ctx context.Context) error {
// Wait to retry a failed batch to avoid hammering peers
// if we've hit a state where batches will consistently fail.
// Avoids spamming requests and logs.
if b.retries > 0 {
untilRetry := time.Until(b.retryAfter)
if untilRetry > time.Millisecond {
return batchBlockUntil(ctx, untilRetry, b)
}
}
return nil
}
func sortBatchDesc(bb []batch) {
sort.Slice(bb, func(i, j int) bool {
return bb[j].end < bb[i].end
return bb[i].end > bb[j].end
})
}

View File

@@ -1,8 +1,11 @@
package backfill
import (
"context"
"testing"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
@@ -19,3 +22,22 @@ func TestSortBatchDesc(t *testing.T) {
require.Equal(t, orderOut[i], batches[i].end)
}
}
func TestWaitUntilReady(t *testing.T) {
b := batch{}.withState(batchErrRetryable)
require.Equal(t, time.Time{}, b.retryAfter)
var got time.Duration
wur := batchBlockUntil
var errDerp = errors.New("derp")
batchBlockUntil = func(_ context.Context, ur time.Duration, _ batch) error {
got = ur
return errDerp
}
// retries counter and timestamp are set when we mark the batch for sequencing, if it is in the retry state
b = b.withState(batchSequenced)
require.ErrorIs(t, b.waitUntilReady(context.Background()), errDerp)
require.Equal(t, true, retryDelay-time.Until(b.retryAfter) < time.Millisecond)
require.Equal(t, true, got < retryDelay && got > retryDelay-time.Millisecond)
require.Equal(t, 1, b.retries)
batchBlockUntil = wur
}

Some files were not shown because too many files have changed in this diff.