Compare commits

...

20 Commits

Author SHA1 Message Date
terence
92bbf6344c Check kzg commitment for beacon-api propose block (#14702)
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
2024-12-09 14:38:28 +00:00
Nishant Das
df81fa3e9a Change Max Payload Size (#14692)
* Increase Max Payload Size

* Changelog

* Use MaxGossipSize

* Remove change
2024-12-09 13:53:24 +00:00
Rupam Dey
30a136f1fb save light client updates (diff) (#14683)
* update diff

* deps

* add tests for `SaveLightClientUpdate`

* cleanup imports

* lint

* changelog

* fix incorrect arithmetic

* check for lightclient feature flag

* fix tests

* fix `saveLightClientBootstrap` and `saveLightClientUpdate`

* replace and with or

* move feature check to `postBlockProcess`

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-12-04 21:22:43 +00:00
terence
b23c562b67 Pass alpha 9 spec tests (#14667)
* Add missed exit checks to consolidation processing

* Use safe add

* gaz

* Pass spec tests (except single attestation)

Revert params.SetupTestConfigCleanupWithLock(t)

* Update earliest exit epoch for upgrade to electra

* Validate that each committee bitfield in an aggregate contains at least one non-zero bit

* Add single attestation

* Add single attestation to ssz static

* Fix typo

Co-authored-by: Md Amaan <114795592+Redidacove@users.noreply.github.com>

* Update UpgradeToElectra comments

* Add no lint dupword

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: Md Amaan <114795592+Redidacove@users.noreply.github.com>
2024-12-04 16:08:10 +00:00
kasey
ae36630ccd Raise http body limit for fetching genesis state on Holesky (#14689)
* use larger limit when fetching genesis

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-12-03 21:08:19 +00:00
Preston Van Loon
ac72fe2e0e Remove interop genesis service from beacon node (#14417)
* Remove interop dependencies from production binary for beacon-chain. Specifically, remove the interop genesis service.

Finding links to pebble: 
bazel query 'somepath(//cmd/beacon-chain, @com_github_cockroachdb_pebble//...)' --notool_deps

* Update INTEROP.md

* Remove interop config

* Remove ancient interop script

* Add electra support for premine genesis

* Add example of --chain-config-file, test interop instructions

* Fixes

* Add binary size reduction

* Update binary size reduction

* Fix duplicate switch case

* Move CHANGELOG entries to unreleased section

* gofmt

* fix
2024-12-03 19:08:49 +00:00
Nishant Das
d09885b7ce Make QUIC The Default Transport (#14688)
* Make it the default

* Changelog

* Remove outdated flag

* Update `go-libp2p` to `v0.36.5` and `webtransport-go` to `master`.

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2024-12-03 17:00:15 +00:00
Nishant Das
dc643c9f32 Fix Deadline Again During Rollback (#14686)
* fix it again

* CHANGELOG
2024-12-02 13:29:36 +00:00
Dhruv Bodani
9fa49e7bc9 Add error counter for SSE endpoint (#14681)
* add error counter for SSE endpoint

* add changelog entry
2024-11-29 12:18:53 +00:00
Sammy Rosso
1139c90ab2 Add metadata fields to getBlobSidecars (#14677)
* add metadata fields to getBlobSidecars

* gaz

* changelog

* Dhruv's + Radek's reviews
2024-11-28 16:42:55 +00:00
Manu NALEPA
79d05a87bb listenForNewNodes and FindPeersWithSubnet: Stop using ReadNodes and use iterator instead. (#14669)
* `listenForNewNodes` and `FindPeersWithSubnet`: Stop using `ReadNodes` and use an iterator instead.

It avoids an infinite loop in small devnets.

* Update beacon-chain/p2p/discovery.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-28 11:25:28 +00:00
kasey
1707cf3ec7 http response handling improvements (#14673)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-11-27 22:13:45 +00:00
wangjingcun
bdbb850250 chore: fix 404 status URL (#14675)
Signed-off-by: wangjingcun <wangjingcun@aliyun.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-11-27 15:54:00 +00:00
Dhruv Bodani
b28b1ed6ce Add error count prom metric (#14670)
* add error count prom metric

* address review comments

* add comment for response writer

* update changelog
2024-11-27 11:56:07 +00:00
Sammy Rosso
74bb0821a8 Use slot to determine fork version (#14653)
* Use slot to determine version

* gaz

* solve cyclic dependency

* Radek's review

* unit test

* gaz

* use require instead of assert

* fix test

* fix test

* fix TestGetAggregateAttestation

* fix ListAttestations test

* James' review

* Radek's review

* add extra checks to GetAttesterSlashingsV2

* fix matchingAtts

* improve tests + fix

* fix

* stop appending all non electra atts

* more tests

* changelog

* revert TestProduceSyncCommitteeContribution changes

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
2024-11-26 22:52:58 +00:00
terence
8025a483e2 Remove kzg proof check for blob reconstructor (#14671) 2024-11-26 19:36:42 +00:00
Manu NALEPA
0475631543 Improve connection/disconnection logging. (#14665)
* Improve disconnection logs.

* Update beacon-chain/p2p/handshake.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Address Sammy's comment.

* Update beacon-chain/p2p/handshake.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Fix Sammy's comment.

* Fix Sammy's comment.

* `MockPeerManager`: Stop mixing value and pointer receivers (deepsource).

* Remove unused parameters (deepsource)

* Fix receiver names (deepsource)

* Change not after into before (deepsource)

* Update beacon-chain/p2p/handshake.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/p2p/peers/status.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-26 17:53:27 +00:00
Potuz
f27092fa91 Check if validator exists when applying pending deposit (#14666)
* Check if validator exists when applying pending deposit

* Add test TestProcessPendingDepositsMultiplesSameDeposits

* keep a map of added pubkeys

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-11-25 20:31:02 +00:00
Radosław Kapka
67cef41cbf Better attestation packing for Electra (#14534)
* Better attestation packing for Electra

* changelog <3

* bzl

* sort before constructing on-chain aggregates

* move ctx to top

* extract Electra logic and add comments

* benchmark
2024-11-25 18:41:51 +00:00
Manu NALEPA
258908d50e Diverse log improvements, comment additions and small refactors. (#14658)
* `logProposedBlock`: Fix log.

Before, the value of the pointer to the function was printed for `blockNumber`
instead of the block number itself.

* Add blob prefix before sidecars.

In order to prepare for data columns sidecars.

* Verification: Add log prefix.

* `validate_aggregate_proof.go`: Add comments.

* `blobSubscriber`: Fix error message.

* `registerHandlers`: Rename, add comments and little refactor.

* Remove duplicate `pb` vs. `ethpb` import.

* `rpc_ping.go`: Factorize / Add comments.

* `blobSidecarsByRangeRPCHandler`: Do not write error response if rate limited.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

The function itself does not know anything about the age of the beacon block.

* `beaconBlocksByRangeRPCHandler`: Refactor and add logs.

* `retentionSeconds` ==> `retentionDuration`.

* `oneEpoch`: Add documentation.

* `TestProposer_ProposeBlock_OK`: Improve error message.

* `getLocalPayloadFromEngine`: Tiny refactor.

* `eth1DataMajorityVote`: Improve log message.

* Implement `ConvertPeerIDToNodeID` and do not generate a random private key if peerDAS is enabled.

* Remove useless `_`.

* `parsePeersEnr`: Fix error messages.

* `ShouldOverrideFCU`: Fix error message.

* `blocks.go`: Minor comments improvements.

* CI: Upgrade golanci and enable spancheck.

* `ConvertPeerIDToNodeID`: Add godoc comment.

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/initial-sync/service_test.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_beacon_blocks_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_blob_sidecars_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_ping.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Remove trailing whitespace in godoc.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-25 09:22:33 +00:00
209 changed files with 7461 additions and 5789 deletions

View File

@@ -54,7 +54,7 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.55.2
version: v1.56.1
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:

View File

@@ -73,6 +73,7 @@ linters:
- promlinter
- protogetter
- revive
- spancheck
- staticcheck
- stylecheck
- tagalign

View File

@@ -27,6 +27,11 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Added validator index label to `validator_statuses` metric.
- Added Validator REST mode use of Attestation V2 endpoints and Electra attestations.
- PeerDAS: Added proto for `DataColumnIdentifier`, `DataColumnSidecar`, `DataColumnSidecarsByRangeRequest` and `MetadataV2`.
- Better attestation packing for Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14534)
- P2P: Add logs when a peer is (dis)connected. Add the reason for the disconnection when we initiate it.
- Added a Prometheus error counter metric for HTTP requests to track beacon node requests.
- Added a Prometheus error counter metric for SSE requests.
- Save light client updates and bootstraps in DB.
### Changed
@@ -61,16 +66,28 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Updated light client protobufs. [PR](https://github.com/prysmaticlabs/prysm/pull/14650)
- Added `Eth-Consensus-Version` header to `ListAttestationsV2` and `GetAggregateAttestationV2` endpoints.
- Updated light client consensus types. [PR](https://github.com/prysmaticlabs/prysm/pull/14652)
- Update earliest exit epoch for upgrade to electra
- Add missed exit checks to consolidation processing
- Fixed pending deposits processing on Electra.
- Modified `ListAttestationsV2`, `GetAttesterSlashingsV2` and `GetAggregateAttestationV2` endpoints to use slot to determine fork version.
- Improvements to HTTP response handling. [pr](https://github.com/prysmaticlabs/prysm/pull/14673)
- Updated `Blobs` endpoint to return additional metadata fields.
- Made QUIC the default method to connect with peers.
- Check that KZG commitments align with blobs and proofs for the beacon API endpoint.
- Increase Max Payload Size in Gossip.
### Deprecated
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)
- `--interop-genesis-time` and `--interop-num-validators` have been deprecated in the beacon node as the functionality has been removed. These flags have no effect.
### Removed
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
- Removed support for starting a beacon node with a deterministic interop genesis state via interop flags. Alternatively, create a genesis state with prysmctl and use `--genesis-state`. This removes about 9 MB (~11%) of unnecessary code and dependencies from the final production binary.
- Removed kzg proof check from blob reconstructor.
### Fixed
@@ -90,6 +107,11 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Fixed panic on the attestation interface caused by calling data before validation
- Corrected nil checks on some attestation interface types
- Temporary solution for handling electra attestation and attester_slashing events. [PR](https://github.com/prysmaticlabs/prysm/pull/14655)
- Diverse log improvements and comment additions.
- Validate that each committee bitfield in an aggregate contains at least one non-zero bit
- P2P: Avoid infinite loop when looking for peers in small networks.
- Fixed another rollback bug due to a context deadline.
- Fixed checkpoint sync bug on Holesky. [PR](https://github.com/prysmaticlabs/prysm/pull/14689)
### Security

View File

@@ -2,18 +2,21 @@
This README details how to set up Prysm for interop testing with other Ethereum consensus clients.
> [!IMPORTANT]
> This guide is likely to be outdated. The Prysm team does not have capacity to troubleshoot
> outdated interop guides or instructions. If you experience issues with this guide, please file an
> issue for visibility and propose fixes, if possible.
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
2. `git clone https://github.com/prysmaticlabs/prysm && cd prysm`
3. `bazel build //...`
3. `bazel build //cmd/...`
## Starting from Genesis
Prysm supports a few ways to quickly launch a beacon node from basic configurations:
- `NumValidators + GenesisTime`: Launches a beacon node by deterministically generating a state from a num-validators flag along with a genesis time **(Recommended)**
- `SSZ Genesis`: Launches a beacon node from a .ssz file containing a SSZ-encoded, genesis beacon state
Prysm can be started from a built-in mainnet genesis state, or started with a provided genesis state by
using the `--genesis-state` flag and providing a path to the genesis.ssz file.
## Generating a Genesis State
@@ -21,21 +24,34 @@ To setup the necessary files for these quick starts, Prysm provides a tool to ge
a deterministically generated set of validator private keys following the official interop YAML format
[here](https://github.com/ethereum/eth2.0-pm/blob/master/interop/mocked_start).
You can use `bazel run //tools/genesis-state-gen` to create a deterministic genesis state for interop.
You can use `prysmctl` to create a deterministic genesis state for interop.
### Usage
- **--genesis-time** uint: Unix timestamp used as the genesis time in the generated genesis state (defaults to now)
- **--num-validators** int: Number of validators to deterministically include in the generated genesis state
- **--output-ssz** string: Output filename of the SSZ marshaling of the generated genesis state
- **--config-name=interop** string: name of the beacon chain config to use when generating the state. ex mainnet|minimal|interop
The example below creates 64 validator keys, instantiates a genesis state with those 64 validators and with genesis unix timestamp 1567542540,
and finally writes a ssz encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section. When using the `--interop-*` flags, the beacon node will assume the `interop` config should be used, unless a different config is specified on the command line.
```sh
# Download (or create) a chain config file.
curl https://raw.githubusercontent.com/ethereum/consensus-specs/refs/heads/dev/configs/minimal.yaml -o /tmp/minimal.yaml
# Run prysmctl to generate genesis with a 2 minute genesis delay and 256 validators.
bazel run //cmd/prysmctl --config=minimal -- \
testnet generate-genesis \
--genesis-time-delay=120 \
--num-validators=256 \
--output-ssz=/tmp/genesis.ssz \
--chain-config-file=/tmp/minimal.yaml
```
bazel run //tools/genesis-state-gen -- --config-name interop --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
```
The flags are explained below:
- `bazel run //cmd/prysmctl` is the bazel command to compile and run prysmctl.
- `--config=minimal` is a bazel build time configuration flag to compile Prysm with minimal state constants.
- `--` is an argument divider to tell bazel that everything after it should be passed as arguments to prysmctl. Without this divider, bazel cannot tell whether the arguments are meant to be build-time or runtime arguments, so it complains and fails to build.
- `testnet` is the primary command argument for prysmctl.
- `generate-genesis` is the subcommand to `testnet` in prysmctl.
- `--genesis-time-delay` uint: The number of seconds in the future to define genesis. Example: a value of 60 will set the genesis time to 1 minute in the future. This should be large enough to allow you to start the beacon node before the genesis time.
- `--num-validators` int: Number of validators to deterministically include in the generated genesis state
- `--output-ssz` string: Output filename of the SSZ marshaling of the generated genesis state
- `--chain-config-file` string: Filepath to a chain config yaml file.
Note: This guide saves items to the `/tmp/` directory, which will not persist if your machine is
restarted. Consider tweaking the arguments if persistence is needed.
## Launching a Beacon Node + Validator Client
@@ -44,45 +60,33 @@ bazel run //tools/genesis-state-gen -- --config-name interop --output-ssz ~/Desk
Open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-num-validators 64 \
--interop-eth1data-votes
bazel run //cmd/beacon-chain --config=minimal -- \
--minimal-config \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-minimal-devnet \
--force-clear-db \
--min-sync-peers=0 \
--genesis-state=/tmp/genesis.ssz \
--chain-config-file=/tmp/minimal.yaml
```
This will deterministically generate a beacon genesis state and start
the system with 64 validators and the genesis time set to the current unix timestamp.
Wait a bit until your beacon chain starts, and in the other window:
This will start the system with 256 validators. The flags used are explained below:
- `bazel run //cmd/beacon-chain --config=minimal` builds and runs the beacon node in minimal build configuration.
- `--` is a flag divider to distinguish between bazel flags and flags that should be passed to the application. All flags and arguments after this divider are passed to the beacon chain.
- `--minimal-config` tells the beacon node to use the minimal network configuration. This is different from the compile-time state configuration flag `--config=minimal`, and both are required.
- `--bootstrap-node=` disables the default bootstrap nodes. This prevents the client from attempting to peer with mainnet nodes.
- `--datadir=/tmp/beacon-chain-minimal-devnet` sets the data directory in a temporary location. Change this to your preferred destination.
- `--force-clear-db` will delete the beaconchain.db file without confirming with the user. This is helpful for iteratively running local devnets without changing the datadir, but less helpful for one-off runs where there was no database in the data directory.
- `--min-sync-peers=0` allows the beacon node to skip initial sync without peers. This is essential because Prysm expects at least a few peers to start the blockchain.
- `--genesis-state=/tmp/genesis.ssz` defines the path to the generated genesis ssz file. The beacon node will use this as the initial genesis state.
- `--chain-config-file=/tmp/minimal.yaml` defines the path to the yaml file with the chain configuration.
As soon as the beacon node has started, start the validator in the other terminal window.
```
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
bazel run //cmd/validator --config=minimal -- --datadir=/tmp/validator --interop-num-validators=256 --minimal-config --suggested-fee-recipient=0x8A04d14125D0FDCDc742F4A05C051De07232EDa4
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
### Launching from `genesis.ssz`
Assuming you generated a `genesis.ssz` file with 64 validators, open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
This will launch and kickstart the system with your 256 validators performing their duties accordingly.

View File

@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.8"
consensus_spec_version = "v1.5.0-alpha.9"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-BsGIbEyJuYrzhShGl0tHhR4lP5Qwno8R3k8a6YBR/DA=",
integrity = "sha256-gHbvlnErUeJGWzW8/8JiVlk28JwmXSMhOzkynEIz+8g=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-DkdvhPP2KiqUOpwFXQIFDCWCwsUDIC/xhTBD+TZevm0=",
integrity = "sha256-hQkQdpm5ng4miGYa5WsOKWa0q8WtZu99Oqbv9QtBeJM=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-vkZqV0HB8A2Uc56C1Us/p5G57iaHL+zw2No93Xt6M/4=",
integrity = "sha256-33sBsmApnJpcyYfR3olKaPB+WC1q00ZKNzHa2TczIxk=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-D/HPAW61lKqjoWwl7N0XvhdX+67dCEFAy8JxVzqBGtU=",
integrity = "sha256-GQulBKLc2khpql2K/MxV+NG/d2kAhLXl+gLnKIg7rt4=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -12,6 +12,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -176,7 +177,7 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
err = non200Err(r)
return
}
res, err = io.ReadAll(r.Body)
res, err = io.ReadAll(io.LimitReader(r.Body, client.MaxBodySize))
if err != nil {
err = errors.Wrap(err, "error reading http response body from builder server")
return
@@ -358,7 +359,7 @@ func (c *Client) Status(ctx context.Context) error {
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
bodyBytes, err := io.ReadAll(io.LimitReader(response.Body, client.MaxErrBodySize))
var errMessage ErrorMessage
var body string
if err != nil {
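
Both hunks in this file swap an unbounded `io.ReadAll` for a read capped by `io.LimitReader`, matching the `MaxBodySize`/`MaxErrBodySize` constants introduced below. A minimal, self-contained sketch of the bounded-read pattern (the server, limit, and helper names here are illustrative, not Prysm's API):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

const maxBodySize int64 = 1 << 23 // 8 MiB cap, mirroring the diff's default

// readBody reads at most maxBodySize bytes of the response body, so a
// misbehaving server cannot make the client buffer unbounded data.
// Note that io.LimitReader truncates silently at the cap; a caller that must
// detect an oversized body can read one extra byte and check for it.
func readBody(r *http.Response) ([]byte, error) {
	b, err := io.ReadAll(io.LimitReader(r.Body, maxBodySize))
	if err != nil {
		return nil, fmt.Errorf("error reading http response body: %w", err)
	}
	return b, nil
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "hello")
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := readBody(resp)
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes: %s\n", len(body), body)
}
```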

View File

@@ -10,11 +10,18 @@ import (
"github.com/pkg/errors"
)
const (
MaxBodySize int64 = 1 << 23 // 8MB default, WithMaxBodySize can override
MaxBodySizeState int64 = 1 << 29 // 512MB
MaxErrBodySize int64 = 1 << 17 // 128KB
)
// Client is a wrapper object around the HTTP client.
type Client struct {
hc *http.Client
baseURL *url.URL
token string
hc *http.Client
baseURL *url.URL
token string
maxBodySize int64
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
@@ -26,8 +33,9 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
hc: &http.Client{},
baseURL: u,
maxBodySize: MaxBodySize,
}
for _, o := range opts {
o(c)
@@ -72,7 +80,7 @@ func (c *Client) NodeURL() string {
// Get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), http.NoBody)
if err != nil {
return nil, err
}
@@ -89,7 +97,7 @@ func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byt
if r.StatusCode != http.StatusOK {
return nil, Non200Err(r)
}
b, err := io.ReadAll(r.Body)
b, err := io.ReadAll(io.LimitReader(r.Body, c.maxBodySize))
if err != nil {
return nil, errors.Wrap(err, "error reading http response body")
}

View File

@@ -25,16 +25,16 @@ var ErrInvalidNodeVersion = errors.New("invalid node version response")
var ErrConnectionIssue = errors.New("could not connect")
// Non200Err is a function that parses an HTTP response to handle responses that are not 200 with a formatted error.
func Non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
func Non200Err(r *http.Response) error {
b, err := io.ReadAll(io.LimitReader(r.Body, MaxErrBodySize))
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
body = "response body:\n" + string(b)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
msg := fmt.Sprintf("code=%d, url=%s, body=%s", r.StatusCode, r.Request.URL, body)
switch r.StatusCode {
case http.StatusNotFound:
return errors.Wrap(ErrNotFound, msg)
default:

View File

@@ -46,3 +46,10 @@ func WithAuthenticationToken(token string) ClientOpt {
c.token = token
}
}
// WithMaxBodySize overrides the default max body size of 8MB.
func WithMaxBodySize(size int64) ClientOpt {
return func(c *Client) {
c.maxBodySize = size
}
}
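
`WithMaxBodySize` follows the same functional-options pattern as the existing `WithTimeout` and `WithAuthenticationToken` options. A self-contained sketch of how such an option composes with a constructor (a minimal re-creation for illustration, not Prysm's actual package):

```go
package main

import "fmt"

const defaultMaxBodySize int64 = 1 << 23 // 8 MiB, as in the diff

// Client carries a configurable response-body cap.
type Client struct {
	maxBodySize int64
}

// ClientOpt mutates a Client during construction.
type ClientOpt func(*Client)

// WithMaxBodySize overrides the default max body size.
func WithMaxBodySize(size int64) ClientOpt {
	return func(c *Client) { c.maxBodySize = size }
}

// NewClient applies defaults first, then each option in order.
func NewClient(opts ...ClientOpt) *Client {
	c := &Client{maxBodySize: defaultMaxBodySize}
	for _, o := range opts {
		o(c)
	}
	return c
}

func main() {
	fmt.Println(NewClient().maxBodySize)                         // 8388608
	fmt.Println(NewClient(WithMaxBodySize(1 << 29)).maxBodySize) // 536870912
}
```

This mirrors how a caller can now opt into the larger 512 MiB `MaxBodySizeState` cap when fetching a genesis state, as in the Holesky fix above.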

View File

@@ -36,9 +36,8 @@ go_library(
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -1546,3 +1546,10 @@ func EventChainReorgFromV1(event *ethv1.EventChainReorg) *ChainReorgEvent {
ExecutionOptimistic: event.ExecutionOptimistic,
}
}
func SyncAggregateFromConsensus(sa *eth.SyncAggregate) *SyncAggregate {
return &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(sa.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(sa.SyncCommitteeSignature),
}
}

View File

@@ -3,125 +3,227 @@ package structs
import (
"encoding/json"
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
v1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
v2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
func LightClientUpdateFromConsensus(update *v2.LightClientUpdate) (*LightClientUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
func LightClientUpdateFromConsensus(update interfaces.LightClientUpdate) (*LightClientUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
finalizedHeader, err := lightClientHeaderToJSON(update.FinalizedHeader())
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
var scBranch [][32]byte
var finalityBranch [][32]byte
if update.Version() >= version.Electra {
scb, err := update.NextSyncCommitteeBranchElectra()
if err != nil {
return nil, err
}
scBranch = scb[:]
fb, err := update.FinalityBranchElectra()
if err != nil {
return nil, err
}
finalityBranch = fb[:]
} else {
scb, err := update.NextSyncCommitteeBranch()
if err != nil {
return nil, err
}
scBranch = scb[:]
fb, err := update.FinalityBranch()
if err != nil {
return nil, err
}
finalityBranch = fb[:]
}
return &LightClientUpdate{
AttestedHeader: attestedHeader,
NextSyncCommittee: SyncCommitteeFromConsensus(migration.V2SyncCommitteeToV1Alpha1(update.NextSyncCommittee)),
NextSyncCommitteeBranch: branchToJSON(update.NextSyncCommitteeBranch),
NextSyncCommittee: SyncCommitteeFromConsensus(update.NextSyncCommittee()),
NextSyncCommitteeBranch: branchToJSON(scBranch),
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
FinalityBranch: branchToJSON(finalityBranch),
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
}, nil
}
func LightClientFinalityUpdateFromConsensus(update *v2.LightClientFinalityUpdate) (*LightClientFinalityUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
func LightClientFinalityUpdateFromConsensus(update interfaces.LightClientFinalityUpdate) (*LightClientFinalityUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
finalizedHeader, err := lightClientHeaderToJSON(update.FinalizedHeader())
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
var finalityBranch [][32]byte
if update.Version() >= version.Electra {
b, err := update.FinalityBranchElectra()
if err != nil {
return nil, err
}
finalityBranch = b[:]
} else {
b, err := update.FinalityBranch()
if err != nil {
return nil, err
}
finalityBranch = b[:]
}
return &LightClientFinalityUpdate{
AttestedHeader: attestedHeader,
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
FinalityBranch: branchToJSON(finalityBranch),
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
}, nil
}
func LightClientOptimisticUpdateFromConsensus(update *v2.LightClientOptimisticUpdate) (*LightClientOptimisticUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
func LightClientOptimisticUpdateFromConsensus(update interfaces.LightClientOptimisticUpdate) (*LightClientOptimisticUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
return &LightClientOptimisticUpdate{
AttestedHeader: attestedHeader,
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
}, nil
}
func branchToJSON(branchBytes [][]byte) []string {
func branchToJSON[S [][32]byte](branchBytes S) []string {
if branchBytes == nil {
return nil
}
branch := make([]string, len(branchBytes))
for i, root := range branchBytes {
branch[i] = hexutil.Encode(root)
branch[i] = hexutil.Encode(root[:])
}
return branch
}
func syncAggregateToJSON(input *v1.SyncAggregate) *SyncAggregate {
return &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(input.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(input.SyncCommitteeSignature),
}
}
func lightClientHeaderContainerToJSON(container *v2.LightClientHeaderContainer) (json.RawMessage, error) {
func lightClientHeaderToJSON(header interfaces.LightClientHeader) (json.RawMessage, error) {
// In the case that a finalizedHeader is nil.
if container == nil {
if header == nil {
return nil, nil
}
beacon, err := container.GetBeacon()
if err != nil {
return nil, errors.Wrap(err, "could not get beacon block header")
}
var result any
var header any
switch t := (container.Header).(type) {
case *v2.LightClientHeaderContainer_HeaderAltair:
header = &LightClientHeader{Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon))}
case *v2.LightClientHeaderContainer_HeaderCapella:
execution, err := ExecutionPayloadHeaderCapellaFromConsensus(t.HeaderCapella.Execution)
switch v := header.Version(); v {
case version.Altair:
result = &LightClientHeader{Beacon: BeaconBlockHeaderFromConsensus(header.Beacon())}
case version.Capella:
exInterface, err := header.Execution()
if err != nil {
return nil, err
}
header = &LightClientHeaderCapella{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
Execution: execution,
ExecutionBranch: branchToJSON(t.HeaderCapella.ExecutionBranch),
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderCapella{})
}
case *v2.LightClientHeaderContainer_HeaderDeneb:
execution, err := ExecutionPayloadHeaderDenebFromConsensus(t.HeaderDeneb.Execution)
execution, err := ExecutionPayloadHeaderCapellaFromConsensus(ex)
if err != nil {
return nil, err
}
header = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderCapella{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
Execution: execution,
ExecutionBranch: branchToJSON(t.HeaderDeneb.ExecutionBranch),
ExecutionBranch: branchToJSON(executionBranch[:]),
}
case version.Deneb:
exInterface, err := header.Execution()
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderDeneb{})
}
execution, err := ExecutionPayloadHeaderDenebFromConsensus(ex)
if err != nil {
return nil, err
}
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
Execution: execution,
ExecutionBranch: branchToJSON(executionBranch[:]),
}
case version.Electra:
exInterface, err := header.Execution()
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderElectra)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderElectra{})
}
execution, err := ExecutionPayloadHeaderElectraFromConsensus(ex)
if err != nil {
return nil, err
}
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
Execution: execution,
ExecutionBranch: branchToJSON(executionBranch[:]),
}
default:
return nil, fmt.Errorf("unsupported header type %T", t)
return nil, fmt.Errorf("unsupported header version %s", version.String(v))
}
return json.Marshal(header)
return json.Marshal(result)
}
func LightClientBootstrapFromConsensus(bootstrap interfaces.LightClientBootstrap) (*LightClientBootstrap, error) {
header, err := lightClientHeaderToJSON(bootstrap.Header())
if err != nil {
return nil, errors.Wrap(err, "could not marshal light client header")
}
var scBranch [][32]byte
if bootstrap.Version() >= version.Electra {
b, err := bootstrap.CurrentSyncCommitteeBranchElectra()
if err != nil {
return nil, err
}
scBranch = b[:]
} else {
b, err := bootstrap.CurrentSyncCommitteeBranch()
if err != nil {
return nil, err
}
scBranch = b[:]
}
return &LightClientBootstrap{
Header: header,
CurrentSyncCommittee: SyncCommitteeFromConsensus(bootstrap.CurrentSyncCommittee()),
CurrentSyncCommitteeBranch: branchToJSON(scBranch),
}, nil
}

View File

@@ -1,7 +1,10 @@
package structs
type SidecarsResponse struct {
Data []*Sidecar `json:"data"`
Version string `json:"version"`
Data []*Sidecar `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
}
type Sidecar struct {
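
With the two added fields, the sidecars response envelope now reports execution-optimistic status and finality alongside the version. A sketch of the resulting JSON shape (a minimal mirror of the struct above; field values are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of the updated response envelope (sidecar payload elided).
type SidecarsResponse struct {
	Version             string            `json:"version"`
	Data                []json.RawMessage `json:"data"`
	ExecutionOptimistic bool              `json:"execution_optimistic"`
	Finalized           bool              `json:"finalized"`
}

func main() {
	out, _ := json.MarshalIndent(SidecarsResponse{
		Version:   "deneb", // hypothetical value
		Data:      []json.RawMessage{},
		Finalized: true,
	}, "", "  ")
	fmt.Println(string(out))
	// {
	//   "version": "deneb",
	//   "data": [],
	//   "execution_optimistic": false,
	//   "finalized": true
	// }
}
```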

View File

@@ -84,7 +84,6 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",

View File

@@ -13,7 +13,6 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
@@ -39,7 +38,7 @@ func prepareForkchoiceState(
payloadHash [32]byte,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, consensus_blocks.ROBlock, error) {
) (state.BeaconState, blocks.ROBlock, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
@@ -61,7 +60,7 @@ func prepareForkchoiceState(
base.BlockRoots[0] = append(base.BlockRoots[0], blockRoot[:]...)
st, err := state_native.InitializeFromProtoBellatrix(base)
if err != nil {
return nil, consensus_blocks.ROBlock{}, err
return nil, blocks.ROBlock{}, err
}
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
@@ -76,9 +75,9 @@ func prepareForkchoiceState(
}
signed, err := blocks.NewSignedBeaconBlock(blk)
if err != nil {
return nil, consensus_blocks.ROBlock{}, err
return nil, blocks.ROBlock{}, err
}
roblock, err := consensus_blocks.NewROBlockWithRoot(signed, blockRoot)
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
return st, roblock, err
}

View File

@@ -26,8 +26,7 @@ go_test(
deps = [
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"//testing/util:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,51 +1,14 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/sirupsen/logrus"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
commitment, err := kzgContext.BlobToKZGCommitment(blob, 0)
if err != nil {
@@ -74,7 +37,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := GetRandBlob(123)
blob := util.GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}

View File

@@ -67,7 +67,10 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
defer s.sendLightClientFeeds(cfg)
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
defer s.saveLightClientUpdate(cfg)
}
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.roblock.Block())
@@ -404,10 +407,9 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
return errors.Wrapf(err, "could not save block from slot %d", b.Block().Slot())
}
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
// Do not use parent context in the event it deadlined
ctx = trace.NewContext(context.Background(), span)
s.rollbackBlock(ctx, r)
return errors.Wrap(err, "could not save state")
}
return nil
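
The rollback fix here avoids reusing the request context: once `SaveState` fails because the context deadlined, a delete issued on that same context would fail too, which is why the diff rebases the context on `context.Background()` while keeping the trace span. A minimal standard-library sketch of the pattern (names and timings are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// cleanup stands in for the rollback delete; like a DB call, it aborts
// immediately if its context is already done.
func cleanup(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(10 * time.Millisecond):
		return nil
	}
}

func saveWithRollback(ctx context.Context) {
	// Simulate the save failing after the caller's deadline was hit.
	if err := ctx.Err(); err != nil {
		// Reusing the deadlined parent context would make the compensating
		// delete fail as well; a fresh context lets the rollback complete.
		fmt.Println("rollback with parent ctx:", cleanup(ctx))
		fmt.Println("rollback with fresh ctx: ", cleanup(context.Background()))
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Nanosecond)
	defer cancel()
	time.Sleep(time.Millisecond) // let the deadline pass
	saveWithRollback(ctx)
}
```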

View File

@@ -15,7 +15,6 @@ import (
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -24,7 +23,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -115,64 +113,123 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
// sendLightClientFeeds sends the light client feeds when feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.roblock, finalized, cfg.postState)
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
if err := s.processLightClientFinalityUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client finality update")
}
}
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
// saveLightClientUpdate saves the light client update for this block
// if it's better than the already saved one, when the feature flag is enabled.
func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Error("Failed to send light client finality update")
log.WithError(err).Error("Saving light client update failed: Could not get attested block")
return
}
if attestedBlock == nil || attestedBlock.IsNil() {
log.Error("Saving light client update failed: Attested block is nil")
return
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not get attested state")
return
}
if attestedState == nil || attestedState.IsNil() {
log.Error("Saving light client update failed: Attested state is nil")
return
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not get finalized block")
return
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(
cfg.ctx,
s.CurrentSlot(),
cfg.postState,
cfg.roblock,
attestedState,
attestedBlock,
finalizedBlock,
)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not create light client update")
return
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
return
}
if oldUpdate == nil {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
return
}
isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
return
}
if isNewUpdateBetter {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
} else {
s.lastPublishedLightClientEpoch = finalized.Epoch
log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
}
}
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
// saveLightClientBootstrap saves a light client bootstrap for this block
// when the feature flag is enabled.
func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
blockRoot := cfg.roblock.Root()
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
return
}
err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
}
}
func (s *Service) processLightClientFinalityUpdate(
ctx context.Context,
signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState,
) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested block")
return errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
return errors.Wrap(err, "could not get attested state")
}
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
@@ -185,6 +242,7 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
@@ -193,38 +251,31 @@ func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed inte
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
return errors.Wrap(err, "could not create light client finality update")
}
// Return the result
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: result,
}), nil
Data: update,
})
return nil
}
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) error {
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested block")
return errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return 0, errors.Wrap(err, "could not get attested state")
return errors.Wrap(err, "could not get attested state")
}
update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
@@ -232,19 +283,15 @@ func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed in
)
if err != nil {
return 0, errors.Wrap(err, "could not create light client update")
return errors.Wrap(err, "could not create light client optimistic update")
}
// Return the result
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
}
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
Data: update,
})
return nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch

View File

@@ -40,6 +40,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -2352,6 +2353,62 @@ func TestRollbackBlock(t *testing.T) {
require.Equal(t, false, hasState)
}
func TestRollbackBlock_SavePostStateInfo_ContextDeadline(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 128)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
// Save state summaries so that the cache is flushed and saved to disk
// later.
for i := 1; i <= 127; i++ {
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: primitives.Slot(i),
Root: bytesutil.Bytes32(uint64(i)),
}))
}
// Set deadlined context when saving block and state
cancCtx, canc := context.WithCancel(ctx)
canc()
require.ErrorContains(t, context.Canceled.Error(), service.savePostStateInfo(cancCtx, root, wsb, postState))
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err := service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, false, hasState)
}
func TestRollbackBlock_ContextDeadline(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
@@ -2446,3 +2503,290 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
}
return r
}
func TestSaveLightClientUpdate(t *testing.T) {
s, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
reset()
})
t.Run("Capella", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
reset()
})
t.Run("Deneb", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
reset()
})
}
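
Each subtest above back-dates s.genesisTime so that wall-clock "now" lands at the start of the target fork epoch. A minimal sketch of that arithmetic, with illustrative constants rather than the real chain config:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		forkEpoch      = 5 // stand-in for e.g. AltairForkEpoch
		slotsPerEpoch  = 32
		secondsPerSlot = 12
	)
	offset := int64(forkEpoch * slotsPerEpoch * secondsPerSlot)
	genesis := time.Unix(time.Now().Unix()-offset, 0)
	secondsSinceGenesis := time.Now().Unix() - genesis.Unix()
	fmt.Println("current epoch:", secondsSinceGenesis/(secondsPerSlot*slotsPerEpoch)) // ≈ forkEpoch
}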
func TestSaveLightClientBootstrap(t *testing.T) {
s, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Altair)
reset()
})
t.Run("Capella", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Capella)
reset()
})
t.Run("Deneb", func(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Deneb)
reset()
})
}
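
Note the different keying in the two tests: a LightClientUpdate is stored and fetched per sync-committee period derived from the attested slot, while a LightClientBootstrap is keyed by block root. A small sketch of the period derivation, assuming mainnet SLOTS_PER_EPOCH = 32 and EPOCHS_PER_SYNC_COMMITTEE_PERIOD = 256:

package main

import "fmt"

// syncCommitteePeriod mirrors slots.SyncCommitteePeriod(slots.ToEpoch(slot)).
func syncCommitteePeriod(slot uint64) uint64 {
	const (
		slotsPerEpoch   = 32
		epochsPerPeriod = 256
	)
	return slot / slotsPerEpoch / epochsPerPeriod
}

func main() {
	fmt.Println(syncCommitteePeriod(0))    // 0
	fmt.Println(syncCommitteePeriod(8192)) // 1: first slot of the next period
}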


@@ -10,7 +10,6 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -87,9 +86,7 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
attsToSave := make([]ethpb.Att, len(atts))
for i, a := range atts {
attsToSave[i] = a
}
copy(attsToSave, atts)
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
service.processAttestations(ctx, 0)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))
@@ -119,7 +116,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := consensus_blocks.NewROBlockWithRoot(wsb, tRoot)
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
@@ -131,9 +128,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
atts, err := util.GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
attsToSave := make([]ethpb.Att, len(atts))
for i, a := range atts {
attsToSave[i] = a
}
copy(attsToSave, atts)
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
// Verify the target is in forkchoice
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].GetData().BeaconBlockRoot)))
@@ -181,7 +176,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := consensus_blocks.NewROBlockWithRoot(wsb, tRoot)
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
require.Equal(t, 2, fcs.NodeCount())


@@ -17,8 +17,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -85,7 +83,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := consensus_blocks.NewROBlockWithRoot(blockCopy, blockRoot)
roblock, err := blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return err
}
@@ -190,7 +188,7 @@ func (s *Service) updateCheckpoints(
func (s *Service) validateExecutionAndConsensus(
ctx context.Context,
preState state.BeaconState,
block consensusblocks.ROBlock,
block blocks.ROBlock,
) (state.BeaconState, bool, error) {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
@@ -560,7 +558,7 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
}
// validateExecutionOnBlock notifies the engine of the incoming block execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, block consensusblocks.ROBlock) (bool, error) {
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, block blocks.ROBlock) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, block)
if err != nil {
s.cfg.ForkChoiceStore.Lock()


@@ -36,9 +36,7 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -49,25 +47,24 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
}
// config options for the service.
@@ -308,7 +305,7 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint block")
}
roblock, err := consensus_blocks.NewROBlockWithRoot(finalizedBlock, fRoot)
roblock, err := blocks.NewROBlockWithRoot(finalizedBlock, fRoot)
if err != nil {
return err
}
@@ -524,7 +521,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
gb, err := consensus_blocks.NewROBlockWithRoot(genesisBlk, genesisBlkRoot)
gb, err := blocks.NewROBlockWithRoot(genesisBlk, genesisBlkRoot)
if err != nil {
return err
}


@@ -41,6 +41,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//common/math:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],


@@ -5,6 +5,7 @@ import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/common/math"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
@@ -156,6 +157,13 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// if target_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
//
// # Verify the source has been active long enough
// if current_epoch < source_validator.activation_epoch + SHARD_COMMITTEE_PERIOD:
// return
//
// # Verify the source has no pending withdrawals in the queue
// if get_pending_balance_to_withdraw(state, source_index) > 0:
// return
// # Initiate source validator exit and append pending consolidation
// source_validator.exit_epoch = compute_consolidation_epoch_and_update_churn(
// state, source_validator.effective_balance
@@ -258,6 +266,23 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
continue
}
e, overflow := math.SafeAdd(uint64(srcV.ActivationEpoch), uint64(params.BeaconConfig().ShardCommitteePeriod))
if overflow {
log.Error("Overflow when adding activation epoch and shard committee period")
continue
}
if uint64(curEpoch) < e {
continue
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
continue
}
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {

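The consolidation hunk above adds two guards from the spec comment: the source validator must have been active for SHARD_COMMITTEE_PERIOD epochs (with the addition checked for overflow via go-ethereum's math.SafeAdd), and must have no pending withdrawals. A minimal sketch of the overflow-guarded epoch check:

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/math"
)

// sourceActiveLongEnough treats overflow the same way the hunk does:
// the request is simply skipped.
func sourceActiveLongEnough(currentEpoch, activationEpoch, shardCommitteePeriod uint64) bool {
	e, overflow := math.SafeAdd(activationEpoch, shardCommitteePeriod)
	if overflow {
		return false
	}
	return currentEpoch >= e
}

func main() {
	fmt.Println(sourceActiveLongEnough(300, 10, 256)) // true: 300 >= 266
	fmt.Println(sourceActiveLongEnough(100, 10, 256)) // false: still in its first period
}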

@@ -213,6 +213,7 @@ func TestProcessConsolidationRequests(t *testing.T) {
name: "one valid request",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Slot: params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod)),
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
}
// Validator scenario setup. See comments in reqs section.
@@ -222,6 +223,12 @@ func TestProcessConsolidationRequests(t *testing.T) {
st.Validators[12].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
st.Validators[13].ExitEpoch = 10
st.Validators[16].ExitEpoch = 10
st.PendingPartialWithdrawals = []*eth.PendingPartialWithdrawal{
{
Index: 17,
Amount: 100,
},
}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
@@ -287,6 +294,12 @@ func TestProcessConsolidationRequests(t *testing.T) {
SourcePubkey: []byte("val_0"),
TargetPubkey: []byte("val_0"),
},
// Has pending partial withdrawal
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)),
SourcePubkey: []byte("val_17"),
TargetPubkey: []byte("val_1"),
},
// Valid consolidation request. This should be last to ensure invalid requests do
// not end the processing early.
{
@@ -347,6 +360,7 @@ func TestProcessConsolidationRequests(t *testing.T) {
name: "pending consolidations limit reached during processing",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Slot: params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod)),
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
PendingConsolidations: make([]*eth.PendingConsolidation, params.BeaconConfig().PendingConsolidationsLimit-1),
}


@@ -386,8 +386,14 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -405,9 +411,16 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
}
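
The hunk above changes batch deposit processing so that repeated deposits for the same pubkey within one batch no longer create duplicate registry entries: the first occurrence adds the validator, later occurrences only top up its balance. A simplified stand-in for that dedup rule:

package main

import "fmt"

type deposit struct {
	pubkey string
	amount uint64
}

func applyDeposits(deps []deposit) map[string]uint64 {
	balances := make(map[string]uint64)
	seen := make(map[string]struct{})
	for _, d := range deps {
		if _, found := seen[d.pubkey]; found {
			balances[d.pubkey] += d.amount // existing key: increase balance only
			continue
		}
		seen[d.pubkey] = struct{}{}
		balances[d.pubkey] = d.amount // first sighting: "add to registry"
	}
	return balances
}

func main() {
	fmt.Println(applyDeposits([]deposit{{"a", 32}, {"a", 1}})) // map[a:33]
}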


@@ -22,6 +22,40 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
}
}
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
@@ -285,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{}
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))


@@ -16,36 +16,20 @@ import (
)
// UpgradeToElectra upgrades a generic pre-Electra beacon state and returns the Electra version of the state.
//
// nolint:dupword
// Spec code:
// def upgrade_to_electra(pre: deneb.BeaconState) -> BeaconState:
//
// epoch = deneb.get_current_epoch(pre)
// latest_execution_payload_header = ExecutionPayloadHeader(
// parent_hash=pre.latest_execution_payload_header.parent_hash,
// fee_recipient=pre.latest_execution_payload_header.fee_recipient,
// state_root=pre.latest_execution_payload_header.state_root,
// receipts_root=pre.latest_execution_payload_header.receipts_root,
// logs_bloom=pre.latest_execution_payload_header.logs_bloom,
// prev_randao=pre.latest_execution_payload_header.prev_randao,
// block_number=pre.latest_execution_payload_header.block_number,
// gas_limit=pre.latest_execution_payload_header.gas_limit,
// gas_used=pre.latest_execution_payload_header.gas_used,
// timestamp=pre.latest_execution_payload_header.timestamp,
// extra_data=pre.latest_execution_payload_header.extra_data,
// base_fee_per_gas=pre.latest_execution_payload_header.base_fee_per_gas,
// block_hash=pre.latest_execution_payload_header.block_hash,
// transactions_root=pre.latest_execution_payload_header.transactions_root,
// withdrawals_root=pre.latest_execution_payload_header.withdrawals_root,
// blob_gas_used=pre.latest_execution_payload_header.blob_gas_used,
// excess_blob_gas=pre.latest_execution_payload_header.excess_blob_gas,
// deposit_requests_root=Root(), # [New in Electra:EIP6110]
// withdrawal_requests_root=Root(), # [New in Electra:EIP7002],
// consolidation_requests_root=Root(), # [New in Electra:EIP7251]
// )
// latest_execution_payload_header = pre.latest_execution_payload_header
//
// exit_epochs = [v.exit_epoch for v in pre.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// if not exit_epochs:
// exit_epochs = [get_current_epoch(pre)]
// earliest_exit_epoch = max(exit_epochs) + 1
// earliest_exit_epoch = compute_activation_exit_epoch(get_current_epoch(pre))
// for validator in pre.validators:
// if validator.exit_epoch != FAR_FUTURE_EPOCH:
// if validator.exit_epoch > earliest_exit_epoch:
// earliest_exit_epoch = validator.exit_epoch
// earliest_exit_epoch += Epoch(1)
//
// post = BeaconState(
// # Versioning
@@ -120,7 +104,20 @@ import (
// ))
//
// for index in pre_activation:
// queue_entire_balance_and_reset_validator(post, ValidatorIndex(index))
// balance = post.balances[index]
// post.balances[index] = 0
// validator = post.validators[index]
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// # Use bls.G2_POINT_AT_INFINITY as a signature field placeholder
// # and GENESIS_SLOT to distinguish from a pending deposit request
// post.pending_deposits.append(PendingDeposit(
// pubkey=validator.pubkey,
// withdrawal_credentials=validator.withdrawal_credentials,
// amount=balance,
// signature=bls.G2_POINT_AT_INFINITY,
// slot=GENESIS_SLOT,
// ))
//
// # Ensure early adopters of compounding credentials go through the activation churn
// for index, validator in enumerate(post.validators):
@@ -187,7 +184,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
// [New in Electra:EIP7251]
earliestExitEpoch := time.CurrentEpoch(beaconState)
earliestExitEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(beaconState))
preActivationIndices := make([]primitives.ValidatorIndex, 0)
compoundWithdrawalIndices := make([]primitives.ValidatorIndex, 0)
if err = beaconState.ReadFromEveryValidator(func(index int, val state.ReadOnlyValidator) error {

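The earliest-exit-epoch change above replaces the max-of-exit-epochs starting point with compute_activation_exit_epoch(current_epoch). A sketch of the revised spec arithmetic, using the spec constant MAX_SEED_LOOKAHEAD = 4:

package main

import "fmt"

const farFutureEpoch = ^uint64(0)

// activationExitEpoch mirrors compute_activation_exit_epoch:
// epoch + 1 + MAX_SEED_LOOKAHEAD.
func activationExitEpoch(epoch uint64) uint64 {
	const maxSeedLookahead = 4
	return epoch + 1 + maxSeedLookahead
}

func earliestExitEpoch(currentEpoch uint64, exitEpochs []uint64) uint64 {
	earliest := activationExitEpoch(currentEpoch)
	for _, e := range exitEpochs {
		if e != farFutureEpoch && e > earliest {
			earliest = e
		}
	}
	return earliest + 1
}

func main() {
	fmt.Println(earliestExitEpoch(10, nil))           // 16 = (10+1+4)+1
	fmt.Println(earliestExitEpoch(10, []uint64{100})) // 101
}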

@@ -159,7 +159,7 @@ func TestUpgradeToElectra(t *testing.T) {
eee, err := mSt.EarliestExitEpoch()
require.NoError(t, err)
require.Equal(t, primitives.Epoch(1), eee)
require.Equal(t, helpers.ActivationExitEpoch(primitives.Epoch(1)), eee)
cbtc, err := mSt.ConsolidationBalanceToConsume()
require.NoError(t, err)


@@ -6,19 +6,22 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/execution:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
@@ -32,6 +35,7 @@ go_test(
"//consensus-types/blocks:go_default_library",
"//encoding/ssz:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_pkg_errors//:go_default_library",


@@ -4,87 +4,74 @@ import (
"bytes"
"context"
"fmt"
"reflect"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
light_client "github.com/prysmaticlabs/prysm/v5/consensus-types/light-client"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
v11 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"google.golang.org/protobuf/proto"
)
const (
FinalityBranchNumOfLeaves = 6
executionBranchNumOfLeaves = 4
)
func createLightClientFinalityUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientFinalityUpdate {
finalityUpdate := &ethpbv2.LightClientFinalityUpdate{
AttestedHeader: update.AttestedHeader,
FinalizedHeader: update.FinalizedHeader,
FinalityBranch: update.FinalityBranch,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
return finalityUpdate
}
func createLightClientOptimisticUpdate(update *ethpbv2.LightClientUpdate) *ethpbv2.LightClientOptimisticUpdate {
optimisticUpdate := &ethpbv2.LightClientOptimisticUpdate{
AttestedHeader: update.AttestedHeader,
SyncAggregate: update.SyncAggregate,
SignatureSlot: update.SignatureSlot,
}
return optimisticUpdate
}
func NewLightClientFinalityUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*ethpbv2.LightClientFinalityUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, finalizedBlock)
) (interfaces.LightClientFinalityUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, currentSlot, state, block, attestedState, attestedBlock, finalizedBlock)
if err != nil {
return nil, err
}
return createLightClientFinalityUpdate(update), nil
return light_client.NewFinalityUpdateFromUpdate(update)
}
func NewLightClientOptimisticUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
) (*ethpbv2.LightClientOptimisticUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, state, block, attestedState, attestedBlock, nil)
) (interfaces.LightClientOptimisticUpdate, error) {
update, err := NewLightClientUpdateFromBeaconState(ctx, currentSlot, state, block, attestedState, attestedBlock, nil)
if err != nil {
return nil, err
}
return createLightClientOptimisticUpdate(update), nil
return light_client.NewOptimisticUpdateFromUpdate(update)
}
// To form a LightClientUpdate, the following historical states and blocks are needed:
// - state: the post state of any block with a post-Altair parent block
// - block: the corresponding block
// - attested_state: the post state of attested_block
// - attested_block: the block referred to by block.parent_root
// - finalized_block: the block referred to by attested_state.finalized_checkpoint.root,
// if locally available (may be unavailable, e.g., when using checkpoint sync, or if it was pruned locally)
func NewLightClientUpdateFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
attestedState state.BeaconState,
attestedBlock interfaces.ReadOnlySignedBeaconBlock,
finalizedBlock interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientUpdate, error) {
finalizedBlock interfaces.ReadOnlySignedBeaconBlock) (interfaces.LightClientUpdate, error) {
// assert compute_epoch_at_slot(attested_state.slot) >= ALTAIR_FORK_EPOCH
attestedEpoch := slots.ToEpoch(attestedState.Slot())
if attestedEpoch < params.BeaconConfig().AltairForkEpoch {
@@ -129,7 +116,11 @@ func NewLightClientUpdateFromBeaconState(
// assert attested_state.slot == attested_state.latest_block_header.slot
if attestedState.Slot() != attestedState.LatestBlockHeader().Slot {
return nil, fmt.Errorf("attested state slot %d not equal to attested latest block header slot %d", attestedState.Slot(), attestedState.LatestBlockHeader().Slot)
return nil, fmt.Errorf(
"attested state slot %d not equal to attested latest block header slot %d",
attestedState.Slot(),
attestedState.LatestBlockHeader().Slot,
)
}
// attested_header = attested_state.latest_block_header.copy()
@@ -153,46 +144,58 @@ func NewLightClientUpdateFromBeaconState(
}
// assert hash_tree_root(attested_header) == hash_tree_root(attested_block.message) == block.message.parent_root
if attestedHeaderRoot != block.Block().ParentRoot() || attestedHeaderRoot != attestedBlockRoot {
return nil, fmt.Errorf("attested header root %#x not equal to block parent root %#x or attested block root %#x", attestedHeaderRoot, block.Block().ParentRoot(), attestedBlockRoot)
return nil, fmt.Errorf(
"attested header root %#x not equal to block parent root %#x or attested block root %#x",
attestedHeaderRoot,
block.Block().ParentRoot(),
attestedBlockRoot,
)
}
// update_attested_period = compute_sync_committee_period_at_slot(attested_block.message.slot)
updateAttestedPeriod := slots.SyncCommitteePeriod(slots.ToEpoch(attestedBlock.Block().Slot()))
// update = LightClientUpdate()
result, err := createDefaultLightClientUpdate()
result, err := CreateDefaultLightClientUpdate(currentSlot, attestedState)
if err != nil {
return nil, errors.Wrap(err, "could not create default light client update")
}
// update.attested_header = block_to_light_client_header(attested_block)
attestedLightClientHeader, err := BlockToLightClientHeader(attestedBlock)
attestedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, attestedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get attested light client header")
}
result.AttestedHeader = attestedLightClientHeader
if err = result.SetAttestedHeader(attestedLightClientHeader); err != nil {
return nil, errors.Wrap(err, "could not set attested header")
}
// if update_attested_period == update_signature_period
if updateAttestedPeriod == updateSignaturePeriod {
// update.next_sync_committee = attested_state.next_sync_committee
tempNextSyncCommittee, err := attestedState.NextSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get next sync committee")
}
nextSyncCommittee := &ethpbv2.SyncCommittee{
nextSyncCommittee := &pb.SyncCommittee{
Pubkeys: tempNextSyncCommittee.Pubkeys,
AggregatePubkey: tempNextSyncCommittee.AggregatePubkey,
}
result.SetNextSyncCommittee(nextSyncCommittee)
// update.next_sync_committee_branch = NextSyncCommitteeBranch(
// compute_merkle_proof(attested_state, next_sync_committee_gindex_at_slot(attested_state.slot)))
nextSyncCommitteeBranch, err := attestedState.NextSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get next sync committee proof")
}
// update.next_sync_committee = attested_state.next_sync_committee
result.NextSyncCommittee = nextSyncCommittee
// update.next_sync_committee_branch = NextSyncCommitteeBranch(
// compute_merkle_proof(attested_state, next_sync_committee_gindex_at_slot(attested_state.slot)))
result.NextSyncCommitteeBranch = nextSyncCommitteeBranch
if err = result.SetNextSyncCommitteeBranch(nextSyncCommitteeBranch); err != nil {
return nil, errors.Wrap(err, "could not set next sync committee branch")
}
}
// if finalized_block is not None
@@ -200,11 +203,13 @@ func NewLightClientUpdateFromBeaconState(
// if finalized_block.message.slot != GENESIS_SLOT
if finalizedBlock.Block().Slot() != 0 {
// update.finalized_header = block_to_light_client_header(finalized_block)
finalizedLightClientHeader, err := BlockToLightClientHeader(finalizedBlock)
finalizedLightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, finalizedBlock)
if err != nil {
return nil, errors.Wrap(err, "could not get finalized light client header")
}
result.FinalizedHeader = finalizedLightClientHeader
if err = result.SetFinalizedHeader(finalizedLightClientHeader); err != nil {
return nil, errors.Wrap(err, "could not set finalized header")
}
} else {
// assert attested_state.finalized_checkpoint.root == Bytes32()
if !bytes.Equal(attestedState.FinalizedCheckpoint().Root, make([]byte, 32)) {
@@ -218,49 +223,120 @@ func NewLightClientUpdateFromBeaconState(
if err != nil {
return nil, errors.Wrap(err, "could not get finalized root proof")
}
result.FinalityBranch = finalityBranch
if err = result.SetFinalityBranch(finalityBranch); err != nil {
return nil, errors.Wrap(err, "could not set finality branch")
}
}
// update.sync_aggregate = block.message.body.sync_aggregate
result.SyncAggregate = &ethpbv1.SyncAggregate{
result.SetSyncAggregate(&pb.SyncAggregate{
SyncCommitteeBits: syncAggregate.SyncCommitteeBits,
SyncCommitteeSignature: syncAggregate.SyncCommitteeSignature,
}
})
// update.signature_slot = block.message.slot
result.SignatureSlot = block.Block().Slot()
result.SetSignatureSlot(block.Block().Slot())
return result, nil
}
func createDefaultLightClientUpdate() (*ethpbv2.LightClientUpdate, error) {
func CreateDefaultLightClientUpdate(currentSlot primitives.Slot, attestedState state.BeaconState) (interfaces.LightClientUpdate, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
nextSyncCommittee := &ethpbv2.SyncCommittee{
nextSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
nextSyncCommitteeBranch := make([][]byte, fieldparams.SyncCommitteeBranchDepth)
for i := 0; i < fieldparams.SyncCommitteeBranchDepth; i++ {
var nextSyncCommitteeBranch [][]byte
if attestedState.Version() >= version.Electra {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
nextSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(nextSyncCommitteeBranch); i++ {
nextSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, executionBranchNumOfLeaves)
for i := 0; i < executionBranchNumOfLeaves; i++ {
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
finalityBranch := make([][]byte, FinalityBranchNumOfLeaves)
for i := 0; i < FinalityBranchNumOfLeaves; i++ {
var finalityBranch [][]byte
if attestedState.Version() >= version.Electra {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepthElectra)
} else {
finalityBranch = make([][]byte, fieldparams.FinalityBranchDepth)
}
for i := 0; i < len(finalityBranch); i++ {
finalityBranch[i] = make([]byte, 32)
}
return &ethpbv2.LightClientUpdate{
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}, nil
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientUpdateAltair{
AttestedHeader: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientUpdateCapella{
AttestedHeader: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientUpdateDeneb{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}
} else {
if attestedState.Version() >= version.Electra {
m = &pb.LightClientUpdateElectra{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}
} else {
m = &pb.LightClientUpdateDeneb{
AttestedHeader: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
NextSyncCommittee: nextSyncCommittee,
NextSyncCommitteeBranch: nextSyncCommitteeBranch,
FinalityBranch: finalityBranch,
}
}
}
return light_client.NewWrappedUpdate(m)
}
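
CreateDefaultLightClientUpdate selects the wrapped proto type by the epoch of the current slot, falling back to the Deneb shape after Electra when the attested state itself has not upgraded yet. A compact sketch of that gating (fork epochs here are placeholders, not the real config):

package main

import "fmt"

func updateType(epoch, capellaE, denebE, electraE uint64, attestedIsElectra bool) string {
	switch {
	case epoch < capellaE:
		return "LightClientUpdateAltair"
	case epoch < denebE:
		return "LightClientUpdateCapella"
	case epoch < electraE:
		return "LightClientUpdateDeneb"
	case attestedIsElectra:
		return "LightClientUpdateElectra"
	default:
		return "LightClientUpdateDeneb" // post-Electra slot, pre-Electra attested state
	}
}

func main() {
	fmt.Println(updateType(10, 100, 200, 300, false)) // LightClientUpdateAltair
	fmt.Println(updateType(350, 100, 200, 300, true)) // LightClientUpdateElectra
}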
func ComputeTransactionsRoot(payload interfaces.ExecutionData) ([]byte, error) {
@@ -299,48 +375,14 @@ func ComputeWithdrawalsRoot(payload interfaces.ExecutionData) ([]byte, error) {
return withdrawalsRoot, nil
}
func BlockToLightClientHeader(block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderContainer, error) {
switch block.Version() {
case version.Altair, version.Bellatrix:
altairHeader, err := blockToLightClientHeaderAltair(block)
if err != nil {
return nil, errors.Wrap(err, "could not get header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderAltair{
HeaderAltair: altairHeader,
},
}, nil
case version.Capella:
capellaHeader, err := blockToLightClientHeaderCapella(context.Background(), block)
if err != nil {
return nil, errors.Wrap(err, "could not get capella header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderCapella{
HeaderCapella: capellaHeader,
},
}, nil
case version.Deneb, version.Electra:
denebHeader, err := blockToLightClientHeaderDeneb(context.Background(), block)
if err != nil {
return nil, errors.Wrap(err, "could not get header")
}
return &ethpbv2.LightClientHeaderContainer{
Header: &ethpbv2.LightClientHeaderContainer_HeaderDeneb{
HeaderDeneb: denebHeader,
},
}, nil
default:
return nil, fmt.Errorf("unsupported block version %s", version.String(block.Version()))
}
}
func blockToLightClientHeaderAltair(block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeader, error) {
if block.Version() < version.Altair {
return nil, fmt.Errorf("block version is %s instead of Altair", version.String(block.Version()))
}
func BlockToLightClientHeader(
ctx context.Context,
currentSlot primitives.Slot,
block interfaces.ReadOnlySignedBeaconBlock,
) (interfaces.LightClientHeader, error) {
var m proto.Message
currentEpoch := slots.ToEpoch(currentSlot)
blockEpoch := slots.ToEpoch(block.Block().Slot())
parentRoot := block.Block().ParentRoot()
stateRoot := block.Block().StateRoot()
bodyRoot, err := block.Block().Body().HashTreeRoot()
@@ -348,147 +390,431 @@ func blockToLightClientHeaderAltair(block interfaces.ReadOnlySignedBeaconBlock)
return nil, errors.Wrap(err, "could not get body root")
}
return &ethpbv2.LightClientHeader{
Beacon: &ethpbv1.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
}, nil
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
var payloadHeader *enginev1.ExecutionPayloadHeaderCapella
var payloadProof [][]byte
if blockEpoch < params.BeaconConfig().CapellaForkEpoch {
payloadHeader = &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
ExtraData: make([]byte, 0),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
TransactionsRoot: make([]byte, fieldparams.RootLength),
WithdrawalsRoot: make([]byte, fieldparams.RootLength),
}
payloadProof = emptyPayloadProof()
} else {
payload, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
payloadHeader = &enginev1.ExecutionPayloadHeaderCapella{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
payloadProof, err = blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
}
m = &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
} else {
var payloadHeader *enginev1.ExecutionPayloadHeaderDeneb
var payloadProof [][]byte
if blockEpoch < params.BeaconConfig().CapellaForkEpoch {
var ok bool
p, err := execution.EmptyExecutionPayload(version.Deneb)
if err != nil {
return nil, errors.Wrap(err, "could not get payload header")
}
payloadHeader, ok = p.(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, fmt.Errorf("payload header type %T is not %T", p, &enginev1.ExecutionPayloadHeaderDeneb{})
}
payloadProof = emptyPayloadProof()
} else {
payload, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get transactions root")
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals root")
}
var blobGasUsed uint64
var excessBlobGas uint64
if blockEpoch >= params.BeaconConfig().DenebForkEpoch {
blobGasUsed, err = payload.BlobGasUsed()
if err != nil {
return nil, errors.Wrap(err, "could not get blob gas used")
}
excessBlobGas, err = payload.ExcessBlobGas()
if err != nil {
return nil, errors.Wrap(err, "could not get excess blob gas")
}
}
payloadHeader = &enginev1.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
BlobGasUsed: blobGasUsed,
ExcessBlobGas: excessBlobGas,
}
payloadProof, err = blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
}
m = &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Execution: payloadHeader,
ExecutionBranch: payloadProof,
}
}
return light_client.NewWrappedHeader(m)
}
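
When the block predates Capella but the current epoch requires a Capella-or-later header, BlockToLightClientHeader substitutes a zero-valued execution header plus the empty proof from emptyPayloadProof below, so light clients still receive a structurally valid header. A sketch of the empty-proof shape, assuming the 4-leaf execution branch depth used here:

package main

import "fmt"

func emptyProof(depth int) [][]byte {
	proof := make([][]byte, depth)
	for i := range proof {
		proof[i] = make([]byte, 32) // one zero root per proof level
	}
	return proof
}

func main() {
	p := emptyProof(4)
	fmt.Println(len(p), len(p[0])) // 4 32
}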
func blockToLightClientHeaderCapella(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderCapella, error) {
if block.Version() < version.Capella {
return nil, fmt.Errorf("block version is %s instead of Capella", version.String(block.Version()))
func emptyPayloadProof() [][]byte {
branch := interfaces.LightClientExecutionBranch{}
proof := make([][]byte, len(branch))
for i, b := range branch {
proof[i] = b[:]
}
payload, err := block.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, err
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, err
}
executionHeader := &v11.ExecutionPayloadHeaderCapella{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
parentRoot := block.Block().ParentRoot()
stateRoot := block.Block().StateRoot()
bodyRoot, err := block.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get body root")
}
return &ethpbv2.LightClientHeaderCapella{
Beacon: &ethpbv1.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Execution: executionHeader,
ExecutionBranch: executionPayloadProof,
}, nil
return proof
}
func blockToLightClientHeaderDeneb(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) (*ethpbv2.LightClientHeaderDeneb, error) {
if block.Version() < version.Deneb {
return nil, fmt.Errorf("block version is %s instead of Deneb/Electra", version.String(block.Version()))
func HasRelevantSyncCommittee(update interfaces.LightClientUpdate) (bool, error) {
if update.Version() >= version.Electra {
branch, err := update.NextSyncCommitteeBranchElectra()
if err != nil {
return false, err
}
return !reflect.DeepEqual(branch, interfaces.LightClientSyncCommitteeBranchElectra{}), nil
}
payload, err := block.Block().Body().Execution()
branch, err := update.NextSyncCommitteeBranch()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
return false, err
}
transactionsRoot, err := ComputeTransactionsRoot(payload)
if err != nil {
return nil, err
}
withdrawalsRoot, err := ComputeWithdrawalsRoot(payload)
if err != nil {
return nil, err
}
blobGasUsed, err := payload.BlobGasUsed()
if err != nil {
return nil, errors.Wrap(err, "could not get blob gas used")
}
excessBlobGas, err := payload.ExcessBlobGas()
if err != nil {
return nil, errors.Wrap(err, "could not get excess blob gas")
}
executionHeader := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payload.ParentHash(),
FeeRecipient: payload.FeeRecipient(),
StateRoot: payload.StateRoot(),
ReceiptsRoot: payload.ReceiptsRoot(),
LogsBloom: payload.LogsBloom(),
PrevRandao: payload.PrevRandao(),
BlockNumber: payload.BlockNumber(),
GasLimit: payload.GasLimit(),
GasUsed: payload.GasUsed(),
Timestamp: payload.Timestamp(),
ExtraData: payload.ExtraData(),
BaseFeePerGas: payload.BaseFeePerGas(),
BlockHash: payload.BlockHash(),
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
BlobGasUsed: blobGasUsed,
ExcessBlobGas: excessBlobGas,
}
executionPayloadProof, err := blocks.PayloadProof(ctx, block.Block())
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload proof")
}
parentRoot := block.Block().ParentRoot()
stateRoot := block.Block().StateRoot()
bodyRoot, err := block.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get body root")
}
return &ethpbv2.LightClientHeaderDeneb{
Beacon: &ethpbv1.BeaconBlockHeader{
Slot: block.Block().Slot(),
ProposerIndex: block.Block().ProposerIndex(),
ParentRoot: parentRoot[:],
StateRoot: stateRoot[:],
BodyRoot: bodyRoot[:],
},
Execution: executionHeader,
ExecutionBranch: executionPayloadProof,
}, nil
return !reflect.DeepEqual(branch, interfaces.LightClientSyncCommitteeBranch{}), nil
}
func HasFinality(update interfaces.LightClientUpdate) (bool, error) {
if update.Version() >= version.Electra {
b, err := update.FinalityBranchElectra()
if err != nil {
return false, err
}
return !reflect.DeepEqual(b, interfaces.LightClientFinalityBranchElectra{}), nil
}
b, err := update.FinalityBranch()
if err != nil {
return false, err
}
return !reflect.DeepEqual(b, interfaces.LightClientFinalityBranch{}), nil
}
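
HasFinality and HasRelevantSyncCommittee both reduce to the same test: a Merkle branch of all-zero roots equals the zero value of its fixed-size array type, so reflect.DeepEqual against that zero value detects a missing proof. A stand-in sketch using the 6-leaf finality branch depth:

package main

import (
	"fmt"
	"reflect"
)

type finalityBranch [6][32]byte // stand-in for interfaces.LightClientFinalityBranch

func hasFinality(b finalityBranch) bool {
	return !reflect.DeepEqual(b, finalityBranch{})
}

func main() {
	var b finalityBranch
	fmt.Println(hasFinality(b)) // false: branch is all zeros
	b[0][0] = 1
	fmt.Println(hasFinality(b)) // true: some proof content is present
}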
func IsBetterUpdate(newUpdate, oldUpdate interfaces.LightClientUpdate) (bool, error) {
maxActiveParticipants := newUpdate.SyncAggregate().SyncCommitteeBits.Len()
newNumActiveParticipants := newUpdate.SyncAggregate().SyncCommitteeBits.Count()
oldNumActiveParticipants := oldUpdate.SyncAggregate().SyncCommitteeBits.Count()
newHasSupermajority := newNumActiveParticipants*3 >= maxActiveParticipants*2
oldHasSupermajority := oldNumActiveParticipants*3 >= maxActiveParticipants*2
if newHasSupermajority != oldHasSupermajority {
return newHasSupermajority, nil
}
if !newHasSupermajority && newNumActiveParticipants != oldNumActiveParticipants {
return newNumActiveParticipants > oldNumActiveParticipants, nil
}
newUpdateAttestedHeaderBeacon := newUpdate.AttestedHeader().Beacon()
oldUpdateAttestedHeaderBeacon := oldUpdate.AttestedHeader().Beacon()
// Compare presence of relevant sync committee
newHasRelevantSyncCommittee, err := HasRelevantSyncCommittee(newUpdate)
if err != nil {
return false, err
}
newHasRelevantSyncCommittee = newHasRelevantSyncCommittee &&
(slots.SyncCommitteePeriod(slots.ToEpoch(newUpdateAttestedHeaderBeacon.Slot)) == slots.SyncCommitteePeriod(slots.ToEpoch(newUpdate.SignatureSlot())))
oldHasRelevantSyncCommittee, err := HasRelevantSyncCommittee(oldUpdate)
if err != nil {
return false, err
}
oldHasRelevantSyncCommittee = oldHasRelevantSyncCommittee &&
(slots.SyncCommitteePeriod(slots.ToEpoch(oldUpdateAttestedHeaderBeacon.Slot)) == slots.SyncCommitteePeriod(slots.ToEpoch(oldUpdate.SignatureSlot())))
if newHasRelevantSyncCommittee != oldHasRelevantSyncCommittee {
return newHasRelevantSyncCommittee, nil
}
// Compare indication of any finality
newHasFinality, err := HasFinality(newUpdate)
if err != nil {
return false, err
}
oldHasFinality, err := HasFinality(oldUpdate)
if err != nil {
return false, err
}
if newHasFinality != oldHasFinality {
return newHasFinality, nil
}
newUpdateFinalizedHeaderBeacon := newUpdate.FinalizedHeader().Beacon()
oldUpdateFinalizedHeaderBeacon := oldUpdate.FinalizedHeader().Beacon()
// Compare sync committee finality
if newHasFinality {
newHasSyncCommitteeFinality :=
slots.SyncCommitteePeriod(slots.ToEpoch(newUpdateFinalizedHeaderBeacon.Slot)) ==
slots.SyncCommitteePeriod(slots.ToEpoch(newUpdateAttestedHeaderBeacon.Slot))
oldHasSyncCommitteeFinality :=
slots.SyncCommitteePeriod(slots.ToEpoch(oldUpdateFinalizedHeaderBeacon.Slot)) ==
slots.SyncCommitteePeriod(slots.ToEpoch(oldUpdateAttestedHeaderBeacon.Slot))
if newHasSyncCommitteeFinality != oldHasSyncCommitteeFinality {
return newHasSyncCommitteeFinality, nil
}
}
// Tiebreaker 1: Sync committee participation beyond supermajority
if newNumActiveParticipants != oldNumActiveParticipants {
return newNumActiveParticipants > oldNumActiveParticipants, nil
}
// Tiebreaker 2: Prefer older data (fewer changes to best)
if newUpdateAttestedHeaderBeacon.Slot != oldUpdateAttestedHeaderBeacon.Slot {
return newUpdateAttestedHeaderBeacon.Slot < oldUpdateAttestedHeaderBeacon.Slot, nil
}
return newUpdate.SignatureSlot() < oldUpdate.SignatureSlot(), nil
}
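
The first rule in IsBetterUpdate compares 2/3 supermajority participation before any other criterion; raw participant counts only break ties among non-supermajority updates. With the mainnet sync committee size of 512, the threshold sits at 342 participants:

package main

import "fmt"

func hasSupermajority(active, max uint64) bool {
	return active*3 >= max*2
}

func main() {
	fmt.Println(hasSupermajority(342, 512)) // true: 1026 >= 1024
	fmt.Println(hasSupermajority(341, 512)) // false: 1023 < 1024
}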
func NewLightClientBootstrapFromBeaconState(
ctx context.Context,
currentSlot primitives.Slot,
state state.BeaconState,
block interfaces.ReadOnlySignedBeaconBlock,
) (interfaces.LightClientBootstrap, error) {
// assert compute_epoch_at_slot(state.slot) >= ALTAIR_FORK_EPOCH
if slots.ToEpoch(state.Slot()) < params.BeaconConfig().AltairForkEpoch {
return nil, fmt.Errorf("light client bootstrap is not supported before Altair, invalid slot %d", state.Slot())
}
// assert state.slot == state.latest_block_header.slot
latestBlockHeader := state.LatestBlockHeader()
if state.Slot() != latestBlockHeader.Slot {
return nil, fmt.Errorf("state slot %d not equal to latest block header slot %d", state.Slot(), latestBlockHeader.Slot)
}
// header.state_root = hash_tree_root(state)
stateRoot, err := state.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get state root")
}
latestBlockHeader.StateRoot = stateRoot[:]
// assert hash_tree_root(header) == hash_tree_root(block.message)
latestBlockHeaderRoot, err := latestBlockHeader.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get latest block header root")
}
beaconBlockRoot, err := block.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get block root")
}
if latestBlockHeaderRoot != beaconBlockRoot {
return nil, fmt.Errorf("latest block header root %#x not equal to block root %#x", latestBlockHeaderRoot, beaconBlockRoot)
}
bootstrap, err := createDefaultLightClientBootstrap(currentSlot)
if err != nil {
return nil, errors.Wrap(err, "could not create default light client bootstrap")
}
lightClientHeader, err := BlockToLightClientHeader(ctx, currentSlot, block)
if err != nil {
return nil, errors.Wrap(err, "could not convert block to light client header")
}
err = bootstrap.SetHeader(lightClientHeader)
if err != nil {
return nil, errors.Wrap(err, "could not set header")
}
currentSyncCommittee, err := state.CurrentSyncCommittee()
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee")
}
err = bootstrap.SetCurrentSyncCommittee(currentSyncCommittee)
if err != nil {
return nil, errors.Wrap(err, "could not set current sync committee")
}
currentSyncCommitteeProof, err := state.CurrentSyncCommitteeProof(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get current sync committee proof")
}
err = bootstrap.SetCurrentSyncCommitteeBranch(currentSyncCommitteeProof)
if err != nil {
return nil, errors.Wrap(err, "could not set current sync committee proof")
}
return bootstrap, nil
}
func createDefaultLightClientBootstrap(currentSlot primitives.Slot) (interfaces.LightClientBootstrap, error) {
currentEpoch := slots.ToEpoch(currentSlot)
syncCommitteeSize := params.BeaconConfig().SyncCommitteeSize
pubKeys := make([][]byte, syncCommitteeSize)
for i := uint64(0); i < syncCommitteeSize; i++ {
pubKeys[i] = make([]byte, fieldparams.BLSPubkeyLength)
}
currentSyncCommittee := &pb.SyncCommittee{
Pubkeys: pubKeys,
AggregatePubkey: make([]byte, fieldparams.BLSPubkeyLength),
}
var currentSyncCommitteeBranch [][]byte
if currentEpoch >= params.BeaconConfig().ElectraForkEpoch {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepthElectra)
} else {
currentSyncCommitteeBranch = make([][]byte, fieldparams.SyncCommitteeBranchDepth)
}
for i := 0; i < len(currentSyncCommitteeBranch); i++ {
currentSyncCommitteeBranch[i] = make([]byte, fieldparams.RootLength)
}
executionBranch := make([][]byte, fieldparams.ExecutionBranchDepth)
for i := 0; i < fieldparams.ExecutionBranchDepth; i++ {
executionBranch[i] = make([]byte, 32)
}
// TODO: can this be based on the current epoch?
var m proto.Message
if currentEpoch < params.BeaconConfig().CapellaForkEpoch {
m = &pb.LightClientBootstrapAltair{
Header: &pb.LightClientHeaderAltair{
Beacon: &pb.BeaconBlockHeader{},
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().DenebForkEpoch {
m = &pb.LightClientBootstrapCapella{
Header: &pb.LightClientHeaderCapella{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderCapella{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
m = &pb.LightClientBootstrapDeneb{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
} else {
m = &pb.LightClientBootstrapElectra{
Header: &pb.LightClientHeaderDeneb{
Beacon: &pb.BeaconBlockHeader{},
Execution: &enginev1.ExecutionPayloadHeaderDeneb{},
ExecutionBranch: executionBranch,
},
CurrentSyncCommittee: currentSyncCommittee,
CurrentSyncCommitteeBranch: currentSyncCommitteeBranch,
}
}
return light_client.NewWrappedBootstrap(m)
}
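An end-to-end sketch (wiring assumed, helper name hypothetical) that builds a bootstrap for a block and persists it under the block root via the SaveLightClientBootstrap method introduced in the database changes below:
import (
	"context"

	lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

// saveBootstrapForBlock is a hypothetical helper showing the intended flow.
func saveBootstrapForBlock(
	ctx context.Context,
	d db.NoHeadAccessDatabase,
	currentSlot primitives.Slot,
	st state.BeaconState,
	blk interfaces.ReadOnlySignedBeaconBlock,
) error {
	bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(ctx, currentSlot, st, blk)
	if err != nil {
		return err
	}
	// Key by block root so the LightClientBootstrap read path can find it.
	root, err := blk.Block().HashTreeRoot()
	if err != nil {
		return err
	}
	return d.SaveLightClientBootstrap(ctx, root[:], bootstrap)
}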

View File

@@ -1,16 +1,17 @@
package light_client_test
import (
"reflect"
"testing"
"github.com/pkg/errors"
lightClient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensustypes "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
v11 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
lightClient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
@@ -19,39 +20,39 @@ func TestLightClient_NewLightClientOptimisticUpdateFromBeaconState(t *testing.T)
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
})
t.Run("Capella", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
})
t.Run("Deneb", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock)
update, err := lightClient.NewLightClientOptimisticUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
})
}
@@ -60,33 +61,33 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
//zeroHash := params.BeaconConfig().ZeroHash[:]
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
updateFinalizedHeaderBeacon, err := update.FinalizedHeader.GetBeacon()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
require.Equal(t, reflect.TypeOf(update.FinalizedHeader().Proto()), reflect.TypeOf(&pb.LightClientHeaderAltair{}), "Finalized header is not Altair")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
require.Equal(t, lightClient.FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
finalityBranch, err := l.AttestedState.FinalizedRootProof(l.Ctx)
fb, err := update.FinalityBranch()
require.NoError(t, err)
for i, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, finalityBranch[i], leaf, "Leaf is not equal")
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
})
})
@@ -95,30 +96,31 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
updateFinalizedHeaderBeacon, err := update.FinalizedHeader.GetBeacon()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
require.Equal(t, reflect.TypeOf(update.FinalizedHeader().Proto()), reflect.TypeOf(&pb.LightClientHeaderCapella{}), "Finalized header is not Capella")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
require.Equal(t, lightClient.FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
finalityBranch, err := l.AttestedState.FinalizedRootProof(l.Ctx)
fb, err := update.FinalityBranch()
require.NoError(t, err)
for i, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, finalityBranch[i], leaf, "Leaf is not equal")
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
// Check Execution BlockHash
@@ -161,35 +163,38 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
require.DeepSSZEqual(t, execution, update.FinalizedHeader.GetHeaderCapella().Execution, "Finalized Block Execution is not equal")
updateExecution, err := update.FinalizedHeader().Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapellaFinalizedBlockAltair(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
updateFinalizedHeaderBeacon, err := update.FinalizedHeader.GetBeacon()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
require.Equal(t, reflect.TypeOf(update.FinalizedHeader().Proto()), reflect.TypeOf(&pb.LightClientHeaderCapella{}), "Finalized header is not Capella")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
require.Equal(t, lightClient.FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
finalityBranch, err := l.AttestedState.FinalizedRootProof(l.Ctx)
fb, err := update.FinalityBranch()
require.NoError(t, err)
for i, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, finalityBranch[i], leaf, "Leaf is not equal")
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
})
})
@@ -199,31 +204,31 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
t.Run("FinalizedBlock Not Nil", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
//zeroHash := params.BeaconConfig().ZeroHash[:]
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
updateFinalizedHeaderBeacon, err := update.FinalizedHeader.GetBeacon()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
require.Equal(t, lightClient.FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
finalityBranch, err := l.AttestedState.FinalizedRootProof(l.Ctx)
fb, err := update.FinalityBranch()
require.NoError(t, err)
for i, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, finalityBranch[i], leaf, "Leaf is not equal")
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
// Check Execution BlockHash
@@ -266,36 +271,39 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
require.DeepSSZEqual(t, execution, update.FinalizedHeader.GetHeaderDeneb().Execution, "Finalized Block Execution is not equal")
updateExecution, err := update.FinalizedHeader().Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
t.Run("FinalizedBlock In Previous Fork", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDenebFinalizedBlockCapella(false)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
update, err := lightClient.NewLightClientFinalityUpdateFromBeaconState(l.Ctx, l.State.Slot(), l.State, l.Block, l.AttestedState, l.AttestedBlock, l.FinalizedBlock)
require.NoError(t, err)
require.NotNil(t, update, "update is nil")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot, "Signature slot is not equal")
require.Equal(t, l.Block.Block().Slot(), update.SignatureSlot(), "Signature slot is not equal")
l.CheckSyncAggregate(update.SyncAggregate)
l.CheckAttestedHeader(update.AttestedHeader)
l.CheckSyncAggregate(update.SyncAggregate())
l.CheckAttestedHeader(update.AttestedHeader())
finalizedBlockHeader, err := l.FinalizedBlock.Header()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader, "Finalized header is nil")
updateFinalizedHeaderBeacon, err := update.FinalizedHeader.GetBeacon()
require.NoError(t, err)
require.NotNil(t, update.FinalizedHeader(), "Finalized header is nil")
updateFinalizedHeaderBeacon := update.FinalizedHeader().Beacon()
require.Equal(t, reflect.TypeOf(update.FinalizedHeader().Proto()), reflect.TypeOf(&pb.LightClientHeaderDeneb{}), "Finalized header is not Deneb")
require.Equal(t, finalizedBlockHeader.Header.Slot, updateFinalizedHeaderBeacon.Slot, "Finalized header slot is not equal")
require.Equal(t, finalizedBlockHeader.Header.ProposerIndex, updateFinalizedHeaderBeacon.ProposerIndex, "Finalized header proposer index is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.ParentRoot, updateFinalizedHeaderBeacon.ParentRoot, "Finalized header parent root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.StateRoot, updateFinalizedHeaderBeacon.StateRoot, "Finalized header state root is not equal")
require.DeepSSZEqual(t, finalizedBlockHeader.Header.BodyRoot, updateFinalizedHeaderBeacon.BodyRoot, "Finalized header body root is not equal")
require.Equal(t, lightClient.FinalityBranchNumOfLeaves, len(update.FinalityBranch), "Invalid finality branch leaves")
finalityBranch, err := l.AttestedState.FinalizedRootProof(l.Ctx)
fb, err := update.FinalityBranch()
require.NoError(t, err)
for i, leaf := range update.FinalityBranch {
require.DeepSSZEqual(t, finalityBranch[i], leaf, "Leaf is not equal")
proof, err := l.AttestedState.FinalizedRootProof(l.Ctx)
require.NoError(t, err)
for i, leaf := range fb {
require.DeepSSZEqual(t, proof[i], leaf[:], "Leaf is not equal")
}
// Check Execution BlockHash
@@ -321,7 +329,7 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
} else {
require.NoError(t, err)
}
execution := &v11.ExecutionPayloadHeaderCapella{
execution := &v11.ExecutionPayloadHeaderDeneb{
ParentHash: payloadInterface.ParentHash(),
FeeRecipient: payloadInterface.FeeRecipient(),
StateRoot: payloadInterface.StateRoot(),
@@ -338,7 +346,9 @@ func TestLightClient_NewLightClientFinalityUpdateFromBeaconState(t *testing.T) {
TransactionsRoot: transactionsRoot,
WithdrawalsRoot: withdrawalsRoot,
}
require.DeepSSZEqual(t, execution, update.FinalizedHeader.GetHeaderCapella().Execution, "Finalized Block Execution is not equal")
updateExecution, err := update.FinalizedHeader().Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, execution, updateExecution.Proto(), "Finalized Block Execution is not equal")
})
})
}
@@ -347,9 +357,8 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderAltair()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -357,19 +366,18 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
bodyRoot, err := l.Block.Block().Body().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
})
t.Run("Bellatrix", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestBellatrix()
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderAltair()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -377,20 +385,19 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
bodyRoot, err := l.Block.Block().Body().HashTreeRoot()
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
})
t.Run("Capella", func(t *testing.T) {
t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderCapella()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -428,23 +435,26 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
t.Run("Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(true)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderCapella()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -482,15 +492,19 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
})
@@ -498,9 +512,8 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderDeneb()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -546,23 +559,26 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
t.Run("Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(true)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderDeneb()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -608,15 +624,19 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
})
@@ -624,9 +644,8 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
t.Run("Non-Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectra(false)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderDeneb()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -672,23 +691,26 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
t.Run("Blinded Beacon Block", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestElectra(true)
container, err := lightClient.BlockToLightClientHeader(l.Block)
header, err := lightClient.BlockToLightClientHeader(l.Ctx, l.State.Slot(), l.Block)
require.NoError(t, err)
header := container.GetHeaderDeneb()
require.NotNil(t, header, "header is nil")
parentRoot := l.Block.Block().ParentRoot()
@@ -734,15 +756,27 @@ func TestLightClient_BlockToLightClientHeader(t *testing.T) {
executionPayloadProof, err := blocks.PayloadProof(l.Ctx, l.Block.Block())
require.NoError(t, err)
require.Equal(t, l.Block.Block().Slot(), header.Beacon.Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon.ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon.ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon.StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon.BodyRoot, "Body root is not equal")
require.Equal(t, l.Block.Block().Slot(), header.Beacon().Slot, "Slot is not equal")
require.Equal(t, l.Block.Block().ProposerIndex(), header.Beacon().ProposerIndex, "Proposer index is not equal")
require.DeepSSZEqual(t, parentRoot[:], header.Beacon().ParentRoot, "Parent root is not equal")
require.DeepSSZEqual(t, stateRoot[:], header.Beacon().StateRoot, "State root is not equal")
require.DeepSSZEqual(t, bodyRoot[:], header.Beacon().BodyRoot, "Body root is not equal")
require.DeepSSZEqual(t, executionHeader, header.Execution, "Execution headers are not equal")
headerExecution, err := header.Execution()
require.NoError(t, err)
require.DeepSSZEqual(t, executionHeader, headerExecution.Proto(), "Execution headers are not equal")
require.DeepSSZEqual(t, executionPayloadProof, header.ExecutionBranch, "Execution payload proofs are not equal")
headerExecutionBranch, err := header.ExecutionBranch()
require.NoError(t, err)
require.DeepSSZEqual(t, executionPayloadProof, convertArrayToSlice(headerExecutionBranch), "Execution payload proofs are not equal")
})
})
}
func convertArrayToSlice(arr [4][32]uint8) [][]uint8 {
slice := make([][]uint8, len(arr))
for i := range arr {
slice[i] = arr[i][:]
}
return slice
}

View File

@@ -18,7 +18,6 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//monitoring/backup:go_default_library",
"//proto/dbval:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
],

View File

@@ -7,8 +7,6 @@ import (
"context"
"io"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/ethereum/go-ethereum/common"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filters"
slashertypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/slasher/types"
@@ -59,8 +57,9 @@ type ReadOnlyDatabase interface {
FeeRecipientByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (common.Address, error)
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// light client operations
LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]*ethpbv2.LightClientUpdateWithVersion, error)
LightClientUpdate(ctx context.Context, period uint64) (*ethpbv2.LightClientUpdateWithVersion, error)
LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]interfaces.LightClientUpdate, error)
LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error)
LightClientBootstrap(ctx context.Context, blockRoot []byte) (interfaces.LightClientBootstrap, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
@@ -98,7 +97,8 @@ type NoHeadAccessDatabase interface {
SaveFeeRecipientsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, addrs []common.Address) error
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error
// light client operations
SaveLightClientUpdate(ctx context.Context, period uint64, update *ethpbv2.LightClientUpdateWithVersion) error
SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error
SaveLightClientBootstrap(ctx context.Context, blockRoot []byte, bootstrap interfaces.LightClientBootstrap) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error
}
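A short caller sketch against the updated read interface (caller code assumed, not part of the diff); the returned map is keyed by period and each update reports its own fork version:
import (
	"context"
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// logUpdatesInRange is a hypothetical helper for illustration.
func logUpdatesInRange(ctx context.Context, d db.ReadOnlyDatabase, start, end uint64) error {
	updates, err := d.LightClientUpdates(ctx, start, end)
	if err != nil {
		return err
	}
	for period, u := range updates {
		// Version() reports the fork the update was serialized under.
		fmt.Printf("period %d: %s update\n", period, version.String(u.Version()))
	}
	return nil
}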

View File

@@ -44,6 +44,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -53,7 +54,6 @@ go_library(
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/dbval:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time:go_default_library",
@@ -112,18 +112,18 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/light-client:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/dbval:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -23,10 +23,10 @@ import (
bolt "go.etcd.io/bbolt"
)
// used to represent errors for inconsistent slot ranges.
// Used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root.
// Block retrieval by root. Returns nil if the block is not found.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()

View File

@@ -108,6 +108,7 @@ var Buckets = [][]byte{
stateSummaryBucket,
stateValidatorsBucket,
lightClientUpdatesBucket,
lightClientBootstrapBucket,
// Indices buckets.
blockSlotIndicesBucket,
stateSlotIndicesBucket,

View File

@@ -5,35 +5,126 @@ import (
"encoding/binary"
"fmt"
"github.com/golang/snappy"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
light_client "github.com/prysmaticlabs/prysm/v5/consensus-types/light-client"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
bolt "go.etcd.io/bbolt"
"google.golang.org/protobuf/proto"
)
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update *ethpbv2.LightClientUpdateWithVersion) error {
ctx, span := trace.StartSpan(ctx, "BeaconDB.saveLightClientUpdate")
func (s *Store) SaveLightClientUpdate(ctx context.Context, period uint64, update interfaces.LightClientUpdate) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveLightClientUpdate")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientUpdatesBucket)
updateMarshalled, err := encode(ctx, update)
enc, err := encodeLightClientUpdate(update)
if err != nil {
return err
}
return bkt.Put(bytesutil.Uint64ToBytesBigEndian(period), updateMarshalled)
return bkt.Put(bytesutil.Uint64ToBytesBigEndian(period), enc)
})
}
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]*ethpbv2.LightClientUpdateWithVersion, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LightClientUpdates")
func (s *Store) SaveLightClientBootstrap(ctx context.Context, blockRoot []byte, bootstrap interfaces.LightClientBootstrap) error {
_, span := trace.StartSpan(ctx, "BeaconDB.SaveLightClientBootstrap")
defer span.End()
return s.db.Update(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientBootstrapBucket)
enc, err := encodeLightClientBootstrap(bootstrap)
if err != nil {
return err
}
return bkt.Put(blockRoot, enc)
})
}
func (s *Store) LightClientBootstrap(ctx context.Context, blockRoot []byte) (interfaces.LightClientBootstrap, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.LightClientBootstrap")
defer span.End()
var bootstrap interfaces.LightClientBootstrap
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientBootstrapBucket)
enc := bkt.Get(blockRoot)
if enc == nil {
return nil
}
var err error
bootstrap, err = decodeLightClientBootstrap(enc)
return err
})
return bootstrap, err
}
func encodeLightClientBootstrap(bootstrap interfaces.LightClientBootstrap) ([]byte, error) {
key, err := keyForLightClientUpdate(bootstrap.Version())
if err != nil {
return nil, err
}
enc, err := bootstrap.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal light client bootstrap")
}
fullEnc := make([]byte, len(key)+len(enc))
copy(fullEnc, key)
copy(fullEnc[len(key):], enc)
return snappy.Encode(nil, fullEnc), nil
}
func decodeLightClientBootstrap(enc []byte) (interfaces.LightClientBootstrap, error) {
var err error
enc, err = snappy.Decode(nil, enc)
if err != nil {
return nil, errors.Wrap(err, "could not snappy decode light client bootstrap")
}
var m proto.Message
switch {
case hasAltairKey(enc):
bootstrap := &ethpb.LightClientBootstrapAltair{}
if err := bootstrap.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Altair light client bootstrap")
}
m = bootstrap
case hasCapellaKey(enc):
bootstrap := &ethpb.LightClientBootstrapCapella{}
if err := bootstrap.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Capella light client bootstrap")
}
m = bootstrap
case hasDenebKey(enc):
bootstrap := &ethpb.LightClientBootstrapDeneb{}
if err := bootstrap.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Deneb light client bootstrap")
}
m = bootstrap
case hasElectraKey(enc):
bootstrap := &ethpb.LightClientBootstrapElectra{}
if err := bootstrap.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra light client bootstrap")
}
m = bootstrap
default:
return nil, errors.New("decoding of saved light client bootstrap is unsupported")
}
return light_client.NewWrappedBootstrap(m)
}
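Note the read path above returns a nil bootstrap with a nil error when nothing is stored for the root, so callers must nil-check; a hedged sketch using the file's existing imports (helper name hypothetical):
// mustBootstrap distinguishes "not found" from storage errors.
func mustBootstrap(ctx context.Context, s *Store, blockRoot [32]byte) (interfaces.LightClientBootstrap, error) {
	bootstrap, err := s.LightClientBootstrap(ctx, blockRoot[:])
	if err != nil {
		return nil, err
	}
	if bootstrap == nil {
		// The store reports (nil, nil) when no bootstrap was saved for this root.
		return nil, fmt.Errorf("no light client bootstrap for root %#x", blockRoot)
	}
	return bootstrap, nil
}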
func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod uint64) (map[uint64]interfaces.LightClientUpdate, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.LightClientUpdates")
defer span.End()
if startPeriod > endPeriod {
return nil, fmt.Errorf("start period %d is greater than end period %d", startPeriod, endPeriod)
}
updates := make(map[uint64]*ethpbv2.LightClientUpdateWithVersion)
updates := make(map[uint64]interfaces.LightClientUpdate)
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientUpdatesBucket)
c := bkt.Cursor()
@@ -46,11 +137,11 @@ func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod u
for k, v := c.Seek(bytesutil.Uint64ToBytesBigEndian(startPeriod)); k != nil && binary.BigEndian.Uint64(k) <= endPeriod; k, v = c.Next() {
currentPeriod := binary.BigEndian.Uint64(k)
var update ethpbv2.LightClientUpdateWithVersion
if err := decode(ctx, v, &update); err != nil {
update, err := decodeLightClientUpdate(v)
if err != nil {
return err
}
updates[currentPeriod] = &update
updates[currentPeriod] = update
}
return nil
@@ -62,18 +153,88 @@ func (s *Store) LightClientUpdates(ctx context.Context, startPeriod, endPeriod u
return updates, err
}
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (*ethpbv2.LightClientUpdateWithVersion, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.LightClientUpdate")
func (s *Store) LightClientUpdate(ctx context.Context, period uint64) (interfaces.LightClientUpdate, error) {
_, span := trace.StartSpan(ctx, "BeaconDB.LightClientUpdate")
defer span.End()
var update ethpbv2.LightClientUpdateWithVersion
var update interfaces.LightClientUpdate
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(lightClientUpdatesBucket)
updateBytes := bkt.Get(bytesutil.Uint64ToBytesBigEndian(period))
if updateBytes == nil {
return nil
}
return decode(ctx, updateBytes, &update)
var err error
update, err = decodeLightClientUpdate(updateBytes)
return err
})
return &update, err
return update, err
}
func encodeLightClientUpdate(update interfaces.LightClientUpdate) ([]byte, error) {
key, err := keyForLightClientUpdate(update.Version())
if err != nil {
return nil, err
}
enc, err := update.MarshalSSZ()
if err != nil {
return nil, errors.Wrap(err, "could not marshal light client update")
}
fullEnc := make([]byte, len(key)+len(enc))
copy(fullEnc, key)
copy(fullEnc[len(key):], enc)
return snappy.Encode(nil, fullEnc), nil
}
func decodeLightClientUpdate(enc []byte) (interfaces.LightClientUpdate, error) {
var err error
enc, err = snappy.Decode(nil, enc)
if err != nil {
return nil, errors.Wrap(err, "could not snappy decode light client update")
}
var m proto.Message
switch {
case hasAltairKey(enc):
update := &ethpb.LightClientUpdateAltair{}
if err := update.UnmarshalSSZ(enc[len(altairKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Altair light client update")
}
m = update
case hasCapellaKey(enc):
update := &ethpb.LightClientUpdateCapella{}
if err := update.UnmarshalSSZ(enc[len(capellaKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Capella light client update")
}
m = update
case hasDenebKey(enc):
update := &ethpb.LightClientUpdateDeneb{}
if err := update.UnmarshalSSZ(enc[len(denebKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Deneb light client update")
}
m = update
case hasElectraKey(enc):
update := &ethpb.LightClientUpdateElectra{}
if err := update.UnmarshalSSZ(enc[len(electraKey):]); err != nil {
return nil, errors.Wrap(err, "could not unmarshal Electra light client update")
}
m = update
default:
return nil, errors.New("decoding of saved light client update is unsupported")
}
return light_client.NewWrappedUpdate(m)
}
func keyForLightClientUpdate(v int) ([]byte, error) {
switch v {
case version.Electra:
return electraKey, nil
case version.Deneb:
return denebKey, nil
case version.Capella:
return capellaKey, nil
case version.Altair:
return altairKey, nil
default:
return nil, fmt.Errorf("unsupported light client update version %s", version.String(v))
}
}
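The has*Key predicates used by the decoders are not shown in this diff; they presumably test for the fork prefix written by keyForLightClientUpdate after snappy decoding. A sketch of the assumed shape:
import "bytes"

// hasForkKey is an assumed shape for the predicates above, e.g.
// hasDenebKey(enc) would be hasForkKey(enc, denebKey).
func hasForkKey(enc, key []byte) bool {
	return len(enc) >= len(key) && bytes.Equal(enc[:len(key)], key)
}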

File diff suppressed because it is too large

View File

@@ -18,7 +18,8 @@ var (
registrationBucket = []byte("registration")
// Light Client Updates Bucket
lightClientUpdatesBucket = []byte("light-client-updates")
lightClientUpdatesBucket = []byte("light-client-updates")
lightClientBootstrapBucket = []byte("light-client-bootstrap")
// Deprecated: This bucket was migrated in PR 6461. Do not use, except for migrations.
slotsHasObjectBucket = []byte("slots-has-objects")

View File

@@ -1,24 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/deterministic-genesis",
visibility = ["//beacon-chain:__subpackages__"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime:go_default_library",
"//runtime/interop:go_default_library",
"//time/slots:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,7 +0,0 @@
package interopcoldstart
import (
"github.com/sirupsen/logrus"
)
var log = logrus.WithField("prefix", "deterministic-genesis")

View File

@@ -1,206 +0,0 @@
// Package interopcoldstart allows for spinning up a deterministic-genesis
// local chain without the need for eth1 deposits, useful for
// local client development and interoperability testing.
package interopcoldstart
import (
"context"
"math/big"
"os"
"time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime"
"github.com/prysmaticlabs/prysm/v5/runtime/interop"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
var _ runtime.Service = (*Service)(nil)
var _ cache.FinalizedFetcher = (*Service)(nil)
var _ execution.ChainStartFetcher = (*Service)(nil)
// Service spins up a client interoperability service that handles responsibilities such
// as kickstarting a genesis state for the beacon node from cli flags or a genesis.ssz file.
type Service struct {
cfg *Config
ctx context.Context
cancel context.CancelFunc
chainStartDeposits []*ethpb.Deposit
}
// All of these methods are stubs as they are not used by a node running with deterministic-genesis.
func (s *Service) AllDepositContainers(ctx context.Context) []*ethpb.DepositContainer {
log.Errorf("AllDepositContainers should not be called")
return nil
}
func (s *Service) InsertPendingDeposit(ctx context.Context, d *ethpb.Deposit, blockNum uint64, index int64, depositRoot [32]byte) {
log.Errorf("InsertPendingDeposit should not be called")
}
func (s *Service) PendingDeposits(ctx context.Context, untilBlk *big.Int) []*ethpb.Deposit {
log.Errorf("PendingDeposits should not be called")
return nil
}
func (s *Service) PendingContainers(ctx context.Context, untilBlk *big.Int) []*ethpb.DepositContainer {
log.Errorf("PendingContainers should not be called")
return nil
}
func (s *Service) PrunePendingDeposits(ctx context.Context, merkleTreeIndex int64) {
log.Errorf("PrunePendingDeposits should not be called")
}
func (s *Service) PruneProofs(ctx context.Context, untilDepositIndex int64) error {
log.Errorf("PruneProofs should not be called")
return nil
}
// Config options for the interop service.
type Config struct {
GenesisTime uint64
NumValidators uint64
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
GenesisPath string
}
// NewService creates an interoperability testing service that injects a deterministically generated genesis state
// into the beacon chain database and running services at startup. This service should not be used in production
// as it has no value other than ease of use for testing purposes.
func NewService(ctx context.Context, cfg *Config) *Service {
ctx, cancel := context.WithCancel(ctx)
return &Service{
cfg: cfg,
ctx: ctx,
cancel: cancel,
}
}
// Start initializes the genesis state from configured flags.
func (s *Service) Start() {
log.Warn("Saving generated genesis state in database for interop testing")
if s.cfg.GenesisPath != "" {
data, err := os.ReadFile(s.cfg.GenesisPath)
if err != nil {
log.WithError(err).Fatal("Could not read pre-loaded state")
}
genesisState := &ethpb.BeaconState{}
if err := genesisState.UnmarshalSSZ(data); err != nil {
log.WithError(err).Fatal("Could not unmarshal pre-loaded state")
}
genesisTrie, err := state_native.InitializeFromProtoPhase0(genesisState)
if err != nil {
log.WithError(err).Fatal("Could not get state trie")
}
if err := s.saveGenesisState(s.ctx, genesisTrie); err != nil {
log.WithError(err).Fatal("Could not save interop genesis state")
}
return
}
// Save genesis state in db
genesisState, _, err := interop.GenerateGenesisState(s.ctx, s.cfg.GenesisTime, s.cfg.NumValidators)
if err != nil {
log.WithError(err).Fatal("Could not generate interop genesis state")
}
genesisTrie, err := state_native.InitializeFromProtoPhase0(genesisState)
if err != nil {
log.WithError(err).Fatal("Could not get state trie")
}
if s.cfg.GenesisTime == 0 {
// Generated genesis time; fetch it
s.cfg.GenesisTime = genesisTrie.GenesisTime()
}
gRoot, err := genesisTrie.HashTreeRoot(s.ctx)
if err != nil {
log.WithError(err).Fatal("Could not hash tree root genesis state")
}
go slots.CountdownToGenesis(s.ctx, time.Unix(int64(s.cfg.GenesisTime), 0), s.cfg.NumValidators, gRoot)
if err := s.saveGenesisState(s.ctx, genesisTrie); err != nil {
log.WithError(err).Fatal("Could not save interop genesis state")
}
}
// Stop does nothing.
func (_ *Service) Stop() error {
return nil
}
// Status always returns nil.
func (_ *Service) Status() error {
return nil
}
// AllDeposits mocks out the deposit cache functionality for interop.
func (_ *Service) AllDeposits(_ context.Context, _ *big.Int) []*ethpb.Deposit {
return []*ethpb.Deposit{}
}
// ChainStartEth1Data mocks out the powchain functionality for interop.
func (_ *Service) ChainStartEth1Data() *ethpb.Eth1Data {
return &ethpb.Eth1Data{}
}
// PreGenesisState returns an empty beacon state.
func (_ *Service) PreGenesisState() state.BeaconState {
s, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{})
if err != nil {
panic("could not initialize state")
}
return s
}
// ClearPreGenesisData --
func (_ *Service) ClearPreGenesisData() {
// no-op
}
// DepositByPubkey mocks out the deposit cache functionality for interop.
func (_ *Service) DepositByPubkey(_ context.Context, _ []byte) (*ethpb.Deposit, *big.Int) {
return &ethpb.Deposit{}, nil
}
// DepositsNumberAndRootAtHeight mocks out the deposit cache functionality for interop.
func (_ *Service) DepositsNumberAndRootAtHeight(_ context.Context, _ *big.Int) (uint64, [32]byte) {
return 0, [32]byte{}
}
// FinalizedDeposits mocks out the deposit cache functionality for interop.
func (_ *Service) FinalizedDeposits(ctx context.Context) (cache.FinalizedDeposits, error) {
return nil, nil
}
// NonFinalizedDeposits mocks out the deposit cache functionality for interop.
func (_ *Service) NonFinalizedDeposits(_ context.Context, _ int64, _ *big.Int) []*ethpb.Deposit {
return []*ethpb.Deposit{}
}
func (s *Service) saveGenesisState(ctx context.Context, genesisState state.BeaconState) error {
if err := s.cfg.BeaconDB.SaveGenesisData(ctx, genesisState); err != nil {
return err
}
s.chainStartDeposits = make([]*ethpb.Deposit, genesisState.NumValidators())
for i := primitives.ValidatorIndex(0); uint64(i) < uint64(genesisState.NumValidators()); i++ {
pk := genesisState.PubkeyAtIndex(i)
s.chainStartDeposits[i] = &ethpb.Deposit{
Data: &ethpb.Deposit_Data{
PublicKey: pk[:],
},
}
}
return nil
}
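For context, here is a minimal, hypothetical driver (not part of the diff) showing how the same interop helpers compose outside the service. The calls mirror what Start does above; the import paths are taken from this file's own imports.

package main

import (
	"context"
	"log"

	state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
	"github.com/prysmaticlabs/prysm/v5/runtime/interop"
)

func main() {
	ctx := context.Background()
	// Generate a deterministic phase 0 genesis state for 64 validators.
	// A genesis time of 0 lets the generator pick one, as in Start above.
	genesisState, _, err := interop.GenerateGenesisState(ctx, 0, 64)
	if err != nil {
		log.Fatalf("could not generate genesis state: %v", err)
	}
	st, err := state_native.InitializeFromProtoPhase0(genesisState)
	if err != nil {
		log.Fatalf("could not initialize state trie: %v", err)
	}
	root, err := st.HashTreeRoot(ctx)
	if err != nil {
		log.Fatalf("could not compute state root: %v", err)
	}
	log.Printf("genesis time: %d, state root: %#x", st.GenesisTime(), root[:])
}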

View File

@@ -623,13 +623,7 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
continue
}
// Verify the sidecar KZG proof
v := s.blobVerifier(roBlob, verification.ELMemPoolRequirements)
if err := v.SidecarKzgProofVerified(); err != nil {
log.WithError(err).WithField("index", i).Error("failed to verify KZG proof for sidecar")
continue
}
verifiedBlob, err := v.VerifiedROBlob()
if err != nil {
log.WithError(err).WithField("index", i).Error("failed to verify RO blob")
@@ -807,7 +801,7 @@ func tDStringToUint256(td string) (*uint256.Int, error) {
return i, nil
}
func buildEmptyExecutionPayload(v int) (proto.Message, error) {
func EmptyExecutionPayload(v int) (proto.Message, error) {
switch v {
case version.Bellatrix:
return &pb.ExecutionPayload{

View File

@@ -205,7 +205,7 @@ func (r *blindedBlockReconstructor) requestBodiesByHash(ctx context.Context, cli
func (r *blindedBlockReconstructor) payloadForHeader(header interfaces.ExecutionData, v int) (proto.Message, error) {
bodyKey := bytesutil.ToBytes32(header.BlockHash())
if bodyKey == params.BeaconConfig().ZeroHash {
payload, err := buildEmptyExecutionPayload(v)
payload, err := EmptyExecutionPayload(v)
if err != nil {
return nil, errors.Wrapf(err, "failed to reconstruct payload for body hash %#x", bodyKey)
}
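The zero-hash guard above means a blinded block whose header carries params.BeaconConfig().ZeroHash is reconstructed with an empty payload instead of a round trip to the execution client. Below is a hedged sketch of the now-exported EmptyExecutionPayload's contract; the Capella and Deneb cases and the zeroed fields are assumptions, since the hunk is truncated after the Bellatrix case.

import (
	"github.com/pkg/errors"
	pb "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
	"google.golang.org/protobuf/proto"
)

// emptyExecutionPayloadSketch mirrors the exported helper's shape: return an
// all-zero payload message matching the given fork version. The real helper
// also pre-sizes fixed-length fields (parent hash, fee recipient, and so on).
func emptyExecutionPayloadSketch(v int) (proto.Message, error) {
	switch v {
	case version.Bellatrix:
		return &pb.ExecutionPayload{}, nil
	case version.Capella:
		return &pb.ExecutionPayloadCapella{}, nil
	case version.Deneb:
		return &pb.ExecutionPayloadDeneb{}, nil
	default:
		return nil, errors.Errorf("unsupported fork version %d", v)
	}
}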

View File

@@ -53,7 +53,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// Only reorg blocks that arrive late
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
log.WithError(err).Error("could not check if block arrived early")
log.WithError(err).Error("Could not check if block arrived early")
return
}
if early {

View File

@@ -26,7 +26,6 @@ go_library(
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/deterministic-genesis:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/forkchoice/doubly-linked-tree:go_default_library",
@@ -92,12 +91,9 @@ go_test(
"//cmd:go_default_library",
"//cmd/beacon-chain/flags:go_default_library",
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime:go_default_library",
"//runtime/interop:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -144,23 +144,6 @@ func configureNetwork(cliCtx *cli.Context) {
}
}
func configureInteropConfig(cliCtx *cli.Context) error {
// an explicit chain config was specified, don't mess with it
if cliCtx.IsSet(cmd.ChainConfigFileFlag.Name) {
return nil
}
genTimeIsSet := cliCtx.IsSet(flags.InteropGenesisTimeFlag.Name)
numValsIsSet := cliCtx.IsSet(flags.InteropNumValidatorsFlag.Name)
votesIsSet := cliCtx.IsSet(flags.InteropMockEth1DataVotesFlag.Name)
if genTimeIsSet || numValsIsSet || votesIsSet {
if err := params.SetActive(params.InteropConfig().Copy()); err != nil {
return err
}
}
return nil
}
func configureExecutionSetting(cliCtx *cli.Context) error {
if cliCtx.IsSet(flags.TerminalTotalDifficultyOverride.Name) {
c := params.BeaconConfig()

View File

@@ -169,66 +169,6 @@ func TestConfigureNetwork_ConfigFile(t *testing.T) {
require.NoError(t, os.Remove("flags_test.yaml"))
}
func TestConfigureInterop(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
flagSetter func() *cli.Context
configName string
}{
{
"nothing set",
func() *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
return cli.NewContext(&app, set, nil)
},
"mainnet",
},
{
"mock votes set",
func() *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.Bool(flags.InteropMockEth1DataVotesFlag.Name, false, "")
assert.NoError(t, set.Set(flags.InteropMockEth1DataVotesFlag.Name, "true"))
return cli.NewContext(&app, set, nil)
},
"interop",
},
{
"validators set",
func() *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.Uint64(flags.InteropNumValidatorsFlag.Name, 0, "")
assert.NoError(t, set.Set(flags.InteropNumValidatorsFlag.Name, "20"))
return cli.NewContext(&app, set, nil)
},
"interop",
},
{
"genesis time set",
func() *cli.Context {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
set.Uint64(flags.InteropGenesisTimeFlag.Name, 0, "")
assert.NoError(t, set.Set(flags.InteropGenesisTimeFlag.Name, "200"))
return cli.NewContext(&app, set, nil)
},
"interop",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
require.NoError(t, configureInteropConfig(tt.flagSetter()))
assert.DeepEqual(t, tt.configName, params.BeaconConfig().ConfigName)
})
}
}
func TestAliasFlag(t *testing.T) {
// Create a new app with the flag
app := &cli.App{

View File

@@ -30,7 +30,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/db/slasherkv"
interopcoldstart "github.com/prysmaticlabs/prysm/v5/beacon-chain/deterministic-genesis"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
@@ -261,10 +260,6 @@ func configureBeacon(cliCtx *cli.Context) error {
configureNetwork(cliCtx)
if err := configureInteropConfig(cliCtx); err != nil {
return errors.Wrap(err, "could not configure interop config")
}
if err := configureExecutionSetting(cliCtx); err != nil {
return errors.Wrap(err, "could not configure execution setting")
}
@@ -324,11 +319,6 @@ func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *sta
return errors.Wrap(err, "could not register attestation pool service")
}
log.Debugln("Registering Deterministic Genesis Service")
if err := beacon.registerDeterministicGenesisService(); err != nil {
return errors.Wrap(err, "could not register deterministic genesis service")
}
log.Debugln("Registering Blockchain Service")
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
return errors.Wrap(err, "could not register blockchain service")
@@ -921,20 +911,8 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
}
}
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
var depositFetcher cache.DepositFetcher
var chainStartFetcher execution.ChainStartFetcher
if genesisValidators > 0 {
var interopService *interopcoldstart.Service
if err := b.services.FetchService(&interopService); err != nil {
return err
}
depositFetcher = interopService
chainStartFetcher = interopService
} else {
depositFetcher = b.depositCache
chainStartFetcher = web3Service
}
depositFetcher := b.depositCache
chainStartFetcher := web3Service
host := b.cliCtx.String(flags.RPCHost.Name)
port := b.cliCtx.String(flags.RPCPort.Name)
@@ -1056,32 +1034,6 @@ func (b *BeaconNode) registerHTTPService(router *http.ServeMux) error {
return b.services.RegisterService(g)
}
func (b *BeaconNode) registerDeterministicGenesisService() error {
genesisTime := b.cliCtx.Uint64(flags.InteropGenesisTimeFlag.Name)
genesisValidators := b.cliCtx.Uint64(flags.InteropNumValidatorsFlag.Name)
if genesisValidators > 0 {
svc := interopcoldstart.NewService(b.ctx, &interopcoldstart.Config{
GenesisTime: genesisTime,
NumValidators: genesisValidators,
BeaconDB: b.db,
DepositCache: b.depositCache,
})
svc.Start()
// Register genesis state as start-up state when interop mode.
// The start-up state gets reused across services.
st, err := b.db.GenesisState(b.ctx)
if err != nil {
return err
}
b.finalizedStateAtStartUp = st
return b.services.RegisterService(svc)
}
return nil
}
func (b *BeaconNode) registerValidatorMonitorService(initialSyncComplete chan struct{}) error {
cliSlice := b.cliCtx.IntSlice(cmd.ValidatorMonitorIndicesFlag.Name)
if cliSlice == nil {

View File

@@ -6,9 +6,7 @@ import (
"fmt"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strconv"
"testing"
"time"
@@ -21,13 +19,8 @@ import (
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/monitor"
"github.com/prysmaticlabs/prysm/v5/cmd"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime"
"github.com/prysmaticlabs/prysm/v5/runtime/interop"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
@@ -117,55 +110,6 @@ func TestNodeStart_SyncChecker(t *testing.T) {
require.LogsContain(t, hook, "Starting beacon node")
}
func TestNodeStart_Ok_registerDeterministicGenesisService(t *testing.T) {
numValidators := uint64(1)
hook := logTest.NewGlobal()
app := cli.App{}
tmp := fmt.Sprintf("%s/datadirtest2", t.TempDir())
set := flag.NewFlagSet("test", 0)
set.String("datadir", tmp, "node data directory")
set.Uint64(flags.InteropNumValidatorsFlag.Name, numValidators, "")
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
genesisState, _, err := interop.GenerateGenesisState(context.Background(), 0, numValidators)
require.NoError(t, err, "Could not generate genesis beacon state")
for i := uint64(1); i < 2; i++ {
var someRoot [32]byte
var someKey [fieldparams.BLSPubkeyLength]byte
copy(someRoot[:], strconv.Itoa(int(i)))
copy(someKey[:], strconv.Itoa(int(i)))
genesisState.Validators = append(genesisState.Validators, &ethpb.Validator{
PublicKey: someKey[:],
WithdrawalCredentials: someRoot[:],
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
Slashed: false,
ActivationEligibilityEpoch: 1,
ActivationEpoch: 1,
ExitEpoch: 1,
WithdrawableEpoch: 1,
})
genesisState.Balances = append(genesisState.Balances, params.BeaconConfig().MaxEffectiveBalance)
}
genesisBytes, err := genesisState.MarshalSSZ()
require.NoError(t, err)
require.NoError(t, os.WriteFile("genesis_ssz.json", genesisBytes, 0666))
set.String("genesis-state", "genesis_ssz.json", "")
ctx, cancel := newCliContextWithCancel(&app, set)
node, err := New(ctx, cancel, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
require.NoError(t, err)
node.services = &runtime.ServiceRegistry{}
go func() {
node.Start()
}()
time.Sleep(3 * time.Second)
node.Close()
require.LogsContain(t, hook, "Starting beacon node")
require.NoError(t, os.Remove("genesis_ssz.json"))
}
// TestClearDB tests clearing the database
func TestClearDB(t *testing.T) {
hook := logTest.NewGlobal()

View File

@@ -7,6 +7,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/operations/attestations:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],

View File

@@ -3,13 +3,17 @@ package mock
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var _ attestations.Pool = &PoolMock{}
// PoolMock --
type PoolMock struct {
AggregatedAtts []*ethpb.Attestation
AggregatedAtts []ethpb.Att
UnaggregatedAtts []ethpb.Att
}
// AggregateUnaggregatedAttestations --
@@ -23,18 +27,18 @@ func (*PoolMock) AggregateUnaggregatedAttestationsBySlotIndex(_ context.Context,
}
// SaveAggregatedAttestation --
func (*PoolMock) SaveAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveAggregatedAttestations --
func (m *PoolMock) SaveAggregatedAttestations(atts []*ethpb.Attestation) error {
func (m *PoolMock) SaveAggregatedAttestations(atts []ethpb.Att) error {
m.AggregatedAtts = append(m.AggregatedAtts, atts...)
return nil
}
// AggregatedAttestations --
func (m *PoolMock) AggregatedAttestations() []*ethpb.Attestation {
func (m *PoolMock) AggregatedAttestations() []ethpb.Att {
return m.AggregatedAtts
}
@@ -43,13 +47,18 @@ func (*PoolMock) AggregatedAttestationsBySlotIndex(_ context.Context, _ primitiv
panic("implement me")
}
// AggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) AggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteAggregatedAttestation --
func (*PoolMock) DeleteAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// HasAggregatedAttestation --
func (*PoolMock) HasAggregatedAttestation(_ *ethpb.Attestation) (bool, error) {
func (*PoolMock) HasAggregatedAttestation(_ ethpb.Att) (bool, error) {
panic("implement me")
}
@@ -59,18 +68,19 @@ func (*PoolMock) AggregatedAttestationCount() int {
}
// SaveUnaggregatedAttestation --
func (*PoolMock) SaveUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveUnaggregatedAttestations --
func (*PoolMock) SaveUnaggregatedAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
func (m *PoolMock) SaveUnaggregatedAttestations(atts []ethpb.Att) error {
m.UnaggregatedAtts = append(m.UnaggregatedAtts, atts...)
return nil
}
// UnaggregatedAttestations --
func (*PoolMock) UnaggregatedAttestations() ([]*ethpb.Attestation, error) {
panic("implement me")
func (m *PoolMock) UnaggregatedAttestations() ([]ethpb.Att, error) {
return m.UnaggregatedAtts, nil
}
// UnaggregatedAttestationsBySlotIndex --
@@ -78,8 +88,13 @@ func (*PoolMock) UnaggregatedAttestationsBySlotIndex(_ context.Context, _ primit
panic("implement me")
}
// UnaggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) UnaggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteUnaggregatedAttestation --
func (*PoolMock) DeleteUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
@@ -94,42 +109,42 @@ func (*PoolMock) UnaggregatedAttestationCount() int {
}
// SaveBlockAttestation --
func (*PoolMock) SaveBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveBlockAttestations --
func (*PoolMock) SaveBlockAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// BlockAttestations --
func (*PoolMock) BlockAttestations() []*ethpb.Attestation {
func (*PoolMock) BlockAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteBlockAttestation --
func (*PoolMock) DeleteBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestation --
func (*PoolMock) SaveForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestations --
func (*PoolMock) SaveForkchoiceAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// ForkchoiceAttestations --
func (*PoolMock) ForkchoiceAttestations() []*ethpb.Attestation {
func (*PoolMock) ForkchoiceAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteForkchoiceAttestation --
func (*PoolMock) DeleteForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}
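Since the mock now records unaggregated attestations too, a test can round-trip both kinds through the ethpb.Att interface. A hypothetical usage sketch follows (TestPoolMockRoundTrip is not part of the diff; it assumes *ethpb.Attestation and *ethpb.AttestationElectra both satisfy ethpb.Att).

import (
	"testing"

	attmock "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock"
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestPoolMockRoundTrip(t *testing.T) {
	pool := &attmock.PoolMock{}
	// Phase 0 and Electra attestations travel through the same interface now.
	atts := []ethpb.Att{&ethpb.Attestation{}, &ethpb.AttestationElectra{}}

	require.NoError(t, pool.SaveAggregatedAttestations(atts))
	require.Equal(t, 2, len(pool.AggregatedAttestations()))

	require.NoError(t, pool.SaveUnaggregatedAttestations(atts))
	unaggregated, err := pool.UnaggregatedAttestations()
	require.NoError(t, err)
	require.Equal(t, 2, len(unaggregated))
}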

View File

@@ -17,7 +17,6 @@ go_library(
"handshake.go",
"info.go",
"interfaces.go",
"iterator.go",
"log.go",
"message_id.go",
"monitoring.go",
@@ -75,6 +74,8 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
@@ -162,12 +163,10 @@ go_test(
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -225,11 +225,11 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
// Use smaller batch size for testing.
currentBatchSize := batchSize
batchSize = 2
defer func() {
pollingPeriod = currentPeriod
batchSize = currentBatchSize
}()
bootNode := bootListener.Self()

View File

@@ -33,7 +33,7 @@ func (*Service) InterceptPeerDial(_ peer.ID) (allow bool) {
// multiaddr for the given peer.
func (s *Service) InterceptAddrDial(pid peer.ID, m multiaddr.Multiaddr) (allow bool) {
// Disallow bad peers from dialing in.
if s.peers.IsBad(pid) {
if s.peers.IsBad(pid) != nil {
return false
}
return filterConnections(s.addrFilter, m)
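The gater now treats IsBad as an error-returning call: nil means the peer is acceptable, and a non-nil error explains the rejection. A small hypothetical helper showing the idiom callers use after this change:

import (
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
	log "github.com/sirupsen/logrus"
)

// allowPeer is a hypothetical wrapper over the new contract: a nil error from
// IsBad means the peer may dial; otherwise the error carries the reason.
func allowPeer(status *peers.Status, pid peer.ID) bool {
	if err := status.IsBad(pid); err != nil {
		log.WithError(err).WithField("peer", pid.String()).Debug("Rejecting bad peer")
		return false
	}
	return true
}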

View File

@@ -50,7 +50,7 @@ func TestPeer_AtMaxLimit(t *testing.T) {
}()
for i := 0; i < highWatermarkBuffer; i++ {
addPeer(t, s.peers, peers.PeerConnected, false)
addPeer(t, s.peers, peers.Connected, false)
}
// create alternate host
@@ -159,7 +159,7 @@ func TestService_RejectInboundPeersBeyondLimit(t *testing.T) {
inboundLimit += 1
// Add in up to inbound peer limit.
for i := 0; i < int(inboundLimit); i++ {
addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
valid = s.InterceptAccept(&maEndpoints{raddr: multiAddress})
if valid {

View File

@@ -22,6 +22,7 @@ import (
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
type ListenerRebooter interface {
@@ -47,10 +48,12 @@ const (
udp6
)
const quickProtocolEnrKey = "quic"
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return "quic" }
func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
type listenerWrapper struct {
mu sync.RWMutex
@@ -133,68 +136,129 @@ func (l *listenerWrapper) RebootListener() error {
return nil
}
// RefreshENR uses an epoch to refresh the enr entry for our node
// with the tracked committee ids for the epoch, allowing our node
// to be dynamically discoverable by others given our tracked committee ids.
func (s *Service) RefreshENR() {
// return early if discv5 isn't running
// RefreshPersistentSubnets checks that we are tracking our local persistent subnets for a variety of gossip topics.
// This routine verifies and updates our attestation and sync committee subnets if they have been rotated.
func (s *Service) RefreshPersistentSubnets() {
// Return early if discv5 service isn't running.
if s.dv5Listener == nil || !s.isInitialized() {
return
}
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
if err := initializePersistentSubnets(s.dv5Listener.LocalNode().ID(), currEpoch); err != nil {
// Get the current epoch.
currentSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
// Get our node ID.
nodeID := s.dv5Listener.LocalNode().ID()
// Get our node record.
record := s.dv5Listener.Self().Record()
// Get the version of our metadata.
metadataVersion := s.Metadata().Version()
// Initialize persistent subnets.
if err := initializePersistentSubnets(nodeID, currentEpoch); err != nil {
log.WithError(err).Error("Could not initialize persistent subnets")
return
}
// Get the current attestation subnet bitfield.
bitV := bitfield.NewBitvector64()
committees := cache.SubnetIDs.GetAllSubnets()
for _, idx := range committees {
attestationCommittees := cache.SubnetIDs.GetAllSubnets()
for _, idx := range attestationCommittees {
bitV.SetBitAt(idx, true)
}
currentBitV, err := attBitvector(s.dv5Listener.Self().Record())
// Get the attestation subnet bitfield we store in our record.
inRecordBitV, err := attBitvector(record)
if err != nil {
log.WithError(err).Error("Could not retrieve att bitfield")
return
}
// Compare current epoch with our fork epochs
// Get the attestation subnet bitfield in our metadata.
inMetadataBitV := s.Metadata().AttnetsBitfield()
// Is our attestation bitvector record up to date?
isBitVUpToDate := bytes.Equal(bitV, inRecordBitV) && bytes.Equal(bitV, inMetadataBitV)
// Compare current epoch with Altair fork epoch
altairForkEpoch := params.BeaconConfig().AltairForkEpoch
switch {
case currEpoch < altairForkEpoch:
if currentEpoch < altairForkEpoch {
// Phase 0 behaviour.
if bytes.Equal(bitV, currentBitV) {
// return early if bitfield hasn't changed
if isBitVUpToDate {
// Return early if bitfield hasn't changed.
return
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadata(bitV)
default:
// Retrieve sync subnets from application level
// cache.
bitS := bitfield.Bitvector4{byte(0x00)}
committees = cache.SyncSubnetIDs.GetAllSubnets(currEpoch)
for _, idx := range committees {
bitS.SetBitAt(idx, true)
}
currentBitS, err := syncBitvector(s.dv5Listener.Self().Record())
if err != nil {
log.WithError(err).Error("Could not retrieve sync bitfield")
return
}
if bytes.Equal(bitV, currentBitV) && bytes.Equal(bitS, currentBitS) &&
s.Metadata().Version() == version.Altair {
// return early if bitfields haven't changed
return
}
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Ping all peers.
s.pingPeersAndLogEnr()
return
}
// ping all peers to inform them of new metadata
s.pingPeers()
// Get the current sync subnet bitfield.
bitS := bitfield.Bitvector4{byte(0x00)}
syncCommittees := cache.SyncSubnetIDs.GetAllSubnets(currentEpoch)
for _, idx := range syncCommittees {
bitS.SetBitAt(idx, true)
}
// Get the sync subnet bitfield we store in our record.
inRecordBitS, err := syncBitvector(record)
if err != nil {
log.WithError(err).Error("Could not retrieve sync bitfield")
return
}
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
// Is our sync bitvector record up to date?
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Nothing to do, return early.
return
}
// Some data has changed; update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Ping all peers to inform them of new metadata
s.pingPeersAndLogEnr()
}
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
iterator := filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
const (
minLogInterval = 1 * time.Minute
thresholdLimit = 5
)
peersSummary := func(threshold uint) (uint, uint) {
// Retrieve how many active peers we have.
activePeers := s.Peers().Active()
activePeerCount := uint(len(activePeers))
// Compute how many peers we are missing to reach the threshold.
if activePeerCount >= threshold {
return activePeerCount, 0
}
missingPeerCount := threshold - activePeerCount
return activePeerCount, missingPeerCount
}
var lastLogTime time.Time
iterator := s.dv5Listener.RandomNodes()
defer iterator.Close()
connectivityTicker := time.NewTicker(1 * time.Minute)
thresholdCount := 0
@@ -203,25 +267,31 @@ func (s *Service) listenForNewNodes() {
select {
case <-s.ctx.Done():
return
case <-connectivityTicker.C:
// Skip the connectivity check if not enabled.
if !features.Get().EnableDiscoveryReboot {
continue
}
if !s.isBelowOutboundPeerThreshold() {
// Reset counter if we are beyond the threshold
thresholdCount = 0
continue
}
thresholdCount++
// Reboot listener if connectivity drops
if thresholdCount > 5 {
log.WithField("outboundConnectionCount", len(s.peers.OutboundConnected())).Warn("Rebooting discovery listener, reached threshold.")
if thresholdCount > thresholdLimit {
outBoundConnectedCount := len(s.peers.OutboundConnected())
log.WithField("outboundConnectionCount", outBoundConnectedCount).Warn("Rebooting discovery listener, reached threshold.")
if err := s.dv5Listener.RebootListener(); err != nil {
log.WithError(err).Error("Could not reboot listener")
continue
}
iterator = filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
iterator = s.dv5Listener.RandomNodes()
thresholdCount = 0
}
default:
@@ -232,17 +302,35 @@ func (s *Service) listenForNewNodes() {
time.Sleep(pollingPeriod)
continue
}
wantedCount := s.wantedPeerDials()
if wantedCount == 0 {
// Compute the number of new peers we want to dial.
activePeerCount, missingPeerCount := peersSummary(s.cfg.MaxPeers)
fields := logrus.Fields{
"currentPeerCount": activePeerCount,
"targetPeerCount": s.cfg.MaxPeers,
}
if missingPeerCount == 0 {
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
if time.Since(lastLogTime) > minLogInterval {
lastLogTime = time.Now()
log.WithFields(fields).Debug("Searching for new active peers")
}
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
wantedCount = min(wantedCount, flags.Get().MaxConcurrentDials)
maxConcurrentDials := uint(flags.Get().MaxConcurrentDials)
missingPeerCount = min(missingPeerCount, maxConcurrentDials)
}
wantedNodes := enode.ReadNodes(iterator, wantedCount)
// Search for new peers.
wantedNodes := searchForPeers(iterator, batchSize, missingPeerCount, s.filterPeer)
wg := new(sync.WaitGroup)
for i := 0; i < len(wantedNodes); i++ {
node := wantedNodes[i]
@@ -452,12 +540,14 @@ func (s *Service) filterPeer(node *enode.Node) bool {
}
// Ignore bad nodes.
if s.peers.IsBad(peerData.ID) {
if s.peers.IsBad(peerData.ID) != nil {
return false
}
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
// Continuously update the ENR for known peers.
s.peers.UpdateENR(node.Record(), peerData.ID)
return false
}
@@ -526,17 +616,6 @@ func (s *Service) isBelowOutboundPeerThreshold() bool {
return outBoundCount < outBoundThreshold
}
func (s *Service) wantedPeerDials() int {
maxPeers := int(s.cfg.MaxPeers)
activePeers := len(s.Peers().Active())
wantedCount := 0
if maxPeers > activePeers {
wantedCount = maxPeers - activePeers
}
return wantedCount
}
// PeersFromStringAddrs converts peer raw ENRs into multiaddrs for p2p.
func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
var allAddrs []ma.Multiaddr

View File
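As an aside on the discovery changes above: RefreshPersistentSubnets derives the current epoch from the node's genesis time before touching any subnet state. A runnable worked example of that computation, assuming mainnet timing (12-second slots, 32 slots per epoch):

package main

import (
	"fmt"
	"time"

	"github.com/prysmaticlabs/prysm/v5/time/slots"
)

func main() {
	// Pretend genesis happened ~13 minutes ago: 65 slots, i.e. epoch 2.
	genesis := time.Now().Add(-13 * time.Minute)
	currentSlot := slots.CurrentSlot(uint64(genesis.Unix()))
	fmt.Println("slot:", currentSlot, "epoch:", slots.ToEpoch(currentSlot))
}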

@@ -16,6 +16,8 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
@@ -30,13 +32,12 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -131,6 +132,10 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.Eip7594ForkEpoch = 1
params.OverrideBeaconConfig(cfg)
testCases := []struct {
name string
cfg *Config
@@ -378,14 +383,14 @@ func TestInboundPeerLimit(t *testing.T) {
}
for i := 0; i < 30; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(false), "not at limit for outbound peers")
require.Equal(t, false, s.isPeerAtLimit(true), "at limit for inbound peers")
for i := 0; i < highWatermarkBuffer; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(true), "not at limit for inbound peers")
@@ -404,13 +409,13 @@ func TestOutboundPeerThreshold(t *testing.T) {
}
for i := 0; i < 2; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, true, s.isBelowOutboundPeerThreshold(), "not at outbound peer threshold")
for i := 0; i < 3; i++ {
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, false, s.isBelowOutboundPeerThreshold(), "still at outbound peer threshold")
@@ -477,7 +482,7 @@ func TestCorrectUDPVersion(t *testing.T) {
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState, outbound bool) peer.ID {
func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState, outbound bool) peer.ID {
// Set up some peers with different states
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
@@ -499,192 +504,270 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState,
return id
}
func TestRefreshENR_ForkBoundaries(t *testing.T) {
func createAndConnectPeer(t *testing.T, p2pService *testp2p.TestP2P, offset int) {
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(offset + i)
}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
// Create the peer.
peer := testp2p.NewTestP2P(t, libp2p.Identity(privateKey))
// Add the peer and connect it.
p2pService.Peers().Add(&enr.Record{}, peer.PeerID(), nil, network.DirOutbound)
p2pService.Peers().SetConnectionState(peer.PeerID(), peers.Connected)
p2pService.Connect(peer)
}
// Define the ping count.
var actualPingCount int
type check struct {
pingCount int
metadataSequenceNumber uint64
attestationSubnets []uint64
syncSubnets []uint64
custodySubnetCount *uint64
}
func checkPingCountCacheMetadataRecord(
t *testing.T,
service *Service,
expected check,
) {
// Check the ping count.
require.Equal(t, expected.pingCount, actualPingCount)
// Check the attestation subnets in the cache.
actualAttestationSubnets := cache.SubnetIDs.GetAllSubnets()
require.DeepSSZEqual(t, expected.attestationSubnets, actualAttestationSubnets)
// Check the metadata sequence number.
actualMetadataSequenceNumber := service.metaData.SequenceNumber()
require.Equal(t, expected.metadataSequenceNumber, actualMetadataSequenceNumber)
// Compute expected attestation subnets bits.
expectedBitV := bitfield.NewBitvector64()
exists := false
for _, idx := range expected.attestationSubnets {
exists = true
expectedBitV.SetBitAt(idx, true)
}
// Check attnets in ENR.
var actualBitVENR bitfield.Bitvector64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(attSubnetEnrKey, &actualBitVENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitV, actualBitVENR)
// Check attnets in metadata.
if !exists {
expectedBitV = nil
}
actualBitVMetadata := service.metaData.AttnetsBitfield()
require.DeepSSZEqual(t, expectedBitV, actualBitVMetadata)
if expected.syncSubnets != nil {
// Compute expected sync subnets bits.
expectedBitS := bitfield.NewBitvector4()
exists = false
for _, idx := range expected.syncSubnets {
exists = true
expectedBitS.SetBitAt(idx, true)
}
// Check syncnets in ENR.
var actualBitSENR bitfield.Bitvector4
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, &actualBitSENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitS, actualBitSENR)
// Check syncnets in metadata.
if !exists {
expectedBitS = nil
}
actualBitSMetadata := service.metaData.SyncnetsBitfield()
require.DeepSSZEqual(t, expectedBitS, actualBitSMetadata)
}
}
func TestRefreshPersistentSubnets(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Clean up caches after usage.
defer cache.SubnetIDs.EmptyAllCaches()
defer cache.SyncSubnetIDs.EmptyAllCaches()
tests := []struct {
name string
svcBuilder func(t *testing.T) *Service
postValidation func(t *testing.T, s *Service)
const (
altairForkEpoch = 5
eip7594ForkEpoch = 10
)
// Set up epochs.
defaultCfg := params.BeaconConfig()
cfg := defaultCfg.Copy()
cfg.AltairForkEpoch = altairForkEpoch
cfg.Eip7594ForkEpoch = eip7594ForkEpoch
params.OverrideBeaconConfig(cfg)
// Compute the number of seconds per epoch.
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
secondsPerEpoch := secondsPerSlot * uint64(slotsPerEpoch)
testCases := []struct {
name string
epochSinceGenesis uint64
checks []check
}{
{
name: "metadata no change",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
return s
},
postValidation: func(t *testing.T, s *Service) {
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
subs, err := computeSubscribedSubnets(s.dv5Listener.LocalNode().ID(), currEpoch)
assert.NoError(t, err)
bitV := bitfield.NewBitvector64()
for _, idx := range subs {
bitV.SetBitAt(idx, true)
}
assert.DeepEqual(t, bitV, s.metaData.AttnetsBitfield())
name: "Phase0",
epochSinceGenesis: 0,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
},
},
{
name: "metadata updated",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated at fork epoch",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-5 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x00}, s.metaData.MetadataObjV1().Syncnets)
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated at fork epoch with no bitfield",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-5 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x00}, s.metaData.MetadataObjV1().Syncnets)
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
subs, err := computeSubscribedSubnets(s.dv5Listener.LocalNode().ID(), currEpoch)
assert.NoError(t, err)
bitV := bitfield.NewBitvector64()
for _, idx := range subs {
bitV.SetBitAt(idx, true)
}
assert.DeepEqual(t, bitV, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated past fork epoch with bitfields",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-6 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte{'A'}, 0, []uint64{0, 1}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x03}, s.metaData.MetadataObjV1().Syncnets)
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
name: "Altair",
epochSinceGenesis: altairForkEpoch,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
syncSubnets: nil,
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcBuilder(t)
s.RefreshENR()
tt.postValidation(t, s)
s.dv5Listener.Close()
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const peerOffset = 1
// Initialize the ping count.
actualPingCount = 0
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
// Create a p2p service.
p2p := testp2p.NewTestP2P(t)
// Create and connect a peer.
createAndConnectPeer(t, p2p, peerOffset)
// Create a service.
service := &Service{
pingMethod: func(_ context.Context, _ peer.ID) error {
actualPingCount++
return nil
},
cfg: &Config{UDPPort: 2000},
peers: p2p.Peers(),
genesisTime: time.Now().Add(-time.Duration(tc.epochSinceGenesis*secondsPerEpoch) * time.Second),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
// Set the listener and the metadata.
createListener := func() (*discover.UDPv5, error) {
return service.createListener(nil, privateKey)
}
listener, err := newListener(createListener)
require.NoError(t, err)
service.dv5Listener = listener
service.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[0])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[1])
// Add a sync committee subnet.
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte{'a'}, altairForkEpoch, []uint64{1, 2}, 1*time.Hour)
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[2])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[3])
// Clean the test.
service.dv5Listener.Close()
cache.SubnetIDs.EmptyAllCaches()
cache.SyncSubnetIDs.EmptyAllCaches()
})
}
// Reset the config.
params.OverrideBeaconConfig(defaultCfg)
}
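The expected attestation subnets in the checks above (40 and 41) translate into specific bits of the attnets Bitvector64, which is how both the ENR entry and the metadata encode them. A runnable worked example of that mapping:

package main

import (
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

func main() {
	// Subnets 40 and 41 land in byte 5 (40/8) at bit offsets 0 and 1.
	bitV := bitfield.NewBitvector64()
	for _, idx := range []uint64{40, 41} {
		bitV.SetBitAt(idx, true)
	}
	fmt.Printf("%#x\n", []byte(bitV)) // prints 0x0000000000030000
}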

View File

@@ -18,6 +18,7 @@ var _ NetworkEncoding = (*SszNetworkEncoder)(nil)
// MaxGossipSize allowed for gossip messages.
var MaxGossipSize = params.BeaconConfig().GossipMaxSize // 10 MiB.
var MaxChunkSize = params.BeaconConfig().MaxChunkSize // 10 MiB.
var MaxUncompressedPayloadSize = 2 * MaxGossipSize // 20 MiB.
// This pool defines the sync pool for our buffered snappy writers, so that they
// can be constantly reused.
@@ -43,8 +44,8 @@ func (_ SszNetworkEncoder) EncodeGossip(w io.Writer, msg fastssz.Marshaler) (int
if err != nil {
return 0, err
}
if uint64(len(b)) > MaxGossipSize {
return 0, errors.Errorf("gossip message exceeds max gossip size: %d bytes > %d bytes", len(b), MaxGossipSize)
if uint64(len(b)) > MaxUncompressedPayloadSize {
return 0, errors.Errorf("gossip message exceeds max gossip size: %d bytes > %d bytes", len(b), MaxUncompressedPayloadSize)
}
b = snappy.Encode(nil /*dst*/, b)
return w.Write(b)
@@ -81,7 +82,7 @@ func doDecode(b []byte, to fastssz.Unmarshaler) error {
// DecodeGossip decodes the bytes to the protobuf gossip message provided.
func (_ SszNetworkEncoder) DecodeGossip(b []byte, to fastssz.Unmarshaler) error {
b, err := DecodeSnappy(b, MaxGossipSize)
b, err := DecodeSnappy(b, MaxUncompressedPayloadSize)
if err != nil {
return err
}
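The new bound separates the on-wire limit from the pre-compression limit: a payload may be up to twice MaxGossipSize before snappy framing is applied. A hedged standalone sketch of the encode-side guard, with illustrative constants standing in for the params values:

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy"
)

const (
	maxGossipSize          = 10 << 20          // on-wire limit, 10 MiB
	maxUncompressedPayload = 2 * maxGossipSize // pre-compression limit, 20 MiB
)

// encodeGossipSketch mirrors EncodeGossip's guard: reject the uncompressed
// payload if it exceeds the doubled bound, then write the snappy block.
func encodeGossipSketch(w *bytes.Buffer, payload []byte) (int, error) {
	if len(payload) > maxUncompressedPayload {
		return 0, fmt.Errorf("gossip message exceeds max gossip size: %d bytes > %d bytes", len(payload), maxUncompressedPayload)
	}
	return w.Write(snappy.Encode(nil /*dst*/, payload))
}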

View File

@@ -555,7 +555,7 @@ func TestSszNetworkEncoder_FailsSnappyLength(t *testing.T) {
e := &encoder.SszNetworkEncoder{}
att := &ethpb.Fork{}
data := make([]byte, 32)
binary.PutUvarint(data, encoder.MaxGossipSize+32)
binary.PutUvarint(data, encoder.MaxUncompressedPayloadSize+32)
err := e.DecodeGossip(data, att)
require.ErrorContains(t, "snappy message exceeds max size", err)
}

View File

@@ -2,7 +2,6 @@ package p2p
import (
"context"
"errors"
"fmt"
"io"
"sync"
@@ -10,6 +9,7 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
@@ -25,6 +25,46 @@ func peerMultiaddrString(conn network.Conn) string {
return fmt.Sprintf("%s/p2p/%s", conn.RemoteMultiaddr().String(), conn.RemotePeer().String())
}
func (s *Service) connectToPeer(conn network.Conn) {
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connected)
// Go through the handshake process.
log.WithFields(logrus.Fields{
"direction": conn.Stat().Direction.String(),
"multiAddr": peerMultiaddrString(conn),
"activePeers": len(s.peers.Active()),
}).Debug("Initiate peer connection")
}
func (s *Service) disconnectFromPeerOnError(
conn network.Conn,
goodByeFunc func(ctx context.Context, id peer.ID) error,
badPeerErr error,
) {
// Get the remote peer ID.
remotePeerID := conn.RemotePeer()
// Set the peer to disconnecting state.
s.peers.SetConnectionState(remotePeerID, peers.Disconnecting)
// Only attempt a goodbye if we are still connected to the peer.
if s.host.Network().Connectedness(remotePeerID) == network.Connected {
if err := goodByeFunc(context.TODO(), remotePeerID); err != nil {
log.WithError(err).Error("Unable to disconnect from peer")
}
}
log.
WithError(badPeerErr).
WithFields(logrus.Fields{
"multiaddr": peerMultiaddrString(conn),
"direction": conn.Stat().Direction.String(),
"remainingActivePeers": len(s.peers.Active()),
}).
Debug("Initiate peer disconnection")
s.peers.SetConnectionState(remotePeerID, peers.Disconnected)
}
// AddConnectionHandler adds a callback function which handles the connection with a
// newly added peer. It performs a handshake with that peer by sending a hello request
// and validating the response from the peer.
@@ -57,18 +97,9 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
}
s.host.Network().Notify(&network.NotifyBundle{
ConnectedF: func(net network.Network, conn network.Conn) {
ConnectedF: func(_ network.Network, conn network.Conn) {
remotePeer := conn.RemotePeer()
disconnectFromPeer := func() {
s.peers.SetConnectionState(remotePeer, peers.PeerDisconnecting)
// Only attempt a goodbye if we are still connected to the peer.
if s.host.Network().Connectedness(remotePeer) == network.Connected {
if err := goodByeFunc(context.TODO(), remotePeer); err != nil {
log.WithError(err).Error("Unable to disconnect from peer")
}
}
s.peers.SetConnectionState(remotePeer, peers.PeerDisconnected)
}
// Connection handler must be non-blocking as part of libp2p design.
go func() {
if peerHandshaking(remotePeer) {
@@ -77,28 +108,21 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
return
}
defer peerFinished(remotePeer)
// Handle the various pre-existing conditions that will result in us not handshaking.
peerConnectionState, err := s.peers.ConnectionState(remotePeer)
if err == nil && (peerConnectionState == peers.PeerConnected || peerConnectionState == peers.PeerConnecting) {
if err == nil && (peerConnectionState == peers.Connected || peerConnectionState == peers.Connecting) {
log.WithField("currentState", peerConnectionState).WithField("reason", "already active").Trace("Ignoring connection request")
return
}
s.peers.Add(nil /* ENR */, remotePeer, conn.RemoteMultiaddr(), conn.Stat().Direction)
// Defensive check in the event we still get a bad peer.
if s.peers.IsBad(remotePeer) {
log.WithField("reason", "bad peer").Trace("Ignoring connection request")
disconnectFromPeer()
if err := s.peers.IsBad(remotePeer); err != nil {
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
return
}
validPeerConnection := func() {
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnected)
// Go through the handshake process.
log.WithFields(logrus.Fields{
"direction": conn.Stat().Direction,
"multiAddr": peerMultiaddrString(conn),
"activePeers": len(s.peers.Active()),
}).Debug("Peer connected")
}
// Do not perform handshake on inbound dials.
if conn.Stat().Direction == network.DirInbound {
@@ -117,63 +141,80 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
// If peer hasn't sent a status request, we disconnect with them
if _, err := s.peers.ChainState(remotePeer); errors.Is(err, peerdata.ErrPeerUnknown) || errors.Is(err, peerdata.ErrNoPeerStatus) {
statusMessageMissing.Inc()
disconnectFromPeer()
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state"))
return
}
if peerExists {
updated, err := s.peers.ChainStateLastUpdated(remotePeer)
if err != nil {
disconnectFromPeer()
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state last updated"))
return
}
// exit if we don't receive any current status messages from
// peer.
if updated.IsZero() || !updated.After(currentTime) {
disconnectFromPeer()
// Exit if we don't receive any current status messages from peer.
if updated.IsZero() {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("is zero"))
return
}
if updated.Before(currentTime) {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("did not update"))
return
}
}
validPeerConnection()
s.connectToPeer(conn)
return
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && !errors.Is(err, io.EOF) {
log.WithError(err).Trace("Handshake failed")
disconnectFromPeer()
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
return
}
validPeerConnection()
s.connectToPeer(conn)
}()
},
})
}
// AddDisconnectionHandler disconnects from peers. It handles updating the peer status.
// This also calls the handler responsible for maintaining other parts of the sync or p2p system.
func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id peer.ID) error) {
s.host.Network().Notify(&network.NotifyBundle{
DisconnectedF: func(net network.Network, conn network.Conn) {
log := log.WithField("multiAddr", peerMultiaddrString(conn))
peerID := conn.RemotePeer()
log := log.WithFields(logrus.Fields{
"multiAddr": peerMultiaddrString(conn),
"direction": conn.Stat().Direction.String(),
})
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
// Exit early if we are still connected to the peer.
if net.Connectedness(conn.RemotePeer()) == network.Connected {
if net.Connectedness(peerID) == network.Connected {
return
}
priorState, err := s.peers.ConnectionState(conn.RemotePeer())
priorState, err := s.peers.ConnectionState(peerID)
if err != nil {
// Can happen if the peer has already disconnected, so...
priorState = peers.PeerDisconnected
priorState = peers.Disconnected
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnecting)
s.peers.SetConnectionState(peerID, peers.Disconnecting)
if err := handler(context.TODO(), conn.RemotePeer()); err != nil {
log.WithError(err).Error("Disconnect handler failed")
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
s.peers.SetConnectionState(peerID, peers.Disconnected)
// Only log disconnections if we were fully connected.
if priorState == peers.PeerConnected {
log.WithField("activePeers", len(s.peers.Active())).Debug("Peer disconnected")
if priorState == peers.Connected {
activePeersCount := len(s.peers.Active())
log.WithField("remainingActivePeers", activePeersCount).Debug("Peer disconnected")
}
}()
},

View File

@@ -82,7 +82,7 @@ type PeerManager interface {
Host() host.Host
ENR() *enr.Record
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshENR()
RefreshPersistentSubnets()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}

View File

@@ -1,36 +0,0 @@
package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
)
// filterNodes wraps an iterator such that Next only returns nodes for which
// the 'check' function returns true. This custom implementation also
// checks for context deadlines so that in the event the parent context has
// expired, we do exit from the search rather than perform more network
// lookups for additional peers.
func filterNodes(ctx context.Context, it enode.Iterator, check func(*enode.Node) bool) enode.Iterator {
return &filterIter{ctx, it, check}
}
type filterIter struct {
context.Context
enode.Iterator
check func(*enode.Node) bool
}
// Next looks up for the next valid node according to our
// filter criteria.
func (f *filterIter) Next() bool {
for f.Iterator.Next() {
if f.Context.Err() != nil {
return false
}
if f.check(f.Node()) {
return true
}
}
return false
}
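With this file deleted, nothing wraps the discv5 iterator anymore; filtering moves inline into the searchForPeers helper referenced in the discovery changes above. A hypothetical sketch of such a helper, preserving the old context-cancellation behavior without the wrapper type:

import (
	"context"

	"github.com/ethereum/go-ethereum/p2p/enode"
)

// searchForPeersSketch pulls nodes straight from the iterator, keeps those
// that pass the filter, and stops at the batch size, the wanted count, the
// iterator's exhaustion, or context cancellation. Unlike enode.ReadNodes,
// it never blocks trying to fill a fixed-size batch on a small network.
func searchForPeersSketch(ctx context.Context, it enode.Iterator, batchSize, wanted uint, filter func(*enode.Node) bool) []*enode.Node {
	nodes := make([]*enode.Node, 0, wanted)
	for i := uint(0); i < batchSize && uint(len(nodes)) < wanted && it.Next(); i++ {
		if ctx.Err() != nil {
			break
		}
		if node := it.Node(); filter(node) {
			nodes = append(nodes, node)
		}
	}
	return nodes
}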

View File

@@ -23,8 +23,8 @@ var (
ErrNoPeerStatus = errors.New("no chain status for peer")
)
// PeerConnectionState is the state of the connection.
type PeerConnectionState ethpb.ConnectionState
// ConnectionState is the state of the connection.
type ConnectionState ethpb.ConnectionState
// StoreConfig holds peer store parameters.
type StoreConfig struct {
@@ -49,7 +49,7 @@ type PeerData struct {
// Network related data.
Address ma.Multiaddr
Direction network.Direction
ConnState PeerConnectionState
ConnState ConnectionState
Enr *enr.Record
NextValidTime time.Time
// Chain related data.


@@ -20,6 +20,7 @@ go_library(
"//crypto/rand:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -4,6 +4,7 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
)
@@ -61,7 +62,7 @@ func (s *BadResponsesScorer) Score(pid peer.ID) float64 {
// scoreNoLock is a lock-free version of Score.
func (s *BadResponsesScorer) scoreNoLock(pid peer.ID) float64 {
if s.isBadPeerNoLock(pid) {
if s.isBadPeerNoLock(pid) != nil {
return BadPeerScore
}
score := float64(0)
@@ -116,18 +117,24 @@ func (s *BadResponsesScorer) Increment(pid peer.ID) {
// IsBadPeer states if the peer is to be considered bad.
// If the peer is unknown this will return `nil`, so an unknown peer is treated as good.
func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) bool {
func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) error {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is lock-free version of IsBadPeer.
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) bool {
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) error {
if peerData, ok := s.store.PeerData(pid); ok {
return peerData.BadResponses >= s.config.Threshold
if peerData.BadResponses >= s.config.Threshold {
return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
}
return nil
}
return false
return nil
}
// BadPeers returns the peers that are considered bad.
@@ -137,7 +144,7 @@ func (s *BadResponsesScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) {
if s.isBadPeerNoLock(pid) != nil {
badPeers = append(badPeers, pid)
}
}
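
The bool-to-error change ripples through every call site. A minimal sketch of the caller-side migration, assuming only the new signature (disconnect is a hypothetical helper):

// Before: if scorer.IsBadPeer(pid) { disconnect(pid) }
// After: the non-nil error carries the reason and can be logged.
if err := scorer.IsBadPeer(pid); err != nil {
	log.WithError(err).WithField("peer", pid).Debug("Disconnecting from bad peer")
	disconnect(pid) // hypothetical helper
}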


@@ -33,19 +33,19 @@ func TestScorers_BadResponses_Score(t *testing.T) {
assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
scorer.Increment(pid)
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, -2.5, scorer.Score(pid))
scorer.Increment(pid)
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-5), scorer.Score(pid))
scorer.Increment(pid)
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-7.5), scorer.Score(pid))
scorer.Increment(pid)
assert.Equal(t, true, scorer.IsBadPeer(pid))
assert.NotNil(t, scorer.IsBadPeer(pid))
assert.Equal(t, -100.0, scorer.Score(pid))
}
@@ -152,17 +152,17 @@ func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pid := peer.ID("peer1")
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.NoError(t, scorer.IsBadPeer(pid))
peerStatuses.Add(nil, pid, nil, network.DirUnknown)
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.NoError(t, scorer.IsBadPeer(pid))
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pid)
if i == scorers.DefaultBadResponsesThreshold-1 {
assert.Equal(t, true, scorer.IsBadPeer(pid), "Unexpected peer status")
assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
} else {
assert.Equal(t, false, scorer.IsBadPeer(pid), "Unexpected peer status")
assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
}
}
}
@@ -185,11 +185,11 @@ func TestScorers_BadResponses_BadPeers(t *testing.T) {
scorer.Increment(pids[2])
scorer.Increment(pids[4])
}
assert.Equal(t, false, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.Equal(t, false, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[4]), "Invalid peer status")
assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
want := []peer.ID{pids[1], pids[2], pids[4]}
badPeers := scorer.BadPeers()
sort.Slice(badPeers, func(i, j int) bool {


@@ -177,8 +177,8 @@ func (s *BlockProviderScorer) processedBlocksNoLock(pid peer.ID) uint64 {
// Block provider scorer cannot guarantee that lower score of a peer is indeed a sign of a bad peer.
// Therefore this scorer never marks peers as bad, and relies on scores to probabilistically sort
// out low-scorers (see WeightSorted method).
func (*BlockProviderScorer) IsBadPeer(_ peer.ID) bool {
return false
func (*BlockProviderScorer) IsBadPeer(_ peer.ID) error {
return nil
}
// BadPeers returns the peers that are considered bad.


@@ -119,7 +119,7 @@ func TestScorers_BlockProvider_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Run(tt.name, func(*testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
@@ -224,7 +224,7 @@ func TestScorers_BlockProvider_Sorted(t *testing.T) {
}{
{
name: "no peers",
update: func(s *scorers.BlockProviderScorer) {},
update: func(*scorers.BlockProviderScorer) {},
have: []peer.ID{},
want: []peer.ID{},
},
@@ -451,7 +451,7 @@ func TestScorers_BlockProvider_FormatScorePretty(t *testing.T) {
})
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Run(tt.name, func(*testing.T) {
peerStatuses := peerStatusGen()
scorer := peerStatuses.Scorers().BlockProviderScorer()
if tt.update != nil {
@@ -481,8 +481,8 @@ func TestScorers_BlockProvider_BadPeerMarking(t *testing.T) {
})
scorer := peerStatuses.Scorers().BlockProviderScorer()
assert.Equal(t, false, scorer.IsBadPeer("peer1"), "Unexpected status for unregistered peer")
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected status for unregistered peer")
scorer.IncrementProcessedBlocks("peer1", 64)
assert.Equal(t, false, scorer.IsBadPeer("peer1"))
assert.NoError(t, scorer.IsBadPeer("peer1"))
assert.Equal(t, 0, len(scorer.BadPeers()))
}


@@ -2,6 +2,7 @@ package scorers
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
pbrpc "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -51,19 +52,24 @@ func (s *GossipScorer) scoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer states if the peer is to be considered bad.
func (s *GossipScorer) IsBadPeer(pid peer.ID) bool {
func (s *GossipScorer) IsBadPeer(pid peer.ID) error {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is lock-free version of IsBadPeer.
func (s *GossipScorer) isBadPeerNoLock(pid peer.ID) bool {
func (s *GossipScorer) isBadPeerNoLock(pid peer.ID) error {
peerData, ok := s.store.PeerData(pid)
if !ok {
return false
return nil
}
return peerData.GossipScore < gossipThreshold
if peerData.GossipScore < gossipThreshold {
return errors.Errorf("gossip score below threshold: got %f - threshold %f", peerData.GossipScore, gossipThreshold)
}
return nil
}
// BadPeers returns the peers that are considered bad.
@@ -73,7 +79,7 @@ func (s *GossipScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) {
if s.isBadPeerNoLock(pid) != nil {
badPeers = append(badPeers, pid)
}
}


@@ -21,7 +21,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
}{
{
name: "nonexistent peer",
update: func(scorer *scorers.GossipScorer) {
update: func(*scorers.GossipScorer) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 0.0, scorer.Score("peer1"), "Unexpected score")
@@ -34,7 +34,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, -101.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, true, scorer.IsBadPeer("peer1"), "Unexpected good peer")
assert.NotNil(t, scorer.IsBadPeer("peer1"), "Unexpected good peer")
},
},
{
@@ -44,7 +44,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, false, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")
@@ -53,7 +53,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Run(tt.name, func(*testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})


@@ -46,7 +46,7 @@ func (s *PeerStatusScorer) Score(pid peer.ID) float64 {
// scoreNoLock is a lock-free version of Score.
func (s *PeerStatusScorer) scoreNoLock(pid peer.ID) float64 {
if s.isBadPeerNoLock(pid) {
if s.isBadPeerNoLock(pid) != nil {
return BadPeerScore
}
score := float64(0)
@@ -67,30 +67,34 @@ func (s *PeerStatusScorer) scoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer states if the peer is to be considered bad.
func (s *PeerStatusScorer) IsBadPeer(pid peer.ID) bool {
func (s *PeerStatusScorer) IsBadPeer(pid peer.ID) error {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is lock-free version of IsBadPeer.
func (s *PeerStatusScorer) isBadPeerNoLock(pid peer.ID) bool {
func (s *PeerStatusScorer) isBadPeerNoLock(pid peer.ID) error {
peerData, ok := s.store.PeerData(pid)
if !ok {
return false
return nil
}
// Mark the peer as bad if the latest error is one of the terminal ones.
terminalErrs := []error{
p2ptypes.ErrWrongForkDigestVersion,
p2ptypes.ErrInvalidFinalizedRoot,
p2ptypes.ErrInvalidRequest,
}
for _, err := range terminalErrs {
if errors.Is(peerData.ChainStateValidationError, err) {
return true
return err
}
}
return false
return nil
}
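
Because the matched terminal error itself is returned, callers can branch on the reason with errors.Is; a brief sketch under that assumption (scorer and pid are placeholders):

if err := scorer.IsBadPeer(pid); errors.Is(err, p2ptypes.ErrWrongForkDigestVersion) {
	// The peer follows a different fork; redialing it is pointless.
}

This keeps working through the pkg/errors wrapping added at the service level below, since those wrappers implement Unwrap.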
// BadPeers returns the peers that are considered bad.
@@ -100,7 +104,7 @@ func (s *PeerStatusScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) {
if s.isBadPeerNoLock(pid) != nil {
badPeers = append(badPeers, pid)
}
}


@@ -122,7 +122,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Run(tt.name, func(*testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
@@ -140,12 +140,12 @@ func TestScorers_PeerStatus_IsBadPeer(t *testing.T) {
ScorerParams: &scorers.Config{},
})
pid := peer.ID("peer1")
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
}
func TestScorers_PeerStatus_BadPeers(t *testing.T) {
@@ -155,22 +155,22 @@ func TestScorers_PeerStatus_BadPeers(t *testing.T) {
pid1 := peer.ID("peer1")
pid2 := peer.ID("peer2")
pid3 := peer.ID("peer3")
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid3))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid3))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid1, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid2, &pb.Status{}, nil)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid3, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid1))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid3))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid1))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid3))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.Equal(t, 2, len(peerStatuses.Scorers().PeerStatusScorer().BadPeers()))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}


@@ -6,6 +6,7 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/config/features"
)
@@ -24,7 +25,7 @@ const BadPeerScore = gossipThreshold
// Scorer defines minimum set of methods every peer scorer must expose.
type Scorer interface {
Score(pid peer.ID) float64
IsBadPeer(pid peer.ID) bool
IsBadPeer(pid peer.ID) error
BadPeers() []peer.ID
}
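
For orientation, the smallest type satisfying the updated interface (an illustrative sketch; the BlockProviderScorer change earlier in this diff is the real never-bad example):

type noopScorer struct{}

var _ Scorer = noopScorer{}

func (noopScorer) Score(peer.ID) float64   { return 0 }
func (noopScorer) IsBadPeer(peer.ID) error { return nil }
func (noopScorer) BadPeers() []peer.ID     { return nil }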
@@ -124,26 +125,29 @@ func (s *Service) ScoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer traverses all the scorers to see if any of them classifies peer as bad.
func (s *Service) IsBadPeer(pid peer.ID) bool {
func (s *Service) IsBadPeer(pid peer.ID) error {
s.store.RLock()
defer s.store.RUnlock()
return s.IsBadPeerNoLock(pid)
}
// IsBadPeerNoLock is a lock-free version of IsBadPeer.
func (s *Service) IsBadPeerNoLock(pid peer.ID) bool {
if s.scorers.badResponsesScorer.isBadPeerNoLock(pid) {
return true
func (s *Service) IsBadPeerNoLock(pid peer.ID) error {
if err := s.scorers.badResponsesScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "bad responses scorer")
}
if s.scorers.peerStatusScorer.isBadPeerNoLock(pid) {
return true
if err := s.scorers.peerStatusScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "peer status scorer")
}
if features.Get().EnablePeerScorer {
if s.scorers.gossipScorer.isBadPeerNoLock(pid) {
return true
if err := s.scorers.gossipScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "gossip scorer")
}
}
return false
return nil
}
// BadPeers returns the peers that are considered bad by any of registered scorers.
@@ -153,7 +157,7 @@ func (s *Service) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.IsBadPeerNoLock(pid) {
if s.IsBadPeerNoLock(pid) != nil {
badPeers = append(badPeers, pid)
}
}


@@ -100,7 +100,7 @@ func TestScorers_Service_Score(t *testing.T) {
return scores
}
pack := func(scorer *scorers.Service, s1, s2, s3 float64) map[string]float64 {
pack := func(_ *scorers.Service, s1, s2, s3 float64) map[string]float64 {
return map[string]float64{
"peer1": roundScore(s1),
"peer2": roundScore(s2),
@@ -237,7 +237,7 @@ func TestScorers_Service_loop(t *testing.T) {
for i := 0; i < s1.Params().Threshold+5; i++ {
s1.Increment(pid1)
}
assert.Equal(t, true, s1.IsBadPeer(pid1), "Peer should be marked as bad")
assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
s2.IncrementProcessedBlocks("peer1", 221)
assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
@@ -252,7 +252,7 @@ func TestScorers_Service_loop(t *testing.T) {
for {
select {
case <-ticker.C:
if s1.IsBadPeer(pid1) == false && s2.ProcessedBlocks("peer1") == 0 {
if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
return
}
case <-ctx.Done():
@@ -263,7 +263,7 @@ func TestScorers_Service_loop(t *testing.T) {
}()
<-done
assert.Equal(t, false, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
}
@@ -278,10 +278,10 @@ func TestScorers_Service_IsBadPeer(t *testing.T) {
},
})
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
}
func TestScorers_Service_BadPeers(t *testing.T) {
@@ -295,16 +295,16 @@ func TestScorers_Service_BadPeers(t *testing.T) {
},
})
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
for _, pid := range []peer.ID{"peer1", "peer3"} {
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
}
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}


@@ -34,6 +34,7 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
@@ -49,14 +50,14 @@ import (
)
const (
// PeerDisconnected means there is no connection to the peer.
PeerDisconnected peerdata.PeerConnectionState = iota
// PeerDisconnecting means there is an on-going attempt to disconnect from the peer.
PeerDisconnecting
// PeerConnected means the peer has an active connection.
PeerConnected
// PeerConnecting means there is an on-going attempt to connect to the peer.
PeerConnecting
// Disconnected means there is no connection to the peer.
Disconnected peerdata.ConnectionState = iota
// Disconnecting means there is an on-going attempt to disconnect from the peer.
Disconnecting
// Connected means the peer has an active connection.
Connected
// Connecting means there is an on-going attempt to connect to the peer.
Connecting
)
const (
@@ -117,6 +118,15 @@ func NewStatus(ctx context.Context, config *StatusConfig) *Status {
}
}
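// UpdateENR updates the ENR record stored for the given peer; unknown peers are ignored.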
func (p *Status) UpdateENR(record *enr.Record, pid peer.ID) {
p.store.Lock()
defer p.store.Unlock()
if peerData, ok := p.store.PeerData(pid); ok {
peerData.Enr = record
}
}
// Scorers exposes peer scoring management service.
func (p *Status) Scorers() *scorers.Service {
return p.scorers
@@ -150,7 +160,7 @@ func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, dire
Address: address,
Direction: direction,
// Peers start disconnected; state will be updated when the handshake process begins.
ConnState: PeerDisconnected,
ConnState: Disconnected,
}
if record != nil {
peerData.Enr = record
@@ -212,7 +222,7 @@ func (p *Status) IsActive(pid peer.ID) bool {
defer p.store.RUnlock()
peerData, ok := p.store.PeerData(pid)
return ok && (peerData.ConnState == PeerConnected || peerData.ConnState == PeerConnecting)
return ok && (peerData.ConnState == Connected || peerData.ConnState == Connecting)
}
// IsAboveInboundLimit checks if we are above our current inbound
@@ -222,7 +232,7 @@ func (p *Status) IsAboveInboundLimit() bool {
defer p.store.RUnlock()
totalInbound := 0
for _, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected &&
if peerData.ConnState == Connected &&
peerData.Direction == network.DirInbound {
totalInbound += 1
}
@@ -286,7 +296,7 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
// look at active peers
connectedStatus := peerData.ConnState == PeerConnecting || peerData.ConnState == PeerConnected
connectedStatus := peerData.ConnState == Connecting || peerData.ConnState == Connected
if connectedStatus && peerData.MetaData != nil && !peerData.MetaData.IsNil() && peerData.MetaData.AttnetsBitfield() != nil {
indices := indicesFromBitfield(peerData.MetaData.AttnetsBitfield())
for _, idx := range indices {
@@ -301,7 +311,7 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
}
// SetConnectionState sets the connection state of the given remote peer.
func (p *Status) SetConnectionState(pid peer.ID, state peerdata.PeerConnectionState) {
func (p *Status) SetConnectionState(pid peer.ID, state peerdata.ConnectionState) {
p.store.Lock()
defer p.store.Unlock()
@@ -311,14 +321,14 @@ func (p *Status) SetConnectionState(pid peer.ID, state peerdata.PeerConnectionSt
// ConnectionState gets the connection state of the given remote peer.
// This will error if the peer does not exist.
func (p *Status) ConnectionState(pid peer.ID) (peerdata.PeerConnectionState, error) {
func (p *Status) ConnectionState(pid peer.ID) (peerdata.ConnectionState, error) {
p.store.RLock()
defer p.store.RUnlock()
if peerData, ok := p.store.PeerData(pid); ok {
return peerData.ConnState, nil
}
return PeerDisconnected, peerdata.ErrPeerUnknown
return Disconnected, peerdata.ErrPeerUnknown
}
// ChainStateLastUpdated gets the last time the chain state of the given remote peer was updated.
@@ -335,19 +345,29 @@ func (p *Status) ChainStateLastUpdated(pid peer.ID) (time.Time, error) {
// IsBad states if the peer is to be considered bad (by *any* of the registered scorers).
// If the peer is unknown this will return `nil`, so an unknown peer is treated as good.
func (p *Status) IsBad(pid peer.ID) bool {
func (p *Status) IsBad(pid peer.ID) error {
p.store.RLock()
defer p.store.RUnlock()
return p.isBad(pid)
}
// isBad is the lock-free version of IsBad.
func (p *Status) isBad(pid peer.ID) bool {
func (p *Status) isBad(pid peer.ID) error {
// Do not disconnect from trusted peers.
if p.store.IsTrustedPeer(pid) {
return false
return nil
}
return p.isfromBadIP(pid) || p.scorers.IsBadPeerNoLock(pid)
if err := p.isfromBadIP(pid); err != nil {
return errors.Wrap(err, "peer is from a bad IP")
}
if err := p.scorers.IsBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "is bad peer no lock")
}
return nil
}
// NextValidTime gets the earliest possible time it is to contact/dial
@@ -411,7 +431,7 @@ func (p *Status) Connecting() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnecting {
if peerData.ConnState == Connecting {
peers = append(peers, pid)
}
}
@@ -424,7 +444,7 @@ func (p *Status) Connected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected {
if peerData.ConnState == Connected {
peers = append(peers, pid)
}
}
@@ -450,7 +470,7 @@ func (p *Status) InboundConnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound {
if peerData.ConnState == Connected && peerData.Direction == network.DirInbound {
peers = append(peers, pid)
}
}
@@ -463,7 +483,7 @@ func (p *Status) InboundConnectedWithProtocol(protocol InternetProtocol) []peer.
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
if peerData.ConnState == Connected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
@@ -489,7 +509,7 @@ func (p *Status) OutboundConnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound {
if peerData.ConnState == Connected && peerData.Direction == network.DirOutbound {
peers = append(peers, pid)
}
}
@@ -502,7 +522,7 @@ func (p *Status) OutboundConnectedWithProtocol(protocol InternetProtocol) []peer
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
if peerData.ConnState == Connected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
@@ -515,7 +535,7 @@ func (p *Status) Active() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnecting || peerData.ConnState == PeerConnected {
if peerData.ConnState == Connecting || peerData.ConnState == Connected {
peers = append(peers, pid)
}
}
@@ -528,7 +548,7 @@ func (p *Status) Disconnecting() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerDisconnecting {
if peerData.ConnState == Disconnecting {
peers = append(peers, pid)
}
}
@@ -541,7 +561,7 @@ func (p *Status) Disconnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerDisconnected {
if peerData.ConnState == Disconnected {
peers = append(peers, pid)
}
}
@@ -554,7 +574,7 @@ func (p *Status) Inactive() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerDisconnecting || peerData.ConnState == PeerDisconnected {
if peerData.ConnState == Disconnecting || peerData.ConnState == Disconnected {
peers = append(peers, pid)
}
}
@@ -592,7 +612,7 @@ func (p *Status) Prune() {
return
}
notBadPeer := func(pid peer.ID) bool {
return !p.isBad(pid)
return p.isBad(pid) == nil
}
notTrustedPeer := func(pid peer.ID) bool {
return !p.isTrustedPeers(pid)
@@ -605,7 +625,7 @@ func (p *Status) Prune() {
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
// Should not prune a trusted peer, nor prune its peer data and unset its trusted status.
if peerData.ConnState == PeerDisconnected && notBadPeer(pid) && notTrustedPeer(pid) {
if peerData.ConnState == Disconnected && notBadPeer(pid) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
score: p.Scorers().ScoreNoLock(pid),
@@ -657,7 +677,7 @@ func (p *Status) deprecatedPrune() {
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
// Should not prune a trusted peer, nor prune its peer data and unset its trusted status.
if peerData.ConnState == PeerDisconnected && notBadPeer(peerData) && notTrustedPeer(pid) {
if peerData.ConnState == Disconnected && notBadPeer(peerData) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
badResp: peerData.BadResponses,
@@ -814,7 +834,7 @@ func (p *Status) PeersToPrune() []peer.ID {
peersToPrune := make([]*peerResp, 0)
// Select connected and inbound peers to prune.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected &&
if peerData.ConnState == Connected &&
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
@@ -880,7 +900,7 @@ func (p *Status) deprecatedPeersToPrune() []peer.ID {
peersToPrune := make([]*peerResp, 0)
// Select connected and inbound peers to prune.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == PeerConnected &&
if peerData.ConnState == Connected &&
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
@@ -982,24 +1002,28 @@ func (p *Status) isTrustedPeers(pid peer.ID) bool {
// this method assumes the store lock is acquired before
// executing the method.
func (p *Status) isfromBadIP(pid peer.ID) bool {
func (p *Status) isfromBadIP(pid peer.ID) error {
peerData, ok := p.store.PeerData(pid)
if !ok {
return false
return nil
}
if peerData.Address == nil {
return false
return nil
}
ip, err := manet.ToIP(peerData.Address)
if err != nil {
return true
return errors.Wrap(err, "to ip")
}
if val, ok := p.ipTracker[ip.String()]; ok {
if val > CollocationLimit {
return true
return errors.Errorf("collocation limit exceeded: got %d - limit %d", val, CollocationLimit)
}
}
return false
return nil
}
func (p *Status) addIpToTracker(pid peer.ID) {


@@ -215,7 +215,7 @@ func TestPeerSubscribedToSubnet(t *testing.T) {
// Add some peers with different states
numPeers := 2
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.PeerConnected)
addPeer(t, p, peers.Connected)
}
expectedPeer := p.All()[1]
bitV := bitfield.NewBitvector64()
@@ -230,7 +230,7 @@ func TestPeerSubscribedToSubnet(t *testing.T) {
}))
numPeers = 3
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.PeerDisconnected)
addPeer(t, p, peers.Disconnected)
}
ps := p.SubscribedToSubnet(2)
assert.Equal(t, 1, len(ps), "Unexpected num of peers")
@@ -259,7 +259,7 @@ func TestPeerImplicitAdd(t *testing.T) {
id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
require.NoError(t, err)
connectionState := peers.PeerConnecting
connectionState := peers.Connecting
p.SetConnectionState(id, connectionState)
resConnectionState, err := p.ConnectionState(id)
@@ -347,7 +347,7 @@ func TestPeerBadResponses(t *testing.T) {
require.NoError(t, err)
}
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
require.NoError(t, err, "Failed to create address")
@@ -358,25 +358,25 @@ func TestPeerBadResponses(t *testing.T) {
resBadResponses, err := scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
assert.Equal(t, true, p.IsBad(id), "Peer not marked as bad when it should be")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
assert.Equal(t, true, p.IsBad(id), "Peer not marked as bad when it should be")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
}
func TestAddMetaData(t *testing.T) {
@@ -393,7 +393,7 @@ func TestAddMetaData(t *testing.T) {
// Add some peers with different states
numPeers := 5
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.PeerConnected)
addPeer(t, p, peers.Connected)
}
newPeer := p.All()[2]
@@ -422,19 +422,19 @@ func TestPeerConnectionStatuses(t *testing.T) {
// Add some peers with different states
numPeersDisconnected := 11
for i := 0; i < numPeersDisconnected; i++ {
addPeer(t, p, peers.PeerDisconnected)
addPeer(t, p, peers.Disconnected)
}
numPeersConnecting := 7
for i := 0; i < numPeersConnecting; i++ {
addPeer(t, p, peers.PeerConnecting)
addPeer(t, p, peers.Connecting)
}
numPeersConnected := 43
for i := 0; i < numPeersConnected; i++ {
addPeer(t, p, peers.PeerConnected)
addPeer(t, p, peers.Connected)
}
numPeersDisconnecting := 4
for i := 0; i < numPeersDisconnecting; i++ {
addPeer(t, p, peers.PeerDisconnecting)
addPeer(t, p, peers.Disconnecting)
}
// Now confirm the states
@@ -463,7 +463,7 @@ func TestPeerValidTime(t *testing.T) {
numPeersConnected := 6
for i := 0; i < numPeersConnected; i++ {
addPeer(t, p, peers.PeerConnected)
addPeer(t, p, peers.Connected)
}
allPeers := p.All()
@@ -510,10 +510,10 @@ func TestPrune(t *testing.T) {
for i := 0; i < p.MaxPeerLimit()+100; i++ {
if i%7 == 0 {
// Peer added as disconnected.
_ = addPeer(t, p, peers.PeerDisconnected)
_ = addPeer(t, p, peers.Disconnected)
}
// Peer added to peer handler.
_ = addPeer(t, p, peers.PeerConnected)
_ = addPeer(t, p, peers.Connected)
}
disPeers := p.Disconnected()
@@ -571,23 +571,23 @@ func TestPeerIPTracker(t *testing.T) {
if err != nil {
t.Fatal(err)
}
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.PeerConnectionState(ethpb.ConnectionState_DISCONNECTED)))
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.ConnectionState(ethpb.ConnectionState_DISCONNECTED)))
}
for _, pr := range badPeers {
assert.Equal(t, true, p.IsBad(pr), "peer with bad ip is not bad")
assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
}
// Add in bad peers, so that our records are trimmed out
// from the peer store.
for i := 0; i < p.MaxPeerLimit()+100; i++ {
// Peer added to peer handler.
pid := addPeer(t, p, peers.PeerDisconnected)
pid := addPeer(t, p, peers.Disconnected)
p.Scorers().BadResponsesScorer().Increment(pid)
}
p.Prune()
for _, pr := range badPeers {
assert.Equal(t, false, p.IsBad(pr), "peer with good ip is regarded as bad")
assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
}
}
@@ -601,8 +601,11 @@ func TestTrimmedOrderedPeers(t *testing.T) {
},
})
expectedTarget := primitives.Epoch(2)
maxPeers := 3
const (
expectedTarget = primitives.Epoch(2)
maxPeers = 3
)
var mockroot2 [32]byte
var mockroot3 [32]byte
var mockroot4 [32]byte
@@ -611,36 +614,41 @@ func TestTrimmedOrderedPeers(t *testing.T) {
copy(mockroot3[:], "three")
copy(mockroot4[:], "four")
copy(mockroot5[:], "five")
// Peer 1
pid1 := addPeer(t, p, peers.PeerConnected)
pid1 := addPeer(t, p, peers.Connected)
p.SetChainState(pid1, &pb.Status{
HeadSlot: 3 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 3,
FinalizedRoot: mockroot3[:],
})
// Peer 2
pid2 := addPeer(t, p, peers.PeerConnected)
pid2 := addPeer(t, p, peers.Connected)
p.SetChainState(pid2, &pb.Status{
HeadSlot: 4 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 4,
FinalizedRoot: mockroot4[:],
})
// Peer 3
pid3 := addPeer(t, p, peers.PeerConnected)
pid3 := addPeer(t, p, peers.Connected)
p.SetChainState(pid3, &pb.Status{
HeadSlot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 5,
FinalizedRoot: mockroot5[:],
})
// Peer 4
pid4 := addPeer(t, p, peers.PeerConnected)
pid4 := addPeer(t, p, peers.Connected)
p.SetChainState(pid4, &pb.Status{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
FinalizedRoot: mockroot2[:],
})
// Peer 5
pid5 := addPeer(t, p, peers.PeerConnected)
pid5 := addPeer(t, p, peers.Connected)
p.SetChainState(pid5, &pb.Status{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
@@ -680,12 +688,12 @@ func TestAtInboundPeerLimit(t *testing.T) {
})
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
assert.Equal(t, false, p.IsAboveInboundLimit(), "Inbound limit exceeded")
for i := 0; i < 31; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
assert.Equal(t, true, p.IsAboveInboundLimit(), "Inbound limit not exceeded")
}
@@ -705,7 +713,7 @@ func TestPrunePeers(t *testing.T) {
})
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are no prunable peers.
peersToPrune := p.PeersToPrune()
@@ -713,7 +721,7 @@ func TestPrunePeers(t *testing.T) {
for i := 0; i < 18; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are the correct prunable peers.
@@ -723,7 +731,7 @@ func TestPrunePeers(t *testing.T) {
// Add in more peers.
for i := 0; i < 13; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Set up bad scores for inbound peers.
@@ -767,7 +775,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are no prunable peers.
peersToPrune := p.PeersToPrune()
@@ -775,7 +783,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
for i := 0; i < 18; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are the correct prunable peers.
@@ -785,7 +793,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
// Add in more peers.
for i := 0; i < 13; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
var trustedPeers []peer.ID
@@ -821,7 +829,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
// Add more peers to check if trusted peers can be pruned after they are deleted from trusted peer set.
for i := 0; i < 9; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Delete trusted peers.
@@ -865,14 +873,14 @@ func TestStatus_BestPeer(t *testing.T) {
headSlot primitives.Slot
finalizedEpoch primitives.Epoch
}
tests := []struct {
name string
peers []*peerConfig
limitPeers int
ourFinalizedEpoch primitives.Epoch
targetEpoch primitives.Epoch
// targetEpochSupport denotes how many peers support returned epoch.
targetEpochSupport int
name string
peers []*peerConfig
limitPeers int
ourFinalizedEpoch primitives.Epoch
targetEpoch primitives.Epoch
targetEpochSupport int // Denotes how many peers support returned epoch.
}{
{
name: "head slot matches finalized epoch",
@@ -885,6 +893,7 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 3 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -902,6 +911,7 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 4 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -916,6 +926,7 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 42 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -930,8 +941,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 46 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 6, headSlot: 6 * params.BeaconConfig().SlotsPerEpoch},
},
ourFinalizedEpoch: 5,
limitPeers: 15,
ourFinalizedEpoch: 5,
targetEpoch: 6,
targetEpochSupport: 1,
},
@@ -950,8 +961,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 7, headSlot: 7 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
ourFinalizedEpoch: 5,
limitPeers: 15,
ourFinalizedEpoch: 5,
targetEpoch: 6,
targetEpochSupport: 5,
},
@@ -970,8 +981,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 7, headSlot: 7 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
ourFinalizedEpoch: 5,
limitPeers: 4,
ourFinalizedEpoch: 5,
targetEpoch: 6,
targetEpochSupport: 4,
},
@@ -986,8 +997,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
ourFinalizedEpoch: 5,
limitPeers: 15,
ourFinalizedEpoch: 5,
targetEpoch: 8,
targetEpochSupport: 3,
},
@@ -1002,7 +1013,7 @@ func TestStatus_BestPeer(t *testing.T) {
},
})
for _, peerConfig := range tt.peers {
p.SetChainState(addPeer(t, p, peers.PeerConnected), &pb.Status{
p.SetChainState(addPeer(t, p, peers.Connected), &pb.Status{
FinalizedEpoch: peerConfig.finalizedEpoch,
HeadSlot: peerConfig.headSlot,
})
@@ -1028,7 +1039,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
for i := 0; i <= maxPeers+100; i++ {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.PeerConnected)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
FinalizedEpoch: 10,
})
@@ -1051,7 +1062,7 @@ func TestStatus_BestNonFinalized(t *testing.T) {
peerSlots := []primitives.Slot{32, 32, 32, 32, 235, 233, 258, 268, 270}
for i, headSlot := range peerSlots {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.PeerConnected)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
HeadSlot: headSlot,
})
@@ -1074,17 +1085,17 @@ func TestStatus_CurrentEpoch(t *testing.T) {
},
})
// Peer 1
pid1 := addPeer(t, p, peers.PeerConnected)
pid1 := addPeer(t, p, peers.Connected)
p.SetChainState(pid1, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})
// Peer 2
pid2 := addPeer(t, p, peers.PeerConnected)
pid2 := addPeer(t, p, peers.Connected)
p.SetChainState(pid2, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 5,
})
// Peer 3
pid3 := addPeer(t, p, peers.PeerConnected)
pid3 := addPeer(t, p, peers.Connected)
p.SetChainState(pid3, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})
@@ -1103,8 +1114,8 @@ func TestInbound(t *testing.T) {
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
inbound := createPeer(t, p, addr, network.DirInbound, peers.Connected)
createPeer(t, p, addr, network.DirOutbound, peers.Connected)
result := p.Inbound()
require.Equal(t, 1, len(result))
@@ -1123,8 +1134,8 @@ func TestInboundConnected(t *testing.T) {
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnecting)
inbound := createPeer(t, p, addr, network.DirInbound, peers.Connected)
createPeer(t, p, addr, network.DirInbound, peers.Connecting)
result := p.InboundConnected()
require.Equal(t, 1, len(result))
@@ -1157,7 +1168,7 @@ func TestInboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.Connected)
expectedTCP[peer.String()] = true
}
@@ -1166,7 +1177,7 @@ func TestInboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.Connected)
expectedQUIC[peer.String()] = true
}
@@ -1203,8 +1214,8 @@ func TestOutbound(t *testing.T) {
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
outbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.Connected)
outbound := createPeer(t, p, addr, network.DirOutbound, peers.Connected)
result := p.Outbound()
require.Equal(t, 1, len(result))
@@ -1223,8 +1234,8 @@ func TestOutboundConnected(t *testing.T) {
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnecting)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.Connected)
createPeer(t, p, addr, network.DirOutbound, peers.Connecting)
result := p.OutboundConnected()
require.Equal(t, 1, len(result))
@@ -1257,7 +1268,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.Connected)
expectedTCP[peer.String()] = true
}
@@ -1266,7 +1277,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.Connected)
expectedQUIC[peer.String()] = true
}
@@ -1293,7 +1304,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState) peer.ID {
// Set up some peers with different states
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
@@ -1312,7 +1323,7 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState)
}
func createPeer(t *testing.T, p *peers.Status, addr ma.Multiaddr,
dir network.Direction, state peerdata.PeerConnectionState) peer.ID {
dir network.Direction, state peerdata.ConnectionState) peer.ID {
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
_, err := rand.Read(idBytes)


@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %w", err)
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %w", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %w", err)
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %w", err)
}
return directAddrInfos, nil
}


@@ -10,15 +10,25 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/sirupsen/logrus"
)
var _ pubsub.SubscriptionFilter = (*Service)(nil)
// It is set at this limit to handle the possibility
// of double topic subscriptions at fork boundaries.
// -> 64 Attestation Subnets * 2.
// -> 4 Sync Committee Subnets * 2.
// -> Block,Aggregate,ProposerSlashing,AttesterSlashing,Exits,SyncContribution * 2.
// -> BeaconBlock * 2 = 2
// -> BeaconAggregateAndProof * 2 = 2
// -> VoluntaryExit * 2 = 2
// -> ProposerSlashing * 2 = 2
// -> AttesterSlashing * 2 = 2
// -> 64 Beacon Attestation * 2 = 128
// -> SyncContributionAndProof * 2 = 2
// -> 4 SyncCommitteeSubnets * 2 = 8
// -> BlsToExecutionChange * 2 = 2
// -> 6 BlobSidecar * 2 = 12
// -------------------------------------
// TOTAL = 162
const pubsubSubscriptionRequestLimit = 200
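
The breakdown above totals 162, comfortably under the 200 limit; a quick arithmetic check (constant names are illustrative, not from the diff):

const (
	forkFactor         = 2 // each topic may be live on both sides of a fork boundary
	singleTopics       = 7 // block, aggregate, voluntary exit, proposer/attester slashings, sync contribution, BLS-to-execution change
	attestationSubnets = 64
	syncSubnets        = 4
	blobSubnets        = 6
	totalTopics        = forkFactor * (singleTopics + attestationSubnets + syncSubnets + blobSubnets) // 2 * 81 = 162
)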
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
@@ -95,8 +105,15 @@ func (s *Service) CanSubscribe(topic string) bool {
// FilterIncomingSubscriptions is invoked for all RPCs containing subscription notifications.
// This method returns only the topics of interest and may return an error if the subscription
// request contains too many topics.
func (s *Service) FilterIncomingSubscriptions(_ peer.ID, subs []*pubsubpb.RPC_SubOpts) ([]*pubsubpb.RPC_SubOpts, error) {
func (s *Service) FilterIncomingSubscriptions(peerID peer.ID, subs []*pubsubpb.RPC_SubOpts) ([]*pubsubpb.RPC_SubOpts, error) {
if len(subs) > pubsubSubscriptionRequestLimit {
subsCount := len(subs)
log.WithFields(logrus.Fields{
"peerID": peerID,
"subscriptionCounts": subsCount,
"subscriptionLimit": pubsubSubscriptionRequestLimit,
}).Debug("Too many incoming subscriptions, filtering them")
return nil, pubsub.ErrTooManySubscriptions
}
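
For orientation, a filter like this is registered with gossipsub through the WithSubscriptionFilter option (a sketch; the option exists in go-libp2p-pubsub, while the surrounding variables are assumed):

gs, err := pubsub.NewGossipSub(ctx, h, pubsub.WithSubscriptionFilter(s))
if err != nil {
	return errors.Wrap(err, "start gossipsub")
}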


@@ -43,6 +43,10 @@ var _ runtime.Service = (*Service)(nil)
// defined below.
var pollingPeriod = 6 * time.Second
// When looking for new nodes, if not enough nodes are found,
// we stop after this many iterations.
var batchSize = 2_000
// Refresh rate of ENR set at twice per slot.
var refreshRate = slots.DivideSlotBy(2)
@@ -202,12 +206,13 @@ func (s *Service) Start() {
s.startupErr = err
return
}
err = s.connectToBootnodes()
if err != nil {
log.WithError(err).Error("Could not add bootnode to the exclusion list")
if err := s.connectToBootnodes(); err != nil {
log.WithError(err).Error("Could not connect to boot nodes")
s.startupErr = err
return
}
s.dv5Listener = listener
go s.listenForNewNodes()
}
@@ -226,7 +231,7 @@ func (s *Service) Start() {
}
// Initialize metadata according to the
// current epoch.
s.RefreshENR()
s.RefreshPersistentSubnets()
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconConfig().TtfbTimeoutDuration(), func() {
@@ -234,7 +239,7 @@ func (s *Service) Start() {
})
async.RunEvery(s.ctx, 30*time.Minute, s.Peers().Prune)
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, refreshRate, s.RefreshPersistentSubnets)
async.RunEvery(s.ctx, 1*time.Minute, func() {
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
@@ -384,12 +389,17 @@ func (s *Service) AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) er
s.pingMethodLock.Unlock()
}
func (s *Service) pingPeers() {
func (s *Service) pingPeersAndLogEnr() {
s.pingMethodLock.RLock()
defer s.pingMethodLock.RUnlock()
localENR := s.dv5Listener.Self()
log.WithField("ENR", localENR).Info("New node record")
if s.pingMethod == nil {
return
}
for _, pid := range s.peers.Connected() {
go func(id peer.ID) {
if err := s.pingMethod(s.ctx, id); err != nil {
@@ -462,8 +472,8 @@ func (s *Service) connectWithPeer(ctx context.Context, info peer.AddrInfo) error
if info.ID == s.host.ID() {
return nil
}
if s.Peers().IsBad(info.ID) {
return errors.New("refused to connect to bad peer")
if err := s.Peers().IsBad(info.ID); err != nil {
return errors.Wrap(err, "refused to connect to bad peer")
}
ctx, cancel := context.WithTimeout(ctx, maxDialTimeout)
defer cancel()


@@ -2,6 +2,7 @@ package p2p
import (
"context"
"math"
"strings"
"sync"
"time"
@@ -19,22 +20,24 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
var attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
var syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
var attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
var syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
)
// The value used with the subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// sync subnets from attestation subnets. This is deliberately
// chosen as more than 64(attestation subnet count).
// sync subnets from others. This is deliberately
// chosen as more than 64 (attestation subnet count).
const syncLockerVal = 100
// The value used with the blob sidecar subnet, in order
@@ -44,6 +47,77 @@ const syncLockerVal = 100
// chosen more than sync and attestation subnet combined.
const blobSubnetLockerVal = 110
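
A sketch of how these offsets presumably combine with a subnet index into distinct lock keys (layout inferred from the comments above; the helper is hypothetical):

func subnetLockKey(kind string, index uint64) uint64 {
	switch kind {
	case "sync":
		return syncLockerVal + index // 100..103, above the 0..63 attestation range
	case "blob":
		return blobSubnetLockerVal + index // 110..115
	default:
		return index // attestation subnets use the raw index
	}
}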
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, index uint64) (func(node *enode.Node) bool, error) {
switch {
case strings.Contains(topic, GossipAttestationMessage):
return s.filterPeerForAttSubnet(index), nil
case strings.Contains(topic, GossipSyncCommitteeMessage):
return s.filterPeerForSyncSubnet(index), nil
default:
return nil, errors.Errorf("no subnet exists for provided topic: %s", topic)
}
}
// searchForPeers performs a network search for peers subscribed to a particular subnet.
// It exits as soon as one of these conditions is met:
// - It has looped through `batchSize` nodes.
// - It has found `peersToFindCount` peers matching the `filter` criteria.
// - The iterator is exhausted.
func searchForPeers(
iterator enode.Iterator,
batchSize int,
peersToFindCount uint,
filter func(node *enode.Node) bool,
) []*enode.Node {
nodeFromNodeID := make(map[enode.ID]*enode.Node, batchSize)
for i := 0; i < batchSize && uint(len(nodeFromNodeID)) <= peersToFindCount && iterator.Next(); i++ {
node := iterator.Node()
// Filter out nodes that do not meet the criteria.
if !filter(node) {
continue
}
// Remove duplicates, keeping the node with higher seq.
prevNode, ok := nodeFromNodeID[node.ID()]
if ok && prevNode.Seq() > node.Seq() {
continue
}
nodeFromNodeID[node.ID()] = node
}
// Convert the map to a slice.
nodes := make([]*enode.Node, 0, len(nodeFromNodeID))
for _, node := range nodeFromNodeID {
nodes = append(nodes, node)
}
return nodes
}
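The dedup-by-seq behavior above is easy to exercise in isolation. A minimal test-style sketch, assuming searchForPeers from the hunk above is in scope; the generated key and address are illustrative:
package p2p
import (
	"net"
	"testing"
	gcrypto "github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)
func TestSearchForPeersDedup(t *testing.T) {
	key, err := gcrypto.GenerateKey()
	if err != nil {
		t.Fatal(err)
	}
	node := enode.NewV4(&key.PublicKey, net.ParseIP("10.0.0.1"), 30303, 30303)
	// Static iterator standing in for dv5Listener.RandomNodes(); the same
	// record appears twice to show that duplicates collapse by node ID.
	iter := enode.IterNodes([]*enode.Node{node, node})
	accept := func(*enode.Node) bool { return true } // stand-in for filterPeerForAttSubnet
	found := searchForPeers(iter, 10 /* batchSize */, 5 /* peersToFindCount */, accept)
	if len(found) != 1 {
		t.Fatalf("want 1 deduplicated node, got %d", len(found))
	}
}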
// dialPeer dials a peer in a separate goroutine.
func (s *Service) dialPeer(ctx context.Context, wg *sync.WaitGroup, node *enode.Node) {
info, _, err := convertToAddrInfo(node)
if err != nil {
return
}
if info == nil {
return
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}()
}
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either:
@@ -52,67 +126,104 @@ const blobSubnetLockerVal = 110
// In some edge cases, this method may hang indefinitely even while peers
// are actually being found. In such a case, the user should cancel the context
// and re-run the method.
func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
index uint64, threshold int) (bool, error) {
func (s *Service) FindPeersWithSubnet(
ctx context.Context,
topic string,
index uint64,
threshold int,
) (bool, error) {
const minLogInterval = 1 * time.Minute
ctx, span := trace.StartSpan(ctx, "p2p.FindPeersWithSubnet")
defer span.End()
span.SetAttributes(trace.Int64Attribute("index", int64(index))) // lint:ignore uintcast -- It's safe to do this for tracing.
if s.dv5Listener == nil {
// return if discovery isn't set
// Return if discovery isn't set
return false, nil
}
topic += s.Encoding().ProtocolSuffix()
iterator := s.dv5Listener.RandomNodes()
defer iterator.Close()
switch {
case strings.Contains(topic, GossipAttestationMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForAttSubnet(index))
case strings.Contains(topic, GossipSyncCommitteeMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForSyncSubnet(index))
default:
return false, errors.New("no subnet exists for provided topic")
filter, err := s.nodeFilter(topic, index)
if err != nil {
return false, errors.Wrap(err, "node filter")
}
peersSummary := func(topic string, threshold int) (int, int) {
// Retrieve how many peers we have for this topic.
peerCountForTopic := len(s.pubsub.ListPeers(topic))
// Compute how many peers we are missing to reach the threshold.
missingPeerCountForTopic := max(0, threshold-peerCountForTopic)
return peerCountForTopic, missingPeerCountForTopic
}
// Compute how many peers we are missing to reach the threshold.
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
// Exit early if we have enough peers.
if missingPeerCountForTopic == 0 {
return true, nil
}
log := log.WithFields(logrus.Fields{
"topic": topic,
"targetPeerCount": threshold,
})
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - start")
lastLogTime := time.Now()
wg := new(sync.WaitGroup)
for {
currNum := len(s.pubsub.ListPeers(topic))
if currNum >= threshold {
// If the context is done, we can exit the loop. This is the unhappy path.
if err := ctx.Err(); err != nil {
return false, errors.Errorf(
"unable to find requisite number of peers for topic %s - only %d out of %d peers available after searching",
topic, peerCountForTopic, threshold,
)
}
// Search for new peers in the network.
nodes := searchForPeers(iterator, batchSize, uint(missingPeerCountForTopic), filter)
// Restrict dials if limit is applied.
maxConcurrentDials := math.MaxInt
if flags.MaxDialIsActive() {
maxConcurrentDials = flags.Get().MaxConcurrentDials
}
// Dial the peers in batches.
for start := 0; start < len(nodes); start += maxConcurrentDials {
stop := min(start+maxConcurrentDials, len(nodes))
for _, node := range nodes[start:stop] {
s.dialPeer(ctx, wg, node)
}
// Wait for all dials to be completed.
wg.Wait()
}
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
// If we have enough peers, we can exit the loop. This is the happy path.
if missingPeerCountForTopic == 0 {
break
}
if err := ctx.Err(); err != nil {
return false, errors.Errorf("unable to find requisite number of peers for topic %s - "+
"only %d out of %d peers were able to be found", topic, currNum, threshold)
}
nodeCount := int(params.BeaconNetworkConfig().MinimumPeersInSubnetSearch)
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
nodeCount = min(nodeCount, flags.Get().MaxConcurrentDials)
}
nodes := enode.ReadNodes(iterator, nodeCount)
for _, node := range nodes {
info, _, err := convertToAddrInfo(node)
if err != nil {
continue
}
if info == nil {
continue
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}()
if time.Since(lastLogTime) > minLogInterval {
lastLogTime = time.Now()
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - continue")
}
// Wait for all dials to be completed.
wg.Wait()
}
log.WithField("currentPeerCount", threshold).Debug("Searching for new peers for a subnet - success")
return true, nil
}
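Given the hang warning above, callers are expected to bound the search themselves. A hedged sketch of that calling pattern (ensureSubnetPeers is a hypothetical helper, not part of this diff; the 30-second deadline is illustrative):
package p2p
import (
	"context"
	"time"
)
// ensureSubnetPeers bounds the blocking search with a deadline and treats
// failure (including context expiry) as retryable.
func ensureSubnetPeers(ctx context.Context, s *Service, topic string, subnet uint64, threshold int) bool {
	ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel()
	ok, err := s.FindPeersWithSubnet(ctx, topic, subnet, threshold)
	if err != nil {
		// FindPeersWithSubnet errors once ctx is done before the threshold
		// was reached; the caller can simply re-run the search.
		return false
	}
	return ok
}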
@@ -156,11 +267,17 @@ func (s *Service) filterPeerForSyncSubnet(index uint64) func(node *enode.Node) b
// lower threshold to broadcast object compared to searching
// for a subnet. So that even in the event of poor peer
// connectivity, we can still broadcast an attestation.
func (s *Service) hasPeerWithSubnet(topic string) bool {
func (s *Service) hasPeerWithSubnet(subnetTopic string) bool {
// In the event the configured peer threshold is lower, we will choose the
// lower threshold.
minPeers := mathutil.Min(1, uint64(flags.Get().MinimumPeersPerSubnet))
return len(s.pubsub.ListPeers(topic+s.Encoding().ProtocolSuffix())) >= int(minPeers) // lint:ignore uintcast -- Min peers can be safely cast to int.
minPeers := min(1, flags.Get().MinimumPeersPerSubnet)
topic := subnetTopic + s.Encoding().ProtocolSuffix()
peersWithSubnet := s.pubsub.ListPeers(topic)
peersWithSubnetCount := len(peersWithSubnet)
enoughPeers := peersWithSubnetCount >= minPeers
return enoughPeers
}
// Updates the service's discv5 listener record's attestation subnet
@@ -355,10 +472,10 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
// The subnet locker is a map which keeps track of all
// mutexes stored per subnet. This locker is re-used
// between both the attestation and sync subnets. In
// order to differentiate between attestation and sync
// subnets. Sync subnets are stored by (subnet+syncLockerVal). This
// is to prevent conflicts while allowing both subnets
// among the attestation, sync and blob subnets.
// Sync subnets are stored by (subnet+syncLockerVal).
// Blob subnets are stored by (subnet+blobSubnetLockerVal).
// This is to prevent conflicts while allowing subnets
// to use a single locker.
func (s *Service) subnetLocker(i uint64) *sync.RWMutex {
s.subnetsLockLock.Lock()
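Spelled out, the keying scheme described above maps the three subnet families onto disjoint key ranges of a single map. Illustrative helpers (not part of the diff, assuming the same package as the constants above):
// Attestation subnets occupy keys [0, 64), sync subnets start at 100, and
// blob subnets at 110, so one locker map serves all three without collisions.
func attLockerKey(subnet uint64) uint64  { return subnet }
func syncLockerKey(subnet uint64) uint64 { return subnet + syncLockerVal }       // +100
func blobLockerKey(subnet uint64) uint64 { return subnet + blobSubnetLockerVal } // +110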

View File

@@ -27,6 +27,7 @@ go_library(
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//config:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/control:go_default_library",

View File

@@ -27,148 +27,148 @@ func NewFuzzTestP2P() *FakeP2P {
}
// Encoding -- fake.
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding {
func (*FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
// AddConnectionHandler -- fake.
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
}
// AddDisconnectionHandler -- fake.
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
// AddPingMethod -- fake.
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
}
// PeerID -- fake.
func (_ *FakeP2P) PeerID() peer.ID {
func (*FakeP2P) PeerID() peer.ID {
return "fake"
}
// ENR returns the enr of the local peer.
func (_ *FakeP2P) ENR() *enr.Record {
func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// DiscoveryAddresses -- fake
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {}
// RefreshPersistentSubnets mocks the p2p func.
func (*FakeP2P) RefreshPersistentSubnets() {}
// LeaveTopic -- fake.
func (_ *FakeP2P) LeaveTopic(_ string) error {
func (*FakeP2P) LeaveTopic(_ string) error {
return nil
}
// Metadata -- fake.
func (_ *FakeP2P) Metadata() metadata.Metadata {
func (*FakeP2P) Metadata() metadata.Metadata {
return nil
}
// Peers -- fake.
func (_ *FakeP2P) Peers() *peers.Status {
func (*FakeP2P) Peers() *peers.Status {
return nil
}
// PublishToTopic -- fake.
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil
}
// Send -- fake.
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil
}
// PubSub -- fake.
func (_ *FakeP2P) PubSub() *pubsub.PubSub {
func (*FakeP2P) PubSub() *pubsub.PubSub {
return nil
}
// MetadataSeq -- fake.
func (_ *FakeP2P) MetadataSeq() uint64 {
func (*FakeP2P) MetadataSeq() uint64 {
return 0
}
// SetStreamHandler -- fake.
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
}
// SubscribeToTopic -- fake.
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil
}
// JoinTopic -- fake.
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil
}
// Host -- fake.
func (_ *FakeP2P) Host() host.Host {
func (*FakeP2P) Host() host.Host {
return nil
}
// Disconnect -- fake.
func (_ *FakeP2P) Disconnect(_ peer.ID) error {
func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// Broadcast -- fake.
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
// BroadcastAttestation -- fake.
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil
}
// BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial -- fake.
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept -- fake.
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured -- fake.
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded -- fake.
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}

View File

@@ -18,12 +18,12 @@ type MockHost struct {
}
// ID --
func (_ *MockHost) ID() peer.ID {
func (*MockHost) ID() peer.ID {
return ""
}
// Peerstore --
func (_ *MockHost) Peerstore() peerstore.Peerstore {
func (*MockHost) Peerstore() peerstore.Peerstore {
return nil
}
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
}
// Network --
func (_ *MockHost) Network() network.Network {
func (*MockHost) Network() network.Network {
return nil
}
// Mux --
func (_ *MockHost) Mux() protocol.Switch {
func (*MockHost) Mux() protocol.Switch {
return nil
}
// Connect --
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil
}
// SetStreamHandler --
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}
func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream --
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil
}
// Close --
func (_ *MockHost) Close() error {
func (*MockHost) Close() error {
return nil
}
// ConnManager --
func (_ *MockHost) ConnManager() connmgr.ConnManager {
func (*MockHost) ConnManager() connmgr.ConnManager {
return nil
}
// EventBus --
func (_ *MockHost) EventBus() event.Bus {
func (*MockHost) EventBus() event.Bus {
return nil
}

View File

@@ -20,7 +20,7 @@ type MockPeerManager struct {
}
// Disconnect .
func (_ *MockPeerManager) Disconnect(peer.ID) error {
func (*MockPeerManager) Disconnect(peer.ID) error {
return nil
}
@@ -35,25 +35,25 @@ func (m *MockPeerManager) Host() host.Host {
}
// ENR .
func (m MockPeerManager) ENR() *enr.Record {
func (m *MockPeerManager) ENR() *enr.Record {
return m.Enr
}
// DiscoveryAddresses .
func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (m *MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr {
return nil, errors.New("fail")
}
return m.DiscoveryAddr, nil
}
// RefreshENR .
func (_ MockPeerManager) RefreshENR() {}
// RefreshPersistentSubnets .
func (*MockPeerManager) RefreshPersistentSubnets() {}
// FindPeersWithSubnet .
func (_ MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return true, nil
}
// AddPingMethod .
func (_ MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}
func (*MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}

View File

@@ -64,7 +64,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
log.WithError(err).Debug("Cannot decode")
}
m.peers.Add(createENR(), id0, ma0, network.DirInbound)
m.peers.SetConnectionState(id0, peers.PeerConnected)
m.peers.SetConnectionState(id0, peers.Connected)
m.peers.SetChainState(id0, &pb.Status{FinalizedEpoch: 10})
id1, err := peer.Decode(MockRawPeerId1)
if err != nil {
@@ -75,7 +75,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
log.WithError(err).Debug("Cannot decode")
}
m.peers.Add(createENR(), id1, ma1, network.DirOutbound)
m.peers.SetConnectionState(id1, peers.PeerConnected)
m.peers.SetConnectionState(id1, peers.Connected)
m.peers.SetChainState(id1, &pb.Status{FinalizedEpoch: 11})
}
return m.peers

View File

@@ -10,9 +10,11 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/config"
core "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/control"
"github.com/libp2p/go-libp2p/core/host"
@@ -34,13 +36,17 @@ import (
// We have to declare this again here to prevent a circular dependency
// with the main p2p package.
const metatadataV1Topic = "/eth2/beacon_chain/req/metadata/1"
const metatadataV2Topic = "/eth2/beacon_chain/req/metadata/2"
const (
metadataV1Topic = "/eth2/beacon_chain/req/metadata/1"
metadataV2Topic = "/eth2/beacon_chain/req/metadata/2"
metadataV3Topic = "/eth2/beacon_chain/req/metadata/3"
)
// TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct {
t *testing.T
BHost host.Host
EnodeID enode.ID
pubsub *pubsub.PubSub
joinedTopics map[string]*pubsub.Topic
BroadcastCalled atomic.Bool
@@ -51,9 +57,17 @@ type TestP2P struct {
}
// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T) *TestP2P {
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
ctx := context.Background()
h, err := libp2p.New(libp2p.ResourceManager(&network.NullResourceManager{}), libp2p.Transport(tcp.NewTCPTransport), libp2p.DefaultListenAddrs)
options := []config.Option{
libp2p.ResourceManager(&network.NullResourceManager{}),
libp2p.Transport(tcp.NewTCPTransport),
libp2p.DefaultListenAddrs,
}
options = append(options, userOptions...)
h, err := libp2p.New(options...)
require.NoError(t, err)
ps, err := pubsub.NewFloodSub(ctx, h,
pubsub.WithMessageSigning(false),
@@ -239,7 +253,7 @@ func (p *TestP2P) LeaveTopic(topic string) error {
}
// Encoding returns ssz encoding.
func (_ *TestP2P) Encoding() encoder.NetworkEncoding {
func (*TestP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
@@ -266,34 +280,39 @@ func (p *TestP2P) Host() host.Host {
}
// ENR returns the enr of the local peer.
func (_ *TestP2P) ENR() *enr.Record {
func (*TestP2P) ENR() *enr.Record {
return new(enr.Record)
}
// NodeID returns the node id of the local peer.
func (p *TestP2P) NodeID() enode.ID {
return p.EnodeID
}
// DiscoveryAddresses --
func (_ *TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// AddConnectionHandler handles the connection with a newly connected peer.
func (p *TestP2P) AddConnectionHandler(f, _ func(ctx context.Context, id peer.ID) error) {
p.BHost.Network().Notify(&network.NotifyBundle{
ConnectedF: func(net network.Network, conn network.Conn) {
ConnectedF: func(_ network.Network, conn network.Conn) {
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
p.peers.Add(new(enr.Record), conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction)
ctx := context.Background()
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
p.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
if err := f(ctx, conn.RemotePeer()); err != nil {
logrus.WithError(err).Error("Could not send successful hello rpc request")
if err := p.Disconnect(conn.RemotePeer()); err != nil {
logrus.WithError(err).Errorf("Unable to close peer %s", conn.RemotePeer())
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnected)
return
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.Connected)
}()
},
})
@@ -302,14 +321,14 @@ func (p *TestP2P) AddConnectionHandler(f, _ func(ctx context.Context, id peer.ID
// AddDisconnectionHandler --
func (p *TestP2P) AddDisconnectionHandler(f func(ctx context.Context, id peer.ID) error) {
p.BHost.Network().Notify(&network.NotifyBundle{
DisconnectedF: func(net network.Network, conn network.Conn) {
DisconnectedF: func(_ network.Network, conn network.Conn) {
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnecting)
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnecting)
if err := f(context.Background(), conn.RemotePeer()); err != nil {
logrus.WithError(err).Debug("Unable to invoke callback")
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnected)
}()
},
})
@@ -317,6 +336,8 @@ func (p *TestP2P) AddDisconnectionHandler(f func(ctx context.Context, id peer.ID
// Send a message to a specific peer.
func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid peer.ID) (network.Stream, error) {
metadataTopics := map[string]bool{metadataV1Topic: true, metadataV2Topic: true, metadataV3Topic: true}
t := topic
if t == "" {
return nil, fmt.Errorf("protocol doesn't exist for proto message: %v", msg)
@@ -326,7 +347,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
return nil, err
}
if topic != metatadataV1Topic && topic != metatadataV2Topic {
if !metadataTopics[topic] {
castedMsg, ok := msg.(ssz.Marshaler)
if !ok {
p.t.Fatalf("%T doesn't support ssz marshaler", msg)
@@ -353,7 +374,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
}
// Started always returns true.
func (_ *TestP2P) Started() bool {
func (*TestP2P) Started() bool {
return true
}
@@ -363,12 +384,12 @@ func (p *TestP2P) Peers() *peers.Status {
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *TestP2P) RefreshENR() {}
// RefreshPersistentSubnets mocks the p2p func.
func (*TestP2P) RefreshPersistentSubnets() {}
// ForkDigest mocks the p2p func.
func (p *TestP2P) ForkDigest() ([4]byte, error) {
@@ -386,31 +407,31 @@ func (p *TestP2P) MetadataSeq() uint64 {
}
// AddPingMethod mocks the p2p func.
func (_ *TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
// no-op
}
// InterceptPeerDial .
func (_ *TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial .
func (_ *TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept .
func (_ *TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured .
func (_ *TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded .
func (_ *TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}

View File

@@ -12,10 +12,15 @@ import (
"path"
"time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file"
@@ -62,6 +67,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}
@@ -71,8 +77,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set, return the private key.
if !cfg.StaticPeerID {
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -89,7 +95,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
log.Info("Wrote network key to file")
log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath)
@@ -173,3 +179,27 @@ func verifyConnectivity(addr string, port uint, protocol string) {
}
}
}
// ConvertPeerIDToNodeID converts a peer ID (libp2p) to a node ID (devp2p).
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the public key object of the peer under "crypto" form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrapf(err, "extract public key from peer ID `%s`", pid)
}
// Extract the bytes representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Retrieve the public key object of the peer under "SECP256K1" form.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -64,3 +65,19 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err)
})
}
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"endpoints.go",
"log.go",
"metrics.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc",

View File

@@ -34,18 +34,48 @@ type endpoint struct {
methods []string
}
// responseWriter is the wrapper to http Response writer.
type responseWriter struct {
http.ResponseWriter
statusCode int
}
// WriteHeader wraps the WriteHeader method of the underlying http.ResponseWriter to capture the status code.
// See the WriteHeader documentation: https://pkg.go.dev/net/http@go1.23.3#ResponseWriter.
func (w *responseWriter) WriteHeader(statusCode int) {
w.statusCode = statusCode
w.ResponseWriter.WriteHeader(statusCode)
}
func (e *endpoint) handlerWithMiddleware() http.HandlerFunc {
handler := http.Handler(e.handler)
for _, m := range e.middleware {
handler = m(handler)
}
return promhttp.InstrumentHandlerDuration(
handler = promhttp.InstrumentHandlerDuration(
httpRequestLatency.MustCurryWith(prometheus.Labels{"endpoint": e.name}),
promhttp.InstrumentHandlerCounter(
httpRequestCount.MustCurryWith(prometheus.Labels{"endpoint": e.name}),
handler,
),
)
return func(w http.ResponseWriter, r *http.Request) {
// SSE errors are handled separately to avoid interference with the streaming
// mechanism and ensure accurate error tracking.
if e.template == "/eth/v1/events" {
handler.ServeHTTP(w, r)
return
}
rw := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
handler.ServeHTTP(rw, r)
if rw.statusCode >= 400 {
httpErrorCount.WithLabelValues(r.URL.Path, http.StatusText(rw.statusCode), r.Method).Inc()
}
}
}
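The wrapper can be checked in isolation. A test-style sketch, assuming the responseWriter type above is in scope; the handler and status code are illustrative:
package rpc
import (
	"net/http"
	"net/http/httptest"
	"testing"
)
func TestResponseWriterCapturesStatus(t *testing.T) {
	h := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusTeapot)
	})
	// The wrapper defaults to 200 OK and records whatever the handler writes.
	rw := &responseWriter{ResponseWriter: httptest.NewRecorder(), statusCode: http.StatusOK}
	h.ServeHTTP(rw, httptest.NewRequest(http.MethodGet, "/", nil))
	if rw.statusCode != http.StatusTeapot {
		t.Fatalf("expected 418, got %d", rw.statusCode)
	}
}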
func (s *Service) endpoints(
@@ -142,9 +172,11 @@ func (s *Service) builderEndpoints(stater lookup.Stater) []endpoint {
}
}
func (*Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
func (s *Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
server := &blob.Server{
Blocker: blocker,
Blocker: blocker,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
}
const namespace = "blob"
@@ -886,10 +918,11 @@ func (*Service) configEndpoints() []endpoint {
func (s *Service) lightClientEndpoints(blocker lookup.Blocker, stater lookup.Stater) []endpoint {
server := &lightclient.Server{
Blocker: blocker,
Stater: stater,
HeadFetcher: s.cfg.HeadFetcher,
BeaconDB: s.cfg.BeaconDB,
Blocker: blocker,
Stater: stater,
HeadFetcher: s.cfg.HeadFetcher,
ChainInfoFetcher: s.cfg.ChainInfoFetcher,
BeaconDB: s.cfg.BeaconDB,
}
const namespace = "lightclient"

View File

@@ -58,6 +58,7 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//crypto/kzg4844:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
@@ -118,6 +119,7 @@ go_test(
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
"strings"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/crypto/kzg4844"
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v5/api"
@@ -1032,21 +1033,17 @@ func unmarshalStrict(data []byte, v interface{}) error {
func (s *Server) validateBroadcast(ctx context.Context, r *http.Request, blk *eth.GenericSignedBeaconBlock) error {
switch r.URL.Query().Get(broadcastValidationQueryParam) {
case broadcastValidationConsensus:
b, err := blocks.NewSignedBeaconBlock(blk.Block)
if err != nil {
return errors.Wrapf(err, "could not create signed beacon block")
}
if err = s.validateConsensus(ctx, b); err != nil {
if err := s.validateConsensus(ctx, blk); err != nil {
return errors.Wrap(err, "consensus validation failed")
}
case broadcastValidationConsensusAndEquivocation:
if err := s.validateConsensus(r.Context(), blk); err != nil {
return errors.Wrap(err, "consensus validation failed")
}
b, err := blocks.NewSignedBeaconBlock(blk.Block)
if err != nil {
return errors.Wrapf(err, "could not create signed beacon block")
}
if err = s.validateConsensus(r.Context(), b); err != nil {
return errors.Wrap(err, "consensus validation failed")
}
if err = s.validateEquivocation(b.Block()); err != nil {
return errors.Wrap(err, "equivocation validation failed")
}
@@ -1056,7 +1053,12 @@ func (s *Server) validateBroadcast(ctx context.Context, r *http.Request, blk *et
return nil
}
func (s *Server) validateConsensus(ctx context.Context, blk interfaces.ReadOnlySignedBeaconBlock) error {
func (s *Server) validateConsensus(ctx context.Context, b *eth.GenericSignedBeaconBlock) error {
blk, err := blocks.NewSignedBeaconBlock(b.Block)
if err != nil {
return errors.Wrapf(err, "could not create signed beacon block")
}
parentBlockRoot := blk.Block().ParentRoot()
parentBlock, err := s.Blocker.Block(ctx, parentBlockRoot[:])
if err != nil {
@@ -1076,6 +1078,24 @@ func (s *Server) validateConsensus(ctx context.Context, blk interfaces.ReadOnlyS
if err != nil {
return errors.Wrap(err, "could not execute state transition")
}
var blobs [][]byte
var proofs [][]byte
switch {
case blk.Version() == version.Electra:
blobs = b.GetElectra().Blobs
proofs = b.GetElectra().KzgProofs
case blk.Version() == version.Deneb:
blobs = b.GetDeneb().Blobs
proofs = b.GetDeneb().KzgProofs
default:
return nil
}
if err := s.validateBlobSidecars(blk, blobs, proofs); err != nil {
return err
}
return nil
}
@@ -1086,6 +1106,25 @@ func (s *Server) validateEquivocation(blk interfaces.ReadOnlyBeaconBlock) error
return nil
}
func (s *Server) validateBlobSidecars(blk interfaces.SignedBeaconBlock, blobs [][]byte, proofs [][]byte) error {
if blk.Version() < version.Deneb {
return nil
}
kzgs, err := blk.Block().Body().BlobKzgCommitments()
if err != nil {
return errors.Wrap(err, "could not get blob kzg commitments")
}
if len(blobs) != len(proofs) || len(blobs) != len(kzgs) {
return errors.New("number of blobs, proofs, and commitments do not match")
}
for i, blob := range blobs {
if err := kzg4844.VerifyBlobProof(kzg4844.Blob(blob), kzg4844.Commitment(kzgs[i]), kzg4844.Proof(proofs[i])); err != nil {
return errors.Wrap(err, "could not verify blob proof")
}
}
return nil
}
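For a blob that matches its commitment, the per-element check above is the tail of a standard KZG round trip. A hedged sketch using the same value-based kzg4844 calls as the hunk (newer go-ethereum releases moved these helpers to pointer arguments, so the exact signatures are an assumption tied to the version vendored here):
import "github.com/ethereum/go-ethereum/crypto/kzg4844"
// proveAndVerify shows the pipeline validateBlobSidecars checks one element
// of: commit to the blob, prove it, then verify the proof against both.
func proveAndVerify(blob kzg4844.Blob) error {
	commitment, err := kzg4844.BlobToCommitment(blob)
	if err != nil {
		return err
	}
	proof, err := kzg4844.ComputeBlobProof(blob, commitment)
	if err != nil {
		return err
	}
	// The same call validateBlobSidecars makes per blob.
	return kzg4844.VerifyBlobProof(blob, commitment, proof)
}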
// GetBlockRoot retrieves the root of a block.
func (s *Server) GetBlockRoot(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.GetBlockRoot")

View File

@@ -87,7 +87,7 @@ func (s *Server) ListAttestations(w http.ResponseWriter, r *http.Request) {
// ListAttestationsV2 retrieves attestations known by the node but
// not necessarily incorporated into any block. Allows filtering by committee index or slot.
func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.ListAttestationsV2")
_, span := trace.StartSpan(r.Context(), "beacon.ListAttestationsV2")
defer span.End()
rawSlot, slot, ok := shared.UintFromQuery(w, r, "slot", false)
@@ -98,13 +98,10 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
if !ok {
return
}
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
v := slots.ToForkVersion(primitives.Slot(slot))
if rawSlot == "" {
v = slots.ToForkVersion(s.TimeFetcher.CurrentSlot())
}
attestations := s.AttestationsPool.AggregatedAttestations()
unaggAtts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
@@ -116,7 +113,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
filteredAtts := make([]interface{}, 0, len(attestations))
for _, att := range attestations {
var includeAttestation bool
if headState.Version() >= version.Electra {
if v >= version.Electra && att.Version() >= version.Electra {
attElectra, ok := att.(*eth.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert attestation of type %T", att), http.StatusInternalServerError)
@@ -128,7 +125,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
attStruct := structs.AttElectraFromConsensus(attElectra)
filteredAtts = append(filteredAtts, attStruct)
}
} else {
} else if v < version.Electra && att.Version() < version.Electra {
attOld, ok := att.(*eth.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert attestation of type %T", att), http.StatusInternalServerError)
@@ -149,9 +146,9 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, &structs.ListAttestationsResponse{
Version: version.String(headState.Version()),
Version: version.String(v),
Data: attsData,
})
}
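The slot-dependent version pick above reduces to a small rule. A sketch (respondVersion is a hypothetical helper; slots.ToForkVersion is the same function the hunk uses, and the handler's existing imports are assumed):
// respondVersion mirrors the selection in ListAttestationsV2: no slot filter
// means "use the node's current slot", otherwise use the requested slot.
// Attestations are then kept only when their own fork version sits on the
// same side of the Electra boundary as the value returned here.
func respondVersion(rawSlot string, requested, current primitives.Slot) int {
	if rawSlot == "" {
		return slots.ToForkVersion(current)
	}
	return slots.ToForkVersion(requested)
}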
@@ -726,31 +723,33 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
ctx, span := trace.StartSpan(r.Context(), "beacon.GetAttesterSlashingsV2")
defer span.End()
v := slots.ToForkVersion(s.TimeFetcher.CurrentSlot())
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
var attStructs []interface{}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
for _, slashing := range sourceSlashings {
var attStruct interface{}
if headState.Version() >= version.Electra {
if v >= version.Electra && slashing.Version() >= version.Electra {
a, ok := slashing.(*eth.AttesterSlashingElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to an Electra slashing", slashing), http.StatusInternalServerError)
return
}
attStruct = structs.AttesterSlashingElectraFromConsensus(a)
} else {
} else if v < version.Electra && slashing.Version() < version.Electra {
a, ok := slashing.(*eth.AttesterSlashing)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to a Phase0 slashing", slashing), http.StatusInternalServerError)
return
}
attStruct = structs.AttesterSlashingFromConsensus(a)
} else {
continue
}
attStructs = append(attStructs, attStruct)
}
@@ -762,10 +761,10 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
}
resp := &structs.GetAttesterSlashingsResponse{
Version: version.String(headState.Version()),
Version: version.String(v),
Data: attBytes,
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, resp)
}

View File

@@ -115,9 +115,16 @@ func TestListAttestations(t *testing.T) {
Signature: bytesutil.PadTo([]byte("signature4"), 96),
}
t.Run("V1", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
AttestationsPool: attestations.NewPool(),
}
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{att1, att2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{att3, att4}))
@@ -204,10 +211,19 @@ func TestListAttestations(t *testing.T) {
t.Run("Pre-Electra", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
AttestationsPool: attestations.NewPool(),
}
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DenebForkEpoch = 0
params.OverrideBeaconConfig(config)
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{att1, att2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{att3, att4}))
t.Run("empty request", func(t *testing.T) {
@@ -226,7 +242,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 4, len(atts))
assert.Equal(t, "phase0", resp.Version)
assert.Equal(t, "deneb", resp.Version)
})
t.Run("slot request", func(t *testing.T) {
url := "http://example.com?slot=2"
@@ -244,7 +260,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 2, len(atts))
assert.Equal(t, "phase0", resp.Version)
assert.Equal(t, "deneb", resp.Version)
for _, a := range atts {
assert.Equal(t, "2", a.Data.Slot)
}
@@ -265,7 +281,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 2, len(atts))
assert.Equal(t, "phase0", resp.Version)
assert.Equal(t, "deneb", resp.Version)
for _, a := range atts {
assert.Equal(t, "4", a.Data.CommitteeIndex)
}
@@ -286,7 +302,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 1, len(atts))
assert.Equal(t, "phase0", resp.Version)
assert.Equal(t, "deneb", resp.Version)
for _, a := range atts {
assert.Equal(t, "2", a.Data.Slot)
assert.Equal(t, "4", a.Data.CommitteeIndex)
@@ -370,12 +386,21 @@ func TestListAttestations(t *testing.T) {
}
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 0
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
AttestationsPool: attestations.NewPool(),
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
}
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{attElectra1, attElectra2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{attElectra3, attElectra4}))
// Added one pre-Electra attestation to ensure it is ignored.
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{attElectra1, attElectra2, att1}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{attElectra3, attElectra4, att3}))
t.Run("empty request", func(t *testing.T) {
url := "http://example.com"
@@ -1658,12 +1683,59 @@ func TestGetAttesterSlashings(t *testing.T) {
})
})
t.Run("V2", func(t *testing.T) {
t.Run("post-electra-ok-1-pre-slashing", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra, slashing1PreElectra}},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, "electra", resp.Version)
// Unmarshal resp.Data into a slice of slashings
var slashings []*structs.AttesterSlashingElectra
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
ss, err := structs.AttesterSlashingsElectraToConsensus(slashings)
require.NoError(t, err)
require.DeepEqual(t, slashing1PostElectra, ss[0])
require.DeepEqual(t, slashing2PostElectra, ss[1])
})
t.Run("post-electra-ok", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra}},
}
@@ -1692,9 +1764,11 @@ func TestGetAttesterSlashings(t *testing.T) {
t.Run("pre-electra-ok", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
}
@@ -1722,8 +1796,15 @@ func TestGetAttesterSlashings(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{}},
}

View File

@@ -12,6 +12,7 @@ import (
"testing"
"time"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
@@ -33,6 +34,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -2922,8 +2924,6 @@ func TestValidateConsensus(t *testing.T) {
require.NoError(t, err)
block, err := util.GenerateFullBlock(st, privs, util.DefaultBlockGenConfig(), st.Slot())
require.NoError(t, err)
sbb, err := blocks.NewSignedBeaconBlock(block)
require.NoError(t, err)
parentRoot, err := parentSbb.Block().HashTreeRoot()
require.NoError(t, err)
server := &Server{
@@ -2931,7 +2931,11 @@ func TestValidateConsensus(t *testing.T) {
Stater: &testutil.MockStater{StatesByRoot: map[[32]byte]state.BeaconState{bytesutil.ToBytes32(parentBlock.Block.StateRoot): parentState}},
}
require.NoError(t, server.validateConsensus(ctx, sbb))
require.NoError(t, server.validateConsensus(ctx, &eth.GenericSignedBeaconBlock{
Block: &eth.GenericSignedBeaconBlock_Phase0{
Phase0: block,
},
}))
}
func TestValidateEquivocation(t *testing.T) {
@@ -4152,3 +4156,24 @@ func TestServer_broadcastBlobSidecars(t *testing.T) {
require.NoError(t, server.broadcastSeenBlockSidecars(context.Background(), blk, b.GetDeneb().Blobs, b.GetDeneb().KzgProofs))
require.LogsContain(t, hook, "Broadcasted blob sidecar for already seen block")
}
func Test_validateBlobSidecars(t *testing.T) {
blob := util.GetRandBlob(123)
commitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}
proof := GoKZG.KZGProof{128, 110, 116, 170, 56, 111, 126, 87, 229, 234, 211, 42, 110, 150, 129, 206, 73, 142, 167, 243, 90, 149, 240, 240, 236, 204, 143, 182, 229, 249, 81, 27, 153, 171, 83, 70, 144, 250, 42, 1, 188, 215, 71, 235, 30, 7, 175, 86}
blk := util.NewBeaconBlockDeneb()
blk.Block.Body.BlobKzgCommitments = [][]byte{commitment[:]}
b, err := blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
s := &Server{}
require.NoError(t, s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
require.ErrorContains(t, "number of blobs, proofs, and commitments do not match", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{}))
sk, err := bls.RandKey()
require.NoError(t, err)
blk.Block.Body.BlobKzgCommitments = [][]byte{sk.PublicKey().Marshal()}
b, err = blocks.NewSignedBeaconBlock(blk)
require.NoError(t, err)
require.ErrorContains(t, "could not verify blob proof: can't verify opening proof", s.validateBlobSidecars(b, [][]byte{blob[:]}, [][]byte{proof[:]}))
}

View File

@@ -10,12 +10,14 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
