Compare commits


2 Commits

Author   SHA1         Message   Date
nisdas   ef92530045   Logging   2024-11-04 13:15:42 +08:00
nisdas   7879732c55   Hack      2024-11-04 13:12:35 +08:00
251 changed files with 2757 additions and 9563 deletions

View File

@@ -54,7 +54,7 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.56.1
version: v1.55.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:

View File

@@ -73,7 +73,6 @@ linters:
- promlinter
- protogetter
- revive
- spancheck
- staticcheck
- stylecheck
- tagalign

View File

@@ -8,29 +8,16 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
### Added
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430).
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra.
- Light client support: Consensus types for Electra
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Added SubmitAggregateAndProofsRequestV2 endpoint.
- Updated the `beacon-chain/monitor` package to Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14562)
- Added ListAttestationsV2 endpoint.
- Add ability to rollback node's internal state during processing.
- Change how unsafe protobuf state is created to prevent unnecessary copies.
- Added benchmarks for process slots for Capella, Deneb, Electra.
- Add helper to cast bytes to string without allocating memory.
- Added GetAggregatedAttestationV2 endpoint.
- Added SubmitAttestationsV2 endpoint.
- Validator REST mode Electra block support.
- Added validator index label to `validator_statuses` metric.
- Added Validator REST mode use of Attestation V2 endpoints and Electra attestations.
- PeerDAS: Added proto for `DataColumnIdentifier`, `DataColumnSidecar`, `DataColumnSidecarsByRangeRequest` and `MetadataV2`.
- Better attestation packing for Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14534)
- P2P: Add logs when a peer is (dis)connected. Add the reason for the disconnection when we initiate it.
- Added a Prometheus error counter metric for HTTP requests to track beacon node requests.
- Added a Prometheus error counter metric for SSE requests.
### Changed
@@ -45,31 +32,12 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Use read only validator for core processing to avoid unnecessary copying.
- Use ROBlock across block processing pipeline.
- Added missing Eth-Consensus-Version headers to GetBlockAttestationsV2 and GetAttesterSlashingsV2 endpoints.
- When instantiating new validators, explicit set `Slashed` to false and move `EffectiveBalance` to match struct definition.
- Updated pgo profile for beacon chain with Holesky data. This improves the profile-guided optimizations in the Go compiler.
- Use read only state when computing the active validator list.
- Simplified `ExitedValidatorIndices`.
- Simplified `EjectedValidatorIndices`.
- `engine_newPayloadV4`, `engine_getPayloadV4` are changed due to new execution request serialization decisions, [PR](https://github.com/prysmaticlabs/prysm/pull/14580)
- Fixed various small things in state-native code.
- Use ROBlock earlier in block syncing pipeline.
- Changed the signature of `ProcessPayload`.
- Only Build the Protobuf state once during serialization.
- Capella and later blocks are always considered execution blocks.
- Fixed panic when http request to subscribe to event stream fails.
- Return early for blob reconstructor during capella fork.
- Updated block endpoint from V1 to V2.
- Rename instances of "deposit receipts" to "deposit requests".
- Non-blocking payload attribute event handling in beacon api [pr](https://github.com/prysmaticlabs/prysm/pull/14644).
- Updated light client protobufs. [PR](https://github.com/prysmaticlabs/prysm/pull/14650)
- Added `Eth-Consensus-Version` header to `ListAttestationsV2` and `GetAggregateAttestationV2` endpoints.
- Updated light client consensus types. [PR](https://github.com/prysmaticlabs/prysm/pull/14652)
- Fixed pending deposits processing on Electra.
- Modified `ListAttestationsV2`, `GetAttesterSlashingsV2` and `GetAggregateAttestationV2` endpoints to use slot to determine fork version.
- Improvements to HTTP response handling. [pr](https://github.com/prysmaticlabs/prysm/pull/14673)
- Updated `Blobs` endpoint to return additional metadata fields.
- Refactor static and dynamic subnets subscription.
### Deprecated
@@ -79,8 +47,6 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
- Removed kzg proof check from blob reconstructor.
### Fixed
@@ -95,13 +61,6 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Fix keymanager API so that get keys returns an empty response instead of a 500 error when using an unsupported keystore.
- Small log improvement, removing some redundant or duplicate logs.
- EIP7521 - Fixes withdrawal bug by accounting for pending partial withdrawals and deducting already withdrawn amounts from the sweep balance. [PR](https://github.com/prysmaticlabs/prysm/pull/14578)
- Unskip Electra merkle spec test.
- Fix panic in validator REST mode when checking status after removing all keys
- Fix panic on attestation interface since we call data before validation
- Corrects nil check on some interface attestation types.
- Temporary solution to handling Electra attestation and attester_slashing events. [pr](14655)
- Diverse log improvements and comment additions.
- P2P: Avoid infinite loop when looking for peers in small networks.
### Security
@@ -216,7 +175,6 @@ Updating to this release is recommended at your convenience.
- Light client support: fix light client attested header execution fields' wrong version bug.
- Testing: added custom matcher for better push settings testing.
- Registered `GetDepositSnapshot` Beacon API endpoint.
- Fix rolling back of a block due to a context deadline.
### Security

View File

@@ -12,7 +12,6 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",

View File

@@ -14,7 +14,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -177,7 +176,7 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
err = non200Err(r)
return
}
res, err = io.ReadAll(io.LimitReader(r.Body, client.MaxBodySize))
res, err = io.ReadAll(r.Body)
if err != nil {
err = errors.Wrap(err, "error reading http response body from builder server")
return
@@ -359,7 +358,7 @@ func (c *Client) Status(ctx context.Context) error {
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(io.LimitReader(response.Body, client.MaxErrBodySize))
bodyBytes, err := io.ReadAll(response.Body)
var errMessage ErrorMessage
var body string
if err != nil {

View File

@@ -10,17 +10,11 @@ import (
"github.com/pkg/errors"
)
const (
MaxBodySize int64 = 1 << 23 // 8MB default, WithMaxBodySize can override
MaxErrBodySize int64 = 1 << 17 // 128KB
)
// Client is a wrapper object around the HTTP client.
type Client struct {
hc *http.Client
baseURL *url.URL
token string
maxBodySize int64
hc *http.Client
baseURL *url.URL
token string
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
@@ -32,9 +26,8 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
maxBodySize: MaxBodySize,
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
@@ -79,7 +72,7 @@ func (c *Client) NodeURL() string {
// Get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), http.NoBody)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
@@ -96,7 +89,7 @@ func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byt
if r.StatusCode != http.StatusOK {
return nil, Non200Err(r)
}
b, err := io.ReadAll(io.LimitReader(r.Body, c.maxBodySize))
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body")
}
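
Note: the hunks in this file and in the builder client above drop the `io.LimitReader` guard, so response bodies are read unbounded. A minimal self-contained sketch of the removed pattern, reusing the 8MB `MaxBodySize` default from the deleted const block:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// MaxBodySize mirrors the 8MB cap removed above; it bounds how much of a
// response body the client is willing to buffer in memory.
const MaxBodySize int64 = 1 << 23

func readBody(r *http.Response) ([]byte, error) {
	// io.LimitReader reports EOF after MaxBodySize bytes, so io.ReadAll
	// never allocates more than that, even for a misbehaving server.
	return io.ReadAll(io.LimitReader(r.Body, MaxBodySize))
}

func main() {
	resp, err := http.Get("https://example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer func() { _ = resp.Body.Close() }()
	b, err := readBody(resp)
	fmt.Println(len(b), err)
}
```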

View File

@@ -25,16 +25,16 @@ var ErrInvalidNodeVersion = errors.New("invalid node version response")
var ErrConnectionIssue = errors.New("could not connect")
// Non200Err is a function that parses an HTTP response to handle responses that are not 200 with a formatted error.
func Non200Err(r *http.Response) error {
b, err := io.ReadAll(io.LimitReader(r.Body, MaxErrBodySize))
func Non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(b)
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", r.StatusCode, r.Request.URL, body)
switch r.StatusCode {
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case http.StatusNotFound:
return errors.Wrap(ErrNotFound, msg)
default:

View File

@@ -93,7 +93,6 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, client.ErrConnectionIssue.Error()).Error()),
}
return
}
defer func() {

View File

@@ -40,7 +40,7 @@ func TestNewEventStream(t *testing.T) {
func TestEventStream(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, _ *http.Request) {
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, r *http.Request) {
flusher, ok := w.(http.Flusher)
require.Equal(t, true, ok)
for i := 1; i <= 3; i++ {
@@ -79,23 +79,3 @@ func TestEventStream(t *testing.T) {
}
}
}
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)
require.NoError(t, err)
// error will happen when request is made, should be received over events channel
go stream.Subscribe(eventsChannel)
event := <-eventsChannel
if event.EventType != EventConnectionError {
t.Errorf("Expected event type %q, got %q", EventConnectionError, event.EventType)
}
}

View File

@@ -46,10 +46,3 @@ func WithAuthenticationToken(token string) ClientOpt {
c.token = token
}
}
// WithMaxBodySize overrides the default max body size of 8MB.
func WithMaxBodySize(size int64) ClientOpt {
return func(c *Client) {
c.maxBodySize = size
}
}
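
For context, the removed `WithMaxBodySize` follows the functional-option pattern this client package uses throughout. A minimal self-contained sketch of the pattern; field names mirror the diff, the default value is illustrative:

```go
package client

type Client struct {
	maxBodySize int64
}

// ClientOpt mutates the client during construction.
type ClientOpt func(*Client)

// WithMaxBodySize overrides the default max body size.
func WithMaxBodySize(size int64) ClientOpt {
	return func(c *Client) { c.maxBodySize = size }
}

// NewClient applies defaults first, then each option in order.
func NewClient(opts ...ClientOpt) *Client {
	c := &Client{maxBodySize: 1 << 23} // 8MB default
	for _, o := range opts {
		o(c)
	}
	return c
}
```

A caller would then write `NewClient(WithMaxBodySize(1 << 20))` to cap bodies at 1MB.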

View File

@@ -26,7 +26,7 @@ type ListAttestationsResponse struct {
}
type SubmitAttestationsRequest struct {
Data json.RawMessage `json:"data"`
Data []*Attestation `json:"data"`
}
type ListVoluntaryExitsResponse struct {
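
The `json.RawMessage` side of this change defers decoding `data` until the fork version is known, which matters once phase0 and Electra attestations share one endpoint; the concrete `[]*Attestation` side pins the schema at unmarshal time. A self-contained sketch of the deferred-decoding idea, with simplified stand-ins for the attestation types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Attestation struct {
	AggregationBits string `json:"aggregation_bits"`
}

type AttestationElectra struct {
	AggregationBits string `json:"aggregation_bits"`
	CommitteeBits   string `json:"committee_bits"`
}

type SubmitAttestationsRequest struct {
	Data json.RawMessage `json:"data"` // decoded later, once the fork is known
}

func main() {
	raw := []byte(`{"data":[{"aggregation_bits":"0x01","committee_bits":"0x02"}]}`)
	var req SubmitAttestationsRequest
	if err := json.Unmarshal(raw, &req); err != nil {
		panic(err)
	}
	// Only now, knowing the fork version, choose the concrete type.
	var atts []*AttestationElectra
	if err := json.Unmarshal(req.Data, &atts); err != nil {
		panic(err)
	}
	fmt.Println(atts[0].CommitteeBits) // 0x02
}
```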

View File

@@ -1,10 +1,7 @@
package structs
type SidecarsResponse struct {
Version string `json:"version"`
Data []*Sidecar `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Sidecar `json:"data"`
}
type Sidecar struct {

View File

@@ -7,8 +7,7 @@ import (
)
type AggregateAttestationResponse struct {
Version string `json:"version,omitempty"`
Data json.RawMessage `json:"data"`
Data *Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {

View File

@@ -6,11 +6,8 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -72,7 +69,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), arg)
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch {
@@ -171,38 +167,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
return payloadID, nil
}
func firePayloadAttributesEvent(ctx context.Context, f event.SubscriberSender, cfg *fcuConfig) {
pidx, err := helpers.BeaconProposerIndex(ctx, cfg.headState)
if err != nil {
log.WithError(err).
WithField("head_root", cfg.headRoot[:]).
Error("Could not get proposer index for PayloadAttributes event")
return
}
evd := payloadattribute.EventData{
ProposerIndex: pidx,
ProposalSlot: cfg.headState.Slot(),
ParentBlockRoot: cfg.headRoot[:],
Attributer: cfg.attributes,
HeadRoot: cfg.headRoot,
HeadState: cfg.headState,
HeadBlock: cfg.headBlock,
}
if cfg.headBlock != nil && !cfg.headBlock.IsNil() {
headPayload, err := cfg.headBlock.Block().Body().Execution()
if err != nil {
log.WithError(err).Error("Could not get execution payload for head block")
return
}
evd.ParentBlockHash = headPayload.BlockHash()
evd.ParentBlockNumber = headPayload.BlockNumber()
}
f.Send(&feed.Event{
Type: statefeed.PayloadAttributes,
Data: evd,
})
}
// getPayloadHash returns the payload hash given the block root.
// if the block is before bellatrix fork epoch, it returns the zero hash.
func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, error) {
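
`firePayloadAttributesEvent` fans the event out through the state feed, and it is launched with `go` at the forkchoice call site so a slow subscriber cannot stall `ForkchoiceUpdated`. A toy, self-contained stand-in for the feed mechanics (not Prysm's async/event implementation):

```go
package main

import "fmt"

// Feed is a toy stand-in for the state feed: subscribers register
// channels and Send fans an event out to each of them.
type Feed struct{ subs []chan string }

func (f *Feed) Subscribe() chan string {
	ch := make(chan string, 1)
	f.subs = append(f.subs, ch)
	return ch
}

func (f *Feed) Send(ev string) {
	for _, ch := range f.subs {
		ch <- ev
	}
}

func main() {
	f := &Feed{}
	ch := f.Subscribe()
	// Fired in a goroutine, as at the ForkchoiceUpdated call site above,
	// so block processing never blocks on slow event consumers.
	go f.Send("PayloadAttributes")
	fmt.Println(<-ch)
}
```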

View File

@@ -92,12 +92,12 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation is nil",
wantedErr: "attestation can't be nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation is nil",
wantedErr: "attestation's data can't be nil",
},
{
name: "process nil field (a.Target) in attestation",

View File

@@ -7,6 +7,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -74,8 +76,6 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
err := s.cfg.ForkChoiceStore.InsertNode(ctx, cfg.postState, cfg.roblock)
if err != nil {
// Do not use parent context in the event it deadlined
ctx = trace.NewContext(context.Background(), span)
s.rollbackBlock(ctx, cfg.roblock.Root())
return errors.Wrapf(err, "could not insert block %d to fork choice store", cfg.roblock.Block().Slot())
}
@@ -618,6 +618,9 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -645,13 +648,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: nil,
attributes: attribute,
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), fcuArgs)
return
}

View File

@@ -1520,9 +1520,7 @@ func TestStore_NoViableHead_NewPayload(t *testing.T) {
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
// not finish importing and it was never imported to forkchoice). Check
@@ -1716,9 +1714,7 @@ func TestStore_NoViableHead_Liveness(t *testing.T) {
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that forkchoice's head and store's headroot are the previous head (since the invalid block did
@@ -1968,9 +1964,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
require.NoError(t, err)
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
require.NoError(t, err)
rowsb, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, rowsb)
_, err = service.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, wsb, root)
require.ErrorContains(t, "received an INVALID payload from execution engine", err)
// Check that the headroot/state are not in DB and restart the node
@@ -2352,85 +2346,6 @@ func TestRollbackBlock(t *testing.T) {
require.Equal(t, false, hasState)
}
func TestRollbackBlock_ContextDeadline(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 33)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
b, err = util.GenerateFullBlock(postState, keys, util.DefaultBlockGenConfig(), 34)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.Equal(t, true, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err := service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, true, hasState)
// Set deadlined context when processing the block
cancCtx, canc := context.WithCancel(context.Background())
canc()
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
parentRoot = roblock.Block().ParentRoot()
cj := &ethpb.Checkpoint{}
cj.Epoch = 1
cj.Root = parentRoot[:]
require.NoError(t, postState.SetCurrentJustifiedCheckpoint(cj))
require.NoError(t, postState.SetFinalizedCheckpoint(cj))
// Rollback block insertion into db and caches.
require.ErrorContains(t, "context canceled", service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false}))
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err = service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, false, hasState)
}
func fakeCommitments(n int) [][]byte {
f := make([][]byte, n)
for i := range f {

View File

@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -85,12 +84,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := consensus_blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return err
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, blockCopy, blockRoot)
if err != nil {
return err
}
@@ -107,6 +101,10 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
if err := s.savePostStateInfo(ctx, blockRoot, blockCopy, postState); err != nil {
return errors.Wrap(err, "could not save post state info")
}
roblock, err := consensus_blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return err
}
args := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
@@ -190,7 +188,8 @@ func (s *Service) updateCheckpoints(
func (s *Service) validateExecutionAndConsensus(
ctx context.Context,
preState state.BeaconState,
block consensusblocks.ROBlock,
block interfaces.SignedBeaconBlock,
blockRoot [32]byte,
) (state.BeaconState, bool, error) {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
@@ -209,7 +208,7 @@ func (s *Service) validateExecutionAndConsensus(
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, block)
isValidPayload, err = s.validateExecutionOnBlock(ctx, preStateVersion, preStateHeader, block, blockRoot)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
@@ -560,16 +559,16 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
}
// validateExecutionOnBlock notifies the engine of the incoming block execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, block consensusblocks.ROBlock) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, block)
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, signed interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, signed)
if err != nil {
s.cfg.ForkChoiceStore.Lock()
err = s.handleInvalidExecutionError(ctx, err, block.Root(), block.Block().ParentRoot())
err = s.handleInvalidExecutionError(ctx, err, blockRoot, signed.Block().ParentRoot())
s.cfg.ForkChoiceStore.Unlock()
return false, err
}
if block.Block().Version() < version.Capella && isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, ver, header, block); err != nil {
if signed.Version() < version.Capella && isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, ver, header, signed); err != nil {
return isValidPayload, err
}
}
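
The left-hand signatures here thread a `consensusblocks.ROBlock`, which pairs a block with its hash-tree root computed once, instead of passing `(block, root)` separately through the pipeline. A simplified sketch of that pairing; the types are stand-ins for Prysm's consensus-types:

```go
package blocks

// SignedBeaconBlock is a placeholder for the real block interface.
type SignedBeaconBlock struct {
	Slot uint64
}

// ROBlock is a read-only block bundled with its root, so the
// (expensive) HashTreeRoot is computed once and carried along.
type ROBlock struct {
	block *SignedBeaconBlock
	root  [32]byte
}

func NewROBlockWithRoot(b *SignedBeaconBlock, root [32]byte) ROBlock {
	return ROBlock{block: b, root: root}
}

func (b ROBlock) Block() *SignedBeaconBlock { return b.block }
func (b ROBlock) Root() [32]byte            { return b.root }
```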

View File

@@ -187,12 +187,11 @@ func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdr
// return Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// effective_balance=effective_balance,
// slashed=False,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=effective_balance,
// )
func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount uint64) *ethpb.Validator {
effectiveBalance := amount - (amount % params.BeaconConfig().EffectiveBalanceIncrement)
@@ -203,11 +202,10 @@ func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount
return &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
EffectiveBalance: effectiveBalance,
Slashed: false,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: effectiveBalance,
}
}
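
For reference, the `effectiveBalance` line above floors the deposit amount to the nearest `EffectiveBalanceIncrement`. A worked example, assuming the mainnet increment of 1 ETH (1e9 Gwei); the full function also caps at the max effective balance, which this hunk does not show:

```go
package main

import "fmt"

const effectiveBalanceIncrement uint64 = 1_000_000_000 // 1 ETH in Gwei

func main() {
	amount := uint64(32_500_000_000) // a 32.5 ETH deposit
	effective := amount - (amount % effectiveBalanceIncrement)
	fmt.Println(effective) // 32000000000: floored to 32 ETH
}
```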

View File

@@ -448,7 +448,6 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
Target: &ethpb.Checkpoint{
Epoch: primitives.Epoch(i),
},
Source: &ethpb.Checkpoint{},
}
}
@@ -490,7 +489,6 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
Target: &ethpb.Checkpoint{
Root: []byte{},
},
Source: &ethpb.Checkpoint{},
},
Signature: sig.Marshal(),
AggregationBits: list,

View File

@@ -61,9 +61,6 @@ func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
if body == nil {
return false, errors.New("nil block body")
}
if body.Version() >= version.Capella {
return true, nil
}
payload, err := body.Execution()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
@@ -205,24 +202,24 @@ func ValidatePayload(st state.BeaconState, payload interfaces.ExecutionData) err
// block_hash=payload.block_hash,
// transactions_root=hash_tree_root(payload.transactions),
// )
func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) (state.BeaconState, error) {
payload, err := body.Execution()
if err != nil {
return err
return nil, err
}
if err := verifyBlobCommitmentCount(body); err != nil {
return err
return nil, err
}
if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {
return err
return nil, err
}
if err := ValidatePayload(st, payload); err != nil {
return err
return nil, err
}
if err := st.SetLatestExecutionPayloadHeader(payload); err != nil {
return err
return nil, err
}
return nil
return st, nil
}
func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
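
The signature change ripples to every call site: the error-only form mutates the state in place, while the `(state.BeaconState, error)` form makes the caller rebind the state, as the test updates in the next file show. A schematic of the two call styles with placeholder types:

```go
package blocks

type BeaconState struct{}
type BlockBody struct{}

// Error-only style: the caller keeps using its existing st,
// e.g. if err := ProcessPayload(st, body); err != nil { ... }
func processPayloadInPlace(st *BeaconState, body *BlockBody) error {
	return nil
}

// State-returning style: the caller must rebind st on success,
// e.g. st, err = ProcessPayload(st, body)
func processPayload(st *BeaconState, body *BlockBody) (*BeaconState, error) {
	return st, nil
}
```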

View File

@@ -253,8 +253,7 @@ func Test_IsExecutionBlockCapella(t *testing.T) {
require.NoError(t, err)
got, err := blocks.IsExecutionBlock(wrappedBlock.Body())
require.NoError(t, err)
// #14614
require.Equal(t, true, got)
require.Equal(t, false, got)
}
func Test_IsExecutionEnabled(t *testing.T) {
@@ -588,7 +587,8 @@ func Test_ProcessPayload(t *testing.T) {
ExecutionPayload: tt.payload,
})
require.NoError(t, err)
if err := blocks.ProcessPayload(st, body); err != nil {
st, err := blocks.ProcessPayload(st, body)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)
@@ -619,7 +619,8 @@ func Test_ProcessPayloadCapella(t *testing.T) {
ExecutionPayload: payload,
})
require.NoError(t, err)
require.NoError(t, blocks.ProcessPayload(st, body))
_, err = blocks.ProcessPayload(st, body)
require.NoError(t, err)
}
func Test_ProcessPayload_Blinded(t *testing.T) {
@@ -676,7 +677,8 @@ func Test_ProcessPayload_Blinded(t *testing.T) {
ExecutionPayloadHeader: p,
})
require.NoError(t, err)
if err := blocks.ProcessPayload(st, body); err != nil {
st, err := blocks.ProcessPayload(st, body)
if err != nil {
require.Equal(t, tt.err.Error(), err.Error())
} else {
require.Equal(t, tt.err, err)

View File

@@ -193,7 +193,7 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
}
if st.Version() >= version.Electra {
if err := st.DequeuePendingPartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
if err := st.DequeuePartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
return nil, fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}

View File

@@ -386,14 +386,8 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -411,16 +405,9 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
@@ -521,12 +508,11 @@ func AddValidatorToRegistry(beaconState state.BeaconState, pubKey []byte, withdr
// validator = Validator(
// pubkey=pubkey,
// withdrawal_credentials=withdrawal_credentials,
// effective_balance=Gwei(0),
// slashed=False,
// activation_eligibility_epoch=FAR_FUTURE_EPOCH,
// activation_epoch=FAR_FUTURE_EPOCH,
// exit_epoch=FAR_FUTURE_EPOCH,
// withdrawable_epoch=FAR_FUTURE_EPOCH,
// effective_balance=Gwei(0),
// )
//
// # [Modified in Electra:EIP7251]
@@ -538,12 +524,11 @@ func GetValidatorFromDeposit(pubKey []byte, withdrawalCredentials []byte, amount
validator := &ethpb.Validator{
PublicKey: pubKey,
WithdrawalCredentials: withdrawalCredentials,
EffectiveBalance: 0,
Slashed: false,
ActivationEligibilityEpoch: params.BeaconConfig().FarFutureEpoch,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
EffectiveBalance: 0,
}
v, err := state_native.NewValidator(validator)
if err != nil {
@@ -573,7 +558,7 @@ func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState,
return beaconState, nil
}
// processDepositRequest processes the specific deposit request
// processDepositRequest processes the specific deposit receipt
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index
@@ -603,8 +588,8 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
}
if err := beaconState.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: bytesutil.SafeCopyBytes(request.Pubkey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Amount: request.Amount,
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Signature: bytesutil.SafeCopyBytes(request.Signature),
Slot: beaconState.Slot(),
}); err != nil {
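
The left-hand branch here deduplicates deposits by pubkey within a batch: the first valid deposit for a key adds the validator to the registry, and any later deposit for the same key only tops up its balance. A self-contained sketch of that bookkeeping, with simplified types:

```go
package main

import "fmt"

type pendingDeposit struct {
	pubkey [48]byte
	amount uint64
}

// applyDeposits mirrors the pubKeyMap logic above: the first occurrence
// of a pubkey "adds a validator", later ones only increase its balance.
func applyDeposits(deps []pendingDeposit) map[[48]byte]uint64 {
	balances := make(map[[48]byte]uint64, len(deps))
	for _, d := range deps {
		if _, found := balances[d.pubkey]; found {
			balances[d.pubkey] += d.amount // duplicate: top up balance
			continue
		}
		balances[d.pubkey] = d.amount // new: add validator to registry
	}
	return balances
}

func main() {
	var pk [48]byte
	pk[0] = 0xaa
	deps := []pendingDeposit{{pk, 32}, {pk, 32}} // same key twice
	fmt.Println(applyDeposits(deps)[pk])         // 64, one validator
}
```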

View File

@@ -22,40 +22,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
}
}
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
@@ -319,7 +285,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
invalidDep := &eth.PendingDeposit{}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))

View File

@@ -29,6 +29,7 @@ var (
ProcessParticipationFlagUpdates = altair.ProcessParticipationFlagUpdates
ProcessSyncCommitteeUpdates = altair.ProcessSyncCommitteeUpdates
AttestationsDelta = altair.AttestationsDelta
ProcessSyncAggregate = altair.ProcessSyncAggregate
)
// ProcessEpoch describes the per epoch operations that are performed on the beacon state.

View File

@@ -84,11 +84,11 @@ func ProcessOperations(
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit requests")
return nil, errors.Wrap(err, "could not process deposit receipts")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawal requests")
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)

View File

@@ -31,8 +31,6 @@ const (
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
// PayloadAttributes events are fired upon a missed slot or new head.
PayloadAttributes
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -23,8 +23,11 @@ var (
// Access to these nil fields will result in a runtime panic,
// so it is recommended to run these checks as a first line of defense.
func ValidateNilAttestation(attestation ethpb.Att) error {
if attestation == nil || attestation.IsNil() {
return errors.New("attestation is nil")
if attestation == nil {
return errors.New("attestation can't be nil")
}
if attestation.GetData() == nil {
return errors.New("attestation's data can't be nil")
}
if attestation.GetData().Source == nil {
return errors.New("attestation's source can't be nil")

View File

@@ -260,12 +260,12 @@ func TestValidateNilAttestation(t *testing.T) {
{
name: "nil attestation",
attestation: nil,
errString: "attestation is nil",
errString: "attestation can't be nil",
},
{
name: "nil attestation data",
attestation: &ethpb.Attestation{},
errString: "attestation is nil",
errString: "attestation's data can't be nil",
},
{
name: "nil attestation source",

View File

@@ -333,7 +333,8 @@ func ProcessBlockForStateRoot(
return nil, errors.Wrap(err, "could not process withdrawals")
}
}
if err = b.ProcessPayload(state, blk.Body()); err != nil {
state, err = b.ProcessPayload(state, blk.Body())
if err != nil {
return nil, errors.Wrap(err, "could not process execution data")
}
}

View File

@@ -698,45 +698,3 @@ func TestProcessSlotsConditionally(t *testing.T) {
assert.Equal(t, primitives.Slot(6), s.Slot())
})
}
func BenchmarkProcessSlots_Capella(b *testing.B) {
st, _ := util.DeterministicGenesisStateCapella(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}
func BenchmarkProcessSlots_Deneb(b *testing.B) {
st, _ := util.DeterministicGenesisStateDeneb(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}
func BenchmarkProcessSlots_Electra(b *testing.B) {
st, _ := util.DeterministicGenesisStateElectra(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}

View File

@@ -23,10 +23,10 @@ import (
bolt "go.etcd.io/bbolt"
)
// Used to represent errors for inconsistent slot ranges.
// used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root. Return nil if block is not found.
// Block retrieval by root.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()

View File

@@ -18,7 +18,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
bolt "go.etcd.io/bbolt"
@@ -140,6 +139,7 @@ func (s *Store) SaveStates(ctx context.Context, states []state.ReadOnlyBeaconSta
}
multipleEncs[i] = stateBytes
}
log.Infof("Total serialization: %s", time.Since(startTime).String())
if err := s.db.Update(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateBucket)
@@ -604,14 +604,14 @@ func (s *Store) unmarshalState(_ context.Context, enc []byte, validatorEntries [
// marshal versioned state from struct type down to bytes.
func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, error) {
switch st.Version() {
case version.Phase0:
switch st.ToProtoUnsafe().(type) {
case *ethpb.BeaconState:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconState)
if !ok {
return nil, errors.New("non valid inner state")
}
return encode(ctx, rState)
case version.Altair:
case *ethpb.BeaconStateAltair:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateAltair)
if !ok {
return nil, errors.New("non valid inner state")
@@ -624,7 +624,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(altairKey, rawObj...)), nil
case version.Bellatrix:
case *ethpb.BeaconStateBellatrix:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateBellatrix)
if !ok {
return nil, errors.New("non valid inner state")
@@ -637,7 +637,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(bellatrixKey, rawObj...)), nil
case version.Capella:
case *ethpb.BeaconStateCapella:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateCapella)
if !ok {
return nil, errors.New("non valid inner state")
@@ -650,7 +650,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(capellaKey, rawObj...)), nil
case version.Deneb:
case *ethpb.BeaconStateDeneb:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateDeneb)
if !ok {
return nil, errors.New("non valid inner state")
@@ -663,7 +663,7 @@ func marshalState(ctx context.Context, st state.ReadOnlyBeaconState) ([]byte, er
return nil, err
}
return snappy.Encode(nil, append(denebKey, rawObj...)), nil
case version.Electra:
case *ethpb.BeaconStateElectra:
rState, ok := st.ToProtoUnsafe().(*ethpb.BeaconStateElectra)
if !ok {
return nil, errors.New("non valid inner state")
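
Both sides of this hunk end in the same encoding step: marshal the state, prefix it with a per-fork key, and snappy-compress the result so the reader can sniff the fork before decoding. A minimal sketch of that framing, assuming `github.com/golang/snappy` as the compression library (an assumption; the hunk only shows `snappy.Encode`):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/golang/snappy" // assumed import path for the snappy calls above
)

var altairKey = []byte("altair")

// encodeWithKey prefixes the serialized state with its fork key, then
// compresses the whole frame, mirroring the append+Encode calls above.
func encodeWithKey(key, raw []byte) []byte {
	return snappy.Encode(nil, append(key, raw...))
}

// decodeFork recovers the fork key and payload.
func decodeFork(enc []byte) (string, []byte, error) {
	dec, err := snappy.Decode(nil, enc)
	if err != nil {
		return "", nil, err
	}
	if bytes.HasPrefix(dec, altairKey) {
		return "altair", dec[len(altairKey):], nil
	}
	return "unknown", dec, nil
}

func main() {
	enc := encodeWithKey(altairKey, []byte{0x01, 0x02})
	fork, payload, err := decodeFork(enc)
	fmt.Println(fork, payload, err) // altair [1 2] <nil>
}
```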

View File

@@ -688,7 +688,7 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
// Encode attestation record to bytes.
// The output encoded attestation record consists of the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil || att.IndexedAttestation.IsNil() {
if att == nil || att.IndexedAttestation == nil {
return []byte{}, errors.New("nil proposal record")
}

View File

@@ -623,7 +623,13 @@ func (s *Service) ReconstructBlobSidecars(ctx context.Context, block interfaces.
continue
}
// Verify the sidecar KZG proof
v := s.blobVerifier(roBlob, verification.ELMemPoolRequirements)
if err := v.SidecarKzgProofVerified(); err != nil {
log.WithError(err).WithField("index", i).Error("failed to verify KZG proof for sidecar")
continue
}
verifiedBlob, err := v.VerifiedROBlob()
if err != nil {
log.WithError(err).WithField("index", i).Error("failed to verify RO blob")

View File

@@ -53,7 +53,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// Only reorg blocks that arrive late
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
log.WithError(err).Error("Could not check if block arrived early")
log.WithError(err).Error("could not check if block arrived early")
return
}
if early {

View File

@@ -192,13 +192,20 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen)
pa := peers.NewAssigner(beacon.fetchP2P().Peers(), beacon.forkChoicer)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
)
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
bf, err := backfill.NewService(ctx, bfs, beacon.BlobStorage, beacon.clockWaiter, beacon.fetchP2P(), pa, beacon.BackfillOpts...)
if err != nil {
return nil, errors.Wrap(err, "error initializing backfill service")
}
if err := registerServices(cliCtx, beacon, synchronizer, bf, bfs); err != nil {
return nil, errors.Wrap(err, "could not register services")
}
@@ -285,6 +292,11 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return nil, errors.Wrap(err, "could not start slashing DB")
}
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not register P2P service")
}
bfs, err := backfill.NewUpdater(ctx, beacon.db)
if err != nil {
return nil, errors.Wrap(err, "could not create backfill updater")
@@ -303,15 +315,9 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return bfs, nil
}
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bfs *backfill.Store) error {
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return errors.Wrap(err, "could not register P2P service")
}
log.Debugln("Registering Backfill Service")
if err := beacon.RegisterBackfillService(cliCtx, bfs); err != nil {
return errors.Wrap(err, "could not register Back Fill service")
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bf *backfill.Service, bfs *backfill.Store) error {
if err := beacon.services.RegisterService(bf); err != nil {
return errors.Wrap(err, "could not register backfill service")
}
log.Debugln("Registering POW Chain Service")
@@ -1130,16 +1136,6 @@ func (b *BeaconNode) registerBuilderService(cliCtx *cli.Context) error {
return b.services.RegisterService(svc)
}
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}
return b.services.RegisterService(bf)
}
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
for _, name := range flag.Names() {
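
Both versions of `registerServices` funnel into the same registry pattern: construct the service, register it, and let the node start everything in order. A toy sketch of the shape (not Prysm's runtime.ServiceRegistry, just an illustration):

```go
package main

import "fmt"

// Service is the minimal lifecycle contract a registry needs.
type Service interface {
	Start()
	Stop() error
}

type Registry struct{ services []Service }

func (r *Registry) RegisterService(s Service) error {
	r.services = append(r.services, s)
	return nil
}

func (r *Registry) StartAll() {
	for _, s := range r.services {
		s.Start() // started in registration order
	}
}

type backfillService struct{}

func (backfillService) Start()      { fmt.Println("backfill started") }
func (backfillService) Stop() error { return nil }

func main() {
	r := &Registry{}
	_ = r.RegisterService(backfillService{})
	r.StartAll()
}
```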

View File

@@ -49,12 +49,12 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
{
name: "nil attestation",
att: nil,
wantErrString: "attestation is nil",
wantErrString: "attestation can't be nil",
},
{
name: "nil attestation data",
att: &ethpb.Attestation{},
wantErrString: "attestation is nil",
wantErrString: "attestation's data can't be nil",
},
{
name: "not aggregated",
@@ -206,7 +206,7 @@ func TestKV_Aggregated_AggregatedAttestations(t *testing.T) {
func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
t.Run("nil attestation", func(t *testing.T) {
cache := NewAttCaches()
assert.ErrorContains(t, "attestation is nil", cache.DeleteAggregatedAttestation(nil))
assert.ErrorContains(t, "attestation can't be nil", cache.DeleteAggregatedAttestation(nil))
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10101}, Data: &ethpb.AttestationData{Slot: 2}})
assert.NoError(t, cache.DeleteAggregatedAttestation(att))
})
@@ -288,7 +288,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "nil attestation",
input: nil,
want: false,
err: errors.New("is nil"),
err: errors.New("can't be nil"),
},
{
name: "nil attestation data",
@@ -296,7 +296,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
AggregationBits: bitfield.Bitlist{0b1111},
},
want: false,
err: errors.New("is nil"),
err: errors.New("can't be nil"),
},
{
name: "empty cache aggregated",

View File

@@ -8,7 +8,7 @@ import (
// SaveBlockAttestation saves a block attestation in cache.
func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}
@@ -53,9 +53,10 @@ func (c *AttCaches) BlockAttestations() []ethpb.Att {
// DeleteBlockAttestation deletes a block attestation in cache.
func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")

View File

@@ -8,7 +8,7 @@ import (
// SaveForkchoiceAttestation saves a forkchoice attestation in cache.
func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}
@@ -50,7 +50,7 @@ func (c *AttCaches) ForkchoiceAttestations() []ethpb.Att {
// DeleteForkchoiceAttestation deletes a forkchoice attestation in cache.
func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}

View File

@@ -14,7 +14,7 @@ import (
// SaveUnaggregatedAttestation saves an unaggregated attestation in cache.
func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}
if helpers.IsAggregated(att) {
@@ -130,10 +130,9 @@ func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
// DeleteUnaggregatedAttestation deletes the unaggregated attestations in cache.
func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
if att == nil || att.IsNil() {
if att == nil {
return nil
}
if helpers.IsAggregated(att) {
return errors.New("attestation is aggregated")
}
@@ -162,7 +161,7 @@ func (c *AttCaches) DeleteSeenUnaggregatedAttestations() (int, error) {
count := 0
for r, att := range c.unAggregatedAtt {
if att == nil || att.IsNil() || helpers.IsAggregated(att) {
if att == nil || helpers.IsAggregated(att) {
continue
}
if seen, err := c.hasSeenBit(att); err == nil && seen {
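
Why the `att.IsNil()` guard matters: a nil `*ethpb.Attestation` stored in the `ethpb.Att` interface is not `== nil`, so `att == nil` alone misses it. A self-contained illustration of the typed-nil pitfall, with simplified stand-in types:

```go
package main

import "fmt"

type Att interface{ IsNil() bool }

type Attestation struct{ Data *int }

func (a *Attestation) IsNil() bool { return a == nil || a.Data == nil }

func main() {
	var concrete *Attestation // typed nil
	var att Att = concrete
	fmt.Println(att == nil)  // false: the interface holds a (nil) *Attestation
	fmt.Println(att.IsNil()) // true: the method can still detect it
}
```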

View File

@@ -7,7 +7,6 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/operations/attestations:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],

View File

@@ -3,17 +3,13 @@ package mock
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var _ attestations.Pool = &PoolMock{}
// PoolMock --
type PoolMock struct {
AggregatedAtts []ethpb.Att
UnaggregatedAtts []ethpb.Att
AggregatedAtts []*ethpb.Attestation
}
// AggregateUnaggregatedAttestations --
@@ -27,18 +23,18 @@ func (*PoolMock) AggregateUnaggregatedAttestationsBySlotIndex(_ context.Context,
}
// SaveAggregatedAttestation --
func (*PoolMock) SaveAggregatedAttestation(_ ethpb.Att) error {
func (*PoolMock) SaveAggregatedAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// SaveAggregatedAttestations --
func (m *PoolMock) SaveAggregatedAttestations(atts []ethpb.Att) error {
func (m *PoolMock) SaveAggregatedAttestations(atts []*ethpb.Attestation) error {
m.AggregatedAtts = append(m.AggregatedAtts, atts...)
return nil
}
// AggregatedAttestations --
func (m *PoolMock) AggregatedAttestations() []ethpb.Att {
func (m *PoolMock) AggregatedAttestations() []*ethpb.Attestation {
return m.AggregatedAtts
}
@@ -47,18 +43,13 @@ func (*PoolMock) AggregatedAttestationsBySlotIndex(_ context.Context, _ primitiv
panic("implement me")
}
// AggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) AggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteAggregatedAttestation --
func (*PoolMock) DeleteAggregatedAttestation(_ ethpb.Att) error {
func (*PoolMock) DeleteAggregatedAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// HasAggregatedAttestation --
func (*PoolMock) HasAggregatedAttestation(_ ethpb.Att) (bool, error) {
func (*PoolMock) HasAggregatedAttestation(_ *ethpb.Attestation) (bool, error) {
panic("implement me")
}
@@ -68,19 +59,18 @@ func (*PoolMock) AggregatedAttestationCount() int {
}
// SaveUnaggregatedAttestation --
func (*PoolMock) SaveUnaggregatedAttestation(_ ethpb.Att) error {
func (*PoolMock) SaveUnaggregatedAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// SaveUnaggregatedAttestations --
func (m *PoolMock) SaveUnaggregatedAttestations(atts []ethpb.Att) error {
m.UnaggregatedAtts = append(m.UnaggregatedAtts, atts...)
return nil
func (*PoolMock) SaveUnaggregatedAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
}
// UnaggregatedAttestations --
func (m *PoolMock) UnaggregatedAttestations() ([]ethpb.Att, error) {
return m.UnaggregatedAtts, nil
func (*PoolMock) UnaggregatedAttestations() ([]*ethpb.Attestation, error) {
panic("implement me")
}
// UnaggregatedAttestationsBySlotIndex --
@@ -88,13 +78,8 @@ func (*PoolMock) UnaggregatedAttestationsBySlotIndex(_ context.Context, _ primit
panic("implement me")
}
// UnaggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) UnaggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteUnaggregatedAttestation --
func (*PoolMock) DeleteUnaggregatedAttestation(_ ethpb.Att) error {
func (*PoolMock) DeleteUnaggregatedAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
@@ -109,42 +94,42 @@ func (*PoolMock) UnaggregatedAttestationCount() int {
}
// SaveBlockAttestation --
func (*PoolMock) SaveBlockAttestation(_ ethpb.Att) error {
func (*PoolMock) SaveBlockAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// SaveBlockAttestations --
func (*PoolMock) SaveBlockAttestations(_ []ethpb.Att) error {
func (*PoolMock) SaveBlockAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
}
// BlockAttestations --
func (*PoolMock) BlockAttestations() []ethpb.Att {
func (*PoolMock) BlockAttestations() []*ethpb.Attestation {
panic("implement me")
}
// DeleteBlockAttestation --
func (*PoolMock) DeleteBlockAttestation(_ ethpb.Att) error {
func (*PoolMock) DeleteBlockAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// SaveForkchoiceAttestation --
func (*PoolMock) SaveForkchoiceAttestation(_ ethpb.Att) error {
func (*PoolMock) SaveForkchoiceAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}
// SaveForkchoiceAttestations --
func (*PoolMock) SaveForkchoiceAttestations(_ []ethpb.Att) error {
func (*PoolMock) SaveForkchoiceAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
}
// ForkchoiceAttestations --
func (*PoolMock) ForkchoiceAttestations() []ethpb.Att {
func (*PoolMock) ForkchoiceAttestations() []*ethpb.Attestation {
panic("implement me")
}
// DeleteForkchoiceAttestation --
func (*PoolMock) DeleteForkchoiceAttestation(_ ethpb.Att) error {
func (*PoolMock) DeleteForkchoiceAttestation(_ *ethpb.Attestation) error {
panic("implement me")
}

View File

@@ -17,6 +17,7 @@ go_library(
"handshake.go",
"info.go",
"interfaces.go",
"iterator.go",
"log.go",
"message_id.go",
"monitoring.go",
@@ -74,8 +75,6 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
@@ -163,10 +162,12 @@ go_test(
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/testing:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",

View File

@@ -225,11 +225,11 @@ func TestService_BroadcastAttestationWithDiscoveryAttempts(t *testing.T) {
require.NoError(t, err)
defer bootListener.Close()
// Use smaller batch size for testing.
currentBatchSize := batchSize
batchSize = 2
// Use shorter period for testing.
currentPeriod := pollingPeriod
pollingPeriod = 1 * time.Second
defer func() {
batchSize = currentBatchSize
pollingPeriod = currentPeriod
}()
bootNode := bootListener.Self()
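
The lines above use the usual save/override/defer-restore dance for package globals in tests. In isolation (package and variable names are illustrative, in a `_test.go` file):

```go
package p2p

import (
	"testing"
	"time"
)

var (
	batchSize     = 50
	pollingPeriod = 6 * time.Second
)

func TestWithSmallBatch(t *testing.T) {
	// Save the defaults, shrink them for the test, and restore them in a
	// defer so later tests in the package see the original values.
	currentBatchSize := batchSize
	currentPeriod := pollingPeriod
	batchSize = 2
	pollingPeriod = time.Second
	defer func() {
		batchSize = currentBatchSize
		pollingPeriod = currentPeriod
	}()
	// ... exercise code that reads batchSize and pollingPeriod ...
}
```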

View File

@@ -33,7 +33,7 @@ func (*Service) InterceptPeerDial(_ peer.ID) (allow bool) {
// multiaddr for the given peer.
func (s *Service) InterceptAddrDial(pid peer.ID, m multiaddr.Multiaddr) (allow bool) {
// Disallow bad peers from dialing in.
if s.peers.IsBad(pid) != nil {
if s.peers.IsBad(pid) {
return false
}
return filterConnections(s.addrFilter, m)
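
The IsBad change visible here (error on one side, boolean on the other) shows the trade-off: returning an error lets callers surface why a peer was rejected, while gating logic still reduces to a nil check. A hedged sketch of the error-returning style, with a hypothetical isBad helper standing in for s.peers.IsBad:

package main

import (
	"errors"
	"fmt"
)

// isBad is a hypothetical stand-in for peers.IsBad in its
// error-returning form: nil means the peer is acceptable, a non-nil
// error carries the reason it is considered bad.
func isBad(pid string) error {
	if pid == "peer-bad" {
		return errors.New("exceeded bad responses threshold")
	}
	return nil
}

// interceptAddrDial mirrors the gater pattern above: reject the dial
// whenever IsBad reports any error.
func interceptAddrDial(pid string) (allow bool) {
	if err := isBad(pid); err != nil {
		fmt.Printf("rejecting dial to %s: %v\n", pid, err)
		return false
	}
	return true
}

func main() {
	fmt.Println(interceptAddrDial("peer-good")) // true
	fmt.Println(interceptAddrDial("peer-bad"))  // false
}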

View File

@@ -50,7 +50,7 @@ func TestPeer_AtMaxLimit(t *testing.T) {
}()
for i := 0; i < highWatermarkBuffer; i++ {
addPeer(t, s.peers, peers.Connected, false)
addPeer(t, s.peers, peers.PeerConnected, false)
}
// create alternate host
@@ -159,7 +159,7 @@ func TestService_RejectInboundPeersBeyondLimit(t *testing.T) {
inboundLimit += 1
// Add in up to inbound peer limit.
for i := 0; i < int(inboundLimit); i++ {
addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
valid = s.InterceptAccept(&maEndpoints{raddr: multiAddress})
if valid {

View File

@@ -22,7 +22,6 @@ import (
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
type ListenerRebooter interface {
@@ -48,12 +47,10 @@ const (
udp6
)
const quickProtocolEnrKey = "quic"
type quicProtocol uint16
// quicProtocol is the "quic" key, which holds the QUIC port of the node.
func (quicProtocol) ENRKey() string { return quickProtocolEnrKey }
func (quicProtocol) ENRKey() string { return "quic" }
type listenerWrapper struct {
mu sync.RWMutex
@@ -136,129 +133,68 @@ func (l *listenerWrapper) RebootListener() error {
return nil
}
// RefreshPersistentSubnets checks that we are tracking our local persistent subnets for a variety of gossip topics.
// This routine verifies and updates our attestation and sync committee subnets if they have been rotated.
func (s *Service) RefreshPersistentSubnets() {
// Return early if discv5 service isn't running.
// RefreshENR uses the current epoch to refresh the ENR entry for our node
// with the committee IDs tracked for that epoch, allowing our node to be
// dynamically discoverable by others given those committee IDs.
func (s *Service) RefreshENR() {
// return early if discv5 isn't running
if s.dv5Listener == nil || !s.isInitialized() {
return
}
// Get the current epoch.
currentSlot := slots.CurrentSlot(uint64(s.genesisTime.Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
// Get our node ID.
nodeID := s.dv5Listener.LocalNode().ID()
// Get our node record.
record := s.dv5Listener.Self().Record()
// Get the version of our metadata.
metadataVersion := s.Metadata().Version()
// Initialize persistent subnets.
if err := initializePersistentSubnets(nodeID, currentEpoch); err != nil {
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
if err := initializePersistentSubnets(s.dv5Listener.LocalNode().ID(), currEpoch); err != nil {
log.WithError(err).Error("Could not initialize persistent subnets")
return
}
// Get the current attestation subnet bitfield.
bitV := bitfield.NewBitvector64()
attestationCommittees := cache.SubnetIDs.GetAllSubnets()
for _, idx := range attestationCommittees {
committees := cache.SubnetIDs.GetAllSubnets()
for _, idx := range committees {
bitV.SetBitAt(idx, true)
}
// Get the attestation subnet bitfield we store in our record.
inRecordBitV, err := attBitvector(record)
currentBitV, err := attBitvector(s.dv5Listener.Self().Record())
if err != nil {
log.WithError(err).Error("Could not retrieve att bitfield")
return
}
// Get the attestation subnet bitfield in our metadata.
inMetadataBitV := s.Metadata().AttnetsBitfield()
// Is our attestation bitvector record up to date?
isBitVUpToDate := bytes.Equal(bitV, inRecordBitV) && bytes.Equal(bitV, inMetadataBitV)
// Compare current epoch with Altair fork epoch
// Compare current epoch with our fork epochs
altairForkEpoch := params.BeaconConfig().AltairForkEpoch
if currentEpoch < altairForkEpoch {
switch {
case currEpoch < altairForkEpoch:
// Phase 0 behaviour.
if isBitVUpToDate {
// Return early if bitfield hasn't changed.
if bytes.Equal(bitV, currentBitV) {
// return early if bitfield hasn't changed
return
}
// Some data changed. Update the record and the metadata.
s.updateSubnetRecordWithMetadata(bitV)
// Ping all peers.
s.pingPeersAndLogEnr()
return
default:
// Retrieve sync subnets from application level
// cache.
bitS := bitfield.Bitvector4{byte(0x00)}
committees = cache.SyncSubnetIDs.GetAllSubnets(currEpoch)
for _, idx := range committees {
bitS.SetBitAt(idx, true)
}
currentBitS, err := syncBitvector(s.dv5Listener.Self().Record())
if err != nil {
log.WithError(err).Error("Could not retrieve sync bitfield")
return
}
if bytes.Equal(bitV, currentBitV) && bytes.Equal(bitS, currentBitS) &&
s.Metadata().Version() == version.Altair {
// return early if bitfields haven't changed
return
}
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
}
// Get the current sync subnet bitfield.
bitS := bitfield.Bitvector4{byte(0x00)}
syncCommittees := cache.SyncSubnetIDs.GetAllSubnets(currentEpoch)
for _, idx := range syncCommittees {
bitS.SetBitAt(idx, true)
}
// Get the sync subnet bitfield we store in our record.
inRecordBitS, err := syncBitvector(record)
if err != nil {
log.WithError(err).Error("Could not retrieve sync bitfield")
return
}
// Get the sync subnet bitfield in our metadata.
currentBitSInMetadata := s.Metadata().SyncnetsBitfield()
// Is our sync bitvector record up to date?
isBitSUpToDate := bytes.Equal(bitS, inRecordBitS) && bytes.Equal(bitS, currentBitSInMetadata)
if metadataVersion == version.Altair && isBitVUpToDate && isBitSUpToDate {
// Nothing to do, return early.
return
}
// Some data have changed, update our record and metadata.
s.updateSubnetRecordWithMetadataV2(bitV, bitS)
// Ping all peers to inform them of new metadata
s.pingPeersAndLogEnr()
// ping all peers to inform them of new metadata
s.pingPeers()
}
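
Both versions of the refresh routine reduce to the same core check: fold the tracked subnet indices into a bitvector and compare it byte-for-byte against the copies stored in the ENR record and in the node's metadata. A small sketch of that comparison, reusing the go-bitfield helpers this file already imports (the subnet values are arbitrary examples):

package main

import (
	"bytes"
	"fmt"

	"github.com/prysmaticlabs/go-bitfield"
)

// subnetsToBitvector mirrors the loops above: fold subnet indices into
// a 64-bit attnets bitvector.
func subnetsToBitvector(subnets []uint64) bitfield.Bitvector64 {
	bv := bitfield.NewBitvector64()
	for _, idx := range subnets {
		bv.SetBitAt(idx, true)
	}
	return bv
}

func main() {
	tracked := subnetsToBitvector([]uint64{40, 41})
	inRecord := subnetsToBitvector([]uint64{40, 41})
	inMetadata := subnetsToBitvector([]uint64{40})

	// A refresh (and the follow-up peer pings) is only needed when
	// either stored copy drifts from what we currently track.
	upToDate := bytes.Equal(tracked, inRecord) && bytes.Equal(tracked, inMetadata)
	fmt.Println("up to date:", upToDate) // false: the metadata copy is stale
}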
// listenForNewNodes watches for new nodes in the network and adds them to the peerstore.
func (s *Service) listenForNewNodes() {
const (
minLogInterval = 1 * time.Minute
thresholdLimit = 5
)
peersSummary := func(threshold uint) (uint, uint) {
// Retrieve how many active peers we have.
activePeers := s.Peers().Active()
activePeerCount := uint(len(activePeers))
// Compute how many peers we are missing to reach the threshold.
if activePeerCount >= threshold {
return activePeerCount, 0
}
missingPeerCount := threshold - activePeerCount
return activePeerCount, missingPeerCount
}
var lastLogTime time.Time
iterator := s.dv5Listener.RandomNodes()
iterator := filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
defer iterator.Close()
connectivityTicker := time.NewTicker(1 * time.Minute)
thresholdCount := 0
@@ -267,31 +203,25 @@ func (s *Service) listenForNewNodes() {
select {
case <-s.ctx.Done():
return
case <-connectivityTicker.C:
// Skip the connectivity check if not enabled.
if !features.Get().EnableDiscoveryReboot {
continue
}
if !s.isBelowOutboundPeerThreshold() {
// Reset counter if we are beyond the threshold
thresholdCount = 0
continue
}
thresholdCount++
// Reboot listener if connectivity drops
if thresholdCount > thresholdLimit {
outBoundConnectedCount := len(s.peers.OutboundConnected())
log.WithField("outboundConnectionCount", outBoundConnectedCount).Warn("Rebooting discovery listener, reached threshold.")
if thresholdCount > 5 {
log.WithField("outboundConnectionCount", len(s.peers.OutboundConnected())).Warn("Rebooting discovery listener, reached threshold.")
if err := s.dv5Listener.RebootListener(); err != nil {
log.WithError(err).Error("Could not reboot listener")
continue
}
iterator = s.dv5Listener.RandomNodes()
iterator = filterNodes(s.ctx, s.dv5Listener.RandomNodes(), s.filterPeer)
thresholdCount = 0
}
default:
@@ -302,35 +232,17 @@ func (s *Service) listenForNewNodes() {
time.Sleep(pollingPeriod)
continue
}
// Compute the number of new peers we want to dial.
activePeerCount, missingPeerCount := peersSummary(s.cfg.MaxPeers)
fields := logrus.Fields{
"currentPeerCount": activePeerCount,
"targetPeerCount": s.cfg.MaxPeers,
}
if missingPeerCount == 0 {
wantedCount := s.wantedPeerDials()
if wantedCount == 0 {
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
continue
}
if time.Since(lastLogTime) > minLogInterval {
lastLogTime = time.Now()
log.WithFields(fields).Debug("Searching for new active peers")
}
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
maxConcurrentDials := uint(flags.Get().MaxConcurrentDials)
missingPeerCount = min(missingPeerCount, maxConcurrentDials)
wantedCount = min(wantedCount, flags.Get().MaxConcurrentDials)
}
// Search for new peers.
wantedNodes := searchForPeers(iterator, batchSize, missingPeerCount, s.filterPeer)
wantedNodes := enode.ReadNodes(iterator, wantedCount)
wg := new(sync.WaitGroup)
for i := 0; i < len(wantedNodes); i++ {
node := wantedNodes[i]
@@ -540,14 +452,12 @@ func (s *Service) filterPeer(node *enode.Node) bool {
}
// Ignore bad nodes.
if s.peers.IsBad(peerData.ID) != nil {
if s.peers.IsBad(peerData.ID) {
return false
}
// Ignore nodes that are already active.
if s.peers.IsActive(peerData.ID) {
// Constantly update enr for known peers
s.peers.UpdateENR(node.Record(), peerData.ID)
return false
}
@@ -616,6 +526,17 @@ func (s *Service) isBelowOutboundPeerThreshold() bool {
return outBoundCount < outBoundThreshold
}
func (s *Service) wantedPeerDials() int {
maxPeers := int(s.cfg.MaxPeers)
activePeers := len(s.Peers().Active())
wantedCount := 0
if maxPeers > activePeers {
wantedCount = maxPeers - activePeers
}
return wantedCount
}
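
peersSummary and wantedPeerDials are two formulations of the same arithmetic: how many dials remain before the target peer count is reached, optionally capped by the max-concurrent-dials limit. A combined sketch, assuming a cap of 0 means uncapped:

package main

import "fmt"

// missingPeers folds peersSummary and wantedPeerDials into one helper:
// how many new dials we still want, given the active count, the target,
// and an optional concurrency cap (0 means uncapped).
func missingPeers(active, target, maxDials uint) uint {
	if active >= target {
		return 0 // at or above the limit: skip the search entirely
	}
	missing := target - active
	if maxDials > 0 && missing > maxDials {
		missing = maxDials
	}
	return missing
}

func main() {
	fmt.Println(missingPeers(70, 70, 0))  // 0
	fmt.Println(missingPeers(40, 70, 0))  // 30
	fmt.Println(missingPeers(40, 70, 10)) // 10, capped by the dial limit
}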
// PeersFromStringAddrs converts peer raw ENRs into multiaddrs for p2p.
func PeersFromStringAddrs(addrs []string) ([]ma.Multiaddr, error) {
var allAddrs []ma.Multiaddr

View File

@@ -16,8 +16,6 @@ import (
"github.com/ethereum/go-ethereum/p2p/discover"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
@@ -32,12 +30,13 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
leakybucket "github.com/prysmaticlabs/prysm/v5/container/leaky-bucket"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
prysmNetwork "github.com/prysmaticlabs/prysm/v5/network"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -132,10 +131,6 @@ func TestStartDiscV5_DiscoverAllPeers(t *testing.T) {
}
func TestCreateLocalNode(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.Eip7594ForkEpoch = 1
params.OverrideBeaconConfig(cfg)
testCases := []struct {
name string
cfg *Config
@@ -383,14 +378,14 @@ func TestInboundPeerLimit(t *testing.T) {
}
for i := 0; i < 30; i++ {
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(false), "not at limit for outbound peers")
require.Equal(t, false, s.isPeerAtLimit(true), "at limit for inbound peers")
for i := 0; i < highWatermarkBuffer; i++ {
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), false)
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), false)
}
require.Equal(t, true, s.isPeerAtLimit(true), "not at limit for inbound peers")
@@ -409,13 +404,13 @@ func TestOutboundPeerThreshold(t *testing.T) {
}
for i := 0; i < 2; i++ {
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), true)
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, true, s.isBelowOutboundPeerThreshold(), "not at outbound peer threshold")
for i := 0; i < 3; i++ {
_ = addPeer(t, s.peers, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED), true)
_ = addPeer(t, s.peers, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED), true)
}
require.Equal(t, false, s.isBelowOutboundPeerThreshold(), "still at outbound peer threshold")
@@ -482,7 +477,7 @@ func TestCorrectUDPVersion(t *testing.T) {
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState, outbound bool) peer.ID {
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState, outbound bool) peer.ID {
// Set up some peers with different states
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
@@ -504,270 +499,192 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState, outb
return id
}
func createAndConnectPeer(t *testing.T, p2pService *testp2p.TestP2P, offset int) {
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(offset + i)
}
privateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
// Create the peer.
peer := testp2p.NewTestP2P(t, libp2p.Identity(privateKey))
// Add the peer and connect it.
p2pService.Peers().Add(&enr.Record{}, peer.PeerID(), nil, network.DirOutbound)
p2pService.Peers().SetConnectionState(peer.PeerID(), peers.Connected)
p2pService.Connect(peer)
}
// Define the ping count.
var actualPingCount int
type check struct {
pingCount int
metadataSequenceNumber uint64
attestationSubnets []uint64
syncSubnets []uint64
custodySubnetCount *uint64
}
func checkPingCountCacheMetadataRecord(
t *testing.T,
service *Service,
expected check,
) {
// Check the ping count.
require.Equal(t, expected.pingCount, actualPingCount)
// Check the attestation subnets in the cache.
actualAttestationSubnets := cache.SubnetIDs.GetAllSubnets()
require.DeepSSZEqual(t, expected.attestationSubnets, actualAttestationSubnets)
// Check the metadata sequence number.
actualMetadataSequenceNumber := service.metaData.SequenceNumber()
require.Equal(t, expected.metadataSequenceNumber, actualMetadataSequenceNumber)
// Compute expected attestation subnets bits.
expectedBitV := bitfield.NewBitvector64()
exists := false
for _, idx := range expected.attestationSubnets {
exists = true
expectedBitV.SetBitAt(idx, true)
}
// Check attnets in ENR.
var actualBitVENR bitfield.Bitvector64
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(attSubnetEnrKey, &actualBitVENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitV, actualBitVENR)
// Check attnets in metadata.
if !exists {
expectedBitV = nil
}
actualBitVMetadata := service.metaData.AttnetsBitfield()
require.DeepSSZEqual(t, expectedBitV, actualBitVMetadata)
if expected.syncSubnets != nil {
// Compute expected sync subnets bits.
expectedBitS := bitfield.NewBitvector4()
exists = false
for _, idx := range expected.syncSubnets {
exists = true
expectedBitS.SetBitAt(idx, true)
}
// Check syncnets in ENR.
var actualBitSENR bitfield.Bitvector4
err := service.dv5Listener.LocalNode().Node().Record().Load(enr.WithEntry(syncCommsSubnetEnrKey, &actualBitSENR))
require.NoError(t, err)
require.DeepSSZEqual(t, expectedBitS, actualBitSENR)
// Check syncnets in metadata.
if !exists {
expectedBitS = nil
}
actualBitSMetadata := service.metaData.SyncnetsBitfield()
require.DeepSSZEqual(t, expectedBitS, actualBitSMetadata)
}
}
func TestRefreshPersistentSubnets(t *testing.T) {
func TestRefreshENR_ForkBoundaries(t *testing.T) {
params.SetupTestConfigCleanup(t)
// Clean up caches after usage.
defer cache.SubnetIDs.EmptyAllCaches()
defer cache.SyncSubnetIDs.EmptyAllCaches()
const (
altairForkEpoch = 5
eip7594ForkEpoch = 10
)
// Set up epochs.
defaultCfg := params.BeaconConfig()
cfg := defaultCfg.Copy()
cfg.AltairForkEpoch = altairForkEpoch
cfg.Eip7594ForkEpoch = eip7594ForkEpoch
params.OverrideBeaconConfig(cfg)
// Compute the number of seconds per epoch.
secondsPerSlot := params.BeaconConfig().SecondsPerSlot
slotsPerEpoch := params.BeaconConfig().SlotsPerEpoch
secondsPerEpoch := secondsPerSlot * uint64(slotsPerEpoch)
testCases := []struct {
name string
epochSinceGenesis uint64
checks []check
tests := []struct {
name string
svcBuilder func(t *testing.T) *Service
postValidation func(t *testing.T, s *Service)
}{
{
name: "Phase0",
epochSinceGenesis: 0,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
},
name: "metadata no change",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
return s
},
postValidation: func(t *testing.T, s *Service) {
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
subs, err := computeSubscribedSubnets(s.dv5Listener.LocalNode().ID(), currEpoch)
assert.NoError(t, err)
bitV := bitfield.NewBitvector64()
for _, idx := range subs {
bitV.SetBitAt(idx, true)
}
assert.DeepEqual(t, bitV, s.metaData.AttnetsBitfield())
},
},
{
name: "Altair",
epochSinceGenesis: altairForkEpoch,
checks: []check{
{
pingCount: 0,
metadataSequenceNumber: 0,
attestationSubnets: []uint64{},
syncSubnets: nil,
},
{
pingCount: 1,
metadataSequenceNumber: 1,
attestationSubnets: []uint64{40, 41},
syncSubnets: nil,
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
{
pingCount: 2,
metadataSequenceNumber: 2,
attestationSubnets: []uint64{40, 41},
syncSubnets: []uint64{1, 2},
},
name: "metadata updated",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now(),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated at fork epoch",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-5 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x00}, s.metaData.MetadataObjV1().Syncnets)
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated at fork epoch with no bitfield",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-5 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x00}, s.metaData.MetadataObjV1().Syncnets)
currEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.genesisTime.Unix())))
subs, err := computeSubscribedSubnets(s.dv5Listener.LocalNode().ID(), currEpoch)
assert.NoError(t, err)
bitV := bitfield.NewBitvector64()
for _, idx := range subs {
bitV.SetBitAt(idx, true)
}
assert.DeepEqual(t, bitV, s.metaData.AttnetsBitfield())
},
},
{
name: "metadata updated past fork epoch with bitfields",
svcBuilder: func(t *testing.T) *Service {
port := 2000
ipAddr, pkey := createAddrAndPrivKey(t)
s := &Service{
genesisTime: time.Now().Add(-6 * oneEpochDuration()),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
cfg: &Config{UDPPort: uint(port)},
}
createListener := func() (*discover.UDPv5, error) {
return s.createListener(ipAddr, pkey)
}
listener, err := newListener(createListener)
assert.NoError(t, err)
// Update params
cfg := params.BeaconConfig().Copy()
cfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
s.dv5Listener = listener
s.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
s.updateSubnetRecordWithMetadata([]byte{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00})
cache.SubnetIDs.AddPersistentCommittee([]uint64{1, 2, 3, 23}, 0)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte{'A'}, 0, []uint64{0, 1}, 0)
return s
},
postValidation: func(t *testing.T, s *Service) {
assert.Equal(t, version.Altair, s.metaData.Version())
assert.DeepEqual(t, bitfield.Bitvector4{0x03}, s.metaData.MetadataObjV1().Syncnets)
assert.DeepEqual(t, bitfield.Bitvector64{0xe, 0x0, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0}, s.metaData.AttnetsBitfield())
},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
const peerOffset = 1
// Initialize the ping count.
actualPingCount = 0
// Create the private key.
privateKeyBytes := make([]byte, 32)
for i := 0; i < 32; i++ {
privateKeyBytes[i] = byte(i)
}
unmarshalledPrivateKey, err := crypto.UnmarshalSecp256k1PrivateKey(privateKeyBytes)
require.NoError(t, err)
privateKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(unmarshalledPrivateKey)
require.NoError(t, err)
// Create a p2p service.
p2p := testp2p.NewTestP2P(t)
// Create and connect a peer.
createAndConnectPeer(t, p2p, peerOffset)
// Create a service.
service := &Service{
pingMethod: func(_ context.Context, _ peer.ID) error {
actualPingCount++
return nil
},
cfg: &Config{UDPPort: 2000},
peers: p2p.Peers(),
genesisTime: time.Now().Add(-time.Duration(tc.epochSinceGenesis*secondsPerEpoch) * time.Second),
genesisValidatorsRoot: bytesutil.PadTo([]byte{'A'}, 32),
}
// Set the listener and the metadata.
createListener := func() (*discover.UDPv5, error) {
return service.createListener(nil, privateKey)
}
listener, err := newListener(createListener)
require.NoError(t, err)
service.dv5Listener = listener
service.metaData = wrapper.WrappedMetadataV0(new(ethpb.MetaDataV0))
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[0])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[1])
// Add a sync committee subnet.
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte{'a'}, altairForkEpoch, []uint64{1, 2}, 1*time.Hour)
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[2])
// Refresh the persistent subnets.
service.RefreshPersistentSubnets()
time.Sleep(10 * time.Millisecond)
// Run a check.
checkPingCountCacheMetadataRecord(t, service, tc.checks[3])
// Clean the test.
service.dv5Listener.Close()
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcBuilder(t)
s.RefreshENR()
tt.postValidation(t, s)
s.dv5Listener.Close()
cache.SubnetIDs.EmptyAllCaches()
cache.SyncSubnetIDs.EmptyAllCaches()
})
}
// Reset the config.
params.OverrideBeaconConfig(defaultCfg)
}

View File

@@ -2,6 +2,7 @@ package p2p
import (
"context"
"errors"
"fmt"
"io"
"sync"
@@ -9,7 +10,6 @@ import (
"github.com/libp2p/go-libp2p/core/network"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
@@ -25,46 +25,6 @@ func peerMultiaddrString(conn network.Conn) string {
return fmt.Sprintf("%s/p2p/%s", conn.RemoteMultiaddr().String(), conn.RemotePeer().String())
}
func (s *Service) connectToPeer(conn network.Conn) {
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connected)
// Go through the handshake process.
log.WithFields(logrus.Fields{
"direction": conn.Stat().Direction.String(),
"multiAddr": peerMultiaddrString(conn),
"activePeers": len(s.peers.Active()),
}).Debug("Initiate peer connection")
}
func (s *Service) disconnectFromPeerOnError(
conn network.Conn,
goodByeFunc func(ctx context.Context, id peer.ID) error,
badPeerErr error,
) {
// Get the remote peer ID.
remotePeerID := conn.RemotePeer()
// Set the peer to disconnecting state.
s.peers.SetConnectionState(remotePeerID, peers.Disconnecting)
// Only attempt a goodbye if we are still connected to the peer.
if s.host.Network().Connectedness(remotePeerID) == network.Connected {
if err := goodByeFunc(context.TODO(), remotePeerID); err != nil {
log.WithError(err).Error("Unable to disconnect from peer")
}
}
log.
WithError(badPeerErr).
WithFields(logrus.Fields{
"multiaddr": peerMultiaddrString(conn),
"direction": conn.Stat().Direction.String(),
"remainingActivePeers": len(s.peers.Active()),
}).
Debug("Initiate peer disconnection")
s.peers.SetConnectionState(remotePeerID, peers.Disconnected)
}
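
The new disconnectFromPeerOnError helper centralizes the teardown ordering: mark the peer Disconnecting, attempt a goodbye only while the connection is still live, then mark it Disconnected regardless of the goodbye outcome. A toy sketch of that sequence, where setState is a stand-in for peers.SetConnectionState:

package main

import "fmt"

// setState is a stand-in for s.peers.SetConnectionState.
func setState(state string) { fmt.Println("state:", state) }

// disconnectOnError sketches the ordering used by the helper above:
// Disconnecting first, a goodbye only while still connected, then
// Disconnected no matter how the goodbye went.
func disconnectOnError(stillConnected bool, goodbye func() error) {
	setState("Disconnecting")
	if stillConnected {
		if err := goodbye(); err != nil {
			fmt.Println("unable to disconnect from peer:", err)
		}
	}
	setState("Disconnected")
}

func main() {
	disconnectOnError(true, func() error { return nil })
}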
// AddConnectionHandler adds a callback function which handles the connection with a
// newly added peer. It performs a handshake with that peer by sending a hello request
// and validating the response from the peer.
@@ -97,9 +57,18 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
}
s.host.Network().Notify(&network.NotifyBundle{
ConnectedF: func(_ network.Network, conn network.Conn) {
ConnectedF: func(net network.Network, conn network.Conn) {
remotePeer := conn.RemotePeer()
disconnectFromPeer := func() {
s.peers.SetConnectionState(remotePeer, peers.PeerDisconnecting)
// Only attempt a goodbye if we are still connected to the peer.
if s.host.Network().Connectedness(remotePeer) == network.Connected {
if err := goodByeFunc(context.TODO(), remotePeer); err != nil {
log.WithError(err).Error("Unable to disconnect from peer")
}
}
s.peers.SetConnectionState(remotePeer, peers.PeerDisconnected)
}
// Connection handler must be non-blocking as part of libp2p design.
go func() {
if peerHandshaking(remotePeer) {
@@ -108,21 +77,28 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
return
}
defer peerFinished(remotePeer)
// Handle the various pre-existing conditions that will result in us not handshaking.
peerConnectionState, err := s.peers.ConnectionState(remotePeer)
if err == nil && (peerConnectionState == peers.Connected || peerConnectionState == peers.Connecting) {
if err == nil && (peerConnectionState == peers.PeerConnected || peerConnectionState == peers.PeerConnecting) {
log.WithField("currentState", peerConnectionState).WithField("reason", "already active").Trace("Ignoring connection request")
return
}
s.peers.Add(nil /* ENR */, remotePeer, conn.RemoteMultiaddr(), conn.Stat().Direction)
// Defensive check in the event we still get a bad peer.
if err := s.peers.IsBad(remotePeer); err != nil {
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
if s.peers.IsBad(remotePeer) {
log.WithField("reason", "bad peer").Trace("Ignoring connection request")
disconnectFromPeer()
return
}
validPeerConnection := func() {
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnected)
// Go through the handshake process.
log.WithFields(logrus.Fields{
"direction": conn.Stat().Direction,
"multiAddr": peerMultiaddrString(conn),
"activePeers": len(s.peers.Active()),
}).Debug("Peer connected")
}
// Do not perform handshake on inbound dials.
if conn.Stat().Direction == network.DirInbound {
@@ -141,80 +117,63 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
// If the peer hasn't sent a status request, we disconnect from them
if _, err := s.peers.ChainState(remotePeer); errors.Is(err, peerdata.ErrPeerUnknown) || errors.Is(err, peerdata.ErrNoPeerStatus) {
statusMessageMissing.Inc()
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state"))
disconnectFromPeer()
return
}
if peerExists {
updated, err := s.peers.ChainStateLastUpdated(remotePeer)
if err != nil {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.Wrap(err, "chain state last updated"))
disconnectFromPeer()
return
}
// Exit if we don't receive any current status messages from the peer.
if updated.IsZero() {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("is zero"))
return
}
if updated.Before(currentTime) {
s.disconnectFromPeerOnError(conn, goodByeFunc, errors.New("did not update"))
// exit if we don't receive any current status messages from
// peer.
if updated.IsZero() || !updated.After(currentTime) {
disconnectFromPeer()
return
}
}
s.connectToPeer(conn)
validPeerConnection()
return
}
s.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
if err := reqFunc(context.TODO(), conn.RemotePeer()); err != nil && !errors.Is(err, io.EOF) {
s.disconnectFromPeerOnError(conn, goodByeFunc, err)
log.WithError(err).Trace("Handshake failed")
disconnectFromPeer()
return
}
s.connectToPeer(conn)
validPeerConnection()
}()
},
})
}
// AddDisconnectionHandler disconnects from peers. It handles updating the peer status.
// AddDisconnectionHandler disconnects from peers. It handles updating the peer status.
// This also calls the handler responsible for maintaining other parts of the sync or p2p system.
func (s *Service) AddDisconnectionHandler(handler func(ctx context.Context, id peer.ID) error) {
s.host.Network().Notify(&network.NotifyBundle{
DisconnectedF: func(net network.Network, conn network.Conn) {
peerID := conn.RemotePeer()
log.WithFields(logrus.Fields{
"multiAddr": peerMultiaddrString(conn),
"direction": conn.Stat().Direction.String(),
})
log := log.WithField("multiAddr", peerMultiaddrString(conn))
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
// Exit early if we are still connected to the peer.
if net.Connectedness(peerID) == network.Connected {
if net.Connectedness(conn.RemotePeer()) == network.Connected {
return
}
priorState, err := s.peers.ConnectionState(peerID)
priorState, err := s.peers.ConnectionState(conn.RemotePeer())
if err != nil {
// Can happen if the peer has already disconnected, so...
priorState = peers.Disconnected
priorState = peers.PeerDisconnected
}
s.peers.SetConnectionState(peerID, peers.Disconnecting)
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnecting)
if err := handler(context.TODO(), conn.RemotePeer()); err != nil {
log.WithError(err).Error("Disconnect handler failed")
}
s.peers.SetConnectionState(peerID, peers.Disconnected)
s.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
// Only log disconnections if we were fully connected.
if priorState == peers.Connected {
activePeersCount := len(s.peers.Active())
log.WithField("remainingActivePeers", activePeersCount).Debug("Peer disconnected")
if priorState == peers.PeerConnected {
log.WithField("activePeers", len(s.peers.Active())).Debug("Peer disconnected")
}
}()
},

View File

@@ -82,7 +82,7 @@ type PeerManager interface {
Host() host.Host
ENR() *enr.Record
DiscoveryAddresses() ([]multiaddr.Multiaddr, error)
RefreshPersistentSubnets()
RefreshENR()
FindPeersWithSubnet(ctx context.Context, topic string, subIndex uint64, threshold int) (bool, error)
AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) error)
}

View File

@@ -0,0 +1,36 @@
package p2p
import (
"context"
"github.com/ethereum/go-ethereum/p2p/enode"
)
// filterNodes wraps an iterator such that Next only returns nodes for which
// the 'check' function returns true. This custom implementation also
// checks for context deadlines so that in the event the parent context has
expired, we exit the search rather than performing more network
// lookups for additional peers.
func filterNodes(ctx context.Context, it enode.Iterator, check func(*enode.Node) bool) enode.Iterator {
return &filterIter{ctx, it, check}
}
type filterIter struct {
context.Context
enode.Iterator
check func(*enode.Node) bool
}
// Next looks for the next valid node according to our
// filter criteria.
func (f *filterIter) Next() bool {
for f.Iterator.Next() {
if f.Context.Err() != nil {
return false
}
if f.check(f.Node()) {
return true
}
}
return false
}
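
The contract here is that Next both filters and honors cancellation: it skips nodes failing check and bails out once the parent context is done, instead of burning more discovery lookups. The same contract demonstrated on a toy integer iterator, so the example stays self-contained without the go-ethereum enode types:

package main

import (
	"context"
	"fmt"
)

// iterator is a minimal stand-in for enode.Iterator: Next advances and
// reports whether a value is available.
type iterator interface {
	Next() bool
	Value() int
}

type sliceIter struct {
	vals []int
	pos  int
}

func (s *sliceIter) Next() bool { s.pos++; return s.pos <= len(s.vals) }
func (s *sliceIter) Value() int { return s.vals[s.pos-1] }

// filterIter mirrors the p2p filter above: Next skips values failing
// the check and stops early once the context is done.
type filterIter struct {
	ctx   context.Context
	inner iterator
	check func(int) bool
}

func (f *filterIter) Next() bool {
	for f.inner.Next() {
		if f.ctx.Err() != nil {
			return false
		}
		if f.check(f.inner.Value()) {
			return true
		}
	}
	return false
}

func (f *filterIter) Value() int { return f.inner.Value() }

func main() {
	it := &filterIter{
		ctx:   context.Background(),
		inner: &sliceIter{vals: []int{1, 2, 3, 4, 5}},
		check: func(v int) bool { return v%2 == 0 },
	}
	for it.Next() {
		fmt.Println(it.Value()) // 2, then 4
	}
}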

View File

@@ -29,7 +29,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
digest, err := ExtractGossipDigest(*pmsg.Topic)
if err != nil {
@@ -37,7 +37,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
_, fEpoch, err := forks.RetrieveForkDataFromDigest(digest, genesisValidatorsRoot)
if err != nil {
@@ -45,7 +45,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
if fEpoch >= params.BeaconConfig().AltairForkEpoch {
return postAltairMsgID(pmsg, fEpoch)
@@ -54,11 +54,11 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
if err != nil {
combinedData := append(params.BeaconConfig().MessageDomainInvalidSnappy[:], pmsg.Data...)
h := hash.Hash(combinedData)
return bytesutil.UnsafeCastToString(h[:20])
return string(h[:20])
}
combinedData := append(params.BeaconConfig().MessageDomainValidSnappy[:], decodedData...)
h := hash.Hash(combinedData)
return bytesutil.UnsafeCastToString(h[:20])
return string(h[:20])
}
// Spec:
@@ -93,13 +93,13 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
// should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
if uint64(totalLength) > gossipPubSubSize {
// this should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
combinedData := make([]byte, 0, totalLength)
combinedData = append(combinedData, params.BeaconConfig().MessageDomainInvalidSnappy[:]...)
@@ -107,7 +107,7 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
combinedData = append(combinedData, topic...)
combinedData = append(combinedData, pmsg.Data...)
h := hash.Hash(combinedData)
return bytesutil.UnsafeCastToString(h[:20])
return string(h[:20])
}
totalLength, err := math.AddInt(
len(params.BeaconConfig().MessageDomainValidSnappy),
@@ -120,7 +120,7 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
// should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return bytesutil.UnsafeCastToString(msg)
return string(msg)
}
combinedData := make([]byte, 0, totalLength)
combinedData = append(combinedData, params.BeaconConfig().MessageDomainValidSnappy[:]...)
@@ -128,5 +128,5 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
combinedData = append(combinedData, topic...)
combinedData = append(combinedData, decodedData...)
h := hash.Hash(combinedData)
return bytesutil.UnsafeCastToString(h[:20])
return string(h[:20])
}
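
The bytesutil.UnsafeCastToString side of this diff matches the changelog's zero-copy bytes-to-string helper. A plausible sketch of such a helper using Go 1.20's unsafe.String — an assumption about the implementation, not the actual bytesutil code — with the usual aliasing caveat:

package main

import (
	"fmt"
	"unsafe"
)

// unsafeCastToString is a guess at what a zero-copy bytes-to-string
// helper looks like (assumption: the real bytesutil implementation may
// differ). The returned string aliases b, so b must never be mutated
// after the cast.
func unsafeCastToString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(&b[0], len(b))
}

func main() {
	h := []byte("0123456789abcdef0123")
	id := unsafeCastToString(h[:20]) // no allocation, unlike string(h[:20])
	fmt.Println(len(id), id)
}

Message IDs are computed for every gossip message, so skipping the copy in string(h[:20]) is a hot-path saving; it is safe here because the hash buffer is never reused.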

View File

@@ -23,8 +23,8 @@ var (
ErrNoPeerStatus = errors.New("no chain status for peer")
)
// ConnectionState is the state of the connection.
type ConnectionState ethpb.ConnectionState
// PeerConnectionState is the state of the connection.
type PeerConnectionState ethpb.ConnectionState
// StoreConfig holds peer store parameters.
type StoreConfig struct {
@@ -49,7 +49,7 @@ type PeerData struct {
// Network related data.
Address ma.Multiaddr
Direction network.Direction
ConnState ConnectionState
ConnState PeerConnectionState
Enr *enr.Record
NextValidTime time.Time
// Chain related data.

View File

@@ -20,7 +20,6 @@ go_library(
"//crypto/rand:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_libp2p_go_libp2p//core/peer:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -4,7 +4,6 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
)
@@ -62,7 +61,7 @@ func (s *BadResponsesScorer) Score(pid peer.ID) float64 {
// scoreNoLock is a lock-free version of Score.
func (s *BadResponsesScorer) scoreNoLock(pid peer.ID) float64 {
if s.isBadPeerNoLock(pid) != nil {
if s.isBadPeerNoLock(pid) {
return BadPeerScore
}
score := float64(0)
@@ -117,24 +116,18 @@ func (s *BadResponsesScorer) Increment(pid peer.ID) {
// IsBadPeer states if the peer is to be considered bad.
// If the peer is unknown this will return `false`, which makes using this function easier than returning an error.
func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) error {
func (s *BadResponsesScorer) IsBadPeer(pid peer.ID) bool {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is a lock-free version of IsBadPeer.
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) error {
func (s *BadResponsesScorer) isBadPeerNoLock(pid peer.ID) bool {
if peerData, ok := s.store.PeerData(pid); ok {
if peerData.BadResponses >= s.config.Threshold {
return errors.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", peerData.BadResponses, s.config.Threshold)
}
return nil
return peerData.BadResponses >= s.config.Threshold
}
return nil
return false
}
// BadPeers returns the peers that are considered bad.
@@ -144,7 +137,7 @@ func (s *BadResponsesScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) != nil {
if s.isBadPeerNoLock(pid) {
badPeers = append(badPeers, pid)
}
}
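
The error-returning variant of the scorer keeps the threshold check identical but turns the verdict into a self-describing error. A minimal standard-library sketch, with fmt.Errorf standing in for the errors.Errorf call shown above:

package main

import "fmt"

// isBadPeer keeps the same threshold rule but reports the verdict as a
// descriptive error: nil means the peer is fine.
func isBadPeer(badResponses, threshold int) error {
	if badResponses >= threshold {
		return fmt.Errorf("peer exceeded bad responses threshold: got %d, threshold %d", badResponses, threshold)
	}
	return nil
}

func main() {
	fmt.Println(isBadPeer(2, 6)) // <nil>
	fmt.Println(isBadPeer(6, 6)) // peer exceeded bad responses threshold: got 6, threshold 6
}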

View File

@@ -33,19 +33,19 @@ func TestScorers_BadResponses_Score(t *testing.T) {
assert.Equal(t, 0., scorer.Score(pid), "Unexpected score for unregistered peer")
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.Equal(t, -2.5, scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-5), scorer.Score(pid))
scorer.Increment(pid)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, false, scorer.IsBadPeer(pid))
assert.Equal(t, float64(-7.5), scorer.Score(pid))
scorer.Increment(pid)
assert.NotNil(t, scorer.IsBadPeer(pid))
assert.Equal(t, true, scorer.IsBadPeer(pid))
assert.Equal(t, -100.0, scorer.Score(pid))
}
@@ -152,17 +152,17 @@ func TestScorers_BadResponses_IsBadPeer(t *testing.T) {
})
scorer := peerStatuses.Scorers().BadResponsesScorer()
pid := peer.ID("peer1")
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, false, scorer.IsBadPeer(pid))
peerStatuses.Add(nil, pid, nil, network.DirUnknown)
assert.NoError(t, scorer.IsBadPeer(pid))
assert.Equal(t, false, scorer.IsBadPeer(pid))
for i := 0; i < scorers.DefaultBadResponsesThreshold; i++ {
scorer.Increment(pid)
if i == scorers.DefaultBadResponsesThreshold-1 {
assert.NotNil(t, scorer.IsBadPeer(pid), "Unexpected peer status")
assert.Equal(t, true, scorer.IsBadPeer(pid), "Unexpected peer status")
} else {
assert.NoError(t, scorer.IsBadPeer(pid), "Unexpected peer status")
assert.Equal(t, false, scorer.IsBadPeer(pid), "Unexpected peer status")
}
}
}
@@ -185,11 +185,11 @@ func TestScorers_BadResponses_BadPeers(t *testing.T) {
scorer.Increment(pids[2])
scorer.Increment(pids[4])
}
assert.NoError(t, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.NoError(t, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.NotNil(t, scorer.IsBadPeer(pids[4]), "Invalid peer status")
assert.Equal(t, false, scorer.IsBadPeer(pids[0]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[1]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[2]), "Invalid peer status")
assert.Equal(t, false, scorer.IsBadPeer(pids[3]), "Invalid peer status")
assert.Equal(t, true, scorer.IsBadPeer(pids[4]), "Invalid peer status")
want := []peer.ID{pids[1], pids[2], pids[4]}
badPeers := scorer.BadPeers()
sort.Slice(badPeers, func(i, j int) bool {

View File

@@ -177,8 +177,8 @@ func (s *BlockProviderScorer) processedBlocksNoLock(pid peer.ID) uint64 {
// The block provider scorer cannot guarantee that a lower score for a peer is indeed a sign of a bad peer.
// Therefore this scorer never marks peers as bad, and relies on scores to probabilistically sort
// out low-scorers (see WeightSorted method).
func (*BlockProviderScorer) IsBadPeer(_ peer.ID) error {
return nil
func (*BlockProviderScorer) IsBadPeer(_ peer.ID) bool {
return false
}
// BadPeers returns the peers that are considered bad.

View File

@@ -119,7 +119,7 @@ func TestScorers_BlockProvider_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(*testing.T) {
t.Run(tt.name, func(t *testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
PeerLimit: 30,
ScorerParams: &scorers.Config{
@@ -224,7 +224,7 @@ func TestScorers_BlockProvider_Sorted(t *testing.T) {
}{
{
name: "no peers",
update: func(*scorers.BlockProviderScorer) {},
update: func(s *scorers.BlockProviderScorer) {},
have: []peer.ID{},
want: []peer.ID{},
},
@@ -451,7 +451,7 @@ func TestScorers_BlockProvider_FormatScorePretty(t *testing.T) {
})
}
for _, tt := range tests {
t.Run(tt.name, func(*testing.T) {
t.Run(tt.name, func(t *testing.T) {
peerStatuses := peerStatusGen()
scorer := peerStatuses.Scorers().BlockProviderScorer()
if tt.update != nil {
@@ -481,8 +481,8 @@ func TestScorers_BlockProvider_BadPeerMarking(t *testing.T) {
})
scorer := peerStatuses.Scorers().BlockProviderScorer()
assert.NoError(t, scorer.IsBadPeer("peer1"), "Unexpected status for unregistered peer")
assert.Equal(t, false, scorer.IsBadPeer("peer1"), "Unexpected status for unregistered peer")
scorer.IncrementProcessedBlocks("peer1", 64)
assert.NoError(t, scorer.IsBadPeer("peer1"))
assert.Equal(t, false, scorer.IsBadPeer("peer1"))
assert.Equal(t, 0, len(scorer.BadPeers()))
}

View File

@@ -2,7 +2,6 @@ package scorers
import (
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
pbrpc "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -52,24 +51,19 @@ func (s *GossipScorer) scoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer states if the peer is to be considered bad.
func (s *GossipScorer) IsBadPeer(pid peer.ID) error {
func (s *GossipScorer) IsBadPeer(pid peer.ID) bool {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is a lock-free version of IsBadPeer.
func (s *GossipScorer) isBadPeerNoLock(pid peer.ID) error {
func (s *GossipScorer) isBadPeerNoLock(pid peer.ID) bool {
peerData, ok := s.store.PeerData(pid)
if !ok {
return nil
return false
}
if peerData.GossipScore < gossipThreshold {
return errors.Errorf("gossip score below threshold: got %f - threshold %f", peerData.GossipScore, gossipThreshold)
}
return nil
return peerData.GossipScore < gossipThreshold
}
// BadPeers returns the peers that are considered bad.
@@ -79,7 +73,7 @@ func (s *GossipScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) != nil {
if s.isBadPeerNoLock(pid) {
badPeers = append(badPeers, pid)
}
}

View File

@@ -21,7 +21,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
}{
{
name: "nonexistent peer",
update: func(*scorers.GossipScorer) {
update: func(scorer *scorers.GossipScorer) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 0.0, scorer.Score("peer1"), "Unexpected score")
@@ -34,7 +34,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, -101.0, scorer.Score("peer1"), "Unexpected score")
assert.NotNil(t, scorer.IsBadPeer("peer1"), "Unexpected good peer")
assert.Equal(t, true, scorer.IsBadPeer("peer1"), "Unexpected good peer")
},
},
{
@@ -44,7 +44,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
},
check: func(scorer *scorers.GossipScorer) {
assert.Equal(t, 10.0, scorer.Score("peer1"), "Unexpected score")
assert.Equal(t, nil, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
assert.Equal(t, false, scorer.IsBadPeer("peer1"), "Unexpected bad peer")
_, _, topicMap, err := scorer.GossipData("peer1")
assert.NoError(t, err)
assert.Equal(t, uint64(100), topicMap["a"].TimeInMesh, "incorrect time in mesh")
@@ -53,7 +53,7 @@ func TestScorers_Gossip_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(*testing.T) {
t.Run(tt.name, func(t *testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})

View File

@@ -46,7 +46,7 @@ func (s *PeerStatusScorer) Score(pid peer.ID) float64 {
// scoreNoLock is a lock-free version of Score.
func (s *PeerStatusScorer) scoreNoLock(pid peer.ID) float64 {
if s.isBadPeerNoLock(pid) != nil {
if s.isBadPeerNoLock(pid) {
return BadPeerScore
}
score := float64(0)
@@ -67,34 +67,30 @@ func (s *PeerStatusScorer) scoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer states if the peer is to be considered bad.
func (s *PeerStatusScorer) IsBadPeer(pid peer.ID) error {
func (s *PeerStatusScorer) IsBadPeer(pid peer.ID) bool {
s.store.RLock()
defer s.store.RUnlock()
return s.isBadPeerNoLock(pid)
}
// isBadPeerNoLock is a lock-free version of IsBadPeer.
func (s *PeerStatusScorer) isBadPeerNoLock(pid peer.ID) error {
func (s *PeerStatusScorer) isBadPeerNoLock(pid peer.ID) bool {
peerData, ok := s.store.PeerData(pid)
if !ok {
return nil
return false
}
// Mark the peer as bad if the latest error is one of the terminal ones.
terminalErrs := []error{
p2ptypes.ErrWrongForkDigestVersion,
p2ptypes.ErrInvalidFinalizedRoot,
p2ptypes.ErrInvalidRequest,
}
for _, err := range terminalErrs {
if errors.Is(peerData.ChainStateValidationError, err) {
return err
return true
}
}
return nil
return false
}
// BadPeers returns the peers that are considered bad.
@@ -104,7 +100,7 @@ func (s *PeerStatusScorer) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.isBadPeerNoLock(pid) != nil {
if s.isBadPeerNoLock(pid) {
badPeers = append(badPeers, pid)
}
}

View File

@@ -122,7 +122,7 @@ func TestScorers_PeerStatus_Score(t *testing.T) {
}
for _, tt := range tests {
t.Run(tt.name, func(*testing.T) {
t.Run(tt.name, func(t *testing.T) {
peerStatuses := peers.NewStatus(ctx, &peers.StatusConfig{
ScorerParams: &scorers.Config{},
})
@@ -140,12 +140,12 @@ func TestScorers_PeerStatus_IsBadPeer(t *testing.T) {
ScorerParams: &scorers.Config{},
})
pid := peer.ID("peer1")
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid))
}
func TestScorers_PeerStatus_BadPeers(t *testing.T) {
@@ -155,22 +155,22 @@ func TestScorers_PeerStatus_BadPeers(t *testing.T) {
pid1 := peer.ID("peer1")
pid2 := peer.ID("peer2")
pid3 := peer.ID("peer3")
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid3))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid3))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid1, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid2, &pb.Status{}, nil)
peerStatuses.Scorers().PeerStatusScorer().SetPeerStatus(pid3, &pb.Status{}, p2ptypes.ErrWrongForkDigestVersion)
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid1))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer(pid2))
assert.NoError(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer(pid3))
assert.NotNil(t, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid1))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid1))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer(pid2))
assert.Equal(t, false, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid2))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer(pid3))
assert.Equal(t, true, peerStatuses.Scorers().PeerStatusScorer().IsBadPeer(pid3))
assert.Equal(t, 2, len(peerStatuses.Scorers().PeerStatusScorer().BadPeers()))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}

View File

@@ -6,7 +6,6 @@ import (
"time"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/config/features"
)
@@ -25,7 +24,7 @@ const BadPeerScore = gossipThreshold
// Scorer defines minimum set of methods every peer scorer must expose.
type Scorer interface {
Score(pid peer.ID) float64
IsBadPeer(pid peer.ID) error
IsBadPeer(pid peer.ID) bool
BadPeers() []peer.ID
}
@@ -125,29 +124,26 @@ func (s *Service) ScoreNoLock(pid peer.ID) float64 {
}
// IsBadPeer traverses all the scorers to see if any of them classifies peer as bad.
func (s *Service) IsBadPeer(pid peer.ID) error {
func (s *Service) IsBadPeer(pid peer.ID) bool {
s.store.RLock()
defer s.store.RUnlock()
return s.IsBadPeerNoLock(pid)
}
// IsBadPeerNoLock is a lock-free version of IsBadPeer.
func (s *Service) IsBadPeerNoLock(pid peer.ID) error {
if err := s.scorers.badResponsesScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "bad responses scorer")
func (s *Service) IsBadPeerNoLock(pid peer.ID) bool {
if s.scorers.badResponsesScorer.isBadPeerNoLock(pid) {
return true
}
if err := s.scorers.peerStatusScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "peer status scorer")
if s.scorers.peerStatusScorer.isBadPeerNoLock(pid) {
return true
}
if features.Get().EnablePeerScorer {
if err := s.scorers.gossipScorer.isBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "gossip scorer")
if s.scorers.gossipScorer.isBadPeerNoLock(pid) {
return true
}
}
return nil
return false
}
// BadPeers returns the peers that are considered bad by any of registered scorers.
@@ -157,7 +153,7 @@ func (s *Service) BadPeers() []peer.ID {
badPeers := make([]peer.ID, 0)
for pid := range s.store.Peers() {
if s.IsBadPeerNoLock(pid) != nil {
if s.IsBadPeerNoLock(pid) {
badPeers = append(badPeers, pid)
}
}
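
Service.IsBadPeerNoLock fans out to the individual scorers and, in the error-returning variant, wraps the first failure with the offending scorer's name so logs identify which check tripped. A self-contained sketch of that aggregation, with hypothetical inline scorers:

package main

import (
	"errors"
	"fmt"
)

// namedCheck pairs a scorer name with its badness check, mimicking the
// fixed order of badResponsesScorer, peerStatusScorer, gossipScorer.
type namedCheck struct {
	name string
	fn   func() error
}

// isBadPeer mirrors Service.IsBadPeerNoLock: consult each scorer in a
// fixed order and wrap the first non-nil verdict with the scorer name.
func isBadPeer(checks []namedCheck) error {
	for _, c := range checks {
		if err := c.fn(); err != nil {
			return fmt.Errorf("%s: %w", c.name, err)
		}
	}
	return nil
}

func main() {
	err := isBadPeer([]namedCheck{
		{"bad responses scorer", func() error { return nil }},
		{"peer status scorer", func() error { return errors.New("wrong fork digest version") }},
	})
	fmt.Println(err) // peer status scorer: wrong fork digest version
}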

View File

@@ -100,7 +100,7 @@ func TestScorers_Service_Score(t *testing.T) {
return scores
}
pack := func(_ *scorers.Service, s1, s2, s3 float64) map[string]float64 {
pack := func(scorer *scorers.Service, s1, s2, s3 float64) map[string]float64 {
return map[string]float64{
"peer1": roundScore(s1),
"peer2": roundScore(s2),
@@ -237,7 +237,7 @@ func TestScorers_Service_loop(t *testing.T) {
for i := 0; i < s1.Params().Threshold+5; i++ {
s1.Increment(pid1)
}
assert.NotNil(t, s1.IsBadPeer(pid1), "Peer should be marked as bad")
assert.Equal(t, true, s1.IsBadPeer(pid1), "Peer should be marked as bad")
s2.IncrementProcessedBlocks("peer1", 221)
assert.Equal(t, uint64(221), s2.ProcessedBlocks("peer1"))
@@ -252,7 +252,7 @@ func TestScorers_Service_loop(t *testing.T) {
for {
select {
case <-ticker.C:
if s1.IsBadPeer(pid1) == nil && s2.ProcessedBlocks("peer1") == 0 {
if s1.IsBadPeer(pid1) == false && s2.ProcessedBlocks("peer1") == 0 {
return
}
case <-ctx.Done():
@@ -263,7 +263,7 @@ func TestScorers_Service_loop(t *testing.T) {
}()
<-done
assert.NoError(t, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.Equal(t, false, s1.IsBadPeer(pid1), "Peer should not be marked as bad")
assert.Equal(t, uint64(0), s2.ProcessedBlocks("peer1"), "No blocks are expected")
}
@@ -278,10 +278,10 @@ func TestScorers_Service_IsBadPeer(t *testing.T) {
},
})
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer1"))
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
peerStatuses.Scorers().BadResponsesScorer().Increment("peer1")
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer1"))
}
func TestScorers_Service_BadPeers(t *testing.T) {
@@ -295,16 +295,16 @@ func TestScorers_Service_BadPeers(t *testing.T) {
},
})
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 0, len(peerStatuses.Scorers().BadPeers()))
for _, pid := range []peer.ID{"peer1", "peer3"} {
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
peerStatuses.Scorers().BadResponsesScorer().Increment(pid)
}
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.NoError(t, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.NotNil(t, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer1"))
assert.Equal(t, false, peerStatuses.Scorers().IsBadPeer("peer2"))
assert.Equal(t, true, peerStatuses.Scorers().IsBadPeer("peer3"))
assert.Equal(t, 2, len(peerStatuses.Scorers().BadPeers()))
}

View File

@@ -34,7 +34,6 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
ma "github.com/multiformats/go-multiaddr"
manet "github.com/multiformats/go-multiaddr/net"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/peerdata"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/peers/scorers"
@@ -50,14 +49,14 @@ import (
)
const (
// Disconnected means there is no connection to the peer.
Disconnected peerdata.ConnectionState = iota
// Disconnecting means there is an on-going attempt to disconnect from the peer.
Disconnecting
// Connected means the peer has an active connection.
Connected
// Connecting means there is an on-going attempt to connect to the peer.
Connecting
// PeerDisconnected means there is no connection to the peer.
PeerDisconnected peerdata.PeerConnectionState = iota
// PeerDisconnecting means there is an on-going attempt to disconnect from the peer.
PeerDisconnecting
// PeerConnected means the peer has an active connection.
PeerConnected
// PeerConnecting means there is an on-going attempt to connect to the peer.
PeerConnecting
)
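The constants keep their ordering under the rename, so a peer still walks the same lifecycle. A caller-side sketch, assuming a *Status p and the Add/SetConnectionState methods shown below:

// PeerDisconnected -> PeerConnecting -> PeerConnected -> PeerDisconnecting -> PeerDisconnected
p.Add(record, pid, addr, network.DirOutbound) // new peers start as PeerDisconnected
p.SetConnectionState(pid, PeerConnecting)     // dial in progress
p.SetConnectionState(pid, PeerConnected)      // handshake completed
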
const (
@@ -118,15 +117,6 @@ func NewStatus(ctx context.Context, config *StatusConfig) *Status {
}
}
func (p *Status) UpdateENR(record *enr.Record, pid peer.ID) {
p.store.Lock()
defer p.store.Unlock()
if peerData, ok := p.store.PeerData(pid); ok {
peerData.Enr = record
}
}
// Scorers exposes peer scoring management service.
func (p *Status) Scorers() *scorers.Service {
return p.scorers
@@ -160,7 +150,7 @@ func (p *Status) Add(record *enr.Record, pid peer.ID, address ma.Multiaddr, dire
Address: address,
Direction: direction,
// Peers start disconnected; state will be updated when the handshake process begins.
ConnState: Disconnected,
ConnState: PeerDisconnected,
}
if record != nil {
peerData.Enr = record
@@ -222,7 +212,7 @@ func (p *Status) IsActive(pid peer.ID) bool {
defer p.store.RUnlock()
peerData, ok := p.store.PeerData(pid)
return ok && (peerData.ConnState == Connected || peerData.ConnState == Connecting)
return ok && (peerData.ConnState == PeerConnected || peerData.ConnState == PeerConnecting)
}
// IsAboveInboundLimit checks if we are above our current inbound
@@ -232,7 +222,7 @@ func (p *Status) IsAboveInboundLimit() bool {
defer p.store.RUnlock()
totalInbound := 0
for _, peerData := range p.store.Peers() {
if peerData.ConnState == Connected &&
if peerData.ConnState == PeerConnected &&
peerData.Direction == network.DirInbound {
totalInbound += 1
}
@@ -296,7 +286,7 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
// look at active peers
connectedStatus := peerData.ConnState == Connecting || peerData.ConnState == Connected
connectedStatus := peerData.ConnState == PeerConnecting || peerData.ConnState == PeerConnected
if connectedStatus && peerData.MetaData != nil && !peerData.MetaData.IsNil() && peerData.MetaData.AttnetsBitfield() != nil {
indices := indicesFromBitfield(peerData.MetaData.AttnetsBitfield())
for _, idx := range indices {
@@ -311,7 +301,7 @@ func (p *Status) SubscribedToSubnet(index uint64) []peer.ID {
}
// SetConnectionState sets the connection state of the given remote peer.
func (p *Status) SetConnectionState(pid peer.ID, state peerdata.ConnectionState) {
func (p *Status) SetConnectionState(pid peer.ID, state peerdata.PeerConnectionState) {
p.store.Lock()
defer p.store.Unlock()
@@ -321,14 +311,14 @@ func (p *Status) SetConnectionState(pid peer.ID, state peerdata.ConnectionState)
// ConnectionState gets the connection state of the given remote peer.
// This will error if the peer does not exist.
func (p *Status) ConnectionState(pid peer.ID) (peerdata.ConnectionState, error) {
func (p *Status) ConnectionState(pid peer.ID) (peerdata.PeerConnectionState, error) {
p.store.RLock()
defer p.store.RUnlock()
if peerData, ok := p.store.PeerData(pid); ok {
return peerData.ConnState, nil
}
return Disconnected, peerdata.ErrPeerUnknown
return PeerDisconnected, peerdata.ErrPeerUnknown
}
// ChainStateLastUpdated gets the last time the chain state of the given remote peer was updated.
@@ -345,29 +335,19 @@ func (p *Status) ChainStateLastUpdated(pid peer.ID) (time.Time, error) {
// IsBad states if the peer is to be considered bad (by *any* of the registered scorers).
// If the peer is unknown this will return `false`, which makes using this function easier than returning an error.
func (p *Status) IsBad(pid peer.ID) error {
func (p *Status) IsBad(pid peer.ID) bool {
p.store.RLock()
defer p.store.RUnlock()
return p.isBad(pid)
}
// isBad is the lock-free version of IsBad.
func (p *Status) isBad(pid peer.ID) error {
func (p *Status) isBad(pid peer.ID) bool {
// Do not disconnect from trusted peers.
if p.store.IsTrustedPeer(pid) {
return nil
return false
}
if err := p.isfromBadIP(pid); err != nil {
return errors.Wrap(err, "peer is from a bad IP")
}
if err := p.scorers.IsBadPeerNoLock(pid); err != nil {
return errors.Wrap(err, "is bad peer no lock")
}
return nil
return p.isfromBadIP(pid) || p.scorers.IsBadPeerNoLock(pid)
}
// NextValidTime gets the earliest possible time it is to contact/dial
@@ -431,7 +411,7 @@ func (p *Status) Connecting() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connecting {
if peerData.ConnState == PeerConnecting {
peers = append(peers, pid)
}
}
@@ -444,7 +424,7 @@ func (p *Status) Connected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected {
if peerData.ConnState == PeerConnected {
peers = append(peers, pid)
}
}
@@ -470,7 +450,7 @@ func (p *Status) InboundConnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected && peerData.Direction == network.DirInbound {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound {
peers = append(peers, pid)
}
}
@@ -483,7 +463,7 @@ func (p *Status) InboundConnectedWithProtocol(protocol InternetProtocol) []peer.
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirInbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
@@ -509,7 +489,7 @@ func (p *Status) OutboundConnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected && peerData.Direction == network.DirOutbound {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound {
peers = append(peers, pid)
}
}
@@ -522,7 +502,7 @@ func (p *Status) OutboundConnectedWithProtocol(protocol InternetProtocol) []peer
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
if peerData.ConnState == PeerConnected && peerData.Direction == network.DirOutbound && strings.Contains(peerData.Address.String(), string(protocol)) {
peers = append(peers, pid)
}
}
@@ -535,7 +515,7 @@ func (p *Status) Active() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connecting || peerData.ConnState == Connected {
if peerData.ConnState == PeerConnecting || peerData.ConnState == PeerConnected {
peers = append(peers, pid)
}
}
@@ -548,7 +528,7 @@ func (p *Status) Disconnecting() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Disconnecting {
if peerData.ConnState == PeerDisconnecting {
peers = append(peers, pid)
}
}
@@ -561,7 +541,7 @@ func (p *Status) Disconnected() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Disconnected {
if peerData.ConnState == PeerDisconnected {
peers = append(peers, pid)
}
}
@@ -574,7 +554,7 @@ func (p *Status) Inactive() []peer.ID {
defer p.store.RUnlock()
peers := make([]peer.ID, 0)
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Disconnecting || peerData.ConnState == Disconnected {
if peerData.ConnState == PeerDisconnecting || peerData.ConnState == PeerDisconnected {
peers = append(peers, pid)
}
}
@@ -612,7 +592,7 @@ func (p *Status) Prune() {
return
}
notBadPeer := func(pid peer.ID) bool {
return p.isBad(pid) == nil
return !p.isBad(pid)
}
notTrustedPeer := func(pid peer.ID) bool {
return !p.isTrustedPeers(pid)
@@ -625,7 +605,7 @@ func (p *Status) Prune() {
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
// Should not prune a trusted peer or prune the peer data and unset the trusted peer.
if peerData.ConnState == Disconnected && notBadPeer(pid) && notTrustedPeer(pid) {
if peerData.ConnState == PeerDisconnected && notBadPeer(pid) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
score: p.Scorers().ScoreNoLock(pid),
@@ -677,7 +657,7 @@ func (p *Status) deprecatedPrune() {
// Select disconnected peers with a smaller bad response count.
for pid, peerData := range p.store.Peers() {
// Should not prune a trusted peer or prune the peer data and unset the trusted peer.
if peerData.ConnState == Disconnected && notBadPeer(peerData) && notTrustedPeer(pid) {
if peerData.ConnState == PeerDisconnected && notBadPeer(peerData) && notTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
badResp: peerData.BadResponses,
@@ -834,7 +814,7 @@ func (p *Status) PeersToPrune() []peer.ID {
peersToPrune := make([]*peerResp, 0)
// Select connected and inbound peers to prune.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected &&
if peerData.ConnState == PeerConnected &&
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
@@ -900,7 +880,7 @@ func (p *Status) deprecatedPeersToPrune() []peer.ID {
peersToPrune := make([]*peerResp, 0)
// Select connected and inbound peers to prune.
for pid, peerData := range p.store.Peers() {
if peerData.ConnState == Connected &&
if peerData.ConnState == PeerConnected &&
peerData.Direction == network.DirInbound && !p.store.IsTrustedPeer(pid) {
peersToPrune = append(peersToPrune, &peerResp{
pid: pid,
@@ -1002,28 +982,24 @@ func (p *Status) isTrustedPeers(pid peer.ID) bool {
// this method assumes the store lock is acquired before
// it is executed.
func (p *Status) isfromBadIP(pid peer.ID) error {
func (p *Status) isfromBadIP(pid peer.ID) bool {
peerData, ok := p.store.PeerData(pid)
if !ok {
return nil
return false
}
if peerData.Address == nil {
return nil
return false
}
ip, err := manet.ToIP(peerData.Address)
if err != nil {
return errors.Wrap(err, "to ip")
return true
}
if val, ok := p.ipTracker[ip.String()]; ok {
if val > CollocationLimit {
return errors.Errorf("collocation limit exceeded: got %d - limit %d", val, CollocationLimit)
return true
}
}
return nil
return false
}
func (p *Status) addIpToTracker(pid peer.ID) {

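The boolean isfromBadIP folds two distinct conditions, an unresolvable multiaddr and the collocation limit, into a single true. The collocation rule on its own, as a sketch assuming a tracker map keyed by IP string like p.ipTracker:

func overCollocationLimit(tracker map[string]int, ip string) bool {
	return tracker[ip] > CollocationLimit
}
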
View File

@@ -215,7 +215,7 @@ func TestPeerSubscribedToSubnet(t *testing.T) {
// Add some peers with different states
numPeers := 2
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.Connected)
addPeer(t, p, peers.PeerConnected)
}
expectedPeer := p.All()[1]
bitV := bitfield.NewBitvector64()
@@ -230,7 +230,7 @@ func TestPeerSubscribedToSubnet(t *testing.T) {
}))
numPeers = 3
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.Disconnected)
addPeer(t, p, peers.PeerDisconnected)
}
ps := p.SubscribedToSubnet(2)
assert.Equal(t, 1, len(ps), "Unexpected num of peers")
@@ -259,7 +259,7 @@ func TestPeerImplicitAdd(t *testing.T) {
id, err := peer.Decode("16Uiu2HAkyWZ4Ni1TpvDS8dPxsozmHY85KaiFjodQuV6Tz5tkHVeR")
require.NoError(t, err)
connectionState := peers.Connecting
connectionState := peers.PeerConnecting
p.SetConnectionState(id, connectionState)
resConnectionState, err := p.ConnectionState(id)
@@ -347,7 +347,7 @@ func TestPeerBadResponses(t *testing.T) {
require.NoError(t, err)
}
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
address, err := ma.NewMultiaddr("/ip4/213.202.254.180/tcp/13000")
require.NoError(t, err, "Failed to create address")
@@ -358,25 +358,25 @@ func TestPeerBadResponses(t *testing.T) {
resBadResponses, err := scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 0, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 1, resBadResponses, "Unexpected bad responses")
assert.NoError(t, p.IsBad(id), "Peer marked as bad when should be good")
assert.Equal(t, false, p.IsBad(id), "Peer marked as bad when should be good")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 2, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
assert.Equal(t, true, p.IsBad(id), "Peer not marked as bad when it should be")
scorer.Increment(id)
resBadResponses, err = scorer.Count(id)
require.NoError(t, err)
assert.Equal(t, 3, resBadResponses, "Unexpected bad responses")
assert.NotNil(t, p.IsBad(id), "Peer not marked as bad when it should be")
assert.Equal(t, true, p.IsBad(id), "Peer not marked as bad when it should be")
}
func TestAddMetaData(t *testing.T) {
@@ -393,7 +393,7 @@ func TestAddMetaData(t *testing.T) {
// Add some peers with different states
numPeers := 5
for i := 0; i < numPeers; i++ {
addPeer(t, p, peers.Connected)
addPeer(t, p, peers.PeerConnected)
}
newPeer := p.All()[2]
@@ -422,19 +422,19 @@ func TestPeerConnectionStatuses(t *testing.T) {
// Add some peers with different states
numPeersDisconnected := 11
for i := 0; i < numPeersDisconnected; i++ {
addPeer(t, p, peers.Disconnected)
addPeer(t, p, peers.PeerDisconnected)
}
numPeersConnecting := 7
for i := 0; i < numPeersConnecting; i++ {
addPeer(t, p, peers.Connecting)
addPeer(t, p, peers.PeerConnecting)
}
numPeersConnected := 43
for i := 0; i < numPeersConnected; i++ {
addPeer(t, p, peers.Connected)
addPeer(t, p, peers.PeerConnected)
}
numPeersDisconnecting := 4
for i := 0; i < numPeersDisconnecting; i++ {
addPeer(t, p, peers.Disconnecting)
addPeer(t, p, peers.PeerDisconnecting)
}
// Now confirm the states
@@ -463,7 +463,7 @@ func TestPeerValidTime(t *testing.T) {
numPeersConnected := 6
for i := 0; i < numPeersConnected; i++ {
addPeer(t, p, peers.Connected)
addPeer(t, p, peers.PeerConnected)
}
allPeers := p.All()
@@ -510,10 +510,10 @@ func TestPrune(t *testing.T) {
for i := 0; i < p.MaxPeerLimit()+100; i++ {
if i%7 == 0 {
// Peer added as disconnected.
_ = addPeer(t, p, peers.Disconnected)
_ = addPeer(t, p, peers.PeerDisconnected)
}
// Peer added to peer handler.
_ = addPeer(t, p, peers.Connected)
_ = addPeer(t, p, peers.PeerConnected)
}
disPeers := p.Disconnected()
@@ -571,23 +571,23 @@ func TestPeerIPTracker(t *testing.T) {
if err != nil {
t.Fatal(err)
}
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.ConnectionState(ethpb.ConnectionState_DISCONNECTED)))
badPeers = append(badPeers, createPeer(t, p, addr, network.DirUnknown, peerdata.PeerConnectionState(ethpb.ConnectionState_DISCONNECTED)))
}
for _, pr := range badPeers {
assert.NotNil(t, p.IsBad(pr), "peer with bad ip is not bad")
assert.Equal(t, true, p.IsBad(pr), "peer with bad ip is not bad")
}
// Add in bad peers, so that our records are trimmed out
// from the peer store.
for i := 0; i < p.MaxPeerLimit()+100; i++ {
// Peer added to peer handler.
pid := addPeer(t, p, peers.Disconnected)
pid := addPeer(t, p, peers.PeerDisconnected)
p.Scorers().BadResponsesScorer().Increment(pid)
}
p.Prune()
for _, pr := range badPeers {
assert.NoError(t, p.IsBad(pr), "peer with good ip is regarded as bad")
assert.Equal(t, false, p.IsBad(pr), "peer with good ip is regarded as bad")
}
}
@@ -601,11 +601,8 @@ func TestTrimmedOrderedPeers(t *testing.T) {
},
})
const (
expectedTarget = primitives.Epoch(2)
maxPeers = 3
)
expectedTarget := primitives.Epoch(2)
maxPeers := 3
var mockroot2 [32]byte
var mockroot3 [32]byte
var mockroot4 [32]byte
@@ -614,41 +611,36 @@ func TestTrimmedOrderedPeers(t *testing.T) {
copy(mockroot3[:], "three")
copy(mockroot4[:], "four")
copy(mockroot5[:], "five")
// Peer 1
pid1 := addPeer(t, p, peers.Connected)
pid1 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid1, &pb.Status{
HeadSlot: 3 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 3,
FinalizedRoot: mockroot3[:],
})
// Peer 2
pid2 := addPeer(t, p, peers.Connected)
pid2 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid2, &pb.Status{
HeadSlot: 4 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 4,
FinalizedRoot: mockroot4[:],
})
// Peer 3
pid3 := addPeer(t, p, peers.Connected)
pid3 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid3, &pb.Status{
HeadSlot: 5 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 5,
FinalizedRoot: mockroot5[:],
})
// Peer 4
pid4 := addPeer(t, p, peers.Connected)
pid4 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid4, &pb.Status{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
FinalizedRoot: mockroot2[:],
})
// Peer 5
pid5 := addPeer(t, p, peers.Connected)
pid5 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid5, &pb.Status{
HeadSlot: 2 * params.BeaconConfig().SlotsPerEpoch,
FinalizedEpoch: 2,
@@ -688,12 +680,12 @@ func TestAtInboundPeerLimit(t *testing.T) {
})
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
assert.Equal(t, false, p.IsAboveInboundLimit(), "Inbound limit exceeded")
for i := 0; i < 31; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
assert.Equal(t, true, p.IsAboveInboundLimit(), "Inbound limit not exceeded")
}
@@ -713,7 +705,7 @@ func TestPrunePeers(t *testing.T) {
})
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are no prunable peers.
peersToPrune := p.PeersToPrune()
@@ -721,7 +713,7 @@ func TestPrunePeers(t *testing.T) {
for i := 0; i < 18; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are the correct prunable peers.
@@ -731,7 +723,7 @@ func TestPrunePeers(t *testing.T) {
// Add in more peers.
for i := 0; i < 13; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Set up bad scores for inbound peers.
@@ -775,7 +767,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
for i := 0; i < 15; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirOutbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirOutbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are no prunable peers.
peersToPrune := p.PeersToPrune()
@@ -783,7 +775,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
for i := 0; i < 18; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Assert there are the correct prunable peers.
@@ -793,7 +785,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
// Add in more peers.
for i := 0; i < 13; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
var trustedPeers []peer.ID
@@ -829,7 +821,7 @@ func TestPrunePeers_TrustedPeers(t *testing.T) {
// Add more peers to check if trusted peers can be pruned after they are deleted from trusted peer set.
for i := 0; i < 9; i++ {
// Peer added to peer handler.
createPeer(t, p, nil, network.DirInbound, peerdata.ConnectionState(ethpb.ConnectionState_CONNECTED))
createPeer(t, p, nil, network.DirInbound, peerdata.PeerConnectionState(ethpb.ConnectionState_CONNECTED))
}
// Delete trusted peers.
@@ -873,14 +865,14 @@ func TestStatus_BestPeer(t *testing.T) {
headSlot primitives.Slot
finalizedEpoch primitives.Epoch
}
tests := []struct {
name string
peers []*peerConfig
limitPeers int
ourFinalizedEpoch primitives.Epoch
targetEpoch primitives.Epoch
targetEpochSupport int // Denotes how many peers support returned epoch.
name string
peers []*peerConfig
limitPeers int
ourFinalizedEpoch primitives.Epoch
targetEpoch primitives.Epoch
// targetEpochSupport denotes how many peers support returned epoch.
targetEpochSupport int
}{
{
name: "head slot matches finalized epoch",
@@ -893,7 +885,6 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 3 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -911,7 +902,6 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 4 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -926,7 +916,6 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 42 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 0,
targetEpoch: 4,
targetEpochSupport: 4,
},
@@ -941,8 +930,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 3, headSlot: 46 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 6, headSlot: 6 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 5,
limitPeers: 15,
targetEpoch: 6,
targetEpochSupport: 1,
},
@@ -961,8 +950,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 7, headSlot: 7 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 5,
limitPeers: 15,
targetEpoch: 6,
targetEpochSupport: 5,
},
@@ -981,8 +970,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 7, headSlot: 7 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 4,
ourFinalizedEpoch: 5,
limitPeers: 4,
targetEpoch: 6,
targetEpochSupport: 4,
},
@@ -997,8 +986,8 @@ func TestStatus_BestPeer(t *testing.T) {
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
{finalizedEpoch: 8, headSlot: 8 * params.BeaconConfig().SlotsPerEpoch},
},
limitPeers: 15,
ourFinalizedEpoch: 5,
limitPeers: 15,
targetEpoch: 8,
targetEpochSupport: 3,
},
@@ -1013,7 +1002,7 @@ func TestStatus_BestPeer(t *testing.T) {
},
})
for _, peerConfig := range tt.peers {
p.SetChainState(addPeer(t, p, peers.Connected), &pb.Status{
p.SetChainState(addPeer(t, p, peers.PeerConnected), &pb.Status{
FinalizedEpoch: peerConfig.finalizedEpoch,
HeadSlot: peerConfig.headSlot,
})
@@ -1039,7 +1028,7 @@ func TestBestFinalized_returnsMaxValue(t *testing.T) {
for i := 0; i <= maxPeers+100; i++ {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetConnectionState(peer.ID(rune(i)), peers.PeerConnected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
FinalizedEpoch: 10,
})
@@ -1062,7 +1051,7 @@ func TestStatus_BestNonFinalized(t *testing.T) {
peerSlots := []primitives.Slot{32, 32, 32, 32, 235, 233, 258, 268, 270}
for i, headSlot := range peerSlots {
p.Add(new(enr.Record), peer.ID(rune(i)), nil, network.DirOutbound)
p.SetConnectionState(peer.ID(rune(i)), peers.Connected)
p.SetConnectionState(peer.ID(rune(i)), peers.PeerConnected)
p.SetChainState(peer.ID(rune(i)), &pb.Status{
HeadSlot: headSlot,
})
@@ -1085,17 +1074,17 @@ func TestStatus_CurrentEpoch(t *testing.T) {
},
})
// Peer 1
pid1 := addPeer(t, p, peers.Connected)
pid1 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid1, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})
// Peer 2
pid2 := addPeer(t, p, peers.Connected)
pid2 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid2, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 5,
})
// Peer 3
pid3 := addPeer(t, p, peers.Connected)
pid3 := addPeer(t, p, peers.PeerConnected)
p.SetChainState(pid3, &pb.Status{
HeadSlot: params.BeaconConfig().SlotsPerEpoch * 4,
})
@@ -1114,8 +1103,8 @@ func TestInbound(t *testing.T) {
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.Connected)
createPeer(t, p, addr, network.DirOutbound, peers.Connected)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
result := p.Inbound()
require.Equal(t, 1, len(result))
@@ -1134,8 +1123,8 @@ func TestInboundConnected(t *testing.T) {
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirInbound, peers.Connected)
createPeer(t, p, addr, network.DirInbound, peers.Connecting)
inbound := createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnecting)
result := p.InboundConnected()
require.Equal(t, 1, len(result))
@@ -1168,7 +1157,7 @@ func TestInboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.Connected)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
@@ -1177,7 +1166,7 @@ func TestInboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.Connected)
peer := createPeer(t, p, multiaddr, network.DirInbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
@@ -1214,8 +1203,8 @@ func TestOutbound(t *testing.T) {
})
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
createPeer(t, p, addr, network.DirInbound, peers.Connected)
outbound := createPeer(t, p, addr, network.DirOutbound, peers.Connected)
createPeer(t, p, addr, network.DirInbound, peers.PeerConnected)
outbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
result := p.Outbound()
require.Equal(t, 1, len(result))
@@ -1234,8 +1223,8 @@ func TestOutboundConnected(t *testing.T) {
addr, err := ma.NewMultiaddr("/ip4/127.0.0.1/tcp/33333")
require.NoError(t, err)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.Connected)
createPeer(t, p, addr, network.DirOutbound, peers.Connecting)
inbound := createPeer(t, p, addr, network.DirOutbound, peers.PeerConnected)
createPeer(t, p, addr, network.DirOutbound, peers.PeerConnecting)
result := p.OutboundConnected()
require.Equal(t, 1, len(result))
@@ -1268,7 +1257,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.Connected)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedTCP[peer.String()] = true
}
@@ -1277,7 +1266,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
multiaddr, err := ma.NewMultiaddr(addr)
require.NoError(t, err)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.Connected)
peer := createPeer(t, p, multiaddr, network.DirOutbound, peers.PeerConnected)
expectedQUIC[peer.String()] = true
}
@@ -1304,7 +1293,7 @@ func TestOutboundConnectedWithProtocol(t *testing.T) {
}
// addPeer is a helper to add a peer with a given connection state.
func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState) peer.ID {
func addPeer(t *testing.T, p *peers.Status, state peerdata.PeerConnectionState) peer.ID {
// Set up some peers with different states
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
@@ -1323,7 +1312,7 @@ func addPeer(t *testing.T, p *peers.Status, state peerdata.ConnectionState) peer
}
func createPeer(t *testing.T, p *peers.Status, addr ma.Multiaddr,
dir network.Direction, state peerdata.ConnectionState) peer.ID {
dir network.Direction, state peerdata.PeerConnectionState) peer.ID {
mhBytes := []byte{0x11, 0x04}
idBytes := make([]byte, 4)
_, err := rand.Read(idBytes)

View File

@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %w", err)
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %w", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %w", err)
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %w", err)
}
return directAddrInfos, nil
}

View File

@@ -10,25 +10,15 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/encoder"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/sirupsen/logrus"
)
var _ pubsub.SubscriptionFilter = (*Service)(nil)
// It is set at this limit to handle the possibility
// of double topic subscriptions at fork boundaries.
// -> BeaconBlock * 2 = 2
// -> BeaconAggregateAndProof * 2 = 2
// -> VoluntaryExit * 2 = 2
// -> ProposerSlashing * 2 = 2
// -> AttesterSlashing * 2 = 2
// -> 64 Beacon Attestation * 2 = 128
// -> SyncContributionAndProof * 2 = 2
// -> 4 SyncCommitteeSubnets * 2 = 8
// -> BlsToExecutionChange * 2 = 2
// -> 6 BlobSidecar * 2 = 12
// -------------------------------------
// TOTAL = 162
// -> 64 Attestation Subnets * 2.
// -> 4 Sync Committee Subnets * 2.
// -> Block, Aggregate, ProposerSlashing, AttesterSlashing, Exits, SyncContribution * 2.
const pubsubSubscriptionRequestLimit = 200
// CanSubscribe returns true if the topic is of interest and we could subscribe to it.
@@ -105,15 +95,8 @@ func (s *Service) CanSubscribe(topic string) bool {
// FilterIncomingSubscriptions is invoked for all RPCs containing subscription notifications.
// This method returns only the topics of interest and may return an error if the subscription
// request contains too many topics.
func (s *Service) FilterIncomingSubscriptions(peerID peer.ID, subs []*pubsubpb.RPC_SubOpts) ([]*pubsubpb.RPC_SubOpts, error) {
func (s *Service) FilterIncomingSubscriptions(_ peer.ID, subs []*pubsubpb.RPC_SubOpts) ([]*pubsubpb.RPC_SubOpts, error) {
if len(subs) > pubsubSubscriptionRequestLimit {
subsCount := len(subs)
log.WithFields(logrus.Fields{
"peerID": peerID,
"subscriptionCounts": subsCount,
"subscriptionLimit": pubsubSubscriptionRequestLimit,
}).Debug("Too many incoming subscriptions, filtering them")
return nil, pubsub.ErrTooManySubscriptions
}

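The deleted tally can be sanity-checked: ten topic families, doubled at the fork boundary, leave headroom under the limit of 200. The arithmetic, with counts taken from the removed comment:

const (
	topicsPerFork = 1 + 1 + 1 + 1 + 1 + 64 + 1 + 4 + 1 + 6 // block, aggregate, exit, both slashings, att subnets, sync contribution, sync subnets, bls change, blobs = 81
	doubledAtFork = topicsPerFork * 2                      // = 162, under pubsubSubscriptionRequestLimit (200)
)
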
View File

@@ -43,10 +43,6 @@ var _ runtime.Service = (*Service)(nil)
// defined below.
var pollingPeriod = 6 * time.Second
// When looking for new nodes, if not enough nodes are found,
// we stop after this number of iterations.
var batchSize = 2_000
// Refresh rate of ENR set at twice per slot.
var refreshRate = slots.DivideSlotBy(2)
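On mainnet's 12-second slots this works out to a 6-second cadence, i.e. twice per slot, assuming DivideSlotBy divides the slot duration by its argument:

const refresh = 12 * time.Second / 2 // what slots.DivideSlotBy(2) yields with 12s slots
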
@@ -206,13 +202,12 @@ func (s *Service) Start() {
s.startupErr = err
return
}
if err := s.connectToBootnodes(); err != nil {
log.WithError(err).Error("Could not connect to boot nodes")
err = s.connectToBootnodes()
if err != nil {
log.WithError(err).Error("Could not add bootnode to the exclusion list")
s.startupErr = err
return
}
s.dv5Listener = listener
go s.listenForNewNodes()
}
@@ -231,7 +226,7 @@ func (s *Service) Start() {
}
// Initialize metadata according to the
// current epoch.
s.RefreshPersistentSubnets()
s.RefreshENR()
// Periodic functions.
async.RunEvery(s.ctx, params.BeaconConfig().TtfbTimeoutDuration(), func() {
@@ -239,7 +234,7 @@ func (s *Service) Start() {
})
async.RunEvery(s.ctx, 30*time.Minute, s.Peers().Prune)
async.RunEvery(s.ctx, time.Duration(params.BeaconConfig().RespTimeout)*time.Second, s.updateMetrics)
async.RunEvery(s.ctx, refreshRate, s.RefreshPersistentSubnets)
async.RunEvery(s.ctx, refreshRate, s.RefreshENR)
async.RunEvery(s.ctx, 1*time.Minute, func() {
inboundQUICCount := len(s.peers.InboundConnectedWithProtocol(peers.QUIC))
inboundTCPCount := len(s.peers.InboundConnectedWithProtocol(peers.TCP))
@@ -389,17 +384,12 @@ func (s *Service) AddPingMethod(reqFunc func(ctx context.Context, id peer.ID) er
s.pingMethodLock.Unlock()
}
func (s *Service) pingPeersAndLogEnr() {
func (s *Service) pingPeers() {
s.pingMethodLock.RLock()
defer s.pingMethodLock.RUnlock()
localENR := s.dv5Listener.Self()
log.WithField("ENR", localENR).Info("New node record")
if s.pingMethod == nil {
return
}
for _, pid := range s.peers.Connected() {
go func(id peer.ID) {
if err := s.pingMethod(s.ctx, id); err != nil {
@@ -472,8 +462,8 @@ func (s *Service) connectWithPeer(ctx context.Context, info peer.AddrInfo) error
if info.ID == s.host.ID() {
return nil
}
if err := s.Peers().IsBad(info.ID); err != nil {
return errors.Wrap(err, "refused to connect to bad peer")
if s.Peers().IsBad(info.ID) {
return errors.New("refused to connect to bad peer")
}
ctx, cancel := context.WithTimeout(ctx, maxDialTimeout)
defer cancel()

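The guard in connectWithPeer reads as a small template: refuse known-bad peers, then bound the dial with a timeout. A condensed sketch, assuming maxDialTimeout and the libp2p host from this package:

func (s *Service) dialGuarded(ctx context.Context, info peer.AddrInfo) error {
	if s.Peers().IsBad(info.ID) {
		return errors.New("refused to connect to bad peer")
	}
	ctx, cancel := context.WithTimeout(ctx, maxDialTimeout)
	defer cancel()
	return s.host.Connect(ctx, info)
}
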
View File

@@ -2,7 +2,6 @@ package p2p
import (
"context"
"math"
"strings"
"sync"
"time"
@@ -20,24 +19,22 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
var (
attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
var attestationSubnetCount = params.BeaconConfig().AttestationSubnetCount
var syncCommsSubnetCount = params.BeaconConfig().SyncCommitteeSubnetCount
attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
)
var attSubnetEnrKey = params.BeaconNetworkConfig().AttSubnetKey
var syncCommsSubnetEnrKey = params.BeaconNetworkConfig().SyncCommsSubnetKey
// The value used with the subnet, in order
// to create an appropriate key to retrieve
// the relevant lock. This is used to differentiate
// sync subnets from others. This is deliberately
// chosen as more than 64 (attestation subnet count).
// sync subnets from attestation subnets. This is deliberately
// chosen as more than 64 (attestation subnet count).
const syncLockerVal = 100
// The value used with the blob sidecar subnet, in order
@@ -47,77 +44,6 @@ const syncLockerVal = 100
// chosen as more than the sync and attestation subnet values combined.
const blobSubnetLockerVal = 110
// nodeFilter returns a function that filters nodes based on the subnet topic and subnet index.
func (s *Service) nodeFilter(topic string, index uint64) (func(node *enode.Node) bool, error) {
switch {
case strings.Contains(topic, GossipAttestationMessage):
return s.filterPeerForAttSubnet(index), nil
case strings.Contains(topic, GossipSyncCommitteeMessage):
return s.filterPeerForSyncSubnet(index), nil
default:
return nil, errors.Errorf("no subnet exists for provided topic: %s", topic)
}
}
// searchForPeers performs a network search for peers subscribed to a particular subnet.
// It exits as soon as one of these conditions is met:
// - It looped through `batchSize` nodes.
// - It found `peersToFindCount` peers corresponding to the `filter` criteria.
// - Iterator is exhausted.
func searchForPeers(
iterator enode.Iterator,
batchSize int,
peersToFindCount uint,
filter func(node *enode.Node) bool,
) []*enode.Node {
nodeFromNodeID := make(map[enode.ID]*enode.Node, batchSize)
for i := 0; i < batchSize && uint(len(nodeFromNodeID)) <= peersToFindCount && iterator.Next(); i++ {
node := iterator.Node()
// Filter out nodes that do not meet the criteria.
if !filter(node) {
continue
}
// Remove duplicates, keeping the node with higher seq.
prevNode, ok := nodeFromNodeID[node.ID()]
if ok && prevNode.Seq() > node.Seq() {
continue
}
nodeFromNodeID[node.ID()] = node
}
// Convert the map to a slice.
nodes := make([]*enode.Node, 0, len(nodeFromNodeID))
for _, node := range nodeFromNodeID {
nodes = append(nodes, node)
}
return nodes
}
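The deduplication step keeps, for each node ID, the record with the highest ENR sequence number; in isolation the rule is:

// Keep the freshest record per node ID (higher Seq wins).
if prev, ok := nodeFromNodeID[node.ID()]; !ok || node.Seq() >= prev.Seq() {
	nodeFromNodeID[node.ID()] = node
}
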
// dialPeer dials a peer in a separate goroutine.
func (s *Service) dialPeer(ctx context.Context, wg *sync.WaitGroup, node *enode.Node) {
info, _, err := convertToAddrInfo(node)
if err != nil {
return
}
if info == nil {
return
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}()
}
// FindPeersWithSubnet performs a network search for peers
// subscribed to a particular subnet. Then it tries to connect
// with those peers. This method will block until either:
@@ -126,104 +52,67 @@ func (s *Service) dialPeer(ctx context.Context, wg *sync.WaitGroup, node *enode.
// In some edge cases, this method may hang indefinitely even while peers
// are actually being found. In such a case, the user should cancel the context
// and re-run the method.
func (s *Service) FindPeersWithSubnet(
ctx context.Context,
topic string,
index uint64,
threshold int,
) (bool, error) {
const minLogInterval = 1 * time.Minute
func (s *Service) FindPeersWithSubnet(ctx context.Context, topic string,
index uint64, threshold int) (bool, error) {
ctx, span := trace.StartSpan(ctx, "p2p.FindPeersWithSubnet")
defer span.End()
span.SetAttributes(trace.Int64Attribute("index", int64(index))) // lint:ignore uintcast -- It's safe to do this for tracing.
if s.dv5Listener == nil {
// Return if discovery isn't set
// return if discovery isn't set
return false, nil
}
topic += s.Encoding().ProtocolSuffix()
iterator := s.dv5Listener.RandomNodes()
defer iterator.Close()
filter, err := s.nodeFilter(topic, index)
if err != nil {
return false, errors.Wrap(err, "node filter")
switch {
case strings.Contains(topic, GossipAttestationMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForAttSubnet(index))
case strings.Contains(topic, GossipSyncCommitteeMessage):
iterator = filterNodes(ctx, iterator, s.filterPeerForSyncSubnet(index))
default:
return false, errors.New("no subnet exists for provided topic")
}
peersSummary := func(topic string, threshold int) (int, int) {
// Retrieve how many peers we have for this topic.
peerCountForTopic := len(s.pubsub.ListPeers(topic))
// Compute how many peers we are missing to reach the threshold.
missingPeerCountForTopic := max(0, threshold-peerCountForTopic)
return peerCountForTopic, missingPeerCountForTopic
}
// Compute how many peers we are missing to reach the threshold.
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
// Exit early if we have enough peers.
if missingPeerCountForTopic == 0 {
return true, nil
}
log := log.WithFields(logrus.Fields{
"topic": topic,
"targetPeerCount": threshold,
})
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - start")
lastLogTime := time.Now()
wg := new(sync.WaitGroup)
for {
// If the context is done, we can exit the loop. This is the unhappy path.
if err := ctx.Err(); err != nil {
return false, errors.Errorf(
"unable to find requisite number of peers for topic %s - only %d out of %d peers available after searching",
topic, peerCountForTopic, threshold,
)
}
// Search for new peers in the network.
nodes := searchForPeers(iterator, batchSize, uint(missingPeerCountForTopic), filter)
// Restrict dials if limit is applied.
maxConcurrentDials := math.MaxInt
if flags.MaxDialIsActive() {
maxConcurrentDials = flags.Get().MaxConcurrentDials
}
// Dial the peers in batches.
for start := 0; start < len(nodes); start += maxConcurrentDials {
stop := min(start+maxConcurrentDials, len(nodes))
for _, node := range nodes[start:stop] {
s.dialPeer(ctx, wg, node)
}
// Wait for all dials to be completed.
wg.Wait()
}
peerCountForTopic, missingPeerCountForTopic := peersSummary(topic, threshold)
// If we have enough peers, we can exit the loop. This is the happy path.
if missingPeerCountForTopic == 0 {
currNum := len(s.pubsub.ListPeers(topic))
if currNum >= threshold {
break
}
if time.Since(lastLogTime) > minLogInterval {
lastLogTime = time.Now()
log.WithField("currentPeerCount", peerCountForTopic).Debug("Searching for new peers for a subnet - continue")
if err := ctx.Err(); err != nil {
return false, errors.Errorf("unable to find requisite number of peers for topic %s - "+
"only %d out of %d peers were able to be found", topic, currNum, threshold)
}
}
nodeCount := int(params.BeaconNetworkConfig().MinimumPeersInSubnetSearch)
// Restrict dials if limit is applied.
if flags.MaxDialIsActive() {
nodeCount = min(nodeCount, flags.Get().MaxConcurrentDials)
}
nodes := enode.ReadNodes(iterator, nodeCount)
for _, node := range nodes {
info, _, err := convertToAddrInfo(node)
if err != nil {
continue
}
log.WithField("currentPeerCount", threshold).Debug("Searching for new peers for a subnet - success")
if info == nil {
continue
}
wg.Add(1)
go func() {
if err := s.connectWithPeer(ctx, *info); err != nil {
log.WithError(err).Tracef("Could not connect with peer %s", info.String())
}
wg.Done()
}()
}
// Wait for all dials to be completed.
wg.Wait()
}
return true, nil
}
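Because the loop exits only on success or context cancellation, callers typically bound it with a deadline. A usage sketch, with topic, index, and threshold assumed to be in scope and the timeout purely illustrative:

ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
defer cancel()
if ok, err := s.FindPeersWithSubnet(ctx, topic, index, threshold); err != nil {
	log.WithError(err).Debug("Subnet peer search ended before reaching the threshold")
} else if ok {
	log.Debug("Enough subnet peers found")
}
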
@@ -267,17 +156,11 @@ func (s *Service) filterPeerForSyncSubnet(index uint64) func(node *enode.Node) b
// lower threshold to broadcast an object compared to searching
// for a subnet, so that even in the event of poor peer
// connectivity we can still broadcast an attestation.
func (s *Service) hasPeerWithSubnet(subnetTopic string) bool {
func (s *Service) hasPeerWithSubnet(topic string) bool {
// In the event peer threshold is lower, we will choose the lower
// threshold.
minPeers := min(1, flags.Get().MinimumPeersPerSubnet)
topic := subnetTopic + s.Encoding().ProtocolSuffix()
peersWithSubnet := s.pubsub.ListPeers(topic)
peersWithSubnetCount := len(peersWithSubnet)
enoughPeers := peersWithSubnetCount >= minPeers
return enoughPeers
minPeers := mathutil.Min(1, uint64(flags.Get().MinimumPeersPerSubnet))
return len(s.pubsub.ListPeers(topic+s.Encoding().ProtocolSuffix())) >= int(minPeers) // lint:ignore uintcast -- Min peers can be safely cast to int.
}
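Since the broadcast threshold is the smaller of 1 and the configured per-subnet minimum, a single connected peer on the topic normally suffices; only a configured value of 0 lowers it further. For example:

minPeers := min(1, 6) // --minimum-peers-per-subnet=6 still requires just one peer to broadcast
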
// Updates the service's discv5 listener record's attestation subnet
@@ -472,10 +355,10 @@ func syncBitvector(record *enr.Record) (bitfield.Bitvector4, error) {
// The subnet locker is a map which keeps track of all
// mutexes stored per subnet. This locker is re-used
// between both the attestation, sync and blob subnets.
// Sync subnets are stored by (subnet+syncLockerVal).
// Blob subnets are stored by (subnet+blobSubnetLockerVal).
// This is to prevent conflicts while allowing subnets
// between both the attestation and sync subnets. In
// order to differentiate between attestation and sync
// subnets, sync subnets are stored by (subnet+syncLockerVal). This
// is to prevent conflicts while allowing both subnets
// to use a single locker.
func (s *Service) subnetLocker(i uint64) *sync.RWMutex {
s.subnetsLockLock.Lock()

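Keying the shared map as described gives each subnet family a disjoint range, using the offsets defined earlier in this file; a sketch:

attLock := s.subnetLocker(i)                        // attestation subnet i -> key i (0..63)
syncLock := s.subnetLocker(i + syncLockerVal)       // sync subnet i        -> key i + 100
blobLock := s.subnetLocker(i + blobSubnetLockerVal) // blob subnet i        -> key i + 110
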
View File

@@ -27,7 +27,6 @@ go_library(
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",
"@com_github_libp2p_go_libp2p//:go_default_library",
"@com_github_libp2p_go_libp2p//config:go_default_library",
"@com_github_libp2p_go_libp2p//core:go_default_library",
"@com_github_libp2p_go_libp2p//core/connmgr:go_default_library",
"@com_github_libp2p_go_libp2p//core/control:go_default_library",

View File

@@ -27,148 +27,148 @@ func NewFuzzTestP2P() *FakeP2P {
}
// Encoding -- fake.
func (*FakeP2P) Encoding() encoder.NetworkEncoding {
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
// AddConnectionHandler -- fake.
func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
}
// AddDisconnectionHandler -- fake.
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
// AddPingMethod -- fake.
func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
}
// PeerID -- fake.
func (*FakeP2P) PeerID() peer.ID {
func (_ *FakeP2P) PeerID() peer.ID {
return "fake"
}
// ENR returns the enr of the local peer.
func (*FakeP2P) ENR() *enr.Record {
func (_ *FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// DiscoveryAddresses -- fake
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindPeersWithSubnet mocks the p2p func.
func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshPersistentSubnets mocks the p2p func.
func (*FakeP2P) RefreshPersistentSubnets() {}
// RefreshENR mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {}
// LeaveTopic -- fake.
func (*FakeP2P) LeaveTopic(_ string) error {
func (_ *FakeP2P) LeaveTopic(_ string) error {
return nil
}
// Metadata -- fake.
func (*FakeP2P) Metadata() metadata.Metadata {
func (_ *FakeP2P) Metadata() metadata.Metadata {
return nil
}
// Peers -- fake.
func (*FakeP2P) Peers() *peers.Status {
func (_ *FakeP2P) Peers() *peers.Status {
return nil
}
// PublishToTopic -- fake.
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil
}
// Send -- fake.
func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil
}
// PubSub -- fake.
func (*FakeP2P) PubSub() *pubsub.PubSub {
func (_ *FakeP2P) PubSub() *pubsub.PubSub {
return nil
}
// MetadataSeq -- fake.
func (*FakeP2P) MetadataSeq() uint64 {
func (_ *FakeP2P) MetadataSeq() uint64 {
return 0
}
// SetStreamHandler -- fake.
func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
}
// SubscribeToTopic -- fake.
func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil
}
// JoinTopic -- fake.
func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil
}
// Host -- fake.
func (*FakeP2P) Host() host.Host {
func (_ *FakeP2P) Host() host.Host {
return nil
}
// Disconnect -- fake.
func (*FakeP2P) Disconnect(_ peer.ID) error {
func (_ *FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// Broadcast -- fake.
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
// BroadcastAttestation -- fake.
func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil
}
// BroadcastBlob -- fake.
func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial -- fake.
func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept -- fake.
func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured -- fake.
func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded -- fake.
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}

View File

@@ -18,12 +18,12 @@ type MockHost struct {
}
// ID --
func (*MockHost) ID() peer.ID {
func (_ *MockHost) ID() peer.ID {
return ""
}
// Peerstore --
func (*MockHost) Peerstore() peerstore.Peerstore {
func (_ *MockHost) Peerstore() peerstore.Peerstore {
return nil
}
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
}
// Network --
func (*MockHost) Network() network.Network {
func (_ *MockHost) Network() network.Network {
return nil
}
// Mux --
func (*MockHost) Mux() protocol.Switch {
func (_ *MockHost) Mux() protocol.Switch {
return nil
}
// Connect --
func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil
}
// SetStreamHandler --
func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream --
func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil
}
// Close --
func (*MockHost) Close() error {
func (_ *MockHost) Close() error {
return nil
}
// ConnManager --
func (*MockHost) ConnManager() connmgr.ConnManager {
func (_ *MockHost) ConnManager() connmgr.ConnManager {
return nil
}
// EventBus --
func (*MockHost) EventBus() event.Bus {
func (_ *MockHost) EventBus() event.Bus {
return nil
}

View File

@@ -20,7 +20,7 @@ type MockPeerManager struct {
}
// Disconnect .
func (*MockPeerManager) Disconnect(peer.ID) error {
func (_ *MockPeerManager) Disconnect(peer.ID) error {
return nil
}
@@ -35,25 +35,25 @@ func (m *MockPeerManager) Host() host.Host {
}
// ENR .
func (m *MockPeerManager) ENR() *enr.Record {
func (m MockPeerManager) ENR() *enr.Record {
return m.Enr
}
// DiscoveryAddresses .
func (m *MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (m MockPeerManager) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
if m.FailDiscoveryAddr {
return nil, errors.New("fail")
}
return m.DiscoveryAddr, nil
}
// RefreshPersistentSubnets .
func (*MockPeerManager) RefreshPersistentSubnets() {}
// RefreshENR .
func (_ MockPeerManager) RefreshENR() {}
// FindPeersWithSubnet .
func (*MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (_ MockPeerManager) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return true, nil
}
// AddPingMethod .
func (*MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}
func (_ MockPeerManager) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {}

View File

@@ -64,7 +64,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
log.WithError(err).Debug("Cannot decode")
}
m.peers.Add(createENR(), id0, ma0, network.DirInbound)
m.peers.SetConnectionState(id0, peers.Connected)
m.peers.SetConnectionState(id0, peers.PeerConnected)
m.peers.SetChainState(id0, &pb.Status{FinalizedEpoch: 10})
id1, err := peer.Decode(MockRawPeerId1)
if err != nil {
@@ -75,7 +75,7 @@ func (m *MockPeersProvider) Peers() *peers.Status {
log.WithError(err).Debug("Cannot decode")
}
m.peers.Add(createENR(), id1, ma1, network.DirOutbound)
m.peers.SetConnectionState(id1, peers.Connected)
m.peers.SetConnectionState(id1, peers.PeerConnected)
m.peers.SetChainState(id1, &pb.Status{FinalizedEpoch: 11})
}
return m.peers

View File

@@ -10,11 +10,9 @@ import (
"testing"
"time"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/config"
core "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/control"
"github.com/libp2p/go-libp2p/core/host"
@@ -36,17 +34,13 @@ import (
// We have to declare this again here to prevent a circular dependency
// with the main p2p package.
const (
metadataV1Topic = "/eth2/beacon_chain/req/metadata/1"
metadataV2Topic = "/eth2/beacon_chain/req/metadata/2"
metadataV3Topic = "/eth2/beacon_chain/req/metadata/3"
)
const metatadataV1Topic = "/eth2/beacon_chain/req/metadata/1"
const metatadataV2Topic = "/eth2/beacon_chain/req/metadata/2"
// TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct {
t *testing.T
BHost host.Host
EnodeID enode.ID
pubsub *pubsub.PubSub
joinedTopics map[string]*pubsub.Topic
BroadcastCalled atomic.Bool
@@ -57,17 +51,9 @@ type TestP2P struct {
}
// NewTestP2P initializes a new p2p test service.
func NewTestP2P(t *testing.T, userOptions ...config.Option) *TestP2P {
func NewTestP2P(t *testing.T) *TestP2P {
ctx := context.Background()
options := []config.Option{
libp2p.ResourceManager(&network.NullResourceManager{}),
libp2p.Transport(tcp.NewTCPTransport),
libp2p.DefaultListenAddrs,
}
options = append(options, userOptions...)
h, err := libp2p.New(options...)
h, err := libp2p.New(libp2p.ResourceManager(&network.NullResourceManager{}), libp2p.Transport(tcp.NewTCPTransport), libp2p.DefaultListenAddrs)
require.NoError(t, err)
ps, err := pubsub.NewFloodSub(ctx, h,
pubsub.WithMessageSigning(false),
@@ -253,7 +239,7 @@ func (p *TestP2P) LeaveTopic(topic string) error {
}
// Encoding returns ssz encoding.
func (*TestP2P) Encoding() encoder.NetworkEncoding {
func (_ *TestP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
@@ -280,39 +266,34 @@ func (p *TestP2P) Host() host.Host {
}
// ENR returns the enr of the local peer.
func (*TestP2P) ENR() *enr.Record {
func (_ *TestP2P) ENR() *enr.Record {
return new(enr.Record)
}
// NodeID returns the node id of the local peer.
func (p *TestP2P) NodeID() enode.ID {
return p.EnodeID
}
// DiscoveryAddresses --
func (*TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (_ *TestP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// AddConnectionHandler handles the connection with a newly connected peer.
func (p *TestP2P) AddConnectionHandler(f, _ func(ctx context.Context, id peer.ID) error) {
p.BHost.Network().Notify(&network.NotifyBundle{
ConnectedF: func(_ network.Network, conn network.Conn) {
ConnectedF: func(net network.Network, conn network.Conn) {
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
p.peers.Add(new(enr.Record), conn.RemotePeer(), conn.RemoteMultiaddr(), conn.Stat().Direction)
ctx := context.Background()
p.peers.SetConnectionState(conn.RemotePeer(), peers.Connecting)
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnecting)
if err := f(ctx, conn.RemotePeer()); err != nil {
logrus.WithError(err).Error("Could not send successful hello rpc request")
if err := p.Disconnect(conn.RemotePeer()); err != nil {
logrus.WithError(err).Errorf("Unable to close peer %s", conn.RemotePeer())
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
return
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.Connected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerConnected)
}()
},
})
@@ -321,14 +302,14 @@ func (p *TestP2P) AddConnectionHandler(f, _ func(ctx context.Context, id peer.ID
// AddDisconnectionHandler --
func (p *TestP2P) AddDisconnectionHandler(f func(ctx context.Context, id peer.ID) error) {
p.BHost.Network().Notify(&network.NotifyBundle{
DisconnectedF: func(_ network.Network, conn network.Conn) {
DisconnectedF: func(net network.Network, conn network.Conn) {
// Must be handled in a goroutine as this callback cannot be blocking.
go func() {
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnecting)
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnecting)
if err := f(context.Background(), conn.RemotePeer()); err != nil {
logrus.WithError(err).Debug("Unable to invoke callback")
}
p.peers.SetConnectionState(conn.RemotePeer(), peers.Disconnected)
p.peers.SetConnectionState(conn.RemotePeer(), peers.PeerDisconnected)
}()
},
})
@@ -336,8 +317,6 @@ func (p *TestP2P) AddDisconnectionHandler(f func(ctx context.Context, id peer.ID
// Send a message to a specific peer.
func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid peer.ID) (network.Stream, error) {
metadataTopics := map[string]bool{metadataV1Topic: true, metadataV2Topic: true, metadataV3Topic: true}
t := topic
if t == "" {
return nil, fmt.Errorf("protocol doesn't exist for proto message: %v", msg)
@@ -347,7 +326,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
return nil, err
}
if !metadataTopics[topic] {
if topic != metatadataV1Topic && topic != metatadataV2Topic {
castedMsg, ok := msg.(ssz.Marshaler)
if !ok {
p.t.Fatalf("%T doesn't support ssz marshaler", msg)
@@ -374,7 +353,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
}
// Started always returns true.
func (*TestP2P) Started() bool {
func (_ *TestP2P) Started() bool {
return true
}
@@ -384,12 +363,12 @@ func (p *TestP2P) Peers() *peers.Status {
}
// FindPeersWithSubnet mocks the p2p func.
func (*TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (_ *TestP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshPersistentSubnets mocks the p2p func.
func (*TestP2P) RefreshPersistentSubnets() {}
// RefreshENR mocks the p2p func.
func (_ *TestP2P) RefreshENR() {}
// ForkDigest mocks the p2p func.
func (p *TestP2P) ForkDigest() ([4]byte, error) {
@@ -407,31 +386,31 @@ func (p *TestP2P) MetadataSeq() uint64 {
}
// AddPingMethod mocks the p2p func.
func (*TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (_ *TestP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
// no-op
}
// InterceptPeerDial .
func (*TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (_ *TestP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial .
func (*TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (_ *TestP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept .
func (*TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (_ *TestP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured .
func (*TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (_ *TestP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded .
func (*TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (_ *TestP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}
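The Intercept* methods above make TestP2P satisfy libp2p's connection-gater interface while admitting everything. A standalone sketch of that permissive gater, with a compile-time interface assertion (assuming the current go-libp2p core packages; this is not Prysm code):

package main

import (
	"github.com/libp2p/go-libp2p/core/connmgr"
	"github.com/libp2p/go-libp2p/core/control"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/peer"
	"github.com/multiformats/go-multiaddr"
)

// allowAllGater admits every dial, accept, handshake, and upgrade.
type allowAllGater struct{}

// Compile-time proof that the five methods cover the full interface.
var _ connmgr.ConnectionGater = allowAllGater{}

func (allowAllGater) InterceptPeerDial(peer.ID) bool { return true }

func (allowAllGater) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) bool { return true }

func (allowAllGater) InterceptAccept(network.ConnMultiaddrs) bool { return true }

func (allowAllGater) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) bool {
	return true
}

func (allowAllGater) InterceptUpgraded(network.Conn) (bool, control.DisconnectReason) {
	return true, 0
}

func main() {} // compile-only sketch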

View File

@@ -12,15 +12,10 @@ import (
"path"
"time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file"
@@ -67,7 +62,6 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}
@@ -77,8 +71,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
// If the StaticPeerID flag is not set, return the private key.
if !cfg.StaticPeerID {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -95,7 +89,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
log.Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath)
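The hunk above persists a freshly generated network key and then immediately reads it back, so the next start is guaranteed to observe exactly what was written. A minimal sketch of that persist-then-reload round trip using go-ethereum's ECDSA file helpers (an assumption for illustration; Prysm's actual implementation routes through its own ecdsa conversion package):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	gCrypto "github.com/ethereum/go-ethereum/crypto"
)

func main() {
	dir, err := os.MkdirTemp("", "netkey")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	path := filepath.Join(dir, "network-keys")

	key, err := gCrypto.GenerateKey() // secp256k1 private key
	if err != nil {
		panic(err)
	}
	// Write the key, then read it back for the strongest guarantee that a
	// restart sees the same identity that was just persisted.
	if err := gCrypto.SaveECDSA(path, key); err != nil {
		panic(err)
	}
	reloaded, err := gCrypto.LoadECDSA(path)
	if err != nil {
		panic(err)
	}
	fmt.Println(key.D.Cmp(reloaded.D) == 0) // true
}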
@@ -179,27 +173,3 @@ func verifyConnectivity(addr string, port uint, protocol string) {
}
}
}
// ConvertPeerIDToNodeID converts a peer ID (libp2p) to a node ID (devp2p).
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the public key object of the peer under "crypto" form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrapf(err, "extract public key from peer ID `%s`", pid)
}
// Extract the bytes representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Retrieve the public key object of the peer under "SECP256K1" form.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}
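The removed ConvertPeerIDToNodeID ends in enode.PubkeyToIDV4, which is keccak256 over the uncompressed secp256k1 public key (X || Y, 64 bytes). A small go-ethereum sketch verifying that relationship; the equality check is the point, nothing here is Prysm-specific:

package main

import (
	"crypto/ecdsa"
	"fmt"

	gCrypto "github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/p2p/enode"
)

// nodeID hashes the 64-byte X||Y form of the key, dropping the 0x04 prefix
// that FromECDSAPub prepends.
func nodeID(pub *ecdsa.PublicKey) enode.ID {
	uncompressed := gCrypto.FromECDSAPub(pub) // 65 bytes: 0x04 || X || Y
	var id enode.ID
	copy(id[:], gCrypto.Keccak256(uncompressed[1:]))
	return id
}

func main() {
	key, err := gCrypto.GenerateKey()
	if err != nil {
		panic(err)
	}
	fmt.Println(nodeID(&key.PublicKey) == enode.PubkeyToIDV4(&key.PublicKey)) // true
}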

View File

@@ -6,7 +6,6 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -65,19 +64,3 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err)
})
}
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"endpoints.go",
"log.go",
"metrics.go",
"service.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc",

View File

@@ -381,12 +381,21 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedAggregateSelectionProof")
defer span.End()
if agg == nil || agg.IsNil() {
if agg == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
attAndProof := agg.AggregateAttestationAndProof()
if attAndProof == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
att := attAndProof.AggregateVal()
if att == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
data := att.GetData()
if data == nil {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
emptySig := make([]byte, fieldparams.BLSSignatureLength)
if bytes.Equal(agg.GetSignature(), emptySig) || bytes.Equal(attAndProof.GetSelectionProof(), emptySig) {
return &RpcError{Err: errors.New("signed signatures can't be zero hashes"), Reason: BadRequest}

View File

@@ -34,48 +34,18 @@ type endpoint struct {
methods []string
}
// responseWriter is the wrapper to http Response writer.
type responseWriter struct {
http.ResponseWriter
statusCode int
}
// WriteHeader wraps the WriteHeader method of the underlying http.ResponseWriter to capture the status code.
// See the WriteHeader documentation: https://pkg.go.dev/net/http@go1.23.3#ResponseWriter.
func (w *responseWriter) WriteHeader(statusCode int) {
w.statusCode = statusCode
w.ResponseWriter.WriteHeader(statusCode)
}
func (e *endpoint) handlerWithMiddleware() http.HandlerFunc {
handler := http.Handler(e.handler)
for _, m := range e.middleware {
handler = m(handler)
}
handler = promhttp.InstrumentHandlerDuration(
return promhttp.InstrumentHandlerDuration(
httpRequestLatency.MustCurryWith(prometheus.Labels{"endpoint": e.name}),
promhttp.InstrumentHandlerCounter(
httpRequestCount.MustCurryWith(prometheus.Labels{"endpoint": e.name}),
handler,
),
)
return func(w http.ResponseWriter, r *http.Request) {
// SSE errors are handled separately to avoid interference with the streaming
// mechanism and ensure accurate error tracking.
if e.template == "/eth/v1/events" {
handler.ServeHTTP(w, r)
return
}
rw := &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
handler.ServeHTTP(rw, r)
if rw.statusCode >= 400 {
httpErrorCount.WithLabelValues(r.URL.Path, http.StatusText(rw.statusCode), r.Method).Inc()
}
}
}
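The removed block combines two standard net/http patterns: a ResponseWriter wrapper that records the status code, and an outer closure that reports >=400 responses. A dependency-free sketch of the same shape (the count callback is a stand-in for the Prometheus counter, not an actual Prysm API):

package main

import (
	"fmt"
	"log"
	"net/http"
)

// statusRecorder remembers the last status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// withErrorCounting serves next and reports every >=400 response to count.
// Handlers that never call WriteHeader implicitly write 200, which the
// StatusOK default covers.
func withErrorCounting(next http.Handler, count func(path, status, method string)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, req)
		if rec.status >= 400 {
			count(req.URL.Path, http.StatusText(rec.status), req.Method)
		}
	})
}

func main() {
	h := withErrorCounting(http.NotFoundHandler(), func(path, status, method string) {
		fmt.Println(path, status, method) // e.g. "/missing Not Found GET"
	})
	log.Fatal(http.ListenAndServe("localhost:8080", h))
}

One caveat of this wrapper is that it hides optional interfaces such as http.Flusher, which is plausibly why the removed code routed the streaming /eth/v1/events endpoint around it.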
func (s *Service) endpoints(
@@ -172,11 +142,9 @@ func (s *Service) builderEndpoints(stater lookup.Stater) []endpoint {
}
}
func (s *Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
func (*Service) blobEndpoints(blocker lookup.Blocker) []endpoint {
server := &blob.Server{
Blocker: blocker,
OptimisticModeFetcher: s.cfg.OptimisticModeFetcher,
FinalizationFetcher: s.cfg.FinalizationFetcher,
Blocker: blocker,
}
const namespace = "blob"
@@ -231,15 +199,6 @@ func (s *Service) validatorEndpoints(
handler: server.GetAggregateAttestation,
methods: []string{http.MethodGet},
},
{
template: "/eth/v2/validator/aggregate_attestation",
name: namespace + ".GetAggregateAttestationV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetAggregateAttestationV2,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/validator/contribution_and_proofs",
name: namespace + ".SubmitContributionAndProofs",
@@ -642,7 +601,7 @@ func (s *Service) beaconEndpoints(
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetBlockAttestationsV2,
handler: server.GetBlockAttestations,
methods: []string{http.MethodGet},
},
{
@@ -691,16 +650,6 @@ func (s *Service) beaconEndpoints(
handler: server.SubmitAttestations,
methods: []string{http.MethodPost},
},
{
template: "/eth/v2/beacon/pool/attestations",
name: namespace + ".SubmitAttestationsV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.SubmitAttestationsV2,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/beacon/pool/voluntary_exits",
name: namespace + ".ListVoluntaryExits",

View File

@@ -41,7 +41,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/deposit_snapshot": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
"/eth/v1/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attestations": {http.MethodGet},
"/eth/v1/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/proposer_slashings": {http.MethodGet, http.MethodPost},
@@ -101,7 +101,6 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/validator/blinded_blocks/{slot}": {http.MethodGet},
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v1/validator/aggregate_attestation": {http.MethodGet},
"/eth/v2/validator/aggregate_attestation": {http.MethodGet},
"/eth/v1/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},

View File

@@ -3,14 +3,13 @@ package beacon
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
@@ -87,7 +86,7 @@ func (s *Server) ListAttestations(w http.ResponseWriter, r *http.Request) {
// ListAttestationsV2 retrieves attestations known by the node but
// not necessarily incorporated into any block. Allows filtering by committee index or slot.
func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "beacon.ListAttestationsV2")
ctx, span := trace.StartSpan(r.Context(), "beacon.ListAttestationsV2")
defer span.End()
rawSlot, slot, ok := shared.UintFromQuery(w, r, "slot", false)
@@ -98,10 +97,13 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
if !ok {
return
}
v := slots.ToForkVersion(primitives.Slot(slot))
if rawSlot == "" {
v = slots.ToForkVersion(s.TimeFetcher.CurrentSlot())
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
attestations := s.AttestationsPool.AggregatedAttestations()
unaggAtts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
@@ -113,7 +115,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
filteredAtts := make([]interface{}, 0, len(attestations))
for _, att := range attestations {
var includeAttestation bool
if v >= version.Electra && att.Version() >= version.Electra {
if headState.Version() >= version.Electra {
attElectra, ok := att.(*eth.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert attestation of type %T", att), http.StatusInternalServerError)
@@ -125,7 +127,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
attStruct := structs.AttElectraFromConsensus(attElectra)
filteredAtts = append(filteredAtts, attStruct)
}
} else if v < version.Electra && att.Version() < version.Electra {
} else {
attOld, ok := att.(*eth.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert attestation of type %T", att), http.StatusInternalServerError)
@@ -146,9 +148,8 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, &structs.ListAttestationsResponse{
Version: version.String(v),
Version: version.String(headState.Version()),
Data: attsData,
})
}
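Before the revert, ListAttestationsV2 resolved a fork version v from the slot filter (or the current slot when no slot was given) and kept only attestations on the same side of the Electra boundary, rather than keying everything off the head state. That two-sided check reduces to comparing two booleans; a toy sketch with a hypothetical Att interface and illustrative version numbers:

package main

import "fmt"

// Att stands in for Prysm's attestation interface.
type Att interface{ Version() int }

type phase0Att struct{}

func (phase0Att) Version() int { return 0 }

type electraAtt struct{}

func (electraAtt) Version() int { return 6 }

const electra = 6

// filterByFork keeps attestations on the same side of the Electra boundary
// as the requested fork version, the check the pre-revert code performed.
func filterByFork(atts []Att, v int) []Att {
	out := make([]Att, 0, len(atts))
	for _, a := range atts {
		if (v >= electra) == (a.Version() >= electra) {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	pool := []Att{phase0Att{}, electraAtt{}}
	fmt.Println(len(filterByFork(pool, electra))) // 1: only the Electra att
	fmt.Println(len(filterByFork(pool, 0)))       // 1: only the phase0 att
}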
@@ -188,180 +189,14 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
attFailures, failedBroadcasts, err := s.handleAttestations(ctx, req.Data)
if err != nil {
httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
}
}
// SubmitAttestationsV2 submits an attestation object to the node. If the attestation passes all validation
// constraints, the node MUST publish the attestation on an appropriate subnet.
func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
if err != nil {
httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
return
}
var req structs.SubmitAttestationsRequest
err = json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case errors.Is(err, io.EOF):
if len(req.Data) == 0 {
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
var attFailures []*server.IndexedVerificationFailure
var failedBroadcasts []string
if v >= version.Electra {
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
} else {
attFailures, failedBroadcasts, err = s.handleAttestations(ctx, req.Data)
}
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Failed to handle attestations: %v", err), http.StatusBadRequest)
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
}
}
func (s *Server) handleAttestationsElectra(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.AttestationElectra
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.AttestationElectra
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send aggregated attestations because we don't have the selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
committeeIndex, err := att.GetCommitteeIndex()
if err != nil {
return nil, nil, errors.Wrap(err, "failed to retrieve attestation committee index")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committeeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
return attFailures, failedBroadcasts, nil
}
func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.Attestation
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.Attestation
for i, sourceAtt := range sourceAttestations {
var attFailures []*server.IndexedVerificationFailure
for i, sourceAtt := range req.Data {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
@@ -377,10 +212,7 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send aggregated attestations because we don't have the selection proof.
@@ -393,18 +225,22 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
})
}
validAttestations = append(validAttestations, att)
}
failedBroadcasts := make([]string, 0)
for i, att := range validAttestations {
// Determine subnet to broadcast attestation to
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
httputil.HandleError(w, "Could not get head validator indices: "+err.Error(), http.StatusInternalServerError)
return
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
@@ -417,8 +253,23 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
}
}
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
return attFailures, failedBroadcasts, nil
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
}
}
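Both versions of the handler map an attestation to its gossip subnet via ComputeSubnetFromCommitteeAndSlot. The underlying spec function is compute_subnet_for_attestation; a sketch with mainnet constants (Prysm's helper additionally derives committees-per-slot from the active validator count, which this sketch takes as given):

package main

import "fmt"

const (
	slotsPerEpoch          = 32 // mainnet SLOTS_PER_EPOCH
	attestationSubnetCount = 64 // mainnet ATTESTATION_SUBNET_COUNT
)

// computeSubnet follows the spec's compute_subnet_for_attestation:
// subnet assignments rotate per slot within an epoch, one slice per committee.
func computeSubnet(committeesPerSlot, committeeIndex, slot uint64) uint64 {
	slotsSinceEpochStart := slot % slotsPerEpoch
	committeesSinceEpochStart := committeesPerSlot * slotsSinceEpochStart
	return (committeesSinceEpochStart + committeeIndex) % attestationSubnetCount
}

func main() {
	// Slot 2 of the epoch, committee 4, with 16 committees per slot.
	fmt.Println(computeSubnet(16, 4, 2)) // 36
}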
// ListVoluntaryExits retrieves voluntary exits known by the node but
@@ -723,33 +574,31 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
ctx, span := trace.StartSpan(r.Context(), "beacon.GetAttesterSlashingsV2")
defer span.End()
v := slots.ToForkVersion(s.TimeFetcher.CurrentSlot())
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
var attStructs []interface{}
sourceSlashings := s.SlashingsPool.PendingAttesterSlashings(ctx, headState, true /* return unlimited slashings */)
for _, slashing := range sourceSlashings {
var attStruct interface{}
if v >= version.Electra && slashing.Version() >= version.Electra {
if headState.Version() >= version.Electra {
a, ok := slashing.(*eth.AttesterSlashingElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to an Electra slashing", slashing), http.StatusInternalServerError)
return
}
attStruct = structs.AttesterSlashingElectraFromConsensus(a)
} else if v < version.Electra && slashing.Version() < version.Electra {
} else {
a, ok := slashing.(*eth.AttesterSlashing)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Unable to convert slashing of type %T to a Phase0 slashing", slashing), http.StatusInternalServerError)
return
}
attStruct = structs.AttesterSlashingFromConsensus(a)
} else {
continue
}
attStructs = append(attStructs, attStruct)
}
@@ -761,10 +610,10 @@ func (s *Server) GetAttesterSlashingsV2(w http.ResponseWriter, r *http.Request)
}
resp := &structs.GetAttesterSlashingsResponse{
Version: version.String(v),
Version: version.String(headState.Version()),
Data: attBytes,
}
w.Header().Set(api.VersionHeader, version.String(v))
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
httputil.WriteJson(w, resp)
}

View File

@@ -115,16 +115,9 @@ func TestListAttestations(t *testing.T) {
Signature: bytesutil.PadTo([]byte("signature4"), 96),
}
t.Run("V1", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
AttestationsPool: attestations.NewPool(),
}
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{att1, att2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{att3, att4}))
@@ -211,19 +204,10 @@ func TestListAttestations(t *testing.T) {
t.Run("Pre-Electra", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
AttestationsPool: attestations.NewPool(),
}
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.DenebForkEpoch = 0
params.OverrideBeaconConfig(config)
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{att1, att2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{att3, att4}))
t.Run("empty request", func(t *testing.T) {
@@ -242,7 +226,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 4, len(atts))
assert.Equal(t, "deneb", resp.Version)
assert.Equal(t, "phase0", resp.Version)
})
t.Run("slot request", func(t *testing.T) {
url := "http://example.com?slot=2"
@@ -260,7 +244,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 2, len(atts))
assert.Equal(t, "deneb", resp.Version)
assert.Equal(t, "phase0", resp.Version)
for _, a := range atts {
assert.Equal(t, "2", a.Data.Slot)
}
@@ -281,7 +265,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 2, len(atts))
assert.Equal(t, "deneb", resp.Version)
assert.Equal(t, "phase0", resp.Version)
for _, a := range atts {
assert.Equal(t, "4", a.Data.CommitteeIndex)
}
@@ -302,7 +286,7 @@ func TestListAttestations(t *testing.T) {
var atts []*structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &atts))
assert.Equal(t, 1, len(atts))
assert.Equal(t, "deneb", resp.Version)
assert.Equal(t, "phase0", resp.Version)
for _, a := range atts {
assert.Equal(t, "2", a.Data.Slot)
assert.Equal(t, "4", a.Data.CommitteeIndex)
@@ -386,21 +370,12 @@ func TestListAttestations(t *testing.T) {
}
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 0
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
AttestationsPool: attestations.NewPool(),
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
}
// Added one pre-Electra attestation to ensure it is ignored.
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{attElectra1, attElectra2, att1}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{attElectra3, attElectra4, att3}))
require.NoError(t, s.AttestationsPool.SaveAggregatedAttestations([]ethpbv1alpha1.Att{attElectra1, attElectra2}))
require.NoError(t, s.AttestationsPool.SaveUnaggregatedAttestations([]ethpbv1alpha1.Att{attElectra3, attElectra4}))
t.Run("empty request", func(t *testing.T) {
url := "http://example.com"
@@ -525,292 +500,95 @@ func TestSubmitAttestations(t *testing.T) {
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
t.Run("V1", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
s.SubmitAttestations(writer, request)
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("V2", func(t *testing.T) {
t.Run("pre-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("post-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAttsElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -1683,59 +1461,12 @@ func TestGetAttesterSlashings(t *testing.T) {
})
})
t.Run("V2", func(t *testing.T) {
t.Run("post-electra-ok-1-pre-slashing", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra, slashing1PreElectra}},
}
request := httptest.NewRequest(http.MethodGet, "http://example.com/eth/v2/beacon/pool/attester_slashings", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAttesterSlashingsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetAttesterSlashingsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.Equal(t, "electra", resp.Version)
// Unmarshal resp.Data into a slice of slashings
var slashings []*structs.AttesterSlashingElectra
require.NoError(t, json.Unmarshal(resp.Data, &slashings))
ss, err := structs.AttesterSlashingsElectraToConsensus(slashings)
require.NoError(t, err)
require.DeepEqual(t, slashing1PostElectra, ss[0])
require.DeepEqual(t, slashing2PostElectra, ss[1])
})
t.Run("post-electra-ok", func(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PostElectra, slashing2PostElectra}},
}
@@ -1764,11 +1495,9 @@ func TestGetAttesterSlashings(t *testing.T) {
t.Run("pre-electra-ok", func(t *testing.T) {
bs, err := util.NewBeaconState()
require.NoError(t, err)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{slashing1PreElectra, slashing2PreElectra}},
}
@@ -1796,15 +1525,8 @@ func TestGetAttesterSlashings(t *testing.T) {
bs, err := util.NewBeaconStateElectra()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 100
params.OverrideBeaconConfig(config)
chainService := &blockchainmock.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
ChainInfoFetcher: &blockchainmock.ChainService{State: bs},
SlashingsPool: &slashingsmock.PoolMock{PendingAttSlashings: []ethpbv1alpha1.AttSlashing{}},
}
@@ -2341,85 +2063,6 @@ var (
}
}
}
]`
singleAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
multipleAttsElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7431000000000000000000000000000000000000000000"
}
}
},
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7432000000000000000000000000000000000000000000"
}
}
}
]`
// signature is invalid
invalidAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
exit1 = `{
"message": {

View File

@@ -10,14 +10,12 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/rpc/core:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],

View File

@@ -15,7 +15,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// Blobs is an HTTP handler for Beacon API getBlobs.
@@ -60,30 +59,7 @@ func (s *Server) Blobs(w http.ResponseWriter, r *http.Request) {
return
}
blk, err := s.Blocker.Block(ctx, []byte(blockId))
if err != nil {
httputil.HandleError(w, "Could not fetch block: "+err.Error(), http.StatusInternalServerError)
return
}
blkRoot, err := blk.Block().HashTreeRoot()
if err != nil {
httputil.HandleError(w, "Could not hash block: "+err.Error(), http.StatusInternalServerError)
return
}
isOptimistic, err := s.OptimisticModeFetcher.IsOptimisticForRoot(ctx, blkRoot)
if err != nil {
httputil.HandleError(w, "Could not check if block is optimistic: "+err.Error(), http.StatusInternalServerError)
return
}
data := buildSidecarsJsonResponse(verifiedBlobs)
resp := &structs.SidecarsResponse{
Version: version.String(blk.Version()),
Data: data,
ExecutionOptimistic: isOptimistic,
Finalized: s.FinalizationFetcher.IsFinalized(ctx, blkRoot),
}
httputil.WriteJson(w, resp)
httputil.WriteJson(w, buildSidecarsJsonResponse(verifiedBlobs))
}
// parseIndices filters out invalid and duplicate blob indices
@@ -116,14 +92,14 @@ loop:
return indices, nil
}
func buildSidecarsJsonResponse(verifiedBlobs []*blocks.VerifiedROBlob) []*structs.Sidecar {
sidecars := make([]*structs.Sidecar, len(verifiedBlobs))
func buildSidecarsJsonResponse(verifiedBlobs []*blocks.VerifiedROBlob) *structs.SidecarsResponse {
resp := &structs.SidecarsResponse{Data: make([]*structs.Sidecar, len(verifiedBlobs))}
for i, sc := range verifiedBlobs {
proofs := make([]string, len(sc.CommitmentInclusionProof))
for j := range sc.CommitmentInclusionProof {
proofs[j] = hexutil.Encode(sc.CommitmentInclusionProof[j])
}
sidecars[i] = &structs.Sidecar{
resp.Data[i] = &structs.Sidecar{
Index: strconv.FormatUint(sc.Index, 10),
Blob: hexutil.Encode(sc.Blob),
KzgCommitment: hexutil.Encode(sc.KzgCommitment),
@@ -132,7 +108,7 @@ func buildSidecarsJsonResponse(verifiedBlobs []*blocks.VerifiedROBlob) []*struct
CommitmentInclusionProof: proofs,
}
}
return sidecars
return resp
}
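
A small self-contained sketch of the hex encoding applied to each inclusion-proof step in buildSidecarsJsonResponse, using the same go-ethereum hexutil helper the file imports.

package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

func main() {
	proof := [][]byte{{0xaa, 0x01}, {0xbb, 0x02}}
	encoded := make([]string, len(proof))
	for i := range proof {
		encoded[i] = hexutil.Encode(proof[i]) // 0x-prefixed lowercase hex
	}
	fmt.Println(encoded) // [0xaa01 0xbb02]
}
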
func buildSidecarsSSZResponse(verifiedBlobs []*blocks.VerifiedROBlob) ([]byte, error) {

View File

@@ -46,20 +46,16 @@ func TestBlobs(t *testing.T) {
}
blockRoot := blobs[0].BlockRoot()
mockChainService := &mockChain.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
}
t.Run("genesis", func(t *testing.T) {
u := "http://foo.example/genesis"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
blocker := &lookup.BeaconDbBlocker{}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
@@ -73,14 +69,18 @@ func TestBlobs(t *testing.T) {
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{Root: blockRoot[:], Block: denebBlock},
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{Root: blockRoot[:]},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
@@ -111,96 +111,118 @@ func TestBlobs(t *testing.T) {
assert.Equal(t, hexutil.Encode(blobs[3].Blob), sidecar.Blob)
assert.Equal(t, hexutil.Encode(blobs[3].KzgCommitment), sidecar.KzgCommitment)
assert.Equal(t, hexutil.Encode(blobs[3].KzgProof), sidecar.KzgProof)
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("finalized", func(t *testing.T) {
u := "http://foo.example/finalized"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock},
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
t.Run("justified", func(t *testing.T) {
u := "http://foo.example/justified"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{CurrentJustifiedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
})
t.Run("root", func(t *testing.T) {
u := "http://foo.example/" + hexutil.Encode(blockRoot[:])
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{Block: denebBlock},
BeaconDB: db,
blocker := &lookup.BeaconDbBlocker{
BeaconDB: db,
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("slot", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{Block: denebBlock},
BeaconDB: db,
blocker := &lookup.BeaconDbBlocker{
BeaconDB: db,
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 4, len(resp.Data))
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("one blob only", func(t *testing.T) {
u := "http://foo.example/123?indices=2"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock},
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
@@ -213,47 +235,45 @@ func TestBlobs(t *testing.T) {
assert.Equal(t, hexutil.Encode(blobs[2].Blob), sidecar.Blob)
assert.Equal(t, hexutil.Encode(blobs[2].KzgCommitment), sidecar.KzgCommitment)
assert.Equal(t, hexutil.Encode(blobs[2].KzgProof), sidecar.KzgProof)
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("no blobs returns an empty array", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock},
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: filesystem.NewEphemeralBlobStorage(t), // new ephemeral storage
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, len(resp.Data), 0)
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("outside retention period returns 200 w/ empty list ", func(t *testing.T) {
u := "http://foo.example/123"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
moc := &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock}
s.Blocker = &lookup.BeaconDbBlocker{
moc := &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}}
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: moc,
GenesisTimeFetcher: moc, // genesis time is set to 0 here, so it results in current epoch being extremely large
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
@@ -261,10 +281,6 @@ func TestBlobs(t *testing.T) {
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 0, len(resp.Data))
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("block without commitments returns 200 w/empty list ", func(t *testing.T) {
denebBlock, _ := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 333, 0)
@@ -277,14 +293,17 @@ func TestBlobs(t *testing.T) {
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}, Block: denebBlock},
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
@@ -292,17 +311,16 @@ func TestBlobs(t *testing.T) {
resp := &structs.SidecarsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.Equal(t, 0, len(resp.Data))
require.Equal(t, "deneb", resp.Version)
require.Equal(t, false, resp.ExecutionOptimistic)
require.Equal(t, false, resp.Finalized)
})
t.Run("slot before Deneb fork", func(t *testing.T) {
u := "http://foo.example/31"
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
blocker := &lookup.BeaconDbBlocker{}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
@@ -317,7 +335,11 @@ func TestBlobs(t *testing.T) {
request := httptest.NewRequest("GET", u, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{}
blocker := &lookup.BeaconDbBlocker{}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
@@ -332,7 +354,7 @@ func TestBlobs(t *testing.T) {
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
@@ -340,8 +362,10 @@ func TestBlobs(t *testing.T) {
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
require.Equal(t, len(writer.Body.Bytes()), fieldparams.BlobSidecarSize) // size of each sidecar
// can directly unmarshal to sidecar since there's only 1
@@ -355,7 +379,7 @@ func TestBlobs(t *testing.T) {
request.Header.Add("Accept", "application/octet-stream")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.Blocker = &lookup.BeaconDbBlocker{
blocker := &lookup.BeaconDbBlocker{
ChainInfoFetcher: &mockChain.ChainService{FinalizedCheckPoint: &eth.Checkpoint{Root: blockRoot[:]}},
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
@@ -363,8 +387,10 @@ func TestBlobs(t *testing.T) {
BeaconDB: db,
BlobStorage: bs,
}
s := &Server{
Blocker: blocker,
}
s.Blobs(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
require.Equal(t, len(writer.Body.Bytes()), fieldparams.BlobSidecarSize*4) // total size of 4 sidecars
})

View File

@@ -1,12 +1,9 @@
package blob
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/rpc/lookup"
)
type Server struct {
Blocker lookup.Blocker
OptimisticModeFetcher blockchain.OptimisticModeFetcher
FinalizationFetcher blockchain.FinalizationFetcher
Blocker lookup.Blocker
}

View File

@@ -79,7 +79,6 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.Eip7594ForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -190,7 +189,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
assert.Equal(t, 156, len(data))
assert.Equal(t, 155, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -268,8 +267,6 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "EIP7594_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":

View File

@@ -19,12 +19,11 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//config/params:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -32,8 +31,6 @@ go_library(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
@@ -55,7 +52,6 @@ go_test(
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -7,13 +7,10 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
@@ -21,12 +18,11 @@ import (
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
chaintime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -35,7 +31,6 @@ import (
)
const DefaultEventFeedDepth = 1000
const payloadAttributeTimeout = 2 * time.Second
const (
InvalidTopic = "__invalid__"
@@ -78,14 +73,6 @@ var (
errWriterUnusable = errors.New("http response writer is unusable")
)
var httpSSEErrorCount = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "http_sse_error_count",
Help: "Total HTTP errors for server sent events endpoint",
},
[]string{"endpoint", "error"},
)
// The eventStreamer uses lazyReaders to defer serialization until the moment the value is ready to be written to the client.
type lazyReader func() io.Reader
@@ -102,12 +89,12 @@ var opsFeedEventTopics = map[feed.EventType]string{
var stateFeedEventTopics = map[feed.EventType]string{
statefeed.NewHead: HeadTopic,
statefeed.MissedSlot: PayloadAttributesTopic,
statefeed.FinalizedCheckpoint: FinalizedCheckpointTopic,
statefeed.LightClientFinalityUpdate: LightClientFinalityUpdateTopic,
statefeed.LightClientOptimisticUpdate: LightClientOptimisticUpdateTopic,
statefeed.Reorg: ChainReorgTopic,
statefeed.BlockProcessed: BlockTopic,
statefeed.PayloadAttributes: PayloadAttributesTopic,
}
var topicsForStateFeed = topicsForFeed(stateFeedEventTopics)
@@ -155,13 +142,6 @@ func newTopicRequest(topics []string) (*topicRequest, error) {
// Servers may send SSE comments beginning with ':' for any purpose,
// including to keep the event stream connection alive in the presence of proxy servers.
func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
var err error
defer func() {
if err != nil {
httpSSEErrorCount.WithLabelValues(r.URL.Path, err.Error()).Inc()
}
}()
log.Debug("Starting StreamEvents handler")
ctx, span := trace.StartSpan(r.Context(), "events.StreamEvents")
defer span.End()
@@ -191,7 +171,7 @@ func (s *Server) StreamEvents(w http.ResponseWriter, r *http.Request) {
defer cancel()
es := newEventStreamer(buffSize, ka)
go es.outboxWriteLoop(ctx, cancel, sw, r.URL.Path)
go es.outboxWriteLoop(ctx, cancel, sw)
if err := es.recvEventLoop(ctx, cancel, topics, s); err != nil {
log.WithError(err).Debug("Shutting down StreamEvents handler.")
}
@@ -281,12 +261,11 @@ func newlineReader() io.Reader {
// outboxWriteLoop runs in a separate goroutine. Its job is to write the values in the outbox to
// the client as fast as the client can read them.
func (es *eventStreamer) outboxWriteLoop(ctx context.Context, cancel context.CancelFunc, w *streamingResponseWriterController, endpoint string) {
func (es *eventStreamer) outboxWriteLoop(ctx context.Context, cancel context.CancelFunc, w *streamingResponseWriterController) {
var err error
defer func() {
if err != nil {
log.WithError(err).Debug("Event streamer shutting down due to error.")
httpSSEErrorCount.WithLabelValues(endpoint, err.Error()).Inc()
}
es.exit()
}()
@@ -439,9 +418,10 @@ func topicForEvent(event *feed.Event) string {
return ChainReorgTopic
case *statefeed.BlockProcessedData:
return BlockTopic
case payloadattribute.EventData:
return PayloadAttributesTopic
default:
if event.Type == statefeed.MissedSlot {
return PayloadAttributesTopic
}
return InvalidTopic
}
}
@@ -451,17 +431,31 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
if !topics.requested(eventName) {
return nil, errNotRequested
}
if eventName == PayloadAttributesTopic {
return s.currentPayloadAttributes(ctx)
}
if event == nil || event.Data == nil {
return nil, errors.New("event or event data is nil")
}
switch v := event.Data.(type) {
case payloadattribute.EventData:
return s.payloadAttributesReader(ctx, v)
case *ethpb.EventHead:
// The head event is a special case because, if the client requested the payload attributes topic,
// we send two event messages in response: the head event and the payload attributes.
return func() io.Reader {
headReader := func() io.Reader {
return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
}
// Don't do the expensive attr lookup unless the client requested it.
if !topics.requested(PayloadAttributesTopic) {
return headReader, nil
}
// Since payload attributes could change before the outbox is written, we need to do a blocking operation to
// get the current payload attributes right here.
attrReader, err := s.currentPayloadAttributes(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get payload attributes for head event")
}
return func() io.Reader {
return io.MultiReader(headReader(), attrReader())
}, nil
case *operation.AggregatedAttReceivedData:
return func() io.Reader {
@@ -469,20 +463,14 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
return jsonMarshalReader(eventName, att)
}, nil
case *operation.UnAggregatedAttReceivedData:
switch att := v.Attestation.(type) {
case *eth.Attestation:
return func() io.Reader {
att := structs.AttFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
case *eth.AttestationElectra:
return func() io.Reader {
att := structs.AttElectraFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
default:
att, ok := v.Attestation.(*eth.Attestation)
if !ok {
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .Attestation field of UnAggregatedAttReceivedData", v.Attestation)
}
return func() io.Reader {
att := structs.AttFromConsensus(att)
return jsonMarshalReader(eventName, att)
}, nil
case *operation.ExitReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedExitFromConsensus(v.Exit))
@@ -507,18 +495,13 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
})
}, nil
case *operation.AttesterSlashingReceivedData:
switch slashing := v.AttesterSlashing.(type) {
case *eth.AttesterSlashing:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
}, nil
case *eth.AttesterSlashingElectra:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingElectraFromConsensus(slashing))
}, nil
default:
slashing, ok := v.AttesterSlashing.(*eth.AttesterSlashing)
if !ok {
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .AttesterSlashing field of AttesterSlashingReceivedData", v.AttesterSlashing)
}
return func() io.Reader {
return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
}, nil
case *operation.ProposerSlashingReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.ProposerSlashingFromConsensus(v.ProposerSlashing))
@@ -573,202 +556,115 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
}
}
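
A minimal sketch of the two-message head event behavior described in the comment above: when the client also subscribed to payload_attributes, both events are concatenated into a single lazy reader via io.MultiReader. The event payloads here are placeholders.

package main

import (
	"io"
	"os"
	"strings"
)

func main() {
	headReader := func() io.Reader {
		return strings.NewReader("event: head\ndata: {}\n\n")
	}
	attrReader := func() io.Reader {
		return strings.NewReader("event: payload_attributes\ndata: {}\n\n")
	}
	// The combined lazy reader yields the head event immediately followed by
	// the payload attributes event, as a single outbox entry.
	combined := func() io.Reader { return io.MultiReader(headReader(), attrReader()) }
	_, _ = io.Copy(os.Stdout, combined())
}
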
var errUnsupportedPayloadAttribute = errors.New("cannot compute payload attributes pre-Bellatrix")
func (s *Server) computePayloadAttributes(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.Attributer, error) {
v := ev.HeadState.Version()
if v < version.Bellatrix {
return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "%s is not supported", version.String(v))
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the rest of the fields are based on the state at N_{current_slot + 1}.
func (s *Server) currentPayloadAttributes(ctx context.Context) (lazyReader, error) {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head root")
}
st, err := s.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
// advance the head state
headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
if err != nil {
return nil, errors.Wrap(err, "could not advance head state")
}
t, err := slots.ToTime(ev.HeadState.GenesisTime(), ev.HeadState.Slot())
headBlock, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head block")
}
headPayload, err := headBlock.Block().Body().Execution()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload")
}
t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
if err != nil {
return nil, errors.Wrap(err, "could not get head state slot time")
}
timestamp := uint64(t.Unix())
prevRando, err := helpers.RandaoMix(ev.HeadState, chaintime.CurrentEpoch(ev.HeadState))
prevRando, err := helpers.RandaoMix(headState, chaintime.CurrentEpoch(headState))
if err != nil {
return nil, errors.Wrap(err, "could not get head state randao mix")
}
proposerIndex, err := helpers.BeaconProposerIndex(ctx, ev.HeadState)
proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
if err != nil {
return nil, errors.Wrap(err, "could not get head state proposer index")
}
feeRecpt := params.BeaconConfig().DefaultFeeRecipient.Bytes()
feeRecipient := params.BeaconConfig().DefaultFeeRecipient.Bytes()
tValidator, exists := s.TrackedValidatorsCache.Validator(proposerIndex)
if exists {
feeRecpt = tValidator.FeeRecipient[:]
feeRecipient = tValidator.FeeRecipient[:]
}
var attributes interface{}
switch headState.Version() {
case version.Bellatrix:
attributes = &structs.PayloadAttributesV1{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
}
case version.Capella:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
attributes = &structs.PayloadAttributesV2{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
}
case version.Deneb, version.Electra:
withdrawals, _, err := headState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get head state expected withdrawals")
}
parentRoot, err := headBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get head block root")
}
attributes = &structs.PayloadAttributesV3{
Timestamp: fmt.Sprintf("%d", t.Unix()),
PrevRandao: hexutil.Encode(prevRando),
SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
}
default:
return nil, errors.Errorf("payload version %s is not supported", version.String(headState.Version()))
}
if v == version.Bellatrix {
return payloadattribute.New(&engine.PayloadAttributes{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
})
}
w, _, err := ev.HeadState.ExpectedWithdrawals()
attributesBytes, err := json.Marshal(attributes)
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from head state")
return nil, errors.Wrap(err, "errors marshaling payload attributes to json")
}
if v == version.Capella {
return payloadattribute.New(&engine.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
})
eventData := structs.PayloadAttributesEventData{
ProposerIndex: fmt.Sprintf("%d", proposerIndex),
ProposalSlot: fmt.Sprintf("%d", headState.Slot()),
ParentBlockNumber: fmt.Sprintf("%d", headPayload.BlockNumber()),
ParentBlockRoot: hexutil.Encode(headRoot),
ParentBlockHash: hexutil.Encode(headPayload.BlockHash()),
PayloadAttributes: attributesBytes,
}
pr, err := ev.HeadBlock.Block().HashTreeRoot()
eventDataBytes, err := json.Marshal(eventData)
if err != nil {
return nil, errors.Wrap(err, "could not compute head block root")
return nil, errors.Wrap(err, "errors marshaling payload attributes event data to json")
}
return payloadattribute.New(&engine.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
ParentBeaconBlockRoot: pr[:],
})
}
type asyncPayloadAttrData struct {
data json.RawMessage
version string
err error
}
func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.EventData, error) {
if ev.HeadBlock == nil || ev.HeadBlock.IsNil() {
hb, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return ev, errors.Wrap(err, "Could not look up head block")
}
root, err := hb.Block().HashTreeRoot()
if err != nil {
return ev, errors.Wrap(err, "Could not compute head block root")
}
if ev.HeadRoot != root {
return ev, errors.Wrap(err, "head root changed before payload attribute event handler execution")
}
ev.HeadBlock = hb
payload, err := hb.Block().Body().Execution()
if err != nil {
return ev, errors.Wrap(err, "Could not get execution payload for head block")
}
ev.ParentBlockHash = payload.BlockHash()
ev.ParentBlockNumber = payload.BlockNumber()
}
attr := ev.Attributer
if attr == nil || attr.IsEmpty() {
attr, err := s.computePayloadAttributes(ctx, ev)
if err != nil {
return ev, errors.Wrap(err, "Could not compute payload attributes")
}
ev.Attributer = attr
}
return ev, nil
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the rest of the fields are based on the state at N_{current_slot + 1}.
func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribute.EventData) (lazyReader, error) {
ctx, cancel := context.WithTimeout(ctx, payloadAttributeTimeout)
edc := make(chan asyncPayloadAttrData)
go func() {
d := asyncPayloadAttrData{
version: version.String(ev.HeadState.Version()),
}
defer func() {
edc <- d
}()
ev, err := s.fillEventData(ctx, ev)
if err != nil {
d.err = errors.Wrap(err, "Could not fill event data")
return
}
attributesBytes, err := marshalAttributes(ev.Attributer)
if err != nil {
d.err = errors.Wrap(err, "errors marshaling payload attributes to json")
return
}
d.data, d.err = json.Marshal(structs.PayloadAttributesEventData{
ProposerIndex: strconv.FormatUint(uint64(ev.ProposerIndex), 10),
ProposalSlot: strconv.FormatUint(uint64(ev.ProposalSlot), 10),
ParentBlockNumber: strconv.FormatUint(ev.ParentBlockNumber, 10),
ParentBlockRoot: hexutil.Encode(ev.ParentBlockRoot),
ParentBlockHash: hexutil.Encode(ev.ParentBlockHash),
PayloadAttributes: attributesBytes,
})
if d.err != nil {
d.err = errors.Wrap(d.err, "errors marshaling payload attributes event data to json")
}
}()
return func() io.Reader {
defer cancel()
select {
case <-ctx.Done():
log.WithError(ctx.Err()).Warn("Context canceled while waiting for payload attributes event data")
return nil
case ed := <-edc:
if ed.err != nil {
log.WithError(ed.err).Warn("Error while marshaling payload attributes event data")
return nil
}
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: ed.version,
Data: ed.data,
})
}
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: version.String(headState.Version()),
Data: eventDataBytes,
})
}, nil
}
func marshalAttributes(attr payloadattribute.Attributer) ([]byte, error) {
v := attr.Version()
if v < version.Bellatrix {
return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "Payload version %s is not supported", version.String(v))
}
timestamp := strconv.FormatUint(attr.Timestamp(), 10)
prevRandao := hexutil.Encode(attr.PrevRandao())
feeRecpt := hexutil.Encode(attr.SuggestedFeeRecipient())
if v == version.Bellatrix {
return json.Marshal(&structs.PayloadAttributesV1{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
})
}
w, err := attr.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from payload attributes event")
}
withdrawals := structs.WithdrawalsFromConsensus(w)
if v == version.Capella {
return json.Marshal(&structs.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
})
}
parentRoot, err := attr.ParentBeaconBlockRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get parent beacon block root from payload attributes event")
}
return json.Marshal(&structs.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
ParentBeaconBlockRoot: hexutil.Encode(parentRoot),
})
}
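
A hedged usage sketch for the stream these readers feed: subscribing to the payload_attributes topic over SSE. The address assumes Prysm's default HTTP API port (3500); adjust for your setup.

package main

import (
	"bufio"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:3500/eth/v1/events?topics=payload_attributes")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		// Each event arrives as "event: payload_attributes" followed by a
		// "data: {...}" line carrying the versioned attributes JSON.
		fmt.Println(sc.Text())
	}
}
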
func newStreamingResponseController(rw http.ResponseWriter, timeout time.Duration) *streamingResponseWriterController {
rc := http.NewResponseController(rw)
return &streamingResponseWriterController{

View File

@@ -21,7 +21,6 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -490,21 +489,7 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
request := topics.testHttpRequest(testSync.ctx, t)
w := NewStreamingResponseWriterRecorder(testSync.ctx)
events := []*feed.Event{
&feed.Event{
Type: statefeed.PayloadAttributes,
Data: payloadattribute.EventData{
ProposerIndex: 0,
ProposalSlot: 0,
ParentBlockNumber: 0,
ParentBlockRoot: make([]byte, 32),
ParentBlockHash: make([]byte, 32),
HeadState: st,
HeadBlock: b,
HeadRoot: [fieldparams.RootLength]byte{},
},
},
}
events := []*feed.Event{&feed.Event{Type: statefeed.MissedSlot}}
go func() {
s.StreamEvents(w, request)

View File

@@ -12,7 +12,7 @@ import (
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
)
// Server defines a server implementation of the http events service,
// Server defines a server implementation of the gRPC events service,
// providing RPC endpoints to subscribe to events from the beacon node.
type Server struct {
StateNotifier statefeed.Notifier

View File

@@ -117,13 +117,13 @@ func TestGetPeers(t *testing.T) {
switch i {
case 0, 1:
peerStatus.SetConnectionState(id, peers.Connecting)
peerStatus.SetConnectionState(id, peers.PeerConnecting)
case 2, 3:
peerStatus.SetConnectionState(id, peers.Connected)
peerStatus.SetConnectionState(id, peers.PeerConnected)
case 4, 5:
peerStatus.SetConnectionState(id, peers.Disconnecting)
peerStatus.SetConnectionState(id, peers.PeerDisconnecting)
case 6, 7:
peerStatus.SetConnectionState(id, peers.Disconnected)
peerStatus.SetConnectionState(id, peers.PeerDisconnected)
default:
t.Fatalf("Failed to set connection state for peer")
}
@@ -289,13 +289,13 @@ func TestGetPeerCount(t *testing.T) {
switch i {
case 0:
peerStatus.SetConnectionState(id, peers.Connecting)
peerStatus.SetConnectionState(id, peers.PeerConnecting)
case 1, 2:
peerStatus.SetConnectionState(id, peers.Connected)
peerStatus.SetConnectionState(id, peers.PeerConnected)
case 3, 4, 5:
peerStatus.SetConnectionState(id, peers.Disconnecting)
peerStatus.SetConnectionState(id, peers.PeerDisconnecting)
case 6, 7, 8, 9:
peerStatus.SetConnectionState(id, peers.Disconnected)
peerStatus.SetConnectionState(id, peers.PeerDisconnected)
default:
t.Fatalf("Failed to set connection state for peer")
}

View File

@@ -40,7 +40,6 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
@@ -83,7 +82,6 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -95,7 +93,6 @@ go_test(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -2,13 +2,11 @@ package validator
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"slices"
"sort"
"strconv"
"time"
@@ -34,7 +32,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -51,171 +48,71 @@ func (s *Server) GetAggregateAttestation(w http.ResponseWriter, r *http.Request)
if !ok {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return
}
agg := s.aggregatedAttestation(w, primitives.Slot(slot), attDataRoot, 0)
if agg == nil {
return
}
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
httputil.WriteJson(w, &structs.AggregateAttestationResponse{Data: data})
}
// GetAggregateAttestationV2 aggregates all attestations matching the given attestation data root and slot, returning the aggregated result.
func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Request) {
_, span := trace.StartSpan(r.Context(), "validator.GetAggregateAttestationV2")
defer span.End()
_, attDataRoot, ok := shared.HexFromQuery(w, r, "attestation_data_root", fieldparams.RootLength, true)
if !ok {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return
}
_, index, ok := shared.UintFromQuery(w, r, "committee_index", true)
if !ok {
return
}
v := slots.ToForkVersion(primitives.Slot(slot))
agg := s.aggregatedAttestation(w, primitives.Slot(slot), attDataRoot, primitives.CommitteeIndex(index))
if agg == nil {
return
}
resp := &structs.AggregateAttestationResponse{
Version: version.String(v),
}
if v >= version.Electra {
typedAgg, ok := agg.(*ethpbalpha.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.AttestationElectra{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttElectraFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
resp.Data = data
} else {
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
resp.Data = data
}
w.Header().Set(api.VersionHeader, version.String(v))
httputil.WriteJson(w, resp)
}
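
A self-contained sketch of the version stamping pattern used in GetAggregateAttestationV2 above: the fork version derived from the slot is echoed in the Eth-Consensus-Version header so clients can choose the right schema before decoding Data; the handler body here is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	h := func(w http.ResponseWriter, _ *http.Request) {
		// Stamp the fork version before writing the versioned payload.
		w.Header().Set("Eth-Consensus-Version", "electra")
		_ = json.NewEncoder(w).Encode(map[string]string{"version": "electra"})
	}
	rec := httptest.NewRecorder()
	h(rec, httptest.NewRequest(http.MethodGet, "/", nil))
	fmt.Println(rec.Header().Get("Eth-Consensus-Version")) // electra
}
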
func (s *Server) aggregatedAttestation(w http.ResponseWriter, slot primitives.Slot, attDataRoot []byte, index primitives.CommitteeIndex) ethpbalpha.Att {
var match ethpbalpha.Att
var err error
match, err := matchingAtts(s.AttestationsPool.AggregatedAttestations(), slot, attDataRoot, index)
match, err = matchingAtt(s.AttestationsPool.AggregatedAttestations(), primitives.Slot(slot), attDataRoot)
if err != nil {
httputil.HandleError(w, "Could not get matching attestations: "+err.Error(), http.StatusInternalServerError)
return nil
httputil.HandleError(w, "Could not get matching attestation: "+err.Error(), http.StatusInternalServerError)
return
}
if len(match) > 0 {
// If there are multiple matching aggregated attestations,
// then we return the one with the most aggregation bits.
slices.SortFunc(match, func(a, b ethpbalpha.Att) int {
return cmp.Compare(b.GetAggregationBits().Count(), a.GetAggregationBits().Count())
})
return match[0]
if match == nil {
atts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
httputil.HandleError(w, "Could not get unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return
}
match, err = matchingAtt(atts, primitives.Slot(slot), attDataRoot)
if err != nil {
httputil.HandleError(w, "Could not get matching attestation: "+err.Error(), http.StatusInternalServerError)
return
}
}
if match == nil {
httputil.HandleError(w, "No matching attestation found", http.StatusNotFound)
return
}
atts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
httputil.HandleError(w, "Could not get unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
match, err = matchingAtts(atts, slot, attDataRoot, index)
if err != nil {
httputil.HandleError(w, "Could not get matching attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
if len(match) == 0 {
httputil.HandleError(w, "No matching attestations found", http.StatusNotFound)
return nil
}
agg, err := attestations.Aggregate(match)
if err != nil {
httputil.HandleError(w, "Could not aggregate unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
// Aggregating unaggregated attestations will in theory always return just one aggregate,
// so we can take the first one and be done with it.
return agg[0]
response := &structs.AggregateAttestationResponse{
Data: &structs.Attestation{
AggregationBits: hexutil.Encode(match.GetAggregationBits()),
Data: &structs.AttestationData{
Slot: strconv.FormatUint(uint64(match.GetData().Slot), 10),
CommitteeIndex: strconv.FormatUint(uint64(match.GetData().CommitteeIndex), 10),
BeaconBlockRoot: hexutil.Encode(match.GetData().BeaconBlockRoot),
Source: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(match.GetData().Source.Epoch), 10),
Root: hexutil.Encode(match.GetData().Source.Root),
},
Target: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(match.GetData().Target.Epoch), 10),
Root: hexutil.Encode(match.GetData().Target.Root),
},
},
Signature: hexutil.Encode(match.GetSignature()),
}}
httputil.WriteJson(w, response)
}
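
A minimal sketch of the "most aggregation bits wins" tie-break in aggregatedAttestation above, using the standard library's slices and cmp packages; the att struct is a stand-in for the real attestation interface.

package main

import (
	"cmp"
	"fmt"
	"slices"
)

type att struct{ bits uint64 } // stand-in for GetAggregationBits().Count()

func main() {
	match := []att{{bits: 3}, {bits: 5}, {bits: 4}}
	// Descending sort by bit count: the first element covers the most validators.
	slices.SortFunc(match, func(a, b att) int { return cmp.Compare(b.bits, a.bits) })
	fmt.Println(match[0].bits) // 5
}
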
func matchingAtts(atts []ethpbalpha.Att, slot primitives.Slot, attDataRoot []byte, index primitives.CommitteeIndex) ([]ethpbalpha.Att, error) {
if len(atts) == 0 {
return []ethpbalpha.Att{}, nil
}
postElectra := slots.ToForkVersion(slot) >= version.Electra
result := make([]ethpbalpha.Att, 0)
func matchingAtt(atts []ethpbalpha.Att, slot primitives.Slot, attDataRoot []byte) (ethpbalpha.Att, error) {
for _, att := range atts {
if att.GetData().Slot != slot {
continue
}
// We ignore the committee index from the request before Electra.
// This is because before Electra the committee index is part of the attestation data,
// meaning that comparing the data root is sufficient.
// Post-Electra the committee index in the data root is always 0, so we need to
// compare the committee index separately.
if postElectra {
if att.Version() >= version.Electra {
ci, err := att.GetCommitteeIndex()
if err != nil {
return nil, err
}
if ci != index {
continue
}
} else {
continue
if att.GetData().Slot == slot {
root, err := att.GetData().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get attestation data root")
}
} else {
// If postElectra is false and att.Version >= version.Electra, ignore the attestation.
if att.Version() >= version.Electra {
continue
if bytes.Equal(root[:], attDataRoot) {
return att, nil
}
}
root, err := att.GetData().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get attestation data root")
}
if bytes.Equal(root[:], attDataRoot) {
result = append(result, att)
}
}
return result, nil
return nil, nil
}
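
A hedged sketch of the fork-aware matching rule spelled out in the committee-index comment in the diff above; the fork constant and att shape are illustrative stand-ins, not the real types.

package main

import "fmt"

const electra = 6 // illustrative fork version ordinal

type att struct {
	version   int
	committee uint64
	root      [32]byte
}

// matches mirrors the rule from the comment: pre-Electra the committee index
// is inside the data root, so the root comparison alone suffices; post-Electra
// the index in the root is always 0 and must be compared separately.
func matches(a att, wantRoot [32]byte, wantIndex uint64, postElectra bool) bool {
	if a.root != wantRoot {
		return false
	}
	if postElectra {
		return a.version >= electra && a.committee == wantIndex
	}
	return a.version < electra
}

func main() {
	root := [32]byte{1}
	fmt.Println(matches(att{version: electra, committee: 2, root: root}, root, 2, true))  // true
	fmt.Println(matches(att{version: electra, committee: 2, root: root}, root, 2, false)) // false: wrong fork
}
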
// SubmitContributionAndProofs publishes multiple signed sync committee contribution and proofs.

View File

@@ -14,7 +14,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
@@ -36,7 +35,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -50,465 +48,292 @@ import (
func TestGetAggregateAttestation(t *testing.T) {
root1 := bytesutil.PadTo([]byte("root1"), 32)
root2 := bytesutil.PadTo([]byte("root2"), 32)
key, err := bls.RandKey()
require.NoError(t, err)
sig := key.Sign([]byte("sig"))
// It is important to use 0 as the index because that's the only way
// pre- and post-Electra attestations can both match,
// which allows us to properly test that attestations from the
// wrong fork are ignored.
committeeIndex := uint64(0)
createAttestation := func(slot primitives.Slot, aggregationBits bitfield.Bitlist, root []byte) *ethpbalpha.Attestation {
return &ethpbalpha.Attestation{
AggregationBits: aggregationBits,
Data: createAttestationData(slot, primitives.CommitteeIndex(committeeIndex), root),
Signature: sig.Marshal(),
}
sig1 := bytesutil.PadTo([]byte("sig1"), fieldparams.BLSSignatureLength)
attSlot1 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root1,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root1,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root1,
},
},
Signature: sig1,
}
root21 := bytesutil.PadTo([]byte("root2_1"), 32)
sig21 := bytesutil.PadTo([]byte("sig2_1"), fieldparams.BLSSignatureLength)
attslot21 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1, 1},
Data: &ethpbalpha.AttestationData{
Slot: 2,
CommitteeIndex: 1,
BeaconBlockRoot: root21,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root21,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root21,
},
},
Signature: sig21,
}
root22 := bytesutil.PadTo([]byte("root2_2"), 32)
sig22 := bytesutil.PadTo([]byte("sig2_2"), fieldparams.BLSSignatureLength)
attslot22 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1, 1, 1},
Data: &ethpbalpha.AttestationData{
Slot: 2,
CommitteeIndex: 1,
BeaconBlockRoot: root22,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root22,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root22,
},
},
Signature: sig22,
}
root31 := bytesutil.PadTo([]byte("root3_1"), 32)
sig31 := bls.NewAggregateSignature().Marshal()
attslot31 := &ethpbalpha.Attestation{
AggregationBits: []byte{1, 0},
Data: &ethpbalpha.AttestationData{
Slot: 3,
CommitteeIndex: 1,
BeaconBlockRoot: root31,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root31,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root31,
},
},
Signature: sig31,
}
root32 := bytesutil.PadTo([]byte("root3_2"), 32)
sig32 := bls.NewAggregateSignature().Marshal()
attslot32 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 3,
CommitteeIndex: 1,
BeaconBlockRoot: root32,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root32,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root32,
},
},
Signature: sig32,
}
createAttestationElectra := func(slot primitives.Slot, aggregationBits bitfield.Bitlist, root []byte) *ethpbalpha.AttestationElectra {
committeeBits := bitfield.NewBitvector64()
committeeBits.SetBitAt(committeeIndex, true)
pool := attestations.NewPool()
err := pool.SaveAggregatedAttestations([]ethpbalpha.Att{attSlot1, attslot21, attslot22})
assert.NoError(t, err)
err = pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{attslot31, attslot32})
assert.NoError(t, err)
return &ethpbalpha.AttestationElectra{
CommitteeBits: committeeBits,
AggregationBits: aggregationBits,
Data: createAttestationData(slot, primitives.CommitteeIndex(committeeIndex), root),
Signature: sig.Marshal(),
}
s := &Server{
AttestationsPool: pool,
}
t.Run("V1", func(t *testing.T) {
aggSlot1_Root1_1 := createAttestation(1, bitfield.Bitlist{0b11100}, root1)
aggSlot1_Root1_2 := createAttestation(1, bitfield.Bitlist{0b10111}, root1)
aggSlot1_Root2 := createAttestation(1, bitfield.Bitlist{0b11100}, root2)
aggSlot2 := createAttestation(2, bitfield.Bitlist{0b11100}, root1)
unaggSlot3_Root1_1 := createAttestation(3, bitfield.Bitlist{0b11000}, root1)
unaggSlot3_Root1_2 := createAttestation(3, bitfield.Bitlist{0b10100}, root1)
unaggSlot3_Root2 := createAttestation(3, bitfield.Bitlist{0b11000}, root2)
unaggSlot4 := createAttestation(4, bitfield.Bitlist{0b11000}, root1)
compareResult := func(
t *testing.T,
attestation structs.Attestation,
expectedSlot string,
expectedAggregationBits string,
expectedRoot []byte,
expectedSig []byte,
) {
assert.Equal(t, expectedAggregationBits, attestation.AggregationBits, "Unexpected aggregation bits in attestation")
assert.Equal(t, hexutil.Encode(expectedSig), attestation.Signature, "Signature mismatch")
assert.Equal(t, expectedSlot, attestation.Data.Slot, "Slot mismatch in attestation data")
assert.Equal(t, "0", attestation.Data.CommitteeIndex, "Committee index mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.BeaconBlockRoot, "Beacon block root mismatch")
// Source checkpoint checks
require.NotNil(t, attestation.Data.Source, "Source checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Source.Epoch, "Source epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Source.Root, "Source root mismatch")
// Target checkpoint checks
require.NotNil(t, attestation.Data.Target, "Target checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Target.Epoch, "Target epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Target.Root, "Target root mismatch")
}
pool := attestations.NewPool()
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{unaggSlot3_Root1_1, unaggSlot3_Root1_2, unaggSlot3_Root2, unaggSlot4}), "Failed to save unaggregated attestations")
unagg, err := pool.UnaggregatedAttestations()
t.Run("matching aggregated att", func(t *testing.T) {
reqRoot, err := attslot22.Data.HashTreeRoot()
require.NoError(t, err)
require.Equal(t, 4, len(unagg), "Expected 4 unaggregated attestations")
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
require.Equal(t, 4, len(agg), "Expected 4 aggregated attestations")
s := &Server{
AttestationsPool: pool,
}
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
t.Run("non-matching attestation request", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code, "Expected HTTP status NotFound for non-matching request")
})
t.Run("1 matching aggregated attestation", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal())
})
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.AggregateAttestationResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.DeepEqual(t, "0x00010101", resp.Data.AggregationBits)
assert.DeepEqual(t, hexutil.Encode(sig22), resp.Data.Signature)
assert.Equal(t, "2", resp.Data.Data.Slot)
assert.Equal(t, "1", resp.Data.Data.CommitteeIndex)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.BeaconBlockRoot)
require.NotNil(t, resp.Data.Data.Source)
assert.Equal(t, "1", resp.Data.Data.Source.Epoch)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.Source.Root)
require.NotNil(t, resp.Data.Data.Target)
assert.Equal(t, "1", resp.Data.Data.Target.Epoch)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.Target.Root)
})
t.Run("V2", func(t *testing.T) {
t.Run("pre-electra", func(t *testing.T) {
aggSlot1_Root1_1 := createAttestation(1, bitfield.Bitlist{0b11100}, root1)
aggSlot1_Root1_2 := createAttestation(1, bitfield.Bitlist{0b10111}, root1)
aggSlot1_Root2 := createAttestation(1, bitfield.Bitlist{0b11100}, root2)
aggSlot2 := createAttestation(2, bitfield.Bitlist{0b11100}, root1)
unaggSlot3_Root1_1 := createAttestation(3, bitfield.Bitlist{0b11000}, root1)
unaggSlot3_Root1_2 := createAttestation(3, bitfield.Bitlist{0b10100}, root1)
unaggSlot3_Root2 := createAttestation(3, bitfield.Bitlist{0b11000}, root2)
unaggSlot4 := createAttestation(4, bitfield.Bitlist{0b11000}, root1)
		// Add one post-electra attestation to ensure that it is ignored.
		// We choose slot 2, where the existing pre-electra attestation has fewer
		// aggregation bits set, so the post-electra attestation would win the
		// most-bits selection if it were not filtered out.
postElectraAtt := createAttestationElectra(2, bitfield.Bitlist{0b11111}, root1)
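		// compareResult asserts that the decoded attestation carries the expected
		// slot, aggregation bits, beacon block root, checkpoints, and signature.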
compareResult := func(
t *testing.T,
attestation structs.Attestation,
expectedSlot string,
expectedAggregationBits string,
expectedRoot []byte,
expectedSig []byte,
) {
assert.Equal(t, expectedAggregationBits, attestation.AggregationBits, "Unexpected aggregation bits in attestation")
assert.Equal(t, hexutil.Encode(expectedSig), attestation.Signature, "Signature mismatch")
assert.Equal(t, expectedSlot, attestation.Data.Slot, "Slot mismatch in attestation data")
assert.Equal(t, "0", attestation.Data.CommitteeIndex, "Committee index mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.BeaconBlockRoot, "Beacon block root mismatch")
// Source checkpoint checks
require.NotNil(t, attestation.Data.Source, "Source checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Source.Epoch, "Source epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Source.Root, "Source root mismatch")
// Target checkpoint checks
require.NotNil(t, attestation.Data.Target, "Target checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Target.Epoch, "Target epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Target.Root, "Target root mismatch")
}
pool := attestations.NewPool()
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{unaggSlot3_Root1_1, unaggSlot3_Root1_2, unaggSlot3_Root2, unaggSlot4}), "Failed to save unaggregated attestations")
unagg, err := pool.UnaggregatedAttestations()
require.NoError(t, err)
require.Equal(t, 4, len(unagg), "Expected 4 unaggregated attestations")
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2, postElectraAtt}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
		require.Equal(t, 5, len(agg), "Expected 5 aggregated attestations: 4 pre-electra and 1 post-electra")
s := &Server{
AttestationsPool: pool,
}
t.Run("non-matching attestation request", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code, "Expected HTTP status NotFound for non-matching request")
})
t.Run("1 matching aggregated attestation", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
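			// aggSlot1_Root1_2 (0b10111) has three payload bits set versus two in
			// aggSlot1_Root1_1 (0b11100), so its aggregation bits are expected here.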
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
})
t.Run("post-electra", func(t *testing.T) {
aggSlot1_Root1_1 := createAttestationElectra(1, bitfield.Bitlist{0b11100}, root1)
aggSlot1_Root1_2 := createAttestationElectra(1, bitfield.Bitlist{0b10111}, root1)
aggSlot1_Root2 := createAttestationElectra(1, bitfield.Bitlist{0b11100}, root2)
aggSlot2 := createAttestationElectra(2, bitfield.Bitlist{0b11100}, root1)
unaggSlot3_Root1_1 := createAttestationElectra(3, bitfield.Bitlist{0b11000}, root1)
unaggSlot3_Root1_2 := createAttestationElectra(3, bitfield.Bitlist{0b10100}, root1)
unaggSlot3_Root2 := createAttestationElectra(3, bitfield.Bitlist{0b11000}, root2)
unaggSlot4 := createAttestationElectra(4, bitfield.Bitlist{0b11000}, root1)
		// Add one pre-electra attestation to ensure that it is ignored.
		// We choose slot 2, where the existing post-electra attestation has fewer
		// aggregation bits set, so the pre-electra attestation would win the
		// most-bits selection if it were not filtered out.
preElectraAtt := createAttestation(2, bitfield.Bitlist{0b11111}, root1)
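		// compareResult mirrors the pre-electra helper but additionally checks the
		// Electra-only committee bits field.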
compareResult := func(
t *testing.T,
attestation structs.AttestationElectra,
expectedSlot string,
expectedAggregationBits string,
expectedRoot []byte,
expectedSig []byte,
expectedCommitteeBits string,
) {
assert.Equal(t, expectedAggregationBits, attestation.AggregationBits, "Unexpected aggregation bits in attestation")
assert.Equal(t, expectedCommitteeBits, attestation.CommitteeBits)
assert.Equal(t, hexutil.Encode(expectedSig), attestation.Signature, "Signature mismatch")
assert.Equal(t, expectedSlot, attestation.Data.Slot, "Slot mismatch in attestation data")
assert.Equal(t, "0", attestation.Data.CommitteeIndex, "Committee index mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.BeaconBlockRoot, "Beacon block root mismatch")
// Source checkpoint checks
require.NotNil(t, attestation.Data.Source, "Source checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Source.Epoch, "Source epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Source.Root, "Source root mismatch")
// Target checkpoint checks
require.NotNil(t, attestation.Data.Target, "Target checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Target.Epoch, "Target epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Target.Root, "Target root mismatch")
}
pool := attestations.NewPool()
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{unaggSlot3_Root1_1, unaggSlot3_Root1_2, unaggSlot3_Root2, unaggSlot4}), "Failed to save unaggregated attestations")
unagg, err := pool.UnaggregatedAttestations()
require.NoError(t, err)
require.Equal(t, 4, len(unagg), "Expected 4 unaggregated attestations")
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2, preElectraAtt}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
		require.Equal(t, 5, len(agg), "Expected 5 aggregated attestations: 4 post-electra and 1 pre-electra")
bs, err := util.NewBeaconState()
require.NoError(t, err)
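		// Force the Electra fork at genesis so that GetAggregateAttestationV2
		// resolves the current fork to Electra and returns AttestationElectra objects.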
params.SetupTestConfigCleanup(t)
config := params.BeaconConfig()
config.ElectraForkEpoch = 0
params.OverrideBeaconConfig(config)
chainService := &mockChain.ChainService{State: bs}
s := &Server{
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
AttestationsPool: pool,
}
t.Run("non-matching attestation request", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code, "Expected HTTP status NotFound for non-matching request")
})
t.Run("1 matching aggregated attestation", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3" + "&committee_index=0"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
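				// As in the pre-electra case, OR-ing 0b11000 and 0b10100 yields
				// 0b11100, and the signature must be the aggregate of both inputs.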
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
t.Run("pre-electra attestation is ignored", func(t *testing.T) {
})
})
})
t.Run("no matching attestation", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No matching attestation found"))
})
t.Run("no attestation_data_root provided", func(t *testing.T) {
url := "http://example.com?slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "attestation_data_root is required"))
})
t.Run("invalid attestation_data_root provided", func(t *testing.T) {
url := "http://example.com?attestation_data_root=foo&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "attestation_data_root is invalid"))
})
t.Run("no slot provided", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "slot is required"))
})
t.Run("invalid slot provided", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=foo"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "slot is invalid"))
})
}
func createAttestationData(slot primitives.Slot, committeeIndex primitives.CommitteeIndex, root []byte) *ethpbalpha.AttestationData {
return &ethpbalpha.AttestationData{
Slot: slot,
CommitteeIndex: committeeIndex,
BeaconBlockRoot: root,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
		},
	}
}

func TestGetAggregateAttestation_SameSlotAndRoot_ReturnMostAggregationBits(t *testing.T) {
root := bytesutil.PadTo([]byte("root"), 32)
sig := bytesutil.PadTo([]byte("sig"), fieldparams.BLSSignatureLength)
att1 := &ethpbalpha.Attestation{
AggregationBits: []byte{3, 0, 0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
},
},
Signature: sig,
}
att2 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 3, 0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root,
},
},
Signature: sig,
}
pool := attestations.NewPool()
err := pool.SaveAggregatedAttestations([]ethpbalpha.Att{att1, att2})
assert.NoError(t, err)
s := &Server{
AttestationsPool: pool,
}
reqRoot, err := att1.Data.HashTreeRoot()
require.NoError(t, err)
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.AggregateAttestationResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
assert.DeepEqual(t, "0x03000001", resp.Data.AggregationBits)
}
func TestSubmitContributionAndProofs(t *testing.T) {

View File

@@ -1,31 +0,0 @@
package rpc
import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
httpRequestLatency = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "http_request_latency_seconds",
Help: "Latency of HTTP requests in seconds",
Buckets: []float64{0.001, 0.01, 0.025, 0.1, 0.25, 1, 2.5, 10},
},
[]string{"endpoint", "code", "method"},
)
httpRequestCount = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "http_request_count",
Help: "Number of HTTP requests",
},
[]string{"endpoint", "code", "method"},
)
httpErrorCount = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "http_error_count",
Help: "Total HTTP errors for beacon node requests",
},
[]string{"endpoint", "code", "method"},
)
)
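
For context, a minimal sketch of how collectors like these are typically fed from an HTTP middleware; the wrapper and recorder names below are assumptions for illustration, since the actual call sites are not part of this diff:

package rpc

import (
	"net/http"
	"strconv"
	"time"
)

// statusRecorder captures the status code written by the wrapped handler.
type statusRecorder struct {
	http.ResponseWriter
	code int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.code = code
	s.ResponseWriter.WriteHeader(code)
}

// instrument is a hypothetical middleware feeding the three collectors above:
// it times the request, counts it, and counts it again as an error on 4xx/5xx.
func instrument(endpoint string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, code: http.StatusOK}
		next.ServeHTTP(rec, r)
		code := strconv.Itoa(rec.code)
		httpRequestLatency.WithLabelValues(endpoint, code, r.Method).Observe(time.Since(start).Seconds())
		httpRequestCount.WithLabelValues(endpoint, code, r.Method).Inc()
		if rec.code >= http.StatusBadRequest {
			httpErrorCount.WithLabelValues(endpoint, code, r.Method).Inc()
		}
	})
}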

Some files were not shown because too many files have changed in this diff.