Compare commits

...

28 Commits

Author SHA1 Message Date
nisdas
764aab3822 add it
2024-11-28 14:01:34 +08:00
Potuz
f27092fa91 Check if validator exists when applying pending deposit (#14666)
* Check if validator exists when applying pending deposit

* Add test TestProcessPendingDepositsMultiplesSameDeposits

* keep a map of added pubkeys

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-11-25 20:31:02 +00:00
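
The fix above boils down to remembering which pubkeys were already added while processing a batch, so a second deposit for the same key tops up the balance instead of creating a duplicate validator. A simplified Go sketch of that flow (signature verification omitted; applyDeposits is a hypothetical wrapper, the other names follow the diff further down):

package electra

import (
	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
	"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
	"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

// applyDeposits deduplicates deposits within a batch by pubkey: the first
// deposit for a key adds the validator, later ones only increase its balance.
func applyDeposits(st state.BeaconState, deposits []*ethpb.PendingDeposit) error {
	seen := make(map[[48]byte]struct{}, len(deposits))
	for _, dep := range deposits {
		key := bytesutil.ToBytes48(dep.PublicKey)
		if _, found := seen[key]; found {
			// Validator was added earlier in this batch: top up its balance.
			index, _ := st.ValidatorIndexByPubkey(key)
			if err := helpers.IncreaseBalance(st, index, dep.Amount); err != nil {
				return errors.Wrap(err, "could not increase balance")
			}
			continue
		}
		seen[key] = struct{}{}
		if err := AddValidatorToRegistry(st, dep.PublicKey, dep.WithdrawalCredentials, dep.Amount); err != nil {
			return errors.Wrap(err, "failed to add validator to registry")
		}
	}
	return nil
}
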
Radosław Kapka
67cef41cbf Better attestation packing for Electra (#14534)
* Better attestation packing for Electra

* changelog <3

* bzl

* sort before constructing on-chain aggregates

* move ctx to top

* extract Electra logic and add comments

* benchmark
2024-11-25 18:41:51 +00:00
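
The packing change sorts candidate aggregates before building Electra's on-chain aggregates. A hedged sketch of that ordering step, assuming a descending sort by aggregation-bit coverage (the PR's exact comparator may differ):

package packing // hypothetical package name

import (
	"sort"

	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

// sortByCoverage orders candidate aggregates so those attesting for the most
// validators come first, which helps a greedy packer fill the block's
// attestation slots with maximal coverage.
func sortByCoverage(atts []ethpb.Att) {
	sort.Slice(atts, func(i, j int) bool {
		return atts[i].GetAggregationBits().Count() > atts[j].GetAggregationBits().Count()
	})
}
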
Manu NALEPA
258908d50e Diverse log improvements, comment additions and small refactors. (#14658)
* `logProposedBlock`: Fix log.

Before, the value of the function pointer was printed for `blockNumber`
instead of the block number itself.

* Add blob prefix before sidecars.

In order to prepare for data column sidecars.

* Verification: Add log prefix.

* `validate_aggregate_proof.go`: Add comments.

* `blobSubscriber`: Fix error message.

* `registerHandlers`: Rename, add comments, and slightly refactor.

* Remove duplicate `pb` vs. `ethpb` import.

* `rpc_ping.go`: Factorize / Add comments.

* `blobSidecarsByRangeRPCHandler`: Do not write error response if rate limited.

* `sendRecentBeaconBlocksRequest` ==> `sendBeaconBlocksRequest`.

The function itself does not know anything about the age of the beacon block.

* `beaconBlocksByRangeRPCHandler`: Refactor and add logs.

* `retentionSeconds` ==> `retentionDuration`.

* `oneEpoch`: Add documentation.

* `TestProposer_ProposeBlock_OK`: Improve error message.

* `getLocalPayloadFromEngine`: Tiny refactor.

* `eth1DataMajorityVote`: Improve log message.

* Implement `ConvertPeerIDToNodeID` and do not generate a random private key if peerDAS is enabled.

* Remove useless `_`.

* `parsePeersEnr`: Fix error messages.

* `ShouldOverrideFCU`: Fix error message.

* `blocks.go`: Minor comments improvements.

* CI: Upgrade golangci-lint and enable spancheck.

* `ConvertPeerIDToNodeID`: Add godoc comment.

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/initial-sync/service_test.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_beacon_blocks_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_blob_sidecars_by_range.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Update beacon-chain/sync/rpc_ping.go

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

* Remove trailing whitespace in godoc.

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-25 09:22:33 +00:00
Manu NALEPA
415a42a4aa Add proto for DataColumnIdentifier, DataColumnSidecar, DataColumnSidecarsByRangeRequest and MetadataV2. (#14649)
* Add data column sidecars proto.

* Fix Terence's comment.

* Re-add everything.
2024-11-22 09:50:06 +00:00
kasey
25eae3acda Fix eventstream electra atts (#14655)
* fix handler for electra atts

* same fix for attester_slashing

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-11-22 03:04:00 +00:00
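
The underlying issue is that the event stream kept decoding attestation payloads into the pre-Electra type. A hypothetical sketch of version-aware decoding (attForVersion is illustrative, not the PR's actual code):

package events // hypothetical package name

import (
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// attForVersion returns the concrete type to unmarshal an attestation event
// into; Electra payloads need the Electra type, older ones keep phase0.
func attForVersion(v int) ethpb.Att {
	if v >= version.Electra {
		return &ethpb.AttestationElectra{}
	}
	return &ethpb.Attestation{}
}
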
Rupam Dey
956d9d108c Update light-client consensus types (#14652)
* update diff

* deps

* changelog

* remove `SetNextSyncCommitteeBranchElectra`
2024-11-21 12:28:44 +00:00
Sammy Rosso
c285715f9f Add missing Eth-Consensus-Version headers (#14647)
* add missing Eth-Consensus-Version headers

* changelog

* fix header return value
2024-11-20 22:16:33 +00:00
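
For reference, Prysm exposes the header name as `api.VersionHeader` ("Eth-Consensus-Version"), and the fix amounts to setting it on each affected V2 response. A minimal sketch, with writeVersionHeader as a hypothetical helper:

package handlers // hypothetical package name

import (
	"net/http"

	"github.com/prysmaticlabs/prysm/v5/api"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// writeVersionHeader attaches the consensus version of the returned object
// to the response, e.g. "electra" for an Electra attestation.
func writeVersionHeader(w http.ResponseWriter, v int) {
	w.Header().Set(api.VersionHeader, version.String(v))
}
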
james-prysm
9382ae736d validator REST: attestation v2 (#14633)
* wip

* fixing tests

* adding unit tests

* fixing tests

* adding back v1 usage

* changelog

* rolling back test and adding placeholder

* adding electra tests

* adding attestation nil check based on review

* reduce code duplication

* linting

* fixing tests

* based on sammy review

* radek feedback

* adding fallback for pre-Electra and updated tests

* fixing api calls and associated tests

* gaz

* Update validator/client/beacon-api/propose_attestation.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* review feedback

* add missing fallback

* fixing tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-11-20 17:13:57 +00:00
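
The fallback added here routes Electra attestations to the V2 endpoint while earlier forks keep using V1. A sketch under assumed names (restClient and its methods are placeholders for the REST client's actual calls):

package beaconapi // hypothetical package name

import (
	"context"

	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// restClient stands in for the validator's REST client; the method names are
// placeholders, not the real API surface.
type restClient interface {
	submitAttestationV1(context.Context, ethpb.Att) error
	submitAttestationV2(context.Context, ethpb.Att) error
}

// proposeAttestation sends Electra attestations to the V2 endpoint and falls
// back to V1 for earlier forks.
func proposeAttestation(ctx context.Context, c restClient, att ethpb.Att) error {
	if att.Version() >= version.Electra {
		return c.submitAttestationV2(ctx, att)
	}
	return c.submitAttestationV1(ctx, att)
}
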
Radosław Kapka
f16ff45a6b Update light client protobufs (#14650)
* Update light client protobufs

* changelog <3
2024-11-20 14:47:54 +00:00
kasey
8d6577be84 defer payload attribute computation (#14644)
* defer payload attribute computation

* fire payload event on skipped slots

* changelog

* fix test and missing version attr

* fix lint

* deepsource

* mv head block lookup for missed slots to streamer

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2024-11-19 16:49:52 +00:00
james-prysm
9de75b5376 reorganizing p2p and backfill service registration for consistency (#14640)
* reorganizing for consistency

* Update beacon-chain/node/node.go

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>

* kasey's feedback

---------

Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2024-11-19 16:29:59 +00:00
james-prysm
a7ba11df37 adding nil checks on attestation interface (#14638)
* adding nil checks on interface

* changelog

* add linting

* adding missed checks

* review feedback

* attestation bits should not be in nil check

* fixing nil checks

* simplifying function

* fixing some missed items

* more missed items

* fixing more tests

* reverting some changes and fixing more tests

* adding in source check back in

* missed test

* sammy's review

* radek feedback
2024-11-18 17:51:17 +00:00
Stefano
00aeea3656 feat(issue-12348): add validator index label to validator_statuses metric (#14473)
* feat(issue-12348): add validator index label to validator_statuses metric

* fix: epochDuties added label on emission of metric

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-18 16:35:05 +00:00
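
A sketch of the metric shape after this change: `validator_statuses` gains a validator index label next to the existing per-validator labels (the exact label set here is an assumption):

package metrics // hypothetical package name

import (
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
)

// validatorStatusesGauge carries the new "index" label next to the pubkey,
// so per-validator status can be queried by index.
var validatorStatusesGauge = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{Name: "validator_statuses"},
	[]string{"pubkey", "index"},
)

func recordStatus(pubkey string, index uint64, status int) {
	validatorStatusesGauge.WithLabelValues(pubkey, strconv.FormatUint(index, 10)).Set(float64(status))
}
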
james-prysm
9dbf979e77 move get data after nil check for attestations (#14642)
* move getData to after validations

* changelog
2024-11-15 18:28:35 +00:00
james-prysm
be60504512 Validator REST api: adding in check for empty keys changed (#14637)
* adding in check for empty keys changed

* changelog

* kasey feedback

* fixing unit tests

* Update CHANGELOG.md

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
2024-11-13 16:09:11 +00:00
james-prysm
1857496159 Electra: unskipping merkle spec tests (#14635)
* unskipping spec tests

* changelog
2024-11-12 15:41:44 +00:00
Justin Traglia
ccf61e1700 Rename remaining "deposit receipt" to "deposit request" (#14629)
* Rename remaining "deposit receipt" to "deposit request"

* Add changelog entry

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-08 21:15:43 +00:00
Justin Traglia
4edbd2f9ef Remove outdated spectest exclusions for EIP-6110 (#14630)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-08 20:41:02 +00:00
james-prysm
5179af1438 validator REST API: block v2 and Electra support (#14623)
* adding electra to validator client rest for get and post, also migrates to use the v2 endpoints

* changelog

* fixing test

* fixing linting
2024-11-08 18:24:51 +00:00
Sammy Rosso
c0f9689e30 Add POST /eth/v2/beacon/pool/attestations endpoint (#14621)
* modify v1 and add v2

* test

* changelog

* small fixes

* fix tests

* simplify functions + remove duplication

* Radek's review + group V2 tests

* better errors

* fix tests
2024-11-08 11:33:27 +00:00
Sammy Rosso
ff8240a04f Add /eth/v2/validator/aggregate_attestation (#14481)
* add endpoint

* changelog

* fix tests

* fix endpoint

* remove useless broken code

* review + fix endpoint

* gaz

* fix aggregate selection proof test

* fixes

* new way of aggregating

* nit

* fix part of the tests

* fix tests

* cleanup

* fix AggSelectionProof test

* tests

* v1 tests

* v2 tests

* committee bits

---------

Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-11-07 13:34:18 +00:00
Nishant Das
847498c648 Optimize Message ID Computation (#14591)
* Cast to String Without Allocating

* Make it its own method

* Changelog

* Gosec

* Add benchmark, fuzz test, and @kasey's implementation.

* Gosec

* Fix benchmark test names

* Kasey's Suggestion

* Radek's Suggestion

---------

Co-authored-by: Preston Van Loon <preston@pvl.dev>
2024-11-07 12:54:58 +00:00
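
The zero-allocation cast mentioned above reinterprets a byte slice as a string instead of copying it, which is safe only if the bytes are never mutated afterwards. A minimal sketch of such a helper (the repo's actual implementation may differ):

package bytesutil

import "unsafe"

// UnsafeCastToString reinterprets b as a string without copying. This is only
// safe when the caller never mutates b afterwards, which holds for the
// freshly hashed 20-byte message IDs in the diff below.
func UnsafeCastToString(b []byte) string {
	return *(*string)(unsafe.Pointer(&b))
}
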
Jun Song
2633684339 Use GetBlockAttestationV2 at handler (#14624)
2024-11-07 05:52:53 +00:00
terence
ab3f1963e2 Return early blob constructor if not deneb (#14605)
* Return early blob constructor if not deneb

* Update CHANGELOG.md

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Remove test

* Remove space

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2024-11-06 15:09:20 +00:00
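
The early return simply checks the block's fork version before doing any blob work, since blob sidecars only exist from Deneb onward. A hedged sketch (blobsForBlock is a hypothetical stand-in for the constructor):

package blocks // hypothetical package name

import (
	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
)

// blobsForBlock bails out immediately for pre-Deneb blocks, where blob
// sidecars do not exist, instead of running the full constructor.
func blobsForBlock(blk interfaces.ReadOnlySignedBeaconBlock) ([][]byte, error) {
	if blk.Version() < version.Deneb {
		return nil, nil
	}
	// ... construct blob sidecars for Deneb and later blocks ...
	return nil, nil
}
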
Justin Traglia
b87d02eeb3 Fix various small things in state-native code (#14604)
* Add nil checks in AppendPending*() functions

* Import errors

* Run goimports

* Move PendingDeposit.Amount to right spot

* Rename DequeuePartialWithdrawals to DequeuePendingPartialWithdrawals

* Remove parens from errNotSupported arg

* In electraField, move LatestExecutionPayloadHeader

* Add changelog entry

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-05 16:07:40 +00:00
Cam Sweeney
bcb4155523 prevent panic by returning on connection error (#14602)
* prevent panic by returning on connection error

* add test

* don't close eventschannel on error

---------

Co-authored-by: Sammy Rosso <15244892+saolyn@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2024-11-04 21:10:58 +00:00
Preston Van Loon
77f10b9e0e Benchmark process slots (#14616)
* Benchmark process slots

* Update changelog
2024-11-04 16:27:07 +00:00
169 changed files with 7413 additions and 1559 deletions

View File

@@ -54,7 +54,7 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.55.2
version: v1.56.1
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:

View File

@@ -73,6 +73,7 @@ linters:
- promlinter
- protogetter
- revive
- spancheck
- staticcheck
- stylecheck
- tagalign

View File

@@ -8,17 +8,26 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
### Added
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430).
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra
- Light client support: Consensus types for Electra.
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Added SubmitAggregateAndProofsRequestV2 endpoint.
- Updated the `beacon-chain/monitor` package to Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14562)
- Added ListAttestationsV2 endpoint.
- Add ability to rollback node's internal state during processing.
- Change how unsafe protobuf state is created to prevent unnecessary copies.
- Added benchmarks for process slots for Capella, Deneb, Electra.
- Add helper to cast bytes to string without allocating memory.
- Added GetAggregatedAttestationV2 endpoint.
- Added SubmitAttestationsV2 endpoint.
- Validator REST mode Electra block support.
- Added validator index label to `validator_statuses` metric.
- Added Validator REST mode use of Attestation V2 endpoints and Electra attestations.
- PeerDAS: Added proto for `DataColumnIdentifier`, `DataColumnSidecar`, `DataColumnSidecarsByRangeRequest` and `MetadataV2`.
- Better attestation packing for Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14534)
### Changed
@@ -40,10 +49,20 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Simplified `ExitedValidatorIndices`.
- Simplified `EjectedValidatorIndices`.
- `engine_newPayloadV4`, `engine_getPayloadV4` are changed due to new execution request serialization decisions. [PR](https://github.com/prysmaticlabs/prysm/pull/14580)
- Fixed various small things in state-native code.
- Use ROBlock earlier in block syncing pipeline.
- Changed the signature of `ProcessPayload`.
- Only build the Protobuf state once during serialization.
- Capella blocks are execution.
- Fixed panic when http request to subscribe to event stream fails.
- Return early for blob reconstructor during Capella fork.
- Updated block endpoint from V1 to V2.
- Rename instances of "deposit receipts" to "deposit requests".
- Non-blocking payload attribute event handling in beacon api [pr](https://github.com/prysmaticlabs/prysm/pull/14644).
- Updated light client protobufs. [PR](https://github.com/prysmaticlabs/prysm/pull/14650)
- Added `Eth-Consensus-Version` header to `ListAttestationsV2` and `GetAggregateAttestationV2` endpoints.
- Updated light client consensus types. [PR](https://github.com/prysmaticlabs/prysm/pull/14652)
- Fixed pending deposits processing on Electra.
### Deprecated
@@ -53,6 +72,7 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
### Fixed
@@ -67,6 +87,12 @@ The format is based on Keep a Changelog, and this project adheres to Semantic Ve
- Fix keymanager API so that get keys returns an empty response instead of a 500 error when using an unsupported keystore.
- Small log improvement, removing some redundant or duplicate logs.
- EIP7521 - Fixes withdrawal bug by accounting for pending partial withdrawals and deducting already withdrawn amounts from the sweep balance. [PR](https://github.com/prysmaticlabs/prysm/pull/14578)
- Unskip Electra Merkle spec tests.
- Fix panic in validator REST mode when checking status after removing all keys.
- Fix panic on attestation interface since we call data before validation.
- Correct nil check on some interface attestation types.
- Temporary solution to handling Electra attestation and attester_slashing events. [pr](https://github.com/prysmaticlabs/prysm/pull/14655)
- Diverse log improvements and comment additions.
### Security

View File

@@ -93,6 +93,7 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, client.ErrConnectionIssue.Error()).Error()),
}
return
}
defer func() {

View File

@@ -40,7 +40,7 @@ func TestNewEventStream(t *testing.T) {
func TestEventStream(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, r *http.Request) {
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, _ *http.Request) {
flusher, ok := w.(http.Flusher)
require.Equal(t, true, ok)
for i := 1; i <= 3; i++ {
@@ -79,3 +79,23 @@ func TestEventStream(t *testing.T) {
}
}
}
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)
require.NoError(t, err)
// error will happen when request is made, should be received over events channel
go stream.Subscribe(eventsChannel)
event := <-eventsChannel
if event.EventType != EventConnectionError {
t.Errorf("Expected event type %q, got %q", EventConnectionError, event.EventType)
}
}

View File

@@ -26,7 +26,7 @@ type ListAttestationsResponse struct {
}
type SubmitAttestationsRequest struct {
Data []*Attestation `json:"data"`
Data json.RawMessage `json:"data"`
}
type ListVoluntaryExitsResponse struct {

View File

@@ -7,7 +7,8 @@ import (
)
type AggregateAttestationResponse struct {
Data *Attestation `json:"data"`
Version string `json:"version,omitempty"`
Data json.RawMessage `json:"data"`
}
type SubmitContributionAndProofsRequest struct {

View File

@@ -6,8 +6,11 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -69,6 +72,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), arg)
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch {
@@ -167,6 +171,38 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
return payloadID, nil
}
func firePayloadAttributesEvent(ctx context.Context, f event.SubscriberSender, cfg *fcuConfig) {
pidx, err := helpers.BeaconProposerIndex(ctx, cfg.headState)
if err != nil {
log.WithError(err).
WithField("head_root", cfg.headRoot[:]).
Error("Could not get proposer index for PayloadAttributes event")
return
}
evd := payloadattribute.EventData{
ProposerIndex: pidx,
ProposalSlot: cfg.headState.Slot(),
ParentBlockRoot: cfg.headRoot[:],
Attributer: cfg.attributes,
HeadRoot: cfg.headRoot,
HeadState: cfg.headState,
HeadBlock: cfg.headBlock,
}
if cfg.headBlock != nil && !cfg.headBlock.IsNil() {
headPayload, err := cfg.headBlock.Block().Body().Execution()
if err != nil {
log.WithError(err).Error("Could not get execution payload for head block")
return
}
evd.ParentBlockHash = headPayload.BlockHash()
evd.ParentBlockNumber = headPayload.BlockNumber()
}
f.Send(&feed.Event{
Type: statefeed.PayloadAttributes,
Data: evd,
})
}
// getPayloadHash returns the payload hash given the block root.
// if the block is before bellatrix fork epoch, it returns the zero hash.
func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, error) {

View File

@@ -92,12 +92,12 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation can't be nil",
wantedErr: "attestation is nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation's data can't be nil",
wantedErr: "attestation is nil",
},
{
name: "process nil field (a.Target) in attestation",

View File

@@ -7,8 +7,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -620,9 +618,6 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
@@ -650,6 +645,13 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: nil,
attributes: attribute,
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), fcuArgs)
return
}

View File

@@ -448,6 +448,7 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
Target: &ethpb.Checkpoint{
Epoch: primitives.Epoch(i),
},
Source: &ethpb.Checkpoint{},
}
}
@@ -489,6 +490,7 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
Target: &ethpb.Checkpoint{
Root: []byte{},
},
Source: &ethpb.Checkpoint{},
},
Signature: sig.Marshal(),
AggregationBits: list,

View File

@@ -193,7 +193,7 @@ func ProcessWithdrawals(st state.BeaconState, executionData interfaces.Execution
}
if st.Version() >= version.Electra {
if err := st.DequeuePartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
if err := st.DequeuePendingPartialWithdrawals(processedPartialWithdrawalsCount); err != nil {
return nil, fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}

View File

@@ -386,8 +386,14 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -405,9 +411,16 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
}
@@ -560,7 +573,7 @@ func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState,
return beaconState, nil
}
// processDepositRequest processes the specific deposit receipt
// processDepositRequest processes the specific deposit request
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index
@@ -590,8 +603,8 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
}
if err := beaconState.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: bytesutil.SafeCopyBytes(request.Pubkey),
Amount: request.Amount,
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Amount: request.Amount,
Signature: bytesutil.SafeCopyBytes(request.Signature),
Slot: beaconState.Slot(),
}); err != nil {

View File

@@ -22,6 +22,40 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
}
}
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
@@ -285,7 +319,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{}
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))

View File

@@ -29,7 +29,6 @@ var (
ProcessParticipationFlagUpdates = altair.ProcessParticipationFlagUpdates
ProcessSyncCommitteeUpdates = altair.ProcessSyncCommitteeUpdates
AttestationsDelta = altair.AttestationsDelta
ProcessSyncAggregate = altair.ProcessSyncAggregate
)
// ProcessEpoch describes the per epoch operations that are performed on the beacon state.

View File

@@ -84,11 +84,11 @@ func ProcessOperations(
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit receipts")
return nil, errors.Wrap(err, "could not process deposit requests")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
return nil, errors.Wrap(err, "could not process withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)

View File

@@ -31,6 +31,8 @@ const (
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
// PayloadAttributes events are fired upon a missed slot or new head.
PayloadAttributes
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -23,11 +23,8 @@ var (
// Access to these nil fields will result in run time panic,
// it is recommended to run these checks as first line of defense.
func ValidateNilAttestation(attestation ethpb.Att) error {
if attestation == nil {
return errors.New("attestation can't be nil")
}
if attestation.GetData() == nil {
return errors.New("attestation's data can't be nil")
if attestation == nil || attestation.IsNil() {
return errors.New("attestation is nil")
}
if attestation.GetData().Source == nil {
return errors.New("attestation's source can't be nil")

View File

@@ -260,12 +260,12 @@ func TestValidateNilAttestation(t *testing.T) {
{
name: "nil attestation",
attestation: nil,
errString: "attestation can't be nil",
errString: "attestation is nil",
},
{
name: "nil attestation data",
attestation: &ethpb.Attestation{},
errString: "attestation's data can't be nil",
errString: "attestation is nil",
},
{
name: "nil attestation source",

View File

@@ -698,3 +698,45 @@ func TestProcessSlotsConditionally(t *testing.T) {
assert.Equal(t, primitives.Slot(6), s.Slot())
})
}
func BenchmarkProcessSlots_Capella(b *testing.B) {
st, _ := util.DeterministicGenesisStateCapella(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}
func BenchmarkProcessSlots_Deneb(b *testing.B) {
st, _ := util.DeterministicGenesisStateDeneb(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}
func BenchmarkProcessSlots_Electra(b *testing.B) {
st, _ := util.DeterministicGenesisStateElectra(b, params.BeaconConfig().MaxValidatorsPerCommittee)
var err error
b.ResetTimer()
for i := 0; i < b.N; i++ {
st, err = transition.ProcessSlots(context.Background(), st, st.Slot()+1)
if err != nil {
b.Fatalf("Failed to process slot %v", err)
}
}
}

View File

@@ -23,10 +23,10 @@ import (
bolt "go.etcd.io/bbolt"
)
// used to represent errors for inconsistent slot ranges.
// Used to represent errors for inconsistent slot ranges.
var errInvalidSlotRange = errors.New("invalid end slot and start slot provided")
// Block retrieval by root.
// Block retrieval by root. Return nil if block is not found.
func (s *Store) Block(ctx context.Context, blockRoot [32]byte) (interfaces.ReadOnlySignedBeaconBlock, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.Block")
defer span.End()

View File

@@ -688,7 +688,7 @@ func decodeSlasherChunk(enc []byte) ([]uint16, error) {
// Encode attestation record to bytes.
// The output encoded attestation record consists in the signing root concatenated with the compressed attestation record.
func encodeAttestationRecord(att *slashertypes.IndexedAttestationWrapper) ([]byte, error) {
if att == nil || att.IndexedAttestation == nil {
if att == nil || att.IndexedAttestation == nil || att.IndexedAttestation.IsNil() {
return []byte{}, errors.New("nil proposal record")
}

View File

@@ -53,7 +53,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// Only reorg blocks that arrive late
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
log.WithError(err).Error("could not check if block arrived early")
log.WithError(err).Error("Could not check if block arrived early")
return
}
if early {

View File

@@ -192,20 +192,13 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
beacon.verifyInitWaiter = verification.NewInitializerWaiter(
beacon.clockWaiter, forkchoice.NewROForkChoice(beacon.forkChoicer), beacon.stateGen)
pa := peers.NewAssigner(beacon.fetchP2P().Peers(), beacon.forkChoicer)
beacon.BackfillOpts = append(
beacon.BackfillOpts,
backfill.WithVerifierWaiter(beacon.verifyInitWaiter),
backfill.WithInitSyncWaiter(initSyncWaiter(ctx, beacon.initialSyncComplete)),
)
bf, err := backfill.NewService(ctx, bfs, beacon.BlobStorage, beacon.clockWaiter, beacon.fetchP2P(), pa, beacon.BackfillOpts...)
if err != nil {
return nil, errors.Wrap(err, "error initializing backfill service")
}
if err := registerServices(cliCtx, beacon, synchronizer, bf, bfs); err != nil {
if err := registerServices(cliCtx, beacon, synchronizer, bfs); err != nil {
return nil, errors.Wrap(err, "could not register services")
}
@@ -292,11 +285,6 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return nil, errors.Wrap(err, "could not start slashing DB")
}
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not register P2P service")
}
bfs, err := backfill.NewUpdater(ctx, beacon.db)
if err != nil {
return nil, errors.Wrap(err, "could not create backfill updater")
@@ -315,9 +303,15 @@ func startBaseServices(cliCtx *cli.Context, beacon *BeaconNode, depositAddress s
return bfs, nil
}
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bf *backfill.Service, bfs *backfill.Store) error {
if err := beacon.services.RegisterService(bf); err != nil {
return errors.Wrap(err, "could not register backfill service")
func registerServices(cliCtx *cli.Context, beacon *BeaconNode, synchronizer *startup.ClockSynchronizer, bfs *backfill.Store) error {
log.Debugln("Registering P2P Service")
if err := beacon.registerP2P(cliCtx); err != nil {
return errors.Wrap(err, "could not register P2P service")
}
log.Debugln("Registering Backfill Service")
if err := beacon.RegisterBackfillService(cliCtx, bfs); err != nil {
return errors.Wrap(err, "could not register Back Fill service")
}
log.Debugln("Registering POW Chain Service")
@@ -1136,6 +1130,16 @@ func (b *BeaconNode) registerBuilderService(cliCtx *cli.Context) error {
return b.services.RegisterService(svc)
}
func (b *BeaconNode) RegisterBackfillService(cliCtx *cli.Context, bfs *backfill.Store) error {
pa := peers.NewAssigner(b.fetchP2P().Peers(), b.forkChoicer)
bf, err := backfill.NewService(cliCtx.Context, bfs, b.BlobStorage, b.clockWaiter, b.fetchP2P(), pa, b.BackfillOpts...)
if err != nil {
return errors.Wrap(err, "error initializing backfill service")
}
return b.services.RegisterService(bf)
}
func hasNetworkFlag(cliCtx *cli.Context) bool {
for _, flag := range features.NetworkFlags {
for _, name := range flag.Names() {

View File

@@ -49,12 +49,12 @@ func TestKV_Aggregated_SaveAggregatedAttestation(t *testing.T) {
{
name: "nil attestation",
att: nil,
wantErrString: "attestation can't be nil",
wantErrString: "attestation is nil",
},
{
name: "nil attestation data",
att: &ethpb.Attestation{},
wantErrString: "attestation's data can't be nil",
wantErrString: "attestation is nil",
},
{
name: "not aggregated",
@@ -206,7 +206,7 @@ func TestKV_Aggregated_AggregatedAttestations(t *testing.T) {
func TestKV_Aggregated_DeleteAggregatedAttestation(t *testing.T) {
t.Run("nil attestation", func(t *testing.T) {
cache := NewAttCaches()
assert.ErrorContains(t, "attestation can't be nil", cache.DeleteAggregatedAttestation(nil))
assert.ErrorContains(t, "attestation is nil", cache.DeleteAggregatedAttestation(nil))
att := util.HydrateAttestation(&ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b10101}, Data: &ethpb.AttestationData{Slot: 2}})
assert.NoError(t, cache.DeleteAggregatedAttestation(att))
})
@@ -288,7 +288,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
name: "nil attestation",
input: nil,
want: false,
err: errors.New("can't be nil"),
err: errors.New("is nil"),
},
{
name: "nil attestation data",
@@ -296,7 +296,7 @@ func TestKV_Aggregated_HasAggregatedAttestation(t *testing.T) {
AggregationBits: bitfield.Bitlist{0b1111},
},
want: false,
err: errors.New("can't be nil"),
err: errors.New("is nil"),
},
{
name: "empty cache aggregated",

View File

@@ -8,7 +8,7 @@ import (
// SaveBlockAttestation saves an block attestation in cache.
func (c *AttCaches) SaveBlockAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
@@ -53,10 +53,9 @@ func (c *AttCaches) BlockAttestations() []ethpb.Att {
// DeleteBlockAttestation deletes a block attestation in cache.
func (c *AttCaches) DeleteBlockAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrap(err, "could not create attestation ID")

View File

@@ -8,7 +8,7 @@ import (
// SaveForkchoiceAttestation saves an forkchoice attestation in cache.
func (c *AttCaches) SaveForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
@@ -50,7 +50,7 @@ func (c *AttCaches) ForkchoiceAttestations() []ethpb.Att {
// DeleteForkchoiceAttestation deletes a forkchoice attestation in cache.
func (c *AttCaches) DeleteForkchoiceAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}

View File

@@ -14,7 +14,7 @@ import (
// SaveUnaggregatedAttestation saves an unaggregated attestation in cache.
func (c *AttCaches) SaveUnaggregatedAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
if helpers.IsAggregated(att) {
@@ -130,9 +130,10 @@ func (c *AttCaches) UnaggregatedAttestationsBySlotIndexElectra(
// DeleteUnaggregatedAttestation deletes the unaggregated attestations in cache.
func (c *AttCaches) DeleteUnaggregatedAttestation(att ethpb.Att) error {
if att == nil {
if att == nil || att.IsNil() {
return nil
}
if helpers.IsAggregated(att) {
return errors.New("attestation is aggregated")
}
@@ -161,7 +162,7 @@ func (c *AttCaches) DeleteSeenUnaggregatedAttestations() (int, error) {
count := 0
for r, att := range c.unAggregatedAtt {
if att == nil || helpers.IsAggregated(att) {
if att == nil || att.IsNil() || helpers.IsAggregated(att) {
continue
}
if seen, err := c.hasSeenBit(att); err == nil && seen {

View File

@@ -7,6 +7,7 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/operations/attestations:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],

View File

@@ -3,13 +3,17 @@ package mock
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var _ attestations.Pool = &PoolMock{}
// PoolMock --
type PoolMock struct {
AggregatedAtts []*ethpb.Attestation
AggregatedAtts []ethpb.Att
UnaggregatedAtts []ethpb.Att
}
// AggregateUnaggregatedAttestations --
@@ -23,18 +27,18 @@ func (*PoolMock) AggregateUnaggregatedAttestationsBySlotIndex(_ context.Context,
}
// SaveAggregatedAttestation --
func (*PoolMock) SaveAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveAggregatedAttestations --
func (m *PoolMock) SaveAggregatedAttestations(atts []*ethpb.Attestation) error {
func (m *PoolMock) SaveAggregatedAttestations(atts []ethpb.Att) error {
m.AggregatedAtts = append(m.AggregatedAtts, atts...)
return nil
}
// AggregatedAttestations --
func (m *PoolMock) AggregatedAttestations() []*ethpb.Attestation {
func (m *PoolMock) AggregatedAttestations() []ethpb.Att {
return m.AggregatedAtts
}
@@ -43,13 +47,18 @@ func (*PoolMock) AggregatedAttestationsBySlotIndex(_ context.Context, _ primitiv
panic("implement me")
}
// AggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) AggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteAggregatedAttestation --
func (*PoolMock) DeleteAggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteAggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// HasAggregatedAttestation --
func (*PoolMock) HasAggregatedAttestation(_ *ethpb.Attestation) (bool, error) {
func (*PoolMock) HasAggregatedAttestation(_ ethpb.Att) (bool, error) {
panic("implement me")
}
@@ -59,18 +68,19 @@ func (*PoolMock) AggregatedAttestationCount() int {
}
// SaveUnaggregatedAttestation --
func (*PoolMock) SaveUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveUnaggregatedAttestations --
func (*PoolMock) SaveUnaggregatedAttestations(_ []*ethpb.Attestation) error {
panic("implement me")
func (m *PoolMock) SaveUnaggregatedAttestations(atts []ethpb.Att) error {
m.UnaggregatedAtts = append(m.UnaggregatedAtts, atts...)
return nil
}
// UnaggregatedAttestations --
func (*PoolMock) UnaggregatedAttestations() ([]*ethpb.Attestation, error) {
panic("implement me")
func (m *PoolMock) UnaggregatedAttestations() ([]ethpb.Att, error) {
return m.UnaggregatedAtts, nil
}
// UnaggregatedAttestationsBySlotIndex --
@@ -78,8 +88,13 @@ func (*PoolMock) UnaggregatedAttestationsBySlotIndex(_ context.Context, _ primit
panic("implement me")
}
// UnaggregatedAttestationsBySlotIndexElectra --
func (*PoolMock) UnaggregatedAttestationsBySlotIndexElectra(_ context.Context, _ primitives.Slot, _ primitives.CommitteeIndex) []*ethpb.AttestationElectra {
panic("implement me")
}
// DeleteUnaggregatedAttestation --
func (*PoolMock) DeleteUnaggregatedAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteUnaggregatedAttestation(_ ethpb.Att) error {
panic("implement me")
}
@@ -94,42 +109,42 @@ func (*PoolMock) UnaggregatedAttestationCount() int {
}
// SaveBlockAttestation --
func (*PoolMock) SaveBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveBlockAttestations --
func (*PoolMock) SaveBlockAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveBlockAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// BlockAttestations --
func (*PoolMock) BlockAttestations() []*ethpb.Attestation {
func (*PoolMock) BlockAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteBlockAttestation --
func (*PoolMock) DeleteBlockAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteBlockAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestation --
func (*PoolMock) SaveForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}
// SaveForkchoiceAttestations --
func (*PoolMock) SaveForkchoiceAttestations(_ []*ethpb.Attestation) error {
func (*PoolMock) SaveForkchoiceAttestations(_ []ethpb.Att) error {
panic("implement me")
}
// ForkchoiceAttestations --
func (*PoolMock) ForkchoiceAttestations() []*ethpb.Attestation {
func (*PoolMock) ForkchoiceAttestations() []ethpb.Att {
panic("implement me")
}
// DeleteForkchoiceAttestation --
func (*PoolMock) DeleteForkchoiceAttestation(_ *ethpb.Attestation) error {
func (*PoolMock) DeleteForkchoiceAttestation(_ ethpb.Att) error {
panic("implement me")
}

View File

@@ -75,6 +75,8 @@ go_library(
"//runtime/version:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_btcsuite_btcd_btcec_v2//:go_default_library",
"@com_github_ethereum_go_ethereum//crypto:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/discover:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enode:go_default_library",
"@com_github_ethereum_go_ethereum//p2p/enr:go_default_library",

View File

@@ -29,7 +29,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
digest, err := ExtractGossipDigest(*pmsg.Topic)
if err != nil {
@@ -37,7 +37,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
_, fEpoch, err := forks.RetrieveForkDataFromDigest(digest, genesisValidatorsRoot)
if err != nil {
@@ -45,7 +45,7 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
// never be hit.
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
if fEpoch >= params.BeaconConfig().AltairForkEpoch {
return postAltairMsgID(pmsg, fEpoch)
@@ -54,11 +54,11 @@ func MsgID(genesisValidatorsRoot []byte, pmsg *pubsubpb.Message) string {
if err != nil {
combinedData := append(params.BeaconConfig().MessageDomainInvalidSnappy[:], pmsg.Data...)
h := hash.Hash(combinedData)
return string(h[:20])
return bytesutil.UnsafeCastToString(h[:20])
}
combinedData := append(params.BeaconConfig().MessageDomainValidSnappy[:], decodedData...)
h := hash.Hash(combinedData)
return string(h[:20])
return bytesutil.UnsafeCastToString(h[:20])
}
// Spec:
@@ -93,13 +93,13 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
// should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
if uint64(totalLength) > gossipPubSubSize {
// this should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
combinedData := make([]byte, 0, totalLength)
combinedData = append(combinedData, params.BeaconConfig().MessageDomainInvalidSnappy[:]...)
@@ -107,7 +107,7 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
combinedData = append(combinedData, topic...)
combinedData = append(combinedData, pmsg.Data...)
h := hash.Hash(combinedData)
return string(h[:20])
return bytesutil.UnsafeCastToString(h[:20])
}
totalLength, err := math.AddInt(
len(params.BeaconConfig().MessageDomainValidSnappy),
@@ -120,7 +120,7 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
// should never happen
msg := make([]byte, 20)
copy(msg, "invalid")
return string(msg)
return bytesutil.UnsafeCastToString(msg)
}
combinedData := make([]byte, 0, totalLength)
combinedData = append(combinedData, params.BeaconConfig().MessageDomainValidSnappy[:]...)
@@ -128,5 +128,5 @@ func postAltairMsgID(pmsg *pubsubpb.Message, fEpoch primitives.Epoch) string {
combinedData = append(combinedData, topic...)
combinedData = append(combinedData, decodedData...)
h := hash.Hash(combinedData)
return string(h[:20])
return bytesutil.UnsafeCastToString(h[:20])
}

View File

@@ -165,14 +165,14 @@ func (s *Service) pubsubOptions() []pubsub.Option {
func parsePeersEnr(peers []string) ([]peer.AddrInfo, error) {
addrs, err := PeersFromStringAddrs(peers)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers raw ENRs into multiaddresses: %w", err)
return nil, fmt.Errorf("cannot convert peers raw ENRs into multiaddresses: %w", err)
}
if len(addrs) == 0 {
return nil, fmt.Errorf("Converting peers raw ENRs into multiaddresses resulted in an empty list")
return nil, fmt.Errorf("converting peers raw ENRs into multiaddresses resulted in an empty list")
}
directAddrInfos, err := peer.AddrInfosFromP2pAddrs(addrs...)
if err != nil {
return nil, fmt.Errorf("Cannot convert peers multiaddresses into AddrInfos: %w", err)
return nil, fmt.Errorf("cannot convert peers multiaddresses into AddrInfos: %w", err)
}
return directAddrInfos, nil
}

View File

@@ -27,148 +27,148 @@ func NewFuzzTestP2P() *FakeP2P {
}
// Encoding -- fake.
func (_ *FakeP2P) Encoding() encoder.NetworkEncoding {
func (*FakeP2P) Encoding() encoder.NetworkEncoding {
return &encoder.SszNetworkEncoder{}
}
// AddConnectionHandler -- fake.
func (_ *FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddConnectionHandler(_, _ func(ctx context.Context, id peer.ID) error) {
}
// AddDisconnectionHandler -- fake.
func (_ *FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddDisconnectionHandler(_ func(ctx context.Context, id peer.ID) error) {
}
// AddPingMethod -- fake.
func (_ *FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
func (*FakeP2P) AddPingMethod(_ func(ctx context.Context, id peer.ID) error) {
}
// PeerID -- fake.
func (_ *FakeP2P) PeerID() peer.ID {
func (*FakeP2P) PeerID() peer.ID {
return "fake"
}
// ENR returns the enr of the local peer.
func (_ *FakeP2P) ENR() *enr.Record {
func (*FakeP2P) ENR() *enr.Record {
return new(enr.Record)
}
// DiscoveryAddresses -- fake
func (_ *FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
func (*FakeP2P) DiscoveryAddresses() ([]multiaddr.Multiaddr, error) {
return nil, nil
}
// FindPeersWithSubnet mocks the p2p func.
func (_ *FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
func (*FakeP2P) FindPeersWithSubnet(_ context.Context, _ string, _ uint64, _ int) (bool, error) {
return false, nil
}
// RefreshENR mocks the p2p func.
func (_ *FakeP2P) RefreshENR() {}
func (*FakeP2P) RefreshENR() {}
// LeaveTopic -- fake.
func (_ *FakeP2P) LeaveTopic(_ string) error {
func (*FakeP2P) LeaveTopic(_ string) error {
return nil
}
// Metadata -- fake.
func (_ *FakeP2P) Metadata() metadata.Metadata {
func (*FakeP2P) Metadata() metadata.Metadata {
return nil
}
// Peers -- fake.
func (_ *FakeP2P) Peers() *peers.Status {
func (*FakeP2P) Peers() *peers.Status {
return nil
}
// PublishToTopic -- fake.
func (_ *FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
func (*FakeP2P) PublishToTopic(_ context.Context, _ string, _ []byte, _ ...pubsub.PubOpt) error {
return nil
}
// Send -- fake.
func (_ *FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
func (*FakeP2P) Send(_ context.Context, _ interface{}, _ string, _ peer.ID) (network.Stream, error) {
return nil, nil
}
// PubSub -- fake.
func (_ *FakeP2P) PubSub() *pubsub.PubSub {
func (*FakeP2P) PubSub() *pubsub.PubSub {
return nil
}
// MetadataSeq -- fake.
func (_ *FakeP2P) MetadataSeq() uint64 {
func (*FakeP2P) MetadataSeq() uint64 {
return 0
}
// SetStreamHandler -- fake.
func (_ *FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
func (*FakeP2P) SetStreamHandler(_ string, _ network.StreamHandler) {
}
// SubscribeToTopic -- fake.
func (_ *FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
func (*FakeP2P) SubscribeToTopic(_ string, _ ...pubsub.SubOpt) (*pubsub.Subscription, error) {
return nil, nil
}
// JoinTopic -- fake.
func (_ *FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
func (*FakeP2P) JoinTopic(_ string, _ ...pubsub.TopicOpt) (*pubsub.Topic, error) {
return nil, nil
}
// Host -- fake.
func (_ *FakeP2P) Host() host.Host {
func (*FakeP2P) Host() host.Host {
return nil
}
// Disconnect -- fake.
func (_ *FakeP2P) Disconnect(_ peer.ID) error {
func (*FakeP2P) Disconnect(_ peer.ID) error {
return nil
}
// Broadcast -- fake.
func (_ *FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
func (*FakeP2P) Broadcast(_ context.Context, _ proto.Message) error {
return nil
}
// BroadcastAttestation -- fake.
func (_ *FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
func (*FakeP2P) BroadcastAttestation(_ context.Context, _ uint64, _ ethpb.Att) error {
return nil
}
// BroadcastSyncCommitteeMessage -- fake.
func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
func (*FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *ethpb.SyncCommitteeMessage) error {
return nil
}
// BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (*FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
return nil
}
// InterceptPeerDial -- fake.
func (_ *FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
func (*FakeP2P) InterceptPeerDial(peer.ID) (allow bool) {
return true
}
// InterceptAddrDial -- fake.
func (_ *FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
func (*FakeP2P) InterceptAddrDial(peer.ID, multiaddr.Multiaddr) (allow bool) {
return true
}
// InterceptAccept -- fake.
func (_ *FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptAccept(_ network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptSecured -- fake.
func (_ *FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
func (*FakeP2P) InterceptSecured(network.Direction, peer.ID, network.ConnMultiaddrs) (allow bool) {
return true
}
// InterceptUpgraded -- fake.
func (_ *FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
func (*FakeP2P) InterceptUpgraded(network.Conn) (allow bool, reason control.DisconnectReason) {
return true, 0
}

View File

@@ -18,12 +18,12 @@ type MockHost struct {
}
// ID --
func (_ *MockHost) ID() peer.ID {
func (*MockHost) ID() peer.ID {
return ""
}
// Peerstore --
func (_ *MockHost) Peerstore() peerstore.Peerstore {
func (*MockHost) Peerstore() peerstore.Peerstore {
return nil
}
@@ -33,46 +33,46 @@ func (m *MockHost) Addrs() []ma.Multiaddr {
}
// Network --
func (_ *MockHost) Network() network.Network {
func (*MockHost) Network() network.Network {
return nil
}
// Mux --
func (_ *MockHost) Mux() protocol.Switch {
func (*MockHost) Mux() protocol.Switch {
return nil
}
// Connect --
func (_ *MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
func (*MockHost) Connect(_ context.Context, _ peer.AddrInfo) error {
return nil
}
// SetStreamHandler --
func (_ *MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
func (*MockHost) SetStreamHandler(_ protocol.ID, _ network.StreamHandler) {}
// SetStreamHandlerMatch --
func (_ *MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
func (*MockHost) SetStreamHandlerMatch(protocol.ID, func(id protocol.ID) bool, network.StreamHandler) {
}
// RemoveStreamHandler --
func (_ *MockHost) RemoveStreamHandler(_ protocol.ID) {}
func (*MockHost) RemoveStreamHandler(_ protocol.ID) {}
// NewStream --
func (_ *MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
func (*MockHost) NewStream(_ context.Context, _ peer.ID, _ ...protocol.ID) (network.Stream, error) {
return nil, nil
}
// Close --
func (_ *MockHost) Close() error {
func (*MockHost) Close() error {
return nil
}
// ConnManager --
func (_ *MockHost) ConnManager() connmgr.ConnManager {
func (*MockHost) ConnManager() connmgr.ConnManager {
return nil
}
// EventBus --
func (_ *MockHost) EventBus() event.Bus {
func (*MockHost) EventBus() event.Bus {
return nil
}

View File

@@ -12,10 +12,15 @@ import (
"path"
"time"
"github.com/btcsuite/btcd/btcec/v2"
gCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/ethereum/go-ethereum/p2p/enr"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/wrapper"
ecdsaprysm "github.com/prysmaticlabs/prysm/v5/crypto/ecdsa"
"github.com/prysmaticlabs/prysm/v5/io/file"
@@ -62,6 +67,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
}
if defaultKeysExist {
log.WithField("filePath", defaultKeyPath).Info("Reading static P2P private key from a file. To generate a new random private key at every start, please remove this file.")
return privKeyFromFile(defaultKeyPath)
}
@@ -71,8 +77,8 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
// If the StaticPeerID flag is not set, return the private key.
if !cfg.StaticPeerID {
// If the StaticPeerID flag is not set and if peerDAS is not enabled, return the private key.
if !(cfg.StaticPeerID || params.PeerDASEnabled()) {
return ecdsaprysm.ConvertFromInterfacePrivKey(priv)
}
@@ -89,7 +95,7 @@ func privKey(cfg *Config) (*ecdsa.PrivateKey, error) {
return nil, err
}
log.Info("Wrote network key to file")
log.WithField("path", defaultKeyPath).Info("Wrote network key to file")
// Read the key from the defaultKeyPath file just written
// for the strongest guarantee that the next start will be the same as this one.
return privKeyFromFile(defaultKeyPath)
@@ -173,3 +179,27 @@ func verifyConnectivity(addr string, port uint, protocol string) {
}
}
}
// ConvertPeerIDToNodeID converts a peer ID (libp2p) to a node ID (devp2p).
func ConvertPeerIDToNodeID(pid peer.ID) (enode.ID, error) {
// Retrieve the public key object of the peer under "crypto" form.
pubkeyObjCrypto, err := pid.ExtractPublicKey()
if err != nil {
return [32]byte{}, errors.Wrapf(err, "extract public key from peer ID `%s`", pid)
}
// Extract the bytes representation of the public key.
compressedPubKeyBytes, err := pubkeyObjCrypto.Raw()
if err != nil {
return [32]byte{}, errors.Wrap(err, "public key raw")
}
// Retrieve the public key object of the peer under "SECP256K1" form.
pubKeyObjSecp256k1, err := btcec.ParsePubKey(compressedPubKeyBytes)
if err != nil {
return [32]byte{}, errors.Wrap(err, "parse public key")
}
newPubkey := &ecdsa.PublicKey{Curve: gCrypto.S256(), X: pubKeyObjSecp256k1.X(), Y: pubKeyObjSecp256k1.Y()}
return enode.PubkeyToIDV4(newPubkey), nil
}

View File

@@ -6,6 +6,7 @@ import (
"github.com/ethereum/go-ethereum/crypto"
"github.com/ethereum/go-ethereum/p2p/enode"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -64,3 +65,19 @@ func TestSerializeENR(t *testing.T) {
assert.ErrorContains(t, "could not serialize nil record", err)
})
}
func TestConvertPeerIDToNodeID(t *testing.T) {
const (
peerIDStr = "16Uiu2HAmRrhnqEfybLYimCiAYer2AtZKDGamQrL1VwRCyeh2YiFc"
expectedNodeIDStr = "eed26c5d2425ab95f57246a5dca87317c41cacee4bcafe8bbe57e5965527c290"
)
peerID, err := peer.Decode(peerIDStr)
require.NoError(t, err)
actualNodeID, err := ConvertPeerIDToNodeID(peerID)
require.NoError(t, err)
actualNodeIDStr := actualNodeID.String()
require.Equal(t, expectedNodeIDStr, actualNodeIDStr)
}

View File

@@ -381,21 +381,12 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
ctx, span := trace.StartSpan(ctx, "coreService.SubmitSignedAggregateSelectionProof")
defer span.End()
-if agg == nil {
+if agg == nil || agg.IsNil() {
return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
}
attAndProof := agg.AggregateAttestationAndProof()
-if attAndProof == nil {
-return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
-}
att := attAndProof.AggregateVal()
-if att == nil {
-return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
-}
data := att.GetData()
-if data == nil {
-return &RpcError{Err: errors.New("signed aggregate request can't be nil"), Reason: BadRequest}
-}
emptySig := make([]byte, fieldparams.BLSSignatureLength)
if bytes.Equal(agg.GetSignature(), emptySig) || bytes.Equal(attAndProof.GetSelectionProof(), emptySig) {
return &RpcError{Err: errors.New("signed signatures can't be zero hashes"), Reason: BadRequest}
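The change above folds four separate nil checks into a single agg.IsNil() call. A hedged sketch of what such a deep nil check covers (field names follow the usual Prysm proto layout and are assumptions here, not the actual implementation):

// Hypothetical equivalent of the consolidated IsNil check: the request is
// unusable if any layer of the signed aggregate wrapper is missing.
func signedAggregateIsNil(agg *ethpb.SignedAggregateAttestationAndProof) bool {
	return agg == nil ||
		agg.Message == nil ||
		agg.Message.Aggregate == nil ||
		agg.Message.Aggregate.Data == nil
}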

View File

@@ -199,6 +199,15 @@ func (s *Service) validatorEndpoints(
handler: server.GetAggregateAttestation,
methods: []string{http.MethodGet},
},
{
template: "/eth/v2/validator/aggregate_attestation",
name: namespace + ".GetAggregateAttestationV2",
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.GetAggregateAttestationV2,
methods: []string{http.MethodGet},
},
{
template: "/eth/v1/validator/contribution_and_proofs",
name: namespace + ".SubmitContributionAndProofs",
@@ -601,7 +610,7 @@ func (s *Service) beaconEndpoints(
middleware: []middleware.Middleware{
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
-handler: server.GetBlockAttestations,
+handler: server.GetBlockAttestationsV2,
methods: []string{http.MethodGet},
},
{
@@ -650,6 +659,16 @@ func (s *Service) beaconEndpoints(
handler: server.SubmitAttestations,
methods: []string{http.MethodPost},
},
{
template: "/eth/v2/beacon/pool/attestations",
name: namespace + ".SubmitAttestationsV2",
middleware: []middleware.Middleware{
middleware.ContentTypeHandler([]string{api.JsonMediaType}),
middleware.AcceptHeaderHandler([]string{api.JsonMediaType}),
},
handler: server.SubmitAttestationsV2,
methods: []string{http.MethodPost},
},
{
template: "/eth/v1/beacon/pool/voluntary_exits",
name: namespace + ".ListVoluntaryExits",
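The new v2 routes reuse one URL across forks and let request headers carry the fork information. A hedged client-side sketch (the host, port, and literal header name are assumptions; Prysm references the header as api.VersionHeader):

// Sketch: submit Electra-shaped attestations to the v2 pool endpoint.
// Assumed imports: bytes, fmt, net/http.
func submitAttestationsV2(attsJSON []byte) error {
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:3500/eth/v2/beacon/pool/attestations",
		bytes.NewReader(attsJSON))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Eth-Consensus-Version", "electra") // selects the decoding path
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return nil
}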

View File

@@ -41,7 +41,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/beacon/deposit_snapshot": {http.MethodGet},
"/eth/v1/beacon/blinded_blocks/{block_id}": {http.MethodGet},
"/eth/v1/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attestations": {http.MethodGet},
"/eth/v2/beacon/pool/attestations": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v2/beacon/pool/attester_slashings": {http.MethodGet, http.MethodPost},
"/eth/v1/beacon/pool/proposer_slashings": {http.MethodGet, http.MethodPost},
@@ -101,6 +101,7 @@ func Test_endpoints(t *testing.T) {
"/eth/v1/validator/blinded_blocks/{slot}": {http.MethodGet},
"/eth/v1/validator/attestation_data": {http.MethodGet},
"/eth/v1/validator/aggregate_attestation": {http.MethodGet},
"/eth/v2/validator/aggregate_attestation": {http.MethodGet},
"/eth/v1/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v2/validator/aggregate_and_proofs": {http.MethodPost},
"/eth/v1/validator/beacon_committee_subscriptions": {http.MethodPost},

View File

@@ -3,13 +3,14 @@ package beacon
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
@@ -148,6 +149,7 @@ func (s *Server) ListAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
httputil.WriteJson(w, &structs.ListAttestationsResponse{
Version: version.String(headState.Version()),
Data: attsData,
@@ -189,70 +191,13 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
-if len(req.Data) == 0 {
-httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
+attFailures, failedBroadcasts, err := s.handleAttestations(ctx, req.Data)
+if err != nil {
+httputil.HandleError(w, err.Error(), http.StatusBadRequest)
return
}
-var validAttestations []*eth.Attestation
-var attFailures []*server.IndexedVerificationFailure
-for i, sourceAtt := range req.Data {
-att, err := sourceAtt.ToConsensus()
-if err != nil {
-attFailures = append(attFailures, &server.IndexedVerificationFailure{
-Index: i,
-Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
-})
-continue
-}
-if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
-attFailures = append(attFailures, &server.IndexedVerificationFailure{
-Index: i,
-Message: "Incorrect attestation signature: " + err.Error(),
-})
-continue
-}
-// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
-// of a received unaggregated attestation.
-// Note we can't send for aggregated att because we don't have selection proof.
-if !corehelpers.IsAggregated(att) {
-s.OperationNotifier.OperationFeed().Send(&feed.Event{
-Type: operation.UnaggregatedAttReceived,
-Data: &operation.UnAggregatedAttReceivedData{
-Attestation: att,
-},
-})
-}
-validAttestations = append(validAttestations, att)
-}
-failedBroadcasts := make([]string, 0)
-for i, att := range validAttestations {
-// Determine subnet to broadcast attestation to
-wantedEpoch := slots.ToEpoch(att.Data.Slot)
-vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
-if err != nil {
-httputil.HandleError(w, "Could not get head validator indices: "+err.Error(), http.StatusInternalServerError)
-return
-}
-subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
-if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
-log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
-}
-if corehelpers.IsAggregated(att) {
-if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
-log.WithError(err).Error("could not save aggregated attestation")
-}
-} else {
-if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
-log.WithError(err).Error("could not save unaggregated attestation")
-}
-}
-}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
@@ -272,6 +217,213 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
}
}
// SubmitAttestationsV2 submits attestations to the node. If an attestation passes all validation
// constraints, the node MUST publish it on the appropriate subnet.
func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
return
}
v, err := version.FromString(versionHeader)
if err != nil {
httputil.HandleError(w, "Invalid version: "+err.Error(), http.StatusBadRequest)
return
}
var req structs.SubmitAttestationsRequest
err = json.NewDecoder(r.Body).Decode(&req.Data)
switch {
case errors.Is(err, io.EOF):
httputil.HandleError(w, "No data submitted", http.StatusBadRequest)
return
case err != nil:
httputil.HandleError(w, "Could not decode request body: "+err.Error(), http.StatusBadRequest)
return
}
var attFailures []*server.IndexedVerificationFailure
var failedBroadcasts []string
if v >= version.Electra {
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
} else {
attFailures, failedBroadcasts, err = s.handleAttestations(ctx, req.Data)
}
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Failed to handle attestations: %v", err), http.StatusBadRequest)
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
}
}
func (s *Server) handleAttestationsElectra(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.AttestationElectra
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.AttestationElectra
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
committeeIndex, err := att.GetCommitteeIndex()
if err != nil {
return nil, nil, errors.Wrap(err, "failed to retrieve attestation committee index")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), committeeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
return attFailures, failedBroadcasts, nil
}
func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
var sourceAttestations []*structs.Attestation
if err = json.Unmarshal(data, &sourceAttestations); err != nil {
return nil, nil, errors.Wrap(err, "failed to unmarshal attestation")
}
if len(sourceAttestations) == 0 {
return nil, nil, errors.New("no data submitted")
}
var validAttestations []*eth.Attestation
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
continue
}
validAttestations = append(validAttestations, att)
}
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
// Note we can't send for aggregated att because we don't have selection proof.
if !corehelpers.IsAggregated(att) {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.UnaggregatedAttReceived,
Data: &operation.UnAggregatedAttReceivedData{
Attestation: att,
},
})
}
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
}
if corehelpers.IsAggregated(att) {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
}
}
}
return attFailures, failedBroadcasts, nil
}
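Both handlers pick the broadcast subnet with ComputeSubnetFromCommitteeAndSlot. For reference, a sketch of the derivation as specified by the consensus spec's compute_subnet_for_attestation (the constants are mainnet values and the committees-per-slot count is passed in by the caller; both are assumptions for the sketch):

// Sketch: offset the committee index by the number of committees elapsed
// since the start of the epoch, then wrap around the subnet count.
const (
	slotsPerEpoch          = 32
	attestationSubnetCount = 64
)

func subnetForAttestation(committeesPerSlot, committeeIndex, slot uint64) uint64 {
	slotsSinceEpochStart := slot % slotsPerEpoch
	committeesSinceEpochStart := committeesPerSlot * slotsSinceEpochStart
	return (committeesSinceEpochStart + committeeIndex) % attestationSubnetCount
}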
// ListVoluntaryExits retrieves voluntary exits known by the node but
// not necessarily incorporated into any block.
func (s *Server) ListVoluntaryExits(w http.ResponseWriter, r *http.Request) {

View File

@@ -500,95 +500,292 @@ func TestSubmitAttestations(t *testing.T) {
ChainInfoFetcher: chainService,
OperationNotifier: &blockchainmock.MockOperationNotifier{},
}
t.Run("V1", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
t.Run("V2", func(t *testing.T) {
t.Run("pre-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAtts)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("post-electra", func(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(singleAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 1, broadcaster.NumAttestations())
assert.Equal(t, "0x03", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetAggregationBits()))
assert.Equal(t, "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetSignature()))
assert.Equal(t, primitives.Slot(0), broadcaster.BroadcastAttestations[0].GetData().Slot)
assert.Equal(t, primitives.CommitteeIndex(0), broadcaster.BroadcastAttestations[0].GetData().CommitteeIndex)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().BeaconBlockRoot))
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Source.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
s.Broadcaster = broadcaster
s.AttestationsPool = attestations.NewPool()
var body bytes.Buffer
_, err := body.WriteString(multipleAttsElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "no data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAttElectra)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Electra))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("empty", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString("[]")
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("invalid", func(t *testing.T) {
var body bytes.Buffer
_, err := body.WriteString(invalidAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -2063,6 +2260,85 @@ var (
}
}
}
]`
singleAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
multipleAttsElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7431000000000000000000000000000000000000000000"
}
}
},
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x8146f4397bfd8fd057ebbcd6a67327bdc7ed5fb650533edcb6377b650dea0b6da64c14ecd60846d5c0a0cd43893d6972092500f82c9d8a955e2b58c5ed3cbe885d84008ace6bd86ba9e23652f58e2ec207cec494c916063257abf285b9b15b15",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0x736f75726365726f6f7431000000000000000000000000000000000000000000"
},
"target": {
"epoch": "0",
"root": "0x746172676574726f6f7432000000000000000000000000000000000000000000"
}
}
}
]`
// signature is invalid
invalidAttElectra = `[
{
"aggregation_bits": "0x03",
"committee_bits": "0x0100000000000000",
"signature": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"data": {
"slot": "0",
"index": "0",
"beacon_block_root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2",
"source": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
"target": {
"epoch": "0",
"root": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
}
}
}
]`
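The only structural difference between these fixtures and the phase0 ones is committee_bits: with EIP-7549 (Electra) the committee is signalled by a one-hot bitvector instead of the index field inside the attestation data, which is why handleAttestationsElectra calls att.GetCommitteeIndex() rather than reading att.Data.CommitteeIndex. A hedged sketch of that lookup (assumes go-bitfield's Bitvector64):

// Sketch: recover the committee index from a one-hot committee_bits vector.
func committeeIndexFromBits(bits bitfield.Bitvector64) (uint64, error) {
	indices := bits.BitIndices() // positions of all set bits
	if len(indices) != 1 {
		return 0, fmt.Errorf("expected exactly one committee bit, got %d", len(indices))
	}
	return uint64(indices[0]), nil
}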
exit1 = `{
"message": {

View File

@@ -79,6 +79,7 @@ func TestGetSpec(t *testing.T) {
config.DenebForkEpoch = 105
config.ElectraForkVersion = []byte("ElectraForkVersion")
config.ElectraForkEpoch = 107
config.Eip7594ForkEpoch = 109
config.BLSWithdrawalPrefixByte = byte('b')
config.ETH1AddressWithdrawalPrefixByte = byte('c')
config.GenesisDelay = 24
@@ -189,7 +190,7 @@ func TestGetSpec(t *testing.T) {
data, ok := resp.Data.(map[string]interface{})
require.Equal(t, true, ok)
-assert.Equal(t, 155, len(data))
+assert.Equal(t, 156, len(data))
for k, v := range data {
t.Run(k, func(t *testing.T) {
switch k {
@@ -267,6 +268,8 @@ func TestGetSpec(t *testing.T) {
assert.Equal(t, "0x"+hex.EncodeToString([]byte("ElectraForkVersion")), v)
case "ELECTRA_FORK_EPOCH":
assert.Equal(t, "107", v)
case "EIP7594_FORK_EPOCH":
assert.Equal(t, "109", v)
case "MIN_ANCHOR_POW_BLOCK_DIFFICULTY":
assert.Equal(t, "1000", v)
case "BLS_WITHDRAWAL_PREFIX":

View File

@@ -19,11 +19,12 @@ go_library(
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//config/params:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -52,6 +53,7 @@ go_test(
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
@@ -18,11 +19,12 @@ import (
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
chaintime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
engine "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -31,6 +33,7 @@ import (
)
const DefaultEventFeedDepth = 1000
const payloadAttributeTimeout = 2 * time.Second
const (
InvalidTopic = "__invalid__"
@@ -89,12 +92,12 @@ var opsFeedEventTopics = map[feed.EventType]string{
var stateFeedEventTopics = map[feed.EventType]string{
statefeed.NewHead: HeadTopic,
-statefeed.MissedSlot: PayloadAttributesTopic,
statefeed.FinalizedCheckpoint: FinalizedCheckpointTopic,
statefeed.LightClientFinalityUpdate: LightClientFinalityUpdateTopic,
statefeed.LightClientOptimisticUpdate: LightClientOptimisticUpdateTopic,
statefeed.Reorg: ChainReorgTopic,
statefeed.BlockProcessed: BlockTopic,
+statefeed.PayloadAttributes: PayloadAttributesTopic,
}
var topicsForStateFeed = topicsForFeed(stateFeedEventTopics)
@@ -418,10 +421,9 @@ func topicForEvent(event *feed.Event) string {
return ChainReorgTopic
case *statefeed.BlockProcessedData:
return BlockTopic
+case payloadattribute.EventData:
+return PayloadAttributesTopic
default:
-if event.Type == statefeed.MissedSlot {
-return PayloadAttributesTopic
-}
return InvalidTopic
}
}
@@ -431,31 +433,17 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
if !topics.requested(eventName) {
return nil, errNotRequested
}
-if eventName == PayloadAttributesTopic {
-return s.currentPayloadAttributes(ctx)
-}
if event == nil || event.Data == nil {
return nil, errors.New("event or event data is nil")
}
switch v := event.Data.(type) {
+case payloadattribute.EventData:
+return s.payloadAttributesReader(ctx, v)
case *ethpb.EventHead:
-// The head event is a special case because, if the client requested the payload attributes topic,
-// we send two event messages in reaction; the head event and the payload attributes.
-headReader := func() io.Reader {
-return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
-}
-// Don't do the expensive attr lookup unless the client requested it.
-if !topics.requested(PayloadAttributesTopic) {
-return headReader, nil
-}
-// Since payload attributes could change before the outbox is written, we need to do a blocking operation to
-// get the current payload attributes right here.
-attrReader, err := s.currentPayloadAttributes(ctx)
-if err != nil {
-return nil, errors.Wrap(err, "could not get payload attributes for head event")
-}
return func() io.Reader {
-return io.MultiReader(headReader(), attrReader())
+return jsonMarshalReader(eventName, structs.HeadEventFromV1(v))
}, nil
case *operation.AggregatedAttReceivedData:
return func() io.Reader {
@@ -463,14 +451,20 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
return jsonMarshalReader(eventName, att)
}, nil
case *operation.UnAggregatedAttReceivedData:
-att, ok := v.Attestation.(*eth.Attestation)
-if !ok {
+switch att := v.Attestation.(type) {
+case *eth.Attestation:
+return func() io.Reader {
+att := structs.AttFromConsensus(att)
+return jsonMarshalReader(eventName, att)
+}, nil
+case *eth.AttestationElectra:
+return func() io.Reader {
+att := structs.AttElectraFromConsensus(att)
+return jsonMarshalReader(eventName, att)
+}, nil
+default:
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .Attestation field of UnAggregatedAttReceivedData", v.Attestation)
}
-return func() io.Reader {
-att := structs.AttFromConsensus(att)
-return jsonMarshalReader(eventName, att)
-}, nil
case *operation.ExitReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.SignedExitFromConsensus(v.Exit))
@@ -495,13 +489,18 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
})
}, nil
case *operation.AttesterSlashingReceivedData:
-slashing, ok := v.AttesterSlashing.(*eth.AttesterSlashing)
-if !ok {
+switch slashing := v.AttesterSlashing.(type) {
+case *eth.AttesterSlashing:
+return func() io.Reader {
+return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
+}, nil
+case *eth.AttesterSlashingElectra:
+return func() io.Reader {
+return jsonMarshalReader(eventName, structs.AttesterSlashingElectraFromConsensus(slashing))
+}, nil
+default:
return nil, errors.Wrapf(errUnhandledEventData, "Unexpected type %T for the .AttesterSlashing field of AttesterSlashingReceivedData", v.AttesterSlashing)
}
-return func() io.Reader {
-return jsonMarshalReader(eventName, structs.AttesterSlashingFromConsensus(slashing))
-}, nil
case *operation.ProposerSlashingReceivedData:
return func() io.Reader {
return jsonMarshalReader(eventName, structs.ProposerSlashingFromConsensus(v.ProposerSlashing))
@@ -556,115 +555,202 @@ func (s *Server) lazyReaderForEvent(ctx context.Context, event *feed.Event, topi
}
}
-// This event stream is intended to be used by builders and relays.
-// Parent fields are based on state at N_{current_slot}, while the rest of fields are based on state of N_{current_slot + 1}
-func (s *Server) currentPayloadAttributes(ctx context.Context) (lazyReader, error) {
-headRoot, err := s.HeadFetcher.HeadRoot(ctx)
-if err != nil {
-return nil, errors.Wrap(err, "could not get head root")
-}
-st, err := s.HeadFetcher.HeadState(ctx)
-if err != nil {
-return nil, errors.Wrap(err, "could not get head state")
-}
-// advance the head state
-headState, err := transition.ProcessSlotsIfPossible(ctx, st, s.ChainInfoFetcher.CurrentSlot()+1)
-if err != nil {
-return nil, errors.Wrap(err, "could not advance head state")
+var errUnsupportedPayloadAttribute = errors.New("cannot compute payload attributes pre-Bellatrix")
+func (s *Server) computePayloadAttributes(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.Attributer, error) {
+v := ev.HeadState.Version()
+if v < version.Bellatrix {
+return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "%s is not supported", version.String(v))
}
-headBlock, err := s.HeadFetcher.HeadBlock(ctx)
-if err != nil {
-return nil, errors.Wrap(err, "could not get head block")
-}
-headPayload, err := headBlock.Block().Body().Execution()
-if err != nil {
-return nil, errors.Wrap(err, "could not get execution payload")
-}
-t, err := slots.ToTime(headState.GenesisTime(), headState.Slot())
+t, err := slots.ToTime(ev.HeadState.GenesisTime(), ev.HeadState.Slot())
if err != nil {
return nil, errors.Wrap(err, "could not get head state slot time")
}
-prevRando, err := helpers.RandaoMix(headState, chaintime.CurrentEpoch(headState))
+timestamp := uint64(t.Unix())
+prevRando, err := helpers.RandaoMix(ev.HeadState, chaintime.CurrentEpoch(ev.HeadState))
if err != nil {
return nil, errors.Wrap(err, "could not get head state randao mix")
}
-proposerIndex, err := helpers.BeaconProposerIndex(ctx, headState)
+proposerIndex, err := helpers.BeaconProposerIndex(ctx, ev.HeadState)
if err != nil {
return nil, errors.Wrap(err, "could not get head state proposer index")
}
-feeRecipient := params.BeaconConfig().DefaultFeeRecipient.Bytes()
+feeRecpt := params.BeaconConfig().DefaultFeeRecipient.Bytes()
tValidator, exists := s.TrackedValidatorsCache.Validator(proposerIndex)
if exists {
-feeRecipient = tValidator.FeeRecipient[:]
+feeRecpt = tValidator.FeeRecipient[:]
}
-var attributes interface{}
-switch headState.Version() {
-case version.Bellatrix:
-attributes = &structs.PayloadAttributesV1{
-Timestamp: fmt.Sprintf("%d", t.Unix()),
-PrevRandao: hexutil.Encode(prevRando),
-SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
-}
-case version.Capella:
-withdrawals, _, err := headState.ExpectedWithdrawals()
-if err != nil {
-return nil, errors.Wrap(err, "could not get head state expected withdrawals")
-}
-attributes = &structs.PayloadAttributesV2{
-Timestamp: fmt.Sprintf("%d", t.Unix()),
-PrevRandao: hexutil.Encode(prevRando),
-SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
-Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
-}
-case version.Deneb, version.Electra:
-withdrawals, _, err := headState.ExpectedWithdrawals()
-if err != nil {
-return nil, errors.Wrap(err, "could not get head state expected withdrawals")
-}
-parentRoot, err := headBlock.Block().HashTreeRoot()
-if err != nil {
-return nil, errors.Wrap(err, "could not get head block root")
-}
-attributes = &structs.PayloadAttributesV3{
-Timestamp: fmt.Sprintf("%d", t.Unix()),
-PrevRandao: hexutil.Encode(prevRando),
-SuggestedFeeRecipient: hexutil.Encode(feeRecipient),
-Withdrawals: structs.WithdrawalsFromConsensus(withdrawals),
-ParentBeaconBlockRoot: hexutil.Encode(parentRoot[:]),
-}
-default:
-return nil, errors.Wrapf(err, "Payload version %s is not supported", version.String(headState.Version()))
-}
-attributesBytes, err := json.Marshal(attributes)
-if err != nil {
-return nil, errors.Wrap(err, "errors marshaling payload attributes to json")
-}
-eventData := structs.PayloadAttributesEventData{
-ProposerIndex: fmt.Sprintf("%d", proposerIndex),
-ProposalSlot: fmt.Sprintf("%d", headState.Slot()),
-ParentBlockNumber: fmt.Sprintf("%d", headPayload.BlockNumber()),
-ParentBlockRoot: hexutil.Encode(headRoot),
-ParentBlockHash: hexutil.Encode(headPayload.BlockHash()),
-PayloadAttributes: attributesBytes,
-}
-eventDataBytes, err := json.Marshal(eventData)
-if err != nil {
-return nil, errors.Wrap(err, "errors marshaling payload attributes event data to json")
-}
-return func() io.Reader {
-return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
-Version: version.String(headState.Version()),
-Data: eventDataBytes,
if v == version.Bellatrix {
return payloadattribute.New(&engine.PayloadAttributes{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
})
}
w, _, err := ev.HeadState.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from head state")
}
if v == version.Capella {
return payloadattribute.New(&engine.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
})
}
pr, err := ev.HeadBlock.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "could not compute head block root")
}
return payloadattribute.New(&engine.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRando,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: w,
ParentBeaconBlockRoot: pr[:],
})
}
type asyncPayloadAttrData struct {
data json.RawMessage
version string
err error
}
func (s *Server) fillEventData(ctx context.Context, ev payloadattribute.EventData) (payloadattribute.EventData, error) {
if ev.HeadBlock == nil || ev.HeadBlock.IsNil() {
hb, err := s.HeadFetcher.HeadBlock(ctx)
if err != nil {
return ev, errors.Wrap(err, "Could not look up head block")
}
root, err := hb.Block().HashTreeRoot()
if err != nil {
return ev, errors.Wrap(err, "Could not compute head block root")
}
if ev.HeadRoot != root {
return ev, errors.Wrap(err, "head root changed before payload attribute event handler execution")
}
ev.HeadBlock = hb
payload, err := hb.Block().Body().Execution()
if err != nil {
return ev, errors.Wrap(err, "Could not get execution payload for head block")
}
ev.ParentBlockHash = payload.BlockHash()
ev.ParentBlockNumber = payload.BlockNumber()
}
attr := ev.Attributer
if attr == nil || attr.IsEmpty() {
attr, err := s.computePayloadAttributes(ctx, ev)
if err != nil {
return ev, errors.Wrap(err, "Could not compute payload attributes")
}
ev.Attributer = attr
}
return ev, nil
}
// This event stream is intended to be used by builders and relays.
// Parent fields are based on the state at N_{current_slot}, while the remaining fields are based on the state at N_{current_slot + 1}.
func (s *Server) payloadAttributesReader(ctx context.Context, ev payloadattribute.EventData) (lazyReader, error) {
ctx, cancel := context.WithTimeout(ctx, payloadAttributeTimeout)
edc := make(chan asyncPayloadAttrData)
go func() {
d := asyncPayloadAttrData{
version: version.String(ev.HeadState.Version()),
}
defer func() {
edc <- d
}()
ev, err := s.fillEventData(ctx, ev)
if err != nil {
d.err = errors.Wrap(err, "Could not fill event data")
return
}
attributesBytes, err := marshalAttributes(ev.Attributer)
if err != nil {
d.err = errors.Wrap(err, "errors marshaling payload attributes to json")
return
}
d.data, d.err = json.Marshal(structs.PayloadAttributesEventData{
ProposerIndex: strconv.FormatUint(uint64(ev.ProposerIndex), 10),
ProposalSlot: strconv.FormatUint(uint64(ev.ProposalSlot), 10),
ParentBlockNumber: strconv.FormatUint(ev.ParentBlockNumber, 10),
ParentBlockRoot: hexutil.Encode(ev.ParentBlockRoot),
ParentBlockHash: hexutil.Encode(ev.ParentBlockHash),
PayloadAttributes: attributesBytes,
})
if d.err != nil {
d.err = errors.Wrap(d.err, "errors marshaling payload attributes event data to json")
}
}()
return func() io.Reader {
defer cancel()
select {
case <-ctx.Done():
log.WithError(ctx.Err()).Warn("Context canceled while waiting for payload attributes event data")
return nil
case ed := <-edc:
if ed.err != nil {
log.WithError(ed.err).Warn("Error while marshaling payload attributes event data")
return nil
}
return jsonMarshalReader(PayloadAttributesTopic, &structs.PayloadAttributesEvent{
Version: ed.version,
Data: ed.data,
})
}
}, nil
}
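payloadAttributesReader decouples the expensive attribute computation from the stream's write path: a goroutine fills the data while the returned lazyReader either receives it or gives up at the deadline. The same pattern in isolation (a self-contained, hedged sketch; all names are hypothetical):

// Sketch: run compute() concurrently and return a getter that waits at most
// until the deadline. The buffered channel lets the worker finish even if
// the getter times out first and never receives.
// Assumed imports: context, time.
func asyncWithTimeout(parent context.Context, d time.Duration, compute func() ([]byte, error)) func() []byte {
	ctx, cancel := context.WithTimeout(parent, d)
	out := make(chan []byte, 1)
	go func() {
		b, err := compute()
		if err != nil {
			b = nil
		}
		out <- b
	}()
	return func() []byte {
		defer cancel()
		select {
		case <-ctx.Done():
			return nil // deadline hit before the data was ready
		case b := <-out:
			return b
		}
	}
}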
func marshalAttributes(attr payloadattribute.Attributer) ([]byte, error) {
v := attr.Version()
if v < version.Bellatrix {
return nil, errors.Wrapf(errUnsupportedPayloadAttribute, "Payload version %s is not supported", version.String(v))
}
timestamp := strconv.FormatUint(attr.Timestamp(), 10)
prevRandao := hexutil.Encode(attr.PrevRandao())
feeRecpt := hexutil.Encode(attr.SuggestedFeeRecipient())
if v == version.Bellatrix {
return json.Marshal(&structs.PayloadAttributesV1{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
})
}
w, err := attr.Withdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get withdrawals from payload attributes event")
}
withdrawals := structs.WithdrawalsFromConsensus(w)
if v == version.Capella {
return json.Marshal(&structs.PayloadAttributesV2{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
})
}
parentRoot, err := attr.ParentBeaconBlockRoot()
if err != nil {
return nil, errors.Wrap(err, "could not get parent beacon block root from payload attributes event")
}
return json.Marshal(&structs.PayloadAttributesV3{
Timestamp: timestamp,
PrevRandao: prevRandao,
SuggestedFeeRecipient: feeRecpt,
Withdrawals: withdrawals,
ParentBeaconBlockRoot: hexutil.Encode(parentRoot),
})
}
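computePayloadAttributes and marshalAttributes encode the same version ladder: each fork's payload attributes strictly extend the previous fork's shape, so both helpers can return early at the first matching fork. Schematically (field names mirror the structs used above; this is a description of the progression, not Prysm code):

// V1 (Bellatrix): timestamp, prev_randao, suggested_fee_recipient
// V2 (Capella):   V1 + withdrawals
// V3 (Deneb+):    V2 + parent_beacon_block_root
func attributeFields(v int) []string {
	fields := []string{"timestamp", "prev_randao", "suggested_fee_recipient"}
	if v == version.Bellatrix {
		return fields // PayloadAttributesV1
	}
	fields = append(fields, "withdrawals") // PayloadAttributesV2
	if v == version.Capella {
		return fields
	}
	return append(fields, "parent_beacon_block_root") // PayloadAttributesV3
}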
func newStreamingResponseController(rw http.ResponseWriter, timeout time.Duration) *streamingResponseWriterController {
rc := http.NewResponseController(rw)
return &streamingResponseWriterController{

View File

@@ -21,6 +21,7 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -489,7 +490,21 @@ func TestStreamEvents_OperationsEvents(t *testing.T) {
require.NoError(t, err)
request := topics.testHttpRequest(testSync.ctx, t)
w := NewStreamingResponseWriterRecorder(testSync.ctx)
-events := []*feed.Event{&feed.Event{Type: statefeed.MissedSlot}}
+events := []*feed.Event{
&feed.Event{
Type: statefeed.PayloadAttributes,
Data: payloadattribute.EventData{
ProposerIndex: 0,
ProposalSlot: 0,
ParentBlockNumber: 0,
ParentBlockRoot: make([]byte, 32),
ParentBlockHash: make([]byte, 32),
HeadState: st,
HeadBlock: b,
HeadRoot: [fieldparams.RootLength]byte{},
},
},
}
go func() {
s.StreamEvents(w, request)

View File

@@ -40,6 +40,7 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation/aggregation/attestations:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
@@ -82,6 +83,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//network/httputil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
@@ -93,6 +95,7 @@ go_test(
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],

View File

@@ -2,11 +2,13 @@ package validator
import (
"bytes"
"cmp"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"slices"
"sort"
"strconv"
"time"
@@ -32,6 +34,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation/aggregation/attestations"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -48,71 +51,165 @@ func (s *Server) GetAggregateAttestation(w http.ResponseWriter, r *http.Request)
if !ok {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return
}
var match ethpbalpha.Att
var err error
match, err = matchingAtt(s.AttestationsPool.AggregatedAttestations(), primitives.Slot(slot), attDataRoot)
agg := s.aggregatedAttestation(w, primitives.Slot(slot), attDataRoot, 0)
if agg == nil {
return
}
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not get matching attestation: "+err.Error(), http.StatusInternalServerError)
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
if match == nil {
atts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
httputil.HandleError(w, "Could not get unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return
}
match, err = matchingAtt(atts, primitives.Slot(slot), attDataRoot)
if err != nil {
httputil.HandleError(w, "Could not get matching attestation: "+err.Error(), http.StatusInternalServerError)
return
}
}
if match == nil {
httputil.HandleError(w, "No matching attestation found", http.StatusNotFound)
return
}
response := &structs.AggregateAttestationResponse{
Data: &structs.Attestation{
AggregationBits: hexutil.Encode(match.GetAggregationBits()),
Data: &structs.AttestationData{
Slot: strconv.FormatUint(uint64(match.GetData().Slot), 10),
CommitteeIndex: strconv.FormatUint(uint64(match.GetData().CommitteeIndex), 10),
BeaconBlockRoot: hexutil.Encode(match.GetData().BeaconBlockRoot),
Source: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(match.GetData().Source.Epoch), 10),
Root: hexutil.Encode(match.GetData().Source.Root),
},
Target: &structs.Checkpoint{
Epoch: strconv.FormatUint(uint64(match.GetData().Target.Epoch), 10),
Root: hexutil.Encode(match.GetData().Target.Root),
},
},
Signature: hexutil.Encode(match.GetSignature()),
}}
httputil.WriteJson(w, response)
httputil.WriteJson(w, &structs.AggregateAttestationResponse{Data: data})
}
-func matchingAtt(atts []ethpbalpha.Att, slot primitives.Slot, attDataRoot []byte) (ethpbalpha.Att, error) {
+// GetAggregateAttestationV2 aggregates all attestations matching the given attestation data root and slot, returning the aggregated result.
+func (s *Server) GetAggregateAttestationV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "validator.GetAggregateAttestationV2")
defer span.End()
_, attDataRoot, ok := shared.HexFromQuery(w, r, "attestation_data_root", fieldparams.RootLength, true)
if !ok {
return
}
_, slot, ok := shared.UintFromQuery(w, r, "slot", true)
if !ok {
return
}
_, index, ok := shared.UintFromQuery(w, r, "committee_index", true)
if !ok {
return
}
agg := s.aggregatedAttestation(w, primitives.Slot(slot), attDataRoot, primitives.CommitteeIndex(index))
if agg == nil {
return
}
resp := &structs.AggregateAttestationResponse{
Version: version.String(agg.Version()),
}
if agg.Version() >= version.Electra {
typedAgg, ok := agg.(*ethpbalpha.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.AttestationElectra{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttElectraFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
resp.Data = data
} else {
typedAgg, ok := agg.(*ethpbalpha.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("Attestation is not of type %T", &ethpbalpha.Attestation{}), http.StatusInternalServerError)
return
}
data, err := json.Marshal(structs.AttFromConsensus(typedAgg))
if err != nil {
httputil.HandleError(w, "Could not marshal attestation: "+err.Error(), http.StatusInternalServerError)
return
}
resp.Data = data
}
headState, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
httputil.HandleError(w, "Could not get head state: "+err.Error(), http.StatusInternalServerError)
return
}
w.Header().Set(api.VersionHeader, version.String(headState.Version()))
httputil.WriteJson(w, resp)
}
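For orientation, here is a minimal client-side sketch of calling this handler. The route follows the standard beacon-API v2 aggregate endpoint; the port and query values are assumptions for illustration, not part of this diff.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Assumed route for GetAggregateAttestationV2; all values are dummies.
	root := "0x" + strings.Repeat("00", 32) // hash tree root of the AttestationData
	url := fmt.Sprintf(
		"http://localhost:3500/eth/v2/validator/aggregate_attestation?attestation_data_root=%s&slot=%d&committee_index=%d",
		root, 123, 1,
	)
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The handler reports the fork version both in the JSON "version" field and
	// in the Eth-Consensus-Version response header.
	fmt.Println(resp.Header.Get("Eth-Consensus-Version"), string(body))
}
```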
func (s *Server) aggregatedAttestation(w http.ResponseWriter, slot primitives.Slot, attDataRoot []byte, index primitives.CommitteeIndex) ethpbalpha.Att {
var err error
match, err := matchingAtts(s.AttestationsPool.AggregatedAttestations(), slot, attDataRoot, index)
if err != nil {
httputil.HandleError(w, "Could not get matching attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
if len(match) > 0 {
// If there are multiple matching aggregated attestations,
// then we return the one with the most aggregation bits.
slices.SortFunc(match, func(a, b ethpbalpha.Att) int {
return cmp.Compare(b.GetAggregationBits().Count(), a.GetAggregationBits().Count())
})
return match[0]
}
atts, err := s.AttestationsPool.UnaggregatedAttestations()
if err != nil {
httputil.HandleError(w, "Could not get unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
match, err = matchingAtts(atts, slot, attDataRoot, index)
if err != nil {
httputil.HandleError(w, "Could not get matching attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
if len(match) == 0 {
httputil.HandleError(w, "No matching attestations found", http.StatusNotFound)
return nil
}
agg, err := attestations.Aggregate(match)
if err != nil {
httputil.HandleError(w, "Could not aggregate unaggregated attestations: "+err.Error(), http.StatusInternalServerError)
return nil
}
// Aggregating unaggregated attestations will in theory always return just one aggregate,
// because each unaggregated attestation has exactly one aggregation bit set and therefore
// never overlaps with the others, so we can take the first one and be done with it.
return agg[0]
}
func matchingAtts(atts []ethpbalpha.Att, slot primitives.Slot, attDataRoot []byte, index primitives.CommitteeIndex) ([]ethpbalpha.Att, error) {
if len(atts) == 0 {
return []ethpbalpha.Att{}, nil
}
postElectra := atts[0].Version() >= version.Electra
result := make([]ethpbalpha.Att, 0)
for _, att := range atts {
-if att.GetData().Slot == slot {
-root, err := att.GetData().HashTreeRoot()
+if att.GetData().Slot != slot {
+continue
+}
+// We ignore the committee index from the request before Electra.
+// This is because before Electra the committee index is part of the attestation data,
+// meaning that comparing the data root is sufficient.
+// Post-Electra the committee index in the data root is always 0, so we need to
+// compare the committee index separately.
+if postElectra {
+ci, err := att.GetCommitteeIndex()
if err != nil {
-return nil, errors.Wrap(err, "could not get attestation data root")
+return nil, err
}
-if bytes.Equal(root[:], attDataRoot) {
-return att, nil
+if ci != index {
+continue
+}
}
+root, err := att.GetData().HashTreeRoot()
+if err != nil {
+return nil, errors.Wrap(err, "could not get attestation data root")
+}
+if bytes.Equal(root[:], attDataRoot) {
+result = append(result, att)
+}
}
-return nil, nil
+return result, nil
}
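The committee-index comment above is the crux of the Electra change. A standalone sketch (illustrative only, using Prysm's types) of why the data root alone distinguishes committees before Electra but not after:

```go
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)

func attData(ci primitives.CommitteeIndex) *ethpb.AttestationData {
	root := make([]byte, 32)
	return &ethpb.AttestationData{
		Slot:            1,
		CommitteeIndex:  ci,
		BeaconBlockRoot: root,
		Source:          &ethpb.Checkpoint{Epoch: 1, Root: root},
		Target:          &ethpb.Checkpoint{Epoch: 1, Root: root},
	}
}

func main() {
	r1, _ := attData(1).HashTreeRoot()
	r2, _ := attData(2).HashTreeRoot()
	// Pre-Electra the committee index is hashed into the data root, so the roots differ.
	fmt.Println(r1 == r2) // false
	// Post-Electra every attestation carries CommitteeIndex 0 and flags its real
	// committee in CommitteeBits, so both roots would collide and the handler has
	// to compare att.GetCommitteeIndex() against the committee_index query parameter.
}
```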
// SubmitContributionAndProofs publishes multiple signed sync committee contributions and proofs.


@@ -14,6 +14,7 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
mockChain "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
@@ -35,6 +36,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/common"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/httputil"
ethpbalpha "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -48,292 +50,340 @@ import (
func TestGetAggregateAttestation(t *testing.T) {
root1 := bytesutil.PadTo([]byte("root1"), 32)
sig1 := bytesutil.PadTo([]byte("sig1"), fieldparams.BLSSignatureLength)
attSlot1 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: root1,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root1,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root1,
},
},
Signature: sig1,
}
root21 := bytesutil.PadTo([]byte("root2_1"), 32)
sig21 := bytesutil.PadTo([]byte("sig2_1"), fieldparams.BLSSignatureLength)
attslot21 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1, 1},
Data: &ethpbalpha.AttestationData{
Slot: 2,
CommitteeIndex: 1,
BeaconBlockRoot: root21,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root21,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root21,
},
},
Signature: sig21,
}
root22 := bytesutil.PadTo([]byte("root2_2"), 32)
sig22 := bytesutil.PadTo([]byte("sig2_2"), fieldparams.BLSSignatureLength)
attslot22 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1, 1, 1},
Data: &ethpbalpha.AttestationData{
Slot: 2,
CommitteeIndex: 1,
BeaconBlockRoot: root22,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root22,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root22,
},
},
Signature: sig22,
}
root31 := bytesutil.PadTo([]byte("root3_1"), 32)
sig31 := bls.NewAggregateSignature().Marshal()
attslot31 := &ethpbalpha.Attestation{
AggregationBits: []byte{1, 0},
Data: &ethpbalpha.AttestationData{
Slot: 3,
CommitteeIndex: 1,
BeaconBlockRoot: root31,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root31,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root31,
},
},
Signature: sig31,
}
root32 := bytesutil.PadTo([]byte("root3_2"), 32)
sig32 := bls.NewAggregateSignature().Marshal()
attslot32 := &ethpbalpha.Attestation{
AggregationBits: []byte{0, 1},
Data: &ethpbalpha.AttestationData{
Slot: 3,
CommitteeIndex: 1,
BeaconBlockRoot: root32,
Source: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root32,
},
Target: &ethpbalpha.Checkpoint{
Epoch: 1,
Root: root32,
},
},
Signature: sig32,
}
root2 := bytesutil.PadTo([]byte("root2"), 32)
key, err := bls.RandKey()
require.NoError(t, err)
sig := key.Sign([]byte("sig"))
pool := attestations.NewPool()
err := pool.SaveAggregatedAttestations([]ethpbalpha.Att{attSlot1, attslot21, attslot22})
assert.NoError(t, err)
err = pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{attslot31, attslot32})
assert.NoError(t, err)
t.Run("V1", func(t *testing.T) {
createAttestation := func(slot primitives.Slot, aggregationBits bitfield.Bitlist, root []byte) *ethpbalpha.Attestation {
return &ethpbalpha.Attestation{
AggregationBits: aggregationBits,
Data: createAttestationData(slot, 1, 1, root),
Signature: sig.Marshal(),
}
}
s := &Server{
AttestationsPool: pool,
}
aggSlot1_Root1_1 := createAttestation(1, bitfield.Bitlist{0b11100}, root1)
aggSlot1_Root1_2 := createAttestation(1, bitfield.Bitlist{0b10111}, root1)
aggSlot1_Root2 := createAttestation(1, bitfield.Bitlist{0b11100}, root2)
aggSlot2 := createAttestation(2, bitfield.Bitlist{0b11100}, root1)
unaggSlot3_Root1_1 := createAttestation(3, bitfield.Bitlist{0b11000}, root1)
unaggSlot3_Root1_2 := createAttestation(3, bitfield.Bitlist{0b10100}, root1)
unaggSlot3_Root2 := createAttestation(3, bitfield.Bitlist{0b11000}, root2)
unaggSlot4 := createAttestation(4, bitfield.Bitlist{0b11000}, root1)
t.Run("matching aggregated att", func(t *testing.T) {
reqRoot, err := attslot22.Data.HashTreeRoot()
compareResult := func(
t *testing.T,
attestation structs.Attestation,
expectedSlot string,
expectedAggregationBits string,
expectedRoot []byte,
expectedSig []byte,
) {
assert.Equal(t, expectedAggregationBits, attestation.AggregationBits, "Unexpected aggregation bits in attestation")
assert.Equal(t, hexutil.Encode(expectedSig), attestation.Signature, "Signature mismatch")
assert.Equal(t, expectedSlot, attestation.Data.Slot, "Slot mismatch in attestation data")
assert.Equal(t, "1", attestation.Data.CommitteeIndex, "Committee index mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.BeaconBlockRoot, "Beacon block root mismatch")
// Source checkpoint checks
require.NotNil(t, attestation.Data.Source, "Source checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Source.Epoch, "Source epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Source.Root, "Source root mismatch")
// Target checkpoint checks
require.NotNil(t, attestation.Data.Target, "Target checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Target.Epoch, "Target epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Target.Root, "Target root mismatch")
}
pool := attestations.NewPool()
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{unaggSlot3_Root1_1, unaggSlot3_Root1_2, unaggSlot3_Root2, unaggSlot4}), "Failed to save unaggregated attestations")
unagg, err := pool.UnaggregatedAttestations()
require.NoError(t, err)
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
require.Equal(t, 4, len(unagg), "Expected 4 unaggregated attestations")
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
require.Equal(t, 4, len(agg), "Expected 4 aggregated attestations")
s := &Server{
AttestationsPool: pool,
}
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.AggregateAttestationResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.DeepEqual(t, "0x00010101", resp.Data.AggregationBits)
assert.DeepEqual(t, hexutil.Encode(sig22), resp.Data.Signature)
assert.Equal(t, "2", resp.Data.Data.Slot)
assert.Equal(t, "1", resp.Data.Data.CommitteeIndex)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.BeaconBlockRoot)
require.NotNil(t, resp.Data.Data.Source)
assert.Equal(t, "1", resp.Data.Data.Source.Epoch)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.Source.Root)
require.NotNil(t, resp.Data.Data.Target)
assert.Equal(t, "1", resp.Data.Data.Target.Epoch)
assert.DeepEqual(t, hexutil.Encode(root22), resp.Data.Data.Target.Root)
t.Run("non-matching attestation request", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code, "Expected HTTP status NotFound for non-matching request")
})
t.Run("1 matching aggregated attestation", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal())
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal())
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.Attestation
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal())
})
})
t.Run("matching unaggregated att", func(t *testing.T) {
reqRoot, err := attslot32.Data.HashTreeRoot()
t.Run("V2", func(t *testing.T) {
createAttestation := func(slot primitives.Slot, aggregationBits bitfield.Bitlist, root []byte, bits uint64) *ethpbalpha.AttestationElectra {
committeeBits := bitfield.NewBitvector64()
committeeBits.SetBitAt(bits, true)
return &ethpbalpha.AttestationElectra{
CommitteeBits: committeeBits,
AggregationBits: aggregationBits,
Data: createAttestationData(slot, 0, 1, root),
Signature: sig.Marshal(),
}
}
aggSlot1_Root1_1 := createAttestation(1, bitfield.Bitlist{0b11100}, root1, 1)
aggSlot1_Root1_2 := createAttestation(1, bitfield.Bitlist{0b10111}, root1, 1)
aggSlot1_Root2 := createAttestation(1, bitfield.Bitlist{0b11100}, root2, 1)
aggSlot2 := createAttestation(2, bitfield.Bitlist{0b11100}, root1, 1)
unaggSlot3_Root1_1 := createAttestation(3, bitfield.Bitlist{0b11000}, root1, 1)
unaggSlot3_Root1_2 := createAttestation(3, bitfield.Bitlist{0b10100}, root1, 1)
unaggSlot3_Root2 := createAttestation(3, bitfield.Bitlist{0b11000}, root2, 1)
unaggSlot4 := createAttestation(4, bitfield.Bitlist{0b11000}, root1, 1)
compareResult := func(
t *testing.T,
attestation structs.AttestationElectra,
expectedSlot string,
expectedAggregationBits string,
expectedRoot []byte,
expectedSig []byte,
expectedCommitteeBits string,
) {
assert.Equal(t, expectedAggregationBits, attestation.AggregationBits, "Unexpected aggregation bits in attestation")
assert.Equal(t, expectedCommitteeBits, attestation.CommitteeBits)
assert.Equal(t, hexutil.Encode(expectedSig), attestation.Signature, "Signature mismatch")
assert.Equal(t, expectedSlot, attestation.Data.Slot, "Slot mismatch in attestation data")
assert.Equal(t, "0", attestation.Data.CommitteeIndex, "Committee index mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.BeaconBlockRoot, "Beacon block root mismatch")
// Source checkpoint checks
require.NotNil(t, attestation.Data.Source, "Source checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Source.Epoch, "Source epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Source.Root, "Source root mismatch")
// Target checkpoint checks
require.NotNil(t, attestation.Data.Target, "Target checkpoint should not be nil")
assert.Equal(t, "1", attestation.Data.Target.Epoch, "Target epoch mismatch")
assert.Equal(t, hexutil.Encode(expectedRoot), attestation.Data.Target.Root, "Target root mismatch")
}
pool := attestations.NewPool()
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpbalpha.Att{unaggSlot3_Root1_1, unaggSlot3_Root1_2, unaggSlot3_Root2, unaggSlot4}), "Failed to save unaggregated attestations")
unagg, err := pool.UnaggregatedAttestations()
require.NoError(t, err)
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
require.Equal(t, 4, len(unagg), "Expected 4 unaggregated attestations")
require.NoError(t, pool.SaveAggregatedAttestations([]ethpbalpha.Att{aggSlot1_Root1_1, aggSlot1_Root1_2, aggSlot1_Root2, aggSlot2}), "Failed to save aggregated attestations")
agg := pool.AggregatedAttestations()
require.Equal(t, 4, len(agg), "Expected 4 aggregated attestations")
bs, err := util.NewBeaconState()
require.NoError(t, err)
s := &Server{
ChainInfoFetcher: &mockChain.ChainService{State: bs},
AttestationsPool: pool,
}
t.Run("non-matching attestation request", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
resp := &structs.AggregateAttestationResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
require.NotNil(t, resp)
require.NotNil(t, resp.Data)
assert.DeepEqual(t, "0x0001", resp.Data.AggregationBits)
assert.DeepEqual(t, hexutil.Encode(sig32), resp.Data.Signature)
assert.Equal(t, "3", resp.Data.Data.Slot)
assert.Equal(t, "1", resp.Data.Data.CommitteeIndex)
assert.DeepEqual(t, hexutil.Encode(root32), resp.Data.Data.BeaconBlockRoot)
require.NotNil(t, resp.Data.Data.Source)
assert.Equal(t, "1", resp.Data.Data.Source.Epoch)
assert.DeepEqual(t, hexutil.Encode(root32), resp.Data.Data.Source.Root)
require.NotNil(t, resp.Data.Data.Target)
assert.Equal(t, "1", resp.Data.Data.Target.Epoch)
assert.DeepEqual(t, hexutil.Encode(root32), resp.Data.Data.Target.Root)
})
t.Run("no matching attestation", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestationV2(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code, "Expected HTTP status NotFound for non-matching request")
})
t.Run("1 matching aggregated attestation", func(t *testing.T) {
reqRoot, err := aggSlot2.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=2" + "&committee_index=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusNotFound, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusNotFound, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No matching attestation found"))
})
t.Run("no attestation_data_root provided", func(t *testing.T) {
url := "http://example.com?slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "attestation_data_root is required"))
})
t.Run("invalid attestation_data_root provided", func(t *testing.T) {
url := "http://example.com?attestation_data_root=foo&slot=2"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "attestation_data_root is invalid"))
})
t.Run("no slot provided", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "slot is required"))
})
t.Run("invalid slot provided", func(t *testing.T) {
attDataRoot := hexutil.Encode(bytesutil.PadTo([]byte("foo"), 32))
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=foo"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
compareResult(t, attestation, "2", hexutil.Encode(aggSlot2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot2.CommitteeBits))
})
t.Run("multiple matching aggregated attestations - return the one with most bits", func(t *testing.T) {
reqRoot, err := aggSlot1_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1" + "&committee_index=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestation(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &httputil.DefaultJsonError{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "slot is invalid"))
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "1", hexutil.Encode(aggSlot1_Root1_2.AggregationBits), root1, sig.Marshal(), hexutil.Encode(aggSlot1_Root1_1.CommitteeBits))
})
t.Run("1 matching unaggregated attestation", func(t *testing.T) {
reqRoot, err := unaggSlot4.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=4" + "&committee_index=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
compareResult(t, attestation, "4", hexutil.Encode(unaggSlot4.AggregationBits), root1, sig.Marshal(), hexutil.Encode(unaggSlot4.CommitteeBits))
})
t.Run("multiple matching unaggregated attestations - their aggregate is returned", func(t *testing.T) {
reqRoot, err := unaggSlot3_Root1_1.Data.HashTreeRoot()
require.NoError(t, err, "Failed to generate attestation data hash tree root")
attDataRoot := hexutil.Encode(reqRoot[:])
url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=3" + "&committee_index=1"
request := httptest.NewRequest(http.MethodGet, url, nil)
writer := httptest.NewRecorder()
s.GetAggregateAttestationV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code, "Expected HTTP status OK")
var resp structs.AggregateAttestationResponse
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), &resp), "Failed to unmarshal response")
require.NotNil(t, resp.Data, "Response data should not be nil")
var attestation structs.AttestationElectra
require.NoError(t, json.Unmarshal(resp.Data, &attestation), "Failed to unmarshal attestation data")
sig1, err := bls.SignatureFromBytes(unaggSlot3_Root1_1.Signature)
require.NoError(t, err)
sig2, err := bls.SignatureFromBytes(unaggSlot3_Root1_2.Signature)
require.NoError(t, err)
expectedSig := bls.AggregateSignatures([]common.Signature{sig1, sig2})
compareResult(t, attestation, "3", hexutil.Encode(bitfield.Bitlist{0b11100}), root1, expectedSig.Marshal(), hexutil.Encode(unaggSlot3_Root1_1.CommitteeBits))
})
})
}
-func TestGetAggregateAttestation_SameSlotAndRoot_ReturnMostAggregationBits(t *testing.T) {
-root := bytesutil.PadTo([]byte("root"), 32)
-sig := bytesutil.PadTo([]byte("sig"), fieldparams.BLSSignatureLength)
-att1 := &ethpbalpha.Attestation{
-AggregationBits: []byte{3, 0, 0, 1},
-Data: &ethpbalpha.AttestationData{
-Slot: 1,
-CommitteeIndex: 1,
-BeaconBlockRoot: root,
-Source: &ethpbalpha.Checkpoint{
-Epoch: 1,
-Root: root,
-},
-Target: &ethpbalpha.Checkpoint{
-Epoch: 1,
-Root: root,
-},
+func createAttestationData(
+slot primitives.Slot,
+committeeIndex primitives.CommitteeIndex,
+epoch primitives.Epoch,
+root []byte,
+) *ethpbalpha.AttestationData {
+return &ethpbalpha.AttestationData{
+Slot: slot,
+CommitteeIndex: committeeIndex,
+BeaconBlockRoot: root,
+Source: &ethpbalpha.Checkpoint{
+Epoch: epoch,
+Root: root,
+},
-Signature: sig,
-}
-att2 := &ethpbalpha.Attestation{
-AggregationBits: []byte{0, 3, 0, 1},
-Data: &ethpbalpha.AttestationData{
-Slot: 1,
-CommitteeIndex: 1,
-BeaconBlockRoot: root,
-Source: &ethpbalpha.Checkpoint{
-Epoch: 1,
-Root: root,
-},
-Target: &ethpbalpha.Checkpoint{
-Epoch: 1,
-Root: root,
-},
+Target: &ethpbalpha.Checkpoint{
+Epoch: epoch,
+Root: root,
+},
-Signature: sig,
}
-pool := attestations.NewPool()
-err := pool.SaveAggregatedAttestations([]ethpbalpha.Att{att1, att2})
-assert.NoError(t, err)
-s := &Server{
-AttestationsPool: pool,
-}
-reqRoot, err := att1.Data.HashTreeRoot()
-require.NoError(t, err)
-attDataRoot := hexutil.Encode(reqRoot[:])
-url := "http://example.com?attestation_data_root=" + attDataRoot + "&slot=1"
-request := httptest.NewRequest(http.MethodGet, url, nil)
-writer := httptest.NewRecorder()
-writer.Body = &bytes.Buffer{}
-s.GetAggregateAttestation(writer, request)
-assert.Equal(t, http.StatusOK, writer.Code)
-resp := &structs.AggregateAttestationResponse{}
-require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
-require.NotNil(t, resp)
-assert.DeepEqual(t, "0x03000001", resp.Data.AggregationBits)
}
func TestSubmitContributionAndProofs(t *testing.T) {


@@ -212,7 +212,9 @@ go_test(
embed = [":go_default_library"],
eth_network = "minimal",
tags = ["minimal"],
deps = common_deps,
deps = common_deps + [
"//beacon-chain/operations/attestations/mock:go_default_library",
],
)
go_test(


@@ -339,7 +339,7 @@ func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.Signe
sidecars, err := unblindBlobsSidecars(copiedBlock, bundle)
if err != nil {
-return nil, nil, errors.Wrap(err, "unblind sidecars failed")
+return nil, nil, errors.Wrap(err, "unblind blobs sidecars: commitment value doesn't match block")
}
return copiedBlock, sidecars, nil


@@ -91,14 +91,7 @@ func (vs *Server) packAttestations(ctx context.Context, latestState state.Beacon
var attsForInclusion proposerAtts
if postElectra {
-// TODO: hack for Electra devnet-1, take only one aggregate per ID
-// (which essentially means one aggregate for an attestation_data+committee combination)
-topAggregates := make([]ethpb.Att, 0)
-for _, v := range attsById {
-topAggregates = append(topAggregates, v[0])
-}
-attsForInclusion, err = computeOnChainAggregate(topAggregates)
+attsForInclusion, err = onChainAggregates(attsById)
if err != nil {
return nil, err
}
@@ -113,14 +106,68 @@ func (vs *Server) packAttestations(ctx context.Context, latestState state.Beacon
if err != nil {
return nil, err
}
-sorted, err := deduped.sort()
-if err != nil {
-return nil, err
+var sorted proposerAtts
+if postElectra {
+sorted, err = deduped.sortOnChainAggregates()
+if err != nil {
+return nil, err
+}
+} else {
+sorted, err = deduped.sort()
+if err != nil {
+return nil, err
+}
}
atts = sorted.limitToMaxAttestations()
return vs.filterAttestationBySignature(ctx, atts, latestState)
}
func onChainAggregates(attsById map[attestation.Id][]ethpb.Att) (proposerAtts, error) {
var result proposerAtts
var err error
// When constructing on-chain aggregates, we want to combine the most profitable
// aggregate for each ID, then the second most profitable, and so on and so forth.
// Because of this we sort attestations at the beginning.
for id, as := range attsById {
attsById[id], err = proposerAtts(as).sort()
if err != nil {
return nil, err
}
}
// We construct the first on-chain aggregate by taking the first aggregate for each ID.
// We construct the second on-chain aggregate by taking the second aggregate for each ID.
// We continue doing this until we run out of aggregates.
idx := 0
for {
topAggregates := make([]ethpb.Att, 0, len(attsById))
for _, as := range attsById {
// In case there are no more aggregates for an ID, we skip that ID.
if len(as) > idx {
topAggregates = append(topAggregates, as[idx])
}
}
// Once there are no more aggregates for any ID, we are done.
if len(topAggregates) == 0 {
break
}
onChainAggs, err := computeOnChainAggregate(topAggregates)
if err != nil {
return nil, err
}
result = append(result, onChainAggs...)
idx++
}
return result, nil
}
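As a toy walk-through of the round-robin construction above (plain strings stand in for aggregates; the names are invented for this sketch):

```go
package main

import "fmt"

func main() {
	// Aggregates per attestation ID, already sorted most profitable first.
	byID := map[string][]string{
		"data0+committee0": {"a1", "a2", "a3"},
		"data1+committee1": {"b1", "b2"},
	}
	// Round idx groups the idx-th best aggregate of every ID; in the real code
	// each round is handed to computeOnChainAggregate.
	for idx := 0; ; idx++ {
		round := make([]string, 0, len(byID))
		for _, as := range byID {
			if idx < len(as) {
				round = append(round, as[idx])
			}
		}
		if len(round) == 0 {
			break
		}
		fmt.Println("on-chain aggregate", idx, "built from", round)
	}
	// Map iteration order aside: round 0 -> [a1 b1], round 1 -> [a2 b2], round 2 -> [a3]
}
```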
// filter separates attestation list into two groups: valid and invalid attestations.
// The first group passes all the required checks for an attestation to be considered for proposing,
// while attestations from the second group should be deleted.
@@ -223,6 +270,14 @@ func (a proposerAtts) sort() (proposerAtts, error) {
return a.sortBySlotAndCommittee()
}
func (a proposerAtts) sortOnChainAggregates() (proposerAtts, error) {
if len(a) < 2 {
return a, nil
}
return a.sortByProfitabilityUsingMaxCover()
}
// Separate attestations by slot, as slot number takes higher precedence when sorting.
// Also separate by committee index because maxcover will prefer attestations for the same
// committee with disjoint bits over attestations for different committees with overlapping
@@ -231,7 +286,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
type slotAtts struct {
candidates map[primitives.CommitteeIndex]proposerAtts
selected map[primitives.CommitteeIndex]proposerAtts
-leftover map[primitives.CommitteeIndex]proposerAtts
}
var slots []primitives.Slot
@@ -250,7 +304,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
var err error
for _, sa := range attsBySlot {
sa.selected = make(map[primitives.CommitteeIndex]proposerAtts)
-sa.leftover = make(map[primitives.CommitteeIndex]proposerAtts)
for ci, committeeAtts := range sa.candidates {
sa.selected[ci], err = committeeAtts.sortByProfitabilityUsingMaxCover_committeeAwarePacking()
if err != nil {
@@ -266,9 +319,6 @@ func (a proposerAtts) sortBySlotAndCommittee() (proposerAtts, error) {
for _, slot := range slots {
sortedAtts = append(sortedAtts, sortSlotAttestations(attsBySlot[slot].selected)...)
}
-for _, slot := range slots {
-sortedAtts = append(sortedAtts, sortSlotAttestations(attsBySlot[slot].leftover)...)
-}
return sortedAtts, nil
}
@@ -287,15 +337,11 @@ func (a proposerAtts) sortByProfitabilityUsingMaxCover_committeeAwarePacking() (
return nil, err
}
}
// Add selected candidates on top, those that are not selected - append at bottom.
selectedKeys, _, err := aggregation.MaxCover(candidates, len(candidates), true /* allowOverlaps */)
if err != nil {
log.WithError(err).Debug("MaxCover aggregation failed")
return a, nil
}
// Pick selected attestations first, leftover attestations will be appended at the end.
// Both lists will be sorted by number of bits set.
selected := make(proposerAtts, selectedKeys.Count())
for i, key := range selectedKeys.BitIndices() {
selected[i] = a[key]


@@ -13,6 +13,9 @@ import (
// computeOnChainAggregate constructs a final aggregate from a list of network aggregates with equal attestation data.
// It assumes that each network aggregate has exactly one committee bit set.
//
// Our implementation allows passing aggregates for different attestation data, in which case the function
// returns one final aggregate per attestation data.
//
// Spec definition:
//
// def compute_on_chain_aggregate(network_aggregates: Sequence[Attestation]) -> Attestation:


@@ -3,16 +3,21 @@ package validator
import (
"bytes"
"context"
"math/rand"
"sort"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
chainMock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/mock"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/blst"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
@@ -680,6 +685,212 @@ func Test_packAttestations(t *testing.T) {
})
}
func Test_packAttestations_ElectraOnChainAggregates(t *testing.T) {
ctx := context.Background()
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.ElectraForkEpoch = 1
params.OverrideBeaconConfig(cfg)
key, err := blst.RandKey()
require.NoError(t, err)
sig := key.Sign([]byte{'X'})
cb0 := primitives.NewAttestationCommitteeBits()
cb0.SetBitAt(0, true)
cb1 := primitives.NewAttestationCommitteeBits()
cb1.SetBitAt(1, true)
data0 := util.HydrateAttestationData(&ethpb.AttestationData{BeaconBlockRoot: bytesutil.PadTo([]byte{'0'}, 32)})
data1 := util.HydrateAttestationData(&ethpb.AttestationData{BeaconBlockRoot: bytesutil.PadTo([]byte{'1'}, 32)})
// Glossary:
// - Single Aggregate: aggregate with exactly one committee bit set, from which an On-Chain Aggregate is constructed
// - On-Chain Aggregate: final aggregate packed into a block
//
// We construct the following number of single aggregates:
// - data_root_0 and committee_index_0: 3 single aggregates
// - data_root_0 and committee_index_1: 2 single aggregates
// - data_root_1 and committee_index_0: 1 single aggregate
// - data_root_1 and committee_index_1: 3 single aggregates
//
// Because the function tries to aggregate attestations, we have to create attestations which are not aggregatable
// and are not redundant when using MaxCover.
// The function should also sort attestations by ID before computing the On-Chain Aggregate, so we want unsorted aggregation bits
// to test the sorting part.
//
// The result should be the following six on-chain aggregates:
// - for data_root_0 combining the most profitable aggregate for each committee
// - for data_root_0 combining the second most profitable aggregate for each committee
// - for data_root_0 constructed from the single aggregate at index 2 for committee_index_0
// - for data_root_1 combining the most profitable aggregate for each committee
// - for data_root_1 constructed from the single aggregate at index 1 for committee_index_1
// - for data_root_1 constructed from the single aggregate at index 2 for committee_index_1
d0_c0_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1000011},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c0_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100101},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c0_a3 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111000},
CommitteeBits: cb0,
Data: data0,
Signature: sig.Marshal(),
}
d0_c1_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111100},
CommitteeBits: cb1,
Data: data0,
Signature: sig.Marshal(),
}
d0_c1_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1001111},
CommitteeBits: cb1,
Data: data0,
Signature: sig.Marshal(),
}
d1_c0_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111111},
CommitteeBits: cb0,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a1 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1000011},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a2 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100101},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
d1_c1_a3 := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1111000},
CommitteeBits: cb1,
Data: data1,
Signature: sig.Marshal(),
}
pool := &mock.PoolMock{}
require.NoError(t, pool.SaveAggregatedAttestations([]ethpb.Att{d0_c0_a1, d0_c0_a2, d0_c0_a3, d0_c1_a1, d0_c1_a2, d1_c0_a1, d1_c1_a1, d1_c1_a2, d1_c1_a3}))
slot := primitives.Slot(1)
s := &Server{AttPool: pool, HeadFetcher: &chainMock.ChainService{}, TimeFetcher: &chainMock.ChainService{Slot: &slot}}
// We need the correct number of validators so that there are at least 2 committees per slot
// and each committee has exactly 6 validators (this is because we have 6 aggregation bits).
st, _ := util.DeterministicGenesisStateElectra(t, 192)
require.NoError(t, st.SetSlot(params.BeaconConfig().SlotsPerEpoch+1))
atts, err := s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
require.Equal(t, 6, len(atts))
assert.Equal(t, true,
atts[0].GetAggregationBits().Count() >= atts[1].GetAggregationBits().Count() &&
atts[1].GetAggregationBits().Count() >= atts[2].GetAggregationBits().Count() &&
atts[2].GetAggregationBits().Count() >= atts[3].GetAggregationBits().Count() &&
atts[3].GetAggregationBits().Count() >= atts[4].GetAggregationBits().Count() &&
atts[4].GetAggregationBits().Count() >= atts[5].GetAggregationBits().Count(),
"on-chain aggregates are not sorted by aggregation bit count",
)
t.Run("slot takes precedence", func(t *testing.T) {
moreRecentAtt := &ethpb.AttestationElectra{
AggregationBits: bitfield.Bitlist{0b1100000}, // we set only one bit for committee_index_0
CommitteeBits: cb1,
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: 1, BeaconBlockRoot: bytesutil.PadTo([]byte{'0'}, 32)}),
Signature: sig.Marshal(),
}
require.NoError(t, pool.SaveUnaggregatedAttestations([]ethpb.Att{moreRecentAtt}))
atts, err = s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch)
require.NoError(t, err)
require.Equal(t, 7, len(atts))
assert.Equal(t, true, atts[0].GetData().Slot == 1)
})
}
func Benchmark_packAttestations_Electra(b *testing.B) {
ctx := context.Background()
params.SetupTestConfigCleanup(b)
cfg := params.MainnetConfig().Copy()
cfg.ElectraForkEpoch = 1
params.OverrideBeaconConfig(cfg)
valCount := uint64(1048576)
committeeCount := helpers.SlotCommitteeCount(valCount)
valsPerCommittee := valCount / committeeCount / uint64(params.BeaconConfig().SlotsPerEpoch)
st, _ := util.DeterministicGenesisStateElectra(b, valCount)
key, err := blst.RandKey()
require.NoError(b, err)
sig := key.Sign([]byte{'X'})
r := rand.New(rand.NewSource(123))
var atts []ethpb.Att
for c := uint64(0); c < committeeCount; c++ {
for a := uint64(0); a < params.BeaconConfig().TargetAggregatorsPerCommittee; a++ {
cb := primitives.NewAttestationCommitteeBits()
cb.SetBitAt(c, true)
var att *ethpb.AttestationElectra
// Last two aggregators send aggregates for some random block root with only a few bits set.
if a >= params.BeaconConfig().TargetAggregatorsPerCommittee-2 {
root := bytesutil.PadTo([]byte("root_"+strconv.Itoa(r.Intn(100))), 32)
att = &ethpb.AttestationElectra{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch - 1, BeaconBlockRoot: root}),
AggregationBits: bitfield.NewBitlist(valsPerCommittee),
CommitteeBits: cb,
Signature: sig.Marshal(),
}
for bit := uint64(0); bit < valsPerCommittee; bit++ {
att.AggregationBits.SetBitAt(bit, r.Intn(100) < 2) // 2% that the bit is set
}
} else {
att = &ethpb.AttestationElectra{
Data: util.HydrateAttestationData(&ethpb.AttestationData{Slot: params.BeaconConfig().SlotsPerEpoch - 1, BeaconBlockRoot: bytesutil.PadTo([]byte("root"), 32)}),
AggregationBits: bitfield.NewBitlist(valsPerCommittee),
CommitteeBits: cb,
Signature: sig.Marshal(),
}
for bit := uint64(0); bit < valsPerCommittee; bit++ {
att.AggregationBits.SetBitAt(bit, r.Intn(100) < 98) // 98% that the bit is set
}
}
atts = append(atts, att)
}
}
pool := &mock.PoolMock{}
require.NoError(b, pool.SaveAggregatedAttestations(atts))
slot := primitives.Slot(1)
s := &Server{AttPool: pool, HeadFetcher: &chainMock.ChainService{}, TimeFetcher: &chainMock.ChainService{Slot: &slot}}
require.NoError(b, st.SetSlot(params.BeaconConfig().SlotsPerEpoch))
b.ResetTimer()
for i := 0; i < b.N; i++ {
_, err = s.packAttestations(ctx, st, params.BeaconConfig().SlotsPerEpoch+1)
require.NoError(b, err)
}
}
func Test_limitToMaxAttestations(t *testing.T) {
t.Run("Phase 0", func(t *testing.T) {
atts := make([]ethpb.Att, params.BeaconConfig().MaxAttestations+1)


@@ -54,7 +54,7 @@ func (vs *Server) eth1DataMajorityVote(ctx context.Context, beaconState state.Be
// by ETH1_FOLLOW_DISTANCE. The head state should maintain the same ETH1Data until this condition has passed, so
// trust the existing head for the right eth1 vote until we can get a meaningful value from the deposit contract.
if latestValidTime < genesisTime+followDistanceSeconds {
log.WithField("genesisTime", genesisTime).WithField("latestValidTime", latestValidTime).Warn("voting period before genesis + follow distance, using eth1data from head")
log.WithField("genesisTime", genesisTime).WithField("latestValidTime", latestValidTime).Warn("Voting period before genesis + follow distance, using eth1data from head")
return vs.HeadFetcher.HeadETH1Data(), nil
}
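For a feel of the window this guard covers, a back-of-the-envelope computation with mainnet defaults (2048 eth1 blocks at 14 s each; the constants are quoted here by hand, not read from Prysm's config):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const eth1FollowDistance = 2048 // ETH1_FOLLOW_DISTANCE, mainnet default
	const secondsPerETH1Block = 14  // SECONDS_PER_ETH1_BLOCK, mainnet default

	followDistance := time.Duration(eth1FollowDistance*secondsPerETH1Block) * time.Second
	// Voting periods earlier than genesis + ~8h fall back to the head's eth1data.
	fmt.Println(followDistance) // 7h57m52s
}
```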


@@ -84,7 +84,6 @@ func (vs *Server) getLocalPayloadFromEngine(
}
setFeeRecipientIfBurnAddress(&val)
var err error
if ok && payloadId != [8]byte{} {
// Payload ID is cache hit. Return the cached payload ID.
var pid primitives.PayloadID
@@ -102,7 +101,7 @@ func (vs *Server) getLocalPayloadFromEngine(
return nil, errors.Wrap(err, "could not get cached payload from execution client")
}
}
log.WithFields(logFields).Debug("payload ID cache miss")
log.WithFields(logFields).Debug("Payload ID cache miss")
parentHash, err := vs.getParentBlockHash(ctx, st, slot)
switch {
case errors.Is(err, errActivationNotReached) || errors.Is(err, errNoTerminalBlockHash):
@@ -191,7 +190,7 @@ func (vs *Server) getLocalPayloadFromEngine(
}
warnIfFeeRecipientDiffers(val.FeeRecipient[:], res.ExecutionData.FeeRecipient())
log.WithField("value", res.Bid).Debug("received execution payload from local engine")
log.WithField("value", res.Bid).Debug("Received execution payload from local engine")
return res, nil
}


@@ -912,7 +912,7 @@ func TestProposer_ProposeBlock_OK(t *testing.T) {
return &ethpb.GenericSignedBeaconBlock{Block: blk}
},
useBuilder: true,
err: "unblind sidecars failed: commitment value doesn't match block",
err: "unblind blobs sidecars: commitment value doesn't match block",
},
{
name: "electra block no blob",


@@ -287,6 +287,9 @@ func (vs *Server) validatorStatus(
Status: ethpb.ValidatorStatus_UNKNOWN_STATUS,
ActivationEpoch: params.BeaconConfig().FarFutureEpoch,
}
if len(pubKey) == 0 {
return resp, nonExistentIndex
}
vStatus, idx, err := statusForPubKey(headState, pubKey)
if err != nil && !errors.Is(err, errPubkeyDoesNotExist) {
tracing.AnnotateError(span, err)


@@ -97,10 +97,7 @@ func (s *Service) filterAttestations(
// detection (except for the genesis epoch).
func validateAttestationIntegrity(att ethpb.IndexedAtt) bool {
// If an attestation is malformed, we drop it.
-if att == nil ||
-att.GetData() == nil ||
-att.GetData().Source == nil ||
-att.GetData().Target == nil {
+if att == nil || att.IsNil() || att.GetData().Source == nil || att.GetData().Target == nil {
return false
}


@@ -316,7 +316,7 @@ type WriteOnlySyncCommittee interface {
type WriteOnlyWithdrawals interface {
AppendPendingPartialWithdrawal(ppw *ethpb.PendingPartialWithdrawal) error
-DequeuePartialWithdrawals(num uint64) error
+DequeuePendingPartialWithdrawals(num uint64) error
SetNextWithdrawalIndex(i uint64) error
SetNextWithdrawalValidatorIndex(i primitives.ValidatorIndex) error
}


@@ -29,7 +29,6 @@ type BeaconState struct {
stateRoots customtypes.StateRoots
stateRootsMultiValue *MultiValueStateRoots
historicalRoots customtypes.HistoricalRoots
-historicalSummaries []*ethpb.HistoricalSummary
eth1Data *ethpb.Eth1Data
eth1DataVotes []*ethpb.Eth1Data
eth1DepositIndex uint64
@@ -55,8 +54,11 @@ type BeaconState struct {
latestExecutionPayloadHeader *enginev1.ExecutionPayloadHeader
latestExecutionPayloadHeaderCapella *enginev1.ExecutionPayloadHeaderCapella
latestExecutionPayloadHeaderDeneb *enginev1.ExecutionPayloadHeaderDeneb
-nextWithdrawalIndex uint64
-nextWithdrawalValidatorIndex primitives.ValidatorIndex
+// Capella fields
+nextWithdrawalIndex uint64
+nextWithdrawalValidatorIndex primitives.ValidatorIndex
+historicalSummaries []*ethpb.HistoricalSummary
// Electra fields
depositRequestsStartIndex uint64
@@ -90,7 +92,6 @@ type beaconStateMarshalable struct {
BlockRoots customtypes.BlockRoots `json:"block_roots" yaml:"block_roots"`
StateRoots customtypes.StateRoots `json:"state_roots" yaml:"state_roots"`
HistoricalRoots customtypes.HistoricalRoots `json:"historical_roots" yaml:"historical_roots"`
-HistoricalSummaries []*ethpb.HistoricalSummary `json:"historical_summaries" yaml:"historical_summaries"`
Eth1Data *ethpb.Eth1Data `json:"eth_1_data" yaml:"eth_1_data"`
Eth1DataVotes []*ethpb.Eth1Data `json:"eth_1_data_votes" yaml:"eth_1_data_votes"`
Eth1DepositIndex uint64 `json:"eth_1_deposit_index" yaml:"eth_1_deposit_index"`
@@ -114,6 +115,7 @@ type beaconStateMarshalable struct {
LatestExecutionPayloadHeaderDeneb *enginev1.ExecutionPayloadHeaderDeneb `json:"latest_execution_payload_header_deneb" yaml:"latest_execution_payload_header_deneb"`
NextWithdrawalIndex uint64 `json:"next_withdrawal_index" yaml:"next_withdrawal_index"`
NextWithdrawalValidatorIndex primitives.ValidatorIndex `json:"next_withdrawal_validator_index" yaml:"next_withdrawal_validator_index"`
+HistoricalSummaries []*ethpb.HistoricalSummary `json:"historical_summaries" yaml:"historical_summaries"`
DepositRequestsStartIndex uint64 `json:"deposit_requests_start_index" yaml:"deposit_requests_start_index"`
DepositBalanceToConsume primitives.Gwei `json:"deposit_balance_to_consume" yaml:"deposit_balance_to_consume"`
ExitBalanceToConsume primitives.Gwei `json:"exit_balance_to_consume" yaml:"exit_balance_to_consume"`
@@ -159,7 +161,6 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
BlockRoots: bRoots,
StateRoots: sRoots,
HistoricalRoots: b.historicalRoots,
-HistoricalSummaries: b.historicalSummaries,
Eth1Data: b.eth1Data,
Eth1DataVotes: b.eth1DataVotes,
Eth1DepositIndex: b.eth1DepositIndex,
@@ -183,6 +184,7 @@ func (b *BeaconState) MarshalJSON() ([]byte, error) {
LatestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb,
NextWithdrawalIndex: b.nextWithdrawalIndex,
NextWithdrawalValidatorIndex: b.nextWithdrawalValidatorIndex,
+HistoricalSummaries: b.historicalSummaries,
DepositRequestsStartIndex: b.depositRequestsStartIndex,
DepositBalanceToConsume: b.depositBalanceToConsume,
ExitBalanceToConsume: b.exitBalanceToConsume,


@@ -4,7 +4,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
-// DepositRequestsStartIndex is used for returning the deposit receipts start index which is used for eip6110
+// DepositRequestsStartIndex is used for returning the deposit requests start index which is used for eip6110
func (b *BeaconState) DepositRequestsStartIndex() (uint64, error) {
if b.version < version.Electra {
return 0, errNotSupported("DepositRequestsStartIndex", b.version)


@@ -1,6 +1,8 @@
package state_native
import (
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -15,6 +17,9 @@ func (b *BeaconState) AppendPendingConsolidation(val *ethpb.PendingConsolidation
if b.version < version.Electra {
return errNotSupported("AppendPendingConsolidation", b.version)
}
if val == nil {
return errors.New("cannot append nil pending consolidation")
}
b.lock.Lock()
defer b.lock.Unlock()


@@ -1,6 +1,8 @@
package state_native
import (
"errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stateutil"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -15,6 +17,9 @@ func (b *BeaconState) AppendPendingDeposit(pd *ethpb.PendingDeposit) error {
if b.version < version.Electra {
return errNotSupported("AppendPendingDeposit", b.version)
}
if pd == nil {
return errors.New("cannot append nil pending deposit")
}
b.lock.Lock()
defer b.lock.Unlock()


@@ -64,10 +64,10 @@ func (b *BeaconState) AppendPendingPartialWithdrawal(ppw *eth.PendingPartialWith
return nil
}
-// DequeuePartialWithdrawals removes the partial withdrawals from the beginning of the partial withdrawals list.
-func (b *BeaconState) DequeuePartialWithdrawals(n uint64) error {
+// DequeuePendingPartialWithdrawals removes the partial withdrawals from the beginning of the partial withdrawals list.
+func (b *BeaconState) DequeuePendingPartialWithdrawals(n uint64) error {
if b.version < version.Electra {
-return errNotSupported("DequeuePartialWithdrawals", b.version)
+return errNotSupported("DequeuePendingPartialWithdrawals", b.version)
}
if n > uint64(len(b.pendingPartialWithdrawals)) {


@@ -68,7 +68,7 @@ func TestDequeuePendingWithdrawals(t *testing.T) {
num, err := s.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(3), num)
-require.NoError(t, s.DequeuePartialWithdrawals(2))
+require.NoError(t, s.DequeuePendingPartialWithdrawals(2))
num, err = s.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(1), num)
@@ -77,13 +77,13 @@ func TestDequeuePendingWithdrawals(t *testing.T) {
num, err = s.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(1), num)
require.ErrorContains(t, "cannot dequeue more withdrawals than are in the queue", s.DequeuePartialWithdrawals(2))
require.ErrorContains(t, "cannot dequeue more withdrawals than are in the queue", s.DequeuePendingPartialWithdrawals(2))
// Removing all pending partial withdrawals should be OK.
num, err = s.NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(1), num)
-require.NoError(t, s.DequeuePartialWithdrawals(1))
+require.NoError(t, s.DequeuePendingPartialWithdrawals(1))
num, err = s.Copy().NumPendingPartialWithdrawals()
require.NoError(t, err)
require.Equal(t, uint64(0), num)
@@ -91,7 +91,7 @@ func TestDequeuePendingWithdrawals(t *testing.T) {
s, err = InitializeFromProtoDeneb(&eth.BeaconStateDeneb{})
require.NoError(t, err)
require.ErrorContains(t, "is not supported", s.DequeuePartialWithdrawals(0))
require.ErrorContains(t, "is not supported", s.DequeuePendingPartialWithdrawals(0))
}
func TestAppendPendingWithdrawals(t *testing.T) {


@@ -14,7 +14,7 @@ func (b *BeaconState) ProportionalSlashingMultiplier() (uint64, error) {
case version.Phase0:
return params.BeaconConfig().ProportionalSlashingMultiplier, nil
}
return 0, errNotSupported("ProportionalSlashingMultiplier()", b.version)
return 0, errNotSupported("ProportionalSlashingMultiplier", b.version)
}
func (b *BeaconState) InactivityPenaltyQuotient() (uint64, error) {
@@ -26,5 +26,5 @@ func (b *BeaconState) InactivityPenaltyQuotient() (uint64, error) {
case version.Phase0:
return params.BeaconConfig().InactivityPenaltyQuotient, nil
}
return 0, errNotSupported("InactivityPenaltyQuotient()", b.version)
return 0, errNotSupported("InactivityPenaltyQuotient", b.version)
}

View File

@@ -96,10 +96,10 @@ var denebFields = append(
var electraFields = append(
altairFields,
types.LatestExecutionPayloadHeaderDeneb,
types.NextWithdrawalIndex,
types.NextWithdrawalValidatorIndex,
types.HistoricalSummaries,
types.LatestExecutionPayloadHeaderDeneb,
types.DepositRequestsStartIndex,
types.DepositBalanceToConsume,
types.ExitBalanceToConsume,

View File

@@ -107,7 +107,7 @@ type blobBatchVerifier struct {
func (bbv *blobBatchVerifier) newVerifier(rb blocks.ROBlob) verification.BlobVerifier {
m := bbv.verifiers[rb.BlockRoot()]
m[rb.Index] = bbv.newBlobVerifier(rb, verification.BackfillSidecarRequirements)
m[rb.Index] = bbv.newBlobVerifier(rb, verification.BackfillBlobSidecarRequirements)
bbv.verifiers[rb.BlockRoot()] = m
return m[rb.Index]
}

View File
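newVerifier stores one verifier per (block root, blob index) pair in a two-level map. A sketch of that accumulator with stand-in types; unlike the snippet above it also creates the inner map on first use, which the real code presumably handles elsewhere:

package main

import "fmt"

type verifier struct {
	root  [32]byte
	index uint64
}

// register stores a verifier under (blockRoot, index), creating the inner
// map on first use, and returns the stored entry.
func register(m map[[32]byte]map[uint64]*verifier, root [32]byte, idx uint64) *verifier {
	inner := m[root]
	if inner == nil {
		inner = make(map[uint64]*verifier)
	}
	inner[idx] = &verifier{root: root, index: idx}
	m[root] = inner
	return inner[idx]
}

func main() {
	m := make(map[[32]byte]map[uint64]*verifier)
	v := register(m, [32]byte{1}, 0)
	fmt.Println(v.index, len(m)) // 0 1
}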

@@ -388,6 +388,7 @@ func TestService_CheckForPreviousEpochFork(t *testing.T) {
}
}
// oneEpoch returns the duration of one epoch.
func oneEpoch() time.Duration {
return time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
}

View File
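oneEpoch is plain duration arithmetic: slots per epoch times seconds per slot. A standalone sketch with mainnet-style values hard-coded instead of read from params.BeaconConfig() (32 slots x 12 s = 384 s):

package main

import (
	"fmt"
	"time"
)

// Mainnet values; the real code reads these from params.BeaconConfig().
const (
	slotsPerEpoch  = 32
	secondsPerSlot = 12
)

func oneEpoch() time.Duration {
	return time.Duration(slotsPerEpoch*secondsPerSlot) * time.Second
}

func main() {
	fmt.Println(oneEpoch()) // 6m24s (384 seconds)
}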

@@ -172,7 +172,7 @@ func (s *Service) processFetchedDataRegSync(
if len(bwb) == 0 {
return
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
batchFields := logrus.Fields{
"firstSlot": data.bwb[0].Block.Block().Slot(),
@@ -331,7 +331,7 @@ func (s *Service) processBatchedBlocks(ctx context.Context, genesis time.Time,
errParentDoesNotExist, first.Block().ParentRoot(), first.Block().Slot())
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
s.logBatchSyncStatus(genesis, first, len(bwb))
for _, bb := range bwb {

View File

@@ -340,7 +340,7 @@ func (s *Service) fetchOriginBlobs(pids []peer.ID) error {
if len(sidecars) != len(req) {
continue
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.InitsyncBlobSidecarRequirements)
avs := das.NewLazilyPersistentStore(s.cfg.BlobStorage, bv)
current := s.clock.CurrentSlot()
if err := avs.Persist(current, sidecars...); err != nil {

View File

@@ -495,8 +495,8 @@ func TestOriginOutsideRetention(t *testing.T) {
bdb := dbtest.SetupDB(t)
genesis := time.Unix(0, 0)
secsPerEpoch := params.BeaconConfig().SecondsPerSlot * uint64(params.BeaconConfig().SlotsPerEpoch)
retentionSeconds := time.Second * time.Duration(uint64(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest+1)*secsPerEpoch)
outsideRetention := genesis.Add(retentionSeconds)
retentionPeriod := time.Second * time.Duration(uint64(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest+1)*secsPerEpoch)
outsideRetention := genesis.Add(retentionPeriod)
now := func() time.Time {
return outsideRetention
}

View File
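The retentionPeriod rename reflects that the value is a duration, not a count of seconds: (MinEpochsForBlobsSidecarsRequest + 1) epochs expressed as wall-clock time. A sketch with assumed constants (4096 as the spec value is an assumption here; the real code reads params.BeaconConfig()):

package main

import (
	"fmt"
	"time"
)

const (
	slotsPerEpoch                   = 32
	secondsPerSlot                  = 12
	minEpochsForBlobSidecarsRequest = 4096 // assumed mainnet spec value
)

func main() {
	secsPerEpoch := uint64(secondsPerSlot * slotsPerEpoch)
	retentionPeriod := time.Second * time.Duration((minEpochsForBlobSidecarsRequest+1)*secsPerEpoch)
	genesis := time.Unix(0, 0)
	outsideRetention := genesis.Add(retentionPeriod)
	fmt.Println(retentionPeriod, outsideRetention.UTC())
}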

@@ -315,7 +315,7 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
if uint64(len(roots)) > maxReqBlock {
req = roots[:maxReqBlock]
}
if err := s.sendRecentBeaconBlocksRequest(ctx, &req, pid); err != nil {
if err := s.sendBeaconBlocksRequest(ctx, &req, pid); err != nil {
tracing.AnnotateError(span, err)
log.WithError(err).Debug("Could not send recent block request")
}

View File

@@ -16,6 +16,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
)
// beaconBlocksByRangeRPCHandler looks up the request blocks from the database from a given start block.
@@ -26,15 +27,23 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
defer cancel()
SetRPCStreamDeadlines(stream)
remotePeer := stream.Conn().RemotePeer()
m, ok := msg.(*pb.BeaconBlocksByRangeRequest)
if !ok {
return errors.New("message is not type *pb.BeaconBlockByRangeRequest")
}
log.WithField("startSlot", m.StartSlot).WithField("count", m.Count).Debug("Serving block by range request")
log.WithFields(logrus.Fields{
"startSlot": m.StartSlot,
"count": m.Count,
"peer": remotePeer,
}).Debug("Serving block by range request")
rp, err := validateRangeRequest(m, s.cfg.clock.CurrentSlot())
if err != nil {
s.writeErrorResponseToStream(responseCodeInvalidRequest, err.Error(), stream)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(remotePeer)
tracing.AnnotateError(span, err)
return err
}
@@ -50,12 +59,12 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
if err != nil {
return err
}
remainingBucketCapacity := blockLimiter.Remaining(stream.Conn().RemotePeer().String())
remainingBucketCapacity := blockLimiter.Remaining(remotePeer.String())
span.SetAttributes(
trace.Int64Attribute("start", int64(rp.start)), // lint:ignore uintcast -- This conversion is OK for tracing.
trace.Int64Attribute("end", int64(rp.end)), // lint:ignore uintcast -- This conversion is OK for tracing.
trace.Int64Attribute("count", int64(m.Count)),
trace.StringAttribute("peer", stream.Conn().RemotePeer().String()),
trace.StringAttribute("peer", remotePeer.String()),
trace.Int64Attribute("remaining_capacity", remainingBucketCapacity),
)
@@ -82,12 +91,19 @@ func (s *Service) beaconBlocksByRangeRPCHandler(ctx context.Context, msg interfa
}
rpcBlocksByRangeResponseLatency.Observe(float64(time.Since(batchStart).Milliseconds()))
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in BlocksByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
log.WithError(err).Debug("Serving block by range request - BlocksByRange batch")
// If a rate limit is hit, it means an error response has already been sent and the stream has been closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
}
tracing.AnnotateError(span, err)
return err
}
closeStream(stream, log)
return nil
}

View File
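The behavioral change in this hunk: when the batch fails because the peer was rate limited, an error response has already been written and the stream closed, so writing a second one would be wrong. A minimal sketch of the errors.Is guard, with a local sentinel standing in for p2ptypes.ErrRateLimited:

package main

import (
	"errors"
	"fmt"
)

var errRateLimited = errors.New("rate limited") // stands in for p2ptypes.ErrRateLimited

// respond writes a server-error response unless the failure was a rate
// limit, in which case the response was already sent upstream.
func respond(batchErr error) {
	if !errors.Is(batchErr, errRateLimited) {
		fmt.Println("writing server error response to stream")
		return
	}
	fmt.Println("skipping response: stream already closed by rate limiter")
}

func main() {
	respond(errors.New("db failure"))
	respond(fmt.Errorf("batch: %w", errRateLimited))
}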

@@ -20,9 +20,9 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// sendRecentBeaconBlocksRequest sends a recent beacon blocks request to a peer to get
// sendBeaconBlocksRequest sends a beacon blocks by root request to a peer to get
// those corresponding blocks from that peer.
func (s *Service) sendRecentBeaconBlocksRequest(ctx context.Context, requests *types.BeaconBlockByRootsReq, id peer.ID) error {
func (s *Service) sendBeaconBlocksRequest(ctx context.Context, requests *types.BeaconBlockByRootsReq, id peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
@@ -151,7 +151,7 @@ func (s *Service) sendAndSaveBlobSidecars(ctx context.Context, request types.Blo
if len(sidecars) != len(request) {
return fmt.Errorf("received %d blob sidecars, expected %d for RPC", len(sidecars), len(request))
}
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueSidecarRequirements)
bv := verification.NewBlobBatchVerifier(s.newBlobVerifier, verification.PendingQueueBlobSidecarRequirements)
for _, sidecar := range sidecars {
if err := verify.BlobAlignsWithBlock(sidecar, RoBlock); err != nil {
return err

View File

@@ -253,7 +253,7 @@ func TestRecentBeaconBlocks_RPCRequestSent(t *testing.T) {
})
p1.Connect(p2)
require.NoError(t, r.sendRecentBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
require.NoError(t, r.sendBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
if util.WaitTimeout(&wg, 1*time.Second) {
t.Fatal("Did not receive stream within 1 sec")
@@ -328,7 +328,7 @@ func TestRecentBeaconBlocks_RPCRequestSent_IncorrectRoot(t *testing.T) {
})
p1.Connect(p2)
require.ErrorContains(t, "received unexpected block with root", r.sendRecentBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
require.ErrorContains(t, "received unexpected block with root", r.sendBeaconBlocksRequest(context.Background(), &expectedRoots, p2.PeerID()))
}
func TestRecentBeaconBlocksRPCHandler_HandleZeroBlocks(t *testing.T) {

View File

@@ -99,6 +99,7 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
}
var batch blockBatch
wQuota := params.BeaconConfig().MaxRequestBlobSidecars
for batch, ok = batcher.next(ctx, stream); ok; batch, ok = batcher.next(ctx, stream) {
batchStart := time.Now()
@@ -114,7 +115,12 @@ func (s *Service) blobSidecarsByRangeRPCHandler(ctx context.Context, msg interfa
}
if err := batch.error(); err != nil {
log.WithError(err).Debug("error in BlobSidecarsByRange batch")
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
// If a rate limit is hit, it means an error response has already been sent and the stream has been closed.
if !errors.Is(err, p2ptypes.ErrRateLimited) {
s.writeErrorResponseToStream(responseCodeServerError, p2ptypes.ErrGeneric.Error(), stream)
}
tracing.AnnotateError(span, err)
return err
}

View File
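The same handler threads a write quota (MaxRequestBlobSidecars) through the batch loop so a single request can never stream more sidecars than allowed. A sketch of that quota bookkeeping (768 as the mainnet value is an assumption; writing is simulated with a print):

package main

import "fmt"

const maxRequestBlobSidecars = 768 // assumed mainnet MAX_REQUEST_BLOB_SIDECARS

// streamSidecars writes sidecars until the per-request quota is spent,
// mirroring the wQuota bookkeeping in the handler above.
func streamSidecars(sidecars []string, quota uint64) uint64 {
	for _, sc := range sidecars {
		if quota == 0 {
			break
		}
		fmt.Println("writing sidecar", sc)
		quota--
	}
	return quota
}

func main() {
	remaining := streamSidecars([]string{"a", "b", "c"}, maxRequestBlobSidecars)
	fmt.Println("quota remaining:", remaining) // 765
}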

@@ -2,12 +2,12 @@ package sync
import (
"context"
"errors"
"fmt"
"strings"
libp2pcore "github.com/libp2p/go-libp2p/core"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p"
p2ptypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/p2p/types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -16,127 +16,191 @@ import (
)
// pingHandler reads the incoming ping rpc message from the peer.
// If the peer's sequence number is higher than the one stored locally,
// a METADATA request is sent to the peer to retrieve and update the latest metadata.
// Note: This function is misnamed, as it performs more than just reading a ping message.
func (s *Service) pingHandler(_ context.Context, msg interface{}, stream libp2pcore.Stream) error {
SetRPCStreamDeadlines(stream)
// Convert the message to the SSZ Uint64 type.
m, ok := msg.(*primitives.SSZUint64)
if !ok {
return fmt.Errorf("wrong message type for ping, got %T, wanted *uint64", msg)
}
// Validate the incoming request regarding rate limiting.
if err := s.rateLimiter.validateRequest(stream, 1); err != nil {
return err
return errors.Wrap(err, "validate request")
}
s.rateLimiter.add(stream, 1)
valid, err := s.validateSequenceNum(*m, stream.Conn().RemotePeer())
// Retrieve the peer ID.
peerID := stream.Conn().RemotePeer()
// Check if the peer's sequence number is higher than the one we have in our store.
valid, err := s.validateSequenceNum(*m, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
s.writeErrorResponseToStream(responseCodeInvalidRequest, p2ptypes.ErrInvalidSequenceNum.Error(), stream)
}
return err
return errors.Wrap(err, "validate sequence number")
}
// We can already prepare a success response to the peer.
if _, err := stream.Write([]byte{responseCodeSuccess}); err != nil {
return err
return errors.Wrap(err, "write response")
}
sq := primitives.SSZUint64(s.cfg.p2p.MetadataSeq())
if _, err := s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, &sq); err != nil {
// Retrieve our own sequence number.
seqNumber := s.cfg.p2p.MetadataSeq()
// SSZ encode our sequence number.
seqNumberSSZ := primitives.SSZUint64(seqNumber)
// Send our sequence number back to the peer.
if _, err := s.cfg.p2p.Encoding().EncodeWithMaxLength(stream, &seqNumberSSZ); err != nil {
return err
}
closeStream(stream, log)
if valid {
// If the sequence number was valid we're done.
// If the peer's sequence number was valid we're done.
return nil
}
// The sequence number was not valid. Start our own ping back to the peer.
// The peer's sequence number was not valid. We ask the peer for its metadata.
go func() {
// New context so the calling function doesn't cancel on us.
// Define a new context so the calling function doesn't cancel on us.
ctx, cancel := context.WithTimeout(context.Background(), ttfbTimeout)
defer cancel()
md, err := s.sendMetaDataRequest(ctx, stream.Conn().RemotePeer())
// Send a METADATA request to the peer.
peerMetadata, err := s.sendMetaDataRequest(ctx, peerID)
if err != nil {
// We cannot compare errors directly as the stream muxer error
// type isn't compatible with the error we have, so a direct
// equality check fails.
if !strings.Contains(err.Error(), p2ptypes.ErrIODeadline.Error()) {
log.WithField("peer", stream.Conn().RemotePeer()).WithError(err).Debug("Could not send metadata request")
log.WithField("peer", peerID).WithError(err).Debug("Could not send metadata request")
}
return
}
// update metadata if there is no error
s.cfg.p2p.Peers().SetMetadata(stream.Conn().RemotePeer(), md)
// Update peer's metadata.
s.cfg.p2p.Peers().SetMetadata(peerID, peerMetadata)
}()
return nil
}
func (s *Service) sendPingRequest(ctx context.Context, id peer.ID) error {
// sendPingRequest first sends a PING request to the peer.
// If the peer responds with a sequence number higher than the latest one we have in our store for it,
// then this function sends a METADATA request to the peer, and stores the metadata received.
// This function is actually poorly named, since it does more than just send a ping request.
func (s *Service) sendPingRequest(ctx context.Context, peerID peer.ID) error {
ctx, cancel := context.WithTimeout(ctx, respTimeout)
defer cancel()
metadataSeq := primitives.SSZUint64(s.cfg.p2p.MetadataSeq())
topic, err := p2p.TopicFromMessage(p2p.PingMessageName, slots.ToEpoch(s.cfg.clock.CurrentSlot()))
// Get the current epoch.
currentSlot := s.cfg.clock.CurrentSlot()
currentEpoch := slots.ToEpoch(currentSlot)
// SSZ encode our metadata sequence number.
metadataSeq := s.cfg.p2p.MetadataSeq()
encodedMetadataSeq := primitives.SSZUint64(metadataSeq)
// Get the PING topic for the current epoch.
topic, err := p2p.TopicFromMessage(p2p.PingMessageName, currentEpoch)
if err != nil {
return err
return errors.Wrap(err, "topic from message")
}
stream, err := s.cfg.p2p.Send(ctx, &metadataSeq, topic, id)
// Send the PING request to the peer.
stream, err := s.cfg.p2p.Send(ctx, &encodedMetadataSeq, topic, peerID)
if err != nil {
return err
return errors.Wrap(err, "send ping request")
}
currentTime := time.Now()
defer closeStream(stream, log)
startTime := time.Now()
// Read the response from the peer.
code, errMsg, err := ReadStatusCode(stream, s.cfg.p2p.Encoding())
if err != nil {
return err
return errors.Wrap(err, "read status code")
}
// Records the latency of the ping request for that peer.
s.cfg.p2p.Host().Peerstore().RecordLatency(id, time.Now().Sub(currentTime))
// Record the latency of the ping request for that peer.
s.cfg.p2p.Host().Peerstore().RecordLatency(peerID, time.Now().Sub(startTime))
// If the peer responded with an error, increment the bad responses scorer.
if code != 0 {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
return errors.New(errMsg)
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
return errors.Errorf("code: %d - %s", code, errMsg)
}
// Decode the sequence number from the peer.
msg := new(primitives.SSZUint64)
if err := s.cfg.p2p.Encoding().DecodeWithMaxLength(stream, msg); err != nil {
return err
return errors.Wrap(err, "decode sequence number")
}
valid, err := s.validateSequenceNum(*msg, stream.Conn().RemotePeer())
// Determine whether the sequence number returned by the peer matches the one we have in our store.
valid, err := s.validateSequenceNum(*msg, peerID)
if err != nil {
// Descore peer for giving us a bad sequence number.
if errors.Is(err, p2ptypes.ErrInvalidSequenceNum) {
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(stream.Conn().RemotePeer())
s.cfg.p2p.Peers().Scorers().BadResponsesScorer().Increment(peerID)
}
return err
return errors.Wrap(err, "validate sequence number")
}
// The sequence number we have in our store for this peer is the same as the one returned by the peer, all good.
if valid {
return nil
}
md, err := s.sendMetaDataRequest(ctx, stream.Conn().RemotePeer())
// We need to send a METADATA request to the peer to get its latest metadata.
md, err := s.sendMetaDataRequest(ctx, peerID)
if err != nil {
// do not increment bad responses, as its
// already done in the request method.
return err
// do not increment bad responses, as it's already done in the request method.
return errors.Wrap(err, "send metadata request")
}
s.cfg.p2p.Peers().SetMetadata(stream.Conn().RemotePeer(), md)
// Update the metadata for the peer.
s.cfg.p2p.Peers().SetMetadata(peerID, md)
return nil
}
// validates the peer's sequence number.
// validateSequenceNum validates the peer's sequence number.
// - If the peer's sequence number is greater than the sequence number we have in our store for the peer, return false.
// - If the peer's sequence number is equal to the sequence number we have in our store for the peer, return true.
// - If the peer's sequence number is less than the sequence number we have in our store for the peer, return an error.
func (s *Service) validateSequenceNum(seq primitives.SSZUint64, id peer.ID) (bool, error) {
// Retrieve the metadata we have in our store for the peer.
md, err := s.cfg.p2p.Peers().Metadata(id)
if err != nil {
return false, err
return false, errors.Wrap(err, "get metadata")
}
// If we have no metadata for the peer, return false.
if md == nil || md.IsNil() {
return false, nil
}
// Return error on invalid sequence number.
// The peer's sequence number must be greater than or equal to the one we have in our store.
if md.SequenceNumber() > uint64(seq) {
return false, p2ptypes.ErrInvalidSequenceNum
}
// Return true if the peer's sequence number is equal to the sequence number we have in our store.
return md.SequenceNumber() == uint64(seq), nil
}

View File
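The sequence-number protocol above reduces to a three-way comparison between the number we have stored for the peer and the one the peer reports. A self-contained sketch of validateSequenceNum's contract:

package main

import (
	"errors"
	"fmt"
)

var errInvalidSequenceNum = errors.New("invalid sequence number provided by peer")

// validateSeq returns (true, nil) when the reported number matches the
// stored one, (false, nil) when the peer is ahead of us (we should fetch
// fresh metadata), and an error when the peer went backwards.
func validateSeq(stored, reported uint64) (bool, error) {
	if stored > reported {
		return false, errInvalidSequenceNum
	}
	return stored == reported, nil
}

func main() {
	fmt.Println(validateSeq(5, 5)) // true <nil>  -> nothing to do
	fmt.Println(validateSeq(5, 9)) // false <nil> -> send METADATA request
	fmt.Println(validateSeq(5, 3)) // false, error -> descore peer
}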

@@ -19,7 +19,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
pb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -49,7 +48,7 @@ type BeaconBlockProcessor func(block interfaces.ReadOnlySignedBeaconBlock) error
// SendBeaconBlocksByRangeRequest sends BeaconBlocksByRange and returns fetched blocks, if any.
func SendBeaconBlocksByRangeRequest(
ctx context.Context, tor blockchain.TemporalOracle, p2pProvider p2p.SenderEncoder, pid peer.ID,
req *pb.BeaconBlocksByRangeRequest, blockProcessor BeaconBlockProcessor,
req *ethpb.BeaconBlocksByRangeRequest, blockProcessor BeaconBlockProcessor,
) ([]interfaces.ReadOnlySignedBeaconBlock, error) {
topic, err := p2p.TopicFromMessage(p2p.BeaconBlocksByRangeMessageName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
@@ -155,7 +154,7 @@ func SendBeaconBlocksByRootRequest(
return blocks, nil
}
func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.SenderEncoder, pid peer.ID, ctxMap ContextByteVersions, req *pb.BlobSidecarsByRangeRequest, bvs ...BlobResponseValidation) ([]blocks.ROBlob, error) {
func SendBlobsByRangeRequest(ctx context.Context, tor blockchain.TemporalOracle, p2pApi p2p.SenderEncoder, pid peer.ID, ctxMap ContextByteVersions, req *ethpb.BlobSidecarsByRangeRequest, bvs ...BlobResponseValidation) ([]blocks.ROBlob, error) {
topic, err := p2p.TopicFromMessage(p2p.BlobSidecarsByRangeName, slots.ToEpoch(tor.CurrentSlot()))
if err != nil {
return nil, err
@@ -298,7 +297,7 @@ func blobValidatorFromRootReq(req *p2ptypes.BlobSidecarsByRootReq) BlobResponseV
}
}
func blobValidatorFromRangeReq(req *pb.BlobSidecarsByRangeRequest) BlobResponseValidation {
func blobValidatorFromRangeReq(req *ethpb.BlobSidecarsByRangeRequest) BlobResponseValidation {
end := req.StartSlot + primitives.Slot(req.Count)
return func(sc blocks.ROBlob) error {
if sc.Slot() < req.StartSlot || sc.Slot() >= end {

View File

@@ -15,6 +15,8 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
gcache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/trailofbits/go-mutexasserts"
"github.com/prysmaticlabs/prysm/v5/async"
"github.com/prysmaticlabs/prysm/v5/async/abool"
"github.com/prysmaticlabs/prysm/v5/async/event"
@@ -44,22 +46,24 @@ import (
"github.com/prysmaticlabs/prysm/v5/runtime"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/trailofbits/go-mutexasserts"
)
var _ runtime.Service = (*Service)(nil)
const rangeLimit uint64 = 1024
const seenBlockSize = 1000
const seenBlobSize = seenBlockSize * 4 // Each block can have max 4 blobs. Worst case 164kB for cache.
const seenUnaggregatedAttSize = 20000
const seenAggregatedAttSize = 16384
const seenSyncMsgSize = 1000 // Maximum of 512 sync committee members, 1000 is a safe amount.
const seenSyncContributionSize = 512 // Maximum of SYNC_COMMITTEE_SIZE as specified by the spec.
const seenExitSize = 100
const seenProposerSlashingSize = 100
const badBlockSize = 1000
const syncMetricsInterval = 10 * time.Second
const (
rangeLimit uint64 = 1024
seenBlockSize = 1000
seenBlobSize = seenBlockSize * 6 // Each block can have max 6 blobs.
seenDataColumnSize = seenBlockSize * 128 // Each block can have max 128 data columns.
seenUnaggregatedAttSize = 20000
seenAggregatedAttSize = 16384
seenSyncMsgSize = 1000 // Maximum of 512 sync committee members, 1000 is a safe amount.
seenSyncContributionSize = 512 // Maximum of SYNC_COMMITTEE_SIZE as specified by the spec.
seenExitSize = 100
seenProposerSlashingSize = 100
badBlockSize = 1000
syncMetricsInterval = 10 * time.Second
)
var (
// Seconds in one epoch.
@@ -162,18 +166,18 @@ type Service struct {
// NewService initializes new regular sync service.
func NewService(ctx context.Context, opts ...Option) *Service {
c := gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */)
ctx, cancel := context.WithCancel(ctx)
r := &Service{
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: c,
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]ethpb.SignedAggregateAttAndProof),
signatureChan: make(chan *signatureVerifier, verifierLimit),
}
for _, opt := range opts {
if err := opt(r); err != nil {
return nil
@@ -224,7 +228,7 @@ func (s *Service) Start() {
s.newBlobVerifier = newBlobVerifierFromInitializer(v)
go s.verifierRoutine()
go s.registerHandlers()
go s.startTasksPostInitialSync()
s.cfg.p2p.AddConnectionHandler(s.reValidatePeer, s.sendGoodbye)
s.cfg.p2p.AddDisconnectionHandler(func(_ context.Context, _ peer.ID) error {
@@ -315,23 +319,31 @@ func (s *Service) waitForChainStart() {
s.markForChainStart()
}
func (s *Service) registerHandlers() {
func (s *Service) startTasksPostInitialSync() {
// Wait for the chain to start.
s.waitForChainStart()
select {
case <-s.initialSyncComplete:
// Register respective pubsub handlers at state synced event.
digest, err := s.currentForkDigest()
// Compute the current epoch.
currentSlot := slots.CurrentSlot(uint64(s.cfg.clock.GenesisTime().Unix()))
currentEpoch := slots.ToEpoch(currentSlot)
// Compute the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not retrieve current fork digest")
return
}
currentEpoch := slots.ToEpoch(slots.CurrentSlot(uint64(s.cfg.clock.GenesisTime().Unix())))
s.registerSubscribers(currentEpoch, digest)
// Register respective pubsub handlers at state synced event.
s.registerSubscribers(currentEpoch, forkDigest)
// Start the fork watcher.
go s.forkWatcher()
return
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
return
}
}

View File
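startTasksPostInitialSync blocks on whichever happens first: initial sync completing or service shutdown. A stripped-down sketch of that select pattern (the real version computes the fork digest and registers subscribers where the print is):

package main

import (
	"context"
	"fmt"
	"time"
)

// waitThenStart runs the post-sync setup only if initial sync finishes
// before the service context is cancelled.
func waitThenStart(ctx context.Context, syncComplete <-chan struct{}) {
	select {
	case <-syncComplete:
		fmt.Println("initial sync complete: registering subscribers, starting fork watcher")
	case <-ctx.Done():
		fmt.Println("context closed, exiting goroutine")
	}
}

func main() {
	done := make(chan struct{})
	go func() { time.Sleep(10 * time.Millisecond); close(done) }()
	waitThenStart(context.Background(), done)
}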

@@ -62,7 +62,7 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
}
topic := "/eth2/%x/beacon_block"
go r.registerHandlers()
go r.startTasksPostInitialSync()
time.Sleep(100 * time.Millisecond)
var vr [32]byte
@@ -143,7 +143,7 @@ func TestSyncHandlers_WaitTillSynced(t *testing.T) {
syncCompleteCh := make(chan bool)
go func() {
r.registerHandlers()
r.startTasksPostInitialSync()
syncCompleteCh <- true
}()
@@ -200,7 +200,7 @@ func TestSyncService_StopCleanly(t *testing.T) {
initialSyncComplete: make(chan struct{}),
}
go r.registerHandlers()
go r.startTasksPostInitialSync()
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
r.waitForChainStart()

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/io/file"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"google.golang.org/protobuf/proto"
)
@@ -62,6 +63,10 @@ func (s *Service) beaconBlockSubscriber(ctx context.Context, msg proto.Message)
// This function reconstructs the blob sidecars from the EL using the block's KZG commitments,
// broadcasts the reconstructed blobs over P2P, and saves them into the blob storage.
func (s *Service) reconstructAndBroadcastBlobs(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock) {
if block.Version() < version.Deneb {
return
}
startTime, err := slots.ToTime(uint64(s.cfg.chain.GenesisTime().Unix()), block.Block().Slot())
if err != nil {
log.WithError(err).Error("Failed to convert slot to time")

View File

@@ -13,7 +13,7 @@ import (
func (s *Service) blobSubscriber(ctx context.Context, msg proto.Message) error {
b, ok := msg.(blocks.VerifiedROBlob)
if !ok {
return fmt.Errorf("message was not type blocks.ROBlob, type=%T", msg)
return fmt.Errorf("message was not type blocks.VerifiedROBlob, type=%T", msg)
}
return s.subscribeBlob(ctx, b)

View File

@@ -57,11 +57,10 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
}
aggregate := m.AggregateAttestationAndProof().AggregateVal()
data := aggregate.GetData()
if err := helpers.ValidateNilAttestation(aggregate); err != nil {
return pubsub.ValidationReject, err
}
data := aggregate.GetData()
// Do not process slot 0 aggregates.
if data.Slot == 0 {
return pubsub.ValidationIgnore, nil
@@ -118,6 +117,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
if seen {
return pubsub.ValidationIgnore, nil
}
// Verify the block being voted on is in the beacon chain.
// If not, store this attestation in the map of pending attestations.
if !s.validateBlockInAttestation(ctx, m) {
return pubsub.ValidationIgnore, nil
}
@@ -223,6 +225,8 @@ func (s *Service) validateAggregatedAtt(ctx context.Context, signed ethpb.Signed
return s.validateWithBatchVerifier(ctx, "aggregate", set)
}
// validateBlockInAttestation checks if the block being voted on is in the beaconDB.
// If not, it stores this attestation in the map of pending attestations.
func (s *Service) validateBlockInAttestation(ctx context.Context, satt ethpb.SignedAggregateAttAndProof) bool {
// Verify the block being voted on and the processed state are in beaconDB. The block should have passed validation if it's in the beaconDB.
blockRoot := bytesutil.ToBytes32(satt.AggregateAttestationAndProof().AggregateVal().GetData().BeaconBlockRoot)

View File
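validateBlockInAttestation gates aggregates on whether the voted-on block is already known; otherwise the attestation is parked in a root-keyed map until the block arrives. A sketch of that queueing structure with hypothetical types:

package main

import (
	"fmt"
	"sync"
)

type pendingAtts struct {
	mu     sync.Mutex
	byRoot map[[32]byte][]string // block root -> queued aggregates (stand-in type)
}

// queueIfUnknown parks an attestation when its block root is not yet known,
// returning false so the caller can ValidationIgnore the message.
func (p *pendingAtts) queueIfUnknown(known func([32]byte) bool, root [32]byte, att string) bool {
	if known(root) {
		return true
	}
	p.mu.Lock()
	defer p.mu.Unlock()
	p.byRoot[root] = append(p.byRoot[root], att)
	return false
}

func main() {
	p := &pendingAtts{byRoot: make(map[[32]byte][]string)}
	known := func(r [32]byte) bool { return r == [32]byte{1} }
	fmt.Println(p.queueIfUnknown(known, [32]byte{1}, "att-a")) // true: block known
	fmt.Println(p.queueIfUnknown(known, [32]byte{2}, "att-b")) // false: parked
	fmt.Println(len(p.byRoot[[32]byte{2}]))                    // 1
}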

@@ -62,12 +62,11 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(ctx context.Context, p
if !ok {
return pubsub.ValidationReject, errWrongMessage
}
data := att.GetData()
if err := helpers.ValidateNilAttestation(att); err != nil {
return pubsub.ValidationReject, err
}
data := att.GetData()
// Do not process slot 0 attestations.
if data.Slot == 0 {
return pubsub.ValidationIgnore, nil

View File

@@ -211,11 +211,16 @@ func (s *Service) validateBeaconBlockPubSub(ctx context.Context, pid peer.ID, ms
// Log the arrival time of the accepted block
graffiti := blk.Block().Body().Graffiti()
exec, err := blk.Block().Body().Execution()
if err != nil {
log.WithError(err).Debug("Could not read execution payload from block")
}
startTime, err := slots.ToTime(genesisTime, blk.Block().Slot())
logFields := logrus.Fields{
"blockSlot": blk.Block().Slot(),
"proposerIndex": blk.Block().ProposerIndex(),
"graffiti": string(graffiti[:]),
"extraData": string(exec.ExtraData()),
}
if err != nil {
log.WithError(err).WithFields(logFields).Warn("Received block, could not report timing information.")

View File

@@ -51,7 +51,7 @@ func (s *Service) validateBlob(ctx context.Context, pid peer.ID, msg *pubsub.Mes
if err != nil {
return pubsub.ValidationReject, errors.Wrap(err, "roblob conversion failure")
}
vf := s.newBlobVerifier(blob, verification.GossipSidecarRequirements)
vf := s.newBlobVerifier(blob, verification.GossipBlobSidecarRequirements)
if err := vf.BlobIndexInBounds(); err != nil {
return pubsub.ValidationReject, err

View File

@@ -10,6 +10,7 @@ go_library(
"fake.go",
"initializer.go",
"interface.go",
"log.go",
"metrics.go",
"mock.go",
"result.go",

View File

@@ -169,7 +169,7 @@ func TestBatchVerifier(t *testing.T) {
blk, blbs := c.bandb(t, c.nblobs)
reqs := c.reqs
if reqs == nil {
reqs = InitsyncSidecarRequirements
reqs = InitsyncBlobSidecarRequirements
}
bbv := NewBlobBatchVerifier(c.nv(), reqs)
if c.cv == nil {

View File

@@ -2,6 +2,7 @@ package verification
import (
"context"
goError "errors"
"github.com/pkg/errors"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
@@ -12,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/runtime/logging"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
)
const (
@@ -29,7 +29,7 @@ const (
RequireSidecarProposerExpected
)
var allSidecarRequirements = []Requirement{
var allBlobSidecarRequirements = []Requirement{
RequireBlobIndexInBounds,
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
@@ -43,21 +43,21 @@ var allSidecarRequirements = []Requirement{
RequireSidecarProposerExpected,
}
// GossipSidecarRequirements defines the set of requirements that BlobSidecars received on gossip
// GossipBlobSidecarRequirements defines the set of requirements that BlobSidecars received on gossip
// must satisfy in order to upgrade an ROBlob to a VerifiedROBlob.
var GossipSidecarRequirements = requirementList(allSidecarRequirements).excluding()
var GossipBlobSidecarRequirements = requirementList(allBlobSidecarRequirements).excluding()
// SpectestSidecarRequirements is used by the forkchoice spectests when verifying blobs used in the on_block tests.
// SpectestBlobSidecarRequirements is used by the forkchoice spectests when verifying blobs used in the on_block tests.
// The only requirements we exclude for these tests are the parent validity and seen tests, as these are specific to
// gossip processing and require the bad block cache that we only use there.
var SpectestSidecarRequirements = requirementList(GossipSidecarRequirements).excluding(
var SpectestBlobSidecarRequirements = requirementList(GossipBlobSidecarRequirements).excluding(
RequireSidecarParentSeen, RequireSidecarParentValid)
// InitsyncSidecarRequirements is the list of verification requirements to be used by the init-sync service
// InitsyncBlobSidecarRequirements is the list of verification requirements to be used by the init-sync service
// for batch-mode syncing. Because we only perform batch verification as part of the IsDataAvailable method
// for blobs after the block has been verified, and the blobs to be verified are keyed in the cache by the
// block root, the list of required verifications is much shorter than gossip.
var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).excluding(
var InitsyncBlobSidecarRequirements = requirementList(GossipBlobSidecarRequirements).excluding(
RequireNotFromFutureSlot,
RequireSlotAboveFinalized,
RequireSidecarParentSeen,
@@ -71,36 +71,16 @@ var InitsyncSidecarRequirements = requirementList(GossipSidecarRequirements).exc
// execution layer mempool. Only the KZG proof verification is required.
var ELMemPoolRequirements = []Requirement{RequireSidecarKzgProofVerified}
// BackfillSidecarRequirements is the same as InitsyncSidecarRequirements.
var BackfillSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// BackfillBlobSidecarRequirements is the same as InitsyncBlobSidecarRequirements.
var BackfillBlobSidecarRequirements = requirementList(InitsyncBlobSidecarRequirements).excluding()
// PendingQueueSidecarRequirements is the same as InitsyncSidecarRequirements, used by the pending blocks queue.
var PendingQueueSidecarRequirements = requirementList(InitsyncSidecarRequirements).excluding()
// PendingQueueBlobSidecarRequirements is the same as InitsyncBlobSidecarRequirements, used by the pending blocks queue.
var PendingQueueBlobSidecarRequirements = requirementList(InitsyncBlobSidecarRequirements).excluding()
var (
ErrBlobInvalid = errors.New("blob failed verification")
// ErrBlobIndexInvalid means RequireBlobIndexInBounds failed.
ErrBlobIndexInvalid = errors.Wrap(ErrBlobInvalid, "incorrect blob sidecar index")
// ErrFromFutureSlot means RequireSlotNotTooEarly failed.
ErrFromFutureSlot = errors.Wrap(ErrBlobInvalid, "slot is too far in the future")
// ErrSlotNotAfterFinalized means RequireSlotAboveFinalized failed.
ErrSlotNotAfterFinalized = errors.Wrap(ErrBlobInvalid, "slot <= finalized checkpoint")
// ErrInvalidProposerSignature means RequireValidProposerSignature failed.
ErrInvalidProposerSignature = errors.Wrap(ErrBlobInvalid, "proposer signature could not be verified")
// ErrSidecarParentNotSeen means RequireSidecarParentSeen failed.
ErrSidecarParentNotSeen = errors.Wrap(ErrBlobInvalid, "parent root has not been seen")
// ErrSidecarParentInvalid means RequireSidecarParentValid failed.
ErrSidecarParentInvalid = errors.Wrap(ErrBlobInvalid, "parent block is not valid")
// ErrSlotNotAfterParent means RequireSidecarParentSlotLower failed.
ErrSlotNotAfterParent = errors.Wrap(ErrBlobInvalid, "slot <= slot")
// ErrSidecarNotFinalizedDescendent means RequireSidecarDescendsFromFinalized failed.
ErrSidecarNotFinalizedDescendent = errors.Wrap(ErrBlobInvalid, "blob parent is not descended from the finalized block")
// ErrSidecarInclusionProofInvalid means RequireSidecarInclusionProven failed.
ErrSidecarInclusionProofInvalid = errors.Wrap(ErrBlobInvalid, "sidecar inclusion proof verification failed")
// ErrSidecarKzgProofInvalid means RequireSidecarKzgProofVerified failed.
ErrSidecarKzgProofInvalid = errors.Wrap(ErrBlobInvalid, "sidecar kzg commitment proof verification failed")
// ErrSidecarUnexpectedProposer means RequireSidecarProposerExpected failed.
ErrSidecarUnexpectedProposer = errors.Wrap(ErrBlobInvalid, "sidecar was not proposed by the expected proposer_index")
ErrBlobIndexInvalid = errors.New("incorrect blob sidecar index")
)
type ROBlobVerifier struct {
@@ -149,7 +129,7 @@ func (bv *ROBlobVerifier) BlobIndexInBounds() (err error) {
defer bv.recordResult(RequireBlobIndexInBounds, &err)
if bv.blob.Index >= fieldparams.MaxBlobsPerBlock {
log.WithFields(logging.BlobFields(bv.blob)).Debug("Sidecar index >= MAX_BLOBS_PER_BLOCK")
return ErrBlobIndexInvalid
return blobErrBuilder(ErrBlobIndexInvalid)
}
return nil
}
@@ -168,7 +148,7 @@ func (bv *ROBlobVerifier) NotFromFutureSlot() (err error) {
// If the system time is still before earliestStart, we consider the blob from a future slot and return an error.
if bv.clock.Now().Before(earliestStart) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is too far in the future")
return ErrFromFutureSlot
return blobErrBuilder(ErrFromFutureSlot)
}
return nil
}
@@ -181,11 +161,11 @@ func (bv *ROBlobVerifier) SlotAboveFinalized() (err error) {
fcp := bv.fc.FinalizedCheckpoint()
fSlot, err := slots.EpochStart(fcp.Epoch)
if err != nil {
return errors.Wrapf(ErrSlotNotAfterFinalized, "error computing epoch start slot for finalized checkpoint (%d) %s", fcp.Epoch, err.Error())
return errors.Wrapf(blobErrBuilder(ErrSlotNotAfterFinalized), "error computing epoch start slot for finalized checkpoint (%d) %s", fcp.Epoch, err.Error())
}
if bv.blob.Slot() <= fSlot {
log.WithFields(logging.BlobFields(bv.blob)).Debug("sidecar slot is not after finalized checkpoint")
return ErrSlotNotAfterFinalized
return blobErrBuilder(ErrSlotNotAfterFinalized)
}
return nil
}
@@ -203,7 +183,7 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("reusing failed proposer signature validation from cache")
blobVerificationProposerSignatureCache.WithLabelValues("hit-invalid").Inc()
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
return nil
}
@@ -213,12 +193,12 @@ func (bv *ROBlobVerifier) ValidProposerSignature(ctx context.Context) (err error
parent, err := bv.parentState(ctx)
if err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("could not replay parent state for blob signature verification")
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
// Full verification, which will subsequently be cached for anything sharing the signature cache.
if err = bv.sc.VerifySignature(sd, parent); err != nil {
log.WithFields(logging.BlobFields(bv.blob)).WithError(err).Debug("signature verification failed")
return ErrInvalidProposerSignature
return blobErrBuilder(ErrInvalidProposerSignature)
}
return nil
}
@@ -235,7 +215,7 @@ func (bv *ROBlobVerifier) SidecarParentSeen(parentSeen func([32]byte) bool) (err
return nil
}
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root has not been seen")
return ErrSidecarParentNotSeen
return blobErrBuilder(ErrSidecarParentNotSeen)
}
// SidecarParentValid represents the spec verification:
@@ -244,7 +224,7 @@ func (bv *ROBlobVerifier) SidecarParentValid(badParent func([32]byte) bool) (err
defer bv.recordResult(RequireSidecarParentValid, &err)
if badParent != nil && badParent(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root is invalid")
return ErrSidecarParentInvalid
return blobErrBuilder(ErrSidecarParentInvalid)
}
return nil
}
@@ -255,10 +235,10 @@ func (bv *ROBlobVerifier) SidecarParentSlotLower() (err error) {
defer bv.recordResult(RequireSidecarParentSlotLower, &err)
parentSlot, err := bv.fc.Slot(bv.blob.ParentRoot())
if err != nil {
return errors.Wrap(ErrSlotNotAfterParent, "parent root not in forkchoice")
return errors.Wrap(blobErrBuilder(ErrSlotNotAfterParent), "parent root not in forkchoice")
}
if parentSlot >= bv.blob.Slot() {
return ErrSlotNotAfterParent
return blobErrBuilder(ErrSlotNotAfterParent)
}
return nil
}
@@ -270,7 +250,7 @@ func (bv *ROBlobVerifier) SidecarDescendsFromFinalized() (err error) {
defer bv.recordResult(RequireSidecarDescendsFromFinalized, &err)
if !bv.fc.HasNode(bv.blob.ParentRoot()) {
log.WithFields(logging.BlobFields(bv.blob)).Debug("parent root not in forkchoice")
return ErrSidecarNotFinalizedDescendent
return blobErrBuilder(ErrSidecarNotFinalizedDescendent)
}
return nil
}
@@ -281,7 +261,7 @@ func (bv *ROBlobVerifier) SidecarInclusionProven() (err error) {
defer bv.recordResult(RequireSidecarInclusionProven, &err)
if err = blocks.VerifyKZGInclusionProof(bv.blob); err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("sidecar inclusion proof verification failed")
return ErrSidecarInclusionProofInvalid
return blobErrBuilder(ErrSidecarInclusionProofInvalid)
}
return nil
}
@@ -293,7 +273,7 @@ func (bv *ROBlobVerifier) SidecarKzgProofVerified() (err error) {
defer bv.recordResult(RequireSidecarKzgProofVerified, &err)
if err = bv.verifyBlobCommitment(bv.blob); err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("kzg commitment proof verification failed")
return ErrSidecarKzgProofInvalid
return blobErrBuilder(ErrSidecarKzgProofInvalid)
}
return nil
}
@@ -311,7 +291,7 @@ func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err erro
}
r, err := bv.fc.TargetRootForEpoch(bv.blob.ParentRoot(), e)
if err != nil {
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
c := &forkchoicetypes.Checkpoint{Root: r, Epoch: e}
idx, cached := bv.pc.Proposer(c, bv.blob.Slot())
@@ -319,19 +299,19 @@ func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err erro
pst, err := bv.parentState(ctx)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("state replay to parent_root failed")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
idx, err = bv.pc.ComputeProposer(ctx, bv.blob.ParentRoot(), bv.blob.Slot(), pst)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("error computing proposer index from parent state")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
}
if idx != bv.blob.ProposerIndex() {
log.WithError(ErrSidecarUnexpectedProposer).
log.WithError(blobErrBuilder(ErrSidecarUnexpectedProposer)).
WithFields(logging.BlobFields(bv.blob)).WithField("expectedProposer", idx).
Debug("unexpected blob proposer")
return ErrSidecarUnexpectedProposer
return blobErrBuilder(ErrSidecarUnexpectedProposer)
}
return nil
}
@@ -357,3 +337,7 @@ func blobToSignatureData(b blocks.ROBlob) SignatureData {
Slot: b.Slot(),
}
}
func blobErrBuilder(baseErr error) error {
return goError.Join(ErrBlobInvalid, baseErr)
}

View File
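The refactor replaces pre-wrapped error variables with plain sentinels joined on demand, so errors.Is matches both the umbrella ErrBlobInvalid and the specific failure. A sketch showing why errors.Join (goError.Join above) preserves both identities:

package main

import (
	"errors"
	"fmt"
)

var (
	errBlobInvalid      = errors.New("blob failed verification")
	errBlobIndexInvalid = errors.New("incorrect blob sidecar index")
)

// blobErrBuilder mirrors the helper above: join the umbrella error with
// the specific failure so callers can errors.Is against either.
func blobErrBuilder(baseErr error) error {
	return errors.Join(errBlobInvalid, baseErr)
}

func main() {
	err := blobErrBuilder(errBlobIndexInvalid)
	fmt.Println(errors.Is(err, errBlobInvalid))      // true
	fmt.Println(errors.Is(err, errBlobIndexInvalid)) // true
}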

@@ -27,13 +27,13 @@ func TestBlobIndexInBounds(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, 1)
b := blobs[0]
// set Index to a value that is out of bounds
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.BlobIndexInBounds())
require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))
require.NoError(t, v.results.result(RequireBlobIndexInBounds))
b.Index = fieldparams.MaxBlobsPerBlock
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.BlobIndexInBounds(), ErrBlobIndexInvalid)
require.Equal(t, true, v.results.executed(RequireBlobIndexInBounds))
require.NotNil(t, v.results.result(RequireBlobIndexInBounds))
@@ -52,7 +52,7 @@ func TestSlotNotTooEarly(t *testing.T) {
// This clock will give a current slot of 1 on the nose
happyClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return now }))
ini := Initializer{shared: &sharedResources{clock: happyClock}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.NotFromFutureSlot())
require.Equal(t, true, v.results.executed(RequireNotFromFutureSlot))
require.NoError(t, v.results.result(RequireNotFromFutureSlot))
@@ -61,7 +61,7 @@ func TestSlotNotTooEarly(t *testing.T) {
// but still in the previous slot.
closeClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return now.Add(-1 * params.BeaconConfig().MaximumGossipClockDisparityDuration() / 2) }))
ini = Initializer{shared: &sharedResources{clock: closeClock}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.NotFromFutureSlot())
// This clock will give a current slot of 0, with now coming more than max clock disparity before slot 1
@@ -69,7 +69,7 @@ func TestSlotNotTooEarly(t *testing.T) {
dispClock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return disparate }))
// Set up initializer to use the clock that will set now to a little too far before slot 1
ini = Initializer{shared: &sharedResources{clock: dispClock}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.NotFromFutureSlot(), ErrFromFutureSlot)
require.Equal(t, true, v.results.executed(RequireNotFromFutureSlot))
require.NotNil(t, v.results.result(RequireNotFromFutureSlot))
@@ -114,7 +114,7 @@ func TestSlotAboveFinalized(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 0, 1)
b := blobs[0]
b.SignedBlockHeader.Header.Slot = c.slot
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
err := v.SlotAboveFinalized()
require.Equal(t, true, v.results.executed(RequireSlotAboveFinalized))
if c.err == nil {
@@ -146,7 +146,7 @@ func TestValidProposerSignature_Cached(t *testing.T) {
},
}
ini := Initializer{shared: &sharedResources{sc: sc, sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.ValidProposerSignature(ctx))
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NoError(t, v.results.result(RequireValidProposerSignature))
@@ -159,7 +159,7 @@ func TestValidProposerSignature_Cached(t *testing.T) {
return true, errors.New("derp")
}
ini = Initializer{shared: &sharedResources{sc: sc, sr: &mockStateByRooter{sbr: sbrErrorIfCalled(t)}}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.ValidProposerSignature(ctx), ErrInvalidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NotNil(t, v.results.result(RequireValidProposerSignature))
@@ -182,14 +182,14 @@ func TestValidProposerSignature_CacheMiss(t *testing.T) {
},
}
ini := Initializer{shared: &sharedResources{sc: sc, sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{})}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.ValidProposerSignature(ctx))
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NoError(t, v.results.result(RequireValidProposerSignature))
// simulate state not found
ini = Initializer{shared: &sharedResources{sc: sc, sr: sbrNotFound(t, expectedSd.Parent)}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.ValidProposerSignature(ctx), ErrInvalidProposerSignature)
require.Equal(t, true, v.results.executed(RequireValidProposerSignature))
require.NotNil(t, v.results.result(RequireValidProposerSignature))
@@ -206,7 +206,7 @@ func TestValidProposerSignature_CacheMiss(t *testing.T) {
},
}
ini = Initializer{shared: &sharedResources{sc: sc, sr: sbr}}
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
// make sure all the histories are clean before calling the method
// so we don't get polluted by previous usages
@@ -255,14 +255,14 @@ func TestSidecarParentSeen(t *testing.T) {
t.Run("happy path", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcHas}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentSeen(nil))
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NoError(t, v.results.result(RequireSidecarParentSeen))
})
t.Run("HasNode false, no badParent cb, expected error", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentSeen(nil), ErrSidecarParentNotSeen)
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NotNil(t, v.results.result(RequireSidecarParentSeen))
@@ -270,14 +270,14 @@ func TestSidecarParentSeen(t *testing.T) {
t.Run("HasNode false, badParent true", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentSeen(badParentCb(t, b.ParentRoot(), true)))
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NoError(t, v.results.result(RequireSidecarParentSeen))
})
t.Run("HasNode false, badParent false", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{fc: fcLacks}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentSeen(badParentCb(t, b.ParentRoot(), false)), ErrSidecarParentNotSeen)
require.Equal(t, true, v.results.executed(RequireSidecarParentSeen))
require.NotNil(t, v.results.result(RequireSidecarParentSeen))
@@ -289,14 +289,14 @@ func TestSidecarParentValid(t *testing.T) {
b := blobs[0]
t.Run("parent valid", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarParentValid(badParentCb(t, b.ParentRoot(), false)))
require.Equal(t, true, v.results.executed(RequireSidecarParentValid))
require.NoError(t, v.results.result(RequireSidecarParentValid))
})
t.Run("parent not valid", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarParentValid(badParentCb(t, b.ParentRoot(), true)), ErrSidecarParentInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarParentValid))
require.NotNil(t, v.results.result(RequireSidecarParentValid))
@@ -340,7 +340,7 @@ func TestSidecarParentSlotLower(t *testing.T) {
}
return c.fcSlot, c.fcErr
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
err := v.SidecarParentSlotLower()
require.Equal(t, true, v.results.executed(RequireSidecarParentSlotLower))
if c.err == nil {
@@ -364,7 +364,7 @@ func TestSidecarDescendsFromFinalized(t *testing.T) {
}
return false
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarDescendsFromFinalized(), ErrSidecarNotFinalizedDescendent)
require.Equal(t, true, v.results.executed(RequireSidecarDescendsFromFinalized))
require.NotNil(t, v.results.result(RequireSidecarDescendsFromFinalized))
@@ -376,7 +376,7 @@ func TestSidecarDescendsFromFinalized(t *testing.T) {
}
return true
}}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarDescendsFromFinalized())
require.Equal(t, true, v.results.executed(RequireSidecarDescendsFromFinalized))
require.NoError(t, v.results.result(RequireSidecarDescendsFromFinalized))
@@ -389,7 +389,7 @@ func TestSidecarInclusionProven(t *testing.T) {
b := blobs[0]
ini := Initializer{}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarInclusionProven())
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NoError(t, v.results.result(RequireSidecarInclusionProven))
@@ -397,7 +397,7 @@ func TestSidecarInclusionProven(t *testing.T) {
// Invert bits of the first byte of the body root to mess up the proof
byte0 := b.SignedBlockHeader.Header.BodyRoot[0]
b.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarInclusionProven(), ErrSidecarInclusionProofInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NotNil(t, v.results.result(RequireSidecarInclusionProven))
@@ -409,7 +409,7 @@ func TestSidecarInclusionProvenElectra(t *testing.T) {
b := blobs[0]
ini := Initializer{}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarInclusionProven())
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NoError(t, v.results.result(RequireSidecarInclusionProven))
@@ -417,7 +417,7 @@ func TestSidecarInclusionProvenElectra(t *testing.T) {
// Invert bits of the first byte of the body root to mess up the proof
byte0 := b.SignedBlockHeader.Header.BodyRoot[0]
b.SignedBlockHeader.Header.BodyRoot[0] = byte0 ^ 255
v = ini.NewBlobVerifier(b, GossipSidecarRequirements)
v = ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarInclusionProven(), ErrSidecarInclusionProofInvalid)
require.Equal(t, true, v.results.executed(RequireSidecarInclusionProven))
require.NotNil(t, v.results.result(RequireSidecarInclusionProven))
@@ -452,21 +452,21 @@ func TestSidecarProposerExpected(t *testing.T) {
b := blobs[0]
t.Run("cached, matches", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex())}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarProposerExpected(ctx))
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("cached, does not match", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex() + 1)}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("not cached, state lookup failure", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{ProposerCB: pcReturnsNotFound()}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -475,14 +475,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, proposer matches", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex(), nil
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarProposerExpected(ctx))
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
@@ -490,14 +490,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, proposer does not match", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex() + 1, nil
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
- v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
+ v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -505,14 +505,14 @@ func TestSidecarProposerExpected(t *testing.T) {
t.Run("not cached, ComputeProposer fails", func(t *testing.T) {
pc := &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
- ComputeProposerCB: func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
+ ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return 0, errors.New("ComputeProposer failed")
},
}
ini := Initializer{shared: &sharedResources{sr: sbrForValOverride(b.ProposerIndex(), &ethpb.Validator{}), pc: pc, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
- v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
+ v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), ErrSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
@@ -523,7 +523,7 @@ func TestRequirementSatisfaction(t *testing.T) {
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 1)
b := blobs[0]
ini := Initializer{}
- v := ini.NewBlobVerifier(b, GossipSidecarRequirements)
+ v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
_, err := v.VerifiedROBlob()
require.ErrorIs(t, err, ErrBlobInvalid)
@@ -537,7 +537,7 @@ func TestRequirementSatisfaction(t *testing.T) {
}
// satisfy everything through the backdoor and ensure we get the verified ro blob at the end
- for _, r := range GossipSidecarRequirements {
+ for _, r := range GossipBlobSidecarRequirements {
v.results.record(r, nil)
}
require.Equal(t, true, v.results.allSatisfied())
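
For orientation, the renamed GossipBlobSidecarRequirements list drives the same verifier flow throughout these tests: build a verifier over a requirement list, run the individual checks, then ask for the verified blob. A minimal sketch of that flow, using only the methods exercised above (the walk over the remaining requirements is elided; blob, ctx, and ini are assumed to be in scope):

	v := ini.NewBlobVerifier(blob, GossipBlobSidecarRequirements)
	if err := v.SidecarProposerExpected(ctx); err != nil {
		// e.g. ErrSidecarUnexpectedProposer; reject the sidecar
	}
	// ... run each remaining requirement check the same way ...
	verified, err := v.VerifiedROBlob() // ErrBlobInvalid until every requirement is satisfied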

View File

@@ -17,7 +17,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
log "github.com/sirupsen/logrus"
"github.com/sirupsen/logrus"
)
const (
@@ -50,8 +50,8 @@ type SignatureData struct {
Slot primitives.Slot
}
- func (d SignatureData) logFields() log.Fields {
- return log.Fields{
+ func (d SignatureData) logFields() logrus.Fields {
+ return logrus.Fields{
"root": fmt.Sprintf("%#x", d.Root),
"parentRoot": fmt.Sprintf("%#x", d.Parent),
"signature": fmt.Sprintf("%#x", d.Signature),

View File

@@ -2,8 +2,40 @@ package verification
import "github.com/pkg/errors"
- // ErrMissingVerification indicates that the given verification function was never performed on the value.
- var ErrMissingVerification = errors.New("verification was not performed for requirement")
+ var (
+ // ErrFromFutureSlot means RequireSlotNotTooEarly failed.
+ ErrFromFutureSlot = errors.New("slot is too far in the future")
+ // ErrSlotNotAfterFinalized means RequireSlotAboveFinalized failed.
+ ErrSlotNotAfterFinalized = errors.New("slot <= finalized checkpoint")
+ // ErrInvalidProposerSignature means RequireValidProposerSignature failed.
+ ErrInvalidProposerSignature = errors.New("proposer signature could not be verified")
+ // ErrSidecarParentNotSeen means RequireSidecarParentSeen failed.
+ ErrSidecarParentNotSeen = errors.New("parent root has not been seen")
+ // ErrSidecarParentInvalid means RequireSidecarParentValid failed.
+ ErrSidecarParentInvalid = errors.New("parent block is not valid")
+ // ErrSlotNotAfterParent means RequireSidecarParentSlotLower failed.
+ ErrSlotNotAfterParent = errors.New("slot <= parent slot")
+ // ErrSidecarNotFinalizedDescendent means RequireSidecarDescendsFromFinalized failed.
+ ErrSidecarNotFinalizedDescendent = errors.New("parent is not descended from the finalized block")
+ // ErrSidecarInclusionProofInvalid means RequireSidecarInclusionProven failed.
+ ErrSidecarInclusionProofInvalid = errors.New("sidecar inclusion proof verification failed")
+ // ErrSidecarKzgProofInvalid means RequireSidecarKzgProofVerified failed.
+ ErrSidecarKzgProofInvalid = errors.New("sidecar kzg commitment proof verification failed")
+ // ErrSidecarUnexpectedProposer means RequireSidecarProposerExpected failed.
+ ErrSidecarUnexpectedProposer = errors.New("sidecar was not proposed by the expected proposer_index")
+ // ErrMissingVerification indicates that the given verification function was never performed on the value.
+ ErrMissingVerification = errors.New("verification was not performed for requirement")
+ )
// VerificationMultiError is a custom error that can be used to access individual verification failures.
type VerificationMultiError struct {
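
Each requirement now maps to one exported sentinel error, so callers can branch with errors.Is rather than string matching. A sketch under the assumption that verifier methods return (possibly wrapped) sentinels from the block above:

	if err := v.SidecarProposerExpected(ctx); errors.Is(err, ErrSidecarUnexpectedProposer) {
		// reject the gossip message and downscore the peer
	}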

View File

@@ -0,0 +1,5 @@
+ package verification
+ import "github.com/sirupsen/logrus"
+ var log = logrus.WithField("prefix", "verification")
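
This gives the package the prefixed-logger convention used elsewhere in Prysm: every entry emitted through log is tagged prefix=verification. A hedged example (root and the fmt import are assumed):

	log.WithField("blockRoot", fmt.Sprintf("%#x", root)).Warn("sidecar failed verification")
	// => level=warning msg="sidecar failed verification" prefix=verification blockRoot=0x...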

View File

@@ -39,7 +39,7 @@ func TestResultList(t *testing.T) {
func TestExportedBlobSanityCheck(t *testing.T) {
// make sure all requirement lists contain the bare minimum checks
sanity := []Requirement{RequireValidProposerSignature, RequireSidecarKzgProofVerified, RequireBlobIndexInBounds, RequireSidecarInclusionProven}
- reqs := [][]Requirement{GossipSidecarRequirements, SpectestSidecarRequirements, InitsyncSidecarRequirements, BackfillSidecarRequirements, PendingQueueSidecarRequirements}
+ reqs := [][]Requirement{GossipBlobSidecarRequirements, SpectestBlobSidecarRequirements, InitsyncBlobSidecarRequirements, BackfillBlobSidecarRequirements, PendingQueueBlobSidecarRequirements}
for i := range reqs {
r := reqs[i]
reqMap := make(map[Requirement]struct{})
@@ -51,13 +51,13 @@ func TestExportedBlobSanityCheck(t *testing.T) {
require.Equal(t, true, ok)
}
}
- require.DeepEqual(t, allSidecarRequirements, GossipSidecarRequirements)
+ require.DeepEqual(t, allBlobSidecarRequirements, GossipBlobSidecarRequirements)
}
func TestAllBlobRequirementsHaveStrings(t *testing.T) {
var derp Requirement = math.MaxInt
require.Equal(t, unknownRequirementName, derp.String())
- for i := range allSidecarRequirements {
- require.NotEqual(t, unknownRequirementName, allSidecarRequirements[i].String())
+ for i := range allBlobSidecarRequirements {
+ require.NotEqual(t, unknownRequirementName, allBlobSidecarRequirements[i].String())
}
}

View File

@@ -37,6 +37,7 @@ const (
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
SyncCommitteeBranchDepthElectra = 6 // SyncCommitteeBranchDepthElectra defines the number of leaves in a merkle proof of a sync committee.
FinalityBranchDepth = 6 // FinalityBranchDepth defines the number of leaves in a merkle proof of the finalized checkpoint root.
+ FinalityBranchDepthElectra = 7 // FinalityBranchDepthElectra defines the number of leaves in a merkle proof of the finalized checkpoint root.
PendingDepositsLimit = 134217728 // Maximum number of pending balance deposits in the beacon state.
PendingPartialWithdrawalsLimit = 134217728 // Maximum number of pending partial withdrawals in the beacon state.
PendingConsolidationsLimit = 262144 // Maximum number of pending consolidations in the beacon state.
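
These list limits are SSZ capacities chosen as powers of two, which keeps the depth of the backing merkle tree a whole number (log2 of the limit). A standalone check of the arithmetic:

	package main

	import (
		"fmt"
		"math/bits"
	)

	func main() {
		for _, limit := range []uint64{134217728, 262144} {
			fmt.Printf("%d = 2^%d\n", limit, bits.TrailingZeros64(limit))
		}
		// Output:
		// 134217728 = 2^27
		// 262144 = 2^18
	}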

View File

@@ -37,6 +37,7 @@ const (
SyncCommitteeBranchDepth = 5 // SyncCommitteeBranchDepth defines the number of leaves in a merkle proof of a sync committee.
SyncCommitteeBranchDepthElectra = 6 // SyncCommitteeBranchDepthElectra defines the number of leaves in a merkle proof of a sync committee.
FinalityBranchDepth = 6 // FinalityBranchDepth defines the number of leaves in a merkle proof of the finalized checkpoint root.
+ FinalityBranchDepthElectra = 7 // FinalityBranchDepthElectra defines the number of leaves in a merkle proof of the finalized checkpoint root.
PendingDepositsLimit = 134217728 // Maximum number of pending balance deposits in the beacon state.
PendingPartialWithdrawalsLimit = 64 // Maximum number of pending partial withdrawals in the beacon state.
PendingConsolidationsLimit = 64 // Maximum number of pending consolidations in the beacon state.

View File

@@ -166,6 +166,7 @@ type BeaconChainConfig struct {
DenebForkEpoch primitives.Epoch `yaml:"DENEB_FORK_EPOCH" spec:"true"` // DenebForkEpoch is used to represent the assigned fork epoch for deneb.
ElectraForkVersion []byte `yaml:"ELECTRA_FORK_VERSION" spec:"true"` // ElectraForkVersion is used to represent the fork version for electra.
ElectraForkEpoch primitives.Epoch `yaml:"ELECTRA_FORK_EPOCH" spec:"true"` // ElectraForkEpoch is used to represent the assigned fork epoch for electra.
+ Eip7594ForkEpoch primitives.Epoch `yaml:"EIP7594_FORK_EPOCH" spec:"true"` // EIP7594ForkEpoch is used to represent the assigned fork epoch for peer das.
ForkVersionSchedule map[[fieldparams.VersionLength]byte]primitives.Epoch // Schedule of fork epochs by version.
ForkVersionNames map[[fieldparams.VersionLength]byte]string // Human-readable names of fork versions.
@@ -255,6 +256,13 @@ type BeaconChainConfig struct {
MaxDepositRequestsPerPayload uint64 `yaml:"MAX_DEPOSIT_REQUESTS_PER_PAYLOAD" spec:"true"` // MaxDepositRequestsPerPayload is the maximum number of execution layer deposits in each payload
UnsetDepositRequestsStartIndex uint64 `yaml:"UNSET_DEPOSIT_REQUESTS_START_INDEX" spec:"true"` // UnsetDepositRequestsStartIndex is used to check the start index for eip6110
+ // PeerDAS Values
+ SamplesPerSlot uint64 `yaml:"SAMPLES_PER_SLOT"` // SamplesPerSlot refers to the number of random samples a node queries per slot.
+ CustodyRequirement uint64 `yaml:"CUSTODY_REQUIREMENT"` // CustodyRequirement refers to the minimum number of subnets a peer must custody and serve samples from.
+ MinEpochsForDataColumnSidecarsRequest primitives.Epoch `yaml:"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS"` // MinEpochsForDataColumnSidecarsRequest is the minimum number of epochs the node will keep the data columns for.
+ MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX" spec:"true"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
+ NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
// Networking Specific Parameters
GossipMaxSize uint64 `yaml:"GOSSIP_MAX_SIZE" spec:"true"` // GossipMaxSize is the maximum allowed size of uncompressed gossip messages.
MaxChunkSize uint64 `yaml:"MAX_CHUNK_SIZE" spec:"true"` // MaxChunkSize is the maximum allowed size of uncompressed req/resp chunked responses.
@@ -272,10 +280,6 @@ type BeaconChainConfig struct {
AttestationSubnetPrefixBits uint64 `yaml:"ATTESTATION_SUBNET_PREFIX_BITS" spec:"true"` // AttestationSubnetPrefixBits is defined as (ceillog2(ATTESTATION_SUBNET_COUNT) + ATTESTATION_SUBNET_EXTRA_BITS).
SubnetsPerNode uint64 `yaml:"SUBNETS_PER_NODE" spec:"true"` // SubnetsPerNode is the number of long-lived subnets a beacon node should be subscribed to.
NodeIdBits uint64 `yaml:"NODE_ID_BITS" spec:"true"` // NodeIdBits defines the bit length of a node id.
- // PeerDAS
- NumberOfColumns uint64 `yaml:"NUMBER_OF_COLUMNS" spec:"true"` // NumberOfColumns in the extended data matrix.
- MaxCellsInExtendedMatrix uint64 `yaml:"MAX_CELLS_IN_EXTENDED_MATRIX" spec:"true"` // MaxCellsInExtendedMatrix is the full data of one-dimensional erasure coding extended blobs (in row major format).
}
// InitializeForkSchedule initializes the schedules forks baked into the config.
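
The new PeerDAS knobs combine with long-standing config fields in the obvious ways; for example, MinEpochsForDataColumnSidecarsRequest translates to a wall-clock retention window. A hedged sketch (dataColumnRetention is a hypothetical helper; SlotsPerEpoch and SecondsPerSlot are existing BeaconChainConfig fields):

	func dataColumnRetention(cfg *BeaconChainConfig) time.Duration {
		epochs := uint64(cfg.MinEpochsForDataColumnSidecarsRequest)
		seconds := epochs * uint64(cfg.SlotsPerEpoch) * cfg.SecondsPerSlot
		return time.Duration(seconds) * time.Second
	}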
@@ -360,6 +364,12 @@ func DenebEnabled() bool {
return BeaconConfig().DenebForkEpoch < math.MaxUint64
}
+ // PeerDASEnabled centralizes the check to determine if code paths
+ // that are specific to peerdas should be allowed to execute.
+ func PeerDASEnabled() bool {
+ return BeaconConfig().Eip7594ForkEpoch < math.MaxUint64
+ }
// WithinDAPeriod checks if the block epoch is within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS of the given current epoch.
func WithinDAPeriod(block, current primitives.Epoch) bool {
return block+BeaconConfig().MinEpochsForBlobsSidecarsRequest >= current
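
A hedged sketch of how the two gates compose at a call site (blockEpoch and currentEpoch are assumed names; the functions live in the config package shown here):

	if PeerDASEnabled() && WithinDAPeriod(blockEpoch, currentEpoch) {
		// safe to request / serve sidecars for this block
	}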

View File

@@ -25,12 +25,10 @@ import (
// IMPORTANT: Use one field per line and sort these alphabetically to reduce conflicts.
var placeholderFields = []string{
"BYTES_PER_LOGS_BLOOM", // Compile time constant on ExecutionPayload.logs_bloom.
"CUSTODY_REQUIREMENT",
"EIP6110_FORK_EPOCH",
"EIP6110_FORK_VERSION",
"EIP7002_FORK_EPOCH",
"EIP7002_FORK_VERSION",
"EIP7594_FORK_EPOCH",
"EIP7594_FORK_VERSION",
"EIP7732_FORK_EPOCH",
"EIP7732_FORK_VERSION",
@@ -43,7 +41,6 @@ var placeholderFields = []string{
"MAX_REQUEST_PAYLOADS", // Compile time constant on BeaconBlockBody.ExecutionRequests
"MAX_TRANSACTIONS_PER_PAYLOAD", // Compile time constant on ExecutionPayload.transactions.
"REORG_HEAD_WEIGHT_THRESHOLD",
"SAMPLES_PER_SLOT",
"TARGET_NUMBER_OF_PEERS",
"UPDATE_TIMEOUT",
"WHISK_EPOCHS_PER_SHUFFLING_PHASE",

Some files were not shown because too many files have changed in this diff.