Compare commits

...

26 Commits

Author SHA1 Message Date
Manu NALEPA
1aab9ef91c When possible, replace time.Sleep with helpers.Sleep. 2025-10-20 11:42:21 +02:00
Manu NALEPA
b24f1369a3 randomPeer: Use the helpers.Sleep function. 2025-10-20 11:12:36 +02:00
Manu NALEPA
d3497576a5 Helpers: Define Sleep function. 2025-10-20 11:11:52 +02:00
Manu NALEPA
02cf25e32b Rename beaconConfig ==> cfg.
Follow up of https://github.com/OffchainLabs/prysm/pull/15880#discussion_r2436826215
2025-10-20 10:56:37 +02:00
terence
64ec665890 Fix sync committee subscription to use subnet indices instead of committee indices (#15885)
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-17 19:03:53 +00:00
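The fix above subscribes by sync committee subnet index rather than by the committee index carried in duties. A hedged sketch of that mapping, following compute_subnets_for_sync_committee from the consensus specs (the constants and helper name below are illustrative, not Prysm's code):

package main

import "fmt"

const (
	syncCommitteeSize        = 512 // SYNC_COMMITTEE_SIZE (mainnet)
	syncCommitteeSubnetCount = 4   // SYNC_COMMITTEE_SUBNET_COUNT
)

// subnetsForPositions turns a validator's positions within the current sync
// committee into the gossip subnet indices to subscribe to: each contiguous
// block of 128 committee positions maps to one of the 4 subnets.
func subnetsForPositions(positions []uint64) map[uint64]struct{} {
	subnets := make(map[uint64]struct{})
	membersPerSubnet := uint64(syncCommitteeSize / syncCommitteeSubnetCount) // 128
	for _, pos := range positions {
		subnets[pos/membersPerSubnet] = struct{}{}
	}
	return subnets
}

func main() {
	// A validator at positions 5 and 300 of the sync committee subscribes to
	// subnets 0 and 2; the committee index from duties plays no role here.
	fmt.Println(subnetsForPositions([]uint64{5, 300}))
}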
kasey
fdb06ea461 clear genesis state file when --(force-)clear-db is specified (#15883)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-17 14:03:15 +00:00
Manu NALEPA
0486631d73 Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (mostly because the file is truncated). (#15881)
* `VerifiedRODataColumnError`: Don't reuse Blob error.

* `VerifiedRODataColumnFromDisk`: Use a specific error when the count of read bytes is lower than expected.

* Add changelog.
2025-10-16 21:49:11 +00:00
Manu NALEPA
47764696ce randomPeer: Return if the context is cancelled when waiting for peers. (#15876)
* `randomPeer`: Return if the context is cancelled when waiting for peers.

* `randomPeer`: Refactor to reduce indentation.
2025-10-16 21:13:11 +00:00
Manu NALEPA
b2d350b988 Correctly advertise (in ENR and metadata) attestation subnets when using --subscribe-all-subnets. (#15880) 2025-10-16 21:12:00 +00:00
kasey
41e7607092 Decrease att batch deadline to 5ms for faster net prop (#15882)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-10-16 17:30:59 +00:00
Jun Song
cd429dc253 SSZ-QL: Access n-th element in List/Vector. (#15767)
* Add basic parsing feature for accessing by index

* Add more tests for 2d byte vector

* Add List case for access indexing

* Handle 2D bytes List example

* Fix misleading cases for CalculateOffsetAndLength

* Use elementSizes[index] if it is the last path element

* Add variable_container_list field for mocking attester_slashings in BeaconBlockBody

* Remove redundant protobuf message

* Better documentation

* Changelog

* Fix `expectedSize` of `VariableTestContainer`: as we added `variable_container_list` here

* Apply reviews from Radek
2025-10-15 16:11:12 +00:00
phrwlk
5ced1125f2 fix: reject out-of-range attestation committee index (#15855)
* reject committee index >= committees_per_slot in unaggregated attestation validation

* Create phrwlk_fix-attestation-committee-index-bound.md

* add a unit test

* fix test

* fixing test

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-15 16:02:08 +00:00
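The check introduced here is a plain bound test: an unaggregated attestation advertising a committee index at or above the number of committees for its slot can never be valid. A minimal sketch under assumed names (not Prysm's actual validation pipeline):

package main

import (
	"errors"
	"fmt"
)

var errCommitteeIndexOutOfRange = errors.New("committee index out of range")

// validateCommitteeIndex rejects a committee index that is not in
// [0, committeesPerSlot), mirroring the rule described in #15855.
func validateCommitteeIndex(committeeIndex, committeesPerSlot uint64) error {
	if committeeIndex >= committeesPerSlot {
		return fmt.Errorf("%w: index %d with %d committees per slot",
			errCommitteeIndexOutOfRange, committeeIndex, committeesPerSlot)
	}
	return nil
}

func main() {
	fmt.Println(validateCommitteeIndex(64, 64)) // rejected: only indices 0..63 exist
	fmt.Println(validateCommitteeIndex(3, 64))  // <nil>
}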
Potuz
f67ca6ae5e Fix epoch transition on head event (#15871)
h/t to the NuConstruct team for reporting this. The event feed
incorrectly sends the epoch transition flag on head events when the first
slot of the epoch is missing (or when a reorg crosses an epoch transition).

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-15 15:13:49 +00:00
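A worked example of the bug, assuming 32 slots per epoch: checking only whether the new head sits on the first slot of an epoch misses transitions whenever that slot is empty, whereas comparing the head's epoch with its parent's does not.

package main

import "fmt"

const slotsPerEpoch = 32

func toEpoch(slot uint64) uint64 { return slot / slotsPerEpoch }

func main() {
	// Slot 32 (first slot of epoch 1) is missed; the new head lands on slot 33
	// and its parent is at slot 31.
	parentSlot, headSlot := uint64(31), uint64(33)

	oldFlag := headSlot%slotsPerEpoch == 0             // false: transition not reported
	newFlag := toEpoch(headSlot) > toEpoch(parentSlot) // true: transition reported

	fmt.Println(oldFlag, newFlag) // false true
}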
Manu NALEPA
9742333f68 WithDataColumnRetentionEpochs: Use dataColumnRetentionEpoch instead of blobColumnRetentionEpoch. (#15872) 2025-10-15 14:44:49 +00:00
Manu NALEPA
c811fadf33 VerifyDataColumnSidecar: Check that there are not too many commitments. (#15859)
* `VerifyDataColumnSidecar`: Check that there are not too many commitments.

* `TestVerifyDataColumnSidecar`: Refactor using test cases.

* Add changelog.
2025-10-15 12:18:04 +00:00
Manu NALEPA
55b9448d41 dataColumnSidecarsByRangeRPCHandler: Gracefully close the stream if no data to return. (#15866)
* `TestDataColumnSidecarsByRangeRPCHandler`: Remove commented code.

* Remove double import

* `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.

* Tests: Change `require` to `assert` in goroutines in tests.

https://pkg.go.dev/github.com/stretchr/testify/require#hdr-Assertions

* Add changelog.
2025-10-15 12:16:05 +00:00
Manu NALEPA
10f8d8c26e Fix /eth/v1/beacon/blob_sidecars/ beacon API if the fulu fork epoch is set to the far future epoch. (#15867)
* Fix `/eth/v1/beacon/blob_sidecars/` beacon API if the fulu fork epoch is set to the far future epoch.

* Fix Terence's comment.

* adding a test

---------

Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-10-14 21:38:12 +00:00
Jun Song
4eab41ea4c SSZ-QL: use fastssz-generated SizeSSZ method & clarify Size method (#15864)
* Add SizeSSZ as a member of SSZObject

* Temporarily rename dereferencePointer function

* Fix analyzeType: use reflect.Value for analyzing

* Fix PopulateVariableLengthInfo: change function signature & reset pointer

* Remove Container arm for Size function as it'll be handled in the previous branch

* Remove OffsetBytes function in listInfo

* Refactor and document codes

* Remove misleading "fixedSize" concept & Add Uint8...64 SSZTypes

* Add size testing

* Move TestSSZObject_Batch and rename it as TestHashTreeRoot

* Changelog :)

* Rename endOffset to fixedOffset

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-14 17:33:52 +00:00
Radosław Kapka
683608e34a Improve returning individual message errors from Beacon API (#15835)
* Improve returning individual message errors from Beacon API

* changelog <3

* fix test

* add debug logs

* batch broadcast errors

* use logrus fields

* capitalize log messages
2025-10-14 15:22:00 +00:00
Manu NALEPA
fbbf2a1404 HasAtLeastOneIndex: Check the index is not too high. (#15865) 2025-10-14 14:39:38 +00:00
Potuz
82f556c50f Remove redundant check (#15844)
* Remove redundant check

* changelog

* fix gazelle
2025-10-14 12:39:19 +00:00
Radosław Kapka
c88aa77ac1 Display non-JSON error messages (#15860)
* Display non-JSON error messages

* changelog <3
2025-10-14 12:08:21 +00:00
fernantho
0568bec935 SSZ-QL: use FastSSZ-generated HashTreeRoot through SSZObject in sszInfo (#15805)
* stored CL object to enable the usage of fastssz's HashTreeRoot(). added basic test

* refactorization - using interfaces instead of storing original object

* added tests covering ssz custom types

* renamed hash_tree_root to ssz_interface as it contains MarshalSSZ and UnmarshalSSZ functions

* run gazelle

* renamed test and improved comments

* refactored test and extended it to MarshalSSZ and UnmarshalSSZ

* added changelog

* updated comment

* Changed SSZIface name to SSZObject. Removed MarshalSSZ and UnmarshalSSZ function signatures from the interface as they are not used yet. Refactored tests.

* renamed file ssz_interface.go to ssz_object.go. merged tests from ssz_interface_test.go into query_test.go.
reordered the source SSZObject field in the sszInfo struct

* restricted the SSZObject interface to the HashTreeRoot() function, the only one needed so far

* run gazelle :)

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-10-13 21:39:15 +00:00
Potuz
e463bcd1e1 Mark block as invalid in gossip if it fails signature check (#15847)
* Mark block as invalid in gossip if it fails signature check

* Add tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-10-13 20:29:27 +00:00
terence
5f8eb69201 Add proper handling for submit blind block 502 error (#15848)
* Add proper handling for builder relay 502 BadGateway errors

* James feedback

* Change wording
2025-10-13 18:36:06 +00:00
Marco Munizaga
4b98451649 fix allocation size of proofs in ComputeCellsAndProofsFromStructured (#15809)
* fix allocation size of proofs in ComputeCellsAndProofsFromStructured

the preallocated slice for KZG Proofs was 48x bigger than it needed to
be.

* changelog

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2025-10-13 17:16:02 +00:00
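The old capacity multiplied the column count by the proof size in bytes even though the slice's element type is already a fixed-size proof, so roughly 48x more memory than needed was reserved per blob. An illustrative sketch of the difference, assuming 128 columns and 48-byte proofs:

package main

import "fmt"

type proof [48]byte // stand-in for kzg.Proof

func main() {
	const (
		numberOfColumns = 128
		bytesPerProof   = 48
	)

	oversized := make([]proof, 0, numberOfColumns*bytesPerProof) // room for 6144 proofs (~288 KiB)
	right := make([]proof, 0, numberOfColumns)                   // room for 128 proofs (~6 KiB)

	fmt.Println(cap(oversized), cap(right)) // 6144 128
}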
143 changed files with 2143 additions and 670 deletions

View File

@@ -284,7 +284,7 @@ func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*stru
if resp.StatusCode != http.StatusOK {
decoder := json.NewDecoder(resp.Body)
decoder.DisallowUnknownFields()
errorJson := &server.IndexedVerificationFailureError{}
errorJson := &server.IndexedErrorContainer{}
if err := decoder.Decode(errorJson); err != nil {
return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
}

View File

@@ -726,6 +726,12 @@ func unexpectedStatusErr(response *http.Response, expected int) error {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotOK, errMessage.Message)
case http.StatusBadGateway:
log.WithError(ErrBadGateway).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrBadGateway, errMessage.Message)
default:
log.WithError(ErrNotOK).Debug(msg)
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))

View File

@@ -12,7 +12,6 @@ import (
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -22,6 +21,7 @@ import (
eth "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/go-bitfield"
log "github.com/sirupsen/logrus"
)

View File

@@ -21,3 +21,4 @@ var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")

View File

@@ -6,6 +6,11 @@ import (
"strings"
)
var (
ErrIndexedValidationFail = "One or more messages failed validation"
ErrIndexedBroadcastFail = "One or more messages failed broadcast"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
@@ -29,19 +34,38 @@ func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
// IndexedErrorContainer wraps a collection of indexed errors.
type IndexedErrorContainer struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedError `json:"failures"`
}
func (e *IndexedVerificationFailureError) StatusCode() int {
func (e *IndexedErrorContainer) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
// IndexedError represents an issue when processing a single indexed object e.g. an item in an array.
type IndexedError struct {
Index int `json:"index"`
Message string `json:"message"`
}
// BroadcastFailedError represents an error scenario where broadcasting a published message failed.
type BroadcastFailedError struct {
msg string
err error
}
// NewBroadcastFailedError creates a new instance of BroadcastFailedError.
func NewBroadcastFailedError(msg string, err error) *BroadcastFailedError {
return &BroadcastFailedError{
msg: msg,
err: err,
}
}
// Error returns the underlying error message.
func (e *BroadcastFailedError) Error() string {
return fmt.Sprintf("could not broadcast %s: %s", e.msg, e.err.Error())
}

View File

@@ -346,13 +346,24 @@ func (s *Service) notifyNewHeadEvent(
if err != nil {
return errors.Wrap(err, "could not check if node is optimistically synced")
}
parentRoot, err := s.ParentRoot([32]byte(newHeadRoot))
if err != nil {
return errors.Wrap(err, "could not obtain parent root in forkchoice")
}
parentSlot, err := s.RecentBlockSlot(parentRoot)
if err != nil {
return errors.Wrap(err, "could not obtain parent slot in forkchoice")
}
epochTransition := slots.ToEpoch(newHeadSlot) > slots.ToEpoch(parentSlot)
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.NewHead,
Data: &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot,
State: newHeadStateRoot,
EpochTransition: slots.IsEpochStart(newHeadSlot),
EpochTransition: epochTransition,
PreviousDutyDependentRoot: previousDutyDependentRoot[:],
CurrentDutyDependentRoot: currentDutyDependentRoot[:],
ExecutionOptimistic: isOptimistic,

View File

@@ -162,6 +162,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 1, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), 1, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
@@ -196,6 +199,9 @@ func Test_notifyNewHeadEvent(t *testing.T) {
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 0, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
err = srv.notifyNewHeadEvent(t.Context(), epoch2Start, bState, newHeadStateRoot[:], newHeadRoot[:])
require.NoError(t, err)
events := notifier.ReceivedEvents()
@@ -213,6 +219,37 @@ func Test_notifyNewHeadEvent(t *testing.T) {
}
require.DeepSSZEqual(t, wanted, eventHead)
})
t.Run("epoch transition", func(t *testing.T) {
bState, _ := util.DeterministicGenesisState(t, 10)
srv := testServiceWithDB(t)
srv.SetGenesisTime(time.Now())
notifier := srv.cfg.StateNotifier.(*mock.MockStateNotifier)
srv.originBlockRoot = [32]byte{1}
st, blk, err := prepareForkchoiceState(t.Context(), 0, [32]byte{}, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadStateRoot := [32]byte{2}
newHeadRoot := [32]byte{3}
st, blk, err = prepareForkchoiceState(t.Context(), 32, newHeadRoot, [32]byte{}, [32]byte{}, &ethpb.Checkpoint{}, &ethpb.Checkpoint{})
require.NoError(t, err)
require.NoError(t, srv.cfg.ForkChoiceStore.InsertNode(t.Context(), st, blk))
newHeadSlot := params.BeaconConfig().SlotsPerEpoch
require.NoError(t, srv.notifyNewHeadEvent(t.Context(), newHeadSlot, bState, newHeadStateRoot[:], newHeadRoot[:]))
events := notifier.ReceivedEvents()
require.Equal(t, 1, len(events))
eventHead, ok := events[0].Data.(*ethpbv1.EventHead)
require.Equal(t, true, ok)
wanted := &ethpbv1.EventHead{
Slot: newHeadSlot,
Block: newHeadRoot[:],
State: newHeadStateRoot[:],
EpochTransition: true,
PreviousDutyDependentRoot: params.BeaconConfig().ZeroHash[:],
CurrentDutyDependentRoot: srv.originBlockRoot[:],
}
require.DeepSSZEqual(t, wanted, eventHead)
})
}
func TestRetrieveHead_ReadOnly(t *testing.T) {

View File

@@ -3302,7 +3302,6 @@ func Test_postBlockProcess_EventSending(t *testing.T) {
}
}
func setupLightClientTestRequirements(ctx context.Context, t *testing.T, s *Service, v int, options ...util.LightClientOption) (*util.TestLightClient, *postBlockProcessConfig) {
var l *util.TestLightClient
switch v {

View File

@@ -79,7 +79,7 @@ func (s *Service) spawnProcessAttestationsRoutine() {
log.WithError(err).Error("Giving up waiting for genesis time")
return
}
time.Sleep(1 * time.Second)
helpers.Sleep(s.ctx, 1*time.Second)
}
log.Warn("Genesis time received, now available to process attestations")
}

View File

@@ -472,8 +472,8 @@ func (s *Service) removeStartupState() {
func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot, uint64, error) {
isSubscribedToAllDataSubnets := flags.Get().SubscribeAllDataSubnets
beaconConfig := params.BeaconConfig()
custodyRequirement := beaconConfig.CustodyRequirement
cfg := params.BeaconConfig()
custodyRequirement := cfg.CustodyRequirement
// Check if the node was previously subscribed to all data subnets, and if so,
// store the new status accordingly.
@@ -493,7 +493,7 @@ func (s *Service) updateCustodyInfoInDB(slot primitives.Slot) (primitives.Slot,
// Compute the custody group count.
custodyGroupCount := custodyRequirement
if isSubscribedToAllDataSubnets {
custodyGroupCount = beaconConfig.NumberOfColumns
custodyGroupCount = cfg.NumberOfColumns
}
// Safely compute the fulu fork slot.
@@ -536,11 +536,11 @@ func spawnCountdownIfPreGenesis(ctx context.Context, genesisTime time.Time, db d
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -6,3 +6,4 @@ var errNilSignedWithdrawalMessage = errors.New("nil SignedBLSToExecutionChange m
var errNilWithdrawalMessage = errors.New("nil BLSToExecutionChange message")
var errInvalidBLSPrefix = errors.New("withdrawal credential prefix is not a BLS prefix")
var errInvalidWithdrawalCredentials = errors.New("withdrawal credentials do not match")
var ErrInvalidSignature = errors.New("invalid signature")

View File

@@ -114,9 +114,12 @@ func VerifyBlockSignatureUsingCurrentFork(beaconState state.ReadOnlyBeaconState,
}
proposerPubKey := proposer.PublicKey
sig := blk.Signature()
return signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
if err := signing.VerifyBlockSigningRoot(proposerPubKey, sig[:], domain, func() ([32]byte, error) {
return blkRoot, nil
})
}); err != nil {
return ErrInvalidSignature
}
return nil
}
// BlockSignatureBatch retrieves the block signature batch from the provided block and its corresponding state.
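Returning a sentinel lets callers separate a provably bad signature from other verification failures. A hedged sketch of how a caller might branch on it (the gossip decision values are illustrative, not Prysm's actual validator):

package example

import (
	"errors"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
)

// classifySignatureError maps a verification error to a gossip decision:
// a definitively invalid signature marks the block invalid, while any other
// failure is treated as a soft error.
func classifySignatureError(err error) string {
	switch {
	case err == nil:
		return "accept"
	case errors.Is(err, blocks.ErrInvalidSignature):
		return "reject" // mark block invalid in gossip
	default:
		return "ignore" // transient or unrelated failure
	}
}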

View File

@@ -89,3 +89,36 @@ func TestVerifyBlockSignatureUsingCurrentFork(t *testing.T) {
require.NoError(t, err)
assert.NoError(t, blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot))
}
func TestVerifyBlockSignatureUsingCurrentFork_InvalidSignature(t *testing.T) {
params.SetupTestConfigCleanup(t)
bCfg := params.BeaconConfig()
bCfg.AltairForkEpoch = 100
bCfg.ForkVersionSchedule[bytesutil.ToBytes4(bCfg.AltairForkVersion)] = 100
params.OverrideBeaconConfig(bCfg)
bState, keys := util.DeterministicGenesisState(t, 100)
altairBlk := util.NewBeaconBlockAltair()
altairBlk.Block.ProposerIndex = 0
altairBlk.Block.Slot = params.BeaconConfig().SlotsPerEpoch * 100
blkRoot, err := altairBlk.Block.HashTreeRoot()
assert.NoError(t, err)
// Sign with wrong key (proposer index 0, but using key 1)
fData := &ethpb.Fork{
Epoch: 100,
CurrentVersion: params.BeaconConfig().AltairForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
}
domain, err := signing.Domain(fData, 100, params.BeaconConfig().DomainBeaconProposer, bState.GenesisValidatorsRoot())
assert.NoError(t, err)
rt, err := signing.ComputeSigningRoot(altairBlk.Block, domain)
assert.NoError(t, err)
wrongSig := keys[1].Sign(rt[:]).Marshal()
altairBlk.Signature = wrongSig
wsb, err := consensusblocks.NewSignedBeaconBlock(altairBlk)
require.NoError(t, err)
err = blocks.VerifyBlockSignatureUsingCurrentFork(bState, wsb, blkRoot)
require.ErrorIs(t, err, blocks.ErrInvalidSignature, "Expected ErrInvalidSignature for invalid signature")
}

View File

@@ -14,6 +14,7 @@ go_library(
"rewards_penalties.go",
"shuffle.go",
"sync_committee.go",
"time.go",
"validator_churn.go",
"validators.go",
"weak_subjectivity.go",
@@ -61,6 +62,7 @@ go_test(
"rewards_penalties_test.go",
"shuffle_test.go",
"sync_committee_test.go",
"time_test.go",
"validator_churn_test.go",
"validators_test.go",
"weak_subjectivity_test.go",

View File

@@ -0,0 +1,14 @@
package helpers
import (
"context"
"time"
)
// Sleep sleeps for the given duration or until the context is done.
func Sleep(ctx context.Context, duration time.Duration) {
select {
case <-ctx.Done():
case <-time.After(duration):
}
}
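A minimal usage sketch (the polling loop and names are illustrative): callers that previously blocked in time.Sleep now return promptly on shutdown because the helper selects on ctx.Done().

package example

import (
	"context"
	"time"

	"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
)

// pollUntilDone retries poll with a fixed back-off. Cancelling ctx interrupts
// the wait immediately instead of letting the back-off run out.
func pollUntilDone(ctx context.Context, poll func() bool) {
	for !poll() {
		if ctx.Err() != nil {
			return
		}
		helpers.Sleep(ctx, time.Second)
	}
}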

View File

@@ -0,0 +1,22 @@
package helpers_test
import (
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
)
func TestSleep(t *testing.T) {
t.Run("context cancelled", func(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
cancel()
helpers.Sleep(ctx, 1*time.Hour)
})
t.Run("Nominal", func(t *testing.T) {
helpers.Sleep(t.Context(), 0)
})
}

View File

@@ -401,7 +401,7 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
return 0, errors.New("empty active indices list")
}
hashFunc := hash.CustomSHA256Hasher()
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
seedBuffer := make([]byte, len(seed)+8)
copy(seedBuffer, seed[:])
@@ -426,14 +426,14 @@ func ComputeProposerIndex(bState state.ReadOnlyBeaconState, activeIndices []prim
offset := (i % 16) * 2
randomValue := uint64(randomBytes[offset]) | uint64(randomBytes[offset+1])<<8
if effectiveBal*fieldparams.MaxRandomValueElectra >= beaconConfig.MaxEffectiveBalanceElectra*randomValue {
if effectiveBal*fieldparams.MaxRandomValueElectra >= cfg.MaxEffectiveBalanceElectra*randomValue {
return candidateIndex, nil
}
} else {
binary.LittleEndian.PutUint64(seedBuffer[len(seed):], i/32)
randomByte := hashFunc(seedBuffer)[i%32]
if effectiveBal*fieldparams.MaxRandomByte >= beaconConfig.MaxEffectiveBalance*uint64(randomByte) {
if effectiveBal*fieldparams.MaxRandomByte >= cfg.MaxEffectiveBalance*uint64(randomByte) {
return candidateIndex, nil
}
}

View File

@@ -89,14 +89,14 @@ func CustodyGroups(nodeId enode.ID, custodyGroupCount uint64) ([]uint64, error)
// ComputeColumnsForCustodyGroup computes the columns for a given custody group.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#compute_columns_for_custody_group
func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if custodyGroup >= numberOfCustodyGroups {
return nil, ErrCustodyGroupTooLarge
}
numberOfColumns := beaconConfig.NumberOfColumns
numberOfColumns := cfg.NumberOfColumns
columnsPerGroup := numberOfColumns / numberOfCustodyGroups
@@ -112,9 +112,9 @@ func ComputeColumnsForCustodyGroup(custodyGroup uint64) ([]uint64, error) {
// ComputeCustodyGroupForColumn computes the custody group for a given column.
// It is the reciprocal function of ComputeColumnsForCustodyGroup.
func ComputeCustodyGroupForColumn(columnIndex uint64) (uint64, error) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
if columnIndex >= numberOfColumns {
return 0, ErrIndexTooLarge

View File

@@ -43,6 +43,13 @@ func VerifyDataColumnSidecar(sidecar blocks.RODataColumn) error {
return ErrNoKzgCommitments
}
// A sidecar with more commitments than the max blob count for this block is invalid.
slot := sidecar.Slot()
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if len(sidecar.KzgCommitments) > maxBlobsPerBlock {
return ErrTooManyCommitments
}
// The column length must be equal to the number of commitments/proofs.
if len(sidecar.Column) != len(sidecar.KzgCommitments) || len(sidecar.Column) != len(sidecar.KzgProofs) {
return ErrMismatchLength

View File

@@ -18,38 +18,46 @@ import (
)
func TestVerifyDataColumnSidecar(t *testing.T) {
t.Run("index too large", func(t *testing.T) {
roSidecar := createTestSidecar(t, 1_000_000, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrIndexTooLarge)
})
testCases := []struct {
name string
index uint64
blobCount int
commitmentCount int
proofCount int
maxBlobsPerBlock uint64
expectedError error
}{
{name: "index too large", index: 1_000_000, expectedError: peerdas.ErrIndexTooLarge},
{name: "no commitments", expectedError: peerdas.ErrNoKzgCommitments},
{name: "too many commitments", blobCount: 10, commitmentCount: 10, proofCount: 10, maxBlobsPerBlock: 2, expectedError: peerdas.ErrTooManyCommitments},
{name: "commitments size mismatch", commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "proofs size mismatch", blobCount: 1, commitmentCount: 1, maxBlobsPerBlock: 1, expectedError: peerdas.ErrMismatchLength},
{name: "nominal", blobCount: 1, commitmentCount: 1, proofCount: 1, maxBlobsPerBlock: 1, expectedError: nil},
}
t.Run("no commitments", func(t *testing.T) {
roSidecar := createTestSidecar(t, 0, nil, nil, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrNoKzgCommitments)
})
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: tc.maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
t.Run("KZG commitments size mismatch", func(t *testing.T) {
kzgCommitments := make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, nil, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
column := make([][]byte, tc.blobCount)
kzgCommitments := make([][]byte, tc.commitmentCount)
kzgProof := make([][]byte, tc.proofCount)
t.Run("KZG proofs size mismatch", func(t *testing.T) {
column, kzgCommitments := make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, nil)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.ErrorIs(t, err, peerdas.ErrMismatchLength)
})
roSidecar := createTestSidecar(t, tc.index, column, kzgCommitments, kzgProof)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
t.Run("nominal", func(t *testing.T) {
column, kzgCommitments, kzgProofs := make([][]byte, 1), make([][]byte, 1), make([][]byte, 1)
roSidecar := createTestSidecar(t, 0, column, kzgCommitments, kzgProofs)
err := peerdas.VerifyDataColumnSidecar(roSidecar)
require.NoError(t, err)
})
if tc.expectedError != nil {
require.ErrorIs(t, err, tc.expectedError)
return
}
require.NoError(t, err)
})
}
}
func TestVerifyDataColumnSidecarKZGProofs(t *testing.T) {

View File

@@ -257,7 +257,7 @@ func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([
return nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, numberOfColumns*kzg.BytesPerProof)
kzgProofs := make([]kzg.Proof, 0, numberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, errors.New("wrong KZG proof size - should never happen")

View File

@@ -441,6 +441,7 @@ func TestComputeCellsAndProofsFromStructured(t *testing.T) {
for i := range blobCount {
require.Equal(t, len(expectedCellsAndProofs[i].Cells), len(actualCellsAndProofs[i].Cells))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), len(actualCellsAndProofs[i].Proofs))
require.Equal(t, len(expectedCellsAndProofs[i].Proofs), cap(actualCellsAndProofs[i].Proofs))
// Compare cells
for j, expectedCell := range expectedCellsAndProofs[i].Cells {

View File

@@ -84,10 +84,10 @@ func ValidatorsCustodyRequirement(state beaconState.ReadOnlyBeaconState, validat
totalNodeBalance += validator.EffectiveBalance()
}
beaconConfig := params.BeaconConfig()
numberOfCustodyGroups := beaconConfig.NumberOfCustodyGroups
validatorCustodyRequirement := beaconConfig.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := beaconConfig.BalancePerAdditionalCustodyGroup
cfg := params.BeaconConfig()
numberOfCustodyGroups := cfg.NumberOfCustodyGroups
validatorCustodyRequirement := cfg.ValidatorCustodyRequirement
balancePerAdditionalCustodyGroup := cfg.BalancePerAdditionalCustodyGroup
count := totalNodeBalance / balancePerAdditionalCustodyGroup
return min(max(count, validatorCustodyRequirement), numberOfCustodyGroups), nil

View File

@@ -196,7 +196,7 @@ func TestAltairCompatible(t *testing.T) {
}
func TestCanUpgradeTo(t *testing.T) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
outerTestCases := []struct {
name string
@@ -205,32 +205,32 @@ func TestCanUpgradeTo(t *testing.T) {
}{
{
name: "Altair",
forkEpoch: &beaconConfig.AltairForkEpoch,
forkEpoch: &cfg.AltairForkEpoch,
upgradeFunc: time.CanUpgradeToAltair,
},
{
name: "Bellatrix",
forkEpoch: &beaconConfig.BellatrixForkEpoch,
forkEpoch: &cfg.BellatrixForkEpoch,
upgradeFunc: time.CanUpgradeToBellatrix,
},
{
name: "Capella",
forkEpoch: &beaconConfig.CapellaForkEpoch,
forkEpoch: &cfg.CapellaForkEpoch,
upgradeFunc: time.CanUpgradeToCapella,
},
{
name: "Deneb",
forkEpoch: &beaconConfig.DenebForkEpoch,
forkEpoch: &cfg.DenebForkEpoch,
upgradeFunc: time.CanUpgradeToDeneb,
},
{
name: "Electra",
forkEpoch: &beaconConfig.ElectraForkEpoch,
forkEpoch: &cfg.ElectraForkEpoch,
upgradeFunc: time.CanUpgradeToElectra,
},
{
name: "Fulu",
forkEpoch: &beaconConfig.FuluForkEpoch,
forkEpoch: &cfg.FuluForkEpoch,
upgradeFunc: time.CanUpgradeToFulu,
},
}
@@ -238,7 +238,7 @@ func TestCanUpgradeTo(t *testing.T) {
for _, otc := range outerTestCases {
params.SetupTestConfigCleanup(t)
*otc.forkEpoch = 5
params.OverrideBeaconConfig(beaconConfig)
params.OverrideBeaconConfig(cfg)
innerTestCases := []struct {
name string

View File

@@ -1032,5 +1032,5 @@ func extractFileMetadata(path string) (*fileMetadata, error) {
// period computes the period of a given epoch.
func period(epoch primitives.Epoch) uint64 {
return uint64(epoch / params.BeaconConfig().MinEpochsForBlobsSidecarsRequest)
return uint64(epoch / params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest)
}

View File

@@ -35,8 +35,9 @@ func (s DataColumnStorageSummary) HasIndex(index uint64) bool {
// HasAtLeastOneIndex returns true if at least one of the DataColumnSidecars at the given indices is available in the filesystem.
func (s DataColumnStorageSummary) HasAtLeastOneIndex(indices []uint64) bool {
size := uint64(len(s.mask))
for _, index := range indices {
if s.mask[index] {
if index < size && s.mask[index] {
return true
}
}
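Indexing the fixed-size mask with an unchecked caller-supplied value would panic for indices at or above the column count; the index < size guard makes the lookup total. A tiny sketch of the guarded behaviour (a 128-column mask is assumed):

package main

import "fmt"

// hasAtLeastOne mirrors the guarded lookup: out-of-range indices are simply
// treated as "not stored" instead of causing an index-out-of-range panic.
func hasAtLeastOne(mask []bool, indices []uint64) bool {
	size := uint64(len(mask))
	for _, index := range indices {
		if index < size && mask[index] {
			return true
		}
	}
	return false
}

func main() {
	mask := make([]bool, 128)
	mask[1] = true
	fmt.Println(hasAtLeastOne(mask, []uint64{3, 1, 128})) // true; index 128 is ignored
	fmt.Println(hasAtLeastOne(mask, []uint64{3, 4, 128})) // false
}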

View File

@@ -25,11 +25,11 @@ func TestHasIndex(t *testing.T) {
func TestHasAtLeastOneIndex(t *testing.T) {
summary := NewDataColumnStorageSummary(0, [fieldparams.NumberOfColumns]bool{false, true})
hasAtLeastOneIndex := summary.HasAtLeastOneIndex([]uint64{3, 1, 2})
require.Equal(t, true, hasAtLeastOneIndex)
actual := summary.HasAtLeastOneIndex([]uint64{3, 1, fieldparams.NumberOfColumns, 2})
require.Equal(t, true, actual)
hasAtLeastOneIndex = summary.HasAtLeastOneIndex([]uint64{3, 4, 2})
require.Equal(t, false, hasAtLeastOneIndex)
actual = summary.HasAtLeastOneIndex([]uint64{3, 4, fieldparams.NumberOfColumns, 2})
require.Equal(t, false, actual)
}
func TestCount(t *testing.T) {

View File

@@ -126,7 +126,7 @@ func NewWarmedEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts
func NewEphemeralDataColumnStorageUsingFs(t testing.TB, fs afero.Fs, opts ...DataColumnStorageOption) *DataColumnStorage {
opts = append(opts,
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForBlobsSidecarsRequest),
WithDataColumnRetentionEpochs(params.BeaconConfig().MinEpochsForDataColumnSidecarsRequest),
WithDataColumnFs(fs),
)

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/config/params"
contracts "github.com/OffchainLabs/prysm/v6/contracts/deposit"
"github.com/OffchainLabs/prysm/v6/io/logs"
@@ -98,7 +99,7 @@ func (s *Service) retryExecutionClientConnection(ctx context.Context, err error)
s.runError = errors.Wrap(err, "retryExecutionClientConnection")
s.updateConnectedETH1(false)
// Back off for a while before redialing.
time.Sleep(backOffPeriod)
helpers.Sleep(ctx, backOffPeriod)
currClient := s.rpcClient
if err := s.setupExecutionClientConnections(ctx, s.cfg.currHttpEndpoint); err != nil {
s.runError = errors.Wrap(err, "setupExecutionClientConnections")

View File

@@ -58,7 +58,6 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//encoding/bytesutil:go_default_library",
"//genesis:go_default_library",
"//monitoring/prometheus:go_default_library",
"//monitoring/tracing:go_default_library",

View File

@@ -2,11 +2,13 @@ package node
import (
"context"
"os"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/slasherkv"
"github.com/OffchainLabs/prysm/v6/cmd"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/pkg/errors"
"github.com/urfave/cli/v2"
)
@@ -36,6 +38,22 @@ func (c *dbClearer) clearKV(ctx context.Context, db *kv.Store) (*kv.Store, error
return kv.NewKVStore(ctx, db.DatabasePath())
}
func (c *dbClearer) clearGenesis(dir string) error {
if !c.shouldProceed() {
return nil
}
gfile, err := genesis.FindStateFile(dir)
if err != nil {
return nil
}
if err := os.Remove(gfile.FilePath()); err != nil {
return errors.Wrapf(err, "genesis state file not removed: %s", gfile.FilePath())
}
return nil
}
func (c *dbClearer) clearBlobs(bs *filesystem.BlobStorage) error {
if !c.shouldProceed() {
return nil

View File

@@ -60,7 +60,6 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/monitoring/prometheus"
"github.com/OffchainLabs/prysm/v6/runtime"
@@ -178,6 +177,9 @@ func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*Beaco
}
beacon.db = kvdb
if err := dbClearer.clearGenesis(dataDir); err != nil {
return nil, errors.Wrap(err, "could not clear genesis state")
}
providers := append(beacon.GenesisProviders, kv.NewLegacyGenesisProvider(kvdb))
if err := genesis.Initialize(ctx, dataDir, providers...); err != nil {
return nil, errors.Wrap(err, "could not initialize genesis state")
@@ -598,22 +600,7 @@ func (b *BeaconNode) startStateGen(ctx context.Context, bfs coverage.AvailableBl
return err
}
r := bytesutil.ToBytes32(cp.Root)
// Consider edge case where finalized root are zeros instead of genesis root hash.
if r == params.BeaconConfig().ZeroHash {
genesisBlock, err := b.db.GenesisBlock(ctx)
if err != nil {
return err
}
if genesisBlock != nil && !genesisBlock.IsNil() {
r, err = genesisBlock.Block().HashTreeRoot()
if err != nil {
return err
}
}
}
b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, r)
b.finalizedStateAtStartUp, err = sg.StateByRoot(ctx, [32]byte(cp.Root))
if err != nil {
return err
}

View File

@@ -208,11 +208,11 @@ func (s *Service) custodyGroupCountFromPeerENR(pid peer.ID) uint64 {
}
func fuluForkSlot() (primitives.Slot, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
fuluForkEpoch := beaconConfig.FuluForkEpoch
if fuluForkEpoch == beaconConfig.FarFutureEpoch {
return beaconConfig.FarFutureSlot, nil
fuluForkEpoch := cfg.FuluForkEpoch
if fuluForkEpoch == cfg.FarFutureEpoch {
return cfg.FarFutureSlot, nil
}
forkFuluSlot, err := slots.EpochStart(fuluForkEpoch)

View File

@@ -10,6 +10,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -348,7 +349,7 @@ func (s *Service) listenForNewNodes() {
if s.isPeerAtLimit(all) {
// Pause the main loop for a period to stop looking for new peers.
log.Trace("Not looking for peers, at peer limit")
time.Sleep(pollingPeriod)
helpers.Sleep(s.ctx, pollingPeriod)
continue
}

View File

@@ -7,6 +7,7 @@ import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/peerdata"
prysmTime "github.com/OffchainLabs/prysm/v6/time"
@@ -164,7 +165,7 @@ func (s *Service) AddConnectionHandler(reqFunc, goodByeFunc func(ctx context.Con
currentTime := prysmTime.Now()
// Wait for peer to initiate handshake
time.Sleep(timeForStatus)
helpers.Sleep(s.ctx, timeForStatus)
// Exit if we are disconnected with the peer.
if s.host.Network().Connectedness(remotePeer) != network.Connected {

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -94,7 +95,7 @@ func (s *Service) PublishToTopic(ctx context.Context, topic string, data []byte,
case <-ctx.Done():
return errors.Wrapf(ctx.Err(), "unable to find requisite number of peers for topic %s, 0 peers found to publish to", topic)
default:
time.Sleep(100 * time.Millisecond)
helpers.Sleep(ctx, 100*time.Millisecond)
}
}
}

View File

@@ -345,17 +345,17 @@ func TopicFromMessage(msg string, epoch primitives.Epoch) (string, error) {
return "", errors.Errorf("%s: %s", invalidRPCMessageType, msg)
}
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
// Check if the message is to be updated in fulu.
if epoch >= beaconConfig.FuluForkEpoch {
if epoch >= cfg.FuluForkEpoch {
if version, ok := fuluMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}
}
// Check if the message is to be updated in altair.
if epoch >= beaconConfig.AltairForkEpoch {
if epoch >= cfg.AltairForkEpoch {
if version, ok := altairMapping[msg]; ok {
return protocolPrefix + msg + version, nil
}

View File

@@ -514,17 +514,26 @@ func initializePersistentSubnets(id enode.ID, epoch primitives.Epoch) error {
//
// return [compute_subscribed_subnet(node_id, epoch, index) for index in range(SUBNETS_PER_NODE)]
func computeSubscribedSubnets(nodeID enode.ID, epoch primitives.Epoch) ([]uint64, error) {
subnetsPerNode := params.BeaconConfig().SubnetsPerNode
subs := make([]uint64, 0, subnetsPerNode)
cfg := params.BeaconConfig()
for i := uint64(0); i < subnetsPerNode; i++ {
if flags.Get().SubscribeToAllSubnets {
subnets := make([]uint64, 0, cfg.AttestationSubnetCount)
for i := range cfg.AttestationSubnetCount {
subnets = append(subnets, i)
}
return subnets, nil
}
subnets := make([]uint64, 0, cfg.SubnetsPerNode)
for i := range cfg.SubnetsPerNode {
sub, err := computeSubscribedSubnet(nodeID, epoch, i)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "compute subscribed subnet")
}
subs = append(subs, sub)
subnets = append(subnets, sub)
}
return subs, nil
return subnets, nil
}
// Spec pseudocode definition:

View File

@@ -514,17 +514,39 @@ func TestDataColumnSubnets(t *testing.T) {
func TestSubnetComputation(t *testing.T) {
db, err := enode.OpenDB("")
assert.NoError(t, err)
require.NoError(t, err)
defer db.Close()
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
assert.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
assert.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
assert.NoError(t, err)
assert.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
priv, _, err := crypto.GenerateSecp256k1Key(rand.Reader)
require.NoError(t, err)
convertedKey, err := ecdsaprysm.ConvertFromInterfacePrivKey(priv)
require.NoError(t, err)
localNode := enode.NewLocalNode(db, convertedKey)
cfg := params.BeaconConfig()
t.Run("standard", func(t *testing.T) {
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
require.NoError(t, err)
require.Equal(t, cfg.SubnetsPerNode, uint64(len(retrievedSubnets)))
require.Equal(t, retrievedSubnets[0]+1, retrievedSubnets[1])
})
t.Run("subscribed to all", func(t *testing.T) {
gFlags := new(flags.GlobalFlags)
gFlags.SubscribeToAllSubnets = true
flags.Init(gFlags)
defer flags.Init(new(flags.GlobalFlags))
retrievedSubnets, err := computeSubscribedSubnets(localNode.ID(), 1000)
require.NoError(t, err)
require.Equal(t, cfg.AttestationSubnetCount, uint64(len(retrievedSubnets)))
for i := range cfg.AttestationSubnetCount {
require.Equal(t, i, retrievedSubnets[i])
}
})
}
func TestInitializePersistentSubnets(t *testing.T) {

View File

@@ -19,6 +19,7 @@ go_library(
"//testing:__subpackages__",
],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/p2p/encoder:go_default_library",
"//beacon-chain/p2p/peers:go_default_library",

View File

@@ -11,6 +11,7 @@ import (
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/encoder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers"
@@ -174,7 +175,6 @@ func (p *TestP2P) ReceivePubSub(topic string, msg proto.Message) {
// PubSub requires some delay after connecting for the (*PubSub).processLoop method to
// pick up the newly connected peer.
time.Sleep(time.Millisecond * 100)
castedMsg, ok := msg.(ssz.Marshaler)
if !ok {
p.t.Fatalf("%T doesn't support ssz marshaler", msg)
@@ -403,7 +403,7 @@ func (p *TestP2P) Send(ctx context.Context, msg interface{}, topic string, pid p
}
// Delay returning the stream for testing purposes
if p.DelaySend {
time.Sleep(1 * time.Second)
helpers.Sleep(ctx, 1*time.Second)
}
return stream, nil

View File

@@ -12,6 +12,7 @@ go_library(
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/rpc/core",
visibility = ["//visibility:public"],
deps = [
"//api/server:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",

View File

@@ -7,6 +7,7 @@ import (
"sort"
"time"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
@@ -36,24 +37,6 @@ import (
var errOptimisticMode = errors.New("the node is currently optimistic and cannot serve validators")
// AggregateBroadcastFailedError represents an error scenario where
// broadcasting an aggregate selection proof failed.
type AggregateBroadcastFailedError struct {
err error
}
// NewAggregateBroadcastFailedError creates a new error instance.
func NewAggregateBroadcastFailedError(err error) AggregateBroadcastFailedError {
return AggregateBroadcastFailedError{
err: err,
}
}
// Error returns the underlying error message.
func (e *AggregateBroadcastFailedError) Error() string {
return fmt.Sprintf("could not broadcast signed aggregated attestation: %s", e.err.Error())
}
// ComputeValidatorPerformance reports the validator's latest balance along with other important metrics on
// rewards and penalties throughout its lifecycle in the beacon chain.
func (s *Service) ComputeValidatorPerformance(
@@ -360,7 +343,8 @@ func (s *Service) SubmitSignedContributionAndProof(
// Wait for p2p broadcast to complete and return the first error (if any)
err := errs.Wait()
if err != nil {
return &RpcError{Err: err, Reason: Internal}
log.WithError(err).Debug("Could not broadcast signed contribution and proof")
return &RpcError{Err: server.NewBroadcastFailedError("SignedContributionAndProof", err), Reason: Internal}
}
s.OperationNotifier.OperationFeed().Send(&feed.Event{
@@ -411,7 +395,8 @@ func (s *Service) SubmitSignedAggregateSelectionProof(
}
if err := s.Broadcaster.Broadcast(ctx, agg); err != nil {
return &RpcError{Err: &AggregateBroadcastFailedError{err: err}, Reason: Internal}
log.WithError(err).Debug("Could not broadcast signed aggregate att and proof")
return &RpcError{Err: server.NewBroadcastFailedError("SignedAggregateAttAndProof", err), Reason: Internal}
}
if logrus.GetLevel() >= logrus.DebugLevel {

View File

@@ -6,8 +6,6 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/api"
@@ -31,6 +29,7 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const broadcastBLSChangesRateLimit = 128
@@ -200,22 +199,23 @@ func (s *Server) SubmitAttestations(w http.ResponseWriter, r *http.Request) {
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Message: server.ErrIndexedValidationFail,
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
return
}
if len(failedBroadcasts) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusInternalServerError,
Message: server.ErrIndexedBroadcastFail,
Failures: failedBroadcasts,
}
httputil.WriteError(w, failuresErr)
return
}
}
@@ -247,8 +247,8 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
var attFailures []*server.IndexedVerificationFailure
var failedBroadcasts []string
var attFailures []*server.IndexedError
var failedBroadcasts []*server.IndexedError
if v >= version.Electra {
attFailures, failedBroadcasts, err = s.handleAttestationsElectra(ctx, req.Data)
@@ -260,29 +260,30 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
return
}
if len(failedBroadcasts) > 0 {
httputil.HandleError(
w,
fmt.Sprintf("Attestations at index %s could not be broadcasted", strings.Join(failedBroadcasts, ", ")),
http.StatusInternalServerError,
)
return
}
if len(attFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: "One or more attestations failed validation",
Message: server.ErrIndexedValidationFail,
Failures: attFailures,
}
httputil.WriteError(w, failuresErr)
return
}
if len(failedBroadcasts) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusInternalServerError,
Message: server.ErrIndexedBroadcastFail,
Failures: failedBroadcasts,
}
httputil.WriteError(w, failuresErr)
return
}
}
func (s *Server) handleAttestationsElectra(
ctx context.Context,
data json.RawMessage,
) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
var sourceAttestations []*structs.SingleAttestation
currentEpoch := slots.ToEpoch(s.TimeFetcher.CurrentSlot())
if currentEpoch < params.BeaconConfig().ElectraForkEpoch {
@@ -301,14 +302,14 @@ func (s *Server) handleAttestationsElectra(
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
@@ -317,6 +318,13 @@ func (s *Server) handleAttestationsElectra(
validAttestations = append(validAttestations, att)
}
// We store the error for the first failed broadcast and use it in the log message in case
// there are broadcast issues. Having a single log at the end instead of logging
// for every failed broadcast prevents log noise in case there are many failures.
// Even though we only retain the first error, there is a very good chance that all
// broadcasts fail for the same reason, so this should be sufficient in most cases.
var broadcastErr error
for i, singleAtt := range validAttestations {
s.OperationNotifier.OperationFeed().Send(&feed.Event{
Type: operation.SingleAttReceived,
@@ -338,31 +346,45 @@ func (s *Server) handleAttestationsElectra(
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: server.NewBroadcastFailedError("SingleAttestation", err).Error(),
})
if broadcastErr == nil {
broadcastErr = err
}
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
}
}
if len(failedBroadcasts) > 0 {
log.WithFields(logrus.Fields{
"failedCount": len(failedBroadcasts),
"totalCount": len(validAttestations),
}).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
}
return attFailures, failedBroadcasts, nil
}
func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (attFailures []*server.IndexedVerificationFailure, failedBroadcasts []string, err error) {
func (s *Server) handleAttestations(
ctx context.Context,
data json.RawMessage,
) (attFailures []*server.IndexedError, failedBroadcasts []*server.IndexedError, err error) {
var sourceAttestations []*structs.Attestation
if slots.ToEpoch(s.TimeFetcher.CurrentSlot()) >= params.BeaconConfig().ElectraForkEpoch {
@@ -381,14 +403,14 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
for i, sourceAtt := range sourceAttestations {
att, err := sourceAtt.ToConsensus()
if err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request attestation to consensus attestation: " + err.Error(),
})
continue
}
if _, err = bls.SignatureFromBytes(att.Signature); err != nil {
attFailures = append(attFailures, &server.IndexedVerificationFailure{
attFailures = append(attFailures, &server.IndexedError{
Index: i,
Message: "Incorrect attestation signature: " + err.Error(),
})
@@ -397,6 +419,13 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
validAttestations = append(validAttestations, att)
}
// We store the error for the first failed broadcast and use it in the log message in case
// there are broadcast issues. Having a single log at the end instead of logging
// for every failed broadcast prevents log noise in case there are many failures.
// Even though we only retain the first error, there is a very good chance that all
// broadcasts fail for the same reason, so this should be sufficient in most cases.
var broadcastErr error
for i, att := range validAttestations {
// Broadcast the unaggregated attestation on a feed to notify other services in the beacon node
// of a received unaggregated attestation.
@@ -413,32 +442,43 @@ func (s *Server) handleAttestations(ctx context.Context, data json.RawMessage) (
wantedEpoch := slots.ToEpoch(att.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
continue
return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.Data.CommitteeIndex, att.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, att); err != nil {
log.WithError(err).Errorf("could not broadcast attestation at index %d", i)
failedBroadcasts = append(failedBroadcasts, strconv.Itoa(i))
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: server.NewBroadcastFailedError("Attestation", err).Error(),
})
if broadcastErr == nil {
broadcastErr = err
}
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("could not save attestation")
log.WithError(err).Error("Could not save attestation")
}
} else if att.IsAggregated() {
if err = s.AttestationsPool.SaveAggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save aggregated attestation")
log.WithError(err).Error("Could not save aggregated attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("could not save unaggregated attestation")
log.WithError(err).Error("Could not save unaggregated attestation")
}
}
}
if len(failedBroadcasts) > 0 {
log.WithFields(logrus.Fields{
"failedCount": len(failedBroadcasts),
"totalCount": len(validAttestations),
}).WithError(broadcastErr).Error("Some attestations failed to be broadcast")
}
return attFailures, failedBroadcasts, nil
}
@@ -541,11 +581,11 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
}
var validMessages []*eth.SyncCommitteeMessage
var msgFailures []*server.IndexedVerificationFailure
var msgFailures []*server.IndexedError
for i, sourceMsg := range req.Data {
msg, err := sourceMsg.ToConsensus()
if err != nil {
msgFailures = append(msgFailures, &server.IndexedVerificationFailure{
msgFailures = append(msgFailures, &server.IndexedError{
Index: i,
Message: "Could not convert request message to consensus message: " + err.Error(),
})
@@ -562,7 +602,7 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
}
if len(msgFailures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: "One or more messages failed validation",
Failures: msgFailures,
@@ -581,7 +621,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v", err), http.StatusInternalServerError)
return
}
var failures []*server.IndexedVerificationFailure
var failures []*server.IndexedError
var toBroadcast []*eth.SignedBLSToExecutionChange
var req []*structs.SignedBLSToExecutionChange
@@ -602,7 +642,7 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
for i, change := range req {
sbls, err := change.ToConsensus()
if err != nil {
failures = append(failures, &server.IndexedVerificationFailure{
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Unable to decode SignedBLSToExecutionChange: " + err.Error(),
})
@@ -610,14 +650,14 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
}
_, err = blocks.ValidateBLSToExecutionChange(st, sbls)
if err != nil {
failures = append(failures, &server.IndexedVerificationFailure{
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not validate SignedBLSToExecutionChange: " + err.Error(),
})
continue
}
if err := blocks.VerifyBLSChangeSignature(st, sbls); err != nil {
failures = append(failures, &server.IndexedVerificationFailure{
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not validate signature: " + err.Error(),
})
@@ -636,9 +676,9 @@ func (s *Server) SubmitBLSToExecutionChanges(w http.ResponseWriter, r *http.Requ
}
go s.broadcastBLSChanges(context.Background(), toBroadcast)
if len(failures) > 0 {
failuresErr := &server.IndexedVerificationFailureError{
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: "One or more BLSToExecutionChange failed validation",
Message: server.ErrIndexedValidationFail,
Failures: failures,
}
httputil.WriteError(w, failuresErr)
@@ -655,18 +695,18 @@ func (s *Server) broadcastBLSBatch(ctx context.Context, ptr *[]*eth.SignedBLSToE
}
st, err := s.ChainInfoFetcher.HeadStateReadOnly(ctx)
if err != nil {
log.WithError(err).Error("could not get head state")
log.WithError(err).Error("Could not get head state")
return
}
for _, ch := range (*ptr)[:limit] {
if ch != nil {
_, err := blocks.ValidateBLSToExecutionChange(st, ch)
if err != nil {
log.WithError(err).Error("could not validate BLS to execution change")
log.WithError(err).Error("Could not validate BLS to execution change")
continue
}
if err := s.Broadcaster.Broadcast(ctx, ch); err != nil {
log.WithError(err).Error("could not broadcast BLS to execution changes.")
log.WithError(err).Error("Could not broadcast BLS to execution changes.")
}
}
}

View File

@@ -638,7 +638,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestations(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -772,7 +772,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -873,7 +873,7 @@ func TestSubmitAttestations(t *testing.T) {
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
e := &server.IndexedVerificationFailureError{}
e := &server.IndexedErrorContainer{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), e))
assert.Equal(t, http.StatusBadRequest, e.Code)
require.Equal(t, 1, len(e.Failures))
@@ -1538,7 +1538,7 @@ func TestSubmitSignedBLSToExecutionChanges_Failures(t *testing.T) {
s.SubmitBLSToExecutionChanges(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
time.Sleep(10 * time.Millisecond) // Delay to allow the routine to start
require.StringContains(t, "One or more BLSToExecutionChange failed validation", writer.Body.String())
require.StringContains(t, "One or more messages failed validation", writer.Body.String())
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, numValidators, len(broadcaster.BroadcastMessages)+1)

View File

@@ -12,6 +12,7 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",

View File

@@ -14,6 +14,7 @@ import (
"time"
"github.com/OffchainLabs/prysm/v6/api"
"github.com/OffchainLabs/prysm/v6/api/server"
"github.com/OffchainLabs/prysm/v6/api/server/structs"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
@@ -268,22 +269,61 @@ func (s *Server) SubmitContributionAndProofs(w http.ResponseWriter, r *http.Requ
return
}
for _, item := range reqData {
var failures []*server.IndexedError
var failedBroadcasts []*server.IndexedError
for i, item := range reqData {
var contribution structs.SignedContributionAndProof
if err := json.Unmarshal(item, &contribution); err != nil {
httputil.HandleError(w, "Could not decode item: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not unmarshal message: " + err.Error(),
})
continue
}
consensusItem, err := contribution.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert contribution to consensus format: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request contribution to consensus contribution: " + err.Error(),
})
continue
}
if rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem); rpcError != nil {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
rpcError := s.CoreService.SubmitSignedContributionAndProof(ctx, consensusItem)
if rpcError != nil {
var broadcastFailedErr *server.BroadcastFailedError
if errors.As(rpcError.Err, &broadcastFailedErr) {
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: rpcError.Err.Error(),
})
continue
} else {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
if len(failures) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: server.ErrIndexedValidationFail,
Failures: failures,
}
httputil.WriteError(w, failuresErr)
return
}
if len(failedBroadcasts) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusInternalServerError,
Message: server.ErrIndexedBroadcastFail,
Failures: failedBroadcasts,
}
httputil.WriteError(w, failuresErr)
return
}
}
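The handler above now accumulates per-item failures instead of returning on the first bad element. A minimal sketch of the error shapes this relies on, inferred only from their usage in this diff; the real definitions live in the api/server package, and the field names and JSON tags shown here are assumptions.

```go
// Hypothetical sketch inferred from usage; the real api/server types may differ.
type IndexedError struct {
	Index   int    `json:"index"`
	Message string `json:"message"`
}

type IndexedErrorContainer struct {
	Code     int             `json:"code"`
	Message  string          `json:"message"`
	Failures []*IndexedError `json:"failures"`
}
```

The tests below unmarshal the HTTP response body into this container and assert on `Code` and `len(Failures)`, which is where those fields are taken from.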
// Deprecated: use SubmitAggregateAndProofsV2 instead
@@ -322,8 +362,8 @@ func (s *Server) SubmitAggregateAndProofs(w http.ResponseWriter, r *http.Request
}
rpcError := s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
if rpcError != nil {
var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
ok := errors.As(rpcError.Err, &aggregateBroadcastFailedError)
var broadcastFailedErr *server.BroadcastFailedError
ok := errors.As(rpcError.Err, &broadcastFailedErr)
if ok {
broadcastFailed = true
} else {
@@ -368,49 +408,83 @@ func (s *Server) SubmitAggregateAndProofsV2(w http.ResponseWriter, r *http.Reque
return
}
broadcastFailed := false
var failures []*server.IndexedError
var failedBroadcasts []*server.IndexedError
var rpcError *core.RpcError
for _, raw := range reqData {
for i, raw := range reqData {
if v >= version.Electra {
var signedAggregate structs.SignedAggregateAttestationAndProofElectra
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not parse message: " + err.Error(),
})
continue
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
})
continue
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
} else {
var signedAggregate structs.SignedAggregateAttestationAndProof
err = json.Unmarshal(raw, &signedAggregate)
if err != nil {
httputil.HandleError(w, "Failed to parse aggregate attestation and proof: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not parse message: " + err.Error(),
})
continue
}
consensusItem, err := signedAggregate.ToConsensus()
if err != nil {
httputil.HandleError(w, "Could not convert request aggregate to consensus aggregate: "+err.Error(), http.StatusBadRequest)
return
failures = append(failures, &server.IndexedError{
Index: i,
Message: "Could not convert request aggregate to consensus aggregate: " + err.Error(),
})
continue
}
rpcError = s.CoreService.SubmitSignedAggregateSelectionProof(ctx, consensusItem)
}
if rpcError != nil {
var aggregateBroadcastFailedError *core.AggregateBroadcastFailedError
if errors.As(rpcError.Err, &aggregateBroadcastFailedError) {
broadcastFailed = true
var broadcastFailedErr *server.BroadcastFailedError
if errors.As(rpcError.Err, &broadcastFailedErr) {
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
Message: rpcError.Err.Error(),
})
continue
} else {
httputil.HandleError(w, rpcError.Err.Error(), core.ErrorReasonToHTTP(rpcError.Reason))
return
}
}
}
if broadcastFailed {
httputil.HandleError(w, "Could not broadcast one or more signed aggregated attestations", http.StatusInternalServerError)
if len(failures) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusBadRequest,
Message: server.ErrIndexedValidationFail,
Failures: failures,
}
httputil.WriteError(w, failuresErr)
return
}
if len(failedBroadcasts) > 0 {
failuresErr := &server.IndexedErrorContainer{
Code: http.StatusInternalServerError,
Message: server.ErrIndexedBroadcastFail,
Failures: failedBroadcasts,
}
httputil.WriteError(w, failuresErr)
return
}
}
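Both handlers distinguish broadcast failures from validation failures via errors.As. A plausible sketch of such an error type, inferred from server.NewBroadcastFailedError("Attestation", err) and the errors.As target used above; the actual definition in api/server may differ.

```go
// Hypothetical sketch of a broadcast-failure error compatible with errors.As.
type BroadcastFailedError struct {
	Object string // e.g. "Attestation"
	Err    error
}

func NewBroadcastFailedError(object string, err error) *BroadcastFailedError {
	return &BroadcastFailedError{Object: object, Err: err}
}

func (e *BroadcastFailedError) Error() string {
	return "could not broadcast " + e.Object + ": " + e.Err.Error()
}

// Unwrap lets errors.Is and errors.As reach the wrapped cause.
func (e *BroadcastFailedError) Unwrap() error { return e.Err }
```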
@@ -523,7 +597,18 @@ func (s *Server) SubmitSyncCommitteeSubscription(w http.ResponseWriter, r *http.
epochDuration := time.Duration(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().SecondsPerSlot)) * time.Second
totalDuration := epochDuration * time.Duration(epochsToWatch)
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, sub.SyncCommitteeIndices, totalDuration)
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
seen := make(map[uint64]bool)
var subnetIndices []uint64
for _, idx := range sub.SyncCommitteeIndices {
subnetIdx := idx / subcommitteeSize
if !seen[subnetIdx] {
seen[subnetIdx] = true
subnetIndices = append(subnetIndices, subnetIdx)
}
}
cache.SyncSubnetIDs.AddSyncCommitteeSubnets(pubkey48[:], startEpoch, subnetIndices, totalDuration)
}
}
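The mapping above folds each sync committee position into its subnet by integer division and deduplicates the results. A worked sketch with mainnet-style values (SYNC_COMMITTEE_SIZE = 512, SYNC_COMMITTEE_SUBNET_COUNT = 4, so each subcommittee spans 128 positions; the committee indices below are hypothetical):

```go
package main

import "fmt"

func main() {
	const (
		syncCommitteeSize        = 512 // mainnet value
		syncCommitteeSubnetCount = 4   // mainnet value
	)
	subcommitteeSize := uint64(syncCommitteeSize / syncCommitteeSubnetCount) // 128

	committeeIndices := []uint64{5, 130, 200, 300, 510} // hypothetical positions
	seen := map[uint64]bool{}
	var subnets []uint64
	for _, idx := range committeeIndices {
		subnet := idx / subcommitteeSize
		if !seen[subnet] {
			seen[subnet] = true
			subnets = append(subnets, subnet)
		}
	}
	fmt.Println(subnets) // [0 1 2 3]: 5→0, 130→1, 200→1 (deduplicated), 300→2, 510→3
}
```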

View File

@@ -1049,9 +1049,8 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
s.SubmitSyncCommitteeSubscription(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
subnets, _, _, _ := cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
require.Equal(t, 2, len(subnets))
require.Equal(t, 1, len(subnets))
assert.Equal(t, uint64(0), subnets[0])
assert.Equal(t, uint64(2), subnets[1])
})
t.Run("multiple", func(t *testing.T) {
cache.SyncSubnetIDs.EmptyAllCaches()
@@ -1070,7 +1069,7 @@ func TestSubmitSyncCommitteeSubscription(t *testing.T) {
assert.Equal(t, uint64(0), subnets[0])
subnets, _, _, _ = cache.SyncSubnetIDs.GetSyncCommitteeSubnets(pubkeys[1], 0)
require.Equal(t, 1, len(subnets))
assert.Equal(t, uint64(2), subnets[0])
assert.Equal(t, uint64(0), subnets[0])
})
t.Run("no body", func(t *testing.T) {
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)

View File

@@ -3,6 +3,7 @@ package lookup
import (
"context"
"fmt"
"math"
"strconv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain"
@@ -283,9 +284,13 @@ func (p *BeaconDbBlocker) Blobs(ctx context.Context, id string, opts ...options.
return make([]*blocks.VerifiedROBlob, 0), nil
}
fuluForkSlot, err := slots.EpochStart(params.BeaconConfig().FuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
// Compute the first Fulu slot.
fuluForkSlot := primitives.Slot(math.MaxUint64)
if fuluForkEpoch := params.BeaconConfig().FuluForkEpoch; fuluForkEpoch != primitives.Epoch(math.MaxUint64) {
fuluForkSlot, err = slots.EpochStart(fuluForkEpoch)
if err != nil {
return nil, &core.RpcError{Err: errors.Wrap(err, "could not calculate Fulu start slot"), Reason: core.Internal}
}
}
// Convert versioned hashes to indices if provided
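The guard above exists because a fork epoch of MaxUint64 (far future) cannot be turned into a slot: multiplying it by SLOTS_PER_EPOCH overflows uint64, so EpochStart would presumably fail, and the code keeps MaxUint64 as a sentinel slot instead. A tiny sketch of that arithmetic, assuming the mainnet SLOTS_PER_EPOCH:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const slotsPerEpoch = 32 // mainnet value
	farFutureEpoch := uint64(math.MaxUint64)

	// epoch * slotsPerEpoch only fits in a uint64 when epoch <= MaxUint64/slotsPerEpoch,
	// which a far-future epoch never satisfies; hence the explicit sentinel above.
	fmt.Println(farFutureEpoch <= math.MaxUint64/slotsPerEpoch) // false
}
```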

View File

@@ -587,6 +587,51 @@ func TestGetBlob(t *testing.T) {
require.Equal(t, http.StatusBadRequest, core.ErrorReasonToHTTP(rpcErr.Reason))
require.StringContains(t, "not supported before", rpcErr.Err.Error())
})
t.Run("fulu fork epoch not set (MaxUint64)", func(t *testing.T) {
// Setup with Deneb fork enabled but Fulu fork epoch set to MaxUint64 (not set/far future)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 1
cfg.FuluForkEpoch = primitives.Epoch(math.MaxUint64) // Not set / far future
params.OverrideBeaconConfig(cfg)
// Create and save Deneb block and blob sidecars
denebSlot := util.SlotAtEpoch(t, cfg.DenebForkEpoch)
_, tempBlobStorage := filesystem.NewEphemeralBlobStorageAndFs(t)
denebBlockWithBlobs, denebBlobSidecars := util.GenerateTestDenebBlockWithSidecar(t, [fieldparams.RootLength]byte{}, denebSlot, 2, util.WithDenebSlot(denebSlot))
denebBlockRoot := denebBlockWithBlobs.Root()
verifiedDenebBlobs := verification.FakeVerifySliceForTest(t, denebBlobSidecars)
for i := range verifiedDenebBlobs {
err := tempBlobStorage.Save(verifiedDenebBlobs[i])
require.NoError(t, err)
}
err := db.SaveBlock(t.Context(), denebBlockWithBlobs)
require.NoError(t, err)
blocker := &BeaconDbBlocker{
GenesisTimeFetcher: &testutil.MockGenesisTimeFetcher{
Genesis: time.Now(),
},
BeaconDB: db,
BlobStorage: tempBlobStorage,
}
// Should successfully retrieve blobs even when FuluForkEpoch is not set
retrievedBlobs, rpcErr := blocker.Blobs(ctx, hexutil.Encode(denebBlockRoot[:]))
require.IsNil(t, rpcErr)
require.Equal(t, 2, len(retrievedBlobs))
// Verify blob content matches
for i, retrievedBlob := range retrievedBlobs {
require.NotNil(t, retrievedBlob.BlobSidecar)
require.DeepEqual(t, denebBlobSidecars[i].Blob, retrievedBlob.Blob)
require.DeepEqual(t, denebBlobSidecars[i].KzgCommitment, retrievedBlob.KzgCommitment)
}
})
}
func TestBlobs_CommitmentOrdering(t *testing.T) {

View File

@@ -316,6 +316,10 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
blobSidecars, dataColumnSidecars, err = vs.handleUnblindedBlock(rob, req)
}
if err != nil {
if errors.Is(err, builderapi.ErrBadGateway) && block.IsBlinded() {
log.WithError(err).Info("Optimistically proposed block - builder relay temporarily unavailable, block may arrive over P2P")
return &ethpb.ProposeResponse{BlockRoot: root[:]}, nil
}
return nil, status.Errorf(codes.Internal, "%s: %v", "handle block failed", err)
}

View File

@@ -6,6 +6,7 @@ import (
"testing"
"time"
builderapi "github.com/OffchainLabs/prysm/v6/api/client/builder"
"github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/kzg"
mock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/builder"
@@ -3634,4 +3635,52 @@ func TestServer_ProposeBeaconBlock_PostFuluBlindedBlock(t *testing.T) {
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})
t.Run("blinded block - 502 error handling", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 10
params.OverrideBeaconConfig(cfg)
mockBuilder := &builderTest.MockBuilderService{
HasConfigured: true,
Cfg: &builderTest.Config{BeaconDB: db},
PayloadDeneb: &enginev1.ExecutionPayloadDeneb{},
ErrSubmitBlindedBlock: builderapi.ErrBadGateway,
}
c := &mock.ChainService{State: beaconState, Root: parentRoot[:]}
proposerServer := &Server{
ChainStartFetcher: &mockExecution.Chain{},
Eth1InfoFetcher: &mockExecution.Chain{},
Eth1BlockFetcher: &mockExecution.Chain{},
BlockReceiver: c,
BlobReceiver: c,
HeadFetcher: c,
BlockNotifier: c.BlockNotifier(),
OperationNotifier: c.OperationNotifier(),
StateGen: stategen.New(db, doublylinkedtree.New()),
TimeFetcher: c,
SyncChecker: &mockSync.Sync{IsSyncing: false},
BeaconDB: db,
BlockBuilder: mockBuilder,
P2P: &mockp2p.MockBroadcaster{},
}
blindedBlock := util.NewBlindedBeaconBlockDeneb()
blindedBlock.Message.Slot = 160 // This puts us at epoch 5 (160/32 = 5)
blindedBlock.Message.ProposerIndex = 0
blindedBlock.Message.ParentRoot = parentRoot[:]
blindedBlock.Message.StateRoot = make([]byte, 32)
req := &ethpb.GenericSignedBeaconBlock{
Block: &ethpb.GenericSignedBeaconBlock_BlindedDeneb{BlindedDeneb: blindedBlock},
}
// Should handle 502 error gracefully and continue with original blinded block
res, err := proposerServer.ProposeBeaconBlock(ctx, req)
require.NoError(t, err)
require.NotNil(t, res)
require.NotEmpty(t, res.BlockRoot)
})
}

View File

@@ -14,7 +14,7 @@ import (
"github.com/pkg/errors"
)
const signatureVerificationInterval = 50 * time.Millisecond
const signatureVerificationInterval = 5 * time.Millisecond
type signatureVerifier struct {
set *bls.SignatureBatch

View File

@@ -90,10 +90,10 @@ func (s *Service) updateCustodyInfoIfNeeded() error {
// custodyGroupCount computes the custody group count based on the custody requirement,
// the validators custody requirement, and whether the node is subscribed to all data subnets.
func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
if flags.Get().SubscribeAllDataSubnets {
return beaconConfig.NumberOfCustodyGroups, nil
return cfg.NumberOfCustodyGroups, nil
}
validatorsCustodyRequirement, err := s.validatorsCustodyRequirement()
@@ -101,7 +101,7 @@ func (s *Service) custodyGroupCount(context.Context) (uint64, error) {
return 0, errors.Wrap(err, "validators custody requirement")
}
return max(beaconConfig.CustodyRequirement, validatorsCustodyRequirement), nil
return max(cfg.CustodyRequirement, validatorsCustodyRequirement), nil
}
// validatorsCustodyRequirements computes the custody requirements based on the

View File

@@ -116,11 +116,11 @@ func withSubscribeAllDataSubnets(t *testing.T, fn func()) {
func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.NumberOfCustodyGroups = 128
beaconConfig.CustodyRequirement = 4
beaconConfig.SamplesPerSlot = 8
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.NumberOfCustodyGroups = 128
cfg.CustodyRequirement = 4
cfg.SamplesPerSlot = 8
params.OverrideBeaconConfig(cfg)
t.Run("Skip update when actual custody count >= target", func(t *testing.T) {
setup := setupCustodyTest(t, false)
@@ -159,7 +159,7 @@ func TestUpdateCustodyInfoIfNeeded(t *testing.T) {
require.NoError(t, err)
const expectedSlot = primitives.Slot(100)
setup.assertCustodyInfo(t, expectedSlot, beaconConfig.NumberOfCustodyGroups)
setup.assertCustodyInfo(t, expectedSlot, cfg.NumberOfCustodyGroups)
})
})
}

View File

@@ -1122,19 +1122,18 @@ func randomPeer(
}
}
slices.Sort(nonRateLimitedPeers)
if len(nonRateLimitedPeers) == 0 {
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
time.Sleep(waitPeriod)
continue
if len(nonRateLimitedPeers) > 0 {
slices.Sort(nonRateLimitedPeers)
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
}
randomIndex := randomSource.Intn(len(nonRateLimitedPeers))
return nonRateLimitedPeers[randomIndex], nil
log.WithFields(logrus.Fields{
"peerCount": peerCount,
"delay": waitPeriod,
}).Debug("Waiting for a peer with enough bandwidth for data column sidecars")
helpers.Sleep(ctx, waitPeriod)
}
return "", ctx.Err()
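helpers.Sleep replaces bare time.Sleep throughout this change set so long waits can be interrupted when the context is cancelled. Its definition is not shown in this diff; the following is a minimal sketch of what such a helper typically looks like, with the signature assumed from the call sites.

```go
package helpers

import (
	"context"
	"time"
)

// Sleep waits for d to elapse or for ctx to be cancelled, whichever comes first.
// Sketch only: the real helper in beacon-chain/core/helpers may differ.
func Sleep(ctx context.Context, d time.Duration) {
	timer := time.NewTimer(d)
	defer timer.Stop()

	select {
	case <-ctx.Done():
	case <-timer.C:
	}
}
```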

View File

@@ -45,6 +45,7 @@ func TestFetchDataColumnSidecars(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
// Start the trusted setup.
@@ -760,6 +761,12 @@ func TestVerifyDataColumnSidecarsByPeer(t *testing.T) {
err := kzg.Start()
require.NoError(t, err)
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 2}}
params.OverrideBeaconConfig(cfg)
t.Run("nominal", func(t *testing.T) {
const (
start, stop = 0, 15

View File

@@ -6,6 +6,7 @@ import (
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/peers/scorers"
"github.com/OffchainLabs/prysm/v6/cmd/beacon-chain/flags"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -85,7 +86,7 @@ func (f *blocksFetcher) waitForMinimumPeers(ctx context.Context) ([]peer.ID, err
log.WithFields(logrus.Fields{
"suitable": len(peers),
"required": required}).Info("Waiting for enough suitable peers before syncing")
time.Sleep(handshakePollingInterval)
helpers.Sleep(ctx, handshakePollingInterval)
}
}

View File

@@ -1366,16 +1366,16 @@ func TestFetchSidecars(t *testing.T) {
})
t.Run("Nominal", func(t *testing.T) {
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
samplesPerSlot := beaconConfig.SamplesPerSlot
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
samplesPerSlot := cfg.SamplesPerSlot
// Define "now" to be one epoch after genesis time + retention period.
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
nowWrtGenesisSecs := retentionEpochs.Add(1).Mul(secondsPerEpoch)
now := genesisTime.Add(time.Duration(nowWrtGenesisSecs) * time.Second)

View File

@@ -5,6 +5,7 @@ import (
"errors"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
@@ -235,7 +236,7 @@ func (q *blocksQueue) loop() {
} else {
q.exitConditions.noRequiredPeersErrRetries++
log.Debug("Waiting for finalized peers")
time.Sleep(noRequiredPeersErrRefreshInterval)
helpers.Sleep(q.ctx, noRequiredPeersErrRefreshInterval)
}
continue
}

View File

@@ -322,7 +322,7 @@ func (s *Service) waitForMinimumPeers() ([]peer.ID, error) {
"suitable": len(peers),
"required": required,
}).Info("Waiting for enough suitable peers before syncing")
time.Sleep(handshakePollingInterval)
helpers.Sleep(s.ctx, handshakePollingInterval)
}
}

View File

@@ -530,12 +530,12 @@ func TestOriginOutsideRetention(t *testing.T) {
func TestFetchOriginSidecars(t *testing.T) {
ctx := t.Context()
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
genesisTime := time.Date(2025, time.August, 10, 0, 0, 0, 0, time.UTC)
secondsPerSlot := beaconConfig.SecondsPerSlot
slotsPerEpoch := beaconConfig.SlotsPerEpoch
secondsPerSlot := cfg.SecondsPerSlot
slotsPerEpoch := cfg.SlotsPerEpoch
secondsPerEpoch := uint64(slotsPerEpoch.Mul(secondsPerSlot))
retentionEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
retentionEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
genesisValidatorRoot := [fieldparams.RootLength]byte{}
@@ -683,6 +683,7 @@ func TestFetchOriginColumns(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: 10}}
params.OverrideBeaconConfig(cfg)
const (

View File

@@ -7,6 +7,7 @@ import (
"strings"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptypes "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/types"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -194,7 +195,7 @@ func (s *Service) registerRPC(baseTopic string, handle rpcHandler) {
// https://github.com/quic-go/quic-go/issues/3291
defer func() {
if strings.Contains(stream.Conn().RemoteMultiaddr().String(), "quic-v1") {
time.Sleep(2 * time.Second)
helpers.Sleep(s.ctx, 2*time.Second)
}
_err := stream.Reset()

View File

@@ -113,19 +113,19 @@ func (s *Service) blobSidecarByRootRPCHandler(ctx context.Context, msg interface
}
func validateBlobByRootRequest(blobIdents types.BlobSidecarsByRootReq, slot primitives.Slot) error {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
epoch := slots.ToEpoch(slot)
blobIdentCount := uint64(len(blobIdents))
if epoch >= beaconConfig.ElectraForkEpoch {
if blobIdentCount > beaconConfig.MaxRequestBlobSidecarsElectra {
if epoch >= cfg.ElectraForkEpoch {
if blobIdentCount > cfg.MaxRequestBlobSidecarsElectra {
return types.ErrMaxBlobReqExceeded
}
return nil
}
if blobIdentCount > beaconConfig.MaxRequestBlobSidecars {
if blobIdentCount > cfg.MaxRequestBlobSidecars {
return types.ErrMaxBlobReqExceeded
}

View File

@@ -38,8 +38,8 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
defer cancel()
SetRPCStreamDeadlines(stream)
beaconConfig := params.BeaconConfig()
maxRequestDataColumnSidecars := beaconConfig.MaxRequestDataColumnSidecars
cfg := params.BeaconConfig()
maxRequestDataColumnSidecars := cfg.MaxRequestDataColumnSidecars
remotePeer := stream.Conn().RemotePeer()
log := log.WithFields(logrus.Fields{
@@ -70,6 +70,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
log.Trace("Serving data column sidecars by range")
if rangeParameters == nil {
closeStream(stream, log)
return nil
}
@@ -101,7 +102,7 @@ func (s *Service) dataColumnSidecarsByRangeRPCHandler(ctx context.Context, msg i
// Once the quota is reached, we're done serving the request.
if maxRequestDataColumnSidecars == 0 {
log.WithField("initialQuota", beaconConfig.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
log.WithField("initialQuota", cfg.MaxRequestDataColumnSidecars).Trace("Reached quota for data column sidecars by range request")
break
}
}

View File

@@ -23,18 +23,17 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
consensusblocks "github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
pb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
//beaconConfig.FuluForkEpoch = beaconConfig.ElectraForkEpoch + 100
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctx := context.Background()
t.Run("wrong message type", func(t *testing.T) {
@@ -47,6 +46,7 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
t.Run("invalid request", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)
@@ -72,8 +72,8 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
code, _, err := readStatusCodeNoDeadline(stream, localP2P.Encoding())
require.NoError(t, err)
require.Equal(t, responseCodeInvalidRequest, code)
assert.NoError(t, err)
assert.Equal(t, responseCodeInvalidRequest, code)
})
localP2P.Connect(remoteP2P)
@@ -94,6 +94,48 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
}
})
t.Run("in the future", func(t *testing.T) {
slot := primitives.Slot(400)
mockNower.SetSlot(t, clock, slot)
localP2P, remoteP2P := p2ptest.NewTestP2P(t), p2ptest.NewTestP2P(t)
protocolID := protocol.ID(fmt.Sprintf("%s/ssz_snappy", p2p.RPCDataColumnSidecarsByRangeTopicV1))
service := &Service{
cfg: &config{
p2p: localP2P,
chain: &chainMock.ChainService{
Slot: &slot,
},
clock: clock,
},
rateLimiter: newRateLimiter(localP2P),
}
var wg sync.WaitGroup
wg.Add(1)
remoteP2P.BHost.SetStreamHandler(protocolID, func(stream network.Stream) {
defer wg.Done()
_, err := readChunkedDataColumnSidecar(stream, remoteP2P, ctxMap)
assert.Equal(t, true, errors.Is(err, io.EOF))
})
localP2P.Connect(remoteP2P)
stream, err := localP2P.BHost.NewStream(ctx, remoteP2P.BHost.ID(), protocolID)
require.NoError(t, err)
msg := &pb.DataColumnSidecarsByRangeRequest{
StartSlot: slot + 1,
Count: 50,
Columns: []uint64{1, 2, 3, 4, 6, 7, 8, 9, 10},
}
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
t.Run("nominal", func(t *testing.T) {
slot := primitives.Slot(400)
@@ -133,12 +175,12 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
signedBeaconBlockPb.Block.ParentRoot = roots[i-1][:]
}
signedBeaconBlock, err := consensusblocks.NewSignedBeaconBlock(signedBeaconBlockPb)
signedBeaconBlock, err := blocks.NewSignedBeaconBlock(signedBeaconBlockPb)
require.NoError(t, err)
// There is a discrepancy between the root of the beacon block and the rodata column root,
// but for the sake of this test, we actually don't care.
roblock, err := consensusblocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
roblock, err := blocks.NewROBlockWithRoot(signedBeaconBlock, roots[i])
require.NoError(t, err)
roBlocks = append(roBlocks, roblock)
@@ -178,28 +220,28 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
break
}
require.NoError(t, err)
assert.NoError(t, err)
sidecars = append(sidecars, sidecar)
}
require.Equal(t, 8, len(sidecars))
require.Equal(t, root0, sidecars[0].BlockRoot())
require.Equal(t, root0, sidecars[1].BlockRoot())
require.Equal(t, root0, sidecars[2].BlockRoot())
require.Equal(t, root3, sidecars[3].BlockRoot())
require.Equal(t, root3, sidecars[4].BlockRoot())
require.Equal(t, root5, sidecars[5].BlockRoot())
require.Equal(t, root5, sidecars[6].BlockRoot())
require.Equal(t, root5, sidecars[7].BlockRoot())
assert.Equal(t, 8, len(sidecars))
assert.Equal(t, root0, sidecars[0].BlockRoot())
assert.Equal(t, root0, sidecars[1].BlockRoot())
assert.Equal(t, root0, sidecars[2].BlockRoot())
assert.Equal(t, root3, sidecars[3].BlockRoot())
assert.Equal(t, root3, sidecars[4].BlockRoot())
assert.Equal(t, root5, sidecars[5].BlockRoot())
assert.Equal(t, root5, sidecars[6].BlockRoot())
assert.Equal(t, root5, sidecars[7].BlockRoot())
require.Equal(t, uint64(1), sidecars[0].Index)
require.Equal(t, uint64(2), sidecars[1].Index)
require.Equal(t, uint64(3), sidecars[2].Index)
require.Equal(t, uint64(4), sidecars[3].Index)
require.Equal(t, uint64(6), sidecars[4].Index)
require.Equal(t, uint64(7), sidecars[5].Index)
require.Equal(t, uint64(8), sidecars[6].Index)
require.Equal(t, uint64(9), sidecars[7].Index)
assert.Equal(t, uint64(1), sidecars[0].Index)
assert.Equal(t, uint64(2), sidecars[1].Index)
assert.Equal(t, uint64(3), sidecars[2].Index)
assert.Equal(t, uint64(4), sidecars[3].Index)
assert.Equal(t, uint64(6), sidecars[4].Index)
assert.Equal(t, uint64(7), sidecars[5].Index)
assert.Equal(t, uint64(8), sidecars[6].Index)
assert.Equal(t, uint64(9), sidecars[7].Index)
})
localP2P.Connect(remoteP2P)
@@ -215,7 +257,6 @@ func TestDataColumnSidecarsByRangeRPCHandler(t *testing.T) {
err = service.dataColumnSidecarsByRangeRPCHandler(ctx, msg, stream)
require.NoError(t, err)
})
}
func TestValidateDataColumnsByRange(t *testing.T) {

View File

@@ -163,9 +163,9 @@ func dataColumnsRPCMinValidSlot(currentSlot primitives.Slot) (primitives.Slot, e
return primitives.Slot(math.MaxUint64), nil
}
beaconConfig := params.BeaconConfig()
minReqEpochs := beaconConfig.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := beaconConfig.FuluForkEpoch
cfg := params.BeaconConfig()
minReqEpochs := cfg.MinEpochsForDataColumnSidecarsRequest
minStartEpoch := cfg.FuluForkEpoch
currEpoch := slots.ToEpoch(currentSlot)
if currEpoch > minReqEpochs && currEpoch-minReqEpochs > minStartEpoch {

View File

@@ -28,9 +28,9 @@ import (
func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
@@ -43,9 +43,9 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
t.Run("invalid request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 1
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 1
params.OverrideBeaconConfig(cfg)
localP2P := p2ptest.NewTestP2P(t)
service := &Service{cfg: &config{p2p: localP2P}}
@@ -96,9 +96,9 @@ func TestDataColumnSidecarsByRootRPCHandler(t *testing.T) {
}()
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 1
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
localP2P := p2ptest.NewTestP2P(t)
clock := startup.NewClock(time.Now(), [fieldparams.RootLength]byte{})

View File

@@ -465,8 +465,8 @@ func SendDataColumnSidecarsByRangeRequest(
return nil, nil
}
beaconConfig := params.BeaconConfig()
numberOfColumns := beaconConfig.NumberOfColumns
cfg := params.BeaconConfig()
numberOfColumns := cfg.NumberOfColumns
maxRequestDataColumnSidecars := params.BeaconConfig().MaxRequestDataColumnSidecars
// Check if we do not request too many sidecars.

View File

@@ -889,9 +889,9 @@ func TestErrInvalidFetchedDataDistinction(t *testing.T) {
func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
@@ -923,9 +923,9 @@ func TestSendDataColumnSidecarsByRangeRequest(t *testing.T) {
t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 0
params.OverrideBeaconConfig(cfg)
request := &ethpb.DataColumnSidecarsByRangeRequest{Count: 1, Columns: []uint64{1, 2, 3}}
_, err := SendDataColumnSidecarsByRangeRequest(DataColumnSidecarsParams{Ctx: t.Context()}, "", request)
@@ -1193,9 +1193,9 @@ func TestIsSidecarIndexRequested(t *testing.T) {
func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.FuluForkEpoch = 0
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
params.OverrideBeaconConfig(cfg)
params.BeaconConfig().InitializeForkSchedule()
ctxMap, err := ContextByteVersionsForValRoot(params.BeaconConfig().GenesisValidatorsRoot)
require.NoError(t, err)
@@ -1223,9 +1223,9 @@ func TestSendDataColumnSidecarsByRootRequest(t *testing.T) {
t.Run("too many columns in request", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
beaconConfig := params.BeaconConfig()
beaconConfig.MaxRequestDataColumnSidecars = 4
params.OverrideBeaconConfig(beaconConfig)
cfg := params.BeaconConfig()
cfg.MaxRequestDataColumnSidecars = 4
params.OverrideBeaconConfig(cfg)
request := p2ptypes.DataColumnsByRootIdentifiers{
{Columns: []uint64{1, 2, 3}},

View File

@@ -445,7 +445,7 @@ func TestStatusRPCRequest_RequestSent(t *testing.T) {
custodyGroupCount = uint64(4)
)
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
ctx := t.Context()
testCases := []struct {
@@ -456,7 +456,7 @@ func TestStatusRPCRequest_RequestSent(t *testing.T) {
}{
{
name: "before fulu",
fuluForkEpoch: beaconConfig.FarFutureEpoch,
fuluForkEpoch: cfg.FarFutureEpoch,
topic: "/eth2/beacon_chain/req/status/1/ssz_snappy",
streamHandler: func(service *Service, stream network.Stream, genesisState beaconState.BeaconState, beaconRoot, headRoot, finalizedRoot []byte) {
out := &ethpb.Status{}

View File

@@ -17,6 +17,7 @@ import (
blockfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/block"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
statefeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/filesystem"
"github.com/OffchainLabs/prysm/v6/beacon-chain/execution"
@@ -389,7 +390,7 @@ func (s *Service) waitForChainStart() {
// Wait for chainstart in separate routine.
if startTime.After(prysmTime.Now()) {
time.Sleep(prysmTime.Until(startTime))
helpers.Sleep(s.ctx, prysmTime.Until(startTime))
}
log.WithField("startTime", startTime).Debug("Chain started in sync service")
s.markForChainStart()

View File

@@ -695,10 +695,10 @@ func (s *Service) dataColumnSubnetIndices(primitives.Slot) map[uint64]bool {
// the validators custody requirement, and whether the node is subscribed to all data subnets.
// https://github.com/ethereum/consensus-specs/blob/master/specs/fulu/das-core.md#custody-sampling
func (s *Service) samplingSize() (uint64, error) {
beaconConfig := params.BeaconConfig()
cfg := params.BeaconConfig()
if flags.Get().SubscribeAllDataSubnets {
return beaconConfig.DataColumnSidecarSubnetCount, nil
return cfg.DataColumnSidecarSubnetCount, nil
}
// Compute the validators custody requirement.
@@ -712,14 +712,10 @@ func (s *Service) samplingSize() (uint64, error) {
return 0, errors.Wrap(err, "custody group count")
}
return max(beaconConfig.SamplesPerSlot, validatorsCustodyRequirement, custodyGroupCount), nil
return max(cfg.SamplesPerSlot, validatorsCustodyRequirement, custodyGroupCount), nil
}
func (s *Service) persistentAndAggregatorSubnetIndices(currentSlot primitives.Slot) map[uint64]bool {
if flags.Get().SubscribeToAllSubnets {
return mapFromCount(params.BeaconConfig().AttestationSubnetCount)
}
persistentSubnetIndices := persistentSubnetIndices()
aggregatorSubnetIndices := aggregatorSubnetIndices(currentSlot)

View File

@@ -209,7 +209,7 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, ctx.Err()
}
time.Sleep(delay)
helpers.Sleep(ctx, delay)
continue
}

View File

@@ -94,9 +94,11 @@ func TestVerifyIndexInCommittee_ExistsInBeaconCommittee(t *testing.T) {
assert.ErrorContains(t, wanted, err)
assert.Equal(t, pubsub.ValidationReject, result)
att.Data.CommitteeIndex = 10000
// Test the edge case where committee index equals count (should be rejected)
// With 64 validators and minimal config, count = 2, so valid indices are 0 and 1
att.Data.CommitteeIndex = 2
_, _, result, err = service.validateCommitteeIndexAndCount(ctx, att, s)
require.ErrorContains(t, "committee index 10000 > 2", err)
require.ErrorContains(t, "committee index 2 >= 2", err)
assert.Equal(t, pubsub.ValidationReject, result)
}

View File

@@ -278,8 +278,8 @@ func (s *Service) validateCommitteeIndexAndCount(
} else {
ci = a.GetCommitteeIndex()
}
if uint64(ci) > count {
return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d > %d", ci, count)
if uint64(ci) >= count {
return 0, 0, pubsub.ValidationReject, fmt.Errorf("committee index %d >= %d", ci, count)
}
return ci, valCount, pubsub.ValidationAccept, nil
}

View File

@@ -611,3 +611,41 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
})
}
func Test_validateCommitteeIndexAndCount_Boundary(t *testing.T) {
ctx := t.Context()
// Create a minimal state with a known number of validators.
validators := uint64(64)
bs, _ := util.DeterministicGenesisState(t, validators)
require.NoError(t, bs.SetSlot(1))
s := &Service{}
// Build a minimal Phase0 attestation (unaggregated path).
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 0,
},
}
// First call to obtain the active validator count used to derive committees per slot.
_, valCount, res, err := s.validateCommitteeIndexAndCount(ctx, att, bs)
require.NoError(t, err)
require.Equal(t, pubsub.ValidationAccept, res)
count := helpers.SlotCommitteeCount(valCount)
// committee_index == count - 1 should be accepted.
att.Data.CommitteeIndex = primitives.CommitteeIndex(count - 1)
_, _, res, err = s.validateCommitteeIndexAndCount(ctx, att, bs)
require.NoError(t, err)
require.Equal(t, pubsub.ValidationAccept, res)
// committee_index == count should be rejected (out of range).
att.Data.CommitteeIndex = primitives.CommitteeIndex(count)
_, _, res, err = s.validateCommitteeIndexAndCount(ctx, att, bs)
require.ErrorContains(t, "committee index", err)
require.Equal(t, pubsub.ValidationReject, res)
}

View File

@@ -294,6 +294,9 @@ func (s *Service) validatePhase0Block(ctx context.Context, blk interfaces.ReadOn
}
if err := blocks.VerifyBlockSignatureUsingCurrentFork(parentState, blk, blockRoot); err != nil {
if errors.Is(err, blocks.ErrInvalidSignature) {
s.setBadBlock(ctx, blockRoot)
}
return nil, err
}
// In the event the block is more than an epoch ahead from its

View File

@@ -103,11 +103,84 @@ func TestValidateBeaconBlockPubSub_InvalidSignature(t *testing.T) {
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorIs(t, err, signing.ErrSigFailedToVerify)
require.ErrorContains(t, "invalid signature", err)
result := res == pubsub.ValidationReject
assert.Equal(t, true, result)
}
func TestValidateBeaconBlockPubSub_InvalidSignature_MarksBlockAsBad(t *testing.T) {
db := dbtest.SetupDB(t)
p := p2ptest.NewTestP2P(t)
ctx := t.Context()
beaconState, privKeys := util.DeterministicGenesisState(t, 100)
parentBlock := util.NewBeaconBlock()
util.SaveBlock(t, ctx, db, parentBlock)
bRoot, err := parentBlock.Block.HashTreeRoot()
require.NoError(t, err)
require.NoError(t, db.SaveState(ctx, beaconState, bRoot))
require.NoError(t, db.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bRoot[:]}))
copied := beaconState.Copy()
require.NoError(t, copied.SetSlot(1))
proposerIdx, err := helpers.BeaconProposerIndex(ctx, copied)
require.NoError(t, err)
msg := util.NewBeaconBlock()
msg.Block.ParentRoot = bRoot[:]
msg.Block.Slot = 1
msg.Block.ProposerIndex = proposerIdx
badPrivKeyIdx := proposerIdx + 1 // We generate a valid signature from a wrong private key which fails to verify
msg.Signature, err = signing.ComputeDomainAndSign(beaconState, 0, msg.Block, params.BeaconConfig().DomainBeaconProposer, privKeys[badPrivKeyIdx])
require.NoError(t, err)
stateGen := stategen.New(db, doublylinkedtree.New())
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0),
FinalizedCheckPoint: &ethpb.Checkpoint{
Epoch: 0,
Root: make([]byte, 32),
},
DB: db,
}
r := &Service{
cfg: &config{
beaconDB: db,
p2p: p,
initialSync: &mockSync.Sync{IsSyncing: false},
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
blockNotifier: chainService.BlockNotifier(),
stateGen: stateGen,
},
seenBlockCache: lruwrpr.New(10),
badBlockCache: lruwrpr.New(10),
}
blockRoot, err := msg.Block.HashTreeRoot()
require.NoError(t, err)
// Verify block is not marked as bad initially
assert.Equal(t, false, r.hasBadBlock(blockRoot), "block should not be marked as bad initially")
buf := new(bytes.Buffer)
_, err = p.Encoding().EncodeGossip(buf, msg)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeOf(msg)]
digest, err := r.currentForkDigest()
assert.NoError(t, err)
topic = r.addDigestToTopic(topic, digest)
m := &pubsub.Message{
Message: &pubsubpb.Message{
Data: buf.Bytes(),
Topic: &topic,
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorContains(t, "invalid signature", err)
result := res == pubsub.ValidationReject
assert.Equal(t, true, result)
// Verify block is now marked as bad after invalid signature
assert.Equal(t, true, r.hasBadBlock(blockRoot), "block should be marked as bad after invalid signature")
}
func TestValidateBeaconBlockPubSub_BlockAlreadyPresentInDB(t *testing.T) {
db := dbtest.SetupDB(t)
ctx := t.Context()
@@ -976,7 +1049,7 @@ func TestValidateBeaconBlockPubSub_InvalidParentBlock(t *testing.T) {
},
}
res, err := r.validateBeaconBlockPubSub(ctx, "", m)
require.ErrorContains(t, "could not unmarshal bytes into signature", err)
require.ErrorContains(t, "invalid signature", err)
assert.Equal(t, res, pubsub.ValidationReject, "block with invalid signature should be rejected")
require.NoError(t, copied.SetSlot(2))

View File

@@ -58,7 +58,6 @@ func TestValid(t *testing.T) {
t.Run("one invalid column", func(t *testing.T) {
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
columns[0].KzgCommitments = [][]byte{}
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)
err := verifier.ValidFields()
@@ -67,6 +66,14 @@ func TestValid(t *testing.T) {
})
t.Run("nominal", func(t *testing.T) {
const maxBlobsPerBlock = 2
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig()
cfg.FuluForkEpoch = 0
cfg.BlobSchedule = []params.BlobScheduleEntry{{Epoch: 0, MaxBlobsPerBlock: maxBlobsPerBlock}}
params.OverrideBeaconConfig(cfg)
columns := GenerateTestDataColumns(t, [fieldparams.RootLength]byte{}, 1, 1)
verifier := initializer.NewDataColumnsVerifier(columns, GossipDataColumnSidecarRequirements)

View File

@@ -71,9 +71,15 @@ var (
errBatchBlockRootMismatch = errors.Join(ErrBlobInvalid, errors.New("sidecar block header root does not match signed block"))
)
// errVerificationImplementationFault indicates that a code path yielding VerifiedROBlobs has an implementation
// error, leading it to call VerifiedROBlobError with a nil error.
var errVerificationImplementationFault = errors.New("could not verify blob data or create a valid VerifiedROBlob")
var (
// errBlobVerificationImplementationFault indicates that a code path yielding VerifiedROBlobs has an implementation
// error, leading it to call VerifiedROBlobError with a nil error.
errBlobVerificationImplementationFault = errors.New("could not verify blob data or create a valid VerifiedROBlob")
// errDataColumnVerificationImplementationFault indicates that a code path yielding VerifiedRODataColumns has an implementation
// error, leading it to call VerifiedRODataColumnError with a nil error.
errDataColumnVerificationImplementationFault = errors.New("could not verify data column or create a valid VerifiedRODataColumn")
)
// VerificationMultiError is a custom error that can be used to access individual verification failures.
type VerificationMultiError struct {
@@ -111,7 +117,7 @@ func newVerificationMultiError(r *results, err error) VerificationMultiError {
// create a value of that type in order to generate an error return value.
func VerifiedROBlobError(err error) (blocks.VerifiedROBlob, error) {
if err == nil {
return blocks.VerifiedROBlob{}, errVerificationImplementationFault
return blocks.VerifiedROBlob{}, errBlobVerificationImplementationFault
}
return blocks.VerifiedROBlob{}, err
}
@@ -120,7 +126,7 @@ func VerifiedROBlobError(err error) (blocks.VerifiedROBlob, error) {
// create a value of that type in order to generate an error return value.
func VerifiedRODataColumnError(err error) (blocks.VerifiedRODataColumn, error) {
if err == nil {
return blocks.VerifiedRODataColumn{}, errVerificationImplementationFault
return blocks.VerifiedRODataColumn{}, errDataColumnVerificationImplementationFault
}
return blocks.VerifiedRODataColumn{}, err
}

View File

@@ -4,6 +4,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/pkg/errors"
"github.com/spf13/afero"
)
@@ -25,7 +26,8 @@ func VerifiedROBlobFromDisk(fs afero.Fs, root [32]byte, path string) (blocks.Ver
return blocks.NewVerifiedROBlob(ro), nil
}
// VerifiedRODataColumnFromDisk created a verified read-only data column sidecar from disk.
// VerifiedRODataColumnFromDisk creates a verified read-only data column sidecar from disk.
// The file cursor must be positioned at the start of the data column sidecar SSZ data.
func VerifiedRODataColumnFromDisk(file afero.File, root [fieldparams.RootLength]byte, sszEncodedDataColumnSidecarSize uint32) (blocks.VerifiedRODataColumn, error) {
// Read the ssz encoded data column sidecar from the file
sszEncodedDataColumnSidecar := make([]byte, sszEncodedDataColumnSidecarSize)
@@ -34,7 +36,7 @@ func VerifiedRODataColumnFromDisk(file afero.File, root [fieldparams.RootLength]
return VerifiedRODataColumnError(err)
}
if uint32(count) != sszEncodedDataColumnSidecarSize {
return VerifiedRODataColumnError(err)
return VerifiedRODataColumnError(errors.Errorf("read %d bytes while expecting %d", count, sszEncodedDataColumnSidecarSize))
}
// Unmarshal the SSZ encoded data column sidecar.
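As a design note, the explicit short-read check above could also be expressed with io.ReadFull, which already returns io.ErrUnexpectedEOF when fewer bytes than requested are available. A hedged sketch of that alternative (not what the diff does); this fragment reuses the identifiers from the function above and assumes an "io" import:

```go
// Sketch of an equivalent read using io.ReadFull; a short read (e.g. a truncated
// file) surfaces as io.ErrUnexpectedEOF instead of a silently smaller byte count.
buf := make([]byte, sszEncodedDataColumnSidecarSize)
if _, err := io.ReadFull(file, buf); err != nil {
	return VerifiedRODataColumnError(errors.Wrap(err, "read data column sidecar from disk"))
}
```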

View File

@@ -0,0 +1,3 @@
### Added
- Delegate sszInfo HashTreeRoot to FastSSZ-generated implementations via SSZObject, enabling root calculation for generated types while avoiding duplicate logic.

View File

@@ -0,0 +1,2 @@
### Fixed
- Decreased attestation gossip validation batch deadline to 5ms.

View File

@@ -0,0 +1,2 @@
### Fixed
- Delete the genesis state file when --clear-db / --force-clear-db is specified.

View File

@@ -0,0 +1,2 @@
### Fixed
- Correctly advertise (in ENR and beacon API) attestation subnets when using `--subscribe-all-subnets`.

View File

@@ -0,0 +1,2 @@
### Fixed
- Fix the `/eth/v1/beacon/blob_sidecars/` beacon API when the Fulu fork epoch is set to the far future epoch.

View File

@@ -0,0 +1,2 @@
### Fixed
- `VerifyDataColumnSidecar`: Check that there are not too many commitments.

View File

@@ -0,0 +1,2 @@
### Fixed
- `WithDataColumnRetentionEpochs`: Use `dataColumnRetentionEpoch` instead of `blobColumnRetentionEpoch`.

View File

@@ -0,0 +1,2 @@
### Fixed
- `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if there is no data to return.

View File

@@ -0,0 +1,2 @@
### Fixed
- `HasAtLeastOneIndex`: Check the index is not too high.

View File

@@ -0,0 +1,2 @@
### Fixed
- `randomPeer`: Return if the context is cancelled when waiting for peers.

View File

@@ -0,0 +1,2 @@
### Fixed
- Improve the error message when the byte count read from disk for a data column sidecar is lower than expected (mostly because the file is truncated).

View File

@@ -0,0 +1,2 @@
### Ignored
- Fix (unreleased) bug where the preallocated slice for KZG Proofs was 48x bigger than it needed to be.

View File

@@ -0,0 +1,3 @@
### Fixed
- Reject committee index >= committees_per_slot in unaggregated attestation validation.

View File

@@ -0,0 +1,3 @@
### Fixed
- Mark epoch transition correctly on new head events

View File

@@ -0,0 +1,3 @@
### Fixed
- Mark the block as invalid if it has an invalid signature.

View File

@@ -0,0 +1,3 @@
### Ignored
- Remove redundant check for genesis root at startup.

View File

@@ -0,0 +1,3 @@
### Changed
- Improve returning individual message errors from Beacon API.

View File

@@ -0,0 +1,3 @@
### Fixed
- Display error messages from the server verbatim when they are not encoded as `application/json`.

Some files were not shown because too many files have changed in this diff Show More