Compare commits

23 Commits
peerDAS ... bal

Author SHA1 Message Date
terence tsao
c2bea8a504 implement eip-7928 as a gloas fork without eip-7732 2025-09-17 10:22:23 -07:00
james-prysm
df86f57507 deprecate /prysm/v1/beacon/blobs (#15643)
* adding in check for post fulu support

* gaz

* fixing unit tests and typo

* gaz

* manu feedback

* addressing manu's suggestion

* fixing tests
2025-09-10 21:44:50 +00:00
james-prysm
9b0a3e9632 default validator client rest mode to ssz for post block (#15645)
* wip

* fixing tests

* updating unit tests for coverage

* gofmt

* changelog

* consolidated propose beacon block tests and improved code coverage

* gaz

* partial review feedback

* continuation fixing feedback on functions

* adding log

* only fall back on 406

* fixing broken tests

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/client/beacon-api/propose_beacon_block.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek review feedback

* more feedback

* radek suggestion

* fixing unit tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 21:08:11 +00:00
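The "only fall back on 406" bullet above describes a content-negotiation pattern: the validator client posts the block SSZ-encoded first and retries with JSON only when the beacon node answers 406 Not Acceptable. A minimal sketch of that pattern, assuming a generic beacon-API endpoint and placeholder payloads rather than Prysm's actual client code:

package main

import (
    "bytes"
    "fmt"
    "net/http"
)

// proposeBlock posts the SSZ-encoded block first and falls back to a JSON body
// only when the server answers 406 Not Acceptable. The endpoint path and the
// two payloads are placeholders, not Prysm's actual client code.
func proposeBlock(c *http.Client, baseURL string, sszBody, jsonBody []byte) error {
    resp, err := c.Post(baseURL+"/eth/v2/beacon/blocks", "application/octet-stream", bytes.NewReader(sszBody))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusNotAcceptable {
        if resp.StatusCode >= 300 {
            return fmt.Errorf("ssz block proposal failed: %s", resp.Status)
        }
        return nil // SSZ accepted by the beacon node.
    }
    // The node rejected the SSZ content type; retry the same block as JSON.
    resp, err = c.Post(baseURL+"/eth/v2/beacon/blocks", "application/json", bytes.NewReader(jsonBody))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 300 {
        return fmt.Errorf("json fallback failed: %s", resp.Status)
    }
    return nil
}

func main() {
    // Usage sketch against an assumed local beacon node.
    if err := proposeBlock(&http.Client{}, "http://localhost:3500", []byte{}, []byte(`{}`)); err != nil {
        fmt.Println("proposal failed:", err)
    }
}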
kasey
5e079aa62c justin's suggestion to unwedge CI (#15679)
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-10 20:31:29 +00:00
terence
5c68ec5c39 spectests: Add Fulu proposer lookahead epoch processing tests (#15667) 2025-09-10 18:51:19 +00:00
terence
5410232bef spectests: Add fulu fork transition tests for mainnet and minimal (#15666) 2025-09-10 18:51:04 +00:00
Radosław Kapka
3f5c4df7e0 Optimize processing of slashings (#14990)
* Calculate max epoch and churn for slashing once

* calculate once for proposer and attester slashings

* changelog <3

* introduce struct

* check if err is nil in ProcessVoluntaryExits

* rename exitData to exitInfo and return from functions

* cleanup + tests

* cleanup after rebase

* Potuz's review

* pre-calculate total active balance

* remove `slashValidatorFunc` closure

* Avoid a second validator loop

    🤖 Generated with [Claude Code](https://claude.ai/code)

    Co-Authored-By: Claude <noreply@anthropic.com>

* remove balance parameter from slashing functions

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: potuz <potuz@prysmaticlabs.com>
2025-09-10 18:14:11 +00:00
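Judging from the call sites in the diffs below (`v.ExitInformation(st)`, `exitInfo.HighestExitEpoch`, `exitInfo.TotalActiveBalance`), this optimization computes the exit-queue information and total active balance once per block and threads a single struct through every slashing and voluntary exit. A simplified, self-contained sketch of that pattern; the type and field names are stand-ins inferred from the diff, not Prysm's exact definitions:

package main

import "fmt"

type Epoch uint64

const farFutureEpoch = ^Epoch(0)

type validator struct {
    ExitEpoch        Epoch
    EffectiveBalance uint64
    Active           bool
}

// exitInfo carries everything the slashing/exit path needs, so each operation
// in a block no longer rescans the whole validator registry.
type exitInfo struct {
    HighestExitEpoch   Epoch  // highest exit epoch already scheduled in the registry
    Churn              uint64 // validators already exiting at that epoch
    TotalActiveBalance uint64 // summed once per block
}

// exitInformation performs the single pass that the per-operation code used to repeat.
func exitInformation(vals []validator) *exitInfo {
    info := &exitInfo{}
    for _, v := range vals {
        if v.Active {
            info.TotalActiveBalance += v.EffectiveBalance
        }
        if v.ExitEpoch == farFutureEpoch {
            continue // validator is not exiting
        }
        switch {
        case v.ExitEpoch > info.HighestExitEpoch:
            info.HighestExitEpoch = v.ExitEpoch
            info.Churn = 1
        case v.ExitEpoch == info.HighestExitEpoch:
            info.Churn++
        }
    }
    return info
}

func main() {
    vals := []validator{
        {ExitEpoch: farFutureEpoch, EffectiveBalance: 32, Active: true},
        {ExitEpoch: 6, EffectiveBalance: 32, Active: true},
        {ExitEpoch: 6, EffectiveBalance: 31, Active: false},
    }
    info := exitInformation(vals)
    fmt.Println(info.HighestExitEpoch, info.Churn, info.TotalActiveBalance) // 6 2 64
}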
Preston Van Loon
5c348dff59 Add erigon/caplin to list of known p2p agents. (#15678) 2025-09-10 17:57:24 +00:00
james-prysm
8136ff7c3a adding remote signer changes for fulu (#15498)
* adding web 3 signer changes for fulu

* missed adding fulu values

* add accounting for fulu version

* updated web3signer version and fixing linting

* Update validator/keymanager/remote-web3signer/types/requests_test.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* Update validator/keymanager/remote-web3signer/types/mock/mocks.go

Co-authored-by: Radosław Kapka <rkapka@wp.pl>

* radek suggestions

* removing redundant check and removing old function, changed changelog to reflect

* gaz

* radek's suggestion

* fixing test as v1 proof is no longer used

* fixing more tests

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-10 16:58:53 +00:00
Preston Van Loon
f690af81fa Add diagnostic logging for 'invalid data returned from peer' errors (#15674)
When peers return invalid data during initial sync, log the specific
validation failure reason. This helps identify:
- Whether peer exceeded requested block count
- Whether peer exceeded MAX_REQUEST_BLOCKS protocol limit
- Whether blocks are outside the requested slot range
- Whether blocks are out of order (not increasing or wrong step)

Each log includes the specific condition that failed, making it easier
to debug whether the issue is with peer implementations or request
validation logic.
2025-09-09 22:08:37 +00:00
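The checks listed in the commit message map onto a straightforward per-response validation pass over a blocks-by-range reply. A hedged sketch of such validation, with illustrative types and limits rather than Prysm's exact code:

package main

import (
    "errors"
    "fmt"
)

// Illustrative limit; the real value comes from the network spec config.
const maxRequestBlocks = 1024

type block struct{ Slot uint64 }

type byRangeRequest struct {
    StartSlot, Count, Step uint64
}

// validateBlocksByRangeResponse reproduces the kinds of checks the commit adds
// diagnostic logging for: count limits, slot-range membership, and ordering.
func validateBlocksByRangeResponse(req byRangeRequest, blocks []block) error {
    if uint64(len(blocks)) > req.Count {
        return fmt.Errorf("peer returned %d blocks, more than the %d requested", len(blocks), req.Count)
    }
    if uint64(len(blocks)) > maxRequestBlocks {
        return errors.New("peer exceeded MAX_REQUEST_BLOCKS")
    }
    end := req.StartSlot + req.Count*req.Step
    var prev uint64
    for i, b := range blocks {
        if b.Slot < req.StartSlot || b.Slot >= end {
            return fmt.Errorf("block slot %d outside requested range [%d, %d)", b.Slot, req.StartSlot, end)
        }
        if req.Step > 0 && (b.Slot-req.StartSlot)%req.Step != 0 {
            return fmt.Errorf("block slot %d does not land on request step %d", b.Slot, req.Step)
        }
        if i > 0 && b.Slot <= prev {
            return fmt.Errorf("blocks out of order: slot %d follows slot %d", b.Slot, prev)
        }
        prev = b.Slot
    }
    return nil
}

func main() {
    req := byRangeRequest{StartSlot: 100, Count: 4, Step: 1}
    err := validateBlocksByRangeResponse(req, []block{{Slot: 100}, {Slot: 103}, {Slot: 102}})
    fmt.Println(err) // blocks out of order: slot 102 follows slot 103
}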
kasey
029b896c79 refactor subscription code to enable peer discovery asap (#15660)
* refactor subscription code to enable peer discovery asap

* fix subscription tests

* hunting down the other test initialization bugs

* refactor subscription parameter tracking type

* manu naming feedback

* manu naming feedback

* missing godoc

* protect tracker from nil subscriptions

---------

Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
2025-09-09 21:00:34 +00:00
Jun Song
e1117a7de2 SSZ-QL: Handle Vector type & Add SSZ tag parser for multi-dimensional parsing (#15668)
* Add vectorInfo

* Add 2D bytes field for test

* Add tag_parser for parsing SSZ tags

* Integrate tag parser with analyzer

* Add ByteList test case

* Changelog

* Better printing feature with Stringer implementation

* Return error for non-determined case without printing other values

* Update tag_parser.go: handle Vector and List mutually exclusive (inspired by OffchainLabs/fastssz)

* Make linter happy

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-09 20:35:06 +00:00
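The "Vector and List mutually exclusive" bullet refers to fastssz-style struct tags, where `ssz-size` pins a dimension to a fixed length (vector) and `ssz-max` bounds a variable-length dimension (list); a single dimension should carry one or the other, never both. A rough sketch of a tag parser enforcing that rule; the tag names follow the fastssz convention, everything else is illustrative:

package main

import (
    "fmt"
    "reflect"
    "strings"
)

// dimension describes one dimension of a (possibly multi-dimensional) field.
type dimension struct {
    IsVector bool   // fixed length, from ssz-size
    IsList   bool   // bounded variable length, from ssz-max
    Bound    string // the textual bound, e.g. "32" or "16"
}

// parseSSZTags reads the ssz-size and ssz-max tags dimension by dimension and
// rejects any dimension that is declared as both vector and list.
func parseSSZTags(field reflect.StructField) ([]dimension, error) {
    sizes := strings.Split(field.Tag.Get("ssz-size"), ",")
    maxes := strings.Split(field.Tag.Get("ssz-max"), ",")
    n := len(sizes)
    if len(maxes) > n {
        n = len(maxes)
    }
    dims := make([]dimension, 0, n)
    for i := 0; i < n; i++ {
        size, max := "", ""
        if i < len(sizes) {
            size = strings.TrimSpace(sizes[i])
        }
        if i < len(maxes) {
            max = strings.TrimSpace(maxes[i])
        }
        hasSize := size != "" && size != "?"
        hasMax := max != "" && max != "?"
        if hasSize && hasMax {
            return nil, fmt.Errorf("dimension %d of %s is declared as both vector and list", i, field.Name)
        }
        bound := max
        if hasSize {
            bound = size
        }
        dims = append(dims, dimension{IsVector: hasSize, IsList: hasMax, Bound: bound})
    }
    return dims, nil
}

func main() {
    type container struct {
        // Outer dimension: list of up to 16 entries; inner dimension: 32-byte vector.
        Roots [][]byte `ssz-size:"?,32" ssz-max:"16"`
    }
    f, _ := reflect.TypeOf(container{}).FieldByName("Roots")
    dims, err := parseSSZTags(f)
    fmt.Println(dims, err) // [{false true 16} {true false 32}] <nil>
}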
Potuz
39b2a02f66 Remove BLS change broadcasting at the fork (#15659)
* Remove BLS change broadcasting at the fork

* Changelog
2025-09-09 16:47:16 +00:00
Galoretka
4e8a710b64 chore: Remove duplicated test case from TestCollector (#15672)
* chore: Remove duplicated test case from TestCollector

* Create Galoretka_fix-leakybucket-test-duplicate.md
2025-09-09 12:49:54 +00:00
Preston Van Loon
7191a5bcdf PeerDAS: find data column peers when tracking validators (#15654)
* Subscription: Get fanout peers in all data column subnets when at least one validator is connected

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Apply suggestion from @nalepae

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>

* Updated test structure to @nalepae suggestion

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2025-09-09 00:31:15 +00:00
Muzry
d335a52c49 fixed the empty dirs not being removed (#15573)
* fixed the empty dirs not being removed

* update list empty dirs first

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
2025-09-08 20:55:50 +00:00
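The bullets above ("list empty dirs first", then remove) suggest a two-step cleanup: collect candidate directories in one walk, then delete the empty ones deepest-first so that parents emptied by the deletion are picked up as well. A generic sketch of that approach, not the Prysm implementation:

package main

import (
    "os"
    "path/filepath"
    "sort"
)

// removeEmptyDirs lists directories first and then deletes the empty ones
// deepest-first, re-checking each at deletion time so a parent that only
// contained empty children is also removed.
func removeEmptyDirs(root string) error {
    var dirs []string
    err := filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
        if err != nil {
            return err
        }
        if d.IsDir() && path != root {
            dirs = append(dirs, path)
        }
        return nil
    })
    if err != nil {
        return err
    }
    // Longer paths first: a child is always processed before its parent.
    sort.Slice(dirs, func(i, j int) bool { return len(dirs[i]) > len(dirs[j]) })
    for _, dir := range dirs {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        if len(entries) == 0 {
            if err := os.Remove(dir); err != nil {
                return err
            }
        }
    }
    return nil
}

func main() {
    _ = removeEmptyDirs("/tmp/example-data-dir") // path is illustrative only
}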
NikolaiKryshnev
c7401f5e75 Fix link (#15631)
* Update CHANGELOG.md

* Update DEPENDENCIES.md

* Create 2025-08-25-docs-links.md
2025-09-08 20:40:18 +00:00
Preston Van Loon
0057cc57b5 spectests: Set mainnet spectests to size=large. This helps with some CI timeouts. CI is showing an average time of 4m39s, which is too close to the 5m timeout for a moderate test (#15664) 2025-09-08 20:01:34 +00:00
Jun Song
b1dc5e485d SSZ-QL: Handle List type & Populate the actual value dynamically (#15637)
* Add VariableTestContainer in ssz_query.proto

* Add listInfo

* Use errors.New for making an error with a static string literal

* Add listInfo field when analyzing the List type

* Persist the field order in the container

* Add actualOffset and goFieldName at fieldInfo

* Add PopulateFromValue function & update test runner

* Handle slice of ssz object for marshalling

* Add CalculateOffsetAndLength test

* Add comments for better doc

* Changelog :)

* Apply reviews from Radek

* Remove actualOffset and update offset field instead

* Add Nested container of variable-sized for testing nested path

* Fix offset adding logics: for variable-sized field, always add 4 instead of its fixed size

* Fix multiple import issue

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-09-08 16:50:24 +00:00
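The "always add 4 instead of its fixed size" bullet is the general SSZ layout rule: in a container's fixed part, a fixed-size field occupies its own byte width, while a variable-size field occupies only a 4-byte offset pointing into the variable part. A small standalone illustration of that offset arithmetic (generic SSZ, not the SSZ-QL analyzer itself):

package main

import "fmt"

// fieldDesc describes one container field for offset computation.
type fieldDesc struct {
    name      string
    fixedSize uint64 // size in bytes if fixed-size; ignored when variable
    variable  bool
}

const offsetSize = 4 // a variable-size field occupies a 4-byte offset in the fixed part

// fixedPartOffsets returns each field's offset within the fixed part of an
// SSZ container: fixed fields advance the cursor by their own size, variable
// fields by 4.
func fixedPartOffsets(fields []fieldDesc) map[string]uint64 {
    offsets := make(map[string]uint64, len(fields))
    var cursor uint64
    for _, f := range fields {
        offsets[f.name] = cursor
        if f.variable {
            cursor += offsetSize
        } else {
            cursor += f.fixedSize
        }
    }
    return offsets
}

func main() {
    // Example: uint64 slot, variable-size byte list body, 32-byte root.
    fields := []fieldDesc{
        {name: "slot", fixedSize: 8},
        {name: "body", variable: true},
        {name: "root", fixedSize: 32},
    }
    fmt.Println(fixedPartOffsets(fields)) // map[body:8 root:12 slot:0]
}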
Radosław Kapka
f035da6fc5 Aggregate and pack sync committee messages (#15608)
* Aggregate and pack sync committee messages

* test

* simplify error check

* changelog <3

* fix assert import

* remove parallelization

* use sync committee cache

* ignore bits already set in pool messages

* fuzz fix

* cleanup

* test panic fix

* clear cache in tests
2025-09-08 15:17:19 +00:00
Muzry
854f4bc9a3 fix getBlockAttestationsV2 to return [] instead of null when data is empty (#15651) 2025-09-08 14:45:08 +00:00
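The fix above comes down to a standard Go `encoding/json` behavior: a nil slice marshals to `null`, while an initialized empty slice marshals to `[]`. A minimal standalone illustration (the response type is made up for the example):

package main

import (
    "encoding/json"
    "fmt"
)

type attestationsResponse struct {
    Data []string `json:"data"`
}

func main() {
    var nilSlice []string
    nilJSON, _ := json.Marshal(attestationsResponse{Data: nilSlice})
    emptyJSON, _ := json.Marshal(attestationsResponse{Data: []string{}})
    fmt.Println(string(nilJSON))   // {"data":null}
    fmt.Println(string(emptyJSON)) // {"data":[]}
}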
james-prysm
1933adedbf update to v1.6.0-alpha.6 (#15658)
* wip workspace update

* update ethspecify yml

* fixing ethspecify

* fixing ethspecify

* fixing loader test

* changelog and reverting something in configs

* adding bad hash
2025-09-05 17:20:13 +00:00
Potuz
278b796e43 Fix next epoch proposer duties (#15642)
* Fix next epoch proposer duties

* Do not update state's slot when computing the proposer

Also do not call Fulu's proposer lookahead if the requested epoch is not
current or next.

* retract Terence's test

* Fix tests

* removing epoch check to pass spec test

* reverting rollback and fixing test setup

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
2025-08-29 21:45:23 +00:00
215 changed files with 14312 additions and 4942 deletions

View File

@@ -2993,7 +2993,7 @@ There are two known issues with this release:
### Added
- Web3Signer support. See the [documentation](https://docs.prylabs.network/docs/next/wallet/web3signer) for more
- Web3Signer support. See the [documentation](https://prysm.offchainlabs.com/docs/manage-wallet/web3signer/) for more
details.
- Bellatrix support. See [kiln testnet instructions](https://hackmd.io/OqIoTiQvS9KOIataIFksBQ?view)
- Weak subjectivity sync / checkpoint sync. This is an experimental feature and may have unintended side effects for

View File

@@ -2,7 +2,7 @@
Prysm is a Go project with many complicated dependencies, including some C++ based libraries. There
are two parts to Prysm's dependency management. Go modules and bazel managed dependencies. Be sure
to read [Why Bazel?](https://github.com/OffchainLabs/documentation/issues/138) to fully
to read [Why Bazel?](https://prysm.offchainlabs.com/docs/install-prysm/install-with-bazel/#why-bazel) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.

View File

@@ -253,16 +253,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.6.0-alpha.5"
consensus_spec_version = "v1.6.0-alpha.6"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-BXuEb1XbeSft0qzVFnoB8KC0YR1qM3ybT5lKUDbUWn8=",
"minimal": "sha256-EjwSHgBbWSoy5hm9V+A/bVMabyojaKsBNPrRtuPVq4k=",
"mainnet": "sha256-OGWMzarzaV1B9mVpy48/DCUbhjfX+b64pAxWwPLWhAs=",
"general": "sha256-7wkWuahuCO37uVYnxq8Badvi+jY907pBj68ixL8XDOI=",
"minimal": "sha256-Qy/f27N0LffS/ej7VhIubwDejD6LMK0VdenKkqtZVt4=",
"mainnet": "sha256-3H7mu5yE+FGz2Wr/nc8Nd9aEu93YoEpsYtn0zBSoeDE=",
},
version = consensus_spec_version,
)
@@ -278,7 +278,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-FQWR5EZuVcQGR0ol9vpd7eunnfGexJ/7J3xycrFEJbU=",
integrity = "sha256-uvz3XfMTGfy3/BtQQoEp5XQOgrWgcH/5Zo/gR0iiP+k=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -1,9 +1,8 @@
package httprest
import (
"time"
"net/http"
"time"
"github.com/OffchainLabs/prysm/v6/api/server/middleware"
)

View File

@@ -366,7 +366,7 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
randBlob := random.GetRandBlob(123)
var blob Blob
copy(blob[:], randBlob[:])
// Create invalid commitment (wrong size)
invalidCommitment := make([]byte, 32) // Should be 48 bytes
cellProofs := make([][]byte, numberOfColumns)
@@ -456,10 +456,10 @@ func TestVerifyCellKZGProofBatchFromBlobData(t *testing.T) {
copy(blob[:], randBlob[:])
commitment, err := BlobToKZGCommitment(&blob)
require.NoError(t, err)
blobs[i] = blob[:]
commitments[i] = commitment[:]
// Add cell proofs - make some invalid in the second blob
for j := uint64(0); j < numberOfColumns; j++ {
if i == 1 && j == 64 {

View File

@@ -67,6 +67,30 @@ func (s *SyncCommitteeCache) Clear() {
s.cache = cache.NewFIFO(keyFn)
}
// CurrentPeriodPositions returns the current period sync committee positions of the input validator
// indices. If an input validator index has no assignment, an empty list is returned for that
// validator. If the input root does not exist in the cache, `ErrNonExistingSyncCommitteeKey` is
// returned; in that case, manually checking the state for the positions is recommended.
func (s *SyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
s.lock.RLock()
defer s.lock.RUnlock()
pos, err := s.positionsInCommittee(root, indices)
if err != nil {
return nil, err
}
result := make([][]primitives.CommitteeIndex, len(pos))
for i, p := range pos {
if p == nil {
result[i] = []primitives.CommitteeIndex{}
} else {
result[i] = p.currentPeriod
}
}
return result, nil
}
// CurrentPeriodIndexPosition returns the current period index position of a validator index with respect to the
// sync committee. If the input validator index has no assignment, an empty list will be returned.
// If the input root does not exist in cache, `ErrNonExistingSyncCommitteeKey` is returned.
@@ -104,11 +128,7 @@ func (s *SyncCommitteeCache) NextPeriodIndexPosition(root [32]byte, valIdx primi
return pos.nextPeriod, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of validator index to its index(s) position in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
func (s *SyncCommitteeCache) positionsInCommittee(root [32]byte, indices []primitives.ValidatorIndex) ([]*positionInCommittee, error) {
obj, exists, err := s.cache.GetByKey(key(root))
if err != nil {
return nil, err
@@ -121,13 +141,33 @@ func (s *SyncCommitteeCache) idxPositionInCommittee(
if !ok {
return nil, errNotSyncCommitteeIndexPosition
}
idxInCommittee, ok := item.vIndexToPositionMap[valIdx]
if !ok {
SyncCommitteeCacheMiss.Inc()
result := make([]*positionInCommittee, len(indices))
for i, idx := range indices {
idxInCommittee, ok := item.vIndexToPositionMap[idx]
if ok {
SyncCommitteeCacheHit.Inc()
result[i] = idxInCommittee
} else {
SyncCommitteeCacheMiss.Inc()
result[i] = nil
}
}
return result, nil
}
// Helper function for `CurrentPeriodIndexPosition` and `NextPeriodIndexPosition` to return a mapping
// of a validator index to its position(s) in the sync committee.
func (s *SyncCommitteeCache) idxPositionInCommittee(
root [32]byte, valIdx primitives.ValidatorIndex,
) (*positionInCommittee, error) {
positions, err := s.positionsInCommittee(root, []primitives.ValidatorIndex{valIdx})
if err != nil {
return nil, err
}
if len(positions) == 0 {
return nil, nil
}
SyncCommitteeCacheHit.Inc()
return idxInCommittee, nil
return positions[0], nil
}
// UpdatePositionsInCommittee updates caching of validators position in sync committee in respect to

View File

@@ -16,6 +16,11 @@ func NewSyncCommittee() *FakeSyncCommitteeCache {
return &FakeSyncCommitteeCache{}
}
// CurrentPeriodPositions -- fake
func (s *FakeSyncCommitteeCache) CurrentPeriodPositions(root [32]byte, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
return nil, nil
}
// CurrentEpochIndexPosition -- fake.
func (s *FakeSyncCommitteeCache) CurrentPeriodIndexPosition(root [32]byte, valIdx primitives.ValidatorIndex) ([]primitives.CommitteeIndex, error) {
return nil, nil

View File

@@ -5,6 +5,7 @@ import (
"sort"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/container/slice"
@@ -39,11 +40,11 @@ func ProcessAttesterSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []ethpb.AttSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, slashFunc)
beaconState, err = ProcessAttesterSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -56,7 +57,7 @@ func ProcessAttesterSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing ethpb.AttSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
if err := VerifyAttesterSlashing(ctx, beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify attester slashing")
@@ -75,10 +76,9 @@ func ProcessAttesterSlashing(
return nil, err
}
if helpers.IsSlashableValidator(val.ActivationEpoch(), val.WithdrawableEpoch(), val.Slashed(), currentEpoch) {
beaconState, err = slashFunc(ctx, beaconState, primitives.ValidatorIndex(validatorIndex))
beaconState, err = validators.SlashValidator(ctx, beaconState, primitives.ValidatorIndex(validatorIndex), exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash validator index %d",
validatorIndex)
return nil, errors.Wrapf(err, "could not slash validator index %d", validatorIndex)
}
slashedAny = true
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -44,11 +45,10 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
Target: &ethpb.Checkpoint{Epoch: 1}},
})}}
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -62,16 +62,15 @@ func TestProcessAttesterSlashings_DataNotSlashable(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "attestations are not slashable", err)
}
func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T) {
var registry []*ethpb.Validator
currentSlot := primitives.Slot(0)
beaconState, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Validators: []*ethpb.Validator{{}},
Slot: currentSlot,
})
require.NoError(t, err)
@@ -101,7 +100,7 @@ func TestProcessAttesterSlashings_IndexedAttestationFailedToVerify(t *testing.T)
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
assert.ErrorContains(t, "validator indices count exceeds MAX_VALIDATORS_PER_COMMITTEE", err)
}
@@ -243,7 +242,7 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
currentSlot := 2 * params.BeaconConfig().SlotsPerEpoch
require.NoError(t, tc.st.SetSlot(currentSlot))
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), tc.st, []ethpb.AttSlashing{tc.slashing}, v.ExitInformation(tc.st))
require.NoError(t, err)
newRegistry := newState.Validators()
@@ -265,3 +264,83 @@ func TestProcessAttesterSlashings_AppliesCorrectStatus(t *testing.T) {
})
}
}
func TestProcessAttesterSlashing_ExitEpochGetsUpdated(t *testing.T) {
st, keys := util.DeterministicGenesisStateElectra(t, 8)
bal, err := helpers.TotalActiveBalance(st)
require.NoError(t, err)
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(bal))
vals := st.Validators()
// We set the total effective balance of slashed validators
// higher than the churn limit for a single epoch.
vals[0].EffectiveBalance = uint64(perEpochChurn / 3)
vals[1].EffectiveBalance = uint64(perEpochChurn / 3)
vals[2].EffectiveBalance = uint64(perEpochChurn / 3)
vals[3].EffectiveBalance = uint64(perEpochChurn / 3)
require.NoError(t, st.SetValidators(vals))
sl1att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{0, 1},
})
sl1att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{0, 1},
})
slashing1 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl1att1,
Attestation_2: sl1att2,
}
sl2att1 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 1},
},
AttestingIndices: []uint64{2, 3},
})
sl2att2 := util.HydrateIndexedAttestationElectra(&ethpb.IndexedAttestationElectra{
AttestingIndices: []uint64{2, 3},
})
slashing2 := &ethpb.AttesterSlashingElectra{
Attestation_1: sl2att1,
Attestation_2: sl2att2,
}
domain, err := signing.Domain(st.Fork(), 0, params.BeaconConfig().DomainBeaconAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
signingRoot, err := signing.ComputeSigningRoot(sl1att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 := keys[0].Sign(signingRoot[:])
sig1 := keys[1].Sign(signingRoot[:])
aggregateSig := bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl1att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[0].Sign(signingRoot[:])
sig1 = keys[1].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl1att2.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att1.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att1.Signature = aggregateSig.Marshal()
signingRoot, err = signing.ComputeSigningRoot(sl2att2.GetData(), domain)
assert.NoError(t, err, "Could not get signing root of beacon block header")
sig0 = keys[2].Sign(signingRoot[:])
sig1 = keys[3].Sign(signingRoot[:])
aggregateSig = bls.AggregateSignatures([]bls.Signature{sig0, sig1})
sl2att2.Signature = aggregateSig.Marshal()
exitInfo := v.ExitInformation(st)
assert.Equal(t, primitives.Epoch(0), exitInfo.HighestExitEpoch)
_, err = blocks.ProcessAttesterSlashings(t.Context(), st, []ethpb.AttSlashing{slashing1, slashing2}, exitInfo)
require.NoError(t, err)
assert.Equal(t, primitives.Epoch(6), exitInfo.HighestExitEpoch)
}

View File

@@ -191,7 +191,7 @@ func TestFuzzProcessProposerSlashings_10000(t *testing.T) {
fuzzer.Fuzz(p)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.SlashValidator)
r, err := ProcessProposerSlashings(ctx, s, []*ethpb.ProposerSlashing{p}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, p)
}
@@ -224,7 +224,7 @@ func TestFuzzProcessAttesterSlashings_10000(t *testing.T) {
fuzzer.Fuzz(a)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.SlashValidator)
r, err := ProcessAttesterSlashings(ctx, s, []ethpb.AttSlashing{a}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and slashing: %v", r, err, state, a)
}
@@ -334,7 +334,7 @@ func TestFuzzProcessVoluntaryExits_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(ctx, s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and exit: %v", r, err, state, e)
}
@@ -351,7 +351,7 @@ func TestFuzzProcessVoluntaryExitsNoVerify_10000(t *testing.T) {
fuzzer.Fuzz(e)
s, err := state_native.InitializeFromProtoUnsafePhase0(state)
require.NoError(t, err)
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e})
r, err := ProcessVoluntaryExits(t.Context(), s, []*ethpb.SignedVoluntaryExit{e}, v.ExitInformation(s))
if err != nil && r != nil {
t.Fatalf("return value should be nil on err. found: %v on error: %v for state: %v and block: %v", r, err, state, e)
}

View File

@@ -94,7 +94,7 @@ func TestProcessAttesterSlashings_RegressionSlashableIndices(t *testing.T) {
for i, s := range b.Block.Body.AttesterSlashings {
ss[i] = s
}
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.SlashValidator)
newState, err := blocks.ProcessAttesterSlashings(t.Context(), beaconState, ss, v.ExitInformation(beaconState))
require.NoError(t, err)
newRegistry := newState.Validators()
if !newRegistry[expectedSlashedVal].Slashed {

View File

@@ -9,7 +9,6 @@ import (
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
@@ -50,13 +49,12 @@ func ProcessVoluntaryExits(
ctx context.Context,
beaconState state.BeaconState,
exits []*ethpb.SignedVoluntaryExit,
exitInfo *v.ExitInfo,
) (state.BeaconState, error) {
// Avoid calculating the epoch churn if no exits exist.
if len(exits) == 0 {
return beaconState, nil
}
maxExitEpoch, churn := v.MaxExitEpochAndChurn(beaconState)
var exitEpoch primitives.Epoch
for idx, exit := range exits {
if exit == nil || exit.Exit == nil {
return nil, errors.New("nil voluntary exit in block body")
@@ -68,15 +66,8 @@ func ProcessVoluntaryExits(
if err := VerifyExitAndSignature(val, beaconState, exit); err != nil {
return nil, errors.Wrapf(err, "could not verify exit %d", idx)
}
beaconState, exitEpoch, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, maxExitEpoch, churn)
if err == nil {
if exitEpoch > maxExitEpoch {
maxExitEpoch = exitEpoch
churn = 1
} else if exitEpoch == maxExitEpoch {
churn++
}
} else if !errors.Is(err, v.ErrValidatorAlreadyExited) {
beaconState, err = v.InitiateValidatorExit(ctx, beaconState, exit.Exit.ValidatorIndex, exitInfo)
if err != nil && !errors.Is(err, v.ErrValidatorAlreadyExited) {
return nil, err
}
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -46,7 +47,7 @@ func TestProcessVoluntaryExits_NotActiveLongEnoughToExit(t *testing.T) {
}
want := "validator has not been active long enough to exit"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -76,7 +77,7 @@ func TestProcessVoluntaryExits_ExitAlreadySubmitted(t *testing.T) {
}
want := "validator with index 0 has already submitted an exit, which will take place at epoch: 10"
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
_, err = blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
assert.ErrorContains(t, want, err)
}
@@ -124,7 +125,7 @@ func TestProcessVoluntaryExits_AppliesCorrectStatus(t *testing.T) {
},
}
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits)
newState, err := blocks.ProcessVoluntaryExits(t.Context(), state, b.Block.Body.VoluntaryExits, validators.ExitInformation(state))
require.NoError(t, err, "Could not process exits")
newRegistry := newState.Validators()
if newRegistry[0].ExitEpoch != helpers.ActivationExitEpoch(primitives.Epoch(state.Slot()/params.BeaconConfig().SlotsPerEpoch)) {

View File

@@ -7,9 +7,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
@@ -19,11 +19,6 @@ import (
// ErrCouldNotVerifyBlockHeader is returned when a block header's signature cannot be verified.
var ErrCouldNotVerifyBlockHeader = errors.New("could not verify beacon block header")
type slashValidatorFunc func(
ctx context.Context,
st state.BeaconState,
vid primitives.ValidatorIndex) (state.BeaconState, error)
// ProcessProposerSlashings is one of the operations performed
// on each processed beacon block to slash proposers based on
// slashing conditions if any slashable events occurred.
@@ -54,11 +49,11 @@ func ProcessProposerSlashings(
ctx context.Context,
beaconState state.BeaconState,
slashings []*ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
for _, slashing := range slashings {
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, slashFunc)
beaconState, err = ProcessProposerSlashing(ctx, beaconState, slashing, exitInfo)
if err != nil {
return nil, err
}
@@ -71,7 +66,7 @@ func ProcessProposerSlashing(
ctx context.Context,
beaconState state.BeaconState,
slashing *ethpb.ProposerSlashing,
slashFunc slashValidatorFunc,
exitInfo *validators.ExitInfo,
) (state.BeaconState, error) {
var err error
if slashing == nil {
@@ -80,7 +75,7 @@ func ProcessProposerSlashing(
if err = VerifyProposerSlashing(beaconState, slashing); err != nil {
return nil, errors.Wrap(err, "could not verify proposer slashing")
}
beaconState, err = slashFunc(ctx, beaconState, slashing.Header_1.Header.ProposerIndex)
beaconState, err = validators.SlashValidator(ctx, beaconState, slashing.Header_1.Header.ProposerIndex, exitInfo)
if err != nil {
return nil, errors.Wrapf(err, "could not slash proposer index %d", slashing.Header_1.Header.ProposerIndex)
}

View File

@@ -50,7 +50,7 @@ func TestProcessProposerSlashings_UnmatchedHeaderSlots(t *testing.T) {
},
}
want := "mismatched header slots"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -83,7 +83,7 @@ func TestProcessProposerSlashings_SameHeaders(t *testing.T) {
},
}
want := "expected slashing headers to differ"
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -133,7 +133,7 @@ func TestProcessProposerSlashings_ValidatorNotSlashable(t *testing.T) {
"validator with key %#x is not slashable",
bytesutil.ToBytes48(beaconState.Validators()[0].PublicKey),
)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.SlashValidator)
_, err = blocks.ProcessProposerSlashings(t.Context(), beaconState, b.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
assert.ErrorContains(t, want, err)
}
@@ -172,7 +172,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatus(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -220,7 +220,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusAltair(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -268,7 +268,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusBellatrix(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()
@@ -316,7 +316,7 @@ func TestProcessProposerSlashings_AppliesCorrectStatusCapella(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Body.ProposerSlashings = slashings
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.SlashValidator)
newState, err := blocks.ProcessProposerSlashings(t.Context(), beaconState, block.Block.Body.ProposerSlashings, v.ExitInformation(beaconState))
require.NoError(t, err)
newStateVals := newState.Validators()

View File

@@ -84,8 +84,8 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) error {
// Handle validator ejections.
for _, idx := range eligibleForEjection {
var err error
// exitQueueEpoch and churn arguments are not used in electra.
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, 0 /*exitQueueEpoch*/, 0 /*churn*/)
// exit info is not used in electra
st, err = validators.InitiateValidatorExit(ctx, st, idx, &validators.ExitInfo{})
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return fmt.Errorf("failed to initiate validator exit at index %d: %w", idx, err)
}

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
@@ -46,18 +47,21 @@ var (
// # [New in Electra:EIP7251]
// for_ops(body.execution_payload.consolidation_requests, process_consolidation_request)
func ProcessOperations(
ctx context.Context,
st state.BeaconState,
block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
func ProcessOperations(ctx context.Context, st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
// 6110 validations are in VerifyOperationLengths
bb := block.Body()
// Electra extends the altair operations.
st, err := ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), v.SlashValidator)
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = ProcessProposerSlashings(ctx, st, bb.ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), v.SlashValidator)
st, err = ProcessAttesterSlashings(ctx, st, bb.AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -68,7 +72,7 @@ func ProcessOperations(
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}

View File

@@ -147,9 +147,8 @@ func ProcessWithdrawalRequests(ctx context.Context, st state.BeaconState, wrs []
if isFullExitRequest {
// Only exit validator if it has no pending withdrawals in the queue
if pendingBalanceToWithdraw == 0 {
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
var err error
st, _, err = validators.InitiateValidatorExit(ctx, st, vIdx, maxExitEpoch, churn)
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
if err != nil {
return nil, err
}

View File

@@ -99,8 +99,7 @@ func ProcessRegistryUpdates(ctx context.Context, st state.BeaconState) (state.Be
for _, idx := range eligibleForEjection {
// It is fine to do a quadratic loop here since this should
// barely happen
maxExitEpoch, churn := validators.MaxExitEpochAndChurn(st)
st, _, err = validators.InitiateValidatorExit(ctx, st, idx, maxExitEpoch, churn)
st, err = validators.InitiateValidatorExit(ctx, st, idx, validators.ExitInformation(st))
if err != nil && !errors.Is(err, validators.ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate exit for validator %d", idx)
}

View File

@@ -16,10 +16,10 @@ func ProcessEpoch(ctx context.Context, state state.BeaconState) error {
if err := electra.ProcessEpoch(ctx, state); err != nil {
return errors.Wrap(err, "could not process epoch in fulu transition")
}
return processProposerLookahead(ctx, state)
return ProcessProposerLookahead(ctx, state)
}
func processProposerLookahead(ctx context.Context, state state.BeaconState) error {
func ProcessProposerLookahead(ctx context.Context, state state.BeaconState) error {
_, span := trace.StartSpan(ctx, "fulu.processProposerLookahead")
defer span.End()

View File

@@ -0,0 +1,19 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["upgrade.go"],
importpath = "github.com/OffchainLabs/prysm/v6/beacon-chain/core/gloas",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,189 @@
package gloas
import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v6/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v6/config/params"
enginev1 "github.com/OffchainLabs/prysm/v6/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// UpgradeToGloas upgrades a generic beacon state and returns the Gloas version of the state.
// This implements the upgrade_to_eip7928 function from the specification.
func UpgradeToGloas(ctx context.Context, beaconState state.BeaconState) (state.BeaconState, error) {
epoch := time.CurrentEpoch(beaconState)
currentSyncCommittee, err := beaconState.CurrentSyncCommittee()
if err != nil {
return nil, err
}
nextSyncCommittee, err := beaconState.NextSyncCommittee()
if err != nil {
return nil, err
}
prevEpochParticipation, err := beaconState.PreviousEpochParticipation()
if err != nil {
return nil, err
}
currentEpochParticipation, err := beaconState.CurrentEpochParticipation()
if err != nil {
return nil, err
}
inactivityScores, err := beaconState.InactivityScores()
if err != nil {
return nil, err
}
payloadHeader, err := beaconState.LatestExecutionPayloadHeader()
if err != nil {
return nil, err
}
txRoot, err := payloadHeader.TransactionsRoot()
if err != nil {
return nil, err
}
wdRoot, err := payloadHeader.WithdrawalsRoot()
if err != nil {
return nil, err
}
wi, err := beaconState.NextWithdrawalIndex()
if err != nil {
return nil, err
}
vi, err := beaconState.NextWithdrawalValidatorIndex()
if err != nil {
return nil, err
}
summaries, err := beaconState.HistoricalSummaries()
if err != nil {
return nil, err
}
excessBlobGas, err := payloadHeader.ExcessBlobGas()
if err != nil {
return nil, err
}
blobGasUsed, err := payloadHeader.BlobGasUsed()
if err != nil {
return nil, err
}
depositRequestsStartIndex, err := beaconState.DepositRequestsStartIndex()
if err != nil {
return nil, err
}
depositBalanceToConsume, err := beaconState.DepositBalanceToConsume()
if err != nil {
return nil, err
}
exitBalanceToConsume, err := beaconState.ExitBalanceToConsume()
if err != nil {
return nil, err
}
earliestExitEpoch, err := beaconState.EarliestExitEpoch()
if err != nil {
return nil, err
}
consolidationBalanceToConsume, err := beaconState.ConsolidationBalanceToConsume()
if err != nil {
return nil, err
}
earliestConsolidationEpoch, err := beaconState.EarliestConsolidationEpoch()
if err != nil {
return nil, err
}
pendingDeposits, err := beaconState.PendingDeposits()
if err != nil {
return nil, err
}
pendingPartialWithdrawals, err := beaconState.PendingPartialWithdrawals()
if err != nil {
return nil, err
}
pendingConsolidations, err := beaconState.PendingConsolidations()
if err != nil {
return nil, err
}
proposerLookahead, err := helpers.InitializeProposerLookahead(ctx, beaconState, slots.ToEpoch(beaconState.Slot()))
if err != nil {
return nil, err
}
// Create the new Gloas execution payload header with block_access_list_root
// as specified in the EIP7928 upgrade function
latestExecutionPayloadHeader := &enginev1.ExecutionPayloadHeaderGloas{
ParentHash: payloadHeader.ParentHash(),
FeeRecipient: payloadHeader.FeeRecipient(),
StateRoot: payloadHeader.StateRoot(),
ReceiptsRoot: payloadHeader.ReceiptsRoot(),
LogsBloom: payloadHeader.LogsBloom(),
PrevRandao: payloadHeader.PrevRandao(),
BlockNumber: payloadHeader.BlockNumber(),
GasLimit: payloadHeader.GasLimit(),
GasUsed: payloadHeader.GasUsed(),
Timestamp: payloadHeader.Timestamp(),
ExtraData: payloadHeader.ExtraData(),
BaseFeePerGas: payloadHeader.BaseFeePerGas(),
BlockHash: payloadHeader.BlockHash(),
TransactionsRoot: txRoot,
WithdrawalsRoot: wdRoot,
BlobGasUsed: blobGasUsed,
ExcessBlobGas: excessBlobGas,
BlockAccessListRoot: make([]byte, 32), // New in EIP7928 - empty Root()
}
s := &ethpb.BeaconStateGloas{
GenesisTime: uint64(beaconState.GenesisTime().Unix()),
GenesisValidatorsRoot: beaconState.GenesisValidatorsRoot(),
Slot: beaconState.Slot(),
Fork: &ethpb.Fork{
PreviousVersion: beaconState.Fork().CurrentVersion,
CurrentVersion: params.BeaconConfig().GloasForkVersion, // Modified in EIP7928
Epoch: epoch,
},
LatestBlockHeader: beaconState.LatestBlockHeader(),
BlockRoots: beaconState.BlockRoots(),
StateRoots: beaconState.StateRoots(),
HistoricalRoots: beaconState.HistoricalRoots(),
Eth1Data: beaconState.Eth1Data(),
Eth1DataVotes: beaconState.Eth1DataVotes(),
Eth1DepositIndex: beaconState.Eth1DepositIndex(),
Validators: beaconState.Validators(),
Balances: beaconState.Balances(),
RandaoMixes: beaconState.RandaoMixes(),
Slashings: beaconState.Slashings(),
PreviousEpochParticipation: prevEpochParticipation,
CurrentEpochParticipation: currentEpochParticipation,
JustificationBits: beaconState.JustificationBits(),
PreviousJustifiedCheckpoint: beaconState.PreviousJustifiedCheckpoint(),
CurrentJustifiedCheckpoint: beaconState.CurrentJustifiedCheckpoint(),
FinalizedCheckpoint: beaconState.FinalizedCheckpoint(),
InactivityScores: inactivityScores,
CurrentSyncCommittee: currentSyncCommittee,
NextSyncCommittee: nextSyncCommittee,
LatestExecutionPayloadHeader: latestExecutionPayloadHeader,
NextWithdrawalIndex: wi,
NextWithdrawalValidatorIndex: vi,
HistoricalSummaries: summaries,
DepositRequestsStartIndex: depositRequestsStartIndex,
DepositBalanceToConsume: depositBalanceToConsume,
ExitBalanceToConsume: exitBalanceToConsume,
EarliestExitEpoch: earliestExitEpoch,
ConsolidationBalanceToConsume: consolidationBalanceToConsume,
EarliestConsolidationEpoch: earliestConsolidationEpoch,
PendingDeposits: pendingDeposits,
PendingPartialWithdrawals: pendingPartialWithdrawals,
PendingConsolidations: pendingConsolidations,
ProposerLookahead: proposerLookahead,
}
post, err := state_native.InitializeFromProtoUnsafeGloas(s)
if err != nil {
return nil, errors.Wrap(err, "failed to initialize post gloas beaconState")
}
return post, nil
}

View File

@@ -317,23 +317,15 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments := make(map[primitives.ValidatorIndex][]primitives.Slot)
originalStateSlot := state.Slot()
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
// Skip proposer assignment for genesis slot.
if slot == 0 {
continue
}
// Set the state's current slot.
if err := state.SetSlot(slot); err != nil {
return nil, err
}
// Determine the proposer index for the current slot.
i, err := BeaconProposerIndex(ctx, state)
i, err := BeaconProposerIndexAtSlot(ctx, state, slot)
if err != nil {
return nil, errors.Wrapf(err, "could not check proposer at slot %d", state.Slot())
return nil, errors.Wrapf(err, "could not check proposer at slot %d", slot)
}
// Append the slot to the proposer's assignments.
@@ -342,12 +334,6 @@ func ProposerAssignments(ctx context.Context, state state.BeaconState, epoch pri
}
proposerAssignments[i] = append(proposerAssignments[i], slot)
}
// Reset state back to its original slot.
if err := state.SetSlot(originalStateSlot); err != nil {
return nil, err
}
return proposerAssignments, nil
}

View File

@@ -87,6 +87,11 @@ func TotalActiveBalance(s state.ReadOnlyBeaconState) (uint64, error) {
return total, nil
}
// UpdateTotalActiveBalanceCache updates the cache with the given total active balance.
func UpdateTotalActiveBalanceCache(s state.BeaconState, total uint64) error {
return balanceCache.AddTotalEffectiveBalance(s, total)
}
// IncreaseBalance increases validator with the given 'index' balance by 'delta' in Gwei.
//
// Spec pseudocode definition:

View File

@@ -297,3 +297,30 @@ func TestIncreaseBadBalance_NotOK(t *testing.T) {
require.ErrorContains(t, "addition overflows", helpers.IncreaseBalance(state, test.i, test.nb))
}
}
func TestUpdateTotalActiveBalanceCache(t *testing.T) {
helpers.ClearCache()
// Create a test state with some validators
validators := []*ethpb.Validator{
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 32 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
{EffectiveBalance: 31 * 1e9, ExitEpoch: params.BeaconConfig().FarFutureEpoch, ActivationEpoch: 0},
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: validators,
Slot: 0,
})
require.NoError(t, err)
// Test updating cache with a specific total
testTotal := uint64(95 * 1e9) // 32 + 32 + 31 = 95
err = helpers.UpdateTotalActiveBalanceCache(state, testTotal)
require.NoError(t, err)
// Verify the cache was updated by retrieving the total active balance
// which should now return the cached value
cachedTotal, err := helpers.TotalActiveBalance(state)
require.NoError(t, err)
assert.Equal(t, testTotal, cachedTotal, "Cache should return the updated total")
}

View File

@@ -21,6 +21,39 @@ var (
syncCommitteeCache = cache.NewSyncCommittee()
)
// CurrentPeriodPositions returns committee indices of the current period sync committee for input validators.
func CurrentPeriodPositions(st state.BeaconState, indices []primitives.ValidatorIndex) ([][]primitives.CommitteeIndex, error) {
root, err := SyncPeriodBoundaryRoot(st)
if err != nil {
return nil, err
}
pos, err := syncCommitteeCache.CurrentPeriodPositions(root, indices)
if errors.Is(err, cache.ErrNonExistingSyncCommitteeKey) {
committee, err := st.CurrentSyncCommittee()
if err != nil {
return nil, err
}
// Fill in the cache on miss.
go func() {
if err := syncCommitteeCache.UpdatePositionsInCommittee(root, st); err != nil {
log.WithError(err).Error("Could not fill sync committee cache on miss")
}
}()
pos = make([][]primitives.CommitteeIndex, len(indices))
for i, idx := range indices {
pubkey := st.PubkeyAtIndex(idx)
pos[i] = findSubCommitteeIndices(pubkey[:], committee.Pubkeys)
}
return pos, nil
}
if err != nil {
return nil, err
}
return pos, nil
}
// IsCurrentPeriodSyncCommittee returns true if the input validator index belongs in the current period sync committee
// along with the sync committee root.
// 1. Checks if the public key exists in the sync committee cache

View File

@@ -17,6 +17,38 @@ import (
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func TestCurrentPeriodPositions(t *testing.T) {
helpers.ClearCache()
validators := make([]*ethpb.Validator, params.BeaconConfig().SyncCommitteeSize)
syncCommittee := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
}
syncCommittee.Pubkeys[i] = bytesutil.PadTo(k, 48)
}
state, err := state_native.InitializeFromProtoAltair(&ethpb.BeaconStateAltair{
Validators: validators,
})
require.NoError(t, err)
require.NoError(t, state.SetCurrentSyncCommittee(syncCommittee))
require.NoError(t, state.SetNextSyncCommittee(syncCommittee))
require.NoError(t, err, helpers.SyncCommitteeCache().UpdatePositionsInCommittee([32]byte{}, state))
positions, err := helpers.CurrentPeriodPositions(state, []primitives.ValidatorIndex{0, 1})
require.NoError(t, err)
require.Equal(t, 2, len(positions))
require.Equal(t, 1, len(positions[0]))
assert.Equal(t, primitives.CommitteeIndex(0), positions[0][0])
require.Equal(t, 1, len(positions[1]))
assert.Equal(t, primitives.CommitteeIndex(1), positions[1][0])
}
func TestIsCurrentEpochSyncCommittee_UsingCache(t *testing.T) {
helpers.ClearCache()

View File

@@ -309,23 +309,29 @@ func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primiti
if err != nil {
return 0, errors.Wrap(err, "could not get proposer lookahead")
}
spe := params.BeaconConfig().SlotsPerEpoch
if e == stateEpoch {
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[slot%spe], nil
}
// The caller is requesting the proposer for the next epoch
return lookAhead[slot%params.BeaconConfig().SlotsPerEpoch+params.BeaconConfig().SlotsPerEpoch], nil
return lookAhead[spe+slot%spe], nil
}
// BeaconProposerIndexAtSlot returns proposer index at the given slot from the
// point of view of the given state as head state
func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
if state.Version() >= version.Fulu {
return beaconProposerIndexAtSlotFulu(state, slot)
}
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
// Even if the state is post Fulu, we may request a past proposer index.
if state.Version() >= version.Fulu && e >= params.BeaconConfig().FuluForkEpoch {
// We can use the cached lookahead only for the current and the next epoch.
if e == stateEpoch || e == stateEpoch+1 {
return beaconProposerIndexAtSlotFulu(state, slot)
}
}
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch.
if e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
// For simplicity, the node will skip caching of genesis epoch. If the passed state has not yet reached this slot then we do not check the cache.
if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err
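Since the cached lookahead covers exactly two epochs (2 * SLOTS_PER_EPOCH entries), the indexing above is `slot % SLOTS_PER_EPOCH` for the state's current epoch and `SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH` for the next one; any other epoch must fall back or error. A small worked example with mainnet's 32 slots per epoch (plain arithmetic, not the Prysm helper):

package main

import "fmt"

const slotsPerEpoch = 32 // mainnet value

// lookaheadIndex mirrors the indexing in beaconProposerIndexAtSlotFulu:
// the cached lookahead covers the current and the next epoch only.
func lookaheadIndex(stateEpoch, slot uint64) (uint64, error) {
    epoch := slot / slotsPerEpoch
    switch epoch {
    case stateEpoch:
        return slot % slotsPerEpoch, nil
    case stateEpoch + 1:
        return slotsPerEpoch + slot%slotsPerEpoch, nil
    default:
        return 0, fmt.Errorf("slot %d is not in the current epoch %d or the next epoch", slot, stateEpoch)
    }
}

func main() {
    // State at epoch 3 (slots 96-127): slot 100 maps to entry 4,
    // slot 130 (epoch 4) maps to entry 32 + 2 = 34, slot 160 is out of range.
    fmt.Println(lookaheadIndex(3, 100)) // 4 <nil>
    fmt.Println(lookaheadIndex(3, 130)) // 34 <nil>
    fmt.Println(lookaheadIndex(3, 160)) // 0 slot 160 is not in the current epoch 3 or the next epoch
}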

View File

@@ -1161,6 +1161,10 @@ func TestValidatorMaxEffectiveBalance(t *testing.T) {
}
func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.FuluForkEpoch = 1
params.OverrideBeaconConfig(cfg)
lookahead := make([]uint64, 64)
lookahead[0] = 15
lookahead[1] = 16
@@ -1180,8 +1184,4 @@ func TestBeaconProposerIndexAtSlotFulu(t *testing.T) {
idx, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 130)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(42), idx)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 95)
require.ErrorContains(t, "slot 95 is not in the current epoch 3 or the next epoch", err)
_, err = helpers.BeaconProposerIndexAtSlot(t.Context(), st, 160)
require.ErrorContains(t, "slot 160 is not in the current epoch 3 or the next epoch", err)
}

View File

@@ -108,6 +108,12 @@ func CanUpgradeToFulu(slot primitives.Slot) bool {
return epochStart && fuluEpoch
}
func CanUpgradeToGloas(slot primitives.Slot) bool {
epochStart := slots.IsEpochStart(slot)
gloasEpoch := slots.ToEpoch(slot) == params.BeaconConfig().GloasForkEpoch
return epochStart && gloasEpoch
}
// CanProcessEpoch checks the eligibility to process epoch.
// The epoch can be processed at the end of the last slot of every epoch.
//

View File

@@ -24,6 +24,7 @@ go_library(
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/execution:go_default_library",
"//beacon-chain/core/fulu:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition/interop:go_default_library",

View File

@@ -17,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/epoch/precompute"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/execution"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/fulu"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
@@ -389,6 +390,15 @@ func UpgradeState(ctx context.Context, state state.BeaconState) (state.BeaconSta
upgraded = true
}
if time.CanUpgradeToGloas(slot) {
state, err = gloas.UpgradeToGloas(ctx, state)
if err != nil {
tracing.AnnotateError(span, err)
return nil, err
}
upgraded = true
}
if upgraded {
log.WithField("version", version.String(state.Version())).Info("Upgraded state to")
}

View File

@@ -8,6 +8,7 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/altair"
b "github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/electra"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition/interop"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
@@ -374,15 +375,18 @@ func ProcessBlockForStateRoot(
}
// This calls altair block operations.
func altairOperations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func altairOperations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair proposer slashing")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
@@ -393,7 +397,7 @@ func altairOperations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
@@ -401,15 +405,18 @@ func altairOperations(
}
// This calls phase 0 block operations.
func phase0Operations(
ctx context.Context,
st state.BeaconState,
beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
st, err := b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), v.SlashValidator)
func phase0Operations(ctx context.Context, st state.BeaconState, beaconBlock interfaces.ReadOnlyBeaconBlock) (state.BeaconState, error) {
var err error
exitInfo := v.ExitInformation(st)
if err := helpers.UpdateTotalActiveBalanceCache(st, exitInfo.TotalActiveBalance); err != nil {
return nil, errors.Wrap(err, "could not update total active balance cache")
}
st, err = b.ProcessProposerSlashings(ctx, st, beaconBlock.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block proposer slashings")
}
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), v.SlashValidator)
st, err = b.ProcessAttesterSlashings(ctx, st, beaconBlock.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process block attester slashings")
}
@@ -420,5 +427,9 @@ func phase0Operations(
if _, err := altair.ProcessDeposits(ctx, st, beaconBlock.Body().Deposits()); err != nil {
return nil, errors.Wrap(err, "could not process deposits")
}
return b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits())
st, err = b.ProcessVoluntaryExits(ctx, st, beaconBlock.Body().VoluntaryExits(), exitInfo)
if err != nil {
return nil, errors.Wrap(err, "could not process voluntary exits")
}
return st, nil
}

View File

@@ -13,34 +13,55 @@ import (
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/math"
mathutil "github.com/OffchainLabs/prysm/v6/math"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
)
// ExitInfo provides information about validator exits in the state.
type ExitInfo struct {
HighestExitEpoch primitives.Epoch
Churn uint64
TotalActiveBalance uint64
}
// ErrValidatorAlreadyExited is an error raised when trying to process an exit of
// an already exited validator
var ErrValidatorAlreadyExited = errors.New("validator already exited")
// MaxExitEpochAndChurn returns the maximum non-FAR_FUTURE_EPOCH exit
// epoch and the number of them
func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, churn uint64) {
// ExitInformation returns information about validator exits.
func ExitInformation(s state.BeaconState) *ExitInfo {
exitInfo := &ExitInfo{}
farFutureEpoch := params.BeaconConfig().FarFutureEpoch
currentEpoch := slots.ToEpoch(s.Slot())
totalActiveBalance := uint64(0)
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
e := val.ExitEpoch()
if e != farFutureEpoch {
if e > maxExitEpoch {
maxExitEpoch = e
churn = 1
} else if e == maxExitEpoch {
churn++
if e > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = e
exitInfo.Churn = 1
} else if e == exitInfo.HighestExitEpoch {
exitInfo.Churn++
}
}
// Calculate total active balance in the same loop
if helpers.IsActiveValidatorUsingTrie(val, currentEpoch) {
totalActiveBalance += val.EffectiveBalance()
}
return nil
})
_ = err
return
// Apply minimum balance as per spec
exitInfo.TotalActiveBalance = mathutil.Max(params.BeaconConfig().EffectiveBalanceIncrement, totalActiveBalance)
return exitInfo
}
// InitiateValidatorExit takes in validator index and updates
@@ -64,59 +85,117 @@ func MaxExitEpochAndChurn(s state.BeaconState) (maxExitEpoch primitives.Epoch, c
// # Set validator exit epoch and withdrawable epoch
// validator.exit_epoch = exit_queue_epoch
// validator.withdrawable_epoch = Epoch(validator.exit_epoch + MIN_VALIDATOR_WITHDRAWABILITY_DELAY)
func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primitives.ValidatorIndex, exitQueueEpoch primitives.Epoch, churn uint64) (state.BeaconState, primitives.Epoch, error) {
func InitiateValidatorExit(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, 0, err
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, validator.ExitEpoch, ErrValidatorAlreadyExited
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitQueueEpoch {
exitQueueEpoch = exitableEpoch
churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return nil, 0, errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if churn >= currentChurn {
exitQueueEpoch, err = exitQueueEpoch.SafeAdd(1)
if err != nil {
return nil, 0, err
}
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitQueueEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurn(primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, 0, err
return nil, err
}
}
validator.ExitEpoch = exitQueueEpoch
validator.WithdrawableEpoch, err = exitQueueEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, 0, err
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, 0, err
return nil, err
}
return s, exitQueueEpoch, nil
return s, nil
}
// InitiateValidatorExitForTotalBal has the same functionality as InitiateValidatorExit,
// the only difference being how total active balance is obtained. In InitiateValidatorExit
// it is calculated inside the function and in InitiateValidatorExitForTotalBal it's a
// function argument.
func InitiateValidatorExitForTotalBal(
ctx context.Context,
s state.BeaconState,
idx primitives.ValidatorIndex,
exitInfo *ExitInfo,
totalActiveBalance primitives.Gwei,
) (state.BeaconState, error) {
validator, err := s.ValidatorAtIndex(idx)
if err != nil {
return nil, err
}
if validator.ExitEpoch != params.BeaconConfig().FarFutureEpoch {
return s, ErrValidatorAlreadyExited
}
// Compute exit queue epoch.
if s.Version() < version.Electra {
if err = initiateValidatorExitPreElectra(ctx, s, exitInfo); err != nil {
return nil, err
}
} else {
// [Modified in Electra:EIP7251]
// exit_queue_epoch = compute_exit_epoch_and_update_churn(state, validator.effective_balance)
var err error
exitInfo.HighestExitEpoch, err = s.ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance, primitives.Gwei(validator.EffectiveBalance))
if err != nil {
return nil, err
}
}
validator.ExitEpoch = exitInfo.HighestExitEpoch
validator.WithdrawableEpoch, err = exitInfo.HighestExitEpoch.SafeAddEpoch(params.BeaconConfig().MinValidatorWithdrawabilityDelay)
if err != nil {
return nil, err
}
if err := s.UpdateValidatorAtIndex(idx, validator); err != nil {
return nil, err
}
return s, nil
}
func initiateValidatorExitPreElectra(ctx context.Context, s state.BeaconState, exitInfo *ExitInfo) error {
// Relevant spec code from phase0:
//
// exit_epochs = [v.exit_epoch for v in state.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// exit_queue_epoch = max(exit_epochs + [compute_activation_exit_epoch(get_current_epoch(state))])
// exit_queue_churn = len([v for v in state.validators if v.exit_epoch == exit_queue_epoch])
// if exit_queue_churn >= get_validator_churn_limit(state):
// exit_queue_epoch += Epoch(1)
exitableEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(s))
if exitableEpoch > exitInfo.HighestExitEpoch {
exitInfo.HighestExitEpoch = exitableEpoch
exitInfo.Churn = 0
}
activeValidatorCount, err := helpers.ActiveValidatorCount(ctx, s, time.CurrentEpoch(s))
if err != nil {
return errors.Wrap(err, "could not get active validator count")
}
currentChurn := helpers.ValidatorExitChurnLimit(activeValidatorCount)
if exitInfo.Churn >= currentChurn {
exitInfo.HighestExitEpoch, err = exitInfo.HighestExitEpoch.SafeAdd(1)
if err != nil {
return err
}
exitInfo.Churn = 1
} else {
exitInfo.Churn = exitInfo.Churn + 1
}
return nil
}
// SlashValidator slashes the malicious validator's balance and awards
@@ -152,9 +231,12 @@ func InitiateValidatorExit(ctx context.Context, s state.BeaconState, idx primiti
func SlashValidator(
ctx context.Context,
s state.BeaconState,
slashedIdx primitives.ValidatorIndex) (state.BeaconState, error) {
maxExitEpoch, churn := MaxExitEpochAndChurn(s)
s, _, err := InitiateValidatorExit(ctx, s, slashedIdx, maxExitEpoch, churn)
slashedIdx primitives.ValidatorIndex,
exitInfo *ExitInfo,
) (state.BeaconState, error) {
var err error
s, err = InitiateValidatorExitForTotalBal(ctx, s, slashedIdx, exitInfo, primitives.Gwei(exitInfo.TotalActiveBalance))
if err != nil && !errors.Is(err, ErrValidatorAlreadyExited) {
return nil, errors.Wrapf(err, "could not initiate validator %d exit", slashedIdx)
}
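ExitInformation replaces the old MaxExitEpochAndChurn helper and derives three values in one ReadFromEveryValidator pass: the highest non-FAR_FUTURE_EPOCH exit epoch, the number of validators exiting at that epoch (the churn), and the total active balance. A simplified standalone sketch of that single-pass accumulation, using plain structs instead of Prysm's state types and omitting the EFFECTIVE_BALANCE_INCREMENT floor the real code applies:

package main

import "fmt"

const farFutureEpoch = ^uint64(0) // stand-in for FAR_FUTURE_EPOCH

type validator struct {
    exitEpoch        uint64
    effectiveBalance uint64
    active           bool
}

type exitInfo struct {
    highestExitEpoch   uint64
    churn              uint64
    totalActiveBalance uint64
}

// collectExitInfo walks the validator set once, tracking the highest
// non-far-future exit epoch, how many validators exit at that epoch,
// and the total active balance of the set.
func collectExitInfo(vals []validator) exitInfo {
    info := exitInfo{}
    for _, v := range vals {
        if v.exitEpoch != farFutureEpoch {
            switch {
            case v.exitEpoch > info.highestExitEpoch:
                info.highestExitEpoch = v.exitEpoch
                info.churn = 1
            case v.exitEpoch == info.highestExitEpoch:
                info.churn++
            }
        }
        if v.active {
            info.totalActiveBalance += v.effectiveBalance
        }
    }
    return info
}

func main() {
    vals := []validator{
        {exitEpoch: farFutureEpoch, effectiveBalance: 32, active: true},
        {exitEpoch: 10, effectiveBalance: 32, active: true},
        {exitEpoch: 12, effectiveBalance: 32, active: true},
        {exitEpoch: 12, effectiveBalance: 32, active: false},
    }
    // Prints {highestExitEpoch:12 churn:2 totalActiveBalance:96}.
    fmt.Printf("%+v\n", collectExitInfo(vals))
}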

View File

@@ -49,9 +49,11 @@ func TestInitiateValidatorExit_AlreadyExited(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, 0, 199, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: 199, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, 0, exitInfo)
require.ErrorIs(t, err, validators.ErrValidatorAlreadyExited)
require.Equal(t, exitEpoch, epoch)
assert.Equal(t, primitives.Epoch(199), exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(0)
require.NoError(t, err)
assert.Equal(t, exitEpoch, v.ExitEpoch, "Already exited")
@@ -68,9 +70,11 @@ func TestInitiateValidatorExit_ProperExit(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 1}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+2, epoch)
assert.Equal(t, exitedEpoch+2, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(2), exitInfo.Churn)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, exitedEpoch+2, v.ExitEpoch, "Exit epoch was not the highest")
@@ -88,9 +92,11 @@ func TestInitiateValidatorExit_ChurnOverflow(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitedEpoch+2, 4)
exitInfo := &validators.ExitInfo{HighestExitEpoch: exitedEpoch + 2, Churn: 4}
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, exitInfo)
require.NoError(t, err)
require.Equal(t, exitedEpoch+3, epoch)
assert.Equal(t, exitedEpoch+3, exitInfo.HighestExitEpoch)
assert.Equal(t, uint64(1), exitInfo.Churn)
// Because of exit queue overflow,
// validator who init exited has to wait one more epoch.
@@ -110,7 +116,8 @@ func TestInitiateValidatorExit_WithdrawalOverflows(t *testing.T) {
}}
state, err := state_native.InitializeFromProtoPhase0(base)
require.NoError(t, err)
_, _, err = validators.InitiateValidatorExit(t.Context(), state, 1, params.BeaconConfig().FarFutureEpoch-1, 1)
exitInfo := &validators.ExitInfo{HighestExitEpoch: params.BeaconConfig().FarFutureEpoch - 1, Churn: 1}
_, err = validators.InitiateValidatorExit(t.Context(), state, 1, exitInfo)
require.ErrorContains(t, "addition overflows", err)
}
@@ -146,12 +153,11 @@ func TestInitiateValidatorExit_ProperExit_Electra(t *testing.T) {
require.NoError(t, err)
require.Equal(t, primitives.Gwei(0), ebtc)
newState, epoch, err := validators.InitiateValidatorExit(t.Context(), state, idx, 0, 0) // exitQueueEpoch and churn are not used in electra
newState, err := validators.InitiateValidatorExit(t.Context(), state, idx, &validators.ExitInfo{}) // exit info is not used in electra
require.NoError(t, err)
// Expect that the exit epoch is the next available epoch with max seed lookahead.
want := helpers.ActivationExitEpoch(exitedEpoch + 1)
require.Equal(t, want, epoch)
v, err := newState.ValidatorAtIndex(idx)
require.NoError(t, err)
assert.Equal(t, want, v.ExitEpoch, "Exit epoch was not the highest")
@@ -190,7 +196,7 @@ func TestSlashValidator_OK(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Phase0)
@@ -244,7 +250,7 @@ func TestSlashValidator_Electra(t *testing.T) {
require.NoError(t, err, "Could not get proposer")
proposerBal, err := state.BalanceAtIndex(proposer)
require.NoError(t, err)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx)
slashedState, err := validators.SlashValidator(t.Context(), state, slashedIdx, validators.ExitInformation(state))
require.NoError(t, err, "Could not slash validator")
require.Equal(t, true, slashedState.Version() == version.Electra)
@@ -505,8 +511,8 @@ func TestValidatorMaxExitEpochAndChurn(t *testing.T) {
for _, tt := range tests {
s, err := state_native.InitializeFromProtoPhase0(tt.state)
require.NoError(t, err)
epoch, churn := validators.MaxExitEpochAndChurn(s)
require.Equal(t, tt.wantedEpoch, epoch)
require.Equal(t, tt.wantedChurn, churn)
exitInfo := validators.ExitInformation(s)
require.Equal(t, tt.wantedEpoch, exitInfo.HighestExitEpoch)
require.Equal(t, tt.wantedChurn, exitInfo.Churn)
}
}

View File

@@ -116,19 +116,43 @@ func (l *periodicEpochLayout) pruneBefore(before primitives.Epoch) (*pruneSummar
}
// Roll up summaries and clean up per-epoch directories.
rollup := &pruneSummary{}
// Track which period directories might be empty after epoch removal
periodsToCheck := make(map[string]struct{})
for epoch, sum := range sums {
rollup.blobsPruned += sum.blobsPruned
rollup.failedRemovals = append(rollup.failedRemovals, sum.failedRemovals...)
rmdir := l.epochDir(epoch)
periodDir := l.periodDir(epoch)
if len(sum.failedRemovals) == 0 {
if err := l.fs.Remove(rmdir); err != nil {
log.WithField("dir", rmdir).WithError(err).Error("Failed to remove epoch directory while pruning")
} else {
periodsToCheck[periodDir] = struct{}{}
}
} else {
log.WithField("dir", rmdir).WithField("numFailed", len(sum.failedRemovals)).WithError(err).Error("Unable to remove epoch directory due to pruning failures")
}
}
// Clean up empty period directories
for periodDir := range periodsToCheck {
entries, err := afero.ReadDir(l.fs, periodDir)
if err != nil {
log.WithField("dir", periodDir).WithError(err).Debug("Failed to read period directory contents")
continue
}
// Only attempt to remove if directory is empty
if len(entries) == 0 {
if err := l.fs.Remove(periodDir); err != nil {
log.WithField("dir", periodDir).WithError(err).Error("Failed to remove empty period directory")
}
}
}
return rollup, nil
}
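Once an epoch directory has been removed, its parent period directory may be left empty; the new cleanup only deletes a period directory when listing it returns no entries. A minimal sketch of that guard against afero's in-memory filesystem (the blobs/period-89/epoch-367076 path is made up for the example):

package main

import (
    "fmt"

    "github.com/spf13/afero"
)

// removeIfEmpty deletes dir only when it holds no entries, the same guard the
// pruner applies to period directories after their epoch directories are removed.
func removeIfEmpty(fs afero.Fs, dir string) error {
    entries, err := afero.ReadDir(fs, dir)
    if err != nil {
        return err
    }
    if len(entries) != 0 {
        return nil // still contains other epochs; leave it alone
    }
    return fs.Remove(dir)
}

func main() {
    fs := afero.NewMemMapFs()
    _ = fs.MkdirAll("blobs/period-89/epoch-367076", 0o755)
    _ = fs.Remove("blobs/period-89/epoch-367076")     // epoch directory pruned first
    fmt.Println(removeIfEmpty(fs, "blobs/period-89")) // <nil>: period directory is now empty
}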

View File

@@ -4,6 +4,7 @@ import (
"encoding/binary"
"os"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/params"
@@ -195,3 +196,48 @@ func TestLayoutPruneBefore(t *testing.T) {
})
}
}
func TestLayoutByEpochPruneBefore(t *testing.T) {
roots := testRoots(10)
cases := []struct {
name string
pruned []testIdent
remain []testIdent
err error
sum pruneSummary
}{
{
name: "single epoch period cleanup",
pruned: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[0], epoch: 367076, index: 0}},
},
remain: []testIdent{
{offset: 0, blobIdent: blobIdent{root: roots[1], epoch: 371176, index: 0}}, // Different period
},
sum: pruneSummary{blobsPruned: 1},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageAndFs(t, WithLayout(LayoutNameByEpoch))
pruned := testSetupBlobIdentPaths(t, fs, bs, c.pruned)
remain := testSetupBlobIdentPaths(t, fs, bs, c.remain)
time.Sleep(1 * time.Second)
for _, id := range pruned {
_, err := fs.Stat(bs.layout.sszPath(id))
require.Equal(t, true, os.IsNotExist(err))
dirs := bs.layout.blockParentDirs(id)
for i := len(dirs) - 1; i > 0; i-- {
_, err = fs.Stat(dirs[i])
require.Equal(t, true, os.IsNotExist(err))
}
}
for _, id := range remain {
_, err := fs.Stat(bs.layout.sszPath(id))
require.NoError(t, err)
}
})
}
}

View File

@@ -57,6 +57,11 @@ var (
GetPayloadMethodV5,
GetBlobsV2,
}
gloasEngineEndpoints = []string{
NewPayloadMethodV5,
GetPayloadMethodV6,
}
)
const (
@@ -67,6 +72,8 @@ const (
NewPayloadMethodV3 = "engine_newPayloadV3"
// NewPayloadMethodV4 is the engine_newPayloadVX method added at Electra.
NewPayloadMethodV4 = "engine_newPayloadV4"
// NewPayloadMethodV5 is the engine_newPayloadVX method added at Gloas.
NewPayloadMethodV5 = "engine_newPayloadV5"
// ForkchoiceUpdatedMethod v1 request string for JSON-RPC.
ForkchoiceUpdatedMethod = "engine_forkchoiceUpdatedV1"
// ForkchoiceUpdatedMethodV2 v2 request string for JSON-RPC.
@@ -83,6 +90,8 @@ const (
GetPayloadMethodV4 = "engine_getPayloadV4"
// GetPayloadMethodV5 is the get payload method added for fulu
GetPayloadMethodV5 = "engine_getPayloadV5"
// GetPayloadMethodV6 is the get payload method added for gloas
GetPayloadMethodV6 = "engine_getPayloadV6"
// BlockByHashMethod request string for JSON-RPC.
BlockByHashMethod = "eth_getBlockByHash"
// BlockByNumberMethod request string for JSON-RPC.
@@ -181,6 +190,18 @@ func (s *Service) NewPayload(ctx context.Context, payload interfaces.ExecutionDa
return nil, handleRPCError(err)
}
}
case *pb.ExecutionPayloadGloas:
if executionRequests == nil {
return nil, errors.New("execution requests are required for gloas execution payload")
}
flattenedRequests, err := pb.EncodeExecutionRequests(executionRequests)
if err != nil {
return nil, errors.Wrap(err, "failed to encode execution requests")
}
err = s.rpcClient.CallContext(ctx, result, NewPayloadMethodV5, payloadPb, versionedHashes, parentBlockRoot, flattenedRequests)
if err != nil {
return nil, handleRPCError(err)
}
default:
return nil, errors.New("unknown execution data type")
}
@@ -273,6 +294,9 @@ func (s *Service) ForkchoiceUpdated(
func getPayloadMethodAndMessage(slot primitives.Slot) (string, proto.Message) {
epoch := slots.ToEpoch(slot)
if epoch >= params.BeaconConfig().GloasForkEpoch {
return GetPayloadMethodV6, &pb.ExecutionBundleGloas{}
}
if epoch >= params.BeaconConfig().FuluForkEpoch {
return GetPayloadMethodV5, &pb.ExecutionBundleFulu{}
}
@@ -325,6 +349,10 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
supportedEngineEndpoints = append(supportedEngineEndpoints, fuluEngineEndpoints...)
}
if params.GloasEnabled() {
supportedEngineEndpoints = append(supportedEngineEndpoints, gloasEngineEndpoints...)
}
elSupportedEndpointsSlice := make([]string, len(supportedEngineEndpoints))
if err := s.rpcClient.CallContext(ctx, &elSupportedEndpointsSlice, ExchangeCapabilities, supportedEngineEndpoints); err != nil {
return nil, handleRPCError(err)
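getPayloadMethodAndMessage checks the newest fork first and falls through to older engine methods, so a Gloas-or-later epoch resolves to engine_getPayloadV6 while earlier epochs keep their existing versions. A sketch of that ordering; the method strings match the constants above, but the fork epoch values are made-up placeholders:

package main

import "fmt"

// Illustrative fork epochs; the real values come from params.BeaconConfig().
const (
    gloasForkEpoch   = 300
    fuluForkEpoch    = 200
    electraForkEpoch = 100
)

// getPayloadMethod mirrors the shape of getPayloadMethodAndMessage: the newest
// fork is checked first, so the highest applicable engine method version wins.
func getPayloadMethod(epoch uint64) string {
    switch {
    case epoch >= gloasForkEpoch:
        return "engine_getPayloadV6"
    case epoch >= fuluForkEpoch:
        return "engine_getPayloadV5"
    case epoch >= electraForkEpoch:
        return "engine_getPayloadV4"
    default:
        return "engine_getPayloadV3"
    }
}

func main() {
    fmt.Println(getPayloadMethod(350)) // engine_getPayloadV6
    fmt.Println(getPayloadMethod(250)) // engine_getPayloadV5
    fmt.Println(getPayloadMethod(50))  // engine_getPayloadV3
}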

View File

@@ -32,6 +32,9 @@ var gossipTopicMappings = map[string]func() proto.Message{
func GossipTopicMappings(topic string, epoch primitives.Epoch) proto.Message {
switch topic {
case BlockSubnetTopicFormat:
if epoch >= params.BeaconConfig().GloasForkEpoch {
return &ethpb.SignedBeaconBlockGloas{}
}
if epoch >= params.BeaconConfig().FuluForkEpoch {
return &ethpb.SignedBeaconBlockFulu{}
}
@@ -144,4 +147,7 @@ func init() {
// Specially handle Fulu objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockFulu{})] = BlockSubnetTopicFormat
// Specially handle Gloas objects.
GossipTypeMapping[reflect.TypeOf(&ethpb.SignedBeaconBlockGloas{})] = BlockSubnetTopicFormat
}

View File

@@ -18,6 +18,7 @@ var (
"lodestar",
"js-libp2p",
"rust-libp2p",
"erigon/caplin",
}
p2pPeerCount = promauto.NewGaugeVec(prometheus.GaugeOpts{
Name: "p2p_peer_count",

View File

@@ -50,6 +50,7 @@ const (
// TestP2P represents a p2p implementation that can be used for testing.
type TestP2P struct {
mu sync.Mutex
t *testing.T
BHost host.Host
EnodeID enode.ID
@@ -243,6 +244,8 @@ func (p *TestP2P) SetStreamHandler(topic string, handler network.StreamHandler)
// JoinTopic will join PubSub topic, if not already joined.
func (p *TestP2P) JoinTopic(topic string, opts ...pubsub.TopicOpt) (*pubsub.Topic, error) {
p.mu.Lock()
defer p.mu.Unlock()
if _, ok := p.joinedTopics[topic]; !ok {
joinedTopic, err := p.pubsub.Join(topic, opts...)
if err != nil {

View File

@@ -85,6 +85,11 @@ func InitializeDataMaps() {
&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{ExecutionPayload: &enginev1.ExecutionPayloadDeneb{}}}},
)
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.ReadOnlySignedBeaconBlock, error) {
return blocks.NewSignedBeaconBlock(
&ethpb.SignedBeaconBlockGloas{Block: &ethpb.BeaconBlockGloas{Body: &ethpb.BeaconBlockBodyGloas{ExecutionPayload: &enginev1.ExecutionPayloadGloas{}}}},
)
},
}
// Reset our metadata map.
@@ -110,6 +115,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (metadata.Metadata, error) {
return wrapper.WrappedMetadataV2(&ethpb.MetaDataV2{}), nil
},
}
// Reset our attestation map.
@@ -135,6 +143,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.Att, error) {
return &ethpb.SingleAttestation{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.Att, error) {
return &ethpb.SingleAttestation{}, nil
},
}
// Reset our aggregate attestation map.
@@ -160,6 +171,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.SignedAggregateAttAndProof, error) {
return &ethpb.SignedAggregateAttestationAndProofElectra{}, nil
},
}
// Reset our aggregate attestation map.
@@ -185,6 +199,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (ethpb.AttSlashing, error) {
return &ethpb.AttesterSlashingElectra{}, nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (ethpb.AttSlashing, error) {
return &ethpb.AttesterSlashingElectra{}, nil
},
}
// Reset our light client optimistic update map.
@@ -204,6 +221,12 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
}
// Reset our light client finality update map.
@@ -223,5 +246,11 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().GloasForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
}

View File

@@ -1230,6 +1230,7 @@ func (s *Service) prysmBeaconEndpoints(
methods: []string{http.MethodGet},
},
{
// Warning: no longer supported after the Fulu fork
template: "/prysm/v1/beacon/blobs",
name: namespace + ".PublishBlobs",
middleware: []middleware.Middleware{

View File

@@ -334,26 +334,26 @@ func (s *Server) GetBlockAttestationsV2(w http.ResponseWriter, r *http.Request)
consensusAtts := blk.Block().Body().Attestations()
v := blk.Block().Version()
var attStructs []interface{}
attStructs := make([]interface{}, len(consensusAtts))
if v >= version.Electra {
for _, att := range consensusAtts {
for index, att := range consensusAtts {
a, ok := att.(*eth.AttestationElectra)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestations electra of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttElectraFromConsensus(a)
attStructs = append(attStructs, attStruct)
attStructs[index] = attStruct
}
} else {
for _, att := range consensusAtts {
for index, att := range consensusAtts {
a, ok := att.(*eth.Attestation)
if !ok {
httputil.HandleError(w, fmt.Sprintf("unable to convert consensus attestation of type %T", att), http.StatusInternalServerError)
return
}
attStruct := structs.AttFromConsensus(a)
attStructs = append(attStructs, attStruct)
attStructs[index] = attStruct
}
}

View File

@@ -17,19 +17,19 @@ func TestBlocks_NewSignedBeaconBlock_EquivocationFix(t *testing.T) {
var block structs.SignedBeaconBlock
err := json.Unmarshal([]byte(rpctesting.Phase0Block), &block)
require.NoError(t, err)
// Convert to generic format
genericBlock, err := block.ToGeneric()
require.NoError(t, err)
// Test the FIX: pass genericBlock.Block instead of genericBlock
// This is what our fix changed in handlers.go line 704 and 858
_, err = blocks.NewSignedBeaconBlock(genericBlock.Block)
require.NoError(t, err, "NewSignedBeaconBlock should work with genericBlock.Block")
// Test the BROKEN version: pass genericBlock directly (this should fail)
_, err = blocks.NewSignedBeaconBlock(genericBlock)
if err == nil {
t.Errorf("NewSignedBeaconBlock should fail with whole genericBlock but succeeded")
}
}
}

View File

@@ -910,6 +910,100 @@ func TestGetBlockAttestations(t *testing.T) {
})
})
})
t.Run("empty-attestations", func(t *testing.T) {
t.Run("v1", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{} // Explicitly set empty attestations
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v1/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestations(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is empty array, not null
require.NotNil(t, resp.Data)
assert.Equal(t, 0, len(resp.Data))
})
t.Run("v2-pre-electra", func(t *testing.T) {
b := util.NewBeaconBlock()
b.Block.Body.Attestations = []*eth.Attestation{} // Explicitly set empty attestations
sb, err := blocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: sb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is "[]", not null
require.NotNil(t, resp.Data)
assert.Equal(t, string(json.RawMessage("[]")), string(resp.Data))
})
t.Run("v2-electra", func(t *testing.T) {
eb := util.NewBeaconBlockFulu()
eb.Block.Body.Attestations = []*eth.AttestationElectra{} // Explicitly set empty attestations
esb, err := blocks.NewSignedBeaconBlock(eb)
require.NoError(t, err)
mockChainService := &chainMock.ChainService{
FinalizedRoots: map[[32]byte]bool{},
}
s := &Server{
OptimisticModeFetcher: mockChainService,
FinalizationFetcher: mockChainService,
Blocker: &testutil.MockBlocker{BlockToReturn: esb},
}
request := httptest.NewRequest(http.MethodGet, "http://foo.example/eth/v2/beacon/blocks/{block_id}/attestations", nil)
request.SetPathValue("block_id", "head")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetBlockAttestationsV2(writer, request)
require.Equal(t, http.StatusOK, writer.Code)
resp := &structs.GetBlockAttestationsV2Response{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Ensure data is "[]", not null
require.NotNil(t, resp.Data)
assert.Equal(t, string(json.RawMessage("[]")), string(resp.Data))
assert.Equal(t, "fulu", resp.Version)
})
})
}
func TestGetBlindedBlock(t *testing.T) {

View File

@@ -68,7 +68,8 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
Code: http.StatusInternalServerError,
}
}
st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), validators.SlashValidator)
exitInfo := validators.ExitInformation(st)
st, err = coreblocks.ProcessAttesterSlashings(ctx, st, blk.Body().AttesterSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get attester slashing rewards: " + err.Error(),
@@ -82,7 +83,7 @@ func (rs *BlockRewardService) GetBlockRewardsData(ctx context.Context, blk inter
Code: http.StatusInternalServerError,
}
}
st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), validators.SlashValidator)
st, err = coreblocks.ProcessProposerSlashings(ctx, st, blk.Body().ProposerSlashings(), exitInfo)
if err != nil {
return nil, &httputil.DefaultJsonError{
Message: "Could not get proposer slashing rewards: " + err.Error(),

View File

@@ -1029,8 +1029,8 @@ func (s *Server) GetProposerDuties(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, fmt.Sprintf("Could not get head state: %v ", err), http.StatusInternalServerError)
return
}
// Advance state with empty transitions up to the requested epoch start slot for pre fulu state only. Fulu state utilizes proposer look ahead field.
if st.Slot() < epochStartSlot && st.Version() != version.Fulu {
// Notice that even for Fulu requests for the next epoch, we are only advancing the state to the start of the current epoch.
if st.Slot() < epochStartSlot {
headRoot, err := s.HeadFetcher.HeadRoot(ctx)
if err != nil {
httputil.HandleError(w, fmt.Sprintf("Could not get head root: %v ", err), http.StatusInternalServerError)

View File

@@ -2645,78 +2645,6 @@ func TestGetProposerDuties(t *testing.T) {
})
}
func TestGetProposerDuties_FuluState(t *testing.T) {
helpers.ClearCache()
// Create a Fulu state with slot 0 (before epoch 1 start slot which is 32)
fuluState, err := util.NewBeaconStateFulu()
require.NoError(t, err)
require.NoError(t, fuluState.SetSlot(0)) // Set to slot 0
// Create some validators for the test
depChainStart := params.BeaconConfig().MinGenesisActiveValidatorCount
deposits, _, err := util.DeterministicDepositsAndKeys(depChainStart)
require.NoError(t, err)
validators := make([]*ethpbalpha.Validator, len(deposits))
for i, deposit := range deposits {
validators[i] = &ethpbalpha.Validator{
PublicKey: deposit.Data.PublicKey,
ActivationEpoch: 0,
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawalCredentials: make([]byte, 32),
}
}
require.NoError(t, fuluState.SetValidators(validators))
// Set up block roots
genesis := util.NewBeaconBlock()
genesisRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err)
roots := make([][]byte, fieldparams.BlockRootsLength)
roots[0] = genesisRoot[:]
require.NoError(t, fuluState.SetBlockRoots(roots))
chainSlot := primitives.Slot(0)
chain := &mockChain.ChainService{
State: fuluState, Root: genesisRoot[:], Slot: &chainSlot,
}
db := dbutil.SetupDB(t)
require.NoError(t, db.SaveGenesisBlockRoot(t.Context(), genesisRoot))
s := &Server{
Stater: &testutil.MockStater{StatesBySlot: map[primitives.Slot]state.BeaconState{0: fuluState}},
HeadFetcher: chain,
TimeFetcher: chain,
OptimisticModeFetcher: chain,
SyncChecker: &mockSync.Sync{IsSyncing: false},
PayloadIDCache: cache.NewPayloadIDCache(),
TrackedValidatorsCache: cache.NewTrackedValidatorsCache(),
BeaconDB: db,
}
// Request epoch 1 duties, which should require advancing from slot 0 to slot 32
// But for Fulu state, this advancement should be skipped
request := httptest.NewRequest(http.MethodGet, "http://www.example.com/eth/v1/validator/duties/proposer/{epoch}", nil)
request.SetPathValue("epoch", "1")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.GetProposerDuties(writer, request)
assert.Equal(t, http.StatusOK, writer.Code)
// Verify the state was not advanced - it should still be at slot 0
// This is the key assertion for the regression test
assert.Equal(t, primitives.Slot(0), fuluState.Slot(), "Fulu state should not have been advanced")
resp := &structs.GetProposerDutiesResponse{}
require.NoError(t, json.Unmarshal(writer.Body.Bytes(), resp))
// Should still return proposer duties despite not advancing the state
assert.Equal(t, true, len(resp.Data) > 0, "Should return proposer duties even without state advancement")
}
func TestGetSyncCommitteeDuties(t *testing.T) {
helpers.ClearCache()
params.SetupTestConfigCleanup(t)

View File

@@ -186,6 +186,7 @@ func (s *Server) GetChainHead(w http.ResponseWriter, r *http.Request) {
httputil.WriteJson(w, response)
}
// Warning: no longer supported after the Fulu fork
func (s *Server) PublishBlobs(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.PublishBlobs")
defer span.End()
@@ -215,6 +216,15 @@ func (s *Server) PublishBlobs(w http.ResponseWriter, r *http.Request) {
httputil.HandleError(w, "Could not decode blob sidecar: "+err.Error(), http.StatusBadRequest)
return
}
scEpoch := slots.ToEpoch(sc.SignedBlockHeader.Header.Slot)
if scEpoch < params.BeaconConfig().DenebForkEpoch {
httputil.HandleError(w, "Blob sidecars not supported for pre deneb", http.StatusBadRequest)
return
}
if scEpoch > params.BeaconConfig().FuluForkEpoch {
httputil.HandleError(w, "Blob sidecars not supported for post fulu blobs", http.StatusBadRequest)
return
}
readOnlySc, err := blocks.NewROBlobWithRoot(sc, bytesutil.ToBytes32(root))
if err != nil {

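The new guard only accepts blob sidecars whose slot lands in the [Deneb, Fulu] epoch window and returns a 400 otherwise. A standalone sketch of the range check; the fork epochs and the 32-slots-per-epoch constant below are assumptions for illustration:

package main

import (
    "errors"
    "fmt"
)

// Assumed values for illustration; the real ones come from params.BeaconConfig().
const (
    slotsPerEpoch  = 32
    denebForkEpoch = 5
    fuluForkEpoch  = 9
)

// checkBlobSidecarEpoch mirrors the new guard: sidecars are accepted only when
// the header slot's epoch falls inside the [Deneb, Fulu] window.
func checkBlobSidecarEpoch(slot uint64) error {
    epoch := slot / slotsPerEpoch
    if epoch < denebForkEpoch {
        return errors.New("blob sidecars not supported before Deneb")
    }
    if epoch > fuluForkEpoch {
        return errors.New("blob sidecars not supported after Fulu")
    }
    return nil
}

func main() {
    fmt.Println(checkBlobSidecarEpoch(10))  // error: epoch 0 is before Deneb
    fmt.Println(checkBlobSidecarEpoch(200)) // <nil>: epoch 6 is inside the window
    fmt.Println(checkBlobSidecarEpoch(400)) // error: epoch 12 is after Fulu
}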
View File

@@ -1017,6 +1017,11 @@ func TestPublishBlobs_BadBlockRoot(t *testing.T) {
}
func TestPublishBlobs(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0 // Set Deneb fork to epoch 0 so slot 0 is valid
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
@@ -1032,3 +1037,94 @@ func TestPublishBlobs(t *testing.T) {
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 1)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), true)
}
func TestPublishBlobs_PreDeneb(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 10 // Set Deneb fork to epoch 10, so slot 0 is pre-Deneb
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(rpctesting.PublishBlobsRequest)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Blob sidecars not supported for pre deneb", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}
func TestPublishBlobs_PostFulu(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.DenebForkEpoch = 0
cfg.FuluForkEpoch = 0 // Set Fulu fork to epoch 0
params.OverrideBeaconConfig(cfg)
server := &Server{
BlobReceiver: &chainMock.ChainService{},
Broadcaster: &mockp2p.MockBroadcaster{},
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
// Create a custom request with a slot in epoch 1 (which is > FuluForkEpoch of 0)
postFuluRequest := fmt.Sprintf(`{
"block_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"blob_sidecars" : {
"sidecars" : [
{
"blob" : "%s",
"index" : "0",
"kzg_commitment" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"kzg_commitment_inclusion_proof" : [
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000",
"0x0000000000000000000000000000000000000000000000000000000000000000"
],
"kzg_proof" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"signed_block_header" : {
"message" : {
"body_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"parent_root" : "0x0000000000000000000000000000000000000000000000000000000000000000",
"proposer_index" : "0",
"slot" : "33",
"state_root" : "0x0000000000000000000000000000000000000000000000000000000000000000"
},
"signature" : "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"
}
}
]
}
}`, rpctesting.Blob)
request := httptest.NewRequest(http.MethodPost, "http://foo.example", bytes.NewReader([]byte(postFuluRequest)))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
server.PublishBlobs(writer, request)
assert.Equal(t, http.StatusBadRequest, writer.Code)
assert.StringContains(t, "Blob sidecars not supported for post fulu blobs", writer.Body.String())
assert.Equal(t, len(server.BlobReceiver.(*chainMock.ChainService).Blobs), 0)
assert.Equal(t, server.Broadcaster.(*mockp2p.MockBroadcaster).BroadcastCalled.Load(), false)
}

View File

@@ -19,7 +19,7 @@ func TestServer_GetBeaconConfig(t *testing.T) {
conf := params.BeaconConfig()
confType := reflect.TypeOf(conf).Elem()
numFields := confType.NumField()
// Count only exported fields, as unexported fields are not included in the config
exportedFields := 0
for i := 0; i < numFields; i++ {

View File

@@ -57,6 +57,12 @@ func (vs *Server) constructGenericBeaconBlock(
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructFuluBlock(blockProto, isBlinded, bidStr, bundle), nil
case version.Gloas:
bundle, ok := blobsBundler.(*enginev1.BlobsBundleV2)
if blobsBundler != nil && !ok {
return nil, fmt.Errorf("expected *BlobsBundleV2, got %T", blobsBundler)
}
return vs.constructGloasBlock(blockProto, isBlinded, bidStr, bundle), nil
default:
return nil, fmt.Errorf("unknown block version: %d", sBlk.Version())
}
@@ -120,3 +126,12 @@ func (vs *Server) constructFuluBlock(blockProto proto.Message, isBlinded bool, p
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Fulu{Fulu: fuluContents}, IsBlinded: false, PayloadValue: payloadValue}
}
func (vs *Server) constructGloasBlock(blockProto proto.Message, isBlinded bool, payloadValue string, bundle *enginev1.BlobsBundleV2) *ethpb.GenericBeaconBlock {
gloasContents := &ethpb.BeaconBlockContentsGloas{Block: blockProto.(*ethpb.BeaconBlockGloas)}
if bundle != nil {
gloasContents.KzgProofs = bundle.Proofs
gloasContents.Blobs = bundle.Blobs
}
return &ethpb.GenericBeaconBlock{Block: &ethpb.GenericBeaconBlock_Gloas{Gloas: gloasContents}, IsBlinded: false, PayloadValue: payloadValue}
}

View File

@@ -19,7 +19,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v6/beacon-chain/db/kv"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
@@ -296,13 +295,12 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
if err != nil {
return nil, status.Errorf(codes.InvalidArgument, "%s: %v", "decode block failed", err)
}
root, err := block.Block().HashTreeRoot()
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not hash tree root: %v", err)
}
// For post-Fulu blinded blocks, submit to relay and return early
// For post-Fulu blinded blocks (including Gloas), submit to relay and return early
if block.IsBlinded() && slots.ToEpoch(block.Block().Slot()) >= params.BeaconConfig().FuluForkEpoch {
err := vs.BlockBuilder.SubmitBlindedBlockPostFulu(ctx, block)
if err != nil {
@@ -322,6 +320,7 @@ func (vs *Server) ProposeBeaconBlock(ctx context.Context, req *ethpb.GenericSign
var wg sync.WaitGroup
errChan := make(chan error, 1)
wg.Add(1)
go func() {
defer wg.Done()
@@ -352,7 +351,7 @@ func (vs *Server) broadcastAndReceiveSidecars(
dataColumnSideCars []*ethpb.DataColumnSidecar,
) error {
if block.Version() >= version.Fulu {
if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root, block.Block().Slot()); err != nil {
if err := vs.broadcastAndReceiveDataColumns(ctx, dataColumnSideCars, root); err != nil {
return errors.Wrap(err, "broadcast and receive data columns")
}
return nil
@@ -366,7 +365,7 @@ func (vs *Server) broadcastAndReceiveSidecars(
}
// handleBlindedBlock processes blinded beacon blocks (pre-Fulu only).
// Post-Fulu blinded blocks are handled directly in ProposeBeaconBlock.
// Post-Fulu blinded blocks (including Gloas) are handled directly in ProposeBeaconBlock.
func (vs *Server) handleBlindedBlock(ctx context.Context, block interfaces.SignedBeaconBlock) (interfaces.SignedBeaconBlock, []*ethpb.BlobSidecar, error) {
if block.Version() < version.Bellatrix {
return nil, nil, errors.New("pre-Bellatrix blinded block")
@@ -471,14 +470,11 @@ func (vs *Server) broadcastAndReceiveDataColumns(
ctx context.Context,
sidecars []*ethpb.DataColumnSidecar,
root [fieldparams.RootLength]byte,
slot primitives.Slot,
) error {
dataColumnsWithholdCount := features.Get().DataColumnsWithholdCount
verifiedRODataColumns := make([]blocks.VerifiedRODataColumn, 0, len(sidecars))
eg, _ := errgroup.WithContext(ctx)
for _, sd := range sidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(sd, root)
for _, sidecar := range sidecars {
roDataColumn, err := blocks.NewRODataColumnWithRoot(sidecar, root)
if err != nil {
return errors.Wrap(err, "new read-only data column with root")
}
@@ -487,20 +483,7 @@ func (vs *Server) broadcastAndReceiveDataColumns(
verifiedRODataColumn := blocks.NewVerifiedRODataColumn(roDataColumn)
verifiedRODataColumns = append(verifiedRODataColumns, verifiedRODataColumn)
// Copy the iteration instance to a local variable to give each go-routine its own copy to play with.
// See https://golang.org/doc/faq#closures_and_goroutines for more details.
sidecar := sd
eg.Go(func() error {
if sidecar.Index < dataColumnsWithholdCount {
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", root),
"slot": slot,
"index": sidecar.Index,
}).Warning("Withholding data column")
return nil
}
// Compute the subnet index based on the column index.
subnet := peerdas.ComputeSubnetForDataColumnSidecar(sidecar.Index)
@@ -649,6 +632,9 @@ func blobsAndProofs(req *ethpb.GenericSignedBeaconBlock) ([][]byte, [][]byte, er
case req.GetFulu() != nil:
dbBlockContents := req.GetFulu()
return dbBlockContents.Blobs, dbBlockContents.KzgProofs, nil
case req.GetGloas() != nil:
dbBlockContents := req.GetGloas()
return dbBlockContents.Blobs, dbBlockContents.KzgProofs, nil
default:
return nil, nil, errors.Errorf("unknown request type provided: %T", req)
}

View File

@@ -1,8 +1,10 @@
package validator
import (
"bytes"
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -15,6 +17,7 @@ import (
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/time/slots"
"github.com/pkg/errors"
"github.com/prysmaticlabs/go-bitfield"
)
func (vs *Server) setSyncAggregate(ctx context.Context, blk interfaces.SignedBeaconBlock) {
@@ -51,21 +54,28 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
if vs.SyncCommitteePool == nil {
return nil, errors.New("sync committee pool is nil")
}
// Contributions have to match the input root
contributions, err := vs.SyncCommitteePool.SyncCommitteeContributions(slot)
poolContributions, err := vs.SyncCommitteePool.SyncCommitteeContributions(slot)
if err != nil {
return nil, err
}
proposerContributions := proposerSyncContributions(contributions).filterByBlockRoot(root)
// Contributions have to match the input root
proposerContributions := proposerSyncContributions(poolContributions).filterByBlockRoot(root)
// Each sync subcommittee is 128 bits and the sync committee is 512 bits for mainnet.
aggregatedContributions, err := vs.aggregatedSyncCommitteeMessages(ctx, slot, root, poolContributions)
if err != nil {
return nil, errors.Wrap(err, "could not get aggregated sync committee messages")
}
proposerContributions = append(proposerContributions, aggregatedContributions...)
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
var bitsHolder [][]byte
for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
for i := uint64(0); i < subcommitteeCount; i++ {
bitsHolder = append(bitsHolder, ethpb.NewSyncCommitteeAggregationBits())
}
sigsHolder := make([]bls.Signature, 0, params.BeaconConfig().SyncCommitteeSize/params.BeaconConfig().SyncCommitteeSubnetCount)
sigsHolder := make([]bls.Signature, 0, params.BeaconConfig().SyncCommitteeSize/subcommitteeCount)
for i := uint64(0); i < params.BeaconConfig().SyncCommitteeSubnetCount; i++ {
for i := uint64(0); i < subcommitteeCount; i++ {
cs := proposerContributions.filterBySubIndex(i)
aggregates, err := synccontribution.Aggregate(cs)
if err != nil {
@@ -107,3 +117,107 @@ func (vs *Server) getSyncAggregate(ctx context.Context, slot primitives.Slot, ro
SyncCommitteeSignature: syncSigBytes[:],
}, nil
}
func (vs *Server) aggregatedSyncCommitteeMessages(
ctx context.Context,
slot primitives.Slot,
root [32]byte,
poolContributions []*ethpb.SyncCommitteeContribution,
) ([]*ethpb.SyncCommitteeContribution, error) {
subcommitteeCount := params.BeaconConfig().SyncCommitteeSubnetCount
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / subcommitteeCount
sigsPerSubcommittee := make([][][]byte, subcommitteeCount)
bitsPerSubcommittee := make([]bitfield.Bitfield, subcommitteeCount)
for i := uint64(0); i < subcommitteeCount; i++ {
sigsPerSubcommittee[i] = make([][]byte, 0, subcommitteeSize)
bitsPerSubcommittee[i] = ethpb.NewSyncCommitteeAggregationBits()
}
// Get committee position(s) for each message's validator index.
scMessages, err := vs.SyncCommitteePool.SyncCommitteeMessages(slot)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee messages")
}
messageIndices := make([]primitives.ValidatorIndex, 0, len(scMessages))
messageSigs := make([][]byte, 0, len(scMessages))
for _, msg := range scMessages {
if bytes.Equal(root[:], msg.BlockRoot) {
messageIndices = append(messageIndices, msg.ValidatorIndex)
messageSigs = append(messageSigs, msg.Signature)
}
}
st, err := vs.HeadFetcher.HeadState(ctx)
if err != nil {
return nil, errors.Wrap(err, "could not get head state")
}
positions, err := helpers.CurrentPeriodPositions(st, messageIndices)
if err != nil {
return nil, errors.Wrap(err, "could not get sync committee positions")
}
// Based on committee position(s), set the appropriate subcommittee bit and signature.
for i, ci := range positions {
for _, index := range ci {
k := uint64(index)
subnetIndex := k / subcommitteeSize
indexMod := k % subcommitteeSize
// If an aggregate built from individual sync committee messages shares bits with
// an aggregated contribution already in the pool, the two cannot be combined into
// the best possible final aggregate. Ignoring bits that are already set in pool
// contributions makes such intersections impossible.
intersects := false
for _, poolContrib := range poolContributions {
if poolContrib.SubcommitteeIndex == subnetIndex && poolContrib.AggregationBits.BitAt(indexMod) {
intersects = true
}
}
if !intersects && !bitsPerSubcommittee[subnetIndex].BitAt(indexMod) {
bitsPerSubcommittee[subnetIndex].SetBitAt(indexMod, true)
sigsPerSubcommittee[subnetIndex] = append(sigsPerSubcommittee[subnetIndex], messageSigs[i])
}
}
}
// Aggregate.
result := make([]*ethpb.SyncCommitteeContribution, 0, subcommitteeCount)
for i := uint64(0); i < subcommitteeCount; i++ {
if len(sigsPerSubcommittee[i]) != 0 {
contrib, err := aggregateSyncSubcommitteeMessages(slot, root, i, bitsPerSubcommittee[i], sigsPerSubcommittee[i])
if err != nil {
// Skip aggregating this subcommittee
log.WithError(err).Errorf("Could not aggregate sync messages for subcommittee %d", i)
continue
}
result = append(result, contrib)
}
}
return result, nil
}
func aggregateSyncSubcommitteeMessages(
slot primitives.Slot,
root [32]byte,
subcommitteeIndex uint64,
bits bitfield.Bitfield,
sigs [][]byte,
) (*ethpb.SyncCommitteeContribution, error) {
var err error
uncompressedSigs := make([]bls.Signature, len(sigs))
for i, sig := range sigs {
uncompressedSigs[i], err = bls.SignatureFromBytesNoValidation(sig)
if err != nil {
return nil, errors.Wrap(err, "could not create signature from bytes")
}
}
return &ethpb.SyncCommitteeContribution{
Slot: slot,
BlockRoot: root[:],
SubcommitteeIndex: subcommitteeIndex,
AggregationBits: bits.Bytes(),
Signature: bls.AggregateSignatures(uncompressedSigs).Marshal(),
}, nil
}
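aggregatedSyncCommitteeMessages turns each committee position a validator occupies into a (subcommittee, bit) pair via integer division and modulo by the subcommittee size, and only sets bits that no pool contribution already covers. A small sketch of the position arithmetic, with the mainnet sync committee sizes restated as local constants:

package main

import "fmt"

// Mainnet sync committee parameters, restated locally for the example.
const (
    syncCommitteeSize        = 512
    syncCommitteeSubnetCount = 4
    subcommitteeSize         = syncCommitteeSize / syncCommitteeSubnetCount // 128
)

// position maps a sync committee index to the subcommittee (gossip subnet) it
// belongs to and the bit to set in that subcommittee's aggregation bitfield.
func position(committeeIndex uint64) (subnet, bit uint64) {
    return committeeIndex / subcommitteeSize, committeeIndex % subcommitteeSize
}

func main() {
    for _, idx := range []uint64{0, 127, 128, 300} {
        subnet, bit := position(idx)
        fmt.Printf("committee index %d -> subnet %d, bit %d\n", idx, subnet, bit)
    }
    // committee index 0 -> subnet 0, bit 0
    // committee index 127 -> subnet 0, bit 127
    // committee index 128 -> subnet 1, bit 0
    // committee index 300 -> subnet 2, bit 44
}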

View File

@@ -3,13 +3,67 @@ package validator
import (
"testing"
chainmock "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/synccommittee"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/crypto/bls"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/prysmaticlabs/go-bitfield"
)
func TestProposer_GetSyncAggregate_OK(t *testing.T) {
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlockAltair())
require.NoError(t, err)
@@ -25,3 +79,123 @@ func TestServer_SetSyncAggregate_EmptyCase(t *testing.T) {
}
require.DeepEqual(t, want, agg)
}
func TestProposer_GetSyncAggregate_IncludesSyncCommitteeMessages(t *testing.T) {
// TEST SETUP
// - validator 0 is selected twice in subcommittee 0 (indexes [0,1])
// - validator 1 is selected once in subcommittee 0 (index 2)
// - validator 2 is selected twice in subcommittee 1 (indexes [0,1])
// - validator 3 is selected once in subcommittee 1 (index 2)
// - sync committee aggregates in the pool have index 3 set for both subcommittees
subcommitteeSize := params.BeaconConfig().SyncCommitteeSize / params.BeaconConfig().SyncCommitteeSubnetCount
helpers.ClearCache()
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
vals := make([]*ethpb.Validator, 4)
vals[0] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf0}, 48)}
vals[1] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf1}, 48)}
vals[2] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf2}, 48)}
vals[3] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf3}, 48)}
require.NoError(t, st.SetValidators(vals))
sc := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
sc.Pubkeys[0] = vals[0].PublicKey
sc.Pubkeys[1] = vals[0].PublicKey
sc.Pubkeys[2] = vals[1].PublicKey
sc.Pubkeys[subcommitteeSize] = vals[2].PublicKey
sc.Pubkeys[subcommitteeSize+1] = vals[2].PublicKey
sc.Pubkeys[subcommitteeSize+2] = vals[3].PublicKey
require.NoError(t, st.SetCurrentSyncCommittee(sc))
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
msgs := []*ethpb.SyncCommitteeMessage{
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 0, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 1, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 2, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 3, Signature: bls.NewAggregateSignature().Marshal()},
}
for _, msg := range msgs {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg))
}
subcommittee0AggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommittee0AggBits.SetBitAt(3, true)
subcommittee1AggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommittee1AggBits.SetBitAt(3, true)
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: subcommittee0AggBits, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: subcommittee1AggBits, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
// The final sync aggregates must have indexes [0,1,2,3] set for both subcommittees
sa, err := proposerServer.getSyncAggregate(t.Context(), 1, r)
require.NoError(t, err)
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(0))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(1))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(2))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(3))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+1))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+2))
assert.Equal(t, true, sa.SyncCommitteeBits.BitAt(subcommitteeSize+3))
}
func Test_aggregatedSyncCommitteeMessages_NoIntersectionWithPoolContributions(t *testing.T) {
helpers.ClearCache()
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
vals := make([]*ethpb.Validator, 4)
vals[0] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf0}, 48)}
vals[1] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf1}, 48)}
vals[2] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf2}, 48)}
vals[3] = &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte{0xf3}, 48)}
require.NoError(t, st.SetValidators(vals))
sc := &ethpb.SyncCommittee{
Pubkeys: make([][]byte, params.BeaconConfig().SyncCommitteeSize),
}
sc.Pubkeys[0] = vals[0].PublicKey
sc.Pubkeys[1] = vals[1].PublicKey
sc.Pubkeys[2] = vals[2].PublicKey
sc.Pubkeys[3] = vals[3].PublicKey
require.NoError(t, st.SetCurrentSyncCommittee(sc))
proposerServer := &Server{
HeadFetcher: &chainmock.ChainService{State: st},
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
msgs := []*ethpb.SyncCommitteeMessage{
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 0, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 1, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 2, Signature: bls.NewAggregateSignature().Marshal()},
{Slot: 1, BlockRoot: r[:], ValidatorIndex: 3, Signature: bls.NewAggregateSignature().Marshal()},
}
for _, msg := range msgs {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeMessage(msg))
}
subcommitteeAggBits := ethpb.NewSyncCommitteeAggregationBits()
subcommitteeAggBits.SetBitAt(3, true)
cont := &ethpb.SyncCommitteeContribution{
Slot: 1,
SubcommitteeIndex: 0,
Signature: bls.NewAggregateSignature().Marshal(),
AggregationBits: subcommitteeAggBits,
BlockRoot: r[:],
}
aggregated, err := proposerServer.aggregatedSyncCommitteeMessages(t.Context(), 1, r, []*ethpb.SyncCommitteeContribution{cont})
require.NoError(t, err)
require.Equal(t, 1, len(aggregated))
assert.Equal(t, false, aggregated[0].AggregationBits.BitAt(3))
}
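The test above depends on the proposer skipping any message whose subcommittee position is already covered by a pooled contribution. A hypothetical sketch of that de-duplication (the names below are illustrative, not Prysm's implementation):
func filterUncovered(positions []uint64, poolBits bitfield.Bitvector128) []uint64 {
    // Keep only positions whose bit is not already set in the pooled contribution,
    // so no participant is counted twice in the final aggregate.
    kept := make([]uint64, 0, len(positions))
    for _, p := range positions {
        if poolBits.BitAt(p) { // already aggregated in the pool
            continue
        }
        kept = append(kept, p)
    }
    return kept
}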

View File

@@ -473,6 +473,11 @@ func isVersionCompatible(bidVersion, headBlockVersion int) bool {
return true
}
// Allow Electra bids for Gloas blocks - they have compatible payload formats
if bidVersion == version.Electra && headBlockVersion == version.Gloas {
return true
}
// For all other cases, require exact version match
return false
}
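A short test sketch for the new allowance, assuming only the isVersionCompatible signature shown in this hunk:
func TestIsVersionCompatible_ElectraBidForGloasBlock(t *testing.T) {
    // The hunk above explicitly allows an Electra bid to be paired with a Gloas head block.
    if !isVersionCompatible(version.Electra, version.Gloas) {
        t.Error("expected an Electra bid to be accepted for a Gloas head block")
    }
}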

View File

@@ -16,6 +16,11 @@ func getEmptyBlock(slot primitives.Slot) (interfaces.SignedBeaconBlock, error) {
var err error
epoch := slots.ToEpoch(slot)
switch {
case epoch >= params.BeaconConfig().GloasForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockGloas{Block: &ethpb.BeaconBlockGloas{Body: &ethpb.BeaconBlockBodyGloas{}}})
if err != nil {
return nil, status.Errorf(codes.Internal, "Could not initialize block for proposal: %v", err)
}
case epoch >= params.BeaconConfig().FuluForkEpoch:
sBlk, err = blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockFulu{Block: &ethpb.BeaconBlockElectra{Body: &ethpb.BeaconBlockBodyElectra{}}})
if err != nil {

View File

@@ -136,7 +136,7 @@ func (vs *Server) getLocalPayloadFromEngine(
}
var attr payloadattribute.Attributer
switch st.Version() {
case version.Deneb, version.Electra, version.Fulu:
case version.Deneb, version.Electra, version.Fulu, version.Gloas:
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
return nil, err

View File

@@ -4,16 +4,23 @@ import (
"context"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/helpers"
v "github.com/OffchainLabs/prysm/v6/beacon-chain/core/validators"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
)
func (vs *Server) getSlashings(ctx context.Context, head state.BeaconState) ([]*ethpb.ProposerSlashing, []ethpb.AttSlashing) {
var err error
exitInfo := v.ExitInformation(head)
if err := helpers.UpdateTotalActiveBalanceCache(head, exitInfo.TotalActiveBalance); err != nil {
log.WithError(err).Warn("Could not update total active balance cache")
}
proposerSlashings := vs.SlashingsPool.PendingProposerSlashings(ctx, head, false /*noLimit*/)
validProposerSlashings := make([]*ethpb.ProposerSlashing, 0, len(proposerSlashings))
for _, slashing := range proposerSlashings {
_, err := blocks.ProcessProposerSlashing(ctx, head, slashing, v.SlashValidator)
_, err = blocks.ProcessProposerSlashing(ctx, head, slashing, exitInfo)
if err != nil {
log.WithError(err).Warn("Could not validate proposer slashing for block inclusion")
continue
@@ -23,7 +30,7 @@ func (vs *Server) getSlashings(ctx context.Context, head state.BeaconState) ([]*
attSlashings := vs.SlashingsPool.PendingAttesterSlashings(ctx, head, false /*noLimit*/)
validAttSlashings := make([]ethpb.AttSlashing, 0, len(attSlashings))
for _, slashing := range attSlashings {
_, err := blocks.ProcessAttesterSlashing(ctx, head, slashing, v.SlashValidator)
_, err = blocks.ProcessAttesterSlashing(ctx, head, slashing, exitInfo)
if err != nil {
log.WithError(err).Warn("Could not validate attester slashing for block inclusion")
continue

View File

@@ -3096,49 +3096,6 @@ func TestProposer_DeleteAttsInPool_Aggregated(t *testing.T) {
assert.Equal(t, 0, len(atts), "Did not delete unaggregated attestation")
}
func TestProposer_GetSyncAggregate_OK(t *testing.T) {
proposerServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: false},
SyncCommitteePool: synccommittee.NewStore(),
}
r := params.BeaconConfig().ZeroHash
conts := []*ethpb.SyncCommitteeContribution{
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b0001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1001}, BlockRoot: r[:]},
{Slot: 1, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b1110}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 0, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 1, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 2, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
{Slot: 2, SubcommitteeIndex: 3, Signature: bls.NewAggregateSignature().Marshal(), AggregationBits: []byte{0b10101010}, BlockRoot: r[:]},
}
for _, cont := range conts {
require.NoError(t, proposerServer.SyncCommitteePool.SaveSyncCommitteeContribution(cont))
}
aggregate, err := proposerServer.getSyncAggregate(t.Context(), 1, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xf, 0xf, 0xf, 0xf}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 2, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.Bitvector32{0xaa, 0xaa, 0xaa, 0xaa}, aggregate.SyncCommitteeBits)
aggregate, err = proposerServer.getSyncAggregate(t.Context(), 3, bytesutil.ToBytes32(conts[0].BlockRoot))
require.NoError(t, err)
require.DeepEqual(t, bitfield.NewBitvector32(), aggregate.SyncCommitteeBits)
}
func TestProposer_PrepareBeaconProposer(t *testing.T) {
type args struct {
request *ethpb.PrepareBeaconProposerRequest

View File

@@ -67,6 +67,13 @@ func WithNower(n Nower) ClockOpt {
}
}
// WithTimeAsNow will create a Nower based on the given time.Time and set it as the Now() implementation.
func WithTimeAsNow(t time.Time) ClockOpt {
return func(g *Clock) {
g.now = func() time.Time { return t }
}
}
// NewClock constructs a Clock value from a genesis timestamp (t) and a Genesis Validator Root (vr).
// The WithNower ClockOpt can be used in tests to specify an alternate `time.Now` implementation,
// for instance to return a value for `Now` spanning a certain number of slots from genesis time, to control the current slot.
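A hedged usage sketch of the new option, assuming mainnet timing (32 slots of 12 seconds per epoch); NewClock, WithTimeAsNow, and CurrentEpoch appear in this diff, while the genesis timestamp and test name are illustrative.
func TestClock_FrozenTwoEpochsAfterGenesis(t *testing.T) {
    genesisTime := time.Unix(1606824023, 0) // illustrative genesis timestamp
    var validatorsRoot [32]byte
    frozen := genesisTime.Add(2 * 32 * 12 * time.Second) // exactly two epochs later
    clock := startup.NewClock(genesisTime, validatorsRoot, startup.WithTimeAsNow(frozen))
    if clock.CurrentEpoch() != 2 {
        t.Fatalf("expected epoch 2, got %d", clock.CurrentEpoch())
    }
}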

View File

@@ -265,6 +265,7 @@ type WriteOnlyEth1Data interface {
AppendEth1DataVotes(val *ethpb.Eth1Data) error
SetEth1DepositIndex(val uint64) error
ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (primitives.Epoch, error)
ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error)
}
// WriteOnlyValidators defines a struct which only has write access to validators methods.

View File

@@ -52,6 +52,7 @@ type BeaconState struct {
latestExecutionPayloadHeader *enginev1.ExecutionPayloadHeader
latestExecutionPayloadHeaderCapella *enginev1.ExecutionPayloadHeaderCapella
latestExecutionPayloadHeaderDeneb *enginev1.ExecutionPayloadHeaderDeneb
latestExecutionPayloadHeaderGloas *enginev1.ExecutionPayloadHeaderGloas
// Capella fields
nextWithdrawalIndex uint64

View File

@@ -17,6 +17,10 @@ func (b *BeaconState) LatestExecutionPayloadHeader() (interfaces.ExecutionData,
b.lock.RLock()
defer b.lock.RUnlock()
if b.version >= version.Gloas {
return blocks.WrappedExecutionPayloadHeaderGloas(b.latestExecutionPayloadHeaderGloas.Copy())
}
if b.version >= version.Deneb {
return blocks.WrappedExecutionPayloadHeaderDeneb(b.latestExecutionPayloadHeaderDeneb.Copy())
}

View File

@@ -44,11 +44,27 @@ func (b *BeaconState) ExitEpochAndUpdateChurn(exitBalance primitives.Gwei) (prim
return 0, err
}
return b.exitEpochAndUpdateChurn(primitives.Gwei(activeBal), exitBalance)
}
// ExitEpochAndUpdateChurnForTotalBal behaves like ExitEpochAndUpdateChurn, except that
// the total active balance is supplied by the caller as an argument rather than being
// computed inside the function.
func (b *BeaconState) ExitEpochAndUpdateChurnForTotalBal(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error) {
if b.version < version.Electra {
return 0, errNotSupported("ExitEpochAndUpdateChurnForTotalBal", b.version)
}
return b.exitEpochAndUpdateChurn(totalActiveBalance, exitBalance)
}
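A minimal sketch of the intended call pattern: scan the active balance once via the existing helpers.TotalActiveBalance and reuse it for every exit processed in the same block. The function name and the exitBalances slice are illustrative, not Prysm code.
func processExits(st state.BeaconState, exitBalances []primitives.Gwei) error {
    activeBal, err := helpers.TotalActiveBalance(st) // computed once for the whole batch
    if err != nil {
        return err
    }
    for _, exitBalance := range exitBalances {
        // Churn bookkeeping still runs per exit, but the expensive balance scan does not.
        if _, err := st.ExitEpochAndUpdateChurnForTotalBal(primitives.Gwei(activeBal), exitBalance); err != nil {
            return err
        }
    }
    return nil
}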
func (b *BeaconState) exitEpochAndUpdateChurn(totalActiveBalance primitives.Gwei, exitBalance primitives.Gwei) (primitives.Epoch, error) {
b.lock.Lock()
defer b.lock.Unlock()
earliestExitEpoch := max(b.earliestExitEpoch, helpers.ActivationExitEpoch(slots.ToEpoch(b.slot)))
perEpochChurn := helpers.ActivationExitChurnLimit(primitives.Gwei(activeBal)) // Guaranteed to be non-zero.
perEpochChurn := helpers.ActivationExitChurnLimit(totalActiveBalance) // Guaranteed to be non-zero.
// New epoch for exits
var exitBalanceToConsume primitives.Gwei

View File

@@ -55,6 +55,17 @@ func (b *BeaconState) SetLatestExecutionPayloadHeader(val interfaces.ExecutionDa
b.latestExecutionPayloadHeaderDeneb = latest
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderDeneb)
return nil
case *enginev1.ExecutionPayloadGloas:
if !(b.version >= version.Gloas) {
return fmt.Errorf("wrong state version (%s) for gloas execution payload", version.String(b.version))
}
latest, err := consensusblocks.PayloadToHeaderGloas(val)
if err != nil {
return errors.Wrap(err, "could not convert payload to header")
}
b.latestExecutionPayloadHeaderGloas = latest
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderGloas)
return nil
case *enginev1.ExecutionPayloadHeader:
if b.version != version.Bellatrix {
return fmt.Errorf("wrong state version (%s) for bellatrix execution payload header", version.String(b.version))
@@ -76,6 +87,13 @@ func (b *BeaconState) SetLatestExecutionPayloadHeader(val interfaces.ExecutionDa
b.latestExecutionPayloadHeaderDeneb = header
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderDeneb)
return nil
case *enginev1.ExecutionPayloadHeaderGloas:
if !(b.version >= version.Gloas) {
return fmt.Errorf("wrong state version (%s) for gloas execution payload header", version.String(b.version))
}
b.latestExecutionPayloadHeaderGloas = header
b.markFieldAsDirty(types.LatestExecutionPayloadHeaderGloas)
return nil
default:
return errors.New("value must be an execution payload header")
}
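A hedged sketch of how a Gloas header reaches this setter: wrap it with the same WrappedExecutionPayloadHeaderGloas helper used by the getter above and pass it in as interfaces.ExecutionData. The helper name setGloasHeader and the blocks alias (consensus-types/blocks, as in the getter) are assumptions.
func setGloasHeader(st state.BeaconState, hdr *enginev1.ExecutionPayloadHeaderGloas) error {
    // Wrap the raw protobuf header so it satisfies interfaces.ExecutionData.
    wrapped, err := blocks.WrappedExecutionPayloadHeaderGloas(hdr)
    if err != nil {
        return err
    }
    // The type switch above recognizes the underlying *ExecutionPayloadHeaderGloas.
    return st.SetLatestExecutionPayloadHeader(wrapped)
}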

View File

@@ -112,6 +112,24 @@ var (
electraFields,
types.ProposerLookahead,
)
gloasFields = append(
altairFields,
types.LatestExecutionPayloadHeaderGloas,
types.NextWithdrawalIndex,
types.NextWithdrawalValidatorIndex,
types.HistoricalSummaries,
types.DepositRequestsStartIndex,
types.DepositBalanceToConsume,
types.ExitBalanceToConsume,
types.EarliestExitEpoch,
types.ConsolidationBalanceToConsume,
types.EarliestConsolidationEpoch,
types.PendingDeposits,
types.PendingPartialWithdrawals,
types.PendingConsolidations,
types.ProposerLookahead,
)
)
const (
@@ -122,6 +140,7 @@ const (
denebSharedFieldRefCount = 7
electraSharedFieldRefCount = 10
fuluSharedFieldRefCount = 11
gloasSharedFieldRefCount = 11
)
// InitializeFromProtoPhase0 initializes the beacon state from a protobuf representation.
@@ -159,6 +178,11 @@ func InitializeFromProtoFulu(st *ethpb.BeaconStateFulu) (state.BeaconState, erro
return InitializeFromProtoUnsafeFulu(proto.Clone(st).(*ethpb.BeaconStateFulu))
}
// InitializeFromProtoGloas initializes the beacon state from a protobuf representation.
func InitializeFromProtoGloas(st *ethpb.BeaconStateGloas) (state.BeaconState, error) {
return InitializeFromProtoUnsafeGloas(proto.Clone(st).(*ethpb.BeaconStateGloas))
}
// InitializeFromProtoUnsafePhase0 directly uses the beacon state protobuf fields
// and sets them as fields of the BeaconState type.
func InitializeFromProtoUnsafePhase0(st *ethpb.BeaconState) (state.BeaconState, error) {
@@ -731,6 +755,105 @@ func InitializeFromProtoUnsafeFulu(st *ethpb.BeaconStateFulu) (state.BeaconState
return b, nil
}
// InitializeFromProtoUnsafeGloas directly uses the beacon state protobuf fields
// and sets them as fields of the BeaconState type.
func InitializeFromProtoUnsafeGloas(st *ethpb.BeaconStateGloas) (state.BeaconState, error) {
if st == nil {
return nil, errors.New("received nil state")
}
hRoots := customtypes.HistoricalRoots(make([][32]byte, len(st.HistoricalRoots)))
for i, r := range st.HistoricalRoots {
hRoots[i] = bytesutil.ToBytes32(r)
}
proposerLookahead := make([]primitives.ValidatorIndex, len(st.ProposerLookahead))
for i, v := range st.ProposerLookahead {
proposerLookahead[i] = primitives.ValidatorIndex(v)
}
fieldCount := params.BeaconConfig().BeaconStateFuluFieldCount
b := &BeaconState{
version: version.Gloas,
genesisTime: st.GenesisTime,
genesisValidatorsRoot: bytesutil.ToBytes32(st.GenesisValidatorsRoot),
slot: st.Slot,
fork: st.Fork,
latestBlockHeader: st.LatestBlockHeader,
historicalRoots: hRoots,
eth1Data: st.Eth1Data,
eth1DataVotes: st.Eth1DataVotes,
eth1DepositIndex: st.Eth1DepositIndex,
slashings: st.Slashings,
previousEpochParticipation: st.PreviousEpochParticipation,
currentEpochParticipation: st.CurrentEpochParticipation,
justificationBits: st.JustificationBits,
previousJustifiedCheckpoint: st.PreviousJustifiedCheckpoint,
currentJustifiedCheckpoint: st.CurrentJustifiedCheckpoint,
finalizedCheckpoint: st.FinalizedCheckpoint,
currentSyncCommittee: st.CurrentSyncCommittee,
nextSyncCommittee: st.NextSyncCommittee,
latestExecutionPayloadHeaderGloas: st.LatestExecutionPayloadHeader,
nextWithdrawalIndex: st.NextWithdrawalIndex,
nextWithdrawalValidatorIndex: st.NextWithdrawalValidatorIndex,
historicalSummaries: st.HistoricalSummaries,
depositRequestsStartIndex: st.DepositRequestsStartIndex,
depositBalanceToConsume: st.DepositBalanceToConsume,
exitBalanceToConsume: st.ExitBalanceToConsume,
earliestExitEpoch: st.EarliestExitEpoch,
consolidationBalanceToConsume: st.ConsolidationBalanceToConsume,
earliestConsolidationEpoch: st.EarliestConsolidationEpoch,
pendingDeposits: st.PendingDeposits,
pendingPartialWithdrawals: st.PendingPartialWithdrawals,
pendingConsolidations: st.PendingConsolidations,
proposerLookahead: proposerLookahead,
id: types.Enumerator.Inc(),
dirtyFields: make(map[types.FieldIndex]bool, fieldCount),
dirtyIndices: make(map[types.FieldIndex][]uint64, fieldCount),
stateFieldLeaves: make(map[types.FieldIndex]*fieldtrie.FieldTrie, fieldCount),
rebuildTrie: make(map[types.FieldIndex]bool, fieldCount),
valMapHandler: stateutil.NewValMapHandler(st.Validators),
}
b.blockRootsMultiValue = NewMultiValueBlockRoots(st.BlockRoots)
b.stateRootsMultiValue = NewMultiValueStateRoots(st.StateRoots)
b.randaoMixesMultiValue = NewMultiValueRandaoMixes(st.RandaoMixes)
b.balancesMultiValue = NewMultiValueBalances(st.Balances)
b.validatorsMultiValue = NewMultiValueValidators(st.Validators)
b.inactivityScoresMultiValue = NewMultiValueInactivityScores(st.InactivityScores)
b.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, gloasSharedFieldRefCount)
for _, f := range gloasFields {
b.dirtyFields[f] = true
b.rebuildTrie[f] = true
b.dirtyIndices[f] = []uint64{}
trie, err := fieldtrie.NewFieldTrie(f, types.BasicArray, nil, 0)
if err != nil {
return nil, err
}
b.stateFieldLeaves[f] = trie
}
// Initialize field reference tracking for shared data.
b.sharedFieldReferences[types.HistoricalRoots] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Eth1DataVotes] = stateutil.NewRef(1)
b.sharedFieldReferences[types.Slashings] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PreviousEpochParticipationBits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.CurrentEpochParticipationBits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.LatestExecutionPayloadHeaderGloas] = stateutil.NewRef(1) // New in Gloas.
b.sharedFieldReferences[types.HistoricalSummaries] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingDeposits] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingPartialWithdrawals] = stateutil.NewRef(1)
b.sharedFieldReferences[types.PendingConsolidations] = stateutil.NewRef(1)
b.sharedFieldReferences[types.ProposerLookahead] = stateutil.NewRef(1)
state.Count.Inc()
// Finalizer runs when dst is being destroyed in garbage collection.
runtime.SetFinalizer(b, finalizerCleanup)
return b, nil
}
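For orientation, a hedged sketch of calling the new initializer from within the same state-native package; the wrapper function and the assumption that every list in pb is populated are illustrative, and only InitializeFromProtoGloas and version.Gloas come from this diff.
func buildGloasState(pb *ethpb.BeaconStateGloas) (state.BeaconState, error) {
    // pb must be a fully populated Gloas protobuf state (roots, mixes, balances, ...).
    st, err := InitializeFromProtoGloas(pb)
    if err != nil {
        return nil, err
    }
    _ = st.Version() // version.Gloas for states built through this path
    return st, nil
}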
// Copy returns a deep copy of the beacon state.
func (b *BeaconState) Copy() state.BeaconState {
b.lock.RLock()
@@ -752,6 +875,8 @@ func (b *BeaconState) Copy() state.BeaconState {
fieldCount = params.BeaconConfig().BeaconStateElectraFieldCount
case version.Fulu:
fieldCount = params.BeaconConfig().BeaconStateFuluFieldCount
case version.Gloas:
fieldCount = params.BeaconConfig().BeaconStateFuluFieldCount
}
dst := &BeaconState{
@@ -806,6 +931,7 @@ func (b *BeaconState) Copy() state.BeaconState {
latestExecutionPayloadHeader: b.latestExecutionPayloadHeader.Copy(),
latestExecutionPayloadHeaderCapella: b.latestExecutionPayloadHeaderCapella.Copy(),
latestExecutionPayloadHeaderDeneb: b.latestExecutionPayloadHeaderDeneb.Copy(),
latestExecutionPayloadHeaderGloas: b.latestExecutionPayloadHeaderGloas.Copy(),
id: types.Enumerator.Inc(),
@@ -842,6 +968,8 @@ func (b *BeaconState) Copy() state.BeaconState {
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, electraSharedFieldRefCount)
case version.Fulu:
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, fuluSharedFieldRefCount)
case version.Gloas:
dst.sharedFieldReferences = make(map[types.FieldIndex]*stateutil.Reference, gloasSharedFieldRefCount)
}
for field, ref := range b.sharedFieldReferences {
@@ -937,6 +1065,8 @@ func (b *BeaconState) initializeMerkleLayers(ctx context.Context) error {
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateElectraFieldCount)
case version.Fulu:
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateFuluFieldCount)
case version.Gloas:
b.dirtyFields = make(map[types.FieldIndex]bool, params.BeaconConfig().BeaconStateFuluFieldCount)
default:
return fmt.Errorf("unknown state version (%s) when computing dirty fields in merklization", version.String(b.version))
}
@@ -1149,6 +1279,8 @@ func (b *BeaconState) rootSelector(ctx context.Context, field types.FieldIndex)
return b.latestExecutionPayloadHeaderCapella.HashTreeRoot()
case types.LatestExecutionPayloadHeaderDeneb:
return b.latestExecutionPayloadHeaderDeneb.HashTreeRoot()
case types.LatestExecutionPayloadHeaderGloas:
return b.latestExecutionPayloadHeaderGloas.HashTreeRoot()
case types.NextWithdrawalIndex:
return ssz.Uint64Root(b.nextWithdrawalIndex), nil
case types.NextWithdrawalValidatorIndex:

View File

@@ -88,6 +88,8 @@ func (f FieldIndex) String() string {
return "latestExecutionPayloadHeaderCapella"
case LatestExecutionPayloadHeaderDeneb:
return "latestExecutionPayloadHeaderDeneb"
case LatestExecutionPayloadHeaderGloas:
return "latestExecutionPayloadHeaderGloas"
case NextWithdrawalIndex:
return "nextWithdrawalIndex"
case NextWithdrawalValidatorIndex:
@@ -171,7 +173,7 @@ func (f FieldIndex) RealPosition() int {
return 22
case NextSyncCommittee:
return 23
case LatestExecutionPayloadHeader, LatestExecutionPayloadHeaderCapella, LatestExecutionPayloadHeaderDeneb:
case LatestExecutionPayloadHeader, LatestExecutionPayloadHeaderCapella, LatestExecutionPayloadHeaderDeneb, LatestExecutionPayloadHeaderGloas:
return 24
case NextWithdrawalIndex:
return 25
@@ -251,6 +253,7 @@ const (
LatestExecutionPayloadHeader
LatestExecutionPayloadHeaderCapella
LatestExecutionPayloadHeaderDeneb
LatestExecutionPayloadHeaderGloas
NextWithdrawalIndex
NextWithdrawalValidatorIndex
HistoricalSummaries

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"batch_verifier.go",
"block_batcher.go",
"broadcast_bls_changes.go",
"context.go",
"custody.go",
"data_column_sidecars.go",
@@ -165,7 +164,6 @@ go_test(
"batch_verifier_test.go",
"blobs_test.go",
"block_batcher_test.go",
"broadcast_bls_changes_test.go",
"context_test.go",
"custody_test.go",
"data_column_sidecars_test.go",
@@ -194,6 +192,7 @@ go_test(
"slot_aware_cache_test.go",
"subscriber_beacon_aggregate_proof_test.go",
"subscriber_beacon_blocks_test.go",
"subscriber_data_column_sidecar_test.go",
"subscriber_test.go",
"subscription_topic_handler_test.go",
"sync_fuzz_test.go",
@@ -266,6 +265,7 @@ go_test(
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/equality:go_default_library",
"//genesis:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",

View File

@@ -1,88 +0,0 @@
package sync
import (
"context"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/blocks"
"github.com/OffchainLabs/prysm/v6/config/params"
types "github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/crypto/rand"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/time/slots"
)
const broadcastBLSChangesRateLimit = 128
// This routine broadcasts known BLS changes at the Capella fork.
func (s *Service) broadcastBLSChanges(currSlot types.Slot) {
capellaSlotStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
if err != nil {
// only possible error is an overflow, so we exit early from the method
return
}
if currSlot != capellaSlotStart {
return
}
changes, err := s.cfg.blsToExecPool.PendingBLSToExecChanges()
if err != nil {
log.WithError(err).Error("could not get BLS to execution changes")
}
if len(changes) == 0 {
return
}
source := rand.NewGenerator()
length := len(changes)
broadcastChanges := make([]*ethpb.SignedBLSToExecutionChange, length)
for i := 0; i < length; i++ {
idx := source.Intn(len(changes))
broadcastChanges[i] = changes[idx]
changes = append(changes[:idx], changes[idx+1:]...)
}
go s.rateBLSChanges(s.ctx, broadcastChanges)
}
func (s *Service) broadcastBLSBatch(ctx context.Context, ptr *[]*ethpb.SignedBLSToExecutionChange) {
limit := broadcastBLSChangesRateLimit
if len(*ptr) < broadcastBLSChangesRateLimit {
limit = len(*ptr)
}
st, err := s.cfg.chain.HeadStateReadOnly(ctx)
if err != nil {
log.WithError(err).Error("could not get head state")
return
}
for _, ch := range (*ptr)[:limit] {
if ch != nil {
_, err := blocks.ValidateBLSToExecutionChange(st, ch)
if err != nil {
log.WithError(err).Error("could not validate BLS to execution change")
continue
}
if err := s.cfg.p2p.Broadcast(ctx, ch); err != nil {
log.WithError(err).Error("could not broadcast BLS to execution changes.")
}
}
}
*ptr = (*ptr)[limit:]
}
func (s *Service) rateBLSChanges(ctx context.Context, changes []*ethpb.SignedBLSToExecutionChange) {
s.broadcastBLSBatch(ctx, &changes)
if len(changes) == 0 {
return
}
ticker := time.NewTicker(500 * time.Millisecond)
for {
select {
case <-s.ctx.Done():
return
case <-ticker.C:
s.broadcastBLSBatch(ctx, &changes)
if len(changes) == 0 {
return
}
}
}
}

View File

@@ -1,157 +0,0 @@
package sync
import (
"testing"
"time"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/signing"
testingdb "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/operations/blstoexec"
mockp2p "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/encoding/bytesutil"
ethpb "github.com/OffchainLabs/prysm/v6/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
"github.com/OffchainLabs/prysm/v6/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
func TestBroadcastBLSChanges(t *testing.T) {
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.CapellaForkEpoch = c.BellatrixForkEpoch.Add(2)
params.OverrideBeaconConfig(c)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
s := NewService(t.Context(),
WithP2P(mockp2p.NewTestP2P(t)),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
var emptySig [96]byte
s.cfg.blsToExecPool.InsertBLSToExecChange(&ethpb.SignedBLSToExecutionChange{
Message: &ethpb.BLSToExecutionChange{
ValidatorIndex: 10,
FromBlsPubkey: make([]byte, 48),
ToExecutionAddress: make([]byte, 20),
},
Signature: emptySig[:],
})
capellaStart, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
s.broadcastBLSChanges(capellaStart + 1)
}
func TestRateBLSChanges(t *testing.T) {
logHook := logTest.NewGlobal()
params.SetupTestConfigCleanup(t)
c := params.BeaconConfig()
c.CapellaForkEpoch = c.BellatrixForkEpoch.Add(2)
params.OverrideBeaconConfig(c)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
p1 := mockp2p.NewTestP2P(t)
s := NewService(t.Context(),
WithP2P(p1),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
beaconDB := testingdb.SetupDB(t)
s.cfg.stateGen = stategen.New(beaconDB, doublylinkedtree.New())
s.cfg.beaconDB = beaconDB
s.initCaches()
st, keys := util.DeterministicGenesisStateCapella(t, 256)
s.cfg.chain = &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Now().Add(-time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Duration(10)),
State: st,
}
for i := 0; i < 200; i++ {
message := &ethpb.BLSToExecutionChange{
ValidatorIndex: primitives.ValidatorIndex(i),
FromBlsPubkey: keys[i+1].PublicKey().Marshal(),
ToExecutionAddress: bytesutil.PadTo([]byte("address"), 20),
}
epoch := params.BeaconConfig().CapellaForkEpoch + 1
domain, err := signing.Domain(st.Fork(), epoch, params.BeaconConfig().DomainBLSToExecutionChange, st.GenesisValidatorsRoot())
assert.NoError(t, err)
htr, err := signing.Data(message.HashTreeRoot, domain)
assert.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: keys[i+1].Sign(htr[:]).Marshal(),
}
s.cfg.blsToExecPool.InsertBLSToExecChange(signed)
}
require.Equal(t, false, p1.BroadcastCalled.Load())
slot, err := slots.EpochStart(params.BeaconConfig().CapellaForkEpoch)
require.NoError(t, err)
s.broadcastBLSChanges(slot)
time.Sleep(100 * time.Millisecond) // Need a sleep for the go routine to be ready
require.Equal(t, true, p1.BroadcastCalled.Load())
require.LogsDoNotContain(t, logHook, "could not")
p1.BroadcastCalled.Store(false)
time.Sleep(500 * time.Millisecond) // Need a sleep for the second batch to be broadcast
require.Equal(t, true, p1.BroadcastCalled.Load())
require.LogsDoNotContain(t, logHook, "could not")
}
func TestBroadcastBLSBatch_changes_slice(t *testing.T) {
message := &ethpb.BLSToExecutionChange{
FromBlsPubkey: make([]byte, 48),
ToExecutionAddress: make([]byte, 20),
}
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: make([]byte, 96),
}
changes := make([]*ethpb.SignedBLSToExecutionChange, 200)
for i := 0; i < len(changes); i++ {
changes[i] = signed
}
p1 := mockp2p.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now(),
ValidatorsRoot: [32]byte{'A'},
}
s := NewService(t.Context(),
WithP2P(p1),
WithInitialSync(&mockSync.Sync{IsSyncing: false}),
WithChainService(chainService),
WithOperationNotifier(chainService.OperationNotifier()),
WithBlsToExecPool(blstoexec.NewPool()),
)
beaconDB := testingdb.SetupDB(t)
s.cfg.stateGen = stategen.New(beaconDB, doublylinkedtree.New())
s.cfg.beaconDB = beaconDB
s.initCaches()
st, _ := util.DeterministicGenesisStateCapella(t, 32)
s.cfg.chain = &mockChain.ChainService{
ValidatorsRoot: [32]byte{'A'},
Genesis: time.Now().Add(-time.Second * time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Duration(10)),
State: st,
}
s.broadcastBLSBatch(s.ctx, &changes)
require.Equal(t, 200-128, len(changes))
}

View File

@@ -106,6 +106,9 @@ func (s *Service) custodyGroupCount() (uint64, error) {
// validatorsCustodyRequirement computes the custody requirement based on the
// finalized state and the tracked validators.
func (s *Service) validatorsCustodyRequirement() (uint64, error) {
if s.trackedValidatorsCache == nil {
return 0, nil
}
// Get the indices of the tracked validators.
indices := s.trackedValidatorsCache.Indices()

View File

@@ -12,6 +12,7 @@ import (
// forkWatcher is a background routine that watches for incoming forks. Depending on the epoch,
// it subscribes to and unsubscribes from the relevant topics at the fork boundaries.
func (s *Service) forkWatcher() {
<-s.initialSyncComplete
slotTicker := slots.NewSlotTicker(s.cfg.clock.GenesisTime(), params.BeaconConfig().SecondsPerSlot)
for {
select {
@@ -28,9 +29,6 @@ func (s *Service) forkWatcher() {
log.WithError(err).Error("Unable to check for fork in the previous epoch")
continue
}
// Broadcast BLS changes at the Capella fork boundary
s.broadcastBLSChanges(currSlot)
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
slotTicker.Done()

View File

@@ -2,96 +2,84 @@ package sync
import (
"context"
"fmt"
"sync"
"testing"
"time"
"github.com/OffchainLabs/prysm/v6/async/abool"
mockChain "github.com/OffchainLabs/prysm/v6/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v6/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/startup"
mockSync "github.com/OffchainLabs/prysm/v6/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/runtime/version"
"github.com/OffchainLabs/prysm/v6/genesis"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
)
func defaultClockWithTimeAtEpoch(epoch primitives.Epoch) *startup.Clock {
now := genesis.Time().Add(params.EpochsDuration(epoch, params.BeaconConfig()))
return startup.NewClock(genesis.Time(), genesis.ValidatorsRoot(), startup.WithTimeAsNow(now))
}
func testForkWatcherService(t *testing.T, current primitives.Epoch) *Service {
closedChan := make(chan struct{})
close(closedChan)
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: genesis.Time(),
ValidatorsRoot: genesis.ValidatorsRoot(),
}
ctx, cancel := context.WithTimeout(t.Context(), 10*time.Millisecond)
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: defaultClockWithTimeAtEpoch(current),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
initialSyncComplete: closedChan,
}
return r
}
func TestService_CheckForNextEpochFork(t *testing.T) {
closedChan := make(chan struct{})
close(closedChan)
params.SetupTestConfigCleanup(t)
genesis.StoreEmbeddedDuringTest(t, params.BeaconConfig().ConfigName)
params.BeaconConfig().FuluForkEpoch = params.BeaconConfig().ElectraForkEpoch + 1096*2
params.BeaconConfig().InitializeForkSchedule()
tests := []struct {
name string
svcCreator func(t *testing.T) *Service
currEpoch primitives.Epoch
wantErr bool
postSvcCheck func(t *testing.T, s *Service)
name string
svcCreator func(t *testing.T) *Service
checkRegistration func(t *testing.T, s *Service)
forkEpoch primitives.Epoch
epochAtRegistration func(primitives.Epoch) primitives.Epoch
nextForkEpoch primitives.Epoch
}{
{
name: "no fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(time.Duration(-params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().SlotsPerEpoch))) * time.Second)
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 10,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
},
name: "no fork in the next epoch",
forkEpoch: params.BeaconConfig().AltairForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 2 },
nextForkEpoch: params.BeaconConfig().BellatrixForkEpoch,
checkRegistration: func(t *testing.T, s *Service) {},
},
{
name: "altair fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
name: "altair fork in the next epoch",
forkEpoch: params.BeaconConfig().AltairForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
nextForkEpoch: params.BeaconConfig().BellatrixForkEpoch,
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().AltairForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
@@ -99,375 +87,132 @@ func TestService_CheckForNextEpochFork(t *testing.T) {
assert.Equal(t, true, rpcMap[p2p.RPCBlocksByRangeTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlocksByRootTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCMetaDataTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
expected := fmt.Sprintf(p2p.SyncContributionAndProofSubnetTopicFormat+s.cfg.p2p.Encoding().ProtocolSuffix(), digest)
assert.Equal(t, true, s.subHandler.topicExists(expected), "subnet topic doesn't exist")
// TODO: we should check subcommittee indices here but we need to work with the committee cache to do it properly
/*
subIndices := mapFromCount(params.BeaconConfig().SyncCommitteeSubnetCount)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.SyncCommitteeSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
*/
},
},
{
name: "bellatrix fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 3
bCfg.BellatrixForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
name: "capella fork in the next epoch",
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().CapellaForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
expected := fmt.Sprintf(p2p.BlsToExecutionChangeSubnetTopicFormat+s.cfg.p2p.Encoding().ProtocolSuffix(), digest)
assert.Equal(t, true, s.subHandler.topicExists(expected), "subnet topic doesn't exist")
},
forkEpoch: params.BeaconConfig().CapellaForkEpoch,
nextForkEpoch: params.BeaconConfig().DenebForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "deneb fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.DenebForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().DenebForkEpoch)
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
subIndices := mapFromCount(params.BeaconConfig().BlobsidecarSubnetCount)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.BlobSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().DenebForkEpoch,
nextForkEpoch: params.BeaconConfig().ElectraForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "electra fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
checkRegistration: func(t *testing.T, s *Service) {
digest := params.ForkDigest(params.BeaconConfig().ElectraForkEpoch)
subIndices := mapFromCount(params.BeaconConfig().BlobsidecarSubnetCountElectra)
for idx := range subIndices {
topic := fmt.Sprintf(p2p.BlobSubnetTopicFormat, digest, idx)
expected := topic + s.cfg.p2p.Encoding().ProtocolSuffix()
assert.Equal(t, true, s.subHandler.topicExists(expected), fmt.Sprintf("subnet topic %s doesn't exist", expected))
}
bCfg := params.BeaconConfig().Copy()
bCfg.ElectraForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().ElectraForkEpoch,
nextForkEpoch: params.BeaconConfig().FuluForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
{
name: "fulu fork in the next epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
gt := time.Now().Add(-4 * oneEpoch())
vr := [32]byte{'A'}
chainService := &mockChain.ChainService{
Genesis: gt,
ValidatorsRoot: vr,
}
bCfg := params.BeaconConfig().Copy()
bCfg.FuluForkEpoch = 5
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: startup.NewClock(gt, vr),
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
trackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(5)
assert.Equal(t, true, s.subHandler.digestExists(digest))
checkRegistration: func(t *testing.T, s *Service) {
rpcMap := make(map[string]bool)
for _, p := range s.cfg.p2p.Host().Mux().Protocols() {
rpcMap[string(p)] = true
}
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCBlobSidecarsByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
assert.Equal(t, true, rpcMap[p2p.RPCMetaDataTopicV3+s.cfg.p2p.Encoding().ProtocolSuffix()], "topic doesn't exist")
},
forkEpoch: params.BeaconConfig().FuluForkEpoch,
nextForkEpoch: params.BeaconConfig().FuluForkEpoch,
epochAtRegistration: func(e primitives.Epoch) primitives.Epoch { return e - 1 },
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcCreator(t)
if err := s.registerForUpcomingFork(tt.currEpoch); (err != nil) != tt.wantErr {
t.Errorf("registerForUpcomingFork() error = %v, wantErr %v", err, tt.wantErr)
current := tt.epochAtRegistration(tt.forkEpoch)
s := testForkWatcherService(t, current)
wg := attachSpawner(s)
require.NoError(t, s.registerForUpcomingFork(s.cfg.clock.CurrentEpoch()))
wg.Wait()
tt.checkRegistration(t, s)
if current != tt.forkEpoch-1 {
return
}
tt.postSvcCheck(t, s)
// Ensure the topics were registered for the upcoming fork
digest := params.ForkDigest(tt.forkEpoch)
assert.Equal(t, true, s.subHandler.digestExists(digest))
// After this point we are checking deregistration, which doesn't apply if there isn't a higher
// nextForkEpoch.
if tt.forkEpoch >= tt.nextForkEpoch {
return
}
nextDigest := params.ForkDigest(tt.nextForkEpoch)
// Move the clock to just before the next fork epoch and ensure deregistration is correct
wg = attachSpawner(s)
s.cfg.clock = defaultClockWithTimeAtEpoch(tt.nextForkEpoch - 1)
require.NoError(t, s.registerForUpcomingFork(s.cfg.clock.CurrentEpoch()))
wg.Wait()
// deregister as if it is the epoch after the next fork epoch
require.NoError(t, s.deregisterFromPastFork(tt.nextForkEpoch+1))
assert.Equal(t, false, s.subHandler.digestExists(digest))
assert.Equal(t, true, s.subHandler.digestExists(nextDigest))
})
}
}
func TestService_CheckForPreviousEpochFork(t *testing.T) {
params.SetupTestConfigCleanup(t)
tests := []struct {
name string
svcCreator func(t *testing.T) *Service
currEpoch primitives.Epoch
wantErr bool
postSvcCheck func(t *testing.T, s *Service)
}{
{
name: "no fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
err := r.registerRPCHandlers()
assert.NoError(t, err)
return r
},
currEpoch: 10,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
ptcls := s.cfg.p2p.Host().Mux().Protocols()
pMap := make(map[string]bool)
for _, p := range ptcls {
pMap[string(p)] = true
}
assert.Equal(t, true, pMap[p2p.RPCGoodByeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCStatusTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCPingTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCMetaDataTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
},
},
{
name: "altair fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 3
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
prevGenesis := chainService.Genesis
// To allow registration of v1 handlers
chainService.Genesis = time.Now().Add(-1 * oneEpoch())
err := r.registerRPCHandlers()
assert.NoError(t, err)
chainService.Genesis = prevGenesis
previous, err := r.rpcHandlerByTopicFromFork(version.Phase0)
assert.NoError(t, err)
next, err := r.rpcHandlerByTopicFromFork(version.Altair)
assert.NoError(t, err)
handlerByTopic := addedRPCHandlerByTopic(previous, next)
for topic, handler := range handlerByTopic {
r.registerRPC(topic, handler)
}
digest := params.ForkDigest(0)
assert.NoError(t, err)
r.registerSubscribers(0, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
r.registerSubscribers(3, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(0)
assert.Equal(t, false, s.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
assert.Equal(t, true, s.subHandler.digestExists(digest))
ptcls := s.cfg.p2p.Host().Mux().Protocols()
pMap := make(map[string]bool)
for _, p := range ptcls {
pMap[string(p)] = true
}
assert.Equal(t, true, pMap[p2p.RPCGoodByeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCStatusTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCPingTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCMetaDataTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRangeTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, true, pMap[p2p.RPCBlocksByRootTopicV2+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCMetaDataTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCBlocksByRangeTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
assert.Equal(t, false, pMap[p2p.RPCBlocksByRootTopicV1+s.cfg.p2p.Encoding().ProtocolSuffix()])
},
},
{
name: "bellatrix fork in the previous epoch",
svcCreator: func(t *testing.T) *Service {
peer2peer := p2ptest.NewTestP2P(t)
chainService := &mockChain.ChainService{
Genesis: time.Now().Add(-4 * oneEpoch()),
ValidatorsRoot: [32]byte{'A'},
}
clock := startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)
bCfg := params.BeaconConfig().Copy()
bCfg.AltairForkEpoch = 1
bCfg.BellatrixForkEpoch = 3
params.OverrideBeaconConfig(bCfg)
params.BeaconConfig().InitializeForkSchedule()
ctx, cancel := context.WithCancel(t.Context())
r := &Service{
ctx: ctx,
cancel: cancel,
cfg: &config{
p2p: peer2peer,
chain: chainService,
clock: clock,
initialSync: &mockSync.Sync{IsSyncing: false},
},
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
digest := params.ForkDigest(1)
r.registerSubscribers(1, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
r.registerSubscribers(3, digest)
assert.Equal(t, true, r.subHandler.digestExists(digest))
return r
},
currEpoch: 4,
wantErr: false,
postSvcCheck: func(t *testing.T, s *Service) {
digest := params.ForkDigest(1)
assert.Equal(t, false, s.subHandler.digestExists(digest))
digest = params.ForkDigest(3)
assert.Equal(t, true, s.subHandler.digestExists(digest))
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := tt.svcCreator(t)
if err := s.deregisterFromPastFork(tt.currEpoch); (err != nil) != tt.wantErr {
t.Errorf("deregisterFromPastFork() error = %v, wantErr %v", err, tt.wantErr)
}
tt.postSvcCheck(t, s)
})
}
}
func attachSpawner(s *Service) *sync.WaitGroup {
wg := new(sync.WaitGroup)
s.subscriptionSpawner = func(f func()) {
wg.Add(1)
go func() {
defer wg.Done()
f()
}()
}
return wg
}
// oneEpoch returns the duration of one epoch.


@@ -450,7 +450,12 @@ func (f *blocksFetcher) fetchBlocksFromPeer(
for _, p := range peers {
blocks, err := f.requestBlocks(ctx, req, p)
if err != nil {
log.WithField("peer", p).WithError(err).Debug("Could not request blocks by range from peer")
log.WithFields(logrus.Fields{
"peer": p,
"startSlot": req.StartSlot,
"count": req.Count,
"step": req.Step,
}).WithError(err).Debug("Could not request blocks by range from peer")
continue
}
f.p2p.Peers().Scorers().BlockProviderScorer().Touch(p)


@@ -92,22 +92,72 @@ func SendBeaconBlocksByRangeRequest(
// The response MUST contain no more than `count` blocks, and no more than
// MAX_REQUEST_BLOCKS blocks.
currentEpoch := slots.ToEpoch(tor.CurrentSlot())
if i >= req.Count || i >= params.MaxRequestBlock(currentEpoch) {
maxBlocks := params.MaxRequestBlock(currentEpoch)
if i >= req.Count {
log.WithFields(logrus.Fields{
"blockIndex": i,
"requestedCount": req.Count,
"blockSlot": blk.Block().Slot(),
"peer": pid,
"reason": "exceeded requested count",
}).Debug("Peer returned invalid data: too many blocks")
return nil, ErrInvalidFetchedData
}
if i >= maxBlocks {
log.WithFields(logrus.Fields{
"blockIndex": i,
"maxBlocks": maxBlocks,
"currentEpoch": currentEpoch,
"blockSlot": blk.Block().Slot(),
"peer": pid,
"reason": "exceeded MAX_REQUEST_BLOCKS",
}).Debug("Peer returned invalid data: exceeded protocol limit")
return nil, ErrInvalidFetchedData
}
// Returned blocks MUST be in the slot range [start_slot, start_slot + count * step).
if blk.Block().Slot() < req.StartSlot || blk.Block().Slot() >= req.StartSlot.Add(req.Count*req.Step) {
endSlot := req.StartSlot.Add(req.Count * req.Step)
if blk.Block().Slot() < req.StartSlot {
log.WithFields(logrus.Fields{
"blockSlot": blk.Block().Slot(),
"requestedStart": req.StartSlot,
"peer": pid,
"reason": "block slot before requested start",
}).Debug("Peer returned invalid data: block too early")
return nil, ErrInvalidFetchedData
}
if blk.Block().Slot() >= endSlot {
log.WithFields(logrus.Fields{
"blockSlot": blk.Block().Slot(),
"requestedStart": req.StartSlot,
"requestedEnd": endSlot,
"requestedCount": req.Count,
"requestedStep": req.Step,
"peer": pid,
"reason": "block slot >= start + count*step",
}).Debug("Peer returned invalid data: block beyond range")
return nil, ErrInvalidFetchedData
}
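As a quick standalone sketch (illustrative numbers only, not taken from any request above), the [start_slot, start_slot + count*step) rule these checks enforce can be pictured like this:

package main

import "fmt"

func main() {
	// Illustrative values only.
	start, count, step := uint64(100), uint64(8), uint64(2)
	end := start + count*step // exclusive upper bound: 116

	for _, slot := range []uint64{99, 100, 114, 116} {
		ok := slot >= start && slot < end
		fmt.Printf("slot %d is in [%d, %d): %t\n", slot, start, end, ok)
	}
	// Slots 100 and 114 are accepted; 99 (before start) and 116 (>= end) are rejected.
}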
// Returned blocks, where they exist, MUST be sent in a consecutive order.
// Consecutive blocks MUST have values in `step` increments (slots may be skipped in between).
isSlotOutOfOrder := false
outOfOrderReason := ""
if prevSlot >= blk.Block().Slot() {
isSlotOutOfOrder = true
outOfOrderReason = "slot not increasing"
} else if req.Step != 0 && blk.Block().Slot().SubSlot(prevSlot).Mod(req.Step) != 0 {
isSlotOutOfOrder = true
slotDiff := blk.Block().Slot().SubSlot(prevSlot)
outOfOrderReason = fmt.Sprintf("slot diff %d not multiple of step %d", slotDiff, req.Step)
}
if !isFirstChunk && isSlotOutOfOrder {
log.WithFields(logrus.Fields{
"blockSlot": blk.Block().Slot(),
"prevSlot": prevSlot,
"requestedStep": req.Step,
"blockIndex": i,
"peer": pid,
"reason": outOfOrderReason,
}).Debug("Peer returned invalid data: blocks out of order")
return nil, ErrInvalidFetchedData
}
prevSlot = blk.Block().Slot()


@@ -316,11 +316,14 @@ func TestHandshakeHandlers_Roundtrip(t *testing.T) {
clock: startup.NewClock(chain.Genesis, chain.ValidatorsRoot),
beaconDB: db,
stateNotifier: chain.StateNotifier(),
initialSync: &mockSync.Sync{IsSyncing: false},
},
rateLimiter: newRateLimiter(p1),
clockWaiter: cw,
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, r)
clock := startup.NewClockSynchronizer()
require.NoError(t, clock.SetClock(startup.NewClock(time.Now(), [32]byte{})))
r.verifierWaiter = verification.NewInitializerWaiter(clock, chain.ForkChoiceStore, r.cfg.stateGen)
@@ -337,9 +340,12 @@ func TestHandshakeHandlers_Roundtrip(t *testing.T) {
clock: startup.NewClock(chain2.Genesis, chain2.ValidatorsRoot),
p2p: p2,
stateNotifier: chain.StateNotifier(),
initialSync: &mockSync.Sync{IsSyncing: false},
},
rateLimiter: newRateLimiter(p2),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, r2)
clock = startup.NewClockSynchronizer()
require.NoError(t, clock.SetClock(startup.NewClock(time.Now(), [32]byte{})))
r2.verifierWaiter = verification.NewInitializerWaiter(clock, chain2.ForkChoiceStore, r2.cfg.stateGen)
@@ -909,13 +915,16 @@ func TestStatusRPCRequest_BadPeerHandshake(t *testing.T) {
p2p: p1,
chain: chain,
stateNotifier: chain.StateNotifier(),
initialSync: &mockSync.Sync{IsSyncing: false},
},
ctx: ctx,
rateLimiter: newRateLimiter(p1),
clockWaiter: cw,
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, r)
clock := startup.NewClockSynchronizer()
require.NoError(t, clock.SetClock(startup.NewClock(time.Now(), [32]byte{})))
r.verifierWaiter = verification.NewInitializerWaiter(clock, chain.ForkChoiceStore, r.cfg.stateGen)


@@ -178,6 +178,7 @@ type Service struct {
lcStore *lightClient.Store
dataColumnLogCh chan dataColumnLogEntry
registeredNetworkEntry params.NetworkScheduleEntry
subscriptionSpawner func(func()) // see Service.spawn for details
}
// NewService initializes new regular sync service.
@@ -254,7 +255,7 @@ func (s *Service) Start() {
s.newColumnsVerifier = newDataColumnsVerifierFromInitializer(v)
go s.verifierRoutine()
go s.startTasksPostInitialSync()
go s.startDiscoveryAndSubscriptions()
go s.processDataColumnLogs()
s.cfg.p2p.AddConnectionHandler(s.reValidatePeer, s.sendGoodbye)
@@ -384,32 +385,31 @@ func (s *Service) waitForChainStart() {
s.markForChainStart()
}
func (s *Service) startTasksPostInitialSync() {
func (s *Service) startDiscoveryAndSubscriptions() {
// Wait for the chain to start.
s.waitForChainStart()
select {
case <-s.initialSyncComplete:
// Compute the current epoch.
currentSlot := slots.CurrentSlot(s.cfg.clock.GenesisTime())
currentEpoch := slots.ToEpoch(currentSlot)
// Compute the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not retrieve current fork digest")
return
}
// Register respective pubsub handlers at state synced event.
s.registerSubscribers(currentEpoch, forkDigest)
// Start the fork watcher.
go s.forkWatcher()
case <-s.ctx.Done():
log.Debug("Context closed, exiting goroutine")
if s.ctx.Err() != nil {
log.Debug("Context closed, exiting StartDiscoveryAndSubscription")
return
}
// Compute the current epoch.
currentSlot := slots.CurrentSlot(s.cfg.clock.GenesisTime())
currentEpoch := slots.ToEpoch(currentSlot)
// Compute the current fork digest.
forkDigest, err := s.currentForkDigest()
if err != nil {
log.WithError(err).Error("Could not retrieve current fork digest")
return
}
// Register respective pubsub handlers at state synced event.
s.registerSubscribers(currentEpoch, forkDigest)
// Start the fork watcher.
go s.forkWatcher()
}
func (s *Service) writeErrorResponseToStream(responseCode byte, reason string, stream libp2pcore.Stream) {


@@ -69,7 +69,7 @@ func TestSyncHandlers_WaitToSync(t *testing.T) {
}
topic := "/eth2/%x/beacon_block"
go r.startTasksPostInitialSync()
go r.startDiscoveryAndSubscriptions()
time.Sleep(100 * time.Millisecond)
var vr [32]byte
@@ -150,7 +150,7 @@ func TestSyncHandlers_WaitTillSynced(t *testing.T) {
syncCompleteCh := make(chan bool)
go func() {
r.startTasksPostInitialSync()
r.startDiscoveryAndSubscriptions()
syncCompleteCh <- true
}()
@@ -206,8 +206,9 @@ func TestSyncService_StopCleanly(t *testing.T) {
clockWaiter: gs,
initialSyncComplete: make(chan struct{}),
}
markInitSyncComplete(t, &r)
go r.startTasksPostInitialSync()
go r.startDiscoveryAndSubscriptions()
var vr [32]byte
require.NoError(t, gs.SetClock(startup.NewClock(time.Now(), vr)))
r.waitForChainStart()
@@ -220,9 +221,6 @@ func TestSyncService_StopCleanly(t *testing.T) {
time.Sleep(2 * time.Second)
require.Equal(t, true, r.chainStarted.IsSet(), "Did not receive chain start event.")
close(r.initialSyncComplete)
time.Sleep(1 * time.Second)
require.NotEqual(t, 0, len(r.cfg.p2p.PubSub().GetTopics()))
require.NotEqual(t, 0, len(r.cfg.p2p.Host().Mux().Protocols()))


@@ -6,6 +6,7 @@ import (
"reflect"
"runtime/debug"
"strings"
"sync"
"time"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
@@ -35,40 +36,114 @@ import (
const pubsubMessageTimeout = 30 * time.Second
type (
// wrappedVal represents a gossip validator which also returns an error along with the result.
wrappedVal func(context.Context, peer.ID, *pubsub.Message) (pubsub.ValidationResult, error)
// subHandler represents handler for a given subscription.
subHandler func(context.Context, proto.Message) error
// parameters used for the `subscribeWithParameters` function.
subscribeParameters struct {
topicFormat string
validate wrappedVal
handle subHandler
digest [4]byte
// getSubnetsToJoin is a function that returns all subnets the node should join.
getSubnetsToJoin func(currentSlot primitives.Slot) map[uint64]bool
// getSubnetsRequiringPeers is a function that returns all subnets that require peers to be found
// but for which no subscriptions are needed.
getSubnetsRequiringPeers func(currentSlot primitives.Slot) map[uint64]bool
}
subscribeToSubnetsParameters struct {
subscriptionBySubnet map[uint64]*pubsub.Subscription
topicFormat string
digest [4]byte
validate wrappedVal
handle subHandler
getSubnetsToJoin func(currentSlot primitives.Slot) map[uint64]bool
}
)
var errInvalidDigest = errors.New("invalid digest")
// wrappedVal represents a gossip validator which also returns an error along with the result.
type wrappedVal func(context.Context, peer.ID, *pubsub.Message) (pubsub.ValidationResult, error)
// subHandler represents handler for a given subscription.
type subHandler func(context.Context, proto.Message) error
// subscribeParameters holds the parameters needed to construct a set of subscription topics for a given
// set of gossipsub subnets.
type subscribeParameters struct {
topicFormat string
validate wrappedVal
handle subHandler
digest [4]byte
// getSubnetsToJoin is a function that returns all subnets the node should join.
getSubnetsToJoin func(currentSlot primitives.Slot) map[uint64]bool
// getSubnetsRequiringPeers is a function that returns all subnets that require peers to be found
// but for which no subscriptions are needed.
getSubnetsRequiringPeers func(currentSlot primitives.Slot) map[uint64]bool
}
// shortTopic is a less verbose version of topic strings used for logging.
func (p subscribeParameters) shortTopic() string {
short := p.topicFormat
fmtLen := len(short)
if fmtLen >= 3 && short[fmtLen-3:] == "_%d" {
short = short[:fmtLen-3]
}
return fmt.Sprintf(short, p.digest)
}
func (p subscribeParameters) logFields() logrus.Fields {
return logrus.Fields{
"topic": p.shortTopic(),
}
}
// fullTopic is the fully qualified topic string, given to gossipsub.
func (p subscribeParameters) fullTopic(subnet uint64, suffix string) string {
return fmt.Sprintf(p.topicFormat, p.digest, subnet) + suffix
}
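As a quick standalone illustration (the digest, subnet number, and encoding suffix below are made-up values; the attestation-style topic format is only an example), shortTopic and fullTopic render the same format string at two levels of detail:

package main

import "fmt"

func main() {
	// Illustrative values only.
	digest := [4]byte{0xab, 0xcd, 0xef, 0x01}
	topicFormat := "/eth2/%x/beacon_attestation_%d"

	// shortTopic: the trailing "_%d" is dropped and only the digest is substituted.
	fmt.Printf(topicFormat[:len(topicFormat)-3]+"\n", digest)
	// -> /eth2/abcdef01/beacon_attestation

	// fullTopic: digest and subnet are substituted, then the protocol suffix is appended.
	fmt.Printf(topicFormat+"%s\n", digest, 7, "/ssz_snappy")
	// -> /eth2/abcdef01/beacon_attestation_7/ssz_snappy
}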
// subnetTracker keeps track of which subnets we are subscribed to, out of the set of
// possible subnets described by a `subscribeParameters`.
type subnetTracker struct {
subscribeParameters
mu sync.RWMutex
subscriptions map[uint64]*pubsub.Subscription
}
func newSubnetTracker(p subscribeParameters) *subnetTracker {
return &subnetTracker{
subscribeParameters: p,
subscriptions: make(map[uint64]*pubsub.Subscription),
}
}
// unwanted takes the set of wanted subnets and returns the currently subscribed subnets that are not in it.
func (t *subnetTracker) unwanted(wanted map[uint64]bool) []uint64 {
t.mu.RLock()
defer t.mu.RUnlock()
unwanted := make([]uint64, 0, len(t.subscriptions))
for subnet := range t.subscriptions {
if wanted == nil || !wanted[subnet] {
unwanted = append(unwanted, subnet)
}
}
return unwanted
}
// missing takes the set of wanted subnets and returns the wanted subnets that are not currently tracked.
func (t *subnetTracker) missing(wanted map[uint64]bool) []uint64 {
t.mu.RLock()
defer t.mu.RUnlock()
missing := make([]uint64, 0, len(wanted))
for subnet := range wanted {
if _, ok := t.subscriptions[subnet]; !ok {
missing = append(missing, subnet)
}
}
return missing
}
// cancelSubscription cancels and removes the subscription for a given subnet.
func (t *subnetTracker) cancelSubscription(subnet uint64) {
t.mu.Lock()
defer t.mu.Unlock()
defer delete(t.subscriptions, subnet)
sub := t.subscriptions[subnet]
if sub == nil {
return
}
sub.Cancel()
}
// track asks the subnetTracker to hold on to the subscription for a given subnet so
// that we can remember it is tracked and cancel the subscription when it's time to unsubscribe.
func (t *subnetTracker) track(subnet uint64, sub *pubsub.Subscription) {
if sub == nil {
return
}
t.mu.Lock()
defer t.mu.Unlock()
t.subscriptions[subnet] = sub
}
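To make the tracker bookkeeping concrete, here is a standalone sketch (plain maps, made-up subnet numbers, no pubsub) of what unwanted and missing would return when subnets 1-3 are tracked but 2-4 are wanted:

package main

import "fmt"

func main() {
	// Illustrative data only: currently tracked subscriptions vs. the wanted set.
	tracked := map[uint64]bool{1: true, 2: true, 3: true}
	wanted := map[uint64]bool{2: true, 3: true, 4: true}

	var unwanted, missing []uint64
	for subnet := range tracked {
		if !wanted[subnet] { // tracked but no longer wanted -> cancel and unsubscribe
			unwanted = append(unwanted, subnet)
		}
	}
	for subnet := range wanted {
		if !tracked[subnet] { // wanted but not yet tracked -> subscribe and track
			missing = append(missing, subnet)
		}
	}
	fmt.Println(unwanted, missing) // [1] [4]
}

Passing a nil wanted set marks every tracked subnet as unwanted, which is how pruneSubscriptions(t, nil) further below unsubscribes from everything once a digest becomes invalid.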
// noopValidator is a no-op that only decodes the message, but does not check its contents.
func (s *Service) noopValidator(_ context.Context, _ peer.ID, msg *pubsub.Message) (pubsub.ValidationResult, error) {
m, err := s.decodePubsubMessage(msg)
@@ -112,132 +187,146 @@ func (s *Service) activeSyncSubnetIndices(currentSlot primitives.Slot) map[uint6
return mapFromSlice(subscriptions)
}
// spawn allows the Service to use a custom function for launching goroutines.
// This is useful in tests, where the spawner can be backed by a sync.WaitGroup so the
// test can wait for the spawned goroutines to finish.
func (s *Service) spawn(f func()) {
if s.subscriptionSpawner != nil {
s.subscriptionSpawner(f)
} else {
go f()
}
}
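For readers unfamiliar with this pattern, here is a standalone sketch (generic worker type and names, not Prysm code) of why routing goroutine launches through an injectable spawner makes them awaitable in tests:

package main

import (
	"fmt"
	"sync"
)

// worker mirrors the pattern above: production code always launches work through
// spawn, and tests inject a WaitGroup-backed spawner so they can wait for it.
type worker struct {
	spawner func(func())
}

func (w *worker) spawn(f func()) {
	if w.spawner != nil {
		w.spawner(f)
		return
	}
	go f()
}

func main() {
	w := &worker{}
	wg := new(sync.WaitGroup)
	w.spawner = func(f func()) {
		wg.Add(1)
		go func() {
			defer wg.Done()
			f()
		}()
	}
	w.spawn(func() { fmt.Println("subscription goroutine ran") })
	wg.Wait() // deterministic: continue only after the spawned function has returned
}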
// Register PubSub subscribers
func (s *Service) registerSubscribers(epoch primitives.Epoch, digest [4]byte) {
s.subscribe(
p2p.BlockSubnetTopicFormat,
s.validateBeaconBlockPubSub,
s.beaconBlockSubscriber,
digest,
)
s.subscribe(
p2p.AggregateAndProofSubnetTopicFormat,
s.validateAggregateAndProof,
s.beaconAggregateProofSubscriber,
digest,
)
s.subscribe(
p2p.ExitSubnetTopicFormat,
s.validateVoluntaryExit,
s.voluntaryExitSubscriber,
digest,
)
s.subscribe(
p2p.ProposerSlashingSubnetTopicFormat,
s.validateProposerSlashing,
s.proposerSlashingSubscriber,
digest,
)
s.subscribe(
p2p.AttesterSlashingSubnetTopicFormat,
s.validateAttesterSlashing,
s.attesterSlashingSubscriber,
digest,
)
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.AttestationSubnetTopicFormat,
validate: s.validateCommitteeIndexBeaconAttestation,
handle: s.committeeIndexBeaconAttestationSubscriber,
digest: digest,
getSubnetsToJoin: s.persistentAndAggregatorSubnetIndices,
getSubnetsRequiringPeers: attesterSubnetIndices,
s.spawn(func() {
s.subscribe(p2p.BlockSubnetTopicFormat, s.validateBeaconBlockPubSub, s.beaconBlockSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.AggregateAndProofSubnetTopicFormat, s.validateAggregateAndProof, s.beaconAggregateProofSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.ExitSubnetTopicFormat, s.validateVoluntaryExit, s.voluntaryExitSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.ProposerSlashingSubnetTopicFormat, s.validateProposerSlashing, s.proposerSlashingSubscriber, digest)
})
s.spawn(func() {
s.subscribe(p2p.AttesterSlashingSubnetTopicFormat, s.validateAttesterSlashing, s.attesterSlashingSubscriber, digest)
})
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.AttestationSubnetTopicFormat,
validate: s.validateCommitteeIndexBeaconAttestation,
handle: s.committeeIndexBeaconAttestationSubscriber,
digest: digest,
getSubnetsToJoin: s.persistentAndAggregatorSubnetIndices,
getSubnetsRequiringPeers: attesterSubnetIndices,
})
})
// New gossip topic in Altair
if params.BeaconConfig().AltairForkEpoch <= epoch {
s.subscribe(
p2p.SyncContributionAndProofSubnetTopicFormat,
s.validateSyncContributionAndProof,
s.syncContributionAndProofSubscriber,
digest,
)
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
validate: s.validateSyncCommitteeMessage,
handle: s.syncCommitteeMessageSubscriber,
digest: digest,
getSubnetsToJoin: s.activeSyncSubnetIndices,
s.spawn(func() {
s.subscribe(
p2p.SyncContributionAndProofSubnetTopicFormat,
s.validateSyncContributionAndProof,
s.syncContributionAndProofSubscriber,
digest,
)
})
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
validate: s.validateSyncCommitteeMessage,
handle: s.syncCommitteeMessageSubscriber,
digest: digest,
getSubnetsToJoin: s.activeSyncSubnetIndices,
})
})
if features.Get().EnableLightClient {
s.subscribe(
p2p.LightClientOptimisticUpdateTopicFormat,
s.validateLightClientOptimisticUpdate,
s.lightClientOptimisticUpdateSubscriber,
digest,
)
s.subscribe(
p2p.LightClientFinalityUpdateTopicFormat,
s.validateLightClientFinalityUpdate,
s.lightClientFinalityUpdateSubscriber,
digest,
)
s.spawn(func() {
s.subscribe(
p2p.LightClientOptimisticUpdateTopicFormat,
s.validateLightClientOptimisticUpdate,
s.lightClientOptimisticUpdateSubscriber,
digest,
)
})
s.spawn(func() {
s.subscribe(
p2p.LightClientFinalityUpdateTopicFormat,
s.validateLightClientFinalityUpdate,
s.lightClientFinalityUpdateSubscriber,
digest,
)
})
}
}
// New gossip topic in Capella
if params.BeaconConfig().CapellaForkEpoch <= epoch {
s.subscribe(
p2p.BlsToExecutionChangeSubnetTopicFormat,
s.validateBlsToExecutionChange,
s.blsToExecutionChangeSubscriber,
digest,
)
s.spawn(func() {
s.subscribe(
p2p.BlsToExecutionChangeSubnetTopicFormat,
s.validateBlsToExecutionChange,
s.blsToExecutionChangeSubscriber,
digest,
)
})
}
// New gossip topic in Deneb, removed in Electra
if params.BeaconConfig().DenebForkEpoch <= epoch && epoch < params.BeaconConfig().ElectraForkEpoch {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.BlobSubnetTopicFormat,
validate: s.validateBlob,
handle: s.blobSubscriber,
digest: digest,
getSubnetsToJoin: func(primitives.Slot) map[uint64]bool {
return mapFromCount(params.BeaconConfig().BlobsidecarSubnetCount)
},
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.BlobSubnetTopicFormat,
validate: s.validateBlob,
handle: s.blobSubscriber,
digest: digest,
getSubnetsToJoin: func(primitives.Slot) map[uint64]bool {
return mapFromCount(params.BeaconConfig().BlobsidecarSubnetCount)
},
})
})
}
// New gossip topic in Electra, removed in Fulu
if params.BeaconConfig().ElectraForkEpoch <= epoch && epoch < params.BeaconConfig().FuluForkEpoch {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.BlobSubnetTopicFormat,
validate: s.validateBlob,
handle: s.blobSubscriber,
digest: digest,
getSubnetsToJoin: func(currentSlot primitives.Slot) map[uint64]bool {
return mapFromCount(params.BeaconConfig().BlobsidecarSubnetCountElectra)
},
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.BlobSubnetTopicFormat,
validate: s.validateBlob,
handle: s.blobSubscriber,
digest: digest,
getSubnetsToJoin: func(currentSlot primitives.Slot) map[uint64]bool {
return mapFromCount(params.BeaconConfig().BlobsidecarSubnetCountElectra)
},
})
})
}
// New gossip topic in Fulu.
if params.BeaconConfig().FuluForkEpoch <= epoch {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.DataColumnSubnetTopicFormat,
validate: s.validateDataColumn,
handle: s.dataColumnSubscriber,
digest: digest,
getSubnetsToJoin: s.dataColumnSubnetIndices,
// TODO: Should we find peers always? When validators are managed? When validators are managed AND when we are going to propose a block?
s.spawn(func() {
s.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.DataColumnSubnetTopicFormat,
validate: s.validateDataColumn,
handle: s.dataColumnSubscriber,
digest: digest,
getSubnetsToJoin: s.dataColumnSubnetIndices,
getSubnetsRequiringPeers: s.allDataColumnSubnets,
})
})
}
}
// subscribe to a given topic with a given validator and subscription handler.
// The base protobuf message is used to initialize new messages for decoding.
func (s *Service) subscribe(topic string, validator wrappedVal, handle subHandler, digest [4]byte) *pubsub.Subscription {
func (s *Service) subscribe(topic string, validator wrappedVal, handle subHandler, digest [4]byte) {
<-s.initialSyncComplete
_, e, err := params.ForkDataFromDigest(digest)
if err != nil {
// Impossible condition as it would mean digest does not exist.
@@ -248,7 +337,7 @@ func (s *Service) subscribe(topic string, validator wrappedVal, handle subHandle
// Impossible condition as it would mean topic does not exist.
panic(fmt.Sprintf("%s is not mapped to any message in GossipTopicMappings", topic)) // lint:nopanic -- Impossible condition.
}
return s.subscribeWithBase(s.addDigestToTopic(topic, digest), validator, handle)
s.subscribeWithBase(s.addDigestToTopic(topic, digest), validator, handle)
}
func (s *Service) subscribeWithBase(topic string, validator wrappedVal, handle subHandler) *pubsub.Subscription {
@@ -413,61 +502,38 @@ func (s *Service) wrapAndReportValidation(topic string, v wrappedVal) (string, p
// pruneSubscriptions unsubscribes from topics we are currently subscribed to but that are
// not in the list of wanted subnets.
// This function mutates the `subscriptionBySubnet` map, which is used to keep track of the current subscriptions.
func (s *Service) pruneSubscriptions(
subscriptionBySubnet map[uint64]*pubsub.Subscription,
wantedSubnets map[uint64]bool,
topicFormat string,
digest [4]byte,
) {
for subnet, subscription := range subscriptionBySubnet {
if subscription == nil {
// Should not happen, but just in case.
delete(subscriptionBySubnet, subnet)
continue
}
if wantedSubnets[subnet] {
// Nothing to prune.
continue
}
// We are subscribed to a subnet that is no longer wanted.
subscription.Cancel()
fullTopic := fmt.Sprintf(topicFormat, digest, subnet) + s.cfg.p2p.Encoding().ProtocolSuffix()
s.unSubscribeFromTopic(fullTopic)
delete(subscriptionBySubnet, subnet)
func (s *Service) pruneSubscriptions(t *subnetTracker, wantedSubnets map[uint64]bool) {
for _, subnet := range t.unwanted(wantedSubnets) {
t.cancelSubscription(subnet)
s.unSubscribeFromTopic(t.fullTopic(subnet, s.cfg.p2p.Encoding().ProtocolSuffix()))
}
}
// subscribeToSubnets subscribes to needed subnets and unsubscribe from unneeded ones.
// This function mutates the `subscriptionBySubnet` map, which is used to keep track of the current subscriptions.
func (s *Service) subscribeToSubnets(p subscribeToSubnetsParameters) error {
func (s *Service) subscribeToSubnets(t *subnetTracker) error {
// Do not subscribe if not synced.
if s.chainStarted.IsSet() && s.cfg.initialSync.Syncing() {
return nil
}
valid, err := isDigestValid(p.digest, s.cfg.clock)
valid, err := isDigestValid(t.digest, s.cfg.clock)
if err != nil {
return errors.Wrap(err, "is digest valid")
}
// Unsubscribe from all subnets if digest is not valid. It's likely to be the case after a hard fork.
if !valid {
wantedSubnets := map[uint64]bool{}
s.pruneSubscriptions(p.subscriptionBySubnet, wantedSubnets, p.topicFormat, p.digest)
s.pruneSubscriptions(t, nil)
return errInvalidDigest
}
subnetsToJoin := p.getSubnetsToJoin(s.cfg.clock.CurrentSlot())
s.pruneSubscriptions(p.subscriptionBySubnet, subnetsToJoin, p.topicFormat, p.digest)
for subnet := range subnetsToJoin {
subnetTopic := fmt.Sprintf(p.topicFormat, p.digest, subnet)
if _, exists := p.subscriptionBySubnet[subnet]; !exists {
subscription := s.subscribeWithBase(subnetTopic, p.validate, p.handle)
p.subscriptionBySubnet[subnet] = subscription
}
subnetsToJoin := t.getSubnetsToJoin(s.cfg.clock.CurrentSlot())
s.pruneSubscriptions(t, subnetsToJoin)
for _, subnet := range t.missing(subnetsToJoin) {
// TODO: subscribeWithBase appends the protocol suffix, other methods don't. Make this consistent.
topic := t.fullTopic(subnet, "")
t.track(subnet, s.subscribeWithBase(topic, t.validate, t.handle))
}
return nil
@@ -475,110 +541,81 @@ func (s *Service) subscribeToSubnets(p subscribeToSubnetsParameters) error {
// subscribeWithParameters subscribes to a list of subnets.
func (s *Service) subscribeWithParameters(p subscribeParameters) {
shortTopicFormat := p.topicFormat
shortTopicFormatLen := len(shortTopicFormat)
if shortTopicFormatLen >= 3 && shortTopicFormat[shortTopicFormatLen-3:] == "_%d" {
shortTopicFormat = shortTopicFormat[:shortTopicFormatLen-3]
}
shortTopic := fmt.Sprintf(shortTopicFormat, p.digest)
tracker := newSubnetTracker(p)
// Try once immediately so we don't have to wait until the next slot.
s.ensureSubnetPeersAndSubscribe(tracker)
parameters := subscribeToSubnetsParameters{
subscriptionBySubnet: make(map[uint64]*pubsub.Subscription),
topicFormat: p.topicFormat,
digest: p.digest,
validate: p.validate,
handle: p.handle,
getSubnetsToJoin: p.getSubnetsToJoin,
go s.logMinimumPeersPerSubnet(p)
slotTicker := slots.NewSlotTicker(s.cfg.clock.GenesisTime(), params.BeaconConfig().SecondsPerSlot)
defer slotTicker.Done()
for {
select {
case <-slotTicker.C():
s.ensureSubnetPeersAndSubscribe(tracker)
case <-s.ctx.Done():
return
}
}
err := s.subscribeToSubnets(parameters)
if err != nil {
log.WithError(err).Error("Could not subscribe to subnets")
}
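// ensureSubnetPeersAndSubscribe reconciles subscriptions with the currently wanted
// subnets via subscribeToSubnets, then dials peers for every subnet that still needs
// them, bounded by a one-slot timeout.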
func (s *Service) ensureSubnetPeersAndSubscribe(tracker *subnetTracker) {
timeout := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
minPeers := flags.Get().MinimumPeersPerSubnet
logFields := tracker.logFields()
neededSubnets := computeAllNeededSubnets(s.cfg.clock.CurrentSlot(), tracker.getSubnetsToJoin, tracker.getSubnetsRequiringPeers)
if err := s.subscribeToSubnets(tracker); err != nil {
if errors.Is(err, errInvalidDigest) {
log.WithFields(logFields).Debug("Digest is invalid, stopping subscription")
return
}
log.WithFields(logFields).WithError(err).Error("Could not subscribe to subnets")
return
}
slotDuration := time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second
ctx, cancel := context.WithTimeout(s.ctx, timeout)
defer cancel()
if err := s.cfg.p2p.FindAndDialPeersWithSubnets(ctx, tracker.topicFormat, tracker.digest, minPeers, neededSubnets); err != nil && !errors.Is(err, context.DeadlineExceeded) {
log.WithFields(logFields).WithError(err).Debug("Could not find peers with subnets")
}
}
func (s *Service) logMinimumPeersPerSubnet(p subscribeParameters) {
logFields := p.logFields()
minimumPeersPerSubnet := flags.Get().MinimumPeersPerSubnet
// Subscribe to expected subnets and search for peers if needed at every slot.
go func() {
currentSlot := s.cfg.clock.CurrentSlot()
neededSubnets := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDuration)
defer cancel()
if err := s.cfg.p2p.FindAndDialPeersWithSubnets(ctx, p.topicFormat, p.digest, minimumPeersPerSubnet, neededSubnets); err != nil && !errors.Is(err, context.DeadlineExceeded) {
log.WithError(err).Debug("Could not find peers with subnets")
}
}()
slotTicker := slots.NewSlotTicker(s.cfg.clock.GenesisTime(), params.BeaconConfig().SecondsPerSlot)
defer slotTicker.Done()
for {
select {
case <-slotTicker.C():
currentSlot := s.cfg.clock.CurrentSlot()
neededSubnets := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
if err := s.subscribeToSubnets(parameters); err != nil {
if errors.Is(err, errInvalidDigest) {
log.WithField("topics", shortTopic).Debug("Digest is invalid, stopping subscription")
return
}
log.WithError(err).Error("Could not subscribe to subnets")
continue
}
func() {
ctx, cancel := context.WithTimeout(s.ctx, slotDuration)
defer cancel()
if err := s.cfg.p2p.FindAndDialPeersWithSubnets(ctx, p.topicFormat, p.digest, minimumPeersPerSubnet, neededSubnets); err != nil && !errors.Is(err, context.DeadlineExceeded) {
log.WithError(err).Debug("Could not find peers with subnets")
}
}()
case <-s.ctx.Done():
return
}
}
}()
// Warn the user if we are not subscribed to enough peers in the subnets.
go func() {
log := log.WithField("minimum", minimumPeersPerSubnet)
logTicker := time.NewTicker(5 * time.Minute)
defer logTicker.Stop()
log := log.WithField("minimum", minimumPeersPerSubnet)
logTicker := time.NewTicker(5 * time.Minute)
defer logTicker.Stop()
for {
select {
case <-logTicker.C:
currentSlot := s.cfg.clock.CurrentSlot()
subnetsToFindPeersIndex := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
for {
select {
case <-logTicker.C:
currentSlot := s.cfg.clock.CurrentSlot()
subnetsToFindPeersIndex := computeAllNeededSubnets(currentSlot, p.getSubnetsToJoin, p.getSubnetsRequiringPeers)
isSubnetWithMissingPeers := false
// Find new peers for wanted subnets if needed.
for index := range subnetsToFindPeersIndex {
topic := fmt.Sprintf(p.topicFormat, p.digest, index)
isSubnetWithMissingPeers := false
// Find new peers for wanted subnets if needed.
for index := range subnetsToFindPeersIndex {
topic := fmt.Sprintf(p.topicFormat, p.digest, index)
// Check if we have enough peers in the subnet. Skip if we do.
if count := s.connectedPeersCount(topic); count < minimumPeersPerSubnet {
isSubnetWithMissingPeers = true
log.WithFields(logrus.Fields{
"topic": topic,
"actual": count,
}).Warning("Not enough connected peers")
}
// Check if we have enough peers in the subnet. Skip if we do.
if count := s.connectedPeersCount(topic); count < minimumPeersPerSubnet {
isSubnetWithMissingPeers = true
log.WithFields(logrus.Fields{
"topic": topic,
"actual": count,
}).Warning("Not enough connected peers")
}
if !isSubnetWithMissingPeers {
log.WithField("topic", shortTopic).Info("All subnets have enough connected peers")
}
case <-s.ctx.Done():
return
}
if !isSubnetWithMissingPeers {
log.WithFields(logFields).Debug("All subnets have enough connected peers")
}
case <-s.ctx.Done():
return
}
}()
}
}
func (s *Service) unSubscribeFromTopic(topic string) {


@@ -6,7 +6,9 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed"
opfeed "github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/pkg/errors"
"google.golang.org/protobuf/proto"
)
@@ -52,3 +54,31 @@ func (s *Service) receiveDataColumnSidecar(ctx context.Context, sidecar blocks.V
return nil
}
// allDataColumnSubnets returns the data column subnets for which we need to find peers
// but do not need to subscribe to. This is used to ensure we have peers available in all
// subnets when we are serving validators: when a validator proposes a block, it needs to
// publish data column sidecars on all subnets. This method returns a nil map when there is
// no validators custody requirement.
func (s *Service) allDataColumnSubnets(_ primitives.Slot) map[uint64]bool {
validatorsCustodyRequirement, err := s.validatorsCustodyRequirement()
if err != nil {
log.WithError(err).Error("Could not retrieve validators custody requirement")
return nil
}
// If no validators are tracked, return early
if validatorsCustodyRequirement == 0 {
return nil
}
// When we have validators with custody requirements, we need peers in all subnets
// because validators need to be able to publish data columns to all subnets when proposing
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
allSubnets := make(map[uint64]bool, dataColumnSidecarSubnetCount)
for i := range dataColumnSidecarSubnetCount {
allSubnets[i] = true
}
return allSubnets
}


@@ -0,0 +1,65 @@
package sync
import (
"testing"
"github.com/OffchainLabs/prysm/v6/beacon-chain/cache"
dbtest "github.com/OffchainLabs/prysm/v6/beacon-chain/db/testing"
doublylinkedtree "github.com/OffchainLabs/prysm/v6/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/OffchainLabs/prysm/v6/beacon-chain/state/stategen"
"github.com/OffchainLabs/prysm/v6/config/params"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v6/testing/assert"
"github.com/OffchainLabs/prysm/v6/testing/require"
"github.com/OffchainLabs/prysm/v6/testing/util"
)
func TestAllDataColumnSubnets(t *testing.T) {
t.Run("returns nil when no validators tracked", func(t *testing.T) {
// Service with no tracked validators
svc := &Service{
ctx: t.Context(),
trackedValidatorsCache: cache.NewTrackedValidatorsCache(),
}
result := svc.allDataColumnSubnets(primitives.Slot(0))
assert.Equal(t, true, len(result) == 0, "Expected nil or empty map when no validators are tracked")
})
t.Run("returns all subnets logic test", func(t *testing.T) {
params.SetupTestConfigCleanup(t)
ctx := t.Context()
db := dbtest.SetupDB(t)
// Create and save genesis state
genesisState, _ := util.DeterministicGenesisState(t, 64)
require.NoError(t, db.SaveGenesisData(ctx, genesisState))
// Create stategen and initialize with genesis state
stateGen := stategen.New(db, doublylinkedtree.New())
_, err := stateGen.Resume(ctx, genesisState)
require.NoError(t, err)
// At least one tracked validator.
tvc := cache.NewTrackedValidatorsCache()
tvc.Set(cache.TrackedValidator{Active: true, Index: 1})
svc := &Service{
ctx: ctx,
trackedValidatorsCache: tvc,
cfg: &config{
stateGen: stateGen,
beaconDB: db,
},
}
dataColumnSidecarSubnetCount := params.BeaconConfig().DataColumnSidecarSubnetCount
result := svc.allDataColumnSubnets(0)
assert.Equal(t, dataColumnSidecarSubnetCount, uint64(len(result)))
for i := range dataColumnSidecarSubnetCount {
assert.Equal(t, true, result[i])
}
})
}


@@ -58,6 +58,7 @@ func TestSubscribe_ReceivesValidMessage(t *testing.T) {
subHandler: newSubTopicHandler(),
chainStarted: abool.New(),
}
markInitSyncComplete(t, &r)
var err error
p2pService.Digest, err = r.currentForkDigest()
require.NoError(t, err)
@@ -83,6 +84,11 @@ func TestSubscribe_ReceivesValidMessage(t *testing.T) {
}
}
func markInitSyncComplete(_ *testing.T, s *Service) {
s.initialSyncComplete = make(chan struct{})
close(s.initialSyncComplete)
}
func TestSubscribe_UnsubscribeTopic(t *testing.T) {
p2pService := p2ptest.NewTestP2P(t)
gt := time.Now()
@@ -101,6 +107,7 @@ func TestSubscribe_UnsubscribeTopic(t *testing.T) {
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
var err error
p2pService.Digest, err = r.currentForkDigest()
require.NoError(t, err)
@@ -152,6 +159,7 @@ func TestSubscribe_ReceivesAttesterSlashing(t *testing.T) {
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
topic := "/eth2/%x/attester_slashing"
var wg sync.WaitGroup
wg.Add(1)
@@ -205,6 +213,7 @@ func TestSubscribe_ReceivesProposerSlashing(t *testing.T) {
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
topic := "/eth2/%x/proposer_slashing"
var wg sync.WaitGroup
wg.Add(1)
@@ -253,6 +262,8 @@ func TestSubscribe_HandlesPanic(t *testing.T) {
subHandler: newSubTopicHandler(),
chainStarted: abool.New(),
}
markInitSyncComplete(t, &r)
var err error
p.Digest, err = r.currentForkDigest()
require.NoError(t, err)
@@ -292,25 +303,33 @@ func TestRevalidateSubscription_CorrectlyFormatsTopic(t *testing.T) {
}
digest, err := r.currentForkDigest()
require.NoError(t, err)
subscriptions := make(map[uint64]*pubsub.Subscription, params.BeaconConfig().MaxCommitteesPerSlot)
defaultTopic := "/eth2/testing/%#x/committee%d"
params := subscribeParameters{
topicFormat: "/eth2/testing/%#x/committee%d",
digest: digest,
}
tracker := newSubnetTracker(params)
// committee index 1
fullTopic := fmt.Sprintf(defaultTopic, digest, 1) + r.cfg.p2p.Encoding().ProtocolSuffix()
c1 := uint64(1)
fullTopic := params.fullTopic(c1, r.cfg.p2p.Encoding().ProtocolSuffix())
_, topVal := r.wrapAndReportValidation(fullTopic, r.noopValidator)
require.NoError(t, r.cfg.p2p.PubSub().RegisterTopicValidator(fullTopic, topVal))
subscriptions[1], err = r.cfg.p2p.SubscribeToTopic(fullTopic)
sub1, err := r.cfg.p2p.SubscribeToTopic(fullTopic)
require.NoError(t, err)
tracker.track(c1, sub1)
// committee index 2
fullTopic = fmt.Sprintf(defaultTopic, digest, 2) + r.cfg.p2p.Encoding().ProtocolSuffix()
c2 := uint64(2)
fullTopic = params.fullTopic(c2, r.cfg.p2p.Encoding().ProtocolSuffix())
_, topVal = r.wrapAndReportValidation(fullTopic, r.noopValidator)
err = r.cfg.p2p.PubSub().RegisterTopicValidator(fullTopic, topVal)
require.NoError(t, err)
subscriptions[2], err = r.cfg.p2p.SubscribeToTopic(fullTopic)
sub2, err := r.cfg.p2p.SubscribeToTopic(fullTopic)
require.NoError(t, err)
tracker.track(c2, sub2)
r.pruneSubscriptions(subscriptions, map[uint64]bool{2: true}, defaultTopic, digest)
r.pruneSubscriptions(tracker, map[uint64]bool{c2: true})
require.LogsDoNotContain(t, hook, "Could not unregister topic validator")
}
@@ -539,7 +558,7 @@ func TestSubscribeWithSyncSubnets_DynamicOK(t *testing.T) {
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte("pubkey"), currEpoch, []uint64{0, 1}, 10*time.Second)
digest, err := r.currentForkDigest()
assert.NoError(t, err)
r.subscribeWithParameters(subscribeParameters{
go r.subscribeWithParameters(subscribeParameters{
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
digest: digest,
getSubnetsToJoin: r.activeSyncSubnetIndices,
@@ -580,6 +599,7 @@ func TestSubscribeWithSyncSubnets_DynamicSwitchFork(t *testing.T) {
chainStarted: abool.New(),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
// Empty cache at the end of the test.
defer cache.SyncSubnetIDs.EmptyAllCaches()
cache.SyncSubnetIDs.AddSyncCommitteeSubnets([]byte("pubkey"), 0, []uint64{0, 1}, 10*time.Second)
@@ -589,12 +609,11 @@ func TestSubscribeWithSyncSubnets_DynamicSwitchFork(t *testing.T) {
require.Equal(t, [4]byte(params.BeaconConfig().DenebForkVersion), version)
require.Equal(t, params.BeaconConfig().DenebForkEpoch, e)
sp := subscribeToSubnetsParameters{
subscriptionBySubnet: make(map[uint64]*pubsub.Subscription),
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
digest: digest,
getSubnetsToJoin: r.activeSyncSubnetIndices,
}
sp := newSubnetTracker(subscribeParameters{
topicFormat: p2p.SyncCommitteeSubnetTopicFormat,
digest: digest,
getSubnetsToJoin: r.activeSyncSubnetIndices,
})
require.NoError(t, r.subscribeToSubnets(sp))
assert.Equal(t, 2, len(r.cfg.p2p.PubSub().GetTopics()))
topicMap := map[string]bool{}
@@ -697,6 +716,7 @@ func TestSubscribe_ReceivesLCOptimisticUpdate(t *testing.T) {
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
topic := p2p.LightClientOptimisticUpdateTopicFormat
var wg sync.WaitGroup
wg.Add(1)
@@ -764,6 +784,7 @@ func TestSubscribe_ReceivesLCFinalityUpdate(t *testing.T) {
lcStore: lightClient.NewLightClientStore(d, &p2ptest.FakeP2P{}, new(event.Feed)),
subHandler: newSubTopicHandler(),
}
markInitSyncComplete(t, &r)
topic := p2p.LightClientFinalityUpdateTopicFormat
var wg sync.WaitGroup
wg.Add(1)


@@ -427,6 +427,7 @@ func TestService_ValidateBlsToExecutionChange(t *testing.T) {
cw := startup.NewClockSynchronizer()
opts := []Option{WithClockWaiter(cw)}
svc := NewService(ctx, append(opts, tt.svcopts...)...)
markInitSyncComplete(t, svc)
svc, tt.args.topic = tt.setupSvc(svc, tt.args.msg, tt.args.topic)
go svc.Start()
if tt.clock == nil {


@@ -10,7 +10,6 @@ import (
"github.com/OffchainLabs/prysm/v6/beacon-chain/core/feed/operation"
"github.com/OffchainLabs/prysm/v6/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v6/beacon-chain/verification"
"github.com/OffchainLabs/prysm/v6/config/features"
fieldparams "github.com/OffchainLabs/prysm/v6/config/fieldparams"
"github.com/OffchainLabs/prysm/v6/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v6/consensus-types/primitives"
@@ -49,7 +48,7 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationReject, p2p.ErrInvalidTopic
}
// Decode the message.
// Decode the message, reject if it fails.
m, err := s.decodePubsubMessage(msg)
if err != nil {
log.WithError(err).Error("Failed to decode message")
@@ -69,20 +68,6 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
return pubsub.ValidationReject, errors.Wrap(err, "roDataColumn conversion failure")
}
// Voluntary ignore messages (for debugging purposes).
dataColumnsIgnoreSlotMultiple := features.Get().DataColumnsIgnoreSlotMultiple
blockSlot := uint64(roDataColumn.SignedBlockHeader.Header.Slot)
if dataColumnsIgnoreSlotMultiple != 0 && blockSlot%dataColumnsIgnoreSlotMultiple == 0 {
log.WithFields(logrus.Fields{
"slot": blockSlot,
"columnIndex": roDataColumn.Index,
"blockRoot": fmt.Sprintf("%#x", roDataColumn.BlockRoot()),
}).Warning("Voluntary ignore data column sidecar gossip")
return pubsub.ValidationIgnore, err
}
// Compute a batch of only one data column sidecar.
roDataColumns := []blocks.RODataColumn{roDataColumn}


@@ -409,6 +409,7 @@ func TestService_ValidateSyncCommitteeMessage(t *testing.T) {
svc := NewService(ctx, append(opts, tt.svcopts...)...)
var clock *startup.Clock
svc, tt.args.topic, clock = tt.setupSvc(svc, tt.args.msg, tt.args.topic)
markInitSyncComplete(t, svc)
go svc.Start()
require.NoError(t, cw.SetClock(clock))
svc.verifierWaiter = verification.NewInitializerWaiter(cw, chainService.ForkChoiceStore, svc.cfg.stateGen)


@@ -856,7 +856,7 @@ func TestService_ValidateSyncContributionAndProof(t *testing.T) {
svc, clock = tt.setupSvc(svc, tt.args.msg)
require.NoError(t, cw.SetClock(clock))
svc.verifierWaiter = verification.NewInitializerWaiter(cw, chainService.ForkChoiceStore, svc.cfg.stateGen)
markInitSyncComplete(t, svc)
go svc.Start()
marshalledObj, err := tt.args.msg.MarshalSSZ()
assert.NoError(t, err)


@@ -0,0 +1,2 @@
### Changed
- Updated outdated documentation links for Web3Signer and Why Bazel.


@@ -0,0 +1,2 @@
### Ignored
- Removed a duplicate test case in `container/leaky-bucket/collector_test.go` to reduce redundancy. (#15672)


@@ -0,0 +1,3 @@
### Changed
- Updated consensus spec from v1.6.0-alpha.5 to v1.6.0-alpha.6.


@@ -0,0 +1,3 @@
### Changed
- Deprecated the /prysm/v1/beacon/blobs endpoint and added an error for the post-Fulu fork.


@@ -0,0 +1,7 @@
### Added
- Added Fulu types for web3signer.
### Changed
- Changed the validatorpb.SignRequest_AggregateAttestationAndProof signing type to use AggregateAttestationAndProofV2 on web3signer.


@@ -0,0 +1,3 @@
### Changed
- Switched the validator client REST default for block submission from JSON to SSZ; a JSON fallback will be attempted.


@@ -0,0 +1,2 @@
### Fixed
- Started topic-based peer discovery before initial sync completes so that needed column subnets have peer coverage during range sync.


@@ -0,0 +1,2 @@
### Ignored
- Unwedge ethspecify.


@@ -0,0 +1,3 @@
### Fixed
- Fixed getBlockAttestationsV2 to return [] instead of null when data is empty.


@@ -0,0 +1,3 @@
### Fixed
- Fixed empty directories not being deleted when using blob-storage-layout=by-epoch.


@@ -0,0 +1,3 @@
### Fixed
- Fixed next epoch proposer duties in Fulu by advancing the state to the beginning of the current epoch.


@@ -0,0 +1,3 @@
### Ignored
- Removed broadcast of BLS changes at the Capella fork.

Some files were not shown because too many files have changed in this diff.