Compare commits

...

6 Commits

Author SHA1 Message Date
satushh
5933a1f876 copy data once to be safe 2025-12-10 04:27:30 +00:00
Potuz
a3210157e2 Fix TOCTOU race validating attestations (#16105)
A TOCTOU (time-of-check to time-of-use) issue was reported by EF security: two concurrent
validations of the same attestation may both pass the "seen" check and both forward it,
while the spec says only the first one should be forwarded.
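The fix (visible in the sync validation diffs below) folds the "have we seen this attestation?" check into the same locked update that marks it as seen, and only the caller that performs the first insertion is allowed to forward. A minimal sketch of that check-and-set pattern, using a placeholder cache type rather than Prysm's actual LRU caches:

```go
package sketch

import "sync"

// seenCache is a stand-in for Prysm's seen-attestation caches (illustration only).
type seenCache struct {
	mu   sync.Mutex
	seen map[string]bool
}

// markSeen records the key and reports whether this caller was the first to do so.
// Because the check and the insert happen under the same lock, two concurrent
// validations of the same attestation cannot both observe "not seen".
func (c *seenCache) markSeen(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.seen[key] {
		return false
	}
	c.seen[key] = true
	return true
}
```

A separate "has seen" lookup can still be kept as an early exit, but the forwarding decision has to rely on the return value of this single locked mark-and-check.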
2025-12-09 19:26:05 +00:00
satushh
1536d59e30 Remove unnecessary copy in Eth1DataHasEnoughSupport (#16118)

**What type of PR is this?**

Other

**What does this PR do? Why is it needed?**

- Remove the unnecessary `Copy()` call in `Eth1DataHasEnoughSupport`.
- `data.Copy()` was called on every iteration of the vote-counting loop, even though `AreEth1DataEqual` only reads the data and never mutates it.
- Additionally, `Eth1DataVotes()` already returns copies of all votes, so the state is protected regardless (see the sketch below).
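
A simplified sketch of the resulting shape of the function, combined with the follow-up commit that copies the candidate once up front (placeholder types, not the real `ethpb.Eth1Data` API, and a simplified majority threshold; the actual change is in the first diff below):

```go
package sketch

// eth1Data stands in for ethpb.Eth1Data (illustration only).
type eth1Data struct {
	BlockHash    [32]byte
	DepositCount uint64
}

func (d *eth1Data) copyData() *eth1Data {
	c := *d
	return &c
}

func areEqual(a, b *eth1Data) bool {
	return a.BlockHash == b.BlockHash && a.DepositCount == b.DepositCount
}

// hasEnoughSupport mirrors the shape of Eth1DataHasEnoughSupport after this PR:
// the candidate is copied once and compared read-only against every vote,
// instead of taking a fresh copy on each iteration.
func hasEnoughSupport(votes []*eth1Data, data *eth1Data, votingPeriodSlots int) bool {
	dataCopy := data.copyData() // one defensive copy instead of one per loop iteration
	count := 0
	for _, vote := range votes {
		if areEqual(vote, dataCopy) {
			count++
		}
	}
	return count*2 > votingPeriodSlots // simplified majority check
}
```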

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-09 19:02:36 +00:00
satushh
11e46a4560 Optimise for loop of MigrateToCold (#16101)

**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

The for loop in the `MigrateToCold` function was brute force in nature: it visited every
single slot between the old and new finalized slots. It can be improved by jumping directly
by `slotsPerArchivedPoint` instead.

```go
for slot := oldFSlot; slot < fSlot; slot++ {
  ...
   if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
```
There is no need to do the modulo check for every single slot. We can find the correct
starting point once and then jump by `slotsPerArchivedPoint` at a time, as sketched below.
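
A small sketch of the starting-point computation (plain `uint64` instead of `primitives.Slot`; the names are illustrative, and the rounding expression itself appears in the `MigrateToCold` diff below):

```go
package sketch

// nextArchivedPoint returns the first archived-point slot >= oldFinalizedSlot,
// skipping slot 0. Rounding up once lets the migration loop step by
// slotsPerArchivedPoint instead of testing slot % slotsPerArchivedPoint on
// every slot between the old and the new finalized slots.
func nextArchivedPoint(oldFinalizedSlot, slotsPerArchivedPoint uint64) uint64 {
	if oldFinalizedSlot == 0 {
		return slotsPerArchivedPoint
	}
	return (oldFinalizedSlot + slotsPerArchivedPoint - 1) / slotsPerArchivedPoint * slotsPerArchivedPoint
}
```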

**Which issue(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2025-12-09 17:15:52 +00:00
Snezhkko
5a2e51b894 fix(rpc): incorrect constructor return type (#16084)
The constructor `NewStateRootNotFoundError` incorrectly returned `StateNotFoundError`. This
prevented handlers that rely on `errors.As` with a `lookup.StateRootNotFoundError` target
from matching and mapping the error to HTTP 404. The function now constructs and returns
`StateRootNotFoundError`, restoring the intended behavior for "state root not found" cases.
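
A minimal illustration of why the wrong return type breaks `errors.As` matching (simplified stand-ins, not the actual `lookup` package types):

```go
package sketch

import (
	"errors"
	"fmt"
)

// stateNotFoundError is the type the buggy constructor used to build and return.
type stateNotFoundError struct{ message string }

func (e stateNotFoundError) Error() string { return e.message }

type stateRootNotFoundError struct{ message string }

func (e stateRootNotFoundError) Error() string { return e.message }

// After the fix, the constructor builds the concrete type the handlers actually
// check for; before, errors.As below never matched and the error could not be
// mapped to HTTP 404.
func newStateRootNotFoundError(stateRootsSize int) stateRootNotFoundError {
	return stateRootNotFoundError{
		message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
	}
}

func isStateRootNotFound(err error) bool {
	var target stateRootNotFoundError
	return errors.As(err, &target) // only succeeds when the concrete type matches
}
```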

---------

Co-authored-by: Radosław Kapka <rkapka@wp.pl>
2025-12-09 13:56:00 +00:00
Potuz
d20ec4c7a1 Track the dependent root of the latest finalized checkpoint (#16103)
This PR adds the dependent root of the latest finalized checkpoint to forkchoice, since the
node holding that root is typically pruned upon finalization.
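A condensed sketch of the fallback this enables (placeholder types; the real logic is in the `DependentRootForEpoch` and `prune` diffs below): when the walk would step to the finalized node's parent, which has already been pruned, the value recorded at pruning time is returned instead.

```go
package sketch

type node struct {
	slot   uint64
	root   [32]byte
	parent *node
}

type store struct {
	finalizedDependentRoot [32]byte // recorded during prune() before the parent is dropped
}

// dependentRoot steps to the parent when the node's slot is not before the epoch
// start; if the parent was already pruned, it falls back to the dependent root
// saved at finalization instead of failing.
func (s *store) dependentRoot(n *node, epochStartSlot uint64) [32]byte {
	if n.slot >= epochStartSlot {
		if n.parent != nil {
			return n.parent.root
		}
		return s.finalizedDependentRoot
	}
	return n.root
}
```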
2025-12-08 16:16:32 +00:00
18 changed files with 206 additions and 83 deletions

View File

```diff
@@ -58,9 +58,10 @@ func AreEth1DataEqual(a, b *ethpb.Eth1Data) bool {
 // votes to see if they match the eth1data.
 func Eth1DataHasEnoughSupport(beaconState state.ReadOnlyBeaconState, data *ethpb.Eth1Data) (bool, error) {
 	voteCount := uint64(0)
+	dataCopy := data.Copy()
 	for _, vote := range beaconState.Eth1DataVotes() {
-		if AreEth1DataEqual(vote, data.Copy()) {
+		if AreEth1DataEqual(vote, dataCopy) {
 			voteCount++
 		}
 	}
```

View File

```diff
@@ -642,8 +642,12 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
 	if !ok || node == nil {
 		return [32]byte{}, ErrNilNode
 	}
-	if slots.ToEpoch(node.slot) >= epoch && node.parent != nil {
-		node = node.parent
+	if slots.ToEpoch(node.slot) >= epoch {
+		if node.parent != nil {
+			node = node.parent
+		} else {
+			return f.store.finalizedDependentRoot, nil
+		}
 	}
 	return node.root, nil
 }
```

View File

```diff
@@ -212,6 +212,9 @@ func (s *Store) prune(ctx context.Context) error {
 		return nil
 	}
+	// Save the new finalized dependent root because it will be pruned
+	s.finalizedDependentRoot = finalizedNode.parent.root
 	// Prune nodeByRoot starting from root
 	if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
 		return err
```
View File

```diff
@@ -465,6 +465,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	ctx := t.Context()
 	f := setup(1, 1)
+	// Insert a block in slot 32
 	state, blk, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk))
@@ -475,6 +476,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, [32]byte{})
+	// Insert a block in slot 33
 	state, blk1, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'b'}, blk.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk1))
@@ -488,7 +490,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, [32]byte{})
-	// Insert a block for the next epoch (missed slot 0)
+	// Insert a block for the next epoch (missed slot 0), slot 65
 	state, blk2, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'c'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
@@ -509,6 +511,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, blk1.Root())
+	// Insert a block at slot 66
 	state, blk3, err := prepareForkchoiceState(ctx, 2*params.BeaconConfig().SlotsPerEpoch+2, [32]byte{'d'}, blk2.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk3))
@@ -533,8 +536,11 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	dependent, err = f.DependentRoot(1)
 	require.NoError(t, err)
 	require.Equal(t, [32]byte{}, dependent)
+	dependent, err = f.DependentRoot(2)
+	require.NoError(t, err)
+	require.Equal(t, blk1.Root(), dependent)
-	// Insert a block for next epoch (slot 0 present)
+	// Insert a block for the next epoch, slot 96 (descends from finalized at slot 33)
 	state, blk4, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch, [32]byte{'e'}, blk1.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk4))
@@ -551,6 +557,7 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, dependent, blk1.Root())
+	// Insert a block at slot 97
 	state, blk5, err := prepareForkchoiceState(ctx, 3*params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'f'}, blk4.Root(), params.BeaconConfig().ZeroHash, 1, 1)
 	require.NoError(t, err)
 	require.NoError(t, f.InsertNode(ctx, state, blk5))
@@ -600,12 +607,16 @@ func TestStore_TargetRootForEpoch(t *testing.T) {
 	require.NoError(t, err)
 	require.Equal(t, target, blk1.Root())
-	// Prune finalization
+	// Prune finalization, finalize the block at slot 96
 	s.finalizedCheckpoint.Root = blk4.Root()
 	require.NoError(t, s.prune(ctx))
 	target, err = f.TargetRootForEpoch(blk4.Root(), 3)
 	require.NoError(t, err)
 	require.Equal(t, blk4.Root(), target)
+	// Dependent root for the finalized block should be the root of the pruned block at slot 33
+	dependent, err = f.DependentRootForEpoch(blk4.Root(), 3)
+	require.NoError(t, err)
+	require.Equal(t, blk1.Root(), dependent)
 }
 func TestStore_DependentRootForEpoch(t *testing.T) {
```

View File

```diff
@@ -31,6 +31,7 @@ type Store struct {
 	proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
 	previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
 	previousProposerBoostScore uint64 // previous proposer boosted root score.
+	finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
 	committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
 	treeRootNode *Node // the root node of the store tree.
 	headNode *Node // last head Node
```

View File

```diff
@@ -82,8 +82,8 @@ type StateRootNotFoundError struct {
 }
 // NewStateRootNotFoundError creates a new error instance.
-func NewStateRootNotFoundError(stateRootsSize int) StateNotFoundError {
-	return StateNotFoundError{
+func NewStateRootNotFoundError(stateRootsSize int) StateRootNotFoundError {
+	return StateRootNotFoundError{
 		message: fmt.Sprintf("state root not found in the last %d state roots", stateRootsSize),
 	}
 }
```

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/sirupsen/logrus"
@@ -37,76 +38,84 @@ func (s *State) MigrateToCold(ctx context.Context, fRoot [32]byte) error {
return nil
}
// Start at previous finalized slot, stop at current finalized slot (it will be handled in the next migration).
// If the slot is on archived point, save the state of that slot to the DB.
for slot := oldFSlot; slot < fSlot; slot++ {
// Calculate the first archived point slot >= oldFSlot (but > 0).
// This avoids iterating through every slot and only visits archived points directly.
var startSlot primitives.Slot
if oldFSlot == 0 {
startSlot = s.slotsPerArchivedPoint
} else {
// Round up to the next archived point
startSlot = (oldFSlot + s.slotsPerArchivedPoint - 1) / s.slotsPerArchivedPoint * s.slotsPerArchivedPoint
}
// Start at the first archived point after old finalized slot, stop before current finalized slot.
// Jump directly between archived points.
for slot := startSlot; slot < fSlot; slot += s.slotsPerArchivedPoint {
if ctx.Err() != nil {
return ctx.Err()
}
if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
cached, exists, err := s.epochBoundaryStateCache.getBySlot(slot)
if err != nil {
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
}
var aRoot [32]byte
var aState state.BeaconState
// When the epoch boundary state is not in cache due to skip slot scenario,
// we have to regenerate the state which will represent epoch boundary.
// By finding the highest available block below epoch boundary slot, we
// generate the state for that block root.
if exists {
aRoot = cached.root
aState = cached.state
} else {
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
if err != nil {
return fmt.Errorf("could not get epoch boundary state for slot %d", slot)
return err
}
var aRoot [32]byte
var aState state.BeaconState
// When the epoch boundary state is not in cache due to skip slot scenario,
// we have to regenerate the state which will represent epoch boundary.
// By finding the highest available block below epoch boundary slot, we
// generate the state for that block root.
if exists {
aRoot = cached.root
aState = cached.state
} else {
_, roots, err := s.beaconDB.HighestRootsBelowSlot(ctx, slot)
// Given the block has been finalized, the db should not have more than one block in a given slot.
// We should error out when this happens.
if len(roots) != 1 {
return errUnknownBlock
}
aRoot = roots[0]
// There's no need to generate the state if the state already exists in the DB.
// We can skip saving the state.
if !s.beaconDB.HasState(ctx, aRoot) {
aState, err = s.StateByRoot(ctx, aRoot)
if err != nil {
return err
}
// Given the block has been finalized, the db should not have more than one block in a given slot.
// We should error out when this happens.
if len(roots) != 1 {
return errUnknownBlock
}
aRoot = roots[0]
// There's no need to generate the state if the state already exists in the DB.
// We can skip saving the state.
if !s.beaconDB.HasState(ctx, aRoot) {
aState, err = s.StateByRoot(ctx, aRoot)
if err != nil {
return err
}
}
}
if s.beaconDB.HasState(ctx, aRoot) {
// If you are migrating a state and its already part of the hot state cache saved to the db,
// you can just remove it from the hot state cache as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
// Break here is ok.
break
}
}
s.saveHotStateDB.lock.Unlock()
continue
}
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
return err
}
log.WithFields(
logrus.Fields{
"slot": aState.Slot(),
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
}).Info("Saved state in DB")
}
if s.beaconDB.HasState(ctx, aRoot) {
// If you are migrating a state and its already part of the hot state cache saved to the db,
// you can just remove it from the hot state cache as it becomes redundant.
s.saveHotStateDB.lock.Lock()
roots := s.saveHotStateDB.blockRootsOfSavedStates
for i := range roots {
if aRoot == roots[i] {
s.saveHotStateDB.blockRootsOfSavedStates = append(roots[:i], roots[i+1:]...)
// There shouldn't be duplicated roots in `blockRootsOfSavedStates`.
// Break here is ok.
break
}
}
s.saveHotStateDB.lock.Unlock()
continue
}
if err := s.beaconDB.SaveState(ctx, aState, aRoot); err != nil {
return err
}
log.WithFields(
logrus.Fields{
"slot": aState.Slot(),
"root": hex.EncodeToString(bytesutil.Trunc(aRoot[:])),
}).Info("Saved state in DB")
}
// Update finalized info in memory.

View File

```diff
@@ -265,7 +265,7 @@ func (s *Service) processVerifiedAttestation(
 	if key, err := generateUnaggregatedAttCacheKey(broadcastAtt); err != nil {
 		log.WithError(err).Error("Failed to generate cache key for attestation tracking")
 	} else {
-		s.setSeenUnaggregatedAtt(key)
+		_ = s.setSeenUnaggregatedAtt(key)
 	}
 	valCount, err := helpers.ActiveValidatorCount(ctx, preState, slots.ToEpoch(data.Slot))
@@ -320,7 +320,7 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
 		return
 	}
-	s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
+	_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
 	if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
 		log.WithError(err).Debug("Could not broadcast aggregated attestation")
```

View File

```diff
@@ -137,7 +137,9 @@ func (s *Service) validateAggregateAndProof(ctx context.Context, pid peer.ID, ms
 		return validationRes, err
 	}
-	s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex())
+	if first := s.setAggregatorIndexEpochSeen(data.Target.Epoch, m.AggregateAttestationAndProof().GetAggregatorIndex()); !first {
+		return pubsub.ValidationIgnore, nil
+	}
 	msg.ValidatorData = m
@@ -265,13 +267,19 @@ func (s *Service) hasSeenAggregatorIndexEpoch(epoch primitives.Epoch, aggregator
 }
 // Set aggregate's aggregator index target epoch as seen.
-func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) {
+// Returns true if this is the first time seeing this aggregator index and epoch.
+func (s *Service) setAggregatorIndexEpochSeen(epoch primitives.Epoch, aggregatorIndex primitives.ValidatorIndex) bool {
 	b := append(bytesutil.Bytes32(uint64(epoch)), bytesutil.Bytes32(uint64(aggregatorIndex))...)
 	s.seenAggregatedAttestationLock.Lock()
 	defer s.seenAggregatedAttestationLock.Unlock()
+	_, seen := s.seenAggregatedAttestationCache.Get(string(b))
+	if seen {
+		return false
+	}
 	s.seenAggregatedAttestationCache.Add(string(b), true)
+	return true
 }
 // This validates the bitfield is correct and aggregator's index in state is within the beacon committee.
```

View File

@@ -801,3 +801,27 @@ func TestValidateAggregateAndProof_RejectWhenAttEpochDoesntEqualTargetEpoch(t *t
assert.NotNil(t, err)
assert.Equal(t, pubsub.ValidationReject, res)
}
func Test_SetAggregatorIndexEpochSeen(t *testing.T) {
db := dbtest.SetupDB(t)
p := p2ptest.NewTestP2P(t)
r := &Service{
cfg: &config{
p2p: p,
beaconDB: db,
},
seenAggregatedAttestationCache: lruwrpr.New(10),
}
aggIndex := primitives.ValidatorIndex(42)
epoch := primitives.Epoch(7)
require.Equal(t, false, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
first := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
require.Equal(t, true, first)
require.Equal(t, true, r.hasSeenAggregatorIndexEpoch(epoch, aggIndex))
second := r.setAggregatorIndexEpochSeen(epoch, aggIndex)
require.Equal(t, false, second)
}

View File

```diff
@@ -104,7 +104,8 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 	}
 	if !s.slasherEnabled {
-		// Verify this the first attestation received for the participating validator for the slot.
+		// Verify this the first attestation received for the participating validator for the slot. This verification is here to return early if we've already seen this attestation.
+		// This verification is carried again later after all other validations to avoid TOCTOU issues.
 		if s.hasSeenUnaggregatedAtt(attKey) {
 			return pubsub.ValidationIgnore, nil
 		}
@@ -228,7 +229,10 @@ func (s *Service) validateCommitteeIndexBeaconAttestation(
 		Data: eventData,
 	})
-	s.setSeenUnaggregatedAtt(attKey)
+	if first := s.setSeenUnaggregatedAtt(attKey); !first {
+		// Another concurrent validation processed the same attestation meanwhile
+		return pubsub.ValidationIgnore, nil
+	}
 	// Attach final validated attestation to the message for further pipeline use
 	msg.ValidatorData = attForValidation
@@ -385,11 +389,16 @@ func (s *Service) hasSeenUnaggregatedAtt(key string) bool {
 }
 // Set an incoming attestation as seen for the participating validator for the slot.
-func (s *Service) setSeenUnaggregatedAtt(key string) {
+// Returns false if the attestation was already seen.
+func (s *Service) setSeenUnaggregatedAtt(key string) bool {
 	s.seenUnAggregatedAttestationLock.Lock()
 	defer s.seenUnAggregatedAttestationLock.Unlock()
+	_, seen := s.seenUnAggregatedAttestationCache.Get(key)
+	if seen {
+		return false
+	}
 	s.seenUnAggregatedAttestationCache.Add(key, true)
+	return true
 }
 // hasBlockAndState returns true if the beacon node knows about a block and associated state in the
```

View File

@@ -499,6 +499,10 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 0},
AggregationBits: bitfield.Bitlist{0b1001},
}
s3c0a0 := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 3, CommitteeIndex: 0},
AggregationBits: bitfield.Bitlist{0b1001},
}
t.Run("empty cache", func(t *testing.T) {
key := generateKey(t, s0c0a0)
@@ -506,26 +510,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
t.Run("ok", func(t *testing.T) {
key := generateKey(t, s0c0a0)
s.setSeenUnaggregatedAtt(key)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
})
t.Run("already seen", func(t *testing.T) {
key := generateKey(t, s3c0a0)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
first = s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, false, first)
})
t.Run("different slot", func(t *testing.T) {
key1 := generateKey(t, s1c0a0)
key2 := generateKey(t, s2c0a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different committee index", func(t *testing.T) {
key1 := generateKey(t, s0c1a0)
key2 := generateKey(t, s0c2a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different bit", func(t *testing.T) {
key1 := generateKey(t, s0c0a1)
key2 := generateKey(t, s0c0a2)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("0 bits set is considered not seen", func(t *testing.T) {
a := &ethpb.Attestation{AggregationBits: bitfield.Bitlist{0b1000}}
@@ -576,6 +593,11 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
CommitteeId: 0,
AttesterIndex: 0,
}
s3c0a0 := &ethpb.SingleAttestation{
Data: &ethpb.AttestationData{Slot: 2},
CommitteeId: 0,
AttesterIndex: 0,
}
t.Run("empty cache", func(t *testing.T) {
key := generateKey(t, s0c0a0)
@@ -583,26 +605,39 @@ func TestService_setSeenUnaggregatedAtt(t *testing.T) {
})
t.Run("ok", func(t *testing.T) {
key := generateKey(t, s0c0a0)
s.setSeenUnaggregatedAtt(key)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
})
t.Run("different slot", func(t *testing.T) {
key1 := generateKey(t, s1c0a0)
key2 := generateKey(t, s2c0a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("already seen", func(t *testing.T) {
key := generateKey(t, s3c0a0)
first := s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, true, first)
first = s.setSeenUnaggregatedAtt(key)
assert.Equal(t, true, s.hasSeenUnaggregatedAtt(key))
assert.Equal(t, false, first)
})
t.Run("different committee index", func(t *testing.T) {
key1 := generateKey(t, s0c1a0)
key2 := generateKey(t, s0c2a0)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("different attester", func(t *testing.T) {
key1 := generateKey(t, s0c0a1)
key2 := generateKey(t, s0c0a2)
s.setSeenUnaggregatedAtt(key1)
first := s.setSeenUnaggregatedAtt(key1)
assert.Equal(t, false, s.hasSeenUnaggregatedAtt(key2))
assert.Equal(t, true, first)
})
t.Run("single attestation is considered not seen", func(t *testing.T) {
a := &ethpb.AttestationElectra{}

View File

@@ -0,0 +1,3 @@
## Fixed
- incorrect constructor return type [#16084](https://github.com/OffchainLabs/prysm/pull/16084)

View File

@@ -0,0 +1,3 @@
### Fixed
- Fixed possible race when validating two attestations at the same time.

View File

@@ -0,0 +1,3 @@
### Added
- Track the dependent root of the latest finalized checkpoint in forkchoice.

View File

@@ -0,0 +1,3 @@
### Changed
- In Eth1DataHasEnoughSupport do copy once to be sure of safety

View File

@@ -0,0 +1,3 @@
### Removed
- Unnecessary copy is removed from Eth1DataHasEnoughSupport

View File

@@ -0,0 +1,3 @@
### Changed
- Optimise migratetocold by not doing brute force for loop