Compare commits

..

8 Commits

Author SHA1 Message Date
satushh
9692993b6a remove redundant justified checkpoint update in pullTips 2025-12-19 23:07:40 +05:30
Manu NALEPA
2ac30f5ce6 Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them. (#16153)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When a (potentially aggregated) attestation is received **before** the
block being voted for, Prysm queues this attestation and then processes
the queue once the block has been received.

This behavior is consistent with the [Phase0 specification
](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id).

> [IGNORE] The block being voted for
(attestation.data.beacon_block_root) has been seen (via gossip or
non-gossip sources) (a client MAY queue attestations for processing once
block is retrieved).

Once the block being voted for is processed, previously queued
(potentially aggregated) attestations are then processed and broadcast.

Processing (potentially aggregated) attestations takes non-negligible
time. For this reason, (potentially aggregated) attestations are
deduplicated before being introduced into the pending queue, to avoid
processing duplicates later.

Before this PR, two aggregated attestations were considered duplicates
if all of the following conditions were met:
1. Attestations have the same version,
2. **Attestations have the same aggregator index (i.e., the same
validator aggregated them)**,
3. Attestations have the same slot,
4. Attestations have the same committee index, and
5. Attestations have the same aggregation bits.

Aggregated attestations are then broadcast.
Their final purpose is to be packed into the next block by the next
proposer. When packing attestations, the aggregator index is no longer
used.

This pull request modifies the deduplication function used in the
pending aggregated attestations queue so that multiple aggregated
attestations differing only by the aggregator index are considered
equivalent (removing `2.` from the previous list), as sketched below.
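
For illustration, here is a minimal sketch of the idea, using hypothetical names and types rather than Prysm's actual code: the deduplication key for pending aggregates simply no longer includes the aggregator index.

```go
package pendingqueue

// aggregateKey is a hypothetical deduplication key for pending aggregated
// attestations. After this change, two aggregates producing the same key are
// treated as equivalent even if they were built by different aggregators.
type aggregateKey struct {
	version         int
	slot            uint64
	committeeIndex  uint64
	aggregationBits string // serialized bitfield
	// aggregatorIndex is intentionally no longer part of the key.
}

// keyOf builds the deduplication key from the fields that still matter.
func keyOf(version int, slot, committeeIndex uint64, bits []byte) aggregateKey {
	return aggregateKey{
		version:         version,
		slot:            slot,
		committeeIndex:  committeeIndex,
		aggregationBits: string(bits),
	}
}
```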

As a consequence, the number of aggregated attestations introduced into
the pending queue is reduced from one per aggregator to, in the best
case,
[MAX_COMMITTEES_PER_SLOT=64](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md#misc-1).

Also, only a single aggregated attestation for a given version, slot,
committee index and aggregation bits will be re-broadcast. This is
correct behavior, since no data to be included in a block is lost.
(It should even slightly reduce the total networking volume.)

**How to test**:
1. Start a beacon node (preferably on a slow computer) from a
checkpoint.
2. Filter logs containing `Synced new block` and `Verified and saved
pending attestations to pool`. (You can pipe logs into `grep -E "Synced
new block|Verified and saved pending attestations to pool"`.)

- In `Synced new block` logs, monitor the `sinceSlotStartTime` value.
It should decrease monotonically.
- In `Verified and saved pending attestations to pool` logs, monitor the
`pendingAggregateAttAndProofCount` value. It should be an "honest"
value. "Honest" is not really quantifiable here, since it depends on the
aggregators, but it is likely to be less than
`5*MAX_COMMITTEES_PER_SLOT=320`.

**Which issues(s) does this PR fix?**

Partially fixes:
- https://github.com/OffchainLabs/prysm/issues/16160

**Other notes for review**
Please read commit by commit, with commit messages.
The important commit is b748c04a67.

**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-19 14:05:50 +00:00
Manu NALEPA
7418c00ad6 validateDataColumn: Remove error logs. (#16157)
**What type of PR is this?**
Other

**What does this PR do? Why is it needed?**
When we receive data column sidecars via gossip and a sidecar does not
respect the validation rules, a scary ERROR log is displayed. We can't
do anything about it, since the error comes from an invalid incoming
sidecar, so there is no need to print an ERROR message.

Note: As with all REJECTED gossip messages, a DEBUG log is also always
displayed.

Example of ERROR log:
```
[2025-12-18 15:38:26.46] ERROR sync: Failed to decode message error=invalid ssz encoding. first variable element offset indexes into fixed value data
[2025-12-18 15:38:26.46] DEBUG sync: Gossip message was rejected agent=erigon/caplin error=invalid ssz encoding. first variable element offset indexes into fixed value data gossipScore=0 multiaddress=/ip4/141.147.32.105/tcp/9000 peerID=16Uiu2HAmHu88k97iBist1vJg7cPNuTjJFRARKvDF7yaH3Pv3Vmso topic=/eth2/c6ecb76c/data_column_sidecar_30/ssz_snappy
```

(After this PR, the DEBUG one will still be printed.)
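
As a rough sketch of the intended behavior (the helper and handler names below are hypothetical, not Prysm's actual validator code): the validator returns a rejection instead of logging at ERROR level, and the shared gossip path keeps emitting its DEBUG log.

```go
package gossipsync

import (
	"errors"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// decodeSidecar stands in for the real SSZ decoding and validation; hypothetical.
func decodeSidecar(msg []byte) error {
	if len(msg) == 0 {
		return errors.New("invalid ssz encoding")
	}
	return nil
}

// validateDataColumnSketch rejects invalid sidecars without an ERROR log.
// The shared gossip handler already logs rejected messages at DEBUG level.
func validateDataColumnSketch(msg []byte) (pubsub.ValidationResult, error) {
	if err := decodeSidecar(msg); err != nil {
		// Before this PR, an ERROR log was also emitted here.
		return pubsub.ValidationReject, err
	}
	return pubsub.ValidationAccept, nil
}
```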

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 16:18:02 +00:00
james-prysm
66342655fd throw 503 error when submit attestation and sync committee are called on syncing node + align changes to gRPC (#16152)

**What type of PR is this?**

Bug fix


**What does this PR do? Why is it needed?**

Prysm started throwing the error `Could not write response message"
error="write tcp 10.104.92.212:5052->10.104.92.196:41876: write: broken
pipe` because a validator got attestation data from a synced node and
submitted the attestation to a syncing node. When the syncing node
couldn't replay the state, the validator's context hit its deadline and
it disconnected, so when the writer finally responded it got this broken
pipe error.

This applies to `/eth/v2/beacon/pool/attestations` and
`/eth/v1/beacon/pool/sync_committees`.

The solution has two parts (see the sketch after this list):
1. Don't allow submission of an attestation if the node is syncing,
because we can't save the attestation without the state information.
2. The REST handler used to do the expensive state call before
broadcasting; it now matches gRPC, where the call happens afterward in
its own goroutine.
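
A minimal sketch of the shape of the change (the handler, `Submission` type, and service fields are illustrative stand-ins, not the exact Prysm code): return 503 while syncing, broadcast on the fast path, then save to the pool asynchronously.

```go
package beaconapi

import (
	"context"
	"log"
	"net/http"
)

// Submission is a hypothetical stand-in for a decoded attestation or
// sync committee message.
type Submission struct{}

type service struct {
	isSyncing  func() bool
	broadcast  func(ctx context.Context, s Submission) error
	saveToPool func(ctx context.Context, s Submission) error // requires state replay (slow)
}

func (s *service) submitAttestations(w http.ResponseWriter, r *http.Request, subs []Submission) {
	// Part 1: reject submissions while the node is still syncing.
	if s.isSyncing() {
		http.Error(w, "Beacon node is currently syncing", http.StatusServiceUnavailable)
		return
	}
	// Part 2: broadcast first (fast path, no state fetching needed)...
	for _, sub := range subs {
		if err := s.broadcast(r.Context(), sub); err != nil {
			log.Printf("broadcast failed: %v", err)
		}
	}
	// ...and run the expensive, state-dependent pool save afterwards, off the
	// request path, mirroring the gRPC behavior.
	go func(subs []Submission) {
		for _, sub := range subs {
			if err := s.saveToPool(context.Background(), sub); err != nil {
				log.Printf("could not save attestation: %v", err)
			}
		}
	}(subs)
	w.WriteHeader(http.StatusOK)
}
```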

Tested manually by running Kurtosis with REST validators:

```
participants:
 # Super-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   count: 2
   supernode: true
   cl_extra_params:
     - --subscribe-all-subnets
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 # Full-nodes
 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   validator_count: 63
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug

 - el_type: nethermind
   cl_type: prysm
   cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
   cl_extra_params:
     - --verbosity=debug
   vc_extra_params:
     - --enable-beacon-rest-api
     - --verbosity=debug
   validator_count: 13

additional_services:
 - dora
 - spamoor

spamoor_params:
 image: ethpandaops/spamoor:master
 max_mem: 4000
 spammers:
   - scenario: eoatx
     config:
       throughput: 200
   - scenario: blobs
     config:
       throughput: 20

network_params:
  fulu_fork_epoch: 2
  bpo_1_epoch: 8
  bpo_1_max_blobs: 21
  withdrawal_type: "0x02"
  preset: mainnet
  seconds_per_slot: 6

global_log_level: debug
```

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-18 15:07:09 +00:00
Bastin
18eca953c1 Fix lightclient p2p bug (#16151)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
This PR fixes the LC p2p `fork version not recognized` bug. It adds
object mappings for the LC types for Fulu, and fixes tests to cover such
cases in the future.
2025-12-17 20:45:06 +00:00
Manu NALEPA
8191bb5711 Construct data column sidecars from the execution layer in parallel and add metrics (#16115)
**What type of PR is this?**
Optimisation

**What does this PR do? Why is it needed?**
While constructing data column sidecars from the execution layer is very
cheap compared to reconstructing them from other data column sidecars,
it is still worthwhile to run this construction in parallel.

(**Reminder:** With `getBlobsV2`, all the cell proofs are present, but
only 64 of the 128 cells are. Recomputing the missing cells is cheap,
while reconstructing the missing proofs is expensive.)

This PR:
- adds some metrics
- ensures the construction is done in parallel (see the sketch below)
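
As a rough sketch of the parallelization pattern used in the diff further down (with a hypothetical `computeCells` helper standing in for the per-blob KZG call): each blob's cells are computed in its own `errgroup` goroutine, writing results into a pre-sized slice by index so no extra locking is needed.

```go
package peerdassketch

import "golang.org/x/sync/errgroup"

// computeCells stands in for the per-blob KZG cell computation; hypothetical.
func computeCells(blob []byte) ([][]byte, error) {
	return [][]byte{blob}, nil
}

// computeAllCells computes cells for every blob in parallel. Each goroutine
// writes to its own index of the pre-allocated result slice.
func computeAllCells(blobs [][]byte) ([][][]byte, error) {
	var wg errgroup.Group
	cellsPerBlob := make([][][]byte, len(blobs))
	for i, blob := range blobs {
		i, blob := i, blob // capture loop variables (pre-Go 1.22 style)
		wg.Go(func() error {
			cells, err := computeCells(blob)
			if err != nil {
				return err
			}
			cellsPerBlob[i] = cells
			return nil
		})
	}
	if err := wg.Wait(); err != nil {
		return nil, err
	}
	return cellsPerBlob, nil
}
```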

**Other notes for review**
Please read commit by commit

The red vertical lines mark the boundary between before and after this
pull request:
<img width="1575" height="603" alt="image"
src="https://github.com/user-attachments/assets/24811b1b-8e3c-4bf5-ac82-f920d385573a"
/>

The last commit turns the bottom-right histogram into a summary, since
it no longer makes sense to use a histogram for these values.

Please check "hide whitespace" so this PR is easier to review:
<img width="229" height="196" alt="image"
src="https://github.com/user-attachments/assets/548cb2f4-b6f4-41d1-b3b3-4d4c8554f390"
/>

Updated metrics:



Now, for every **non-missed slot** with a block containing **at least
one commitment**, we get either:
```
[2025-12-10 10:02:12.93] DEBUG sync: Constructed data column sidecars from the execution client count=118 indices=0-5,7-16,18-27,29-35,37-46,48-49,51-82,84-100,102-106,108-125,127 iteration=0 proposerIndex=855082 root=0xf8f44e7d4cbc209b2ff2796c07fcf91e85ab45eebe145c4372017a18b25bf290 slot=1928961 type=BeaconBlock
```

or:
```
[2025-12-10 10:02:25.69] DEBUG sync: No data column sidecars constructed from the execution client iteration=2 proposerIndex=1093657 root=0x64c2f6c31e369cd45f2edaf5524b64f4869e8148cd29fb84b5b8866be529eea3 slot=1928962 type=DataColumnSidecar
```
<img width="1581" height="957" alt="image"
src="https://github.com/user-attachments/assets/514dbdae-ef14-47e2-9127-502ac6d26bc0"
/>
<img width="1596" height="916" alt="image"
src="https://github.com/user-attachments/assets/343d4710-4191-49e8-98be-afe70d5ffe1c"
/>



**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
2025-12-16 16:27:32 +00:00
james-prysm
d4613aee0c skipping slot 1 sync committee check e2e (#16145)

**What type of PR is this?**

Tests

**What does this PR do? Why is it needed?**

```

--- PASS: TestEndToEnd_MinimalConfig/chain_started (0.50s)
--
--- PASS: TestEndToEnd_MinimalConfig/finished_syncing_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/all_nodes_have_same_head_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/validators_active_epoch_0 (0.00s)
--- FAIL: TestEndToEnd_MinimalConfig/validator_sync_participation_0 (0.01s)
--- PASS: TestEndToEnd_MinimalConfig/peers_connect_epoch_0 (0.11s)


```
This PR attempts to reduce flakes in validator sync participation
failures by skipping the sync committee check for the first slot after
startup.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2025-12-15 20:00:34 +00:00
terence
9fcc1a7a77 Guard KZG send with context cancellation (#16144)
Avoid sending KZG verification requests when the caller's context is
already canceled, to prevent blocking on the channel. A sketch of the
guard pattern follows.
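
A minimal sketch of the guard (the request and channel types are hypothetical, not the actual Prysm verifier API): check `ctx.Done()` in the same `select` as the send, so a canceled caller never blocks on a full channel.

```go
package kzgsketch

import "context"

// verificationRequest is a hypothetical stand-in for the real KZG request type.
type verificationRequest struct{}

// sendRequest enqueues a verification request unless the caller's context is
// already canceled (or becomes canceled while the channel is full).
func sendRequest(ctx context.Context, reqs chan<- verificationRequest, req verificationRequest) error {
	select {
	case <-ctx.Done():
		// Don't block on the channel for a caller that has already given up.
		return ctx.Err()
	case reqs <- req:
		return nil
	}
}
```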
2025-12-15 16:58:51 +00:00
51 changed files with 1318 additions and 246 deletions

View File

@@ -335,6 +335,9 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update committee cache")
}
if err := helpers.UpdateProposerIndicesInCache(ctx, st, e); err != nil {
return errors.Wrap(err, "could not update proposer index cache")
}
go func(ep primitives.Epoch) {
// Use a custom deadline here, since this method runs asynchronously.
// We ignore the parent method's context and instead create a new one
@@ -345,6 +348,26 @@ func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.Beacon
log.WithError(err).Warn("Could not update committee cache")
}
}(e)
// The latest block header is from the previous epoch
r, err := st.LatestBlockHeader().HashTreeRoot()
if err != nil {
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
// The proposer indices cache takes the target root for the previous
// epoch as key
if e > 0 {
e = e - 1
}
target, err := s.cfg.ForkChoiceStore.TargetRootForEpoch(r, e)
if err != nil {
log.WithError(err).Error("Could not update proposer index state-root map")
return nil
}
err = helpers.UpdateCachedCheckpointToStateRoot(st, &forkchoicetypes.Checkpoint{Epoch: e, Root: target})
if err != nil {
log.WithError(err).Error("Could not update proposer index state-root map")
}
return nil
}

View File

@@ -15,6 +15,7 @@ import (
statefeed "github.com/OffchainLabs/prysm/v7/beacon-chain/core/feed/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
coreTime "github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db"
"github.com/OffchainLabs/prysm/v7/beacon-chain/db/filesystem"
@@ -396,6 +397,10 @@ func (s *Service) initializeBeaconChain(
if err := helpers.UpdateCommitteeCache(ctx, genesisState, 0); err != nil {
return nil, err
}
if err := helpers.UpdateProposerIndicesInCache(ctx, genesisState, coreTime.CurrentEpoch(genesisState)); err != nil {
return nil, err
}
s.cfg.AttService.SetGenesisTime(genesisState.GenesisTime())
return genesisState, nil

View File

@@ -17,6 +17,9 @@ go_library(
"error.go",
"interfaces.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
"registration.go",
"skip_slot_cache.go",
"subnet_ids.go",
@@ -37,6 +40,7 @@ go_library(
"//beacon-chain/operations/attestations/attmap:go_default_library",
"//beacon-chain/state:go_default_library",
"//cache/lru:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
@@ -73,6 +77,7 @@ go_test(
"committee_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",
"registration_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",

beacon-chain/cache/proposer_indices.go (new file)
View File

@@ -0,0 +1,122 @@
//go:build !fuzz
package cache
import (
"sync"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
// ProposerIndicesCacheMiss tracks the number of proposerIndices requests that aren't present in the cache.
ProposerIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_miss",
Help: "The number of proposer indices requests that aren't present in the cache.",
})
// ProposerIndicesCacheHit tracks the number of proposerIndices requests that are in the cache.
ProposerIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_hit",
Help: "The number of proposer indices requests that are present in the cache.",
})
)
// ProposerIndicesCache keeps track of the proposer indices in the next two
// epochs. It is keyed by the state root of the last epoch before. That is, for
// blocks during epoch 2, for example slot 65, it will be keyed by the state
// root of slot 63 (last slot in epoch 1).
// The cache keeps two sets of indices computed, the "safe" set is computed
// right before the epoch transition into the current epoch. For example for
// epoch 2 we will compute this list after importing block 63. The "unsafe"
// version is computed an epoch in advance, for example for epoch 3, it will be
// computed after importing block 63.
//
// The cache also keeps a map from checkpoints to state roots so that one is
// able to access the proposer indices list from a checkpoint instead. The
// checkpoint is the checkpoint for the epoch previous to the requested
// proposer indices. That is, for a slot in epoch 2 (eg. 65), the checkpoint
// root would be for slot 32 if present.
type ProposerIndicesCache struct {
sync.Mutex
indices map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
rootMap map[forkchoicetypes.Checkpoint][32]byte // A map from checkpoint root to state root
}
// NewProposerIndicesCache returns a newly created cache
func NewProposerIndicesCache() *ProposerIndicesCache {
return &ProposerIndicesCache{
indices: make(map[primitives.Epoch]map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex),
rootMap: make(map[forkchoicetypes.Checkpoint][32]byte),
}
}
// ProposerIndices returns the proposer indices (safe) for the given root
func (p *ProposerIndicesCache) ProposerIndices(epoch primitives.Epoch, root [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
p.Lock()
defer p.Unlock()
inner, ok := p.indices[epoch]
if !ok {
ProposerIndicesCacheMiss.Inc()
return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}
indices, exists := inner[root]
if exists {
ProposerIndicesCacheHit.Inc()
} else {
ProposerIndicesCacheMiss.Inc()
}
return indices, exists
}
// Prune resets the ProposerIndicesCache to its initial state
func (p *ProposerIndicesCache) Prune(epoch primitives.Epoch) {
p.Lock()
defer p.Unlock()
for key := range p.indices {
if key < epoch {
delete(p.indices, key)
}
}
for key := range p.rootMap {
if key.Epoch+1 < epoch {
delete(p.rootMap, key)
}
}
}
// Set sets the proposer indices for the given root as key
func (p *ProposerIndicesCache) Set(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
p.Lock()
defer p.Unlock()
inner, ok := p.indices[epoch]
if !ok {
inner = make(map[[32]byte][fieldparams.SlotsPerEpoch]primitives.ValidatorIndex)
p.indices[epoch] = inner
}
inner[root] = indices
}
// SetCheckpoint updates the map from checkpoints to state roots
func (p *ProposerIndicesCache) SetCheckpoint(c forkchoicetypes.Checkpoint, root [32]byte) {
p.Lock()
defer p.Unlock()
p.rootMap[c] = root
}
// IndicesFromCheckpoint returns the proposer indices from a checkpoint rather than the state root
func (p *ProposerIndicesCache) IndicesFromCheckpoint(c forkchoicetypes.Checkpoint) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
p.Lock()
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
root, ok := p.rootMap[c]
p.Unlock()
if !ok {
ProposerIndicesCacheMiss.Inc()
return emptyIndices, ok
}
return p.ProposerIndices(c.Epoch+1, root)
}

View File

@@ -0,0 +1,63 @@
//go:build fuzz
// This file is used in fuzzer builds to bypass proposer indices caches.
package cache
import (
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
// ProposerIndicesCacheMiss tracks the number of proposerIndices requests that aren't present in the cache.
ProposerIndicesCacheMiss = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_miss",
Help: "The number of proposer indices requests that aren't present in the cache.",
})
// ProposerIndicesCacheHit tracks the number of proposerIndices requests that are in the cache.
ProposerIndicesCacheHit = promauto.NewCounter(prometheus.CounterOpts{
Name: "proposer_indices_cache_hit",
Help: "The number of proposer indices requests that are present in the cache.",
})
)
// FakeProposerIndicesCache is a struct with 1 queue for looking up proposer indices by root.
type FakeProposerIndicesCache struct {
}
// NewProposerIndicesCache creates a new proposer indices cache for storing/accessing proposer index assignments of an epoch.
func NewProposerIndicesCache() *FakeProposerIndicesCache {
return &FakeProposerIndicesCache{}
}
// ProposerIndices is a stub.
func (c *FakeProposerIndicesCache) ProposerIndices(_ primitives.Epoch, _ [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}
// UnsafeProposerIndices is a stub.
func (c *FakeProposerIndicesCache) UnsafeProposerIndices(_ primitives.Epoch, _ [32]byte) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}
// Prune is a stub.
func (p *FakeProposerIndicesCache) Prune(epoch primitives.Epoch) {}
// Set is a stub.
func (p *FakeProposerIndicesCache) Set(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
}
// SetUnsafe is a stub.
func (p *FakeProposerIndicesCache) SetUnsafe(epoch primitives.Epoch, root [32]byte, indices [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex) {
}
// SetCheckpoint is a stub.
func (p *FakeProposerIndicesCache) SetCheckpoint(c forkchoicetypes.Checkpoint, root [32]byte) {}
// IndicesFromCheckpoint is a stub.
func (p *FakeProposerIndicesCache) IndicesFromCheckpoint(_ forkchoicetypes.Checkpoint) ([fieldparams.SlotsPerEpoch]primitives.ValidatorIndex, bool) {
return [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}, false
}

View File

@@ -0,0 +1,105 @@
//go:build !fuzz
package cache
import (
"testing"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestProposerCache_Set(t *testing.T) {
cache := NewProposerIndicesCache()
bRoot := [32]byte{'A'}
indices, ok := cache.ProposerIndices(0, bRoot)
require.Equal(t, false, ok)
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
require.Equal(t, indices, emptyIndices, "Expected committee count not to exist in empty cache")
emptyIndices[0] = 1
cache.Set(0, bRoot, emptyIndices)
received, ok := cache.ProposerIndices(0, bRoot)
require.Equal(t, true, ok)
require.Equal(t, received, emptyIndices)
newRoot := [32]byte{'B'}
copy(emptyIndices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
cache.Set(0, newRoot, emptyIndices)
received, ok = cache.ProposerIndices(0, newRoot)
require.Equal(t, true, ok)
require.Equal(t, emptyIndices, received)
}
func TestProposerCache_CheckpointAndPrune(t *testing.T) {
cache := NewProposerIndicesCache()
indices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
copy(indices[3:], []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6})
for i := 1; i < 10; i++ {
root := [32]byte{byte(i)}
cache.Set(primitives.Epoch(i), root, indices)
cpRoot := [32]byte{byte(i - 1)}
cache.SetCheckpoint(forkchoicetypes.Checkpoint{Epoch: primitives.Epoch(i - 1), Root: cpRoot}, root)
}
received, ok := cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
cache.Prune(5)
emptyIndices := [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex{}
received, ok = cache.ProposerIndices(1, [32]byte{1})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(4, [32]byte{4})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.ProposerIndices(9, [32]byte{9})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 0, Root: [32]byte{0}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 3, Root: [32]byte{3}})
require.Equal(t, false, ok)
require.Equal(t, emptyIndices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 4, Root: [32]byte{4}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
received, ok = cache.IndicesFromCheckpoint(forkchoicetypes.Checkpoint{Epoch: 8, Root: [32]byte{8}})
require.Equal(t, true, ok)
require.Equal(t, indices, received)
}

View File

@@ -0,0 +1,11 @@
package cache
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// ProposerIndices defines the cached struct for proposer indices.
type ProposerIndices struct {
BlockRoot [32]byte
ProposerIndices []primitives.ValidatorIndex
}

View File

@@ -23,6 +23,7 @@ go_library(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
@@ -71,6 +72,7 @@ go_test(
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",

View File

@@ -10,7 +10,9 @@ import (
"github.com/OffchainLabs/go-bitfield"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/container/slice"
@@ -25,7 +27,8 @@ import (
)
var (
committeeCache = cache.NewCommitteesCache()
committeeCache = cache.NewCommitteesCache()
proposerIndicesCache = cache.NewProposerIndicesCache()
)
type beaconCommitteeFunc = func(
@@ -525,6 +528,75 @@ func UpdateCommitteeCache(ctx context.Context, state state.ReadOnlyBeaconState,
return nil
}
// UpdateProposerIndicesInCache updates proposer indices entry of the committee cache.
// Input state is used to retrieve active validator indices.
// Input root is to use as key in the cache.
// Input epoch is the epoch to retrieve proposer indices for.
func UpdateProposerIndicesInCache(ctx context.Context, state state.ReadOnlyBeaconState, epoch primitives.Epoch) error {
// The cache uses the state root at the end of (current epoch - 1) as key.
// (e.g. for epoch 2, the key is root at slot 63)
if epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
slot, err := slots.EpochEnd(epoch - 1)
if err != nil {
return err
}
root, err := StateRootAtSlot(state, slot)
if err != nil {
return err
}
var proposerIndices []primitives.ValidatorIndex
// use the state if post fulu (EIP-7917)
if state.Version() >= version.Fulu {
lookAhead, err := state.ProposerLookahead()
if err != nil {
return errors.Wrap(err, "could not get proposer lookahead")
}
proposerIndices = lookAhead[:params.BeaconConfig().SlotsPerEpoch]
} else {
// Skip cache update if the key already exists
_, ok := proposerIndicesCache.ProposerIndices(epoch, [32]byte(root))
if ok {
return nil
}
indices, err := ActiveValidatorIndices(ctx, state, epoch)
if err != nil {
return err
}
proposerIndices, err = PrecomputeProposerIndices(state, indices, epoch)
if err != nil {
return err
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return errors.New("invalid proposer length returned from state")
}
}
// This is here to deal with tests only
var indicesArray [fieldparams.SlotsPerEpoch]primitives.ValidatorIndex
copy(indicesArray[:], proposerIndices)
proposerIndicesCache.Prune(epoch - 2)
proposerIndicesCache.Set(epoch, [32]byte(root), indicesArray)
return nil
}
// UpdateCachedCheckpointToStateRoot updates the map from checkpoints to state root in the proposer indices cache
func UpdateCachedCheckpointToStateRoot(state state.ReadOnlyBeaconState, cp *forkchoicetypes.Checkpoint) error {
if cp.Epoch <= params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
return nil
}
slot, err := slots.EpochEnd(cp.Epoch)
if err != nil {
return err
}
root, err := state.StateRootAtIndex(uint64(slot % params.BeaconConfig().SlotsPerHistoricalRoot))
if err != nil {
return err
}
proposerIndicesCache.SetCheckpoint(*cp, [32]byte(root))
return nil
}
// ExpandCommitteeCache resizes the cache to a higher limit.
func ExpandCommitteeCache() {
committeeCache.ExpandCommitteeCache()
@@ -538,6 +610,7 @@ func CompressCommitteeCache() {
// ClearCache clears the beacon committee cache and sync committee cache.
func ClearCache() {
committeeCache.Clear()
proposerIndicesCache.Prune(0)
syncCommitteeCache.Clear()
balanceCache.Clear()
}

View File

@@ -11,3 +11,7 @@ func CommitteeCache() *cache.FakeCommitteeCache {
func SyncCommitteeCache() *cache.FakeSyncCommitteeCache {
return syncCommitteeCache
}
func ProposerIndicesCache() *cache.FakeProposerIndicesCache {
return proposerIndicesCache
}

View File

@@ -11,3 +11,7 @@ func CommitteeCache() *cache.CommitteeCache {
func SyncCommitteeCache() *cache.SyncCommitteeCache {
return syncCommitteeCache
}
func ProposerIndicesCache() *cache.ProposerIndicesCache {
return proposerIndicesCache
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -272,6 +273,32 @@ func BeaconProposerIndex(ctx context.Context, state state.ReadOnlyBeaconState) (
return BeaconProposerIndexAtSlot(ctx, state, state.Slot())
}
// cachedProposerIndexAtSlot returns the proposer index at the given slot from
// the cache at the given root key.
func cachedProposerIndexAtSlot(slot primitives.Slot, root [32]byte) (primitives.ValidatorIndex, error) {
proposerIndices, has := proposerIndicesCache.ProposerIndices(slots.ToEpoch(slot), root)
if !has {
return 0, errProposerIndexMiss
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return 0, errProposerIndexMiss
}
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
// ProposerIndexAtSlotFromCheckpoint returns the proposer index at the given
// slot from the cache at the given checkpoint
func ProposerIndexAtSlotFromCheckpoint(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, error) {
proposerIndices, has := proposerIndicesCache.IndicesFromCheckpoint(*c)
if !has {
return 0, errProposerIndexMiss
}
if len(proposerIndices) != int(params.BeaconConfig().SlotsPerEpoch) {
return 0, errProposerIndexMiss
}
return proposerIndices[slot%params.BeaconConfig().SlotsPerEpoch], nil
}
func beaconProposerIndexAtSlotFulu(state state.ReadOnlyBeaconState, slot primitives.Slot) (primitives.ValidatorIndex, error) {
e := slots.ToEpoch(slot)
stateEpoch := slots.ToEpoch(state.Slot())
@@ -302,6 +329,32 @@ func BeaconProposerIndexAtSlot(ctx context.Context, state state.ReadOnlyBeaconSt
return beaconProposerIndexAtSlotFulu(state, slot)
}
}
// The cache uses the state root of the previous epoch - minimum_seed_lookahead last slot as key. (e.g. Starting epoch 1, slot 32, the key would be block root at slot 31)
// For simplicity, the node will skip caching of genesis epoch. If the passed state has not yet reached this slot then we do not check the cache.
if e <= stateEpoch && e > params.BeaconConfig().GenesisEpoch+params.BeaconConfig().MinSeedLookahead {
s, err := slots.EpochEnd(e - 1)
if err != nil {
return 0, err
}
r, err := StateRootAtSlot(state, s)
if err != nil {
return 0, err
}
if r != nil && !bytes.Equal(r, params.BeaconConfig().ZeroHash[:]) {
pid, err := cachedProposerIndexAtSlot(slot, [32]byte(r))
if err == nil {
return pid, nil
}
if err := UpdateProposerIndicesInCache(ctx, state, e); err != nil {
return 0, errors.Wrap(err, "could not update proposer index cache")
}
pid, err = cachedProposerIndexAtSlot(slot, [32]byte(r))
if err == nil {
return pid, nil
}
}
}
seed, err := Seed(state, e, params.BeaconConfig().DomainBeaconProposer)
if err != nil {
return 0, errors.Wrap(err, "could not generate seed")

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/time"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -877,6 +878,23 @@ func TestLastActivatedValidatorIndex_OK(t *testing.T) {
require.Equal(t, index, primitives.ValidatorIndex(3))
}
func TestProposerIndexFromCheckpoint(t *testing.T) {
helpers.ClearCache()
e := primitives.Epoch(2)
r := [32]byte{'a'}
root := [32]byte{'b'}
ids := [32]primitives.ValidatorIndex{}
slot := primitives.Slot(69) // slot 5 in the Epoch
ids[5] = primitives.ValidatorIndex(19)
helpers.ProposerIndicesCache().Set(e, r, ids)
c := &forkchoicetypes.Checkpoint{Root: root, Epoch: e - 1}
helpers.ProposerIndicesCache().SetCheckpoint(*c, r)
id, err := helpers.ProposerIndexAtSlotFromCheckpoint(c, slot)
require.NoError(t, err)
require.Equal(t, ids[5], id)
}
func TestHasETH1WithdrawalCredentials(t *testing.T) {
creds := []byte{0xFA, 0xCC}
v := &ethpb.Validator{WithdrawalCredentials: creds}

View File

@@ -5,10 +5,20 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
)
var dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
var (
dataColumnComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "beacon_data_column_sidecar_computation_milliseconds",
Help: "Captures the time taken to compute data column sidecars from blobs.",
Buckets: []float64{25, 50, 100, 250, 500, 750, 1000},
},
)
cellsAndProofsFromStructuredComputationTime = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "cells_and_proofs_from_structured_computation_milliseconds",
Help: "Captures the time taken to compute cells and proofs from structured computation.",
Buckets: []float64{10, 20, 30, 40, 50, 100, 200},
},
)
)

View File

@@ -3,6 +3,7 @@ package peerdas
import (
"sort"
"sync"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/kzg"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
@@ -296,32 +297,42 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
return nil, nil, ErrBlobsCellsProofsMismatch
}
cellsPerBlob := make([][]kzg.Cell, 0, blobCount)
proofsPerBlob := make([][]kzg.Proof, 0, blobCount)
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, blobCount)
proofsPerBlob := make([][]kzg.Proof, blobCount)
for i, blob := range blobs {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
var proofs []kzg.Proof
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
proofs = append(proofs, kzgProof)
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, proofs)
proofs := make([]kzg.Proof, 0, numberOfColumns)
for idx := uint64(i) * numberOfColumns; idx < (uint64(i)+1)*numberOfColumns; idx++ {
var kzgProof kzg.Proof
if copy(kzgProof[:], cellProofs[idx]) != len(kzgProof) {
return errors.New("wrong KZG proof size - should never happen")
}
proofs = append(proofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = proofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil
@@ -329,40 +340,55 @@ func ComputeCellsAndProofsFromFlat(blobs [][]byte, cellProofs [][]byte) ([][]kzg
// ComputeCellsAndProofsFromStructured computes the cells and proofs from blobs and cell proofs.
func ComputeCellsAndProofsFromStructured(blobsAndProofs []*pb.BlobAndProofV2) ([][]kzg.Cell, [][]kzg.Proof, error) {
cellsPerBlob := make([][]kzg.Cell, 0, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, 0, len(blobsAndProofs))
for _, blobAndProof := range blobsAndProofs {
start := time.Now()
defer func() {
cellsAndProofsFromStructuredComputationTime.Observe(float64(time.Since(start).Milliseconds()))
}()
var wg errgroup.Group
cellsPerBlob := make([][]kzg.Cell, len(blobsAndProofs))
proofsPerBlob := make([][]kzg.Proof, len(blobsAndProofs))
for i, blobAndProof := range blobsAndProofs {
if blobAndProof == nil {
return nil, nil, ErrNilBlobAndProof
}
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return nil, nil, errors.New("wrong blob size - should never happen")
}
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return nil, nil, errors.Wrap(err, "compute cells")
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return nil, nil, errors.New("wrong KZG proof size - should never happen")
wg.Go(func() error {
var kzgBlob kzg.Blob
if copy(kzgBlob[:], blobAndProof.Blob) != len(kzgBlob) {
return errors.New("wrong blob size - should never happen")
}
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return nil, nil, errors.New("wrong copied KZG proof size - should never happen")
// Compute the extended cells from the (non-extended) blob.
cells, err := kzg.ComputeCells(&kzgBlob)
if err != nil {
return errors.Wrap(err, "compute cells")
}
kzgProofs = append(kzgProofs, kzgProof)
}
kzgProofs := make([]kzg.Proof, 0, fieldparams.NumberOfColumns)
for _, kzgProofBytes := range blobAndProof.KzgProofs {
if len(kzgProofBytes) != kzg.BytesPerProof {
return errors.New("wrong KZG proof size - should never happen")
}
cellsPerBlob = append(cellsPerBlob, cells)
proofsPerBlob = append(proofsPerBlob, kzgProofs)
var kzgProof kzg.Proof
if copy(kzgProof[:], kzgProofBytes) != len(kzgProof) {
return errors.New("wrong copied KZG proof size - should never happen")
}
kzgProofs = append(kzgProofs, kzgProof)
}
cellsPerBlob[i] = cells
proofsPerBlob[i] = kzgProofs
return nil
})
}
if err := wg.Wait(); err != nil {
return nil, nil, err
}
return cellsPerBlob, proofsPerBlob, nil

View File

@@ -515,6 +515,11 @@ func (dcs *DataColumnStorage) Clear() error {
// prune clean the cache, the filesystem and mutexes.
func (dcs *DataColumnStorage) prune() {
startTime := time.Now()
defer func() {
dataColumnPruneLatency.Observe(float64(time.Since(startTime).Milliseconds()))
}()
highestStoredEpoch := dcs.cache.HighestEpoch()
// Check if we need to prune.
@@ -622,6 +627,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
// Create the SSZ encoded data column sidecars.
var sszEncodedDataColumnSidecars []byte
// Initialize the count of the saved SSZ encoded data column sidecar.
storedCount := uint8(0)
for {
dataColumnSidecars := pullChan(inputDataColumnSidecars)
if len(dataColumnSidecars) == 0 {
@@ -668,6 +676,9 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errors.Wrap(err, "set index")
}
// Increment the count of the saved SSZ encoded data column sidecar.
storedCount++
// Append the SSZ encoded data column sidecar to the SSZ encoded data column sidecars.
sszEncodedDataColumnSidecars = append(sszEncodedDataColumnSidecars, sszEncodedDataColumnSidecar...)
}
@@ -692,9 +703,12 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsExistingFile(filePath string
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}
@@ -808,10 +822,14 @@ func (dcs *DataColumnStorage) saveDataColumnSidecarsNewFile(filePath string, inp
return errWrongBytesWritten
}
syncStart := time.Now()
if err := file.Sync(); err != nil {
return errors.Wrap(err, "sync")
}
dataColumnFileSyncLatency.Observe(float64(time.Since(syncStart).Milliseconds()))
dataColumnBatchStoreCount.Observe(float64(storedCount))
return nil
}

View File

@@ -36,16 +36,15 @@ var (
})
// Data columns
dataColumnBuckets = []float64{3, 5, 7, 9, 11, 13}
dataColumnSaveLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_save_latency",
Help: "Latency of DataColumnSidecar storage save operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{10, 20, 30, 50, 100, 200, 500},
})
dataColumnFetchLatency = promauto.NewHistogram(prometheus.HistogramOpts{
Name: "data_column_storage_get_latency",
Help: "Latency of DataColumnSidecar storage get operations in milliseconds",
Buckets: dataColumnBuckets,
Buckets: []float64{3, 5, 7, 9, 11, 13},
})
dataColumnPrunedCounter = promauto.NewCounter(prometheus.CounterOpts{
Name: "data_column_pruned",
@@ -59,4 +58,16 @@ var (
Name: "data_column_disk_count",
Help: "Approximate number of data columns in storage",
})
dataColumnFileSyncLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_file_sync_latency",
Help: "Latency of sync operations when saving data columns in milliseconds",
})
dataColumnBatchStoreCount = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_batch_store_count",
Help: "Number of data columns stored in a batch",
})
dataColumnPruneLatency = promauto.NewSummary(prometheus.SummaryOpts{
Name: "data_column_prune_latency",
Help: "Latency of data column prune operations in milliseconds",
})
)

View File

@@ -532,12 +532,19 @@ func (s *Service) GetBlobsV2(ctx context.Context, versionedHashes []common.Hash)
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetBlobsV2")
defer span.End()
start := time.Now()
if !s.capabilityCache.has(GetBlobsV2) {
return nil, errors.New(fmt.Sprintf("%s is not supported", GetBlobsV2))
}
result := make([]*pb.BlobAndProofV2, len(versionedHashes))
err := s.rpcClient.CallContext(ctx, &result, GetBlobsV2, versionedHashes)
if len(result) != 0 {
getBlobsV2Latency.Observe(float64(time.Since(start).Milliseconds()))
}
return result, handleRPCError(err)
}

View File

@@ -27,6 +27,13 @@ var (
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
getBlobsV2Latency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "get_blobs_v2_latency_milliseconds",
Help: "Captures RPC latency for getBlobsV2 in milliseconds",
Buckets: []float64{25, 50, 100, 200, 500, 1000, 2000, 4000},
},
)
errParseCount = promauto.NewCounter(prometheus.CounterOpts{
Name: "execution_parse_error_count",
Help: "The number of errors that occurred while parsing execution payload",

View File

@@ -88,9 +88,6 @@ func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Chec
}
}
if uf.Epoch > s.unrealizedFinalizedCheckpoint.Epoch {
s.unrealizedJustifiedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uj.Epoch, Root: bytesutil.ToBytes32(uj.Root),
}
s.unrealizedFinalizedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: uf.Epoch, Root: bytesutil.ToBytes32(uf.Root),
}

View File

@@ -204,6 +204,9 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientOptimisticUpdate, error) {
return lightclientConsensusTypes.NewEmptyOptimisticUpdateDeneb(), nil
},
}
// Reset our light client finality update map.
@@ -223,5 +226,8 @@ func InitializeDataMaps() {
bytesutil.ToBytes4(params.BeaconConfig().ElectraForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
bytesutil.ToBytes4(params.BeaconConfig().FuluForkVersion): func() (interfaces.LightClientFinalityUpdate, error) {
return lightclientConsensusTypes.NewEmptyFinalityUpdateElectra(), nil
},
}
}

View File

@@ -130,6 +130,10 @@ func (s *Server) SubmitAttestationsV2(w http.ResponseWriter, r *http.Request) {
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitAttestationsV2")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
versionHeader := r.Header.Get(api.VersionHeader)
if versionHeader == "" {
httputil.HandleError(w, api.VersionHeader+" header is required", http.StatusBadRequest)
@@ -238,22 +242,14 @@ func (s *Server) handleAttestationsElectra(
},
})
targetState, err := s.AttestationStateFetcher.AttestationTargetState(ctx, singleAtt.Data.Target)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get target state for attestation")
}
committee, err := corehelpers.BeaconCommitteeFromState(ctx, targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get committee for attestation")
}
att := singleAtt.ToAttestationElectra(committee)
wantedEpoch := slots.ToEpoch(att.Data.Slot)
// Broadcast first using CommitteeId directly (fast path)
// This matches gRPC behavior and avoids blocking on state fetching
wantedEpoch := slots.ToEpoch(singleAtt.Data.Slot)
vals, err := s.HeadFetcher.HeadValidatorsIndices(ctx, wantedEpoch)
if err != nil {
return nil, nil, errors.Wrap(err, "could not get head validator indices")
}
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), att.GetCommitteeIndex(), att.Data.Slot)
subnet := corehelpers.ComputeSubnetFromCommitteeAndSlot(uint64(len(vals)), singleAtt.CommitteeId, singleAtt.Data.Slot)
if err = s.Broadcaster.BroadcastAttestation(ctx, subnet, singleAtt); err != nil {
failedBroadcasts = append(failedBroadcasts, &server.IndexedError{
Index: i,
@@ -264,17 +260,35 @@ func (s *Server) handleAttestationsElectra(
}
continue
}
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
// Save to pool after broadcast (slow path - requires state fetching)
// Run in goroutine to avoid blocking the HTTP response
go func() {
for _, singleAtt := range validAttestations {
targetState, err := s.AttestationStateFetcher.AttestationTargetState(context.Background(), singleAtt.Data.Target)
if err != nil {
log.WithError(err).Error("Could not get target state for attestation")
continue
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save attestation")
committee, err := corehelpers.BeaconCommitteeFromState(context.Background(), targetState, singleAtt.Data.Slot, singleAtt.CommitteeId)
if err != nil {
log.WithError(err).Error("Could not get committee for attestation")
continue
}
att := singleAtt.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = s.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
if err = s.AttestationsPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
}
}
}
}()
if len(failedBroadcasts) > 0 {
log.WithFields(logrus.Fields{
@@ -470,6 +484,10 @@ func (s *Server) SubmitSyncCommitteeSignatures(w http.ResponseWriter, r *http.Re
ctx, span := trace.StartSpan(r.Context(), "beacon.SubmitPoolSyncCommitteeSignatures")
defer span.End()
if shared.IsSyncing(ctx, w, s.SyncChecker, s.HeadFetcher, s.TimeFetcher, s.OptimisticModeFetcher) {
return
}
var req structs.SubmitSyncCommitteeSignaturesRequest
err := json.NewDecoder(r.Body).Decode(&req.Data)
switch {

View File

@@ -26,6 +26,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/operations/voluntaryexits/mock"
p2pMock "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/rpc/core"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
@@ -622,6 +623,8 @@ func TestSubmitAttestationsV2(t *testing.T) {
HeadFetcher: chainService,
ChainInfoFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
OperationNotifier: &blockchainmock.MockOperationNotifier{},
AttestationStateFetcher: chainService,
}
@@ -654,6 +657,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -673,6 +677,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("phase0 att post electra", func(t *testing.T) {
@@ -793,6 +798,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Source.Epoch)
assert.Equal(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2", hexutil.Encode(broadcaster.BroadcastAttestations[0].GetData().Target.Root))
assert.Equal(t, primitives.Epoch(0), broadcaster.BroadcastAttestations[0].GetData().Target.Epoch)
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 1, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("multiple", func(t *testing.T) {
@@ -812,6 +818,7 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, http.StatusOK, writer.Code)
assert.Equal(t, true, broadcaster.BroadcastCalled.Load())
assert.Equal(t, 2, broadcaster.NumAttestations())
time.Sleep(100 * time.Millisecond) // Wait for async pool save
assert.Equal(t, 2, s.AttestationsPool.UnaggregatedAttestationCount())
})
t.Run("no body", func(t *testing.T) {
@@ -861,6 +868,27 @@ func TestSubmitAttestationsV2(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Failures[0].Message, "Incorrect attestation signature"))
})
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleAtt)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
request.Header.Set(api.VersionHeader, version.String(version.Phase0))
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitAttestationsV2(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListVoluntaryExits(t *testing.T) {
@@ -1057,14 +1085,19 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
t.Run("single", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
HeadFetcher: chainService,
},
}
@@ -1089,14 +1122,19 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("multiple", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
SyncCommitteeIndices: []primitives.CommitteeIndex{0},
},
HeadFetcher: chainService,
},
}
@@ -1120,13 +1158,18 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
})
t.Run("invalid", func(t *testing.T) {
broadcaster := &p2pMock.MockBroadcaster{}
chainService := &blockchainmock.ChainService{
State: st,
}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
CoreService: &core.Service{
SyncCommitteePool: synccommittee.NewStore(),
P2P: broadcaster,
HeadFetcher: &blockchainmock.ChainService{
State: st,
},
HeadFetcher: chainService,
},
}
@@ -1149,7 +1192,13 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, false, broadcaster.BroadcastCalled.Load())
})
t.Run("empty", func(t *testing.T) {
s := &Server{}
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
var body bytes.Buffer
_, err := body.WriteString("[]")
@@ -1166,7 +1215,13 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("no body", func(t *testing.T) {
s := &Server{}
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
request := httptest.NewRequest(http.MethodPost, "http://example.com", nil)
writer := httptest.NewRecorder()
@@ -1179,6 +1234,26 @@ func TestSubmitSyncCommitteeSignatures(t *testing.T) {
assert.Equal(t, http.StatusBadRequest, e.Code)
assert.Equal(t, true, strings.Contains(e.Message, "No data submitted"))
})
t.Run("syncing", func(t *testing.T) {
chainService := &blockchainmock.ChainService{State: st}
s := &Server{
HeadFetcher: chainService,
TimeFetcher: chainService,
OptimisticModeFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
var body bytes.Buffer
_, err := body.WriteString(singleSyncCommitteeMsg)
require.NoError(t, err)
request := httptest.NewRequest(http.MethodPost, "http://example.com", &body)
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
s.SubmitSyncCommitteeSignatures(writer, request)
assert.Equal(t, http.StatusServiceUnavailable, writer.Code)
assert.Equal(t, true, strings.Contains(writer.Body.String(), "Beacon node is currently syncing"))
})
}
func TestListBLSToExecutionChanges(t *testing.T) {

View File

@@ -52,24 +52,27 @@ func (vs *Server) ProposeAttestation(ctx context.Context, att *ethpb.Attestation
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestation")
defer span.End()
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
resp, err := vs.proposeAtt(ctx, att, att.GetData().CommitteeIndex)
if err != nil {
return nil, err
}
if features.Get().EnableExperimentalAttestationPool {
if err = vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
go func() {
go func() {
if features.Get().EnableExperimentalAttestationPool {
if err := vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
attCopy := att.Copy()
if err := vs.AttPool.SaveUnaggregatedAttestation(attCopy); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
return
}
}()
}
}
}()
return resp, nil
}
@@ -82,6 +85,10 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
ctx, span := trace.StartSpan(ctx, "AttesterServer.ProposeAttestationElectra")
defer span.End()
if vs.SyncChecker.Syncing() {
return nil, status.Errorf(codes.Unavailable, "Syncing to latest head, not ready to respond")
}
resp, err := vs.proposeAtt(ctx, singleAtt, singleAtt.GetCommitteeIndex())
if err != nil {
return nil, err
@@ -98,18 +105,17 @@ func (vs *Server) ProposeAttestationElectra(ctx context.Context, singleAtt *ethp
singleAttCopy := singleAtt.Copy()
att := singleAttCopy.ToAttestationElectra(committee)
if features.Get().EnableExperimentalAttestationPool {
if err = vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
go func() {
go func() {
if features.Get().EnableExperimentalAttestationPool {
if err := vs.AttestationCache.Add(att); err != nil {
log.WithError(err).Error("Could not save attestation")
}
} else {
if err := vs.AttPool.SaveUnaggregatedAttestation(att); err != nil {
log.WithError(err).Error("Could not save unaggregated attestation")
return
}
}()
}
}
}()
return resp, nil
}

View File

@@ -38,6 +38,7 @@ func TestProposeAttestation(t *testing.T) {
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
TimeFetcher: chainService,
AttestationStateFetcher: chainService,
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
head := util.NewBeaconBlock()
head.Block.Slot = 999
@@ -141,6 +142,7 @@ func TestProposeAttestation_IncorrectSignature(t *testing.T) {
P2P: &mockp2p.MockBroadcaster{},
AttPool: attestations.NewPool(),
OperationNotifier: (&mock.ChainService{}).OperationNotifier(),
SyncChecker: &mockSync.Sync{IsSyncing: false},
}
req := util.HydrateAttestation(&ethpb.Attestation{})
@@ -149,6 +151,37 @@ func TestProposeAttestation_IncorrectSignature(t *testing.T) {
assert.ErrorContains(t, wanted, err)
}
func TestProposeAttestation_Syncing(t *testing.T) {
attesterServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
req := util.HydrateAttestation(&ethpb.Attestation{})
_, err := attesterServer.ProposeAttestation(t.Context(), req)
assert.ErrorContains(t, "Syncing to latest head", err)
s, ok := status.FromError(err)
require.Equal(t, true, ok)
assert.Equal(t, codes.Unavailable, s.Code())
}
func TestProposeAttestationElectra_Syncing(t *testing.T) {
attesterServer := &Server{
SyncChecker: &mockSync.Sync{IsSyncing: true},
}
req := &ethpb.SingleAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Root: make([]byte, 32)},
Target: &ethpb.Checkpoint{Root: make([]byte, 32)},
},
}
_, err := attesterServer.ProposeAttestationElectra(t.Context(), req)
assert.ErrorContains(t, "Syncing to latest head", err)
s, ok := status.FromError(err)
require.Equal(t, true, ok)
assert.Equal(t, codes.Unavailable, s.Code())
}
func TestGetAttestationData_OK(t *testing.T) {
block := util.NewBeaconBlock()
block.Block.Slot = 3*params.BeaconConfig().SlotsPerEpoch + 1

View File

@@ -163,11 +163,15 @@ func (s *Service) validateWithKzgBatchVerifier(ctx context.Context, dataColumns
resChan := make(chan error, 1)
verificationSet := &kzgVerifier{dataColumns: dataColumns, resChan: resChan}
s.kzgChan <- verificationSet
ctx, cancel := context.WithTimeout(ctx, timeout)
defer cancel()
select {
case s.kzgChan <- verificationSet:
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err()
}
select {
case <-ctx.Done():
return pubsub.ValidationIgnore, ctx.Err() // parent context canceled, give up

View File

@@ -3,6 +3,7 @@ package sync
import (
"bytes"
"context"
"fmt"
"maps"
"slices"
"sync"
@@ -243,8 +244,10 @@ func requestDirectSidecarsFromPeers(
}
// Compute missing indices by root, excluding those already in storage.
var lastRoot [fieldparams.RootLength]byte
missingIndicesByRoot := make(map[[fieldparams.RootLength]byte]map[uint64]bool, len(incompleteRoots))
for root := range incompleteRoots {
lastRoot = root
storedIndices := storedIndicesByRoot[root]
missingIndices := make(map[uint64]bool, len(requestedIndices))
@@ -259,6 +262,7 @@ func requestDirectSidecarsFromPeers(
}
}
initialMissingRootCount := len(missingIndicesByRoot)
initialMissingCount := computeTotalCount(missingIndicesByRoot)
indicesByRootByPeer, err := computeIndicesByRootByPeer(params.P2P, slotByRoot, missingIndicesByRoot, connectedPeers)
@@ -301,11 +305,19 @@ func requestDirectSidecarsFromPeers(
}
}
log.WithFields(logrus.Fields{
"duration": time.Since(start),
"initialMissingCount": initialMissingCount,
"finalMissingCount": computeTotalCount(missingIndicesByRoot),
}).Debug("Requested direct data column sidecars from peers")
log := log.WithFields(logrus.Fields{
"duration": time.Since(start),
"initialMissingRootCount": initialMissingRootCount,
"initialMissingCount": initialMissingCount,
"finalMissingRootCount": len(missingIndicesByRoot),
"finalMissingCount": computeTotalCount(missingIndicesByRoot),
})
if initialMissingRootCount == 1 {
log = log.WithField("root", fmt.Sprintf("%#x", lastRoot))
}
log.Debug("Requested direct data column sidecars from peers")
return verifiedColumnsByRoot, nil
}

View File

@@ -304,6 +304,36 @@ func TestValidateWithKzgBatchVerifier_DeadlockOnTimeout(t *testing.T) {
}
}
func TestValidateWithKzgBatchVerifier_ContextCanceledBeforeSend(t *testing.T) {
cancelledCtx, cancel := context.WithCancel(t.Context())
cancel()
service := &Service{
ctx: context.Background(),
kzgChan: make(chan *kzgVerifier),
}
done := make(chan struct{})
go func() {
result, err := service.validateWithKzgBatchVerifier(cancelledCtx, nil)
require.Equal(t, pubsub.ValidationIgnore, result)
require.ErrorIs(t, err, context.Canceled)
close(done)
}()
select {
case <-done:
case <-time.After(500 * time.Millisecond):
t.Fatal("validateWithKzgBatchVerifier did not return after context cancellation")
}
select {
case <-service.kzgChan:
t.Fatal("verificationSet was sent to kzgChan despite canceled context")
default:
}
}
func createValidTestDataColumns(t *testing.T, count int) []blocks.RODataColumn {
_, roSidecars, _ := util.GenerateTestFuluBlockWithSidecars(t, count)
if len(roSidecars) >= count {

View File

@@ -204,6 +204,13 @@ var (
},
)
dataColumnsRecoveredFromELAttempts = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_attempts",
Help: "Count the number of data columns recovery attempts from the execution layer.",
},
)
dataColumnsRecoveredFromELTotal = promauto.NewCounter(
prometheus.CounterOpts{
Name: "data_columns_recovered_from_el_total",
@@ -242,6 +249,13 @@ var (
Buckets: []float64{100, 250, 500, 750, 1000, 1500, 2000, 4000, 8000, 12000, 16000},
},
)
dataColumnSidecarsObtainedViaELCount = promauto.NewSummary(
prometheus.SummaryOpts{
Name: "data_column_obtained_via_el_count",
Help: "Count the number of data column sidecars obtained via the execution layer.",
},
)
)
func (s *Service) updateMetrics() {

View File

@@ -3,9 +3,9 @@ package sync
import (
"bytes"
"context"
"encoding/hex"
"fmt"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/blocks"
@@ -21,13 +21,23 @@ import (
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
var pendingAttsLimit = 32768
const pendingAttsLimit = 32768
// aggregatorIndexFilter defines how aggregator index should be handled in equality checks.
type aggregatorIndexFilter int
const (
// ignoreAggregatorIndex means aggregates differing only by aggregator index are considered equal.
ignoreAggregatorIndex aggregatorIndexFilter = iota
// includeAggregatorIndex means aggregator index must also match for aggregates to be considered equal.
includeAggregatorIndex
)
// This method processes pending attestations once a "known" block has arrived. With validations,
// the valid attestations get saved into the operation mem pool, and the invalid attestations get deleted
@@ -50,16 +60,7 @@ func (s *Service) processPendingAttsForBlock(ctx context.Context, bRoot [32]byte
attestations := s.blkRootToPendingAtts[bRoot]
s.pendingAttsLock.RUnlock()
if len(attestations) > 0 {
start := time.Now()
s.processAttestations(ctx, attestations)
duration := time.Since(start)
log.WithFields(logrus.Fields{
"blockRoot": hex.EncodeToString(bytesutil.Trunc(bRoot[:])),
"pendingAttsCount": len(attestations),
"duration": duration,
}).Debug("Verified and saved pending attestations to pool")
}
s.processAttestations(ctx, attestations)
randGen := rand.NewGenerator()
// Delete the missing block root key from pending attestation queue so a node will not request for the block again.
@@ -79,26 +80,71 @@ func (s *Service) processPendingAttsForBlock(ctx context.Context, bRoot [32]byte
return s.sendBatchRootRequest(ctx, pendingRoots, randGen)
}
// processAttestations processes a list of attestations.
// It assumes (for logging purposes only) that all attestations pertain to the same block.
func (s *Service) processAttestations(ctx context.Context, attestations []any) {
if len(attestations) == 0 {
return
}
firstAttestation := attestations[0]
var blockRoot []byte
switch v := firstAttestation.(type) {
case ethpb.Att:
blockRoot = v.GetData().BeaconBlockRoot
case ethpb.SignedAggregateAttAndProof:
blockRoot = v.AggregateAttestationAndProof().AggregateVal().GetData().BeaconBlockRoot
default:
log.Warnf("Unexpected attestation type %T, skipping processing", v)
return
}
validAggregates := make([]ethpb.SignedAggregateAttAndProof, 0, len(attestations))
startAggregate := time.Now()
atts := make([]ethpb.Att, 0, len(attestations))
aggregateAttAndProofCount := 0
for _, att := range attestations {
switch v := att.(type) {
case ethpb.Att:
atts = append(atts, v)
case ethpb.SignedAggregateAttAndProof:
s.processAggregate(ctx, v)
aggregateAttAndProofCount++
// Avoid processing multiple aggregates only differing by aggregator index.
if slices.ContainsFunc(validAggregates, func(other ethpb.SignedAggregateAttAndProof) bool {
return pendingAggregatesAreEqual(v, other, ignoreAggregatorIndex)
}) {
continue
}
if err := s.processAggregate(ctx, v); err != nil {
log.WithError(err).Debug("Pending aggregate attestation could not be processed")
continue
}
validAggregates = append(validAggregates, v)
default:
log.Warnf("Unexpected attestation type %T, skipping", v)
}
}
durationAggregateAttAndProof := time.Since(startAggregate)
startAtts := time.Now()
for _, bucket := range bucketAttestationsByData(atts) {
s.processAttestationBucket(ctx, bucket)
}
durationAtts := time.Since(startAtts)
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", blockRoot),
"totalCount": len(attestations),
"aggregateAttAndProofCount": aggregateAttAndProofCount,
"uniqueAggregateAttAndProofCount": len(validAggregates),
"attCount": len(atts),
"durationTotal": durationAggregateAttAndProof + durationAtts,
"durationAggregateAttAndProof": durationAggregateAttAndProof,
"durationAtts": durationAtts,
}).Debug("Verified and saved pending attestations to pool")
}
// attestationBucket groups attestations with the same AttestationData for batch processing.
@@ -303,21 +349,20 @@ func (s *Service) processVerifiedAttestation(
})
}
func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) {
func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAggregateAttAndProof) error {
res, err := s.validateAggregatedAtt(ctx, aggregate)
if err != nil {
log.WithError(err).Debug("Pending aggregated attestation failed validation")
return
return errors.Wrap(err, "validate aggregated att")
}
if res != pubsub.ValidationAccept || !s.validateBlockInAttestation(ctx, aggregate) {
log.Debug("Pending aggregated attestation failed validation")
return
return errors.New("Pending aggregated attestation failed validation")
}
att := aggregate.AggregateAttestationAndProof().AggregateVal()
if err := s.saveAttestation(att); err != nil {
log.WithError(err).Debug("Could not save aggregated attestation")
return
return errors.Wrap(err, "save attestation")
}
_ = s.setAggregatorIndexEpochSeen(att.GetData().Target.Epoch, aggregate.AggregateAttestationAndProof().GetAggregatorIndex())
@@ -325,6 +370,8 @@ func (s *Service) processAggregate(ctx context.Context, aggregate ethpb.SignedAg
if err := s.cfg.p2p.Broadcast(ctx, aggregate); err != nil {
log.WithError(err).Debug("Could not broadcast aggregated attestation")
}
return nil
}
// This defines how pending aggregates are saved in the map. The key is the
@@ -336,7 +383,7 @@ func (s *Service) savePendingAggregate(agg ethpb.SignedAggregateAttAndProof) {
s.savePending(root, agg, func(other any) bool {
a, ok := other.(ethpb.SignedAggregateAttAndProof)
return ok && pendingAggregatesAreEqual(agg, a)
return ok && pendingAggregatesAreEqual(agg, a, includeAggregatorIndex)
})
}
@@ -391,13 +438,19 @@ func (s *Service) savePending(root [32]byte, pending any, isEqual func(other any
s.blkRootToPendingAtts[root] = append(s.blkRootToPendingAtts[root], pending)
}
func pendingAggregatesAreEqual(a, b ethpb.SignedAggregateAttAndProof) bool {
// pendingAggregatesAreEqual checks if two pending aggregate attestations are equal.
// The filter parameter controls whether aggregator index is considered in the equality check.
func pendingAggregatesAreEqual(a, b ethpb.SignedAggregateAttAndProof, filter aggregatorIndexFilter) bool {
if a.Version() != b.Version() {
return false
}
if a.AggregateAttestationAndProof().GetAggregatorIndex() != b.AggregateAttestationAndProof().GetAggregatorIndex() {
return false
if filter == includeAggregatorIndex {
if a.AggregateAttestationAndProof().GetAggregatorIndex() != b.AggregateAttestationAndProof().GetAggregatorIndex() {
return false
}
}
aAtt := a.AggregateAttestationAndProof().AggregateVal()
bAtt := b.AggregateAttestationAndProof().AggregateVal()
if aAtt.GetData().Slot != bAtt.GetData().Slot {

View File

@@ -94,7 +94,7 @@ func TestProcessPendingAtts_NoBlockRequestBlock(t *testing.T) {
// Process block A (which exists and has no pending attestations)
// This should skip processing attestations for A and request blocks B and C
require.NoError(t, r.processPendingAttsForBlock(t.Context(), rootA))
require.LogsContain(t, hook, "Requesting block by root")
require.LogsContain(t, hook, "Requesting blocks by root")
}
func TestProcessPendingAtts_HasBlockSaveUnaggregatedAtt(t *testing.T) {
@@ -911,17 +911,17 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, true, pendingAggregatesAreEqual(a, b))
assert.Equal(t, true, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different version", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 1}}
b := &ethpb.SignedAggregateAttestationAndProofElectra{Message: &ethpb.AggregateAttestationAndProofElectra{AggregatorIndex: 1}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregator index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 1}}
b := &ethpb.SignedAggregateAttestationAndProof{Message: &ethpb.AggregateAttestationAndProof{AggregatorIndex: 2}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different slot", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -942,7 +942,7 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different committee index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -963,7 +963,7 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregation bits", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
@@ -984,7 +984,30 @@ func Test_pendingAggregatesAreEqual(t *testing.T) {
},
AggregationBits: bitfield.Bitlist{0b1000},
}}}
assert.Equal(t, false, pendingAggregatesAreEqual(a, b))
assert.Equal(t, false, pendingAggregatesAreEqual(a, b, includeAggregatorIndex))
})
t.Run("different aggregator index should be equal while ignoring aggregator index", func(t *testing.T) {
a := &ethpb.SignedAggregateAttestationAndProof{
Message: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 1,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
b := &ethpb.SignedAggregateAttestationAndProof{
Message: &ethpb.AggregateAttestationAndProof{
AggregatorIndex: 2,
Aggregate: &ethpb.Attestation{
Data: &ethpb.AttestationData{
Slot: 1,
CommitteeIndex: 1,
},
AggregationBits: bitfield.Bitlist{0b1111},
}}}
assert.Equal(t, true, pendingAggregatesAreEqual(a, b, ignoreAggregatorIndex))
})
}

View File

@@ -2,7 +2,6 @@ package sync
import (
"context"
"encoding/hex"
"fmt"
"slices"
"sync"
@@ -44,11 +43,13 @@ func (s *Service) processPendingBlocksQueue() {
if !s.chainIsStarted() {
return
}
locker.Lock()
defer locker.Unlock()
if err := s.processPendingBlocks(s.ctx); err != nil {
log.WithError(err).Debug("Could not process pending blocks")
}
locker.Unlock()
})
}
@@ -73,8 +74,10 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
randGen := rand.NewGenerator()
var parentRoots [][32]byte
blkRoots := make([][32]byte, 0, len(sortedSlots)*maxBlocksPerSlot)
// Iterate through sorted slots.
for _, slot := range sortedSlots {
for i, slot := range sortedSlots {
// Skip processing if slot is in the future.
if slot > s.cfg.clock.CurrentSlot() {
continue
@@ -91,6 +94,9 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
// Process each block in the queue.
for _, b := range blocksInCache {
start := time.Now()
totalDuration := time.Duration(0)
if err := blocks.BeaconBlockIsNil(b); err != nil {
continue
}
@@ -147,19 +153,34 @@ func (s *Service) processPendingBlocks(ctx context.Context) error {
}
cancelFunction()
// Process pending attestations for this block.
if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
log.WithError(err).Debug("Failed to process pending attestations for block")
}
blkRoots = append(blkRoots, blkRoot)
// Remove the processed block from the queue.
if err := s.removeBlockFromQueue(b, blkRoot); err != nil {
return err
}
log.WithFields(logrus.Fields{"slot": slot, "blockRoot": hex.EncodeToString(bytesutil.Trunc(blkRoot[:]))}).Debug("Processed pending block and cleared it in cache")
duration := time.Since(start)
totalDuration += duration
log.WithFields(logrus.Fields{
"slotIndex": fmt.Sprintf("%d/%d", i+1, len(sortedSlots)),
"slot": slot,
"root": fmt.Sprintf("%#x", blkRoot),
"duration": duration,
"totalDuration": totalDuration,
}).Debug("Processed pending block and cleared it in cache")
}
span.End()
}
for _, blkRoot := range blkRoots {
// Process pending attestations for this block.
if err := s.processPendingAttsForBlock(ctx, blkRoot); err != nil {
log.WithError(err).Debug("Failed to process pending attestations for block")
}
}
return s.sendBatchRootRequest(ctx, parentRoots, randGen)
}
@@ -379,6 +400,19 @@ func (s *Service) sendBatchRootRequest(ctx context.Context, roots [][32]byte, ra
req = roots[:maxReqBlock]
}
if logrus.GetLevel() >= logrus.DebugLevel {
rootsStr := make([]string, 0, len(roots))
for _, req := range roots {
rootsStr = append(rootsStr, fmt.Sprintf("%#x", req))
}
log.WithFields(logrus.Fields{
"peer": pid,
"count": len(req),
"roots": rootsStr,
}).Debug("Requesting blocks by root")
}
// Send the request to the peer.
if err := s.sendBeaconBlocksRequest(ctx, &req, pid); err != nil {
tracing.AnnotateError(span, err)
@@ -438,8 +472,6 @@ func (s *Service) filterOutPendingAndSynced(roots [][fieldparams.RootLength]byte
roots = append(roots[:i], roots[i+1:]...)
continue
}
log.WithField("blockRoot", fmt.Sprintf("%#x", r)).Debug("Requesting block by root")
}
return roots
}

View File

@@ -189,12 +189,30 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
ctx, cancel := context.WithTimeout(ctx, secondsPerHalfSlot)
defer cancel()
log := log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"type": source.Type(),
})
var constructedSidecarCount uint64
for iteration := uint64(0); ; /*no stop condition*/ iteration++ {
log = log.WithField("iteration", iteration)
// Exit early if all sidecars to sample have been seen.
if s.haveAllSidecarsBeenSeen(source.Slot(), source.ProposerIndex(), columnIndicesToSample) {
if iteration > 0 && constructedSidecarCount == 0 {
log.Debug("No data column sidecars constructed from the execution client")
}
return nil, nil
}
if iteration == 0 {
dataColumnsRecoveredFromELAttempts.Inc()
}
// Try to reconstruct data column sidecars from the execution client.
constructedSidecars, err := s.cfg.executionReconstructor.ConstructDataColumnSidecars(ctx, source)
if err != nil {
@@ -202,8 +220,8 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
}
// No sidecars are retrieved from the EL, retry later
sidecarCount := uint64(len(constructedSidecars))
if sidecarCount == 0 {
constructedSidecarCount = uint64(len(constructedSidecars))
if constructedSidecarCount == 0 {
if ctx.Err() != nil {
return nil, ctx.Err()
}
@@ -212,9 +230,11 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
continue
}
dataColumnsRecoveredFromELTotal.Inc()
// Boundary check.
if sidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", sidecarCount, fieldparams.NumberOfColumns)
if constructedSidecarCount != fieldparams.NumberOfColumns {
return nil, errors.Errorf("reconstruct data column sidecars returned %d sidecars, expected %d - should never happen", constructedSidecarCount, fieldparams.NumberOfColumns)
}
unseenIndices, err := s.broadcastAndReceiveUnseenDataColumnSidecars(ctx, source.Slot(), source.ProposerIndex(), columnIndicesToSample, constructedSidecars)
@@ -222,19 +242,12 @@ func (s *Service) processDataColumnSidecarsFromExecution(ctx context.Context, so
return nil, errors.Wrap(err, "broadcast and receive unseen data column sidecars")
}
if len(unseenIndices) > 0 {
dataColumnsRecoveredFromELTotal.Inc()
log.WithFields(logrus.Fields{
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
log.WithFields(logrus.Fields{
"root": fmt.Sprintf("%#x", source.Root()),
"slot": source.Slot(),
"proposerIndex": source.ProposerIndex(),
"iteration": iteration,
"type": source.Type(),
"count": len(unseenIndices),
"indices": helpers.SortedPrettySliceFromMap(unseenIndices),
}).Debug("Constructed data column sidecars from the execution client")
}
dataColumnSidecarsObtainedViaELCount.Observe(float64(len(unseenIndices)))
return nil, nil
}

View File

@@ -51,14 +51,12 @@ func (s *Service) validateDataColumn(ctx context.Context, pid peer.ID, msg *pubs
// Decode the message, reject if it fails.
m, err := s.decodePubsubMessage(msg)
if err != nil {
log.WithError(err).Error("Failed to decode message")
return pubsub.ValidationReject, err
}
// Reject messages that are not of the expected type.
dcsc, ok := m.(*eth.DataColumnSidecar)
if !ok {
log.WithField("message", m).Error("Message is not of type *eth.DataColumnSidecar")
return pubsub.ValidationReject, errWrongMessage
}

View File

@@ -54,11 +54,13 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)
secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -101,7 +103,10 @@ func TestValidateLightClientOptimisticUpdate(t *testing.T) {
}
for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)
@@ -180,11 +185,13 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
cfg.CapellaForkEpoch = 3
cfg.DenebForkEpoch = 4
cfg.ElectraForkEpoch = 5
cfg.FuluForkEpoch = 6
cfg.ForkVersionSchedule[[4]byte{1, 0, 0, 0}] = 1
cfg.ForkVersionSchedule[[4]byte{2, 0, 0, 0}] = 2
cfg.ForkVersionSchedule[[4]byte{3, 0, 0, 0}] = 3
cfg.ForkVersionSchedule[[4]byte{4, 0, 0, 0}] = 4
cfg.ForkVersionSchedule[[4]byte{5, 0, 0, 0}] = 5
cfg.ForkVersionSchedule[[4]byte{6, 0, 0, 0}] = 6
params.OverrideBeaconConfig(cfg)
secondsPerSlot := int(params.BeaconConfig().SecondsPerSlot)
@@ -227,7 +234,10 @@ func TestValidateLightClientFinalityUpdate(t *testing.T) {
}
for _, test := range tests {
for v := 1; v < 6; v++ {
for v := range version.All() {
if v == version.Phase0 {
continue
}
t.Run(test.name+"_"+version.String(v), func(t *testing.T) {
ctx := t.Context()
p := p2ptest.NewTestP2P(t)

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
@@ -288,15 +289,27 @@ func (bv *ROBlobVerifier) SidecarKzgProofVerified() (err error) {
// for later processing while proposers for the block's branch are calculated -- in such a case do not REJECT, instead IGNORE this message.
func (bv *ROBlobVerifier) SidecarProposerExpected(ctx context.Context) (err error) {
defer bv.recordResult(RequireSidecarProposerExpected, &err)
pst, err := bv.parentState(ctx)
e := slots.ToEpoch(bv.blob.Slot())
if e > 0 {
e = e - 1
}
r, err := bv.fc.TargetRootForEpoch(bv.blob.ParentRoot(), e)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("State replay to parent_root failed")
return errSidecarUnexpectedProposer
}
idx, err := bv.pc.ComputeProposer(ctx, bv.blob.Slot(), pst)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("Error computing proposer index from parent state")
return errSidecarUnexpectedProposer
c := &forkchoicetypes.Checkpoint{Root: r, Epoch: e}
idx, cached := bv.pc.Proposer(c, bv.blob.Slot())
if !cached {
pst, err := bv.parentState(ctx)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("State replay to parent_root failed")
return errSidecarUnexpectedProposer
}
idx, err = bv.pc.ComputeProposer(ctx, bv.blob.ParentRoot(), bv.blob.Slot(), pst)
if err != nil {
log.WithError(err).WithFields(logging.BlobFields(bv.blob)).Debug("Error computing proposer index from parent state")
return errSidecarUnexpectedProposer
}
}
if idx != bv.blob.ProposerIndex() {
log.WithError(errSidecarUnexpectedProposer).

View File

@@ -452,17 +452,33 @@ func TestSidecarProposerExpected(t *testing.T) {
ctx := t.Context()
_, blobs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 1)
b := blobs[0]
t.Run("state lookup failure", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
t.Run("cached, matches", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex())}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.NoError(t, v.SidecarProposerExpected(ctx))
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("cached, does not match", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{pc: &mockProposerCache{ProposerCB: pcReturnsIdx(b.ProposerIndex() + 1)}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), errSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("not cached, state lookup failure", func(t *testing.T) {
ini := Initializer{shared: &sharedResources{sr: sbrNotFound(t, b.ParentRoot()), pc: &mockProposerCache{ProposerCB: pcReturnsNotFound()}, fc: &mockForkchoicer{TargetRootForEpochCB: fcReturnsTargetRoot([32]byte{})}}}
v := ini.NewBlobVerifier(b, GossipBlobSidecarRequirements)
require.ErrorIs(t, v.SidecarProposerExpected(ctx), errSidecarUnexpectedProposer)
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("proposer matches", func(t *testing.T) {
t.Run("not cached, proposer matches", func(t *testing.T) {
pc := &mockProposerCache{
ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex(), nil
},
@@ -473,9 +489,11 @@ func TestSidecarProposerExpected(t *testing.T) {
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NoError(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("proposer does not match", func(t *testing.T) {
t.Run("not cached, proposer does not match", func(t *testing.T) {
pc := &mockProposerCache{
ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return b.ProposerIndex() + 1, nil
},
@@ -486,9 +504,11 @@ func TestSidecarProposerExpected(t *testing.T) {
require.Equal(t, true, v.results.executed(RequireSidecarProposerExpected))
require.NotNil(t, v.results.result(RequireSidecarProposerExpected))
})
t.Run("ComputeProposer fails", func(t *testing.T) {
t.Run("not cached, ComputeProposer fails", func(t *testing.T) {
pc := &mockProposerCache{
ComputeProposerCB: func(_ context.Context, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
ProposerCB: pcReturnsNotFound(),
ComputeProposerCB: func(_ context.Context, root [32]byte, slot primitives.Slot, _ state.BeaconState) (primitives.ValidatorIndex, error) {
require.Equal(t, b.ParentRoot(), root)
require.Equal(t, b.Slot(), slot)
return 0, errors.New("ComputeProposer failed")
},
@@ -825,11 +845,28 @@ func (v *validxStateOverride) ReadFromEveryValidator(f func(idx int, val state.R
}
type mockProposerCache struct {
ComputeProposerCB func(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
ComputeProposerCB func(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
ProposerCB func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool)
}
func (p *mockProposerCache) ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
return p.ComputeProposerCB(ctx, slot, pst)
func (p *mockProposerCache) ComputeProposer(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
return p.ComputeProposerCB(ctx, root, slot, pst)
}
func (p *mockProposerCache) Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
return p.ProposerCB(c, slot)
}
var _ proposerCache = &mockProposerCache{}
func pcReturnsIdx(idx primitives.ValidatorIndex) func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
return func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
return idx, true
}
}
func pcReturnsNotFound() func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
return func(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
return 0, false
}
}

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
lruwrpr "github.com/OffchainLabs/prysm/v7/cache/lru"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -15,7 +16,6 @@ import (
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
lru "github.com/hashicorp/golang-lru"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -152,7 +152,8 @@ func (c *sigCache) SignatureVerified(sig signatureData) (bool, error) {
// and cache the result so that it can be reused when the same verification needs to be performed
// across multiple values.
type proposerCache interface {
ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
ComputeProposer(ctx context.Context, root [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error)
Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool)
}
func newPropCache() *propCache {
@@ -162,20 +163,26 @@ func newPropCache() *propCache {
type propCache struct {
}
// ComputeProposer takes the state and computes the proposer index at the given slot
func (*propCache) ComputeProposer(ctx context.Context, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
// After Fulu, the lookahead only contains proposers for the current and next epoch.
stateEpoch := slots.ToEpoch(pst.Slot())
slotEpoch := slots.ToEpoch(slot)
if slotEpoch > stateEpoch+1 {
start, err := slots.EpochStart(slotEpoch - 1)
if err != nil {
return 0, err
}
pst, err = transition.ProcessSlots(ctx, pst, start)
if err != nil {
return 0, errors.Wrap(err, "failed to advance state to compute proposer")
}
// ComputeProposer takes the state for the given parent root and slot and computes the proposer index, updating the
// proposer index cache when successful.
func (*propCache) ComputeProposer(ctx context.Context, parent [32]byte, slot primitives.Slot, pst state.BeaconState) (primitives.ValidatorIndex, error) {
pst, err := transition.ProcessSlotsUsingNextSlotCache(ctx, pst, parent[:], slot)
if err != nil {
return 0, err
}
return helpers.BeaconProposerIndexAtSlot(ctx, pst, slot)
idx, err := helpers.BeaconProposerIndex(ctx, pst)
if err != nil {
return 0, err
}
return idx, nil
}
// Proposer returns the validator index if it is found in the cache, along with a boolean indicating
// whether the value was present, similar to accessing an lru or go map.
func (*propCache) Proposer(c *forkchoicetypes.Checkpoint, slot primitives.Slot) (primitives.ValidatorIndex, bool) {
id, err := helpers.ProposerIndexAtSlotFromCheckpoint(c, slot)
if err != nil {
return 0, false
}
return id, true
}

View File

@@ -4,6 +4,7 @@ import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
@@ -106,3 +107,25 @@ func (m *mockValidatorAtIndexer) ValidatorAtIndex(idx primitives.ValidatorIndex)
}
var _ validatorAtIndexer = &mockValidatorAtIndexer{}
func TestProposerCache(t *testing.T) {
ctx := t.Context()
// 3 validators because that was the first number that produced a non-zero proposer index by default
st, _ := util.DeterministicGenesisStateDeneb(t, 3)
pc := newPropCache()
_, cached := pc.Proposer(&forkchoicetypes.Checkpoint{}, 1)
// should not be cached yet
require.Equal(t, false, cached)
// If this test breaks due to changes in the deterministic state gen, just replace '2' with whatever the right index is.
expectedIdx := 2
idx, err := pc.ComputeProposer(ctx, [32]byte{}, 1, st)
require.NoError(t, err)
require.Equal(t, primitives.ValidatorIndex(expectedIdx), idx)
idx, cached = pc.Proposer(&forkchoicetypes.Checkpoint{}, 1)
// TODO: update this test when we integrate a proposer id cache
require.Equal(t, false, cached)
require.Equal(t, primitives.ValidatorIndex(0), idx)
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/peerdas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
forkchoicetypes "github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice/types"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
@@ -360,7 +361,7 @@ func (dv *RODataColumnsVerifier) SidecarParentSeen(parentSeen func([fieldparams.
}
if !dv.fc.HasNode(parentRoot) {
return columnErrBuilder(errSidecarParentNotSeen)
return columnErrBuilder(errors.Wrapf(errSidecarParentNotSeen, "parent root: %#x", parentRoot))
}
}
@@ -483,6 +484,38 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
defer dv.recordResult(RequireSidecarProposerExpected, &err)
type slotParentRoot struct {
slot primitives.Slot
parentRoot [fieldparams.RootLength]byte
}
targetRootBySlotParentRoot := make(map[slotParentRoot][fieldparams.RootLength]byte)
var targetRootFromCache = func(slot primitives.Slot, parentRoot [fieldparams.RootLength]byte) ([fieldparams.RootLength]byte, error) {
// Use cached values if available.
slotParentRoot := slotParentRoot{slot: slot, parentRoot: parentRoot}
if root, ok := targetRootBySlotParentRoot[slotParentRoot]; ok {
return root, nil
}
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(slot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Compute the target root for the epoch.
targetRoot, err := dv.fc.TargetRootForEpoch(parentRoot, dataColumnEpoch)
if err != nil {
return [fieldparams.RootLength]byte{}, columnErrBuilder(errors.Wrap(err, "target root from epoch"))
}
// Store the target root in the cache.
targetRootBySlotParentRoot[slotParentRoot] = targetRoot
return targetRoot, nil
}
for _, dataColumn := range dv.dataColumns {
// Extract the slot of the data column.
dataColumnSlot := dataColumn.Slot()
@@ -490,33 +523,56 @@ func (dv *RODataColumnsVerifier) SidecarProposerExpected(ctx context.Context) (e
// Extract the root of the parent block corresponding to the data column.
parentRoot := dataColumn.ParentRoot()
// Ensure the expensive index computation is only performed once for
// concurrent requests for the same signature data.
idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
}
idx, err := helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
return idx, nil
})
// Compute the target root for the data column.
targetRoot, err := targetRootFromCache(dataColumnSlot, parentRoot)
if err != nil {
return err
return columnErrBuilder(errors.Wrap(err, "target root"))
}
idx, ok := idxAny.(primitives.ValidatorIndex)
if !ok {
return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
// Compute the epoch of the data column slot.
dataColumnEpoch := slots.ToEpoch(dataColumnSlot)
if dataColumnEpoch > 0 {
dataColumnEpoch = dataColumnEpoch - 1
}
// Create a checkpoint for the target root.
checkpoint := &forkchoicetypes.Checkpoint{Root: targetRoot, Epoch: dataColumnEpoch}
// Try to extract the proposer index from the data column in the cache.
idx, cached := dv.pc.Proposer(checkpoint, dataColumnSlot)
if !cached {
parentRoot := dataColumn.ParentRoot()
// Ensure the expensive index computation is only performed once for
// concurrent requests for the same signature data.
idxAny, err, _ := dv.sg.Do(concatRootSlot(parentRoot, dataColumnSlot), func() (any, error) {
verifyingState, err := dv.getVerifyingState(ctx, dataColumn)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "verifying state"))
}
idx, err = helpers.BeaconProposerIndexAtSlot(ctx, verifyingState, dataColumnSlot)
if err != nil {
return nil, columnErrBuilder(errors.Wrap(err, "compute proposer"))
}
return idx, nil
})
if err != nil {
return err
}
var ok bool
if idx, ok = idxAny.(primitives.ValidatorIndex); !ok {
return columnErrBuilder(errors.New("type assertion to ValidatorIndex failed"))
}
}
if idx != dataColumn.ProposerIndex() {
return columnErrBuilder(errSidecarUnexpectedProposer)
}
}
return nil
}

View File

@@ -799,20 +799,35 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
columns := GenerateTestDataColumns(t, parentRoot, columnSlot, blobCount)
firstColumn := columns[0]
ctx := t.Context()
testCases := []struct {
name string
stateByRooter StateByRooter
headStateProvider *mockHeadStateProvider
columns []blocks.RODataColumn
error string
name string
stateByRooter StateByRooter
proposerCache proposerCache
columns []blocks.RODataColumn
error string
}{
{
name: "state lookup failure",
name: "Cached, matches",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex()),
},
columns: columns,
},
{
name: "Cached, does not match",
stateByRooter: nil,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsIdx(firstColumn.ProposerIndex() + 1),
},
columns: columns,
error: errSidecarUnexpectedProposer.Error(),
},
{
name: "Not cached, state lookup failure",
stateByRooter: sbrNotFound(t, firstColumn.ParentRoot()),
headStateProvider: &mockHeadStateProvider{
headRoot: []byte{0xff}, // Different from parentRoot so it won't use head
headSlot: 1000,
proposerCache: &mockProposerCache{
ProposerCB: pcReturnsNotFound(),
},
columns: columns,
error: "verifying state",
@@ -824,7 +839,8 @@ func TestDataColumnsSidecarProposerExpected(t *testing.T) {
initializer := Initializer{
shared: &sharedResources{
sr: tc.stateByRooter,
hsp: tc.headStateProvider,
pc: tc.proposerCache,
hsp: &mockHeadStateProvider{},
fc: &mockForkchoicer{
TargetRootForEpochCB: fcReturnsTargetRoot([fieldparams.RootLength]byte{}),
},

View File

@@ -0,0 +1,3 @@
### Fixed
- Prevent blocked sends to the KZG batch verifier when the caller context is already canceled, avoiding useless queueing and potential hangs.
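
A minimal sketch of the pattern behind this fix, using hypothetical names (`submitWithContext`, `queue`) rather than the actual verifier types: the send is wrapped in a `select` on the caller's context, so a caller whose context is already canceled never blocks on the channel and never enqueues useless work.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// submitWithContext sends item to the work queue, but gives up as soon as the
// caller's context is canceled or the timeout elapses instead of blocking.
func submitWithContext(ctx context.Context, queue chan<- int, item int) error {
	ctx, cancel := context.WithTimeout(ctx, time.Second)
	defer cancel()
	select {
	case queue <- item:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	queue := make(chan int) // unbuffered and unread: a plain send would block forever
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // the context is already canceled before the send
	err := submitWithContext(ctx, queue, 42)
	fmt.Println(errors.Is(err, context.Canceled)) // true: no blocked send, nothing queued
}
```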

View File

@@ -0,0 +1,3 @@
### Fixed
- Fix the missing fork version object mapping for Fulu in light client p2p.

View File

@@ -0,0 +1,3 @@
### Changed
- The `/eth/v2/beacon/pool/attestations` and `/eth/v1/beacon/pool/sync_committees` endpoints now return a 503 error if the node is still syncing. The REST API also now broadcasts immediately and saves to the pool asynchronously, matching the gRPC behavior.
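
A minimal sketch of the guard these endpoints now apply, with hypothetical names (`syncChecker`, `handleSubmit`) standing in for Prysm's actual handler types: while the node is syncing, the request is rejected with 503 before any decoding or broadcasting happens.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// syncChecker is a stand-in for the node's sync status source.
type syncChecker interface{ Syncing() bool }

type alwaysSyncing struct{}

func (alwaysSyncing) Syncing() bool { return true }

// handleSubmit rejects requests with 503 while syncing; otherwise it would
// broadcast immediately and persist to the pool asynchronously.
func handleSubmit(sc syncChecker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if sc.Syncing() {
			http.Error(w, "Beacon node is currently syncing", http.StatusServiceUnavailable)
			return
		}
		// decode the request, broadcast immediately, then save to the pool in a goroutine
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodPost, "/eth/v2/beacon/pool/attestations", nil)
	handleSubmit(alwaysSyncing{})(rec, req)
	fmt.Println(rec.Code) // 503
}
```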

View File

@@ -0,0 +1,3 @@
### Changed
- The e2e sync committee evaluator now skips the first slot after startup. The fork epoch is already skipped for these checks; this additional skip applies only at startup, because Altair always starts from epoch 0 and validators need time to warm up.

changelog/manu-agg.md
View File

@@ -0,0 +1,3 @@
### Changed
- Pending aggregates: When multiple aggregated attestations only differing by the aggregator index are in the pending queue, only process one of them.
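
The actual change compares pending aggregates with an equality check that can ignore the aggregator index; the sketch below shows the same idea with a simplified map-based key (hypothetical `aggKey`/`aggregate` types), keeping one aggregate per version, slot, committee index, and aggregation bits.

```go
package main

import "fmt"

// aggKey is a hypothetical dedup key: version, slot, committee index, and
// aggregation bits, deliberately excluding the aggregator index.
type aggKey struct {
	version, slot, committee int
	bits                     string
}

// aggregate is a simplified stand-in for a signed aggregate attestation and proof.
type aggregate struct {
	aggregatorIndex          int
	version, slot, committee int
	bits                     string
}

// dedupIgnoringAggregator keeps only the first aggregate per key, so aggregates
// that differ only by aggregator index are processed and re-broadcast once.
func dedupIgnoringAggregator(in []aggregate) []aggregate {
	seen := make(map[aggKey]bool, len(in))
	out := make([]aggregate, 0, len(in))
	for _, a := range in {
		k := aggKey{a.version, a.slot, a.committee, a.bits}
		if seen[k] {
			continue
		}
		seen[k] = true
		out = append(out, a)
	}
	return out
}

func main() {
	atts := []aggregate{
		{aggregatorIndex: 1, slot: 10, committee: 3, bits: "1111"},
		{aggregatorIndex: 2, slot: 10, committee: 3, bits: "1111"}, // same data, different aggregator
		{aggregatorIndex: 3, slot: 10, committee: 4, bits: "1111"},
	}
	fmt.Println(len(dedupIgnoringAggregator(atts))) // 2
}
```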

View File

@@ -0,0 +1,2 @@
### Changed
- `validateDataColumn`: Remove error logs.

View File

@@ -0,0 +1,7 @@
### Added
- Prometheus histogram `cells_and_proofs_from_structured_computation_milliseconds` to track computation time for cells and proofs from structured blobs.
- Prometheus histogram `get_blobs_v2_latency_milliseconds` to track RPC latency for `getBlobsV2` calls to the execution layer.
### Changed
- Run `ComputeCellsAndProofsFromFlat` in parallel to improve performance when computing cells and proofs.
- Run `ComputeCellsAndProofsFromStructured` in parallel to improve performance when computing cells and proofs.
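
The sketch below only illustrates the general fan-out shape of such a parallel computation; `computeCellsAndProofs` is a hypothetical stand-in, not the real KZG routine, and the bounded worker pool is an assumption about the approach rather than the actual implementation.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// computeCellsAndProofs stands in for a per-blob computation; each blob is
// independent of the others, so the work can be fanned out.
func computeCellsAndProofs(blob int) int { return blob * blob }

// computeAllParallel runs the per-blob work on a bounded pool of goroutines.
func computeAllParallel(blobs []int) []int {
	out := make([]int, len(blobs))
	sem := make(chan struct{}, runtime.NumCPU())
	var wg sync.WaitGroup
	for i, b := range blobs {
		wg.Add(1)
		sem <- struct{}{}
		go func(i, b int) {
			defer wg.Done()
			defer func() { <-sem }()
			out[i] = computeCellsAndProofs(b)
		}(i, b)
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(computeAllParallel([]int{1, 2, 3, 4})) // [1 4 9 16]
}
```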

View File

@@ -1,3 +0,0 @@
### Changed
- Removed proposer id cache.

View File

@@ -0,0 +1,3 @@
### Fixed
- Removed redundant justified checkpoint update in pullTips.

View File

@@ -262,6 +262,11 @@ func validatorsSyncParticipation(_ *types.EvaluationContext, conns ...*grpc.Clie
// Skip fork slot.
continue
}
// Skip slot 1 at genesis - validators need time to ramp up after chain start.
// This is a startup timing issue, not a fork transition issue.
if b.Block().Slot() == 1 {
continue
}
expectedParticipation := expectedSyncParticipation
switch slots.ToEpoch(b.Block().Slot()) {
case params.BeaconConfig().AltairForkEpoch: