1610 Commits

Author SHA1 Message Date
twoeths
86298a43e6 refactor: move reward apis to state-transition (#8719)
**Motivation**

- the reward APIs are tightly coupled to state-transition functions like
`beforeProcessEpoch()`, `processBlock()`, and `processAttestationAltair()`, so they
need to be moved there

**Description**

- move the API type definitions to the `types` package so that they can be used
everywhere
- move the reward API implementations to the `state-transition` package

Closes #8690

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2026-01-06 08:48:58 +07:00
Nico Flaig
ae3f082e01 fix: prevent duplicate aggregates passing validation due to race condition (#8716)
**Motivation**


https://github.com/ChainSafe/lodestar/pull/8711#pullrequestreview-3612431091

**Description**

Prevent duplicate aggregates from passing gossip validation due to a race
condition by checking again whether we've seen the aggregate before inserting
it into the op pool. This is required since we run multiple async operations
between the first check and inserting it into the op pool.


<img width="942" height="301" alt="image"
src="https://github.com/user-attachments/assets/2701a92e-7733-4de3-bf4a-ac853fd5c0b7"
/>

`AlreadyKnown` disappears since we now filter those out properly during
gossip validation, which is important since we don't want to re-gossip
those aggregates.
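
A minimal sketch of the re-check pattern described above; all names here (`SeenAggregatesCache`, `validateAndQueueAggregate`, etc.) are illustrative, not Lodestar's actual API:

```typescript
// Hypothetical shapes for illustration only, not Lodestar's actual types
type SignedAggregateAndProof = {aggregatorIndex: number; attDataRoot: string};

interface SeenAggregatesCache {
  has(key: string): boolean;
  add(key: string): void;
}

interface OpPool {
  insertAggregate(aggregate: SignedAggregateAndProof): void;
}

async function validateAndQueueAggregate(
  aggregate: SignedAggregateAndProof,
  seenAggregates: SeenAggregatesCache,
  opPool: OpPool,
  verifySignature: (aggregate: SignedAggregateAndProof) => Promise<boolean>
): Promise<void> {
  const key = `${aggregate.aggregatorIndex}:${aggregate.attDataRoot}`;

  // First check: drop aggregates we have already seen
  if (seenAggregates.has(key)) throw new Error("AGGREGATE_ALREADY_KNOWN");

  // Async work (eg. signature verification) runs between the first check and
  // the op pool insertion, so a duplicate may complete validation concurrently
  if (!(await verifySignature(aggregate))) throw new Error("INVALID_SIGNATURE");

  // Re-check right before marking as seen and inserting to close the race window
  if (seenAggregates.has(key)) throw new Error("AGGREGATE_ALREADY_KNOWN");
  seenAggregates.add(key);
  opPool.insertAggregate(aggregate);
}
```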
2025-12-31 10:09:54 -05:00
twoeths
b6bba4cb8c fix: simplify getBlockSignatureSets api (#8720)
**Motivation**

- we use the whole CachedBeaconStateAllForks to get all block
signatures, but it turns out we only need the validator indices of the current
SyncCommittee

**Description**

given this `getDomain` config API: 
```typescript
getDomain(domainSlot: Slot, domainType: DomainType, messageSlot?: Slot): Uint8Array
```

we currently pass `state.slot` as the 1st param. However, it's the same
as `block.slot` in `state-transition`, and in the same epoch when we verify
blocks in batch in
[beacon-node](b255111a20/packages/beacon-node/src/chain/blocks/verifyBlock.ts (L62))

- so we can just use `block.slot` instead of passing the whole
CachedBeaconStateAllForks to the `getBlockSignatureSets()` API, as sketched below
- we still have to pass in `currentSyncCommitteeIndexed` instead

part of #8650
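
A rough sketch of the simplified shape, using hypothetical types; the point is that the domain is derived from `block.slot` and only `currentSyncCommitteeIndexed` is passed in, rather than the whole cached state:

```typescript
// Hypothetical types for illustration only
type Slot = number;
type DomainType = Uint8Array;
type ValidatorIndex = number;

interface BeaconConfig {
  getDomain(domainSlot: Slot, domainType: DomainType, messageSlot?: Slot): Uint8Array;
}

type SignedBeaconBlock = {message: {slot: Slot; proposerIndex: ValidatorIndex}; signature: Uint8Array};
type SignatureSet = {validatorIndex: ValidatorIndex; signingRoot: Uint8Array; signature: Uint8Array};

const DOMAIN_BEACON_PROPOSER: DomainType = new Uint8Array([0, 0, 0, 0]);

function getBlockSignatureSets(
  config: BeaconConfig,
  block: SignedBeaconBlock,
  currentSyncCommitteeIndexed: ValidatorIndex[],
  computeSigningRoot: (domain: Uint8Array, block: SignedBeaconBlock) => Uint8Array
): SignatureSet[] {
  // block.slot is in the same epoch as state.slot during (batch) verification,
  // so it can be used directly to derive the domain
  const proposerDomain = config.getDomain(block.message.slot, DOMAIN_BEACON_PROPOSER);
  const proposerSet: SignatureSet = {
    validatorIndex: block.message.proposerIndex,
    signingRoot: computeSigningRoot(proposerDomain, block),
    signature: block.signature,
  };
  // sync aggregate / attestation sets would be built from currentSyncCommitteeIndexed
  // and the block body in the same way
  void currentSyncCommitteeIndexed;
  return [proposerSet];
}
```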

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-31 10:08:51 -05:00
twoeths
b255111a20 refactor: pass validators pubkey-index map from cli (#8707)
**Motivation**

- we will not be able to access `pubkey2index` or `index2pubkey` once we
switch to a native state-transition, so we need to be prepared for that

**Description**

- pass `pubkey2index` and `index2pubkey` from the CLI instead
- in the future, we should find a way to extract them given a
BeaconState so that we don't have to depend on any implementations of
BeaconStateView, see
https://github.com/ChainSafe/lodestar/issues/8706#issue-3741320691

Closes #8652

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-22 13:16:12 +07:00
Nico Flaig
84b481ddb5 chore: restore code required to perform sync through bellatrix (#8700)
**Motivation**

As noted in
https://github.com/ChainSafe/lodestar/pull/8680#discussion_r2624026653
we cannot sync through bellatrix anymore. While I don't think it's a big
deal, it's simple enough to keep that functionality as the code is
pretty isolated and won't get in our way during refactors, and with gloas it
won't be part of the block processing pipeline anymore due to
block/payload separation.


**Description**

Restore code required to perform sync through bellatrix
- re-added `isExecutionEnabled()` and `isMergeTransitionComplete()`
checks during block processing
- enabled some spec tests again that were previously skipped
- mostly copied original code removed in
[#8680](https://github.com/ChainSafe/lodestar/pull/8680) but cleaned up
some comments and simplified a bit
2025-12-18 18:00:34 -05:00
twoeths
c151a164f2 chore: use config from beacon chain (#8703)
**Motivation**

- in preparation for the lodestar-z integration, we should not access the
config from any cached BeaconState

**Description**

- use chain.config instead

part of #8652

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-18 09:25:53 +07:00
Nico Flaig
f4236afdba chore: delete unused eth1 data from existing databases (#8696)
Suggestion from
https://github.com/ChainSafe/lodestar/pull/8692#pullrequestreview-3580953573
2025-12-17 12:32:41 +01:00
Nico Flaig
aceb5b7416 chore: remove eth1 related code (#8692)
**Motivation**

All networks are post-electra now and the transition period is completed,
which means that due to [EIP-6110](https://eips.ethereum.org/EIPS/eip-6110)
we no longer need to process deposits via the eth1 bridge, as those are now
processed by the execution layer.

This code is effectively tech debt, no longer exercised and just gets in
the way when doing refactors.

**Description**

Removes all code related to the eth1 bridge mechanism for including new
deposits

- removed all eth1-related code; we can no longer produce blocks with
deposits pre-electra (syncing blocks still works)
- building a genesis state from eth1 is no longer supported (only for
testing)
- removed various db repositories related to deposits/eth1 data
- removed various `lodestar_eth1_*` metrics and dashboard panels
- deprecated all `--eth1.*` flags (but kept for backward compatibility)
- moved shared utility functions from eth1 to execution engine module

Closes https://github.com/ChainSafe/lodestar/issues/7682
Closes https://github.com/ChainSafe/lodestar/issues/8654
2025-12-17 15:45:02 +07:00
twoeths
d4a47659a5 feat: transfer pending gossipsub message msg data (#8689)
**Motivation**

- improve memory usage by transferring gossipsub message data from the network
thread to the main thread
- In snappy decompression in #8647 we had to do `Buffer.alloc()` instead
of `Buffer.allocUnsafe()`. We don't have to feel bad about that because
`Buffer.allocUnsafe()` does not work with this PR, and we don't waste
any memory.

**Description**

- use the `transferList` param when posting messages from the network thread to
the main thread (see the sketch below)

part of #8629
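
A minimal sketch of the underlying Node.js `worker_threads` mechanism (not Lodestar's actual network-thread code): passing the backing `ArrayBuffer` in the transfer list moves ownership to the receiving thread instead of copying it.

```typescript
import {parentPort} from "node:worker_threads";

// Runs inside the network worker thread
function postGossipMessageToMain(msgData: Uint8Array): void {
  // Transferring the buffer detaches it on this side instead of structured-cloning it.
  // This is also why the data must not live in the shared Buffer.allocUnsafe() pool:
  // transferring a pooled slab would detach memory used by unrelated buffers.
  parentPort?.postMessage({type: "gossipMessage", data: msgData}, [msgData.buffer as ArrayBuffer]);
}
```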

**Testing**
I've tested this on `feat2` for 3 days; the previous branch was #8671, so
it's basically the current stable. I don't see a significant improvement,
but there is some good data for different nodes:
- no change on 1k or `novc`
- on the hoodi `sas` node we have better memory on the main thread with the
same mesh peers, and the same memory on the network thread

<img width="851" height="511" alt="Screenshot 2025-12-12 at 11 05 27"
src="https://github.com/user-attachments/assets/8d7b2c2f-8213-4f89-87e0-437d016bc24a"
/>

- on the mainnet `sas` node, we have better memory on the network thread, a
little bit worse on the main thread
<img width="854" height="504" alt="Screenshot 2025-12-12 at 11 08 42"
src="https://github.com/user-attachments/assets/7e638149-2dbe-4c7e-849c-ef78f6ff4d6f"
/>

- but for this mainnet node, the most interesting metric is `forward msg
avg peers`, where we're faster than the majority of them

<img width="1378" height="379" alt="Screenshot 2025-12-12 at 11 11 00"
src="https://github.com/user-attachments/assets/3ba5eeaa-5a11-4cad-adfa-1e0f68a81f16"
/>

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-16 08:47:13 -05:00
Phil Ngo
3bf4734ba9 chore: merge v1.38.0 stable back to unstable (#8694) 2025-12-15 11:08:40 -05:00
twoeths
ebc352f211 chore: use pubkey2index from BeaconChain (#8691)
**Motivation**

- in preparation for the lodestar-z integration, we should not access
`pubkey2index` from CachedBeaconState

**Description**

- use `pubkey2index` from BeaconChain instead

part of #8652

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-12 11:29:41 -05:00
Nico Flaig
889b1c4475 chore: remove merge transition code (#8680)
**Motivation**

All networks have completed the merge transition and most execution
clients no longer support pre-merge, so it's not even possible anymore to
run a network from a genesis before bellatrix, unless you keep it to
phase0/altair only, which still works after this PR is merged.

This code is effectively tech debt, no longer exercised and just gets in
the way when doing refactors.

**Description**

Removes all code related to performing the merge transition. Running the
node pre-merge (CL only mode) is still possible and syncing still works.
Also removed a few CLI flags we added specifically for the merge; those
shouldn't be used anymore. Spec constants like
`TERMINAL_TOTAL_DIFFICULTY` are kept for spec compliance, as are SSZ types
(like `PowBlock`). I had to disable a few spec tests related to
handling the merge block since those code paths are removed.

Closes https://github.com/ChainSafe/lodestar/issues/8661
2025-12-12 10:18:23 +07:00
twoeths
1ddbe5d870 chore: use index2pubkey of BeaconChain (#8674)
**Motivation**

- once we have `state-transition-z`, we won't be able to get
`index2pubkey` from a light view of BeaconState in beacon-node

**Description**

- in `beacon-node`, use `index2pubkey` of BeaconChain instead, in
preparation for working with `state-transition-z`
- it's ok to use `state.epochCtx.index2pubkey` in `state-transition`
since it can access the full state there

part of #8652

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-11 14:19:08 -05:00
philknows
62d3e49f28 chore: bump package versions to 1.38.0 2025-12-10 11:44:00 -05:00
Cayman
688d5584ea feat: use snappy-wasm (#6483) (#8647)
**Motivation**
- #6483 is back 😂 
- Benchmarks show `snappy-wasm` is faster at compressing and
uncompressing than `snappyjs`.

**Description**

- Use `snappy-wasm` for compressing / uncompressing gossip payloads
- Add more `snappy` vs `snappyjs` vs `snappy-wasm` benchmarks

**TODO**
- [x] deploy this branch on our test fleet - deployed on feat3

```
  network / gossip / snappy
    compress
      ✔ 100 bytes - compress - snappyjs                                     335566.9 ops/s    2.980032 us/op        -        685 runs   2.54 s
      ✔ 100 bytes - compress - snappy                                       388610.3 ops/s    2.573272 us/op        -        870 runs   2.74 s
      ✔ 100 bytes - compress - snappy-wasm                                  583254.0 ops/s    1.714519 us/op        -        476 runs   1.32 s
      ✔ 100 bytes - compress - snappy-wasm - prealloc                        1586695 ops/s    630.2410 ns/op        -        481 runs  0.804 s
      ✔ 200 bytes - compress - snappyjs                                     298272.8 ops/s    3.352636 us/op        -        213 runs   1.22 s
      ✔ 200 bytes - compress - snappy                                       419528.0 ops/s    2.383631 us/op        -        926 runs   2.71 s
      ✔ 200 bytes - compress - snappy-wasm                                  472468.5 ops/s    2.116543 us/op        -        577 runs   1.72 s
      ✔ 200 bytes - compress - snappy-wasm - prealloc                        1430445 ops/s    699.0830 ns/op        -        868 runs   1.11 s
      ✔ 300 bytes - compress - snappyjs                                     265124.9 ops/s    3.771807 us/op        -        137 runs   1.02 s
      ✔ 300 bytes - compress - snappy                                       361683.9 ops/s    2.764845 us/op        -       1332 runs   4.18 s
      ✔ 300 bytes - compress - snappy-wasm                                  443688.4 ops/s    2.253834 us/op        -        859 runs   2.44 s
      ✔ 300 bytes - compress - snappy-wasm - prealloc                        1213825 ops/s    823.8420 ns/op        -        370 runs  0.807 s
      ✔ 400 bytes - compress - snappyjs                                     262168.5 ops/s    3.814341 us/op        -        358 runs   1.87 s
      ✔ 400 bytes - compress - snappy                                       382494.9 ops/s    2.614414 us/op        -       1562 runs   4.58 s
      ✔ 400 bytes - compress - snappy-wasm                                  406373.2 ops/s    2.460792 us/op        -        797 runs   2.46 s
      ✔ 400 bytes - compress - snappy-wasm - prealloc                        1111715 ops/s    899.5110 ns/op        -        450 runs  0.906 s
      ✔ 500 bytes - compress - snappyjs                                     229213.1 ops/s    4.362753 us/op        -        359 runs   2.07 s
      ✔ 500 bytes - compress - snappy                                       373695.8 ops/s    2.675973 us/op        -       2050 runs   5.99 s
      ✔ 500 bytes - compress - snappy-wasm                                  714917.4 ops/s    1.398763 us/op        -        960 runs   1.84 s
      ✔ 500 bytes - compress - snappy-wasm - prealloc                        1054619 ops/s    948.2100 ns/op        -        427 runs  0.907 s
      ✔ 1000 bytes - compress - snappyjs                                    148702.3 ops/s    6.724847 us/op        -        171 runs   1.65 s
      ✔ 1000 bytes - compress - snappy                                      423688.1 ops/s    2.360227 us/op        -        525 runs   1.74 s
      ✔ 1000 bytes - compress - snappy-wasm                                 524350.6 ops/s    1.907121 us/op        -        273 runs   1.03 s
      ✔ 1000 bytes - compress - snappy-wasm - prealloc                      685191.5 ops/s    1.459446 us/op        -        349 runs   1.01 s
      ✔ 10000 bytes - compress - snappyjs                                   21716.92 ops/s    46.04704 us/op        -         16 runs   1.24 s
      ✔ 10000 bytes - compress - snappy                                     98051.32 ops/s    10.19874 us/op        -        184 runs   2.39 s
      ✔ 10000 bytes - compress - snappy-wasm                                114681.8 ops/s    8.719783 us/op        -         49 runs  0.937 s
      ✔ 10000 bytes - compress - snappy-wasm - prealloc                     111203.6 ops/s    8.992518 us/op        -         49 runs  0.953 s
      ✔ 100000 bytes - compress - snappyjs                                  2947.313 ops/s    339.2921 us/op        -         12 runs   4.74 s
      ✔ 100000 bytes - compress - snappy                                    14963.78 ops/s    66.82801 us/op        -         70 runs   5.19 s
      ✔ 100000 bytes - compress - snappy-wasm                               19868.33 ops/s    50.33136 us/op        -         14 runs   1.21 s
      ✔ 100000 bytes - compress - snappy-wasm - prealloc                    24579.34 ops/s    40.68457 us/op        -         13 runs   1.06 s
    uncompress
      ✔ 100 bytes - uncompress - snappyjs                                   589201.6 ops/s    1.697212 us/op        -        242 runs  0.911 s
      ✔ 100 bytes - uncompress - snappy                                     537424.1 ops/s    1.860728 us/op        -        220 runs  0.910 s
      ✔ 100 bytes - uncompress - snappy-wasm                                634966.2 ops/s    1.574887 us/op        -        194 runs  0.808 s
      ✔ 100 bytes - uncompress - snappy-wasm - prealloc                      1846964 ops/s    541.4290 ns/op        -        559 runs  0.804 s
      ✔ 200 bytes - uncompress - snappyjs                                   395141.8 ops/s    2.530737 us/op        -        281 runs   1.22 s
      ✔ 200 bytes - uncompress - snappy                                     536862.6 ops/s    1.862674 us/op        -        274 runs   1.01 s
      ✔ 200 bytes - uncompress - snappy-wasm                                420251.6 ops/s    2.379527 us/op        -        129 runs  0.810 s
      ✔ 200 bytes - uncompress - snappy-wasm - prealloc                      1746167 ops/s    572.6830 ns/op        -        529 runs  0.804 s
      ✔ 300 bytes - uncompress - snappyjs                                   441676.2 ops/s    2.264102 us/op        -        898 runs   2.53 s
      ✔ 300 bytes - uncompress - snappy                                     551313.2 ops/s    1.813851 us/op        -        336 runs   1.11 s
      ✔ 300 bytes - uncompress - snappy-wasm                                494773.0 ops/s    2.021129 us/op        -        203 runs  0.912 s
      ✔ 300 bytes - uncompress - snappy-wasm - prealloc                      1528680 ops/s    654.1590 ns/op        -        465 runs  0.805 s
      ✔ 400 bytes - uncompress - snappyjs                                   383746.1 ops/s    2.605890 us/op        -        235 runs   1.11 s
      ✔ 400 bytes - uncompress - snappy                                     515986.6 ops/s    1.938035 us/op        -        158 runs  0.809 s
      ✔ 400 bytes - uncompress - snappy-wasm                                392947.8 ops/s    2.544867 us/op        -        322 runs   1.32 s
      ✔ 400 bytes - uncompress - snappy-wasm - prealloc                      1425978 ops/s    701.2730 ns/op        -        721 runs   1.01 s
      ✔ 500 bytes - uncompress - snappyjs                                   330727.5 ops/s    3.023637 us/op        -        173 runs   1.02 s
      ✔ 500 bytes - uncompress - snappy                                     513874.1 ops/s    1.946002 us/op        -        157 runs  0.806 s
      ✔ 500 bytes - uncompress - snappy-wasm                                389263.0 ops/s    2.568957 us/op        -        161 runs  0.914 s
      ✔ 500 bytes - uncompress - snappy-wasm - prealloc                      1330936 ops/s    751.3510 ns/op        -        672 runs   1.01 s
      ✔ 1000 bytes - uncompress - snappyjs                                  241393.9 ops/s    4.142606 us/op        -        126 runs   1.03 s
      ✔ 1000 bytes - uncompress - snappy                                    491119.6 ops/s    2.036164 us/op        -        201 runs  0.911 s
      ✔ 1000 bytes - uncompress - snappy-wasm                               361794.5 ops/s    2.764000 us/op        -        148 runs  0.910 s
      ✔ 1000 bytes - uncompress - snappy-wasm - prealloc                    959026.5 ops/s    1.042724 us/op        -        390 runs  0.909 s
      ✔ 10000 bytes - uncompress - snappyjs                                 40519.03 ops/s    24.67976 us/op        -         16 runs  0.913 s
      ✔ 10000 bytes - uncompress - snappy                                   202537.6 ops/s    4.937355 us/op        -        796 runs   4.43 s
      ✔ 10000 bytes - uncompress - snappy-wasm                              165017.6 ops/s    6.059960 us/op        -         52 runs  0.822 s
      ✔ 10000 bytes - uncompress - snappy-wasm - prealloc                   175061.5 ops/s    5.712277 us/op        -        130 runs   1.25 s
      ✔ 100000 bytes - uncompress - snappyjs                                4030.391 ops/s    248.1149 us/op        -         12 runs   3.71 s
      ✔ 100000 bytes - uncompress - snappy                                  35459.43 ops/s    28.20124 us/op        -         41 runs   1.67 s
      ✔ 100000 bytes - uncompress - snappy-wasm                             22449.16 ops/s    44.54509 us/op        -         13 runs   1.11 s
      ✔ 100000 bytes - uncompress - snappy-wasm - prealloc                  27169.50 ops/s    36.80598 us/op        -         13 runs  0.997 s
```

Closes #4170

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
Co-authored-by: twoeths <10568965+twoeths@users.noreply.github.com>
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-08 23:20:08 -05:00
NC
1f2a3a4524 feat: implement epbs state transition (#8507)
Implement the epbs state transition function.

Passes all operations, epoch_transition, and rewards spec tests on v1.6.1

Part of #8439

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-12-04 08:35:47 -08:00
Cayman
362bd5ea5d feat: support and test node 24 (#8645)
**Motivation**

- Support the latest LTS

**Description**

- support node 24

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-12-03 13:32:11 -05:00
Nico Flaig
6938ce2049 fix: don't try to serve states for future slots (#8665)
**Motivation**

When requesting a future slot, the node tries to dial the state from head,
which makes it quite easy to DoS the node as it's an unbounded amount of
work if the slot is very far away from head.

We should not allow requesting states that are in the future (> clock
slot) and should return a 404 instead.

**Description**

In case the state is requested by slot, check if it's a slot from the future
based on the clock slot and return a 404 state not found error.

I didn't use `forkChoice.getHead().slot` because we should still be able
to serve the state if all slots between the requested slot and the head
slot are skipped.
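
A sketch of the guard under these assumptions (names are illustrative, not the actual route handler):

```typescript
type Slot = number;

class StateNotFoundError extends Error {
  readonly statusCode = 404;
}

function assertStateSlotServable(requestedSlot: Slot, clockSlot: Slot): void {
  // Only reject slots strictly in the future; a requested slot <= clock slot can
  // still be served from the head state if all intermediate slots are skipped
  if (requestedSlot > clockSlot) {
    throw new StateNotFoundError(`No state found for future slot ${requestedSlot} (clock slot ${clockSlot})`);
  }
}
```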

Related [discord
discussion](https://discord.com/channels/593655374469660673/1387128551962050751/1445514034592878755),
thanks to @guha-rahul for catching and reporting this.
2025-12-03 09:59:02 -05:00
twoeths
1ad9c40143 chore: improve benchmark (#8664)
**Motivation**

- it takes a lot of time to run the benchmarks, and many of them do not make
sense
- seeing OOM with NodeJS 24, see
https://github.com/ChainSafe/lodestar/pull/8645#issuecomment-3601327203

**Description**

- remove benchmarks for flows that are not used in prod
- remove the "minMs" option from some tests where it causes long run times
- remove tests that do not reflect the bottleneck of lodestar's
performance as of Dec 2025
- remove tests that are not part of lodestar code; they are only meaningful in
the scope of their original PR

this is based on the long-running tests I found in
https://github.com/ChainSafe/lodestar/actions/runs/19874295397/job/56957698411

```
packages/beacon-node/test/perf/chain/validation/attestation.test.ts
  validate gossip attestation
    ✔ batch validate gossip attestation - vc 640000 - chunk 32            8931.657 ops/s    111.9613 us/op   x0.918       7814 runs   30.0 s
    ✔ batch validate gossip attestation - vc 640000 - chunk 64            9972.473 ops/s    100.2760 us/op   x0.926       4321 runs   30.1 s
    ✔ batch validate gossip attestation - vc 640000 - chunk 128           10569.62 ops/s    94.61075 us/op   x0.921       2268 runs   30.0 s
    ✔ batch validate gossip attestation - vc 640000 - chunk 256           10069.74 ops/s    99.30746 us/op   x0.901       1154 runs   30.1 s

packages/fork-choice/test/perf/protoArray/computeDeltas.test.ts
  computeDeltas
    ✔ computeDeltas 1400000 validators 0% inactive                        73.51301 ops/s    13.60303 ms/op   x0.986        539 runs   10.0 s
    ✔ computeDeltas 1400000 validators 10% inactive                       78.91095 ops/s    12.67251 ms/op   x0.989        556 runs   10.0 s
    ✔ computeDeltas 1400000 validators 20% inactive                       86.73608 ops/s    11.52923 ms/op   x1.001        598 runs   10.0 s
    ✔ computeDeltas 1400000 validators 50% inactive                       114.8443 ops/s    8.707439 ms/op   x0.990        799 runs   10.0 s
    ✔ computeDeltas 2100000 validators 0% inactive                        48.69939 ops/s    20.53414 ms/op   x0.996        371 runs   10.0 s
    ✔ computeDeltas 2100000 validators 10% inactive                       53.13929 ops/s    18.81847 ms/op   x1.000        371 runs   10.0 s
    ✔ computeDeltas 2100000 validators 20% inactive                       60.11017 ops/s    16.63612 ms/op   x0.978        418 runs   10.0 s
    ✔ computeDeltas 2100000 validators 50% inactive                       79.46802 ops/s    12.58368 ms/op   x0.967        552 runs   10.0 s

packages/state-transition/test/perf/util/loadState/findModifiedValidators.test.ts
  find modified validators by different ways
    serialize validators then findModifiedValidators
      ✔ findModifiedValidators - 10000 modified validators                  1.382729 ops/s    723.2076 ms/op   x0.993         10 runs   9.21 s
      ✔ findModifiedValidators - 1000 modified validators                   1.298120 ops/s    770.3450 ms/op   x1.152         10 runs   8.68 s
      ✔ findModifiedValidators - 100 modified validators                    3.535168 ops/s    282.8720 ms/op   x1.329         10 runs   3.85 s
      ✔ findModifiedValidators - 10 modified validators                     4.648368 ops/s    215.1293 ms/op   x1.548         10 runs   3.13 s
      ✔ findModifiedValidators - 1 modified validators                      5.296754 ops/s    188.7949 ms/op   x1.187         10 runs   3.10 s
      ✔ findModifiedValidators - no difference                              3.873496 ops/s    258.1647 ms/op   x1.236         12 runs   3.88 s
    deserialize validators then compare validator ViewDUs
      ✔ compare ViewDUs                                                    0.1524038 ops/s    6.561514  s/op   x1.077          9 runs   65.7 s
    serialize each validator then compare Uin8Array
      ✔ compare each validator Uint8Array                                  0.8007866 ops/s    1.248772  s/op   x0.830         10 runs   13.7 s
    compare validator ViewDU to Uint8Array
      ✔ compare ViewDU to Uint8Array                                       0.9549799 ops/s    1.047143  s/op   x0.999         10 runs   11.5 s

packages/state-transition/test/perf/util/loadState/loadState.test.ts
  loadState
    ✔ migrate state 1000000 validators, 24 modified, 0 new               0.9790753 ops/s    1.021372  s/op   x1.147         57 runs   60.1 s
    ✔ migrate state 1000000 validators, 1700 modified, 1000 new          0.7290797 ops/s    1.371592  s/op   x0.942         43 runs   61.1 s
    ✔ migrate state 1000000 validators, 3400 modified, 2000 new          0.6307866 ops/s    1.585322  s/op   x0.883         37 runs   60.9 s
    ✔ migrate state 1500000 validators, 24 modified, 0 new               0.9393088 ops/s    1.064613  s/op   x0.911         55 runs   60.5 s
    ✔ migrate state 1500000 validators, 1700 modified, 1000 new          0.8235204 ops/s    1.214299  s/op   x0.785         48 runs   60.2 s
    ✔ migrate state 1500000 validators, 3400 modified, 2000 new          0.6997867 ops/s    1.429007  s/op   x0.720         41 runs   60.7 s


  ✔ naive computeProposerIndex 100000 validators                        21.29210 ops/s    46.96578 ms/op   x0.591         10 runs   51.8 s

  getNextSyncCommitteeIndices electra
    ✔ naiveGetNextSyncCommitteeIndices 1000 validators                   0.1319639 ops/s    7.577831  s/op   x0.675          8 runs   66.8 s
    ✔ getNextSyncCommitteeIndices 1000 validators                         9.444554 ops/s    105.8811 ms/op   x0.753         10 runs   1.60 s
    ✔ naiveGetNextSyncCommitteeIndices 10000 validators                  0.1280431 ops/s    7.809868  s/op   x0.766          7 runs   61.8 s
    ✔ getNextSyncCommitteeIndices 10000 validators                        9.244910 ops/s    108.1676 ms/op   x0.880         10 runs   1.62 s
    ✔ naiveGetNextSyncCommitteeIndices 100000 validators                 0.1295493 ops/s    7.719071  s/op   x0.814          7 runs   61.9 s
    ✔ getNextSyncCommitteeIndices 100000 validators                       9.279165 ops/s    107.7683 ms/op   x0.751         10 runs   1.62 s

  computeShuffledIndex
    ✔ naive computeShuffledIndex 100000 validators                      0.04376956 ops/s    22.84693  s/op   x0.719          2 runs   67.8 s
    ✔ cached computeShuffledIndex 100000 validators                       1.790556 ops/s    558.4858 ms/op   x0.973         10 runs   6.16 s
    ✔ naive computeShuffledIndex 2000000 validators                    0.002243157 ops/s    445.8003  s/op   x0.922          1 runs    931 s
    ✔ cached computeShuffledIndex 2000000 validators                    0.02947726 ops/s    33.92445  s/op   x0.810          1 runs   71.3 s

packages/state-transition/test/perf/util/signingRoot.test.ts
  computeSigningRoot
    ✔ computeSigningRoot for AttestationData                              51551.61 ops/s    19.39804 us/op   x0.905        491 runs   10.0 s
    ✔ hash AttestationData serialized data then Buffer.toString(base64    639269.7 ops/s    1.564285 us/op   x0.977       5818 runs   10.0 s
    ✔ toHexString serialized data                                         886487.9 ops/s    1.128047 us/op   x0.926       8417 runs   10.0 s
    ✔ Buffer.toString(base64)                                              6071166 ops/s    164.7130 ns/op   x0.974      50685 runs   10.1 s
```

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-03 09:57:51 -05:00
Nico Flaig
bc1fed4d3d fix: avoid recomputing indexed attestations during block import (#8637)
**Motivation**

- https://github.com/ChainSafe/lodestar/issues/8625

**Description**

Store indexed attestations for each block during block/signature
verification to avoid recomputing them during import
- compute `indexedAttestationsByBlock` once during verification
- enhance `FullyVerifiedBlock` to include indexed attestations for block
import

Closes https://github.com/ChainSafe/lodestar/issues/8625
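
A rough sketch of the idea, with hypothetical shapes standing in for Lodestar's actual `FullyVerifiedBlock`:

```typescript
// Hypothetical shapes for illustration only
type IndexedAttestation = {attestingIndices: number[]; dataRoot: string; signature: Uint8Array};

interface FullyVerifiedBlockLike {
  block: unknown;
  // computed once while building signature sets during verification
  indexedAttestations: IndexedAttestation[];
}

function importBlock(
  fullyVerified: FullyVerifiedBlockLike,
  onAttestation: (attestation: IndexedAttestation) => void
): void {
  // Reuse the indexed attestations instead of re-resolving committees per attestation
  for (const attestation of fullyVerified.indexedAttestations) {
    onAttestation(attestation);
  }
}
```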
2025-12-03 14:49:20 +01:00
Phil Ngo
68e0c78624 chore: merge v1.37.0 stable back to unstable (#8643) 2025-12-01 10:20:10 -05:00
twoeths
0c3b3f119c feat: track DataTransform metrics (#8639)
**Motivation**

- we usually have to uncompress more messages than needed, so it's good
to track this in DataTransform

**Description**

- track `compress` and `uncompress` times by topic type
- in this instance, this node only subscribes to 8 column subnets but we
usually have to uncompress 9 or up to 10/11 DataColumnSidecars per slot;
they will likely turn out to be duplicates in the end. See also
https://github.com/ChainSafe/js-libp2p-gossipsub/pull/536

part of #8629
<img width="1046" height="485" alt="Screenshot 2025-11-28 at 17 02 42"
src="https://github.com/user-attachments/assets/df190b29-6681-48de-a04e-cd79ab82858d"
/>

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-12-01 09:14:28 +07:00
philknows
2303b06a91 chore: bump package versions to 1.37.0 2025-11-28 10:17:20 -05:00
Nazar Hussain
c68bfb2ae3 refactor: introduce safe-block to fork-choice (#8618)
**Motivation**

There is a concept documented in the specs called
[safe-block](https://github.com/ethereum/consensus-specs/blob/master/fork_choice/safe-block.md).
We wanted to introduce that concept in our codebase so the upcoming
`fcr` feature requires less invasive changes.

**Description**

- Expose functions `getSafeBeaconBlockRoot` and
`getSafeExecutionBlockHash` from `fork-choice` package.
- Update the usage of `safeBlock` to use those functions

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-11-28 15:48:06 +01:00
Nico Flaig
2724822372 fix: compare signature bytes in proposer signature cache check (#8636)
**Motivation**

https://github.com/ChainSafe/lodestar/pull/8620#discussion_r2567993594

**Description**

Compare signature bytes in proposer signature cache check
2025-11-28 02:57:13 +01:00
Cayman
c2cf1aac27 feat: cache serialized data column sidecars (#8627)
**Motivation**

- https://github.com/ChainSafe/lodestar/issues/8624

**Description**

- Cache serialized data column sidecars (from gossip and reqresp)
- Use serialized data column sidecars (if available) when persisting to
the db (for columns from the engine, ~10 per slot, they will not be available,
so those will still be re-serialized)
2025-11-22 14:23:20 -05:00
twoeths
3e80b7391e fix: verify proposer signatures once per slot (#8620)
**Motivation**

- I found that we verify proposer signatures multiple times per slot. On
hoodi it takes 20ms to 40ms; if we receive all DataColumnSidecars via
gossip, that adds up to a lot of time

<img width="1594" height="294" alt="Screenshot 2025-11-20 at 15 15 01"
src="https://github.com/user-attachments/assets/6797bb8b-e4a6-4b10-a939-30fc45658f45"
/>

proposer signatures are verified when we receive a gossip block,
BlobSidecar, or DataColumnSidecar

**Description**
- enhance `SeenBlockInput` with a map to cache the verified proposer
signature by slot + root hex (sketched below)
- verify the Block/Blob/DataColumnSidecar proposer signature on the main thread
and cache it. It takes ~30ms to do that, and we only have to do it
once per slot

part of #8619
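
A sketch of the cache shape (hypothetical, not `SeenBlockInput` itself): the verification callback runs at most once per slot + block root, and subsequent Blob/DataColumnSidecar validations reuse the cached result.

```typescript
type Slot = number;
type RootHex = string;

class SeenProposerSignatureCache {
  private readonly bySlotRoot = new Map<string, boolean>();

  async isValidProposerSignature(
    slot: Slot,
    blockRootHex: RootHex,
    verify: () => Promise<boolean>
  ): Promise<boolean> {
    const key = `${slot}:${blockRootHex}`;
    const cached = this.bySlotRoot.get(key);
    if (cached !== undefined) return cached;

    // ~30ms of signature verification, done once per slot + root instead of
    // once per gossip block / BlobSidecar / DataColumnSidecar
    const valid = await verify();
    this.bySlotRoot.set(key, valid);
    return valid;
  }

  pruneBeforeSlot(slot: Slot): void {
    for (const key of this.bySlotRoot.keys()) {
      if (Number(key.split(":")[0]) < slot) this.bySlotRoot.delete(key);
    }
  }
}
```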

**Testing**
- [x] deployed to feat4
- [x] monitor result

---------

Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
2025-11-22 07:39:57 -05:00
Nazar Hussain
ada2b2b0ea chore: remove bun:ffi usage across all packages (#8613)
**Motivation**

After discussion we decided to stick with `napi` bindings instead of
`bun:ffi` to reduce the risk of vendor lock-in.

**Description**

- Remove all usages of `@lodestar/bun`
- Remove all conditional `imports` for packages related to `bun:ffi`

**Steps to test or reproduce**

- Run all tests
2025-11-13 12:16:08 +01:00
NC
d048e9aee4 feat: enable gloas spec tests (#8609)
Enable gloas spec tests and skip any non-ssz_static gloas tests.

Also skipping `ForkChoiceNode` because it is not necessary. See
https://discord.com/channels/595666850260713488/874767108809031740/1420966514709889084

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-11-06 14:40:13 -08:00
Nazar Hussain
4131637eb3 test: improve type check performance (#8611)
**Motivation**

Improve type check performance. 

**Description**

- Improve the type check performance, which had degraded in recent
changes: from `100s` to `8s` for the `beacon-node` package.

```
time yarn check-types
yarn run v1.22.22
$ tsc
  Done in 105.85s.
yarn check-types  108.35s user 2.59s system 104% cpu 1:46.08 total
```

vs

```
time yarn check-types
yarn run v1.22.22
$ tsc
  Done in 8.46s.
yarn check-types  14.60s user 0.86s system 178% cpu 8.661 total
```

**Steps to test or reproduce**

- Run all jobs
2025-11-06 18:07:36 +01:00
Nazar Hussain
f0ce024c1a test: update to vitest 4 to use builtin bun support (#8599)
**Motivation**

Update vitest to avoid using a third-party test pool. 

**Description**

- Use latest vitest
- Remove the custom process pool which we developed to run our tests in the Bun
runtime
- Migrate test configs to the latest version
- Update types
- Switch from webdriverio to playwright for browser test performance,
which was long overdue.


**Steps to test or reproduce**

- Run all tests
2025-11-06 10:43:41 -05:00
Nico Flaig
983ef10850 feat: add proposer duties v2 endpoint (#8597)
Adds the proposer duties v2 endpoint, which works the same as v1 but uses the
previous epoch to determine the dependent root, to account for deterministic
proposer lookahead changes in fulu.

https://github.com/ethereum/beacon-APIs/pull/563
2025-11-04 21:33:18 +00:00
philknows
6eb05a083a chore: bump package versions to 1.36.0 2025-11-04 12:28:29 -05:00
Matthew Keil
801b1f4f52 feat: log agent, version and peerid for batched gossip errors (#8604)
**Motivation**

Logs client metadata for batch-processed gossip errors
2025-11-04 09:51:03 -05:00
Nico Flaig
68f0ed9071 chore: include block slot in proposal signature errors (#8603)
It would be great to know which slot this block was for
2025-11-04 09:19:12 -05:00
Matthew Keil
f3703b7882 feat: signature verification for reqresp DA (#8580)
**Motivation**

The spec will be updated to include a check of signatures received via reqresp.
This is a proactive fix in line with Lighthouse and Prysm

https://github.com/sigp/lighthouse/issues/7650
2025-11-03 21:36:47 +00:00
Matthew Keil
6832b029e7 feat: log clientAgent and clientVersion for gossip errors (#8601)
**Motivation**
 
When gossip errors occur, it will be helpful to see the client and
version sending the invalid gossip message
2025-11-03 13:57:13 -05:00
Matthew Keil
322b07c0ac fix: prevent columns that arrive after block import from getting processed (#8598)
**Motivation**

A bug was found on hoodi that needs to be rectified.
1) 1st column arrives via gossip
2) trigger getBlobsV2
3) many more columns (but not all) come via gossip
4) gossip block arrives
5) reqresp triggered via block arrival
6) get remaining data via reqresp
7) process blockInput 
8) delete cached blockInput
9) remaining columns arrive via gossip and get added to a new BlockInput
10) getBlobsV2 finishes and gossips "missing" columns not found on new
BlockInput
11) reqresp gets triggered again after timeout (from second batch of
gossip columns on second BlockInput)
12) second batch of columns and second block get reqresp downloaded and
second block Input goes for processing

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-11-03 17:44:56 +00:00
Phil Ngo
698a315802 feat: increase default gas limit to 60M (#8600)
**Motivation**

Client teams have been instructed to increase default gas limits to 60M
for Fusaka.

**Description**

This will ensure that validators signal 60M by default and updates
docs/tests to work with the new 60M configuration.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-03 10:23:15 -05:00
Nico Flaig
faac5550fb chore: remove usage of prettyBytes in errors (#8589)
Closes https://github.com/ChainSafe/lodestar/issues/8566. This just
removes usage of `prettyBytes`, as it's objectively bad to do that since it
doesn't allow external lookup.
2025-11-03 08:49:28 -05:00
kevaundray
5317389489 feat: update node-eth-kzg to 0.9.1 (#8594)
**Motivation**

This includes the update for the spec changes added here:
https://github.com/ethereum/consensus-specs/pull/4519

---------

Co-authored-by: Nico Flaig <nflaig@protonmail.com>
2025-10-31 12:18:53 +00:00
NC
d8afb6dc39 chore: migrate to consensus-specs for spec tests (#8591)
[ethereum/consensus-spec-tests](https://github.com/ethereum/consensus-spec-tests)
are archived as of October 22.

We need to point to `ethereum/consensus-specs` for spec test vectors.
2025-10-31 09:46:43 +00:00
Nico Flaig
d4044b6621 feat: schedule fulu and BPOs on mainnet (#8590)
- https://github.com/ethereum/consensus-specs/pull/4689
2025-10-30 16:41:31 -07:00
Nico Flaig
aec301c48f chore: downgrade enr without tcp multiaddr log to warning (#8586)
This happens if the node has ENRs without a tcp4 or tcp6 multiaddress
field and the `--connectToDiscv5Bootnodes` flag is added. It's not really
critical so `warn` seems more appropriate than `error`.
2025-10-29 16:27:17 -04:00
Nico Flaig
13fb933e7e feat: add option to prune persisted cp states (#8582)
**Motivation**

Last change from https://github.com/ChainSafe/lodestar/pull/7501 which
we implemented because persisted checkpoint states are added each epoch
during non-finality and never pruned until the chain finalizes again. It
turns out this is not sustainable if we have multiple weeks of
non-finality since it takes up hundreds of GB of disk space and many
nodes don't have sufficient disk space to handle this.

The long-term solution is to store states more efficiently, but for now
we should at least have an option to enable pruning; there is also always
the option to clean up the `checkpoint_states` folder manually.

**Description**

This PR adds a new flag `--chain.maxCPStateEpochsOnDisk` to enable
pruning of persisted checkpoint states. By default we don't prune any
persisted checkpoint states, as it's not safe to delete them during long
non-finality: we don't know the state of the chain, and there could be
a deep (hundreds of epochs) reorg if there are two competing chains with
similar weight. In that case we wouldn't have a close enough state to pivot to
that chain and would instead require a resync from the last finalized
checkpoint state, which could be very far in the past.
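
A sketch of the pruning rule under the new flag, against a hypothetical persistence interface (not Lodestar's actual datastore API): keep only the newest `maxCPStateEpochsOnDisk` epochs.

```typescript
type Epoch = number;

interface PersistedCheckpointStates {
  listEpochs(): Promise<Epoch[]>;
  deleteEpoch(epoch: Epoch): Promise<void>;
}

async function prunePersistedCheckpointStates(
  persisted: PersistedCheckpointStates,
  maxCPStateEpochsOnDisk: number
): Promise<void> {
  const epochs = (await persisted.listEpochs()).sort((a, b) => a - b);
  // Delete the oldest epochs beyond the configured budget
  const excess = epochs.length - maxCPStateEpochsOnDisk;
  for (const epoch of epochs.slice(0, Math.max(0, excess))) {
    await persisted.deleteEpoch(epoch);
  }
}
```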


Previous PR https://github.com/ChainSafe/lodestar/pull/7510

---------

Co-authored-by: twoeths <10568965+twoeths@users.noreply.github.com>
2025-10-28 09:40:47 -04:00
Matthew Keil
a47974e6c6 feat: enforce earliest available slot for incoming reqresp (#8553)
**Motivation**

Need to enforce the earliest available slot for incoming reqresp. We handle
this for outgoing requests but the incoming implementation was missed
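
A sketch of the incoming-side check (hypothetical names): by-range requests reaching below the node's earliest available slot should be rejected before serving.

```typescript
type Slot = number;

class ResourceUnavailableError extends Error {}

function assertRangeServable(startSlot: Slot, earliestAvailableSlot: Slot): void {
  // Applies to incoming blocks/blobs/columns by-range requests; the outgoing
  // (requesting) side already enforces this
  if (startSlot < earliestAvailableSlot) {
    throw new ResourceUnavailableError(
      `Requested startSlot ${startSlot} is below earliest available slot ${earliestAvailableSlot}`
    );
  }
}
```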
2025-10-27 18:10:59 +00:00
Matthew Keil
b992c32a23 fix: refactor validateColumnsByRangeResponse (#8482)
**Motivation**

Found two small bugs while looking through devnet logs
```sh
Sep-25 13:49:19.674[sync]          verbose: Batch download error id=Head-35, startEpoch=14392, status=Downloading, peer=16...wUr9yu - Cannot read properties of undefined (reading 'signedBlockHeader')
TypeError: Cannot read properties of undefined (reading 'signedBlockHeader')
    at cacheByRangeResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:146:63)
    at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:213:20)
    at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
    at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```
and 
```sh
Sep-30 15:39:03.738[sync]          verbose: Batch download error id=Head-12, startEpoch=15436, status=Downloading, peer=16...3aErhh - Cannot read properties of undefined (reading 'message')
TypeError: Cannot read properties of undefined (reading 'message')
    at validateBlockByRangeResponse (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:432:46)
    at validateResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:304:42)
    at downloadByRange (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:206:27)
    at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:205:32)
    at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
    at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```

There is a bug in `validateColumnsByRangeResponse` that passes
through an empty array of `blockColumnSidecars` on this line
bca2040ef3/packages/beacon-node/src/sync/utils/downloadByRange.ts (L770)
which then throws in `cacheByRangeResponses`. While going through the
validation with a fine-toothed comb I noticed that there were a couple
of conditions that we are not checking per spec.

**Scope**
Changed the heuristic for how columns are validated in ByRange and added
checks for column delivery in `(slot, column_index)` order, as sketched below.
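
A sketch of the ordering check (hypothetical sidecar shape): within a ByRange response, sidecars must be strictly increasing by `(slot, column_index)`.

```typescript
type ColumnSidecarLike = {slot: number; index: number};

function assertColumnsInSlotIndexOrder(sidecars: ColumnSidecarLike[]): void {
  for (let i = 1; i < sidecars.length; i++) {
    const prev = sidecars[i - 1];
    const curr = sidecars[i];
    const strictlyIncreasing =
      curr.slot > prev.slot || (curr.slot === prev.slot && curr.index > prev.index);
    if (!strictlyIncreasing) {
      throw new Error(
        `Columns not in (slot, column_index) order: (${prev.slot}, ${prev.index}) then (${curr.slot}, ${curr.index})`
      );
    }
  }
}
```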

---------

Co-authored-by: Cayman <caymannava@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-27 18:02:17 +00:00
Nico Flaig
9fe2e4d0bd feat: add partial blob reconstruction (#8567)
**Motivation**

When retrieving blobs via `GET /eth/v1/beacon/blobs/{block_id}` it's
possible for a consumer to specify versioned hashes of the blobs, and commonly
(eg. in the case of L2s) there isn't a need to get all the blobs.
There is not a huge difference between reconstructing all or just a subset
of blobs when we have a full set of data columns (128) as a
supernode, since we don't need to do cell recovery. But in case the node
only has 64 columns (["semi
supernode"](https://github.com/ChainSafe/lodestar/pull/8568)), we need to
recover cells, which is very expensive across all blob rows; with this
change we only recover cells for the requested blobs, which can make
a huge difference (see benchmarks below).

**Description**

Add partial blob reconstruction
- update `reconstructBlobs` to only reconstruct blobs for `indices` (if
provided)
- add a new `recoverBlobCells` which is similar to
`dataColumnMatrixRecovery` but allows partial recovery
- only call `asyncRecoverCellsAndKzgProofs` for the blob rows we need to
recover (see the sketch below)
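
A conceptual sketch of the partial recovery (hypothetical helper shapes, not the actual KZG bindings): cell recovery only runs for the requested blob rows.

```typescript
type Cell = Uint8Array;
type BlobIndex = number;

async function recoverRequestedBlobRows(
  // blobIndex -> cells by column index, null where the column is missing
  rowsByBlob: Map<BlobIndex, (Cell | null)[]>,
  requestedIndices: BlobIndex[],
  recoverRow: (partialRow: (Cell | null)[]) => Promise<Cell[]>
): Promise<Map<BlobIndex, Cell[]>> {
  const recovered = new Map<BlobIndex, Cell[]>();
  for (const blobIndex of requestedIndices) {
    const partialRow = rowsByBlob.get(blobIndex);
    if (partialRow === undefined) continue;
    // Expensive KZG cell recovery is limited to the rows that were requested
    recovered.set(blobIndex, await recoverRow(partialRow));
  }
  return recovered;
}
```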

Results from running benchmark locally
```
reconstructBlobs
    Reconstruct blobs - 6 blobs
      ✔ Full columns - reconstruct all 6 blobs                              7492.994 ops/s    133.4580 us/op   x0.952       1416 runs  0.381 s
      ✔ Full columns - reconstruct half of the blobs out of 6               14825.80 ops/s    67.45000 us/op   x0.989       3062 runs  0.304 s
      ✔ Full columns - reconstruct single blob out of 6                     34335.94 ops/s    29.12400 us/op   x1.009       8595 runs  0.306 s
      ✔ Half columns - reconstruct all 6 blobs                              3.992937 ops/s    250.4422 ms/op   x0.968         10 runs   4.83 s
      ✔ Half columns - reconstruct half of the blobs out of 6               7.329427 ops/s    136.4363 ms/op   x0.980         10 runs   1.90 s
      ✔ Half columns - reconstruct single blob out of 6                     13.21413 ops/s    75.67658 ms/op   x1.003         11 runs   1.36 s
    Reconstruct blobs - 10 blobs
      ✔ Full columns - reconstruct all 10 blobs                             4763.833 ops/s    209.9150 us/op   x0.908       1324 runs  0.536 s
      ✔ Full columns - reconstruct half of the blobs out of 10              9749.439 ops/s    102.5700 us/op   x0.935       1818 runs  0.319 s
      ✔ Full columns - reconstruct single blob out of 10                    36794.47 ops/s    27.17800 us/op   x0.923       9087 runs  0.307 s
      ✔ Half columns - reconstruct all 10 blobs                             2.346124 ops/s    426.2349 ms/op   x1.033         10 runs   5.09 s
      ✔ Half columns - reconstruct half of the blobs out of 10              4.509997 ops/s    221.7296 ms/op   x1.022         10 runs   2.84 s
      ✔ Half columns - reconstruct single blob out of 10                    13.73414 ops/s    72.81126 ms/op   x0.910         11 runs   1.30 s
    Reconstruct blobs - 20 blobs
      ✔ Full columns - reconstruct all 20 blobs                             2601.524 ops/s    384.3900 us/op   x0.982        723 runs  0.727 s
      ✔ Full columns - reconstruct half of the blobs out of 20              5049.306 ops/s    198.0470 us/op   x0.961        933 runs  0.421 s
      ✔ Full columns - reconstruct single blob out of 20                    34156.51 ops/s    29.27700 us/op   x0.980       8441 runs  0.306 s
      ✔ Half columns - reconstruct all 20 blobs                             1.211887 ops/s    825.1593 ms/op   x1.010         10 runs   9.10 s
      ✔ Half columns - reconstruct half of the blobs out of 20              2.350099 ops/s    425.5140 ms/op   x0.977         10 runs   5.13 s
      ✔ Half columns - reconstruct single blob out of 20                    13.93751 ops/s    71.74882 ms/op   x0.915         11 runs   1.31 s
    Reconstruct blobs - 48 blobs
      ✔ Full columns - reconstruct all 48 blobs                             1031.150 ops/s    969.7910 us/op   x0.853        286 runs  0.805 s
      ✔ Full columns - reconstruct half of the blobs out of 48              2042.254 ops/s    489.6550 us/op   x0.933        581 runs  0.805 s
      ✔ Full columns - reconstruct single blob out of 48                    33946.64 ops/s    29.45800 us/op   x0.961       7685 runs  0.306 s
      ✔ Half columns - reconstruct all 48 blobs                            0.5274713 ops/s    1.895838  s/op   x0.940         10 runs   21.0 s
      ✔ Half columns - reconstruct half of the blobs out of 48              1.033691 ops/s    967.4067 ms/op   x0.951         10 runs   10.7 s
      ✔ Half columns - reconstruct single blob out of 48                    12.54519 ops/s    79.71183 ms/op   x1.072         11 runs   1.44 s
    Reconstruct blobs - 72 blobs
      ✔ Full columns - reconstruct all 72 blobs                             586.0658 ops/s    1.706293 ms/op   x0.985        178 runs  0.806 s
      ✔ Full columns - reconstruct half of the blobs out of 72              1390.803 ops/s    719.0090 us/op   x0.959        386 runs  0.804 s
      ✔ Full columns - reconstruct single blob out of 72                    34457.81 ops/s    29.02100 us/op   x0.995       8437 runs  0.306 s
      ✔ Half columns - reconstruct all 72 blobs                            0.3519770 ops/s    2.841095  s/op   x0.972         10 runs   31.4 s
      ✔ Half columns - reconstruct half of the blobs out of 72             0.6779473 ops/s    1.475041  s/op   x1.027         10 runs   16.2 s
      ✔ Half columns - reconstruct single blob out of 72                    13.59862 ops/s    73.53685 ms/op   x0.927         11 runs   1.38 s
```
2025-10-27 08:38:13 -04:00
Cayman
88fbac9fcf feat: use bytes from lodestar-bun (#8562)
**Motivation**

- #7280 

**Description**

- Build upon the isomorphic bytes code in the utils package
- refactor browser/nodejs selection to use conditional imports (like how
we've been handling bun / nodejs selection)
- Use Uint8Array.fromHex and toHex (mentioned in
https://github.com/ChainSafe/lodestar/pull/8275#issuecomment-3228184163)
- Refactor the bytes perf tests to include bun
- Add lodestar-bun dependency (also add missing dependency in
beacon-node package)

Results from my machine
```
  bytes utils
    ✔ nodejs block root to RootHex using toHex                             5500338 ops/s    181.8070 ns/op        -       1048 runs  0.444 s
    ✔ nodejs block root to RootHex using toRootHex                         7466866 ops/s    133.9250 ns/op        -       2189 runs  0.477 s
    ✔ nodejs fromHex(blob)                                                7001.930 ops/s    142.8178 us/op        -         10 runs   1.94 s
    ✔ nodejs fromHexInto(blob)                                            1744.298 ops/s    573.2965 us/op        -         10 runs   6.33 s
    ✔ nodejs block root to RootHex using the deprecated toHexString        1609510 ops/s    621.3070 ns/op        -        309 runs  0.704 s
    ✔ browser block root to RootHex using toHex                            1854390 ops/s    539.2610 ns/op        -        522 runs  0.807 s
    ✔ browser block root to RootHex using toRootHex                        2060543 ops/s    485.3090 ns/op        -        597 runs  0.805 s
    ✔ browser fromHex(blob)                                               1632.601 ops/s    612.5196 us/op        -         10 runs   6.77 s
    ✔ browser fromHexInto(blob)                                           1751.718 ops/s    570.8683 us/op        -         10 runs   6.36 s
    ✔ browser block root to RootHex using the deprecated toHexString       1596024 ops/s    626.5570 ns/op        -        457 runs  0.805 s
    ✔ bun block root to RootHex using toHex                            1.249563e+7 ops/s    80.02800 ns/op        -       4506 runs  0.518 s
    ✔ bun block root to RootHex using toRootHex                        1.262626e+7 ops/s    79.20000 ns/op        -       3716 runs  0.409 s
    ✔ bun fromHex(blob)                                                   26995.09 ops/s    37.04377 us/op        -         10 runs  0.899 s
    ✔ bun fromHexInto(blob)                                               31539.09 ops/s    31.70668 us/op        -         13 runs  0.914 s
    ✔ bun block root to RootHex using the deprecated toHexString       1.252944e+7 ops/s    79.81200 ns/op        -       3616 runs  0.414 s
```
2025-10-23 14:28:25 -04:00
Nico Flaig
57b1f6e666 fix: update local status fork digest on fork boundary transition (#8561)
**Motivation**

Seems like we only update our local status if we receive a new block
after the fork, and incorrectly disconnect peers due to that
```txt
info: Synced - slot: 32 - head: (slot -1) 0x355c…8bed - exec-block: valid(29 0xd623…) - finalized: 0x0000…0000:0 - peers: 3
debug: Req  received method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
verbose: Resp done method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
debug: Irrelevant peer peer=16...HcnnMH, reason=INCOMPATIBLE_FORKS ours: 0x12e80bb5 theirs: 0x450757b9
debug: initiating goodbyeAndDisconnect peer reason=Irrelevant network, peerId=16...HcnnMH
```
`ours` here is still the previous fork digest

**Description**

Update the local status fork digest on fork boundary transition. It seemed
easiest to do this in the network thread directly, as we also update
the fork digest of the ENR there.
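
A rough sketch of the idea (hypothetical names): at the fork boundary epoch, recompute the fork digest used in our local status instead of waiting for the next imported block.

```typescript
type Epoch = number;
type ForkDigestHex = string;

interface LocalStatus {
  forkDigest: ForkDigestHex;
}

function onClockEpoch(
  clockEpoch: Epoch,
  forkEpochs: Map<string, Epoch>,
  computeForkDigest: (forkName: string) => ForkDigestHex,
  localStatus: LocalStatus
): void {
  for (const [forkName, forkEpoch] of forkEpochs) {
    // At the boundary, advertise the new fork digest right away so status
    // exchanges after the fork don't report the previous digest
    if (clockEpoch === forkEpoch) {
      localStatus.forkDigest = computeForkDigest(forkName);
    }
  }
}
```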
2025-10-23 17:02:34 +01:00