**Motivation**
- the reward apis are tightly coupled to state-transition functions like
`beforeProcessEpoch()`, `processBlock()`, `processAttestationAltair()`, so they
need to be moved there
**Description**
- move api type definitions to the `types` package so they can be used
everywhere
- move the reward api implementations to the `state-transition` package
Closes #8690
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
https://github.com/ChainSafe/lodestar/pull/8711#pullrequestreview-3612431091
**Description**
Prevent duplicate aggregates from passing gossip validation due to a race
condition by checking again whether we've seen the aggregate before inserting
it into the op pool. This is required since we run multiple async operations
between the first check and inserting it into the op pool.
<img width="942" height="301" alt="image"
src="https://github.com/user-attachments/assets/2701a92e-7733-4de3-bf4a-ac853fd5c0b7"
/>
`AlreadyKnown` disappears since we now filter those out properly during
gossip validation, which is important since we don't want to re-gossip
those aggregates.
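For reference, a minimal sketch of the double-check pattern described above; the cache and op pool interfaces here are simplified stand-ins, not the actual lodestar types:
```typescript
// Simplified stand-ins for the seen cache and op pool (not the real lodestar classes)
interface SeenAggregatesCache {
  isKnown(attDataRoot: string): boolean;
  add(attDataRoot: string): void;
}
interface AggregateOpPool {
  insert(attDataRoot: string, aggregate: unknown): void;
}

async function onGossipAggregate(
  seenCache: SeenAggregatesCache,
  opPool: AggregateOpPool,
  attDataRoot: string,
  aggregate: unknown,
  verifySignature: () => Promise<boolean>
): Promise<void> {
  // first check: reject aggregates we have already seen
  if (seenCache.isKnown(attDataRoot)) {
    throw new Error("AlreadyKnown");
  }

  // async work (e.g. signature verification) happens in between, so another
  // copy of the same aggregate may pass the first check concurrently
  if (!(await verifySignature())) {
    throw new Error("InvalidSignature");
  }

  // second check right before insertion closes the race window
  if (seenCache.isKnown(attDataRoot)) {
    throw new Error("AlreadyKnown");
  }
  seenCache.add(attDataRoot);
  opPool.insert(attDataRoot, aggregate);
}
```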
**Motivation**
- we use the whole CachedBeaconStateAllForks to get all block
signatures, but it turns out we only need the validator indices of the current
SyncCommittee
**Description**
given this config api:
```typescript
getDomain(domainSlot: Slot, domainType: DomainType, messageSlot?: Slot): Uint8Array
```
we currently pass `state.slot` as the 1st param. However it's the same
as `block.slot` in `state-transition`, and in the same epoch when we verify
blocks in batch in
[beacon-node](b255111a20/packages/beacon-node/src/chain/blocks/verifyBlock.ts (L62))
- so we can just use `block.slot` instead of passing the whole
CachedBeaconStateAllForks to the `getBlockSignatureSets()` api
- we still have to pass in `currentSyncCommitteeIndexed` instead
part of #8650
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- we will not be able to access `pubkey2index` or `index2pubkey` once we
switch to a native state-transition so we need to be prepared for that
**Description**
- pass `pubkey2index`, `index2pubkey` from cli instead
- in the future, we should find a way to extract them given a
BeaconState so that we don't have to depend on any implementations of
BeaconStateView, see
https://github.com/ChainSafe/lodestar/issues/8706#issue-3741320691
Closes #8652
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
As noted in
https://github.com/ChainSafe/lodestar/pull/8680#discussion_r2624026653
we cannot sync through bellatrix anymore. While I don't think it's a big
deal, it's simple enough to keep that functionality: the code is pretty
isolated, won't get in our way during refactors, and with gloas won't be
part of the block processing pipeline anymore due to block/payload
separation.
**Description**
Restores the code required to sync through bellatrix:
- re-added `isExecutionEnabled()` and `isMergeTransitionComplete()`
checks during block processing
- enabled some spec tests again that were previously skipped
- mostly copied original code removed in
[#8680](https://github.com/ChainSafe/lodestar/pull/8680) but cleaned up
some comments and simplified a bit
**Motivation**
- as a preparation for lodestar-z integration, we should not access
config from any cached BeaconState
**Description**
- use chain.config instead
part of #8652
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
All networks are post-electra now and the transition period is complete,
which means that due to [EIP-6110](https://eips.ethereum.org/EIPS/eip-6110)
we no longer need to process deposits via the eth1 bridge as those are now
processed by the execution layer.
This code is effectively tech debt, no longer exercised, and just gets in
the way when doing refactors.
**Description**
Removes all code related to the eth1 bridge mechanism for including new
deposits
- removed all eth1 related code, we can no longer produce blocks with
deposits pre-electra (syncing blocks still works)
- building a genesis state from eth1 is no longer supported (only for
testing)
- removed various db repositories related to deposits/eth1 data
- removed various `lodestar_eth1_*` metrics and dashboard panels
- deprecated all `--eth1.*` flags (but kept for backward compatibility)
- moved shared utility functions from eth1 to execution engine module
Closes https://github.com/ChainSafe/lodestar/issues/7682
Closes https://github.com/ChainSafe/lodestar/issues/8654
**Motivation**
- improve memory by transferring gossipsub message data from the network
thread to the main thread
- for the snappy decompression in #8647 we had to do `Buffer.alloc()` instead
of `Buffer.allocUnsafe()`. We don't have to feel bad about that because
`Buffer.allocUnsafe()` does not work with this PR, and we don't waste
any memory.
**Description**
- use `transferList` param when posting messages from network thread to
the main thread
part of #8629
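For context, a minimal sketch of what the `transferList` usage looks like with Node's `worker_threads`; the message shape is illustrative, not lodestar's actual gossip message type:
```typescript
import {MessagePort} from "node:worker_threads";

// Sketch: transfer the gossip message bytes instead of copying them.
function postGossipMessage(port: MessagePort, topic: string, data: Uint8Array): void {
  // Transferring the underlying ArrayBuffer moves ownership to the main thread,
  // so the network thread must not touch `data` afterwards. The buffer must be
  // a regular, non-pooled ArrayBuffer, which is why Buffer.allocUnsafe()
  // (backed by the shared Buffer pool) does not work here.
  port.postMessage({topic, data}, [data.buffer as ArrayBuffer]);
}
```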
**Testing**
I've tested this on `feat2` for 3 days; the previous branch was #8671 so
it's basically the current stable. I don't see a significant improvement,
but there is some good data for different nodes:
- no change on 1k or `novc`
- on the hoodi `sas` node we have better memory on the main thread with the
same mesh peers, and the same memory on the network thread
<img width="851" height="511" alt="Screenshot 2025-12-12 at 11 05 27"
src="https://github.com/user-attachments/assets/8d7b2c2f-8213-4f89-87e0-437d016bc24a"
/>
- on the mainnet `sas` node, we have better memory on the network thread, a
little bit worse on the main thread
<img width="854" height="504" alt="Screenshot 2025-12-12 at 11 08 42"
src="https://github.com/user-attachments/assets/7e638149-2dbe-4c7e-849c-ef78f6ff4d6f"
/>
- but for this mainnet node, the most interesting metric is `forward msg
avg peers`: we're faster than the majority of them
<img width="1378" height="379" alt="Screenshot 2025-12-12 at 11 11 00"
src="https://github.com/user-attachments/assets/3ba5eeaa-5a11-4cad-adfa-1e0f68a81f16"
/>
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- as a preparation for lodestar-z integration, we should not access
pubkey2index from CachedBeaconState
**Description**
- use the one from BeaconChain instead
part of #8652
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
All networks have completed the merge transition and most execution
clients no longer support pre-merge, so it's not even possible anymore to
run a network from a genesis before bellatrix, unless you keep it to
phase0/altair only, which still works after this PR is merged.
This code is effectively tech debt, no longer exercised and just gets in
the way when doing refactors.
**Description**
Removes all code related to performing the merge transition. Running the
node pre-merge (CL only mode) is still possible and syncing still works.
Also removed a few CLI flags we added specifically for the merge; those
shouldn't be used anymore. Spec constants like `TERMINAL_TOTAL_DIFFICULTY`
and ssz types (like `PowBlock`) are kept for spec compliance. I had to
disable a few spec tests related to handling the merge block since those
code paths are removed.
Closes https://github.com/ChainSafe/lodestar/issues/8661
**Motivation**
- once we have `state-transition-z`, we won't be able to get
`index2pubkey` from a light view of BeaconState in beacon-node
**Description**
- in `beacon-node`, use `index2pubkey` of BeaconChain instead as a
preparation for working with `state-transition-z`
- it's ok to use `state.epochCtx.index2pubkey` in `state-transition`
since it can access the full state there
part of #8652
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
Implement epbs state transition function.
Passes all operations, epoch_transition and rewards spec tests on v1.6.1
Part of #8439
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
When requesting a state at a future slot, the node tries to dial the state
forward from head, which makes it quite easy to DoS the node as it's an
unbounded amount of work if the slot is very far ahead of head.
We should not allow requesting states that are in the future (> clock
slot) and should return a 404 instead.
**Description**
In case the state is requested by slot, check if it's a slot in the future
based on the clock slot and return a 404 state not found error.
I didn't use `forkChoice.getHead().slot` because we should still be able
to serve the state if all slots between the requested slot and the head
slot are skipped.
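A minimal sketch of the guard, assuming a simplified API surface (the thrown error here stands in for the actual 404 response):
```typescript
type Slot = number;

// Reject by-slot state requests beyond the current clock slot instead of
// dialing the head state forward an unbounded number of slots.
function assertRequestedSlotNotInFuture(requestedSlot: Slot, currentClockSlot: Slot): void {
  if (requestedSlot > currentClockSlot) {
    // surfaced to the API layer as a 404 "state not found" response
    throw new Error(`State not found for slot ${requestedSlot} (current slot ${currentClockSlot})`);
  }
  // slots <= clock slot are still served, even if every slot between the
  // requested slot and head is skipped
}
```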
Related [discord
discussion](https://discord.com/channels/593655374469660673/1387128551962050751/1445514034592878755),
thanks to @guha-rahul for catching and reporting this.
**Motivation**
- https://github.com/ChainSafe/lodestar/issues/8625
**Description**
Store indexed attestations for each block during block/signature
verification to avoid recomputing them during import
- compute `indexedAttestationsByBlock` once during verification
- enhance `FullyVerifiedBlock` to include indexed attestations for block
import
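A rough sketch of the shape change, with approximate names rather than the actual lodestar types:
```typescript
// Indexed attestations computed once during verification travel with the
// verified block so import does not recompute them.
type IndexedAttestationLike = {attestingIndices: number[]};

interface FullyVerifiedBlockSketch {
  block: unknown;
  // one entry per attestation in the block, computed during signature verification
  indexedAttestations: IndexedAttestationLike[];
}

function importBlock(
  verified: FullyVerifiedBlockSketch,
  onAttestation: (att: IndexedAttestationLike) => void
): void {
  // reuse the cached indexed attestations instead of recomputing committees
  for (const att of verified.indexedAttestations) {
    onAttestation(att);
  }
}
```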
Closes https://github.com/ChainSafe/lodestar/issues/8625
**Motivation**
There is a concept documented in the specs called
[safe-block](https://github.com/ethereum/consensus-specs/blob/master/fork_choice/safe-block.md).
Wanted to introduce that concept in our codebase so the upcoming `fcr`
feature requires less invasive changes.
**Description**
- Expose functions `getSafeBeaconBlockRoot` and
`getSafeExecutionBlockHash` from `fork-choice` package.
- Update the usage of `safeBlock` to use those functions
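For reference, a sketch following the consensus-specs safe-block definition; the fork-choice store shape here is simplified, not the exported lodestar interface:
```typescript
type RootHex = string;

interface ForkChoiceStoreLike {
  justifiedCheckpoint: {rootHex: RootHex};
  getBlockHex(rootHex: RootHex): {executionPayloadBlockHash: RootHex | null} | null;
}

const ZERO_HASH_HEX = "0x" + "00".repeat(32);

// Per the spec, the safe beacon block is the justified checkpoint block
function getSafeBeaconBlockRootSketch(store: ForkChoiceStoreLike): RootHex {
  return store.justifiedCheckpoint.rootHex;
}

// The safe execution block hash is the payload hash of that block,
// or the zero hash if the block is pre-merge / unknown
function getSafeExecutionBlockHashSketch(store: ForkChoiceStoreLike): RootHex {
  const block = store.getBlockHex(getSafeBeaconBlockRootSketch(store));
  return block?.executionPayloadBlockHash ?? ZERO_HASH_HEX;
}
```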
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
- https://github.com/ChainSafe/lodestar/issues/8624
**Description**
- Cache serialized data column sidecars (from gossip and reqresp)
- Use serialized data column sidecars (if available) when persisting to
db (for columns from the engine, ~10 per slot, they will not be available,
so those will still be re-serialized)
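A minimal sketch of the caching idea, with illustrative names only:
```typescript
// Keep the serialized bytes (when we received them over the wire) next to the
// deserialized sidecar so persisting to db can skip re-serialization.
interface CachedDataColumnSidecarSketch {
  sidecar: unknown; // deserialized DataColumnSidecar
  // present for gossip/reqresp columns, absent for engine-built ones (~10 per slot)
  serializedBytes?: Uint8Array;
}

function toDbBytes(
  cached: CachedDataColumnSidecarSketch,
  serialize: (sidecar: unknown) => Uint8Array
): Uint8Array {
  // prefer the bytes we already have; only engine-provided columns get re-serialized
  return cached.serializedBytes ?? serialize(cached.sidecar);
}
```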
**Motivation**
- I found that we verify proposer signatures multiple times per slot. On
hoodi it takes 20ms to 40ms; if we receive all DataColumnSidecars via
gossip that adds up to a lot of time
<img width="1594" height="294" alt="Screenshot 2025-11-20 at 15 15 01"
src="https://github.com/user-attachments/assets/6797bb8b-e4a6-4b10-a939-30fc45658f45"
/>
proposer signatures are verified when we receive a gossip block,
BlobSidecar or DataColumnSidecar
**Description**
- enhance `SeenBlockInput` with a map to cache the verified proposer
signature by slot + root hex
- verify the Block/Blob/DataColumnSidecar proposer signature on the main
thread and cache it. It takes ~30ms to do that, and we only have to do it
once per slot
part of #8619
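A minimal sketch of such a cache (names are illustrative, not the actual `SeenBlockInput` implementation):
```typescript
type Slot = number;
type RootHex = string;

// Remember which (slot, blockRoot) pairs already had their proposer signature
// verified so gossip blocks, BlobSidecars and DataColumnSidecars for the same
// block skip the ~30ms verification.
class SeenProposerSignatureCache {
  private readonly verified = new Set<string>();

  private key(slot: Slot, blockRootHex: RootHex): string {
    return `${slot}:${blockRootHex}`;
  }

  isVerified(slot: Slot, blockRootHex: RootHex): boolean {
    return this.verified.has(this.key(slot, blockRootHex));
  }

  markVerified(slot: Slot, blockRootHex: RootHex): void {
    this.verified.add(this.key(slot, blockRootHex));
  }

  // prune entries from old slots to bound memory
  pruneBelow(minSlot: Slot): void {
    for (const key of this.verified) {
      const slot = Number(key.split(":")[0]);
      if (slot < minSlot) this.verified.delete(key);
    }
  }
}
```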
**Testing**
- [x] deployed to feat4
- [x] monitor result
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
After discussion we decided to stick with `napi` bindings instead of
`bun:ffi` to reduce the risk of vendor lock-in.
**Description**
- Remove all usages of `@lodestar/bun`
- Remove all conditional `imports` for packages related to `bun:ffi`
**Steps to test or reproduce**
- Run all tests
**Motivation**
Improve type check performance.
**Description**
- Improve the type check performance which had degraded due to recent
changes: from `100s` to `8s` for the `beacon-node` package.
```
time yarn check-types
yarn run v1.22.22
$ tsc
✨ Done in 105.85s.
yarn check-types 108.35s user 2.59s system 104% cpu 1:46.08 total
```
vs
```
time yarn check-types
yarn run v1.22.22
$ tsc
✨ Done in 8.46s.
yarn check-types 14.60s user 0.86s system 178% cpu 8.661 total
```
**Steps to test or reproduce**
- Run all jobs
**Motivation**
Update vitest to avoid using a third-party test pool.
**Description**
- Use latest vitest
- Remove the custom process pool which we developed to run our tests in the
Bun runtime
- Migrate test configs to latest version
- Update types
- Switch from webdriverio to playwright for browser test performance, which
was long overdue.
**Steps to test or reproduce**
- Run all tests
Adds the proposer duties v2 endpoint, which works the same as v1 but uses
the previous epoch to determine the dependent root, to account for the
deterministic proposer lookahead changes in fulu.
https://github.com/ethereum/beacon-APIs/pull/563
**Motivation**
A bug was found on hoodi that needs to be rectified.
1) 1st column arrives via gossip
2) trigger getBlobsV2
3) many more columns (but not all) come via gossip
4) gossip block arrives
5) reqresp triggered via block arrival
6) get remaining data via reqresp
7) process blockInput
8) delete cached blockInput
9) remaining columns arrive via gossip and get added to a new BlockInput
10) getBlobsV2 finishes and gossips "missing" columns not found on new
BlockInput
11) reqresp gets triggered again after timeout (from second batch of
gossip columns on second BlockInput)
12) second batch of columns and second block get downloaded via reqresp and
the second BlockInput goes for processing
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
Client teams have been instructed to increase default gas limits to 60M
for Fusaka.
**Description**
This ensures that validators signal 60M by default and updates
docs/tests to work with the new 60M configuration.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This happens if the node has ENRs without a tcp4 or tcp6 multiaddress
field and the `--connectToDiscv5Bootnodes` flag is added. It's not really
critical so `warn` seems more appropriate than `error`.
**Motivation**
Last change from https://github.com/ChainSafe/lodestar/pull/7501 which
we implemented because persisted checkpoint states are added each epoch
during non-finality and never pruned until the chain finalizes again. It
turns out this is not sustainable if we have multiple weeks of
non-finality since it takes up hundreds of GB of disk space and many
nodes don't have sufficient disk space to handle this.
The long-term solution is to store states more efficiently, but for now
we should at least have an option to enable pruning. There is also always
the option to clean up the `checkpoint_states` folder manually.
**Description**
This PR adds a new flag `--chain.maxCPStateEpochsOnDisk` to enable
pruning of persisted checkpoint states. By default we don't prune any
persisted checkpoint states, as it's not safe to delete them during long
non-finality: we don't know the state of the chain and there could be
a deep (hundreds of epochs) reorg if there are two competing chains with
similar weight, but we wouldn't have a close enough state to pivot to
that chain and would instead require a resync from the last finalized
checkpoint state, which could be very far in the past.
Previous PR https://github.com/ChainSafe/lodestar/pull/7510
---------
Co-authored-by: twoeths <10568965+twoeths@users.noreply.github.com>
**Motivation**
Found two small bugs while looking through devnet logs
```sh
Sep-25 13:49:19.674[sync] verbose: Batch download error id=Head-35, startEpoch=14392, status=Downloading, peer=16...wUr9yu - Cannot read properties of undefined (reading 'signedBlockHeader')
TypeError: Cannot read properties of undefined (reading 'signedBlockHeader')
at cacheByRangeResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:146:63)
at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:213:20)
at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```
and
```sh
Sep-30 15:39:03.738[sync] verbose: Batch download error id=Head-12, startEpoch=15436, status=Downloading, peer=16...3aErhh - Cannot read properties of undefined (reading 'message')
TypeError: Cannot read properties of undefined (reading 'message')
at validateBlockByRangeResponse (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:432:46)
at validateResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:304:42)
at downloadByRange (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:206:27)
at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:205:32)
at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```
There is a bug in `validateColumnsByRangeResponse` that is passing
through an empty array of `blockColumnSidecars` on this line
bca2040ef3/packages/beacon-node/src/sync/utils/downloadByRange.ts (L770)
which is what throws in `cacheByRangeResponses`. While going through the
validation with a fine-toothed comb I noticed that there were a couple
of conditions that we were not checking per spec.
**Scope**
Changed the heuristic for how columns are validated in ByRange and added
checks for column delivery in `(slot, column_index)` order.
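A hedged sketch of the ordering check, with a simplified sidecar shape:
```typescript
// DataColumnSidecarsByRange responses must arrive in (slot, column_index)
// order, so each sidecar must compare strictly greater than the previous one.
interface ColumnSidecarLike {
  slot: number;
  index: number;
}

function assertSlotColumnIndexOrder(sidecars: ColumnSidecarLike[]): void {
  for (let i = 1; i < sidecars.length; i++) {
    const prev = sidecars[i - 1];
    const curr = sidecars[i];
    const inOrder = curr.slot > prev.slot || (curr.slot === prev.slot && curr.index > prev.index);
    if (!inOrder) {
      throw new Error(
        `Columns out of order: (${prev.slot}, ${prev.index}) followed by (${curr.slot}, ${curr.index})`
      );
    }
  }
}
```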
---------
Co-authored-by: Cayman <caymannava@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
**Motivation**
When retrieving blobs via `GET /eth/v1/beacon/blobs/{block_id}` it's
possible as a consumer to specify versioned hashes of the blobs, and
commonly (e.g. in the case of L2s) there isn't a need to get all the blobs.
There is not a huge difference between reconstructing all or just a subset
of blobs when we have a full set of data columns (128) as a supernode, as
we don't need to do cell recovery. But in the case the node only has 64
columns (["semi
supernode"](https://github.com/ChainSafe/lodestar/pull/8568)) we need to
recover cells, which is very expensive for all blob rows; with this change
we can recover cells only for the requested blobs, which can make a huge
difference (see benchmarks below).
**Description**
Add partial blob reconstruction
- update `reconstructBlobs` to only reconstruct blobs for `indices` (if
provided)
- add new `recoverBlobCells` which is similar to
`dataColumnMatrixRecovery` but allows partial recovery
- only call `asyncRecoverCellsAndKzgProofs` for the blob rows we need to
recover (see the sketch after this list)
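A simplified sketch of the partial recovery idea; the function and field names are illustrative, not the actual implementation:
```typescript
// With only half the columns, cell recovery runs per blob row, so restricting
// it to the requested blob indices avoids most of the work.
interface PartialBlobRow {
  blobIndex: number;
  availableCells: Uint8Array[]; // >= 50% of cells, enough to recover the row
}

async function recoverRequestedBlobRows(
  rows: PartialBlobRow[],
  requestedBlobIndices: number[] | undefined,
  recoverRow: (cells: Uint8Array[]) => Promise<Uint8Array[]>
): Promise<Map<number, Uint8Array[]>> {
  const wanted = requestedBlobIndices === undefined ? null : new Set(requestedBlobIndices);
  const recovered = new Map<number, Uint8Array[]>();

  for (const row of rows) {
    // skip rows the caller did not ask for; this is where the big savings come from
    if (wanted !== null && !wanted.has(row.blobIndex)) continue;
    recovered.set(row.blobIndex, await recoverRow(row.availableCells));
  }
  return recovered;
}
```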
Results from running benchmark locally
```
reconstructBlobs
Reconstruct blobs - 6 blobs
✔ Full columns - reconstruct all 6 blobs 7492.994 ops/s 133.4580 us/op x0.952 1416 runs 0.381 s
✔ Full columns - reconstruct half of the blobs out of 6 14825.80 ops/s 67.45000 us/op x0.989 3062 runs 0.304 s
✔ Full columns - reconstruct single blob out of 6 34335.94 ops/s 29.12400 us/op x1.009 8595 runs 0.306 s
✔ Half columns - reconstruct all 6 blobs 3.992937 ops/s 250.4422 ms/op x0.968 10 runs 4.83 s
✔ Half columns - reconstruct half of the blobs out of 6 7.329427 ops/s 136.4363 ms/op x0.980 10 runs 1.90 s
✔ Half columns - reconstruct single blob out of 6 13.21413 ops/s 75.67658 ms/op x1.003 11 runs 1.36 s
Reconstruct blobs - 10 blobs
✔ Full columns - reconstruct all 10 blobs 4763.833 ops/s 209.9150 us/op x0.908 1324 runs 0.536 s
✔ Full columns - reconstruct half of the blobs out of 10 9749.439 ops/s 102.5700 us/op x0.935 1818 runs 0.319 s
✔ Full columns - reconstruct single blob out of 10 36794.47 ops/s 27.17800 us/op x0.923 9087 runs 0.307 s
✔ Half columns - reconstruct all 10 blobs 2.346124 ops/s 426.2349 ms/op x1.033 10 runs 5.09 s
✔ Half columns - reconstruct half of the blobs out of 10 4.509997 ops/s 221.7296 ms/op x1.022 10 runs 2.84 s
✔ Half columns - reconstruct single blob out of 10 13.73414 ops/s 72.81126 ms/op x0.910 11 runs 1.30 s
Reconstruct blobs - 20 blobs
✔ Full columns - reconstruct all 20 blobs 2601.524 ops/s 384.3900 us/op x0.982 723 runs 0.727 s
✔ Full columns - reconstruct half of the blobs out of 20 5049.306 ops/s 198.0470 us/op x0.961 933 runs 0.421 s
✔ Full columns - reconstruct single blob out of 20 34156.51 ops/s 29.27700 us/op x0.980 8441 runs 0.306 s
✔ Half columns - reconstruct all 20 blobs 1.211887 ops/s 825.1593 ms/op x1.010 10 runs 9.10 s
✔ Half columns - reconstruct half of the blobs out of 20 2.350099 ops/s 425.5140 ms/op x0.977 10 runs 5.13 s
✔ Half columns - reconstruct single blob out of 20 13.93751 ops/s 71.74882 ms/op x0.915 11 runs 1.31 s
Reconstruct blobs - 48 blobs
✔ Full columns - reconstruct all 48 blobs 1031.150 ops/s 969.7910 us/op x0.853 286 runs 0.805 s
✔ Full columns - reconstruct half of the blobs out of 48 2042.254 ops/s 489.6550 us/op x0.933 581 runs 0.805 s
✔ Full columns - reconstruct single blob out of 48 33946.64 ops/s 29.45800 us/op x0.961 7685 runs 0.306 s
✔ Half columns - reconstruct all 48 blobs 0.5274713 ops/s 1.895838 s/op x0.940 10 runs 21.0 s
✔ Half columns - reconstruct half of the blobs out of 48 1.033691 ops/s 967.4067 ms/op x0.951 10 runs 10.7 s
✔ Half columns - reconstruct single blob out of 48 12.54519 ops/s 79.71183 ms/op x1.072 11 runs 1.44 s
Reconstruct blobs - 72 blobs
✔ Full columns - reconstruct all 72 blobs 586.0658 ops/s 1.706293 ms/op x0.985 178 runs 0.806 s
✔ Full columns - reconstruct half of the blobs out of 72 1390.803 ops/s 719.0090 us/op x0.959 386 runs 0.804 s
✔ Full columns - reconstruct single blob out of 72 34457.81 ops/s 29.02100 us/op x0.995 8437 runs 0.306 s
✔ Half columns - reconstruct all 72 blobs 0.3519770 ops/s 2.841095 s/op x0.972 10 runs 31.4 s
✔ Half columns - reconstruct half of the blobs out of 72 0.6779473 ops/s 1.475041 s/op x1.027 10 runs 16.2 s
✔ Half columns - reconstruct single blob out of 72 13.59862 ops/s 73.53685 ms/op x0.927 11 runs 1.38 s
```
**Motivation**
- #7280
**Description**
- Build upon the isomorphic bytes code in the utils package
- refactor browser/nodejs selection to use conditional imports (like how
we've been handling bun/nodejs selection)
- Use Uint8Array.fromHex and toHex (mentioned in
https://github.com/ChainSafe/lodestar/pull/8275#issuecomment-3228184163)
- Refactor the bytes perf tests to include bun
- Add the lodestar-bun dependency (also add a missing dependency in the
beacon-node package)
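As a rough illustration of the native-helper path (assuming a runtime that implements the `Uint8Array` hex proposal, e.g. recent Bun releases; this is not the actual utils implementation):
```typescript
// Sketch: prefer the native toHex() when the runtime provides it,
// otherwise fall back to a manual byte-by-byte conversion.
export function rootToHexSketch(root: Uint8Array): string {
  const maybeNative = root as Uint8Array & {toHex?: () => string};
  if (typeof maybeNative.toHex === "function") {
    return `0x${maybeNative.toHex()}`;
  }
  let hex = "0x";
  for (const byte of root) {
    hex += byte.toString(16).padStart(2, "0");
  }
  return hex;
}
```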
Results from my machine
```
bytes utils
✔ nodejs block root to RootHex using toHex 5500338 ops/s 181.8070 ns/op - 1048 runs 0.444 s
✔ nodejs block root to RootHex using toRootHex 7466866 ops/s 133.9250 ns/op - 2189 runs 0.477 s
✔ nodejs fromHex(blob) 7001.930 ops/s 142.8178 us/op - 10 runs 1.94 s
✔ nodejs fromHexInto(blob) 1744.298 ops/s 573.2965 us/op - 10 runs 6.33 s
✔ nodejs block root to RootHex using the deprecated toHexString 1609510 ops/s 621.3070 ns/op - 309 runs 0.704 s
✔ browser block root to RootHex using toHex 1854390 ops/s 539.2610 ns/op - 522 runs 0.807 s
✔ browser block root to RootHex using toRootHex 2060543 ops/s 485.3090 ns/op - 597 runs 0.805 s
✔ browser fromHex(blob) 1632.601 ops/s 612.5196 us/op - 10 runs 6.77 s
✔ browser fromHexInto(blob) 1751.718 ops/s 570.8683 us/op - 10 runs 6.36 s
✔ browser block root to RootHex using the deprecated toHexString 1596024 ops/s 626.5570 ns/op - 457 runs 0.805 s
✔ bun block root to RootHex using toHex 1.249563e+7 ops/s 80.02800 ns/op - 4506 runs 0.518 s
✔ bun block root to RootHex using toRootHex 1.262626e+7 ops/s 79.20000 ns/op - 3716 runs 0.409 s
✔ bun fromHex(blob) 26995.09 ops/s 37.04377 us/op - 10 runs 0.899 s
✔ bun fromHexInto(blob) 31539.09 ops/s 31.70668 us/op - 13 runs 0.914 s
✔ bun block root to RootHex using the deprecated toHexString 1.252944e+7 ops/s 79.81200 ns/op - 3616 runs 0.414 s
```
**Motivation**
Seems like we only update our local status if we receive a new block
after the fork, and incorrectly disconnect peers because of that
```txt
info: Synced - slot: 32 - head: (slot -1) 0x355c…8bed - exec-block: valid(29 0xd623…) - finalized: 0x0000…0000:0 - peers: 3
debug: Req received method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
verbose: Resp done method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
debug: Irrelevant peer peer=16...HcnnMH, reason=INCOMPATIBLE_FORKS ours: 0x12e80bb5 theirs: 0x450757b9
debug: initiating goodbyeAndDisconnect peer reason=Irrelevant network, peerId=16...HcnnMH
```
`ours` here is still the previous fork digest
**Description**
Update the local status fork digest on fork boundary transition. It seemed
easiest to do this in the network thread directly as we also update
the fork digest of the ENR there.
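A rough sketch of the fix, with the event and config surfaces as simplified stand-ins:
```typescript
type ForkDigestHex = string;
type Epoch = number;

interface ForkDigestSource {
  forkDigestAtEpoch(epoch: Epoch): ForkDigestHex;
}

interface LocalStatus {
  forkDigest: ForkDigestHex;
}

// Recompute the status fork digest when the clock crosses a fork boundary,
// instead of waiting for the first post-fork block.
function onClockEpoch(status: LocalStatus, source: ForkDigestSource, newEpoch: Epoch): void {
  const digest = source.forkDigestAtEpoch(newEpoch);
  if (digest !== status.forkDigest) {
    // keep our status in sync so peers on the new fork aren't flagged as INCOMPATIBLE_FORKS
    status.forkDigest = digest;
  }
}
```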