**Motivation**
```
> ./lodestar dev
file:///home/nico/projects/ethereum/lodestar/packages/params/lib/setPreset.js:27
throw Error(`Lodestar preset is already frozen. You must call setActivePreset() at the top of your
^
Error: Lodestar preset is already frozen. You must call setActivePreset() at the top of your
application entry point, before importing @lodestar/params, or any library that may import it.
// index.ts
import {setActivePreset, PresetName} from "@lodestar/params/setPreset"
setActivePreset(PresetName.minimal)
// Now you can safely import from other paths and consume params
import {SLOTS_PER_EPOCH} from "@lodestar/params"
console.log({SLOTS_PER_EPOCH})
at setActivePreset (file:///home/nico/projects/ethereum/lodestar/packages/params/lib/setPreset.js:27:15)
at file:///home/nico/projects/ethereum/lodestar/packages/cli/lib/applyPreset.js:49:5
at ModuleJob.run (node:internal/modules/esm/module_job:262:25)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:485:26)
at async file:///home/nico/projects/ethereum/lodestar/packages/cli/bin/lodestar.js:3:1
Node.js v22.4.1
```
**Description**
We cannot import any lodestar packages that depend on `@lodestar/params`
into `packages/cli/src/util/file.ts`, since this file is loaded from
`packages/cli/src/applyPreset.ts` and we run into the error above.
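A minimal sketch of the ordering constraint (not the fix in this PR; `loadConfigFile` and the dynamic import of `./util/file.js` are purely illustrative): any module reachable from `applyPreset.ts` must not statically pull in `@lodestar/params`, so one way to break such a chain is a lazy dynamic import after the preset is set.
```ts
// Hypothetical sketch: keep applyPreset.ts free of transitive static imports
// of @lodestar/params so the preset can be set before the package is loaded.
import {setActivePreset, PresetName} from "@lodestar/params/setPreset";

setActivePreset(PresetName.minimal);

export async function loadConfigFile(path: string): Promise<unknown> {
  // defer modules that (transitively) import @lodestar/params until after
  // the preset has been set above
  const {readFile} = await import("./util/file.js");
  return readFile(path);
}
```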
**Motivation**
Found two small bugs while looking through devnet logs
```sh
Sep-25 13:49:19.674[sync] verbose: Batch download error id=Head-35, startEpoch=14392, status=Downloading, peer=16...wUr9yu - Cannot read properties of undefined (reading 'signedBlockHeader')
TypeError: Cannot read properties of undefined (reading 'signedBlockHeader')
at cacheByRangeResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:146:63)
at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:213:20)
at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```
and
```sh
Sep-30 15:39:03.738[sync] verbose: Batch download error id=Head-12, startEpoch=15436, status=Downloading, peer=16...3aErhh - Cannot read properties of undefined (reading 'message')
TypeError: Cannot read properties of undefined (reading 'message')
at validateBlockByRangeResponse (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:432:46)
at validateResponses (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:304:42)
at downloadByRange (file:///usr/app/packages/beacon-node/src/sync/utils/downloadByRange.ts:206:27)
at SyncChain.downloadByRange (file:///usr/app/packages/beacon-node/src/sync/range/range.ts:205:32)
at wrapError (file:///usr/app/packages/beacon-node/src/util/wrapError.ts:18:32)
at SyncChain.sendBatch (file:///usr/app/packages/beacon-node/src/sync/range/chain.ts:470:19)
```
There is a bug in `validateColumnsByRangeResponse` that passes through an
empty array of `blockColumnSidecars` on this line
bca2040ef3/packages/beacon-node/src/sync/utils/downloadByRange.ts (L770)
which then throws in `cacheByRangeResponses`. While going through the
validation with a fine-toothed comb I noticed a couple of conditions that
we are not checking per spec.
**Scope**
Changed the heuristic for how columns are validated in ByRange and added
checks that columns are delivered in `(slot, column_index)` order (sketched below).
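For illustration, a minimal sketch of the ordering check (the field names and error handling here are assumptions, not the actual `downloadByRange.ts` types):
```ts
// Sketch: columns must be delivered in (slot, column_index) order per the
// by-range response requirements; field names are assumed for illustration.
function assertSlotColumnIndexOrder(sidecars: {slot: number; index: number}[]): void {
  for (let i = 1; i < sidecars.length; i++) {
    const prev = sidecars[i - 1];
    const curr = sidecars[i];
    const inOrder = curr.slot > prev.slot || (curr.slot === prev.slot && curr.index > prev.index);
    if (!inOrder) {
      throw new Error(
        `Columns out of (slot, column_index) order: (${prev.slot}, ${prev.index}) -> (${curr.slot}, ${curr.index})`
      );
    }
  }
}
```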
---------
Co-authored-by: Cayman <caymannava@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
**Motivation**
- #8562 cleanup
**Description**
- Several differences between the nodejs and bun functions exist and are
squashed here
- the function signatures are relaxed to return a Uint8Array
- the upstream bun bytes<->int functions only support lengths up to 8;
fall back to the existing implementations if a larger length is needed
(see the sketch below)
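Roughly, the dispatch looks like the sketch below (a hand-rolled illustration, not the actual utils code): a fast 64-bit path for lengths up to 8 bytes and a generic byte-by-byte fallback beyond that.
```ts
// Illustrative only: fast path for length <= 8, generic fallback otherwise.
export function intToBytesLE(value: bigint, length: number): Uint8Array {
  if (length <= 8) {
    // fast path: single 64-bit little-endian write, trimmed to the requested length
    const buf = new Uint8Array(8);
    new DataView(buf.buffer).setBigUint64(0, value, true);
    return buf.slice(0, length);
  }
  // fallback: byte-by-byte conversion for lengths the fast path cannot handle
  const out = new Uint8Array(length);
  for (let i = 0; i < length; i++) {
    out[i] = Number((value >> BigInt(8 * i)) & 0xffn);
  }
  return out;
}
```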
**Motivation**
Enable more efficient data availability with lower bandwidth and storage
requirements compared to a supernode.
**Description**
Adds new `--semiSupernode` flag to subscribe to and custody half of the
data column sidecar subnets to support blob reconstruction. This change
in combination with https://github.com/ChainSafe/lodestar/pull/8567 will
make it a lot less resource intensive to run a blob serving node.
I went with the same flag name as Lighthouse currently uses for this
https://github.com/sigp/lighthouse/issues/8218 to make it easier for
users, even though I don't think this flag name is great. We can look
into other ways to reconstruct blobs later, like fetching missing
columns over req/resp, which will eventually become necessary if we want
to support home operators that need blobs with higher max blob counts.
**Note:** If the node's custody group count was previously higher than 64,
it will not be reduced. To reset the custody requirements, the ENR must be
removed, either manually or by setting `--persistNetworkIdentity false`.
**Motivation**
When retrieving blobs via `GET /eth/v1/beacon/blobs/{block_id}`, a consumer
can specify the versioned hashes of the blobs, and commonly (e.g. in the
case of L2s) there is no need to get all of them.
There is not a huge difference between reconstructing all blobs or just a
subset when we have a full set of data columns (128) as a supernode, since
no cell recovery is needed. But when the node only has 64 columns (["semi
supernode"](https://github.com/ChainSafe/lodestar/pull/8568)) we need to
recover cells, which is very expensive across all blob rows. With this
change we only recover cells for the requested blobs, which can make a huge
difference (see benchmarks below).
**Description**
Add partial blob reconstruction
- update `reconstructBlobs` to only reconstruct blobs for `indices` (if
provided)
- add new `recoverBlobCells` which is similar to
`dataColumnMatrixRecovery` but allows partial recovery
- only call `asyncRecoverCellsAndKzgProofs` for blob rows we need to
recover
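The core idea, as a rough sketch (the signature below is an assumption, not the actual `recoverBlobCells` API): cell recovery runs per blob row, so it can be restricted to the requested rows only.
```ts
// Sketch: only recover cells for the requested blob rows; the recoverRow
// callback stands in for asyncRecoverCellsAndKzgProofs.
async function recoverRequestedBlobRows(
  columnSidecars: {index: number; column: Uint8Array[]}[],
  requestedBlobIndices: number[],
  recoverRow: (cellIndices: number[], cells: Uint8Array[]) => Promise<Uint8Array[]>
): Promise<Map<number, Uint8Array[]>> {
  const recovered = new Map<number, Uint8Array[]>();
  const cellIndices = columnSidecars.map((sidecar) => sidecar.index);
  for (const blobIndex of requestedBlobIndices) {
    // gather the available cells of this blob row from the columns we custody
    const cells = columnSidecars.map((sidecar) => sidecar.column[blobIndex]);
    recovered.set(blobIndex, await recoverRow(cellIndices, cells));
  }
  return recovered;
}
```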
Results from running the benchmark locally
```
reconstructBlobs
Reconstruct blobs - 6 blobs
✔ Full columns - reconstruct all 6 blobs 7492.994 ops/s 133.4580 us/op x0.952 1416 runs 0.381 s
✔ Full columns - reconstruct half of the blobs out of 6 14825.80 ops/s 67.45000 us/op x0.989 3062 runs 0.304 s
✔ Full columns - reconstruct single blob out of 6 34335.94 ops/s 29.12400 us/op x1.009 8595 runs 0.306 s
✔ Half columns - reconstruct all 6 blobs 3.992937 ops/s 250.4422 ms/op x0.968 10 runs 4.83 s
✔ Half columns - reconstruct half of the blobs out of 6 7.329427 ops/s 136.4363 ms/op x0.980 10 runs 1.90 s
✔ Half columns - reconstruct single blob out of 6 13.21413 ops/s 75.67658 ms/op x1.003 11 runs 1.36 s
Reconstruct blobs - 10 blobs
✔ Full columns - reconstruct all 10 blobs 4763.833 ops/s 209.9150 us/op x0.908 1324 runs 0.536 s
✔ Full columns - reconstruct half of the blobs out of 10 9749.439 ops/s 102.5700 us/op x0.935 1818 runs 0.319 s
✔ Full columns - reconstruct single blob out of 10 36794.47 ops/s 27.17800 us/op x0.923 9087 runs 0.307 s
✔ Half columns - reconstruct all 10 blobs 2.346124 ops/s 426.2349 ms/op x1.033 10 runs 5.09 s
✔ Half columns - reconstruct half of the blobs out of 10 4.509997 ops/s 221.7296 ms/op x1.022 10 runs 2.84 s
✔ Half columns - reconstruct single blob out of 10 13.73414 ops/s 72.81126 ms/op x0.910 11 runs 1.30 s
Reconstruct blobs - 20 blobs
✔ Full columns - reconstruct all 20 blobs 2601.524 ops/s 384.3900 us/op x0.982 723 runs 0.727 s
✔ Full columns - reconstruct half of the blobs out of 20 5049.306 ops/s 198.0470 us/op x0.961 933 runs 0.421 s
✔ Full columns - reconstruct single blob out of 20 34156.51 ops/s 29.27700 us/op x0.980 8441 runs 0.306 s
✔ Half columns - reconstruct all 20 blobs 1.211887 ops/s 825.1593 ms/op x1.010 10 runs 9.10 s
✔ Half columns - reconstruct half of the blobs out of 20 2.350099 ops/s 425.5140 ms/op x0.977 10 runs 5.13 s
✔ Half columns - reconstruct single blob out of 20 13.93751 ops/s 71.74882 ms/op x0.915 11 runs 1.31 s
Reconstruct blobs - 48 blobs
✔ Full columns - reconstruct all 48 blobs 1031.150 ops/s 969.7910 us/op x0.853 286 runs 0.805 s
✔ Full columns - reconstruct half of the blobs out of 48 2042.254 ops/s 489.6550 us/op x0.933 581 runs 0.805 s
✔ Full columns - reconstruct single blob out of 48 33946.64 ops/s 29.45800 us/op x0.961 7685 runs 0.306 s
✔ Half columns - reconstruct all 48 blobs 0.5274713 ops/s 1.895838 s/op x0.940 10 runs 21.0 s
✔ Half columns - reconstruct half of the blobs out of 48 1.033691 ops/s 967.4067 ms/op x0.951 10 runs 10.7 s
✔ Half columns - reconstruct single blob out of 48 12.54519 ops/s 79.71183 ms/op x1.072 11 runs 1.44 s
Reconstruct blobs - 72 blobs
✔ Full columns - reconstruct all 72 blobs 586.0658 ops/s 1.706293 ms/op x0.985 178 runs 0.806 s
✔ Full columns - reconstruct half of the blobs out of 72 1390.803 ops/s 719.0090 us/op x0.959 386 runs 0.804 s
✔ Full columns - reconstruct single blob out of 72 34457.81 ops/s 29.02100 us/op x0.995 8437 runs 0.306 s
✔ Half columns - reconstruct all 72 blobs 0.3519770 ops/s 2.841095 s/op x0.972 10 runs 31.4 s
✔ Half columns - reconstruct half of the blobs out of 72 0.6779473 ops/s 1.475041 s/op x1.027 10 runs 16.2 s
✔ Half columns - reconstruct single blob out of 72 13.59862 ops/s 73.53685 ms/op x0.927 11 runs 1.38 s
```
**Motivation**
- #7280
**Description**
- Build upon the isomorphic bytes code in the utils package
- refactor the browser/nodejs selection to use conditional imports (like
how we've been handling the bun/nodejs selection)
- Use `Uint8Array.fromHex` and `toHex` (mentioned in
https://github.com/ChainSafe/lodestar/pull/8275#issuecomment-3228184163)
- Refactor the bytes perf tests to include bun
- Add the lodestar-bun dependency (also add a missing dependency in the
beacon-node package)
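For reference, the hex conversion idea looks roughly like the sketch below; the PR selects implementations via conditional imports rather than the runtime feature detection used here, so treat this purely as an illustration.
```ts
// Illustration: prefer native Uint8Array.fromHex where the runtime provides
// it (e.g. Bun), otherwise fall back to a manual conversion.
const Uint8ArrayWithHex = Uint8Array as unknown as {fromHex?: (hex: string) => Uint8Array};

export function fromHex(hex: string): Uint8Array {
  const stripped = hex.startsWith("0x") ? hex.slice(2) : hex;
  if (Uint8ArrayWithHex.fromHex !== undefined) {
    return Uint8ArrayWithHex.fromHex(stripped);
  }
  const out = new Uint8Array(stripped.length / 2);
  for (let i = 0; i < out.length; i++) {
    out[i] = parseInt(stripped.slice(i * 2, i * 2 + 2), 16);
  }
  return out;
}
```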
Results from my machine
```
bytes utils
✔ nodejs block root to RootHex using toHex 5500338 ops/s 181.8070 ns/op - 1048 runs 0.444 s
✔ nodejs block root to RootHex using toRootHex 7466866 ops/s 133.9250 ns/op - 2189 runs 0.477 s
✔ nodejs fromHex(blob) 7001.930 ops/s 142.8178 us/op - 10 runs 1.94 s
✔ nodejs fromHexInto(blob) 1744.298 ops/s 573.2965 us/op - 10 runs 6.33 s
✔ nodejs block root to RootHex using the deprecated toHexString 1609510 ops/s 621.3070 ns/op - 309 runs 0.704 s
✔ browser block root to RootHex using toHex 1854390 ops/s 539.2610 ns/op - 522 runs 0.807 s
✔ browser block root to RootHex using toRootHex 2060543 ops/s 485.3090 ns/op - 597 runs 0.805 s
✔ browser fromHex(blob) 1632.601 ops/s 612.5196 us/op - 10 runs 6.77 s
✔ browser fromHexInto(blob) 1751.718 ops/s 570.8683 us/op - 10 runs 6.36 s
✔ browser block root to RootHex using the deprecated toHexString 1596024 ops/s 626.5570 ns/op - 457 runs 0.805 s
✔ bun block root to RootHex using toHex 1.249563e+7 ops/s 80.02800 ns/op - 4506 runs 0.518 s
✔ bun block root to RootHex using toRootHex 1.262626e+7 ops/s 79.20000 ns/op - 3716 runs 0.409 s
✔ bun fromHex(blob) 26995.09 ops/s 37.04377 us/op - 10 runs 0.899 s
✔ bun fromHexInto(blob) 31539.09 ops/s 31.70668 us/op - 13 runs 0.914 s
✔ bun block root to RootHex using the deprecated toHexString 1.252944e+7 ops/s 79.81200 ns/op - 3616 runs 0.414 s
```
**Motivation**
It seems we only update our local status if we receive a new block after
the fork, and we incorrectly disconnect peers because of that
```txt
info: Synced - slot: 32 - head: (slot -1) 0x355c…8bed - exec-block: valid(29 0xd623…) - finalized: 0x0000…0000:0 - peers: 3
debug: Req received method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
verbose: Resp done method=status, version=2, client=Prysm, peer=16...HcnnMH, requestId=48
debug: Irrelevant peer peer=16...HcnnMH, reason=INCOMPATIBLE_FORKS ours: 0x12e80bb5 theirs: 0x450757b9
debug: initiating goodbyeAndDisconnect peer reason=Irrelevant network, peerId=16...HcnnMH
```
`ours` here is still the previous fork digest
**Description**
Update the local status fork digest on fork boundary transition. It seemed
easiest to do this in the network thread directly, as we also update the
fork digest of the ENR there.
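Conceptually it looks like the sketch below (every name here is an assumption, not the actual network code):
```ts
// On a fork boundary, recompute the fork digest and refresh both the ENR
// field (already done before this PR) and the locally cached status used
// for peer relevance checks (the missing piece).
function onForkBoundary(
  epoch: number,
  getForkDigestAtEpoch: (epoch: number) => Uint8Array,
  setEnrForkDigest: (forkDigest: Uint8Array) => void,
  localStatus: {forkDigest: Uint8Array}
): void {
  const forkDigest = getForkDigestAtEpoch(epoch);
  setEnrForkDigest(forkDigest);
  localStatus.forkDigest = forkDigest;
}
```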
It doesn't seem necessary to pass the `supernode` config down into
different modules like chain or network; it's better to handle the initial
custody config only in the beacon handler and then use
`initialCustodyGroupCount` downstream.
This is only relevant for compliance with the [builder
spec](db7c5ee036/specs/fulu/builder.md (L39-L45))
```python
class ExecutionPayloadAndBlobsBundle(Container):
    execution_payload: ExecutionPayload
    blobs_bundle: BlobsBundle  # [Modified in Fulu:EIP7594]
```
We haven't seen issues due to this because once Fulu activates we use
`submitBlindedBlockV2` which no longer returns
`ExecutionPayloadAndBlobsBundle` in the response.
**Motivation**
This was a feature we developed for [rescuing
Holesky](https://blog.chainsafe.io/lodestar-holesky-rescue-retrospective/)
as part of https://github.com/ChainSafe/lodestar/pull/7501 to quickly
sync nodes to head during a period of long non-finality (~3 weeks).
While it's unlikely we will have such a long period of non-finality on
mainnet, this feature is still useful to have for much shorter periods
and testing purposes on devnets.
It is now part of [Ethereum protocol
hardening](https://github.com/eth-clients/diamond) mitigations described
[here](https://github.com/eth-clients/diamond/blob/main/mitigations/nfin-checkpoint-001.md)
> Ordinary checkpoint sync begins from the latest finalized checkpoint
> (block and/or state). As an escape hatch during non-finality, it is
> useful to have the ability to checkpoint sync from an unfinalized
> checkpoint. A client implementing this mitigation MUST support
> checkpoint sync from an arbitrary non-finalized checkpoint state.
We will support this with the exception that our checkpoint state needs
to be an epoch boundary checkpoint.
**Description**
The main feature of this PR is to allow initializing a node from an
unfinalized checkpoint state either retrieved locally or from a remote
source.
This behavior is disabled by default but can be enabled by adding either
- the `--lastPersistedCheckpointState` flag, to load from the last safe
persisted checkpoint state stored locally
- or `--unsafeCheckpointState`, to provide a file path or URL to an
unfinalized checkpoint state to start syncing from. This can be combined
with the new endpoint `GET /eth/v1/lodestar/persisted_checkpoint_state` to
sync from a remote node, or with sharing states from the
`checkpoint_states` folder
Neither of these options is safe to use on a network that recently
finalized an epoch; they should only be considered if syncing from the last
finalized checkpoint state is not feasible.
An unfinalized checkpoint state persisted locally is only considered safe
to boot from if (sketched below)
- it's the only checkpoint in its epoch, to avoid ambiguity from forks
- its last processed block slot is at an epoch boundary or the last slot of
the previous epoch
- its state slot is at an epoch boundary, i.e. equal to
`epoch * SLOTS_PER_EPOCH`
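For illustration, the criteria roughly translate to the check below (field names are assumptions, not the persisted state schema):
```ts
// Sketch of the boot-safety check for a locally persisted unfinalized
// checkpoint state; all field names are illustrative.
function isSafeToBootFrom(
  candidate: {epoch: number; stateSlot: number; lastBlockSlot: number},
  checkpointsInEpoch: number,
  slotsPerEpoch: number
): boolean {
  const epochStartSlot = candidate.epoch * slotsPerEpoch;
  return (
    // only checkpoint in its epoch, to avoid ambiguity from forks
    checkpointsInEpoch === 1 &&
    // last processed block at the epoch boundary or the last slot of the previous epoch
    (candidate.lastBlockSlot === epochStartSlot || candidate.lastBlockSlot === epochStartSlot - 1) &&
    // state slot exactly at the epoch boundary
    candidate.stateSlot === epochStartSlot
  );
}
```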
But even if these criteria are met, there is a chance that the node will
end up on a minority chain, as it will not be able to pivot to another
chain that conflicts with the checkpoint state it was initialized from.
Other existing flags (like `--checkpointState`) are unchanged by this PR
and will continue to expect a finalized checkpoint state.
Previous PRs https://github.com/ChainSafe/lodestar/pull/7509,
https://github.com/ChainSafe/lodestar/pull/7541,
https://github.com/ChainSafe/lodestar/pull/7542 not merged to unstable
are included.
Closes https://github.com/ChainSafe/lodestar/issues/7963
cc @twoeths
---------
Co-authored-by: twoeths <10568965+twoeths@users.noreply.github.com>
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- #7280
**Description**
- set some basic options for this project
- increase console depth
- bun 1.3 added isolated installs, which are cool, but we want the simpler
hoisted installs for now
- alias `node` as `bun` when running `bun run`
**Motivation**
- to be able to take profile in Bun
**Description**
- implement a `profileBun` api using the `console.profile()` apis
- as tested, it only supports roughly 3s per profile or Bun will crash, so
the duration is covered by a for loop
- cannot take the whole epoch, or `debug.bun.sh` will take forever to load
- refactor: implement `profileThread` as a wrapper around either
`profileNodeJS` or `profileBun`
- note that NodeJS and Bun work a bit differently:
  - NodeJS: we can persist to a file, log into the server and copy it
  - Bun: need to launch the `debug.bun.sh` web page as the inspector; the
profile will be flushed from the node to the inspector and rendered live
there
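The looping idea, as a simplified sketch (not the exact `profileBun` implementation):
```ts
// Bun's inspector-backed console.profile() was only stable for short windows
// (~3s in testing), so a longer duration is covered by a loop of short profiles.
async function profileBun(durationMs: number): Promise<void> {
  const chunkMs = 3_000;
  for (let remaining = durationMs; remaining > 0; remaining -= chunkMs) {
    console.profile("lodestar");
    await new Promise((resolve) => setTimeout(resolve, Math.min(chunkMs, remaining)));
    console.profileEnd("lodestar");
  }
}
```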
**Steps to take profile**
- start beacon node with `--inspect` and look for `debug.bun.sh` log
- launch the specified url, for example
`https://debug.bun.sh/#127.0.0.1:9229/0qoflywrwso`
- (optional) the UI does not show whether the inspector is connected to
the app, so normally I wait for the sources to show up
- `curl -X POST
http://localhost:9596/eth/v1/lodestar/write_profile?thread=main`
- look into `Timelines` tab in `https://debug.bun.sh/`, check `Call
tree` there
- (optional) export Timeline to share it
**Sample Profile**
[Timeline%20Recording%201
(4).json.zip](https://github.com/user-attachments/files/22788370/Timeline.20Recording.201.4.json.zip)
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
Reverts https://github.com/ChainSafe/lodestar/pull/7448 to unhide the
prune history option. I've been running this on mainnet for a while and it
seems pretty stable, with no noticeable impact on performance. The feature
is already widely used even though it was hidden, so we might as well show
it in our docs.
Metrics from my node
<img width="1888" height="340" alt="image"
src="https://github.com/user-attachments/assets/223b7e6b-101e-4b4f-b06a-6d74f830bf96"
/>
I did a few tweaks to how we query keys but nothing really improved the
fetch-keys duration.
The caveat remains that it's quite slow on first startup if the previous db
is large, but that's a one-time operation.
Closes https://github.com/ChainSafe/lodestar/issues/7556
**Motivation**
Make sure all spec tests pass.
**Description**
- Fix the broken spec tests
- Add a condition to fix the types used for deserialization.
Closes #7839
**Steps to test or reproduce**
Run all tests
---------
Co-authored-by: Cayman <caymannava@gmail.com>
**Motivation**
Make the types exports consistent for all packages.
All modern runtimes support [conditional
exports](https://nodejs.org/api/packages.html#conditional-exports), and
there are caveats when both conditional exports and normal exports are
present in a package.json. This PR makes all exports follow the same
consistent and modern pattern.
**Description**
- We were using subpath exports for some packages and module exports for
others
- Keep all the types exports consistent as subpath exports.
- Remove the `types` and `exports` directives from package.json
- Remove `typesVersions`; this is only useful if we have different versions
of types for different versions of TypeScript, or different types files for
different file paths.
**Steps to test or reproduce**
- Run all CI
**Motivation**
This was brought up in the Fusaka bug bounty: when we receive an already
known data column sidecar with the same block header and column index, the
gossip message is accepted and rebroadcast without any additional
verification.
This allows a malicious peer to send a data column sidecar with the same
block header and column index but an invalid block header signature, which
we would accept and rebroadcast, getting ourselves downscored by our peers.
**Description**
Ignore already known data column sidecars (based on block header and
column index). We could also consider running
`validateGossipDataColumnSidecar` on those sidecars to penalize the node
sending us the data, but it's just additional work for us and
`GossipAction.IGNORE` seems sufficient.
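The dedup idea, reduced to a sketch (names and types are simplified, not the actual gossip handler):
```ts
// Ignore already seen (block header root, column index) pairs instead of
// re-accepting and rebroadcasting them.
const seenDataColumnSidecars = new Set<string>();

function shouldIgnoreDataColumnSidecar(blockHeaderRoot: string, columnIndex: number): boolean {
  const key = `${blockHeaderRoot}/${columnIndex}`;
  if (seenDataColumnSidecars.has(key)) {
    return true; // maps to GossipAction.IGNORE
  }
  seenDataColumnSidecars.add(key);
  return false;
}
```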
**Motivation**
- this task runs once per epoch, so there should be no issue with being
more verbose in the log
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- https://github.com/ChainSafe/lodestar/issues/8526
**Description**
- In #8526 it was discovered that the leveldb controller did not respect
`reverse`
- Visual review of the code shows it respects neither `reverse` nor
`limit`
- Add support for both `reverse` and `limit`
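A sketch of the shape of the fix, assuming the underlying `level` iterator (which supports both options natively); the controller wiring here is illustrative:
```ts
// Pass reverse and limit through to the iterator instead of dropping them.
async function* keysStream(
  db: {iterator(opts: object): AsyncIterable<[Uint8Array, Uint8Array]>},
  opts: {gte?: Uint8Array; lte?: Uint8Array; reverse?: boolean; limit?: number}
): AsyncGenerator<Uint8Array> {
  const iterator = db.iterator({
    gte: opts.gte,
    lte: opts.lte,
    reverse: opts.reverse ?? false, // previously ignored
    limit: opts.limit ?? -1, // previously ignored; -1 means "no limit" in level
  });
  for await (const [key] of iterator) {
    yield key;
  }
}
```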
This simplifies the calculation of `startEpoch` in head chain range
sync.
We check the remote finalized epoch against ours here
d9cc6b90f7/packages/beacon-node/src/sync/utils/remoteSyncType.ts (L33)
This means `remote.finalizedEpoch == local.finalizedEpoch` in this case,
and the local head epoch should always be >= the finalized epoch, which
means we can simply use the epoch of `local.headSlot` here.
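In code this is roughly (not necessarily the exact change):
```ts
import {computeEpochAtSlot} from "@lodestar/state-transition";

// remote.finalizedEpoch === local.finalizedEpoch here, and the head epoch is
// always >= the finalized epoch, so head chain sync can start from the head epoch.
function getHeadSyncStartEpoch(local: {headSlot: number}): number {
  return computeEpochAtSlot(local.headSlot);
}
```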
cc @twoeths
**Motivation**
See https://github.com/ethereum/consensus-specs/pull/4650
**Description**
Ensure data column sidecars respect the blob limit by checking that the
`kzgCommitments.length` of each data column sidecar does not exceed
`getMaxBlobsPerBlock(epoch)`.
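A minimal sketch of the check (dependencies passed in for illustration; not the exact validation code):
```ts
// A data column sidecar carries one KZG commitment per blob, so its commitment
// count must not exceed the blob limit of the sidecar's epoch.
function assertRespectsBlobLimit(
  sidecar: {slot: number; kzgCommitments: Uint8Array[]},
  computeEpochAtSlot: (slot: number) => number,
  getMaxBlobsPerBlock: (epoch: number) => number
): void {
  const maxBlobs = getMaxBlobsPerBlock(computeEpochAtSlot(sidecar.slot));
  if (sidecar.kzgCommitments.length > maxBlobs) {
    throw new Error(`Data column sidecar exceeds blob limit: ${sidecar.kzgCommitments.length} > ${maxBlobs}`);
  }
}
```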
**Motivation**
- investigate and maintain the performance of
`processFinalizedCheckpoint()`
- this is part of #8526
**Description**
- track the duration of `processFinalizedCheckpoint()` by task; the result
on a hoodi node shows that `FrequencyStateArchiveStrategy` takes the most
time
<img width="941" height="297" alt="Screenshot 2025-10-14 at 13 45 38"
src="https://github.com/user-attachments/assets/ef440399-538b-4a4a-a63c-e775745b25e6"
/>
- track the different steps of `FrequencyStateArchiveStrategy`; the result
shows that the main thread is blocked by different db queries cc
@wemeetagain
<img width="1291" height="657" alt="Screenshot 2025-10-14 at 13 46 36"
src="https://github.com/user-attachments/assets/3b19f008-c7d8-49a4-9dc5-e68b1a5ba2a5"
/>
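For reference, per-task durations can be tracked with a labeled prom-client histogram along these lines (metric name and buckets are illustrative, not the ones added in this PR):
```ts
import {Histogram} from "prom-client";

// One histogram, labeled by task, covering each step of processFinalizedCheckpoint()
const taskDuration = new Histogram({
  name: "lodestar_process_finalized_checkpoint_task_duration_seconds",
  help: "Duration of processFinalizedCheckpoint() broken down by task",
  labelNames: ["task"],
  buckets: [0.01, 0.1, 0.5, 1, 5, 10],
});

async function timed<T>(task: string, fn: () => Promise<T>): Promise<T> {
  const endTimer = taskDuration.startTimer({task});
  try {
    return await fn();
  } finally {
    endTimer();
  }
}
```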
part of #8526
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
If we don't have the proposer index in cache when we have to build a
block, we create a default entry
[here](d9cc6b90f7/packages/beacon-node/src/chain/produceBlock/produceBlockBody.ts (L196)).
This shouldn't happen in normal circumstances as proposers are registered
beforehand; however, if you produce a block each slot for testing purposes,
this affects the custody of the node as it will have up to 32 validators in
the proposer cache (assuming a block each slot), and since we never reduce
the cgc it will stay at that value.
Logs from Ethpandaops from a node without attached validators that is
producing a block each slot
```
Oct-14 09:12:00.005[chain] verbose: Updated target custody group count finalizedEpoch=272653, validatorCount=32, targetCustodyGroupCount=33
```
```
Oct-14 09:12:00.008[network] debug: Updated cgc field in ENR custodyGroupCount=33
```
**Description**
Do not create a default cache entry for unknown proposers: use a normal
`Map` and just fall back to `suggestedFeeRecipient` if there isn't any
value. The behavior from a caller perspective stays the same, but we no
longer create a proposer cache entry for unknown proposers.
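The lookup now behaves roughly like this sketch (types simplified):
```ts
// Plain Map with no default entry; unknown proposers fall back to the
// configured suggestedFeeRecipient without polluting the cache.
const proposerCache = new Map<number, {feeRecipient: string}>();

function getFeeRecipient(proposerIndex: number, suggestedFeeRecipient: string): string {
  return proposerCache.get(proposerIndex)?.feeRecipient ?? suggestedFeeRecipient;
}
```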
**Motivation**
Voluntary exit validation previously returned only a boolean, which gives
vague error information and makes debugging harder.
This PR aims to improve debuggability by providing clearer error messages
and feedback during validator exits.
**Description**
This PR introduces the `VoluntaryExitValidity` enum to provide granular
reasons for voluntary exit validation failures.
It refactors `processVoluntaryExit` and `getVoluntaryExitValidity` to
return specific validity states rather than a simple boolean.
Beacon node validation logic now maps these validity results to error codes
(`VoluntaryExitErrorCode`) for clearer gossip and API handling.
This improves debuggability and aligns exit validation with consensus spec
requirements.
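For a rough idea of the shape (the variant names below are illustrative, not necessarily the exact `VoluntaryExitValidity` members):
```ts
enum VoluntaryExitValidity {
  Valid = "valid",
  ValidatorNotActive = "validator_not_active",
  AlreadyExited = "already_exited",
  NotYetEligible = "not_yet_eligible",
  InvalidSignature = "invalid_signature",
}

// Beacon node validation maps non-valid variants to gossip/API error codes
function toErrorMessage(validity: VoluntaryExitValidity): string | null {
  return validity === VoluntaryExitValidity.Valid ? null : `Invalid voluntary exit: ${validity}`;
}
```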
Closes #6330
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
- got "heap size limit too low" warn in our Bun instance but has no idea
what's the exact value of it
**Description**
- include `heapSizeLimit` in the log
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
To make the transition to the `getBlobs` api smoother and make Lodestar
work with the v0.29.0 release of checkpointz
(https://github.com/ethpandaops/checkpointz/issues/215), we should be
serving blob sidecars even post-fulu from the `getBlobSidecars` api.
**Description**
Instead of throwing an error, we now do the following post-fulu
- fetch data column sidecars for the block from the db (if we custody at
least 64 columns)
- reconstruct blobs from the data column sidecars
- reconstruct blob sidecars from the blobs (recompute the kzg proof and
inclusion proof)
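At a high level the new path looks like the sketch below (helper names are assumptions, not the actual API implementation):
```ts
// Serve blob sidecars post-fulu by rebuilding them from custodied data columns.
async function getBlobSidecarsPostFulu(
  blockRoot: Uint8Array,
  deps: {
    readColumnSidecars(blockRoot: Uint8Array): Promise<unknown[]>;
    reconstructBlobs(columns: unknown[]): Promise<Uint8Array[]>;
    buildBlobSidecar(blob: Uint8Array, index: number): Promise<unknown>;
  }
): Promise<unknown[]> {
  // 1. fetch data column sidecars for the block from db (requires >= 64 custodied columns)
  const columns = await deps.readColumnSidecars(blockRoot);
  // 2. reconstruct blobs from the data column sidecars
  const blobs = await deps.reconstructBlobs(columns);
  // 3. rebuild blob sidecars, recomputing the KZG proof and inclusion proof
  return Promise.all(blobs.map((blob, i) => deps.buildBlobSidecar(blob, i)));
}
```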
**Motivation**
- Review of #8468 metrics
- In https://github.com/ChainSafe/lodestar/pull/8449, use of
`datastore-level` was unilaterally removed in favor of the bun-supported
`datastore-fs`
- This caused a regression
**Description**
- use `datastore-level` by default, only use `datastore-fs` in bun
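A sketch of the selection (package names as referenced above; the Bun detection and wiring are assumptions):
```ts
import {LevelDatastore} from "datastore-level";
import {FsDatastore} from "datastore-fs";

// Keep datastore-level as the default; only fall back to datastore-fs under Bun.
export function createDatastore(path: string) {
  const isBun = (process.versions as Record<string, string | undefined>).bun != null;
  return isBun ? new FsDatastore(path) : new LevelDatastore(path);
}
```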
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
Recently, we have been experiencing many instances of AI-generated slop
submitted by first-time contributors, and a policy rework is needed to
ensure our time is respected while still allowing external contributors to
freely contribute to further enhancing this project. This proposal ensures
that new contributors self-disclose any use of AI technologies as part of
their submission, to minimize wasted time and allow issues to be resolved
with a thorough understanding of the output at hand.
**Description**
This PR:
- Adds an AI Assistance Notice to contribution.md
- Moves important context for first-time contributors to the top of the
document
- Corrects minor grammar
Additional credit to @nflaig and [Ghostty's
policy](https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTING.md#ai-assistance-notice)
for the approach to minimizing this problem.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>