**Motivation**
- run a node starting from electra and passing through fulu
- start another node to do range sync and try to catch up
**Description**
- to reproduce the issue in #8247
```
../../node_modules/.bin/vitest test/e2e/sync/finalizedSync.test.ts
```
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- there are a lot of "unknown" read/write requests tracked in #8334
**Description**
- add bucketId to abstractPrefixedRepository.ts where it's missing
part of #8334
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
Log out reqresp missing column indices that we do not have in the db but
should have.
Added metrics as well. I think this might get noisy in the logs until we
finish backfill, so we might need to comment out the log and just use the
metrics for finding regressions.
**Motivation**
We want to use `prettyPrintIndices` consistently for logging subnet
indices.
**Description**
Replaced `sampleSubnets.join(" ")` with
`prettyPrintIndices(sampleSubnets)`
to improve readability and consistency with other log fields.
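For reference, a rough sketch of what a range-collapsing index printer can look like (illustrative only, not necessarily the actual `prettyPrintIndices` implementation or output format):
```ts
// Illustrative sketch only: collapse consecutive indices into ranges,
// e.g. [0, 1, 2, 5, 6, 9] -> "0..2,5..6,9"
function prettyPrintIndicesSketch(indices: number[]): string {
  if (indices.length === 0) return "";
  const sorted = [...indices].sort((a, b) => a - b);
  const parts: string[] = [];
  let start = sorted[0];
  let prev = sorted[0];
  for (const idx of sorted.slice(1)) {
    if (idx === prev + 1) {
      prev = idx;
      continue;
    }
    parts.push(start === prev ? `${start}` : `${start}..${prev}`);
    start = prev = idx;
  }
  parts.push(start === prev ? `${start}` : `${start}..${prev}`);
  return parts.join(",");
}
```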
Closes #8292
**Steps to test or reproduce**
```sh
git checkout maishivamhoo123/prettyPrintIndices-fresh
npm run test
```
**Motivation**
**Description**
This PR fixes an inconsistency in Lodestar's Beacon-API behavior when
querying:
```
curl "http://127.0.0.1:9596/eth/v1/beacon/states/head/committees?epoch=2&slot=118"
```
Previously, Lodestar would return a `500 Internal Server Error` when the
provided slot did not belong to the given epoch (e.g.,
`epoch=2&slot=118`), due to an unhandled
`EPOCH_CONTEXT_ERROR_DECISION_ROOT_EPOCH_OUT_OF_RANGE` error.
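For illustration, a minimal sketch of the kind of up-front validation that avoids the 500 (hypothetical names, not the exact Lodestar handler code):
```ts
// Hypothetical sketch: reject a slot that does not belong to the requested epoch
// before querying committees, instead of letting the epoch-context error surface as a 500.
const SLOTS_PER_EPOCH = 32;

function computeEpochAtSlot(slot: number): number {
  return Math.floor(slot / SLOTS_PER_EPOCH);
}

function assertSlotInEpoch(epoch: number, slot: number): void {
  if (computeEpochAtSlot(slot) !== epoch) {
    // e.g. epoch=2 covers slots 64..95, so slot=118 is invalid for this query
    throw new Error(`Invalid slot=${slot} for epoch=${epoch}`);
  }
}
```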
<img width="948" alt="Screenshot 2025-06-10 at 08 15 05"
src="https://github.com/user-attachments/assets/7f6feea9-90c8-4299-ae0a-ce2a1e6a2282"
/>
<img width="948" alt="Screenshot 2025-06-10 at 08 14 33"
src="https://github.com/user-attachments/assets/7fafea27-09a3-4f94-a5b6-d3a818ee5c65"
/>
Closes https://github.com/ChainSafe/lodestar/issues/7882
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
**Motivation**
- we currently throw an error if there are no DataColumnSidecars for the
block when archiving the block
**Description**
- log it instead
Closes #8314
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
We fail to publish builder blocks as we attempt to reconstruct the full
block
```
Sep-03 21:52:48.755[api] info: Selected builder block reason=block_value, slot=304614, parentSlot=304613, parentBlockRoot=0x17a2824216119eef2d74aae47c7473c727e4ef67e02214b10b209c115c4ef10b, fork=fulu, builderSelection=default, isBuilderEnabled=true, isEngineEnabled=true, strictFeeRecipientCheck=false, builderBoostFactor=90, engineDurationMs=162, engineExecutionPayloadValue=0.00932 ETH, engineConsensusBlockValue=0.00071 ETH, engineBlockTotalValue=0.01003 ETH, builderDurationMs=746, builderExecutionPayloadValue=1.01245 ETH, builderConsensusBlockValue=0.00071 ETH, builderBlockTotalValue=1.01317 ETH
Sep-03 21:52:48.756[rest] debug: Res req-2qfu produceBlockV3 - 200
Sep-03 21:52:48.764[rest] debug: Req req-2qfw 172.18.0.4 publishBlindedBlockV2
Sep-03 21:52:48.765[rest] debug: Exec req-2qfw 172.18.0.4 publishBlindedBlockV2
Sep-03 21:52:48.766[chain] debug: Reconstructing the full signed block contents slot=304614, blockRoot=0xb2d3e029598895fd59a37eb89a120d9dd422c861077872117739c39cc2594442, source=engine
Sep-03 21:52:48.766[rest] error: Req req-2qfw publishBlindedBlockV2 error - Missing executionPayload to reconstruct post-bellatrix full block
Error: Missing executionPayload to reconstruct post-bellatrix full block
at signedBlindedBlockToFull (file:///usr/app/packages/state-transition/src/util/blindedBlock.ts:92:11)
at reconstructSignedBlockContents (file:///usr/app/packages/state-transition/src/util/blindedBlock.ts:132:23)
at publishBlindedBlock (file:///usr/app/packages/beacon-node/src/api/impl/beacon/blocks/index.ts:367:35)
```
```
Sep-03 21:52:48.768[] error: Error proposing block slot=304614, validator=0xa12e…8dda - publishBlindedBlockV2 failed with status 500: Missing executionPayload to reconstruct post-bellatrix full block - Failed to publish block
Error: publishBlindedBlockV2 failed with status 500: Missing executionPayload to reconstruct post-bellatrix full block - Failed to publish block
at ApiResponse.error (file:///usr/app/packages/api/src/utils/client/response.ts:165:12)
at ApiResponse.assertOk (file:///usr/app/packages/api/src/utils/client/response.ts:156:18)
at BlockProposingService.publishBlockWrapper (file:///usr/app/packages/validator/src/services/block.ts:176:9)
at processTicksAndRejections (node:internal/process/task_queues:105:5)
at BlockProposingService.createAndPublishBlock (file:///usr/app/packages/validator/src/services/block.ts:153:7)
at async Promise.all (index 0)
```
**Description**
Do not attempt to reconstruct builder blocks: check not only for
`producedResult` but also the block type. In the case of `BlockType.Blinded`
we don't have the execution payload/blobs to reconstruct the full block,
which results in the error above.
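A minimal sketch of the guard described above (type and field names here are assumptions for illustration, not the exact Lodestar types):
```ts
enum BlockType {
  Full = "Full",
  Blinded = "Blinded",
}

interface ProducedBlock {
  blockType: BlockType;
  executionPayload?: unknown;
}

// Only attempt full-block reconstruction when we produced a full (engine) block locally;
// for blinded (builder) blocks the payload/blobs live with the relay, so reconstruction would fail.
function shouldReconstructFullBlock(producedBlock: ProducedBlock | undefined): boolean {
  return producedBlock !== undefined && producedBlock.blockType === BlockType.Full;
}
```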
Follow-up on #7687: we will want to apply a similar naming convention to the
sync committee when it comes to indices.
- `validatorSyncCommitteeIndices` should be the position indices in the
sync committee
- `syncCommitteeValidatorIndices` should be the validator indices (eg.
1063664) of the sync committee members
**Motivation**
We should be able to fetch the finalized data columns.
**Description**
- Fix the bucket id for the archived data column sidecars
- Fix the length check for the data column sidecars
**Steps to test or reproduce**
- Run all tests
**Motivation**
Make sure the deserialization of keys works as expected and matches the
original keys.
**Description**
- Fix db keys unwrapping
- Add e2e tests
**Steps to test or reproduce**
Run all tests
**Note**
These tests actually interact with I/O, which is why I categorized them as
e2e and not unit tests.
**Motivation**
Use the newer client versions to support only post-electra forks.
**Description**
- Update image versions
- Update the runner script
- Update tests
**Steps to test or reproduce**
- Run all tests
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
This is required to pass spec tests since the spec example was updated in
https://github.com/ethereum/beacon-APIs/pull/550
**Description**
Update `payload_attributes` event test data
**Note:** we already emit all required fields, no changes needed there
**Motivation**
- as shown in #8301, there could be leaked streams which cause the
connection to be closed
- it's good to know the number of streams opened vs closed so that we know
if we have leaked streams on a running node
**Description**
- track incoming/outgoing streams opened/closed by methods
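As a rough sketch, labeled prom-client counters along these lines can track opened vs closed streams per method (metric names below are assumptions, not the actual Lodestar metric names):
```ts
import {Counter} from "prom-client";

const streamsOpened = new Counter({
  name: "reqresp_streams_opened_total",
  help: "Number of reqresp streams opened",
  labelNames: ["direction", "method"] as const,
});

const streamsClosed = new Counter({
  name: "reqresp_streams_closed_total",
  help: "Number of reqresp streams closed",
  labelNames: ["direction", "method", "result"] as const,
});

// when a stream is opened for a request
streamsOpened.inc({direction: "incoming", method: "data_column_sidecars_by_range"});
// when the stream finishes (or errors), record the outcome
streamsClosed.inc({direction: "incoming", method: "data_column_sidecars_by_range", result: "success"});
```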
**Test**
It already showed that when we handle streams, the number of streams closed
successfully is less than the number of streams opened.
<img width="1529" height="658" alt="Screenshot 2025-09-02 at 15 57 25"
src="https://github.com/user-attachments/assets/7472b95e-34af-4707-8af5-d1591e085dba"
/>
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
Use the more type-aware version of Biome to benefit from its type-safety
rules.
**Description**
- Keep the rules matching the previous behavior
- Add an explanation to every ignore, as required by the new version
**Steps to test or reproduce**
Run all tests
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
Adds a bash script that allows one to launch a local Kurtosis testnet
based off of the changes made locally.
Might be useful to others -- if so, I can clean it up
**Description**
```
# start a testnet with local changes
./scripts/kurtosis/run.sh start
# stop the testnet and cleanup
./scripts/kurtosis/run.sh stop
```
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
The length check on the objects was missed.
**Description**
- A check on the length of the array was required
- Earlier we were returning `T[] | null`, later changed to `(T |
null)[]`, and some places missed the check; it's also odd that the linter
didn't catch it.
**Steps to test or reproduce**
- Run all tests
**Notes**
As stated by @nflaig, the specs say the following:
```
in the deneb spec it's pretty clear that we should skip in range requests
> Slots that do not contain known blobs MUST be skipped, mimicking the behaviour of the BlocksByRange request. Only response chunks with known blobs should therefore be sent.
similar note for columns
> Slots that do not contain known data columns MUST be skipped, mimicking the
> behaviour of the `BlocksByRange` request. Only response chunks with known data
> columns should therefore be sent.
```
So if we throw an error while serving data columns, we might miss serving
the ones which we do have.
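A minimal sketch of that behavior (hypothetical function, not the actual Lodestar handler): skip missing columns instead of throwing so that known columns are still served.
```ts
async function* serveDataColumnsByRange(
  getColumnsAtSlot: (slot: number) => Promise<(Uint8Array | null)[]>,
  startSlot: number,
  count: number
): AsyncGenerator<Uint8Array> {
  for (let slot = startSlot; slot < startSlot + count; slot++) {
    const columns = await getColumnsAtSlot(slot);
    for (const column of columns) {
      // per spec, slots/columns we don't have MUST be skipped rather than erroring
      if (column !== null) yield column;
    }
  }
}
```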
**Motivation**
- make e2e tests stable
- peers get disconnected in e2e tests
**Description**
- I was not able to run the `finalizeSync.test.ts` e2e tests in
`mkeil/refactor-block-input-on-unstable` until I found this option, added
in #7762
- sometimes I saw the same issue with the `unknownBlockSync.test.ts` e2e
test; I suppose this will help that test too since it uses the same utils
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
Add type support for Bun so we can start using `if (Bun)` in our
codebase.
**Description**
- Add `@types/bun` package
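For example, a minimal runtime check once the `Bun` global is typed (a sketch, not code from this PR):
```ts
// typeof guards against a ReferenceError when running under Node.js
const isBun = typeof Bun !== "undefined";

if (isBun) {
  console.log(`Running under Bun ${Bun.version}`);
} else {
  console.log("Running under Node.js");
}
```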
**Steps to test or reproduce**
- Run all tests
**Motivation**
The fork-choice package doesn't have its own metrics. Some fork-choice
metrics are scraped in the beacon chain through `onScrapeMetrics`. This PR
is part of the metrics refactoring in #7098. There is also a need to add new
fork-choice metrics for FOCIL.
**Description**
Fork-choice metrics were moved from the beacon node to the fork-choice package.
Part of #7098
**Steps to test or reproduce**
Check Lodestar Grafana dashboard with this branch running.
---------
Co-authored-by: Nico Flaig <nflaig@protonmail.com>
**Motivation**
While going through block production logs I noticed that in some rare cases
we sent header requests pretty delayed into the slot. But that's not what
our logs show: they indicate the header was sent at ~100ms while mev-boost
received it at ~600ms, which is a huge discrepancy. When further analysing
this I noticed that ~600ms is roughly the time we finished producing the
common block body. This means we can't really trust the logs here, as they
might be emitted while the actual HTTP request is not yet sent.
Since this issue is pretty rare and most of the time we send the requests
<100ms, I think it's a race condition that highly depends on I/O timing and
what other async/sync load is being processed.
Another interesting side effect is that when these huge delays happen (up to
500ms), I also noticed that builder HTTP requests run into the 1 second
timeout even though, from mev-boost's perspective, the response was sent
within time. My guess here is that the `fetch` implementation sets and
starts the timer, but since sending the actual HTTP request requires going
through the poll phase, it might not actually be sent yet.
Long story short, we need to wait until the next event loop iteration to
ensure I/O operations are processed before starting common block body
production, as it is synchronous and blocking.
**Description**
Defer the `produceCommonBlockBody` call to the next event loop iteration by
running it in a `setImmediate` callback.
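A minimal sketch of the idea (helper name is hypothetical, not the actual Lodestar code): defer the synchronous, blocking work to the next event loop iteration so pending I/O, such as the builder header HTTP request, gets flushed first.
```ts
function deferToNextEventLoop<T>(fn: () => T): Promise<T> {
  return new Promise((resolve, reject) => {
    setImmediate(() => {
      try {
        resolve(fn());
      } catch (e) {
        reject(e);
      }
    });
  });
}

// usage sketch: await deferToNextEventLoop(() => produceCommonBlockBody(/* ... */));
```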
Previous PR https://github.com/ChainSafe/lodestar/pull/7814
**Motivation**
- node is slow to sync due to dead requests tracked in the self rate limiter
**Description**
- track request ids in the self rate limiter
- if a request is older than 30s, it's considered dead and we remove it
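A rough sketch of that pruning logic (class and field names are assumptions, not the actual Lodestar rate limiter):
```ts
const DEAD_REQUEST_TIMEOUT_MS = 30_000;

class SelfRateLimiterSketch {
  // requestId -> start time in ms
  private readonly activeRequests = new Map<number, number>();

  trackRequest(requestId: number): void {
    this.activeRequests.set(requestId, Date.now());
  }

  completeRequest(requestId: number): void {
    this.activeRequests.delete(requestId);
  }

  // requests older than 30s are considered dead and no longer count against the limit
  pruneDeadRequests(): void {
    const now = Date.now();
    for (const [requestId, startedAt] of this.activeRequests) {
      if (now - startedAt > DEAD_REQUEST_TIMEOUT_MS) {
        this.activeRequests.delete(requestId);
      }
    }
  }

  get activeCount(): number {
    return this.activeRequests.size;
  }
}
```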
Closes #8263
part of #8256
**Test**
- was able to sync fusaka-devnet-3 with no self rate limited errors
<img width="1677" height="645" alt="Screenshot 2025-08-29 at 20 38 45"
src="https://github.com/user-attachments/assets/2388639e-7232-4941-a1bf-4b9ecac55a58"
/>
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
## Summary
This PR optimizes event listener count checks for
`routes.events.EventType` emissions, following the principle that
**listener count checks should only be added where there's expensive
preprocessing work before emit**, since `emit()` is already a no-op when
no listeners exist.
## Changes Made
### ✅ Added listener count checks (where expensive preprocessing
occurs):
**`packages/beacon-node/src/network/processor/gossipHandlers.ts`:**
- **`dataColumnSidecar`** emission: Added check to avoid
`kzgCommitments.map(toHex)` array mapping when no listeners
**`packages/beacon-node/src/chain/prepareNextSlot.ts`:**
- **`payloadAttributes`** emission: Added check to avoid `await
getPayloadAttributesForSSE()` async function call when no listeners
### ✅ Removed unnecessary listener count checks (where no expensive
preprocessing occurs):
**`packages/beacon-node/src/network/processor/gossipHandlers.ts`:**
- **`blockGossip`** emission: Removed check since it only uses existing
variables `{slot, block: blockRootHex}`
**`packages/beacon-node/src/api/impl/beacon/blocks/index.ts`:**
- **`blockGossip`** emission: Removed check since it's a simple emission
with existing variables
### ✅ Kept existing correct checks (for expensive operations):
- **`blobSidecar`** emissions: Keep checks for `toHex()` conversions and
`kzgCommitmentToVersionedHash()`
- **All loop-based emissions** in `importBlock.ts`: Keep checks for
`for` loop iterations
- **`block`** emission: Keep check for `isOptimisticBlock()` computation
## Performance Benefits
1. **Reduced CPU usage**: Expensive operations like async function calls
and array mapping only occur when needed
2. **Better resource utilization**: Memory allocations and computations
are avoided when events won't be consumed
3. **Cleaner code**: Removed redundant checks where the emit operation
itself is lightweight
## Key Principle Applied
**Only add listener count checks where there's expensive preprocessing
work before emission:**
- ✅ Function calls, array operations, crypto operations
- ❌ Simple emissions using existing variables
Since `emit()` is already a no-op when no listeners exist, explicit
checks are only beneficial when avoiding preprocessing overhead.
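An illustrative sketch of the principle (not the actual Lodestar handlers):
```ts
import {EventEmitter} from "node:events";

const emitter = new EventEmitter();

// Expensive preprocessing: only do the mapping work if someone is listening.
function emitDataColumnSidecar(kzgCommitments: Uint8Array[], toHex: (b: Uint8Array) => string): void {
  if (emitter.listenerCount("dataColumnSidecar") > 0) {
    emitter.emit("dataColumnSidecar", {kzgCommitments: kzgCommitments.map(toHex)});
  }
}

// Cheap emission: emit() is already a no-op with no listeners, so no check is needed.
function emitBlockGossip(slot: number, blockRootHex: string): void {
  emitter.emit("blockGossip", {slot, block: blockRootHex});
}
```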
## Testing
- ✅ Type checks pass
- ✅ Build succeeds
- ✅ Linting passes
Resolves #7996
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude <noreply@anthropic.com>
- `validatorCommitteeIndices` should be the position indices in the
committee
- `committeeValidatorIndices` should be the validator indices (eg.
1063664) of the committee members
---------
Co-authored-by: NC <17676176+ensi321@users.noreply.github.com>
**Motivation**
This log is way too verbose on devnet-3 right now
```
devops@lodestar-reth-1 ➜ ~ docker logs beacon 2>&1 | grep "Peer disconnected during identify protocol" | wc -l
113256
```
As suggested in
https://github.com/ChainSafe/lodestar/pull/8188#discussion_r2269948281
we should observe the error for a bit, and it seems to only be
`unexpected end of input`, which is not very useful.
**Description**
Remove error stacktrace from peer disconnected during identify protocol
logs
~~An alternative could be to add the message to the context or concat it to
the message; open to any of these if we still think including the error
message is valuable.~~ Went with still printing out the error message.
**Motivation**
Validate all types in the Bun runtime.
**Description**
- fix broken types for `datastore-level`
**Steps to test or reproduce**
- Run all tests
**Motivation**
Fix block production if block has 0 blobs
**Description**
Do not create data column sidecars if the block has no blobs, and return
early in data column validation if there are no data column sidecars to
validate.
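A minimal sketch of those early exits (helper names are hypothetical, not the exact Lodestar functions):
```ts
type DataColumnSidecar = {index: number; column: Uint8Array[]};

function computeDataColumnSidecars(
  blobs: Uint8Array[],
  buildSidecars: (blobs: Uint8Array[]) => DataColumnSidecar[]
): DataColumnSidecar[] {
  // a block with 0 blobs has no data column sidecars, so skip construction entirely
  if (blobs.length === 0) return [];
  return buildSidecars(blobs);
}

function validateDataColumnSidecars(
  sidecars: DataColumnSidecar[],
  verify: (sidecars: DataColumnSidecar[]) => void
): void {
  // nothing to validate for a 0-blob block; return early instead of erroring
  if (sidecars.length === 0) return;
  verify(sidecars);
}
```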
Closes https://github.com/ChainSafe/lodestar/issues/8276
**Motivation**
- improve the time to deserialize hex, especially for getBlobsV2() where
each blob is 131kb
- if the benchmark is correct, we can expect the time to deserialize a
blob from hex to go from `332.6842 ms/op` to more or less 2 ms
- will apply it to getBlobsV2() in the next PR by preallocating some
memory and reusing it for all slots
**Description**
- implement `fromHexInto()` using `String.charCodeAt()` for the browser and
use that for NodeJS as well
- the Buffer/NodeJS implementation is so bad that I only maintain it in
the benchmark
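A minimal sketch of a `charCodeAt`-based decoder writing into a preallocated buffer (signature and details are assumptions, not the exact implementation):
```ts
function hexCharToNibble(code: number): number {
  if (code >= 48 && code <= 57) return code - 48; // '0'-'9'
  if (code >= 97 && code <= 102) return code - 87; // 'a'-'f'
  if (code >= 65 && code <= 70) return code - 55; // 'A'-'F'
  throw new Error(`Invalid hex character code ${code}`);
}

function fromHexInto(hex: string, out: Uint8Array): void {
  const start = hex.startsWith("0x") ? 2 : 0;
  if (hex.length - start !== out.length * 2) throw new Error("Output buffer length mismatch");
  for (let i = 0; i < out.length; i++) {
    const hi = hexCharToNibble(hex.charCodeAt(start + i * 2));
    const lo = hexCharToNibble(hex.charCodeAt(start + i * 2 + 1));
    out[i] = (hi << 4) | lo;
  }
}
```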
**Test result on a regular lodestar node**
- `browser fromHexInto(blob)` is 1000x faster than `browser
fromHex(blob)` and >100x faster than `nodejs fromHexInto(blob) `
```
packages/utils/test/perf/bytes.test.ts
bytes utils
✔ nodejs block root to RootHex using toHex 2817687 ops/s 354.9010 ns/op - 1324 runs 0.897 s
✔ nodejs block root to RootHex using toRootHex 4369044 ops/s 228.8830 ns/op - 1793 runs 0.902 s
✔ nodejs fromhex(blob) 3.005854 ops/s 332.6842 ms/op - 10 runs 4.18 s
✔ nodejs fromHexInto(blob) 3.617654 ops/s 276.4222 ms/op - 10 runs 3.36 s
✔ browser block root to RootHex using the deprecated toHexString 1656696 ops/s 603.6110 ns/op - 963 runs 1.34 s
✔ browser block root to RootHex using toHex 2060611 ops/s 485.2930 ns/op - 424 runs 0.812 s
✔ browser block root to RootHex using toRootHex 2320476 ops/s 430.9460 ns/op - 889 runs 0.841 s
✔ browser fromHexInto(blob) 503.7166 ops/s 1.985243 ms/op - 10 runs 21.9 s
✔ browser fromHex(blob) 0.5095370 ops/s 1.962566 s/op - 10 runs 21.7 s
```
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
- when verifying DataColumnSidecars, we track inclusion proof
verification time but not kzg proofs verification time
**Description**
- also track kzg proofs verification time
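As a rough illustration, a prom-client histogram timer along these lines can wrap the KZG proof verification (metric name and wrapper function are assumptions, not the actual Lodestar code):
```ts
import {Histogram} from "prom-client";

const kzgProofVerificationTime = new Histogram({
  name: "data_column_kzg_proof_verification_seconds",
  help: "Time to verify KZG proofs of DataColumnSidecars",
  buckets: [0.001, 0.01, 0.1, 1],
});

async function verifyKzgProofsTimed(verify: () => Promise<boolean>): Promise<boolean> {
  const stopTimer = kzgProofVerificationTime.startTimer();
  try {
    return await verify();
  } finally {
    stopTimer();
  }
}
```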
part of #8260
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
Bumps [cipher-base](https://github.com/crypto-browserify/cipher-base)
from 1.0.4 to 1.0.6.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/browserify/cipher-base/blob/master/CHANGELOG.md">cipher-base's
changelog</a>.</em></p>
<blockquote>
<h2><a
href="https://github.com/browserify/cipher-base/compare/v1.0.5...v1.0.6">v1.0.6</a>
- 2024-11-26</h2>
<h3>Commits</h3>
<ul>
<li>[Fix] io.js 3.0 - Node.js 5.3 typed array support <a
href="b7ddd2ac24"><code>b7ddd2a</code></a></li>
</ul>
<h2><a
href="https://github.com/browserify/cipher-base/compare/v1.0.4...v1.0.5">v1.0.5</a>
- 2024-11-17</h2>
<h3>Commits</h3>
<ul>
<li>[Tests] standard -> eslint, make test dir, etc <a
href="ae02fd6624"><code>ae02fd6</code></a></li>
<li>[Tests] migrate from travis to GHA <a
href="66387d7146"><code>66387d7</code></a></li>
<li>[meta] fix package.json indentation <a
href="5c02918ac5"><code>5c02918</code></a></li>
<li>[Fix] return valid values on multi-byte-wide TypedArray input <a
href="8fd136432c"><code>8fd1364</code></a></li>
<li>[meta] add <code>auto-changelog</code> <a
href="88dc806806"><code>88dc806</code></a></li>
<li>[meta] add <code>npmignore</code> and
<code>safe-publish-latest</code> <a
href="7a137d749c"><code>7a137d7</code></a></li>
<li>Only apps should have lockfiles <a
href="42528f291d"><code>42528f2</code></a></li>
<li>[Deps] update <code>inherits</code>, <code>safe-buffer</code> <a
href="0e7a2d9a33"><code>0e7a2d9</code></a></li>
<li>[meta] add missing <code>engines.node</code> <a
href="f2dc13e47b"><code>f2dc13e</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="f5249f9461"><code>f5249f9</code></a>
v1.0.6</li>
<li><a
href="b7ddd2ac24"><code>b7ddd2a</code></a>
[Fix] io.js 3.0 - Node.js 5.3 typed array support</li>
<li><a
href="f03cebfdad"><code>f03cebf</code></a>
v1.0.5</li>
<li><a
href="88dc806806"><code>88dc806</code></a>
[meta] add <code>auto-changelog</code></li>
<li><a
href="7a137d749c"><code>7a137d7</code></a>
[meta] add <code>npmignore</code> and
<code>safe-publish-latest</code></li>
<li><a
href="5c02918ac5"><code>5c02918</code></a>
[meta] fix package.json indentation</li>
<li><a
href="8fd136432c"><code>8fd1364</code></a>
[Fix] return valid values on multi-byte-wide TypedArray input</li>
<li><a
href="66387d7146"><code>66387d7</code></a>
[Tests] migrate from travis to GHA</li>
<li><a
href="f2dc13e47b"><code>f2dc13e</code></a>
[meta] add missing <code>engines.node</code></li>
<li><a
href="0e7a2d9a33"><code>0e7a2d9</code></a>
[Deps] update <code>inherits</code>, <code>safe-buffer</code></li>
<li>Additional commits viewable in <a
href="https://github.com/crypto-browserify/cipher-base/compare/v1.0.4...v1.0.6">compare
view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a
href="https://www.npmjs.com/~ljharb">ljharb</a>, a new releaser for
cipher-base since your current version.</p>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
**Motivation**
- we want to test different sync scenarios post-electra and post-fulu so
that when we change the syncing strategy, we have e2e tests to confirm it
works well
**Description**
- fix various issues in the mock EL so it returns correct data
- e2e test starts from electra so it'll skip the "merge pow block"
logic
---------
Co-authored-by: Tuyen Nguyen <twoeths@users.noreply.github.com>
**Motivation**
Transition the code for compatibility with Bun.
**Description**
- The dependency `cpu-features` is not compatible with Bun
- Removed the direct dependency
- Upgrade `@chainsafe/persistent-merkle-tree` and `@chainsafe/ssz`
so the hasher detection is done implicitly.
- The latest commit for
[hashtree](e86a8b136a)
has support for a fallback, which is not used in
`@chainsafe/persistent-merkle-tree`
**Steps to test or reproduce**
Run all tests
**Motivation**
We currently don't print out the current blob limit anywhere, but it would
be good to know for debugging and just to inform the user when a BPO is
activated and what the new blob parameters are.
**Description**
Print out blob parameters if BPO fork is activated
---------
Co-authored-by: NC <17676176+ensi321@users.noreply.github.com>