#### This PR sets the foundation for the new logging features.
---
The goals of this big PR are the following:
1. Add a log.go file to every package:
[_commit_](54f6396d4c)
- Write a bash script that adds the log.go file to every package that
imports logrus, except the excluded packages configured at the top of
the bash script.
- The log.go file creates a log variable and sets a field called
`package` to the full path of that package (see the sketch after this
list).
- I have tried to fix every error/problem that came from mass generation
of this file (duplicate declarations, different prefix names, etc.).
- Some packages already had a log.go file, with some helper functions in
it as well. I've moved all of those to a `log_helpers.go` file within
each package.
2. Create a CI rule which verifies that:
[_commit_](b799c3a0ef)
- Every package which imports logrus also has a log.go file, except the
excluded packages.
- The `package` field of each log.go variable has the correct path (to
detect when we move a package or change its name).
- I pushed a commit with a manually changed log.go file to trigger the
CI check failure, and it worked.
3. Alter the logging system to read the prefix for every log from this
`package` field while outputting:
[_commit_](b0c7f1146c)
- Some packages have/want/need a different log prefix than their package
name (like `kv`). This can be solved by keeping a map of package paths
to prefix names somewhere.
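For reference, a minimal sketch of what a generated log.go could look like,
using `beacon-chain/db/kv` as an example; the exact template and field value
come from the bash script, so treat this as illustrative only:
```
// log.go -- generated by hack/gen-logs.sh (illustrative sketch, not the exact template).
package kv

import "github.com/sirupsen/logrus"

// log carries a `package` field set to the package's full path, so the output
// layer can derive the log prefix from it.
var log = logrus.WithField("package", "beacon-chain/db/kv")
```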
---
**Some notes:**
- Please review everything carefully.
- I created the `prefixReplacement` map and populated it with the data I
deemed necessary (see the sketch after these notes). Please check it and
complain if something doesn't make sense or is missing. At the bottom, I
attached the list of all the packages that used to use a name different
from their package name as their prefix.
- I have chosen to mark some packages as excluded from this whole
process. They will either not log anything, log without a prefix, or
log using their previously defined prefix. See the list of exclusions at
the bottom.
- I fixed all the tests that failed because of this change. They were
failing because they expected the old prefix in the generated logs. I
have changed them to expect the new `package` field instead. This might
not be a great solution; ideally we might want to remove this from the
tests so they only check the relevant fields in the logs, but this is a
problem for another day.
- Please run the node with this config, and mention if you see anything
weird in the logs (use different verbosities).
- The CI workflow uses a script, `hack/check-logs.sh`, that basically
runs `hack/gen-logs.sh` and checks that the git diff is empty. This
means that if one runs this script locally, it will not actually _check_
anything; instead, it will just regenerate the log.go files and fix any
mistakes. This might be confusing. Please suggest solutions if you think
it's a problem.
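As a rough illustration of the replacement idea mentioned above (not the PR's
actual code; the package name and helper are made up, and the map entries are
taken from the list below):
```
package logutil // illustrative package name

// prefixReplacement maps a package path (as written into the `package` field
// by the generated log.go files) to the prefix that should be printed instead.
var prefixReplacement = map[string]string{
	"beacon-chain/db/kv":           "db",
	"beacon-chain/core/transition": "state",
	"beacon-chain/state/stategen":  "state-gen",
}

// prefixFor returns the prefix to print for a given package path, falling back
// to the path itself when no replacement is configured.
func prefixFor(packagePath string) string {
	if prefix, ok := prefixReplacement[packagePath]; ok {
		return prefix
	}
	return packagePath
}
```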
---
**A list of packages that used a different prefix than their package
names for their logs:**
- beacon-chain/cache/depositsnapshot/ — package depositsnapshot, prefix
"cache"
- beacon-chain/core/transition/log.go — package transition, prefix
"state"
- beacon-chain/db/kv/log.go — package kv, prefix "db"
- beacon-chain/db/slasherkv/log.go — package slasherkv, prefix
"slasherdb"
- beacon-chain/db/pruner/pruner.go — package pruner, prefix "db-pruner"
- beacon-chain/light-client/log.go — package light_client, prefix
"light-client"
- beacon-chain/operations/attestations/log.go — package attestations,
prefix "pool/attestations"
- beacon-chain/operations/slashings/log.go — package slashings, prefix
"pool/slashings"
- beacon-chain/rpc/core/log.go — package core, prefix "rpc/core"
- beacon-chain/rpc/eth/beacon/log.go — package beacon, prefix
"rpc/beaconv1"
- beacon-chain/rpc/eth/validator/log.go — package validator, prefix
"beacon-api"
- beacon-chain/rpc/prysm/v1alpha1/beacon/log.go — package beacon, prefix
"rpc"
- beacon-chain/rpc/prysm/v1alpha1/validator/log.go — package validator,
prefix "rpc/validator"
- beacon-chain/state/stategen/log.go — package stategen, prefix
"state-gen"
- beacon-chain/sync/checkpoint/log.go — package checkpoint, prefix
"checkpoint-sync"
- beacon-chain/sync/initial-sync/log.go — package initialsync, prefix
"initial-sync"
- cmd/prysmctl/p2p/log.go — package p2p, prefix "prysmctl-p2p"
- config/features/log.go — package features, prefix "flags"
- io/file/log.go — package file, prefix "fileutil"
- proto/prysm/v1alpha1/log.go — package eth, prefix "protobuf"
- validator/client/beacon-api/log.go — package beacon_api, prefix
"beacon-api"
- validator/db/kv/log.go — package kv, prefix "db"
- validator/db/filesystem/db.go — package filesystem, prefix "db"
- validator/keymanager/derived/log.go — package derived, prefix
"derived-keymanager"
- validator/keymanager/local/log.go — package local, prefix
"local-keymanager"
- validator/keymanager/remote-web3signer/log.go — package
remote_web3signer, prefix "remote-keymanager"
- validator/keymanager/remote-web3signer/internal/log.go — package
internal, prefix "remote-web3signer-internal"
- beacon-chain/forkchoice/doubly... — prefix
"forkchoice-doublylinkedtree"
**List of excluded directories (their subdirectories are also
excluded):**
```
EXCLUDED_PATH_PREFIXES=(
"testing"
"validator/client/testutil"
"beacon-chain/p2p/testing"
"beacon-chain/rpc/eth/config"
"beacon-chain/rpc/prysm/v1alpha1/debug"
"tools"
"runtime"
"monitoring"
"io"
"cmd"
".well-known"
"changelog"
"hack"
"specrefs"
"third_party"
"bazel-out"
"bazel-bin"
"bazel-prysm"
"bazel-testlogs"
"build"
".github"
".jj"
".idea"
".vscode"
)
```
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
Before this PR, all `.sszs` files containing the data column sidecars
were read and processed sequentially, which took some time.
After this PR, all `.sszs` files of a given epoch (so, up to 32 files
with the current `SLOTS_PER_EPOCH` value) are processed in parallel.
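As a rough sketch of the parallel pattern (not the PR's actual code; the
package name and the `process` callback are placeholders for the real per-file
work):
```
package filesystem // illustrative

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// warmUpEpoch processes every .sszs file of one epoch concurrently instead of
// sequentially. The first error cancels the shared context for the other files.
func warmUpEpoch(ctx context.Context, files []string, process func(context.Context, string) error) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, f := range files {
		f := f // capture the loop variable for the goroutine
		g.Go(func() error {
			return process(ctx, f)
		})
	}
	return g.Wait()
}
```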
**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16204
Tested on a [Netcup VPS 4000 G11](https://www.netcup.com/en/server/vps).
**Before this PR (3 trials)**:
```
[2026-01-02 08:55:12.71] INFO filesystem: Data column filesystem cache warm-up complete elapsed=1m22.894007534s
[2026-01-02 12:59:33.62] INFO filesystem: Data column filesystem cache warm-up complete elapsed=42.346732863s
[2026-01-02 13:03:13.65] INFO filesystem: Data column filesystem cache warm-up complete elapsed=56.143565960s
```
**After this PR (3 trials)**:
```
[2026-01-02 12:50:07.53] INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.019424193s
[2026-01-02 12:52:01.34] INFO filesystem: Data column filesystem cache warm-up complete elapsed=1.960671225s
[2026-01-02 12:53:34.66] INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.549555363s
```
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Optimisation
**What does this PR do? Why is it needed?**
While constructing data column sidecars from the execution layer is very
cheap compared to reconstructing them from other data column sidecars,
it is still worthwhile to run this construction in parallel.
(**Reminder:** With `getBlobsV2`, all the cell proofs are present, but
only 64 (out of 128) cells are. Recomputing the missing cells is cheap,
while reconstructing the missing proofs is expensive.)
This PR:
- adds some metrics
- ensures the construction is done in parallel
**Other notes for review**
Please read commit by commit.
The red vertical lines mark the boundary between before and after this
pull request:
<img width="1575" height="603" alt="image"
src="https://github.com/user-attachments/assets/24811b1b-8e3c-4bf5-ac82-f920d385573a"
/>
The last commit transforms the bottom-right histogram into a summary,
since it no longer makes sense to have a histogram for these values.
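For context, a summary in the Prometheus Go client is declared roughly like
this; the metric name is illustrative, not the PR's actual one:
```
package metrics // illustrative

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// A summary tracks a running count/sum (and optionally quantiles) of observed
// values, which fits better than fixed histogram buckets for these values.
var constructionLatency = promauto.NewSummary(prometheus.SummaryOpts{
	Name: "data_column_sidecar_construction_seconds", // illustrative name
	Help: "Time spent constructing data column sidecars from the execution client.",
})
```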
Please check "hide whitespace" so this PR is easier to review:
<img width="229" height="196" alt="image"
src="https://github.com/user-attachments/assets/548cb2f4-b6f4-41d1-b3b3-4d4c8554f390"
/>
Updated metrics:
Now, for every **non-missed slot** whose block has **at least one
commitment**, we log either:
```
[2025-12-10 10:02:12.93] DEBUG sync: Constructed data column sidecars from the execution client count=118 indices=0-5,7-16,18-27,29-35,37-46,48-49,51-82,84-100,102-106,108-125,127 iteration=0 proposerIndex=855082 root=0xf8f44e7d4cbc209b2ff2796c07fcf91e85ab45eebe145c4372017a18b25bf290 slot=1928961 type=BeaconBlock
```
or
```
[2025-12-10 10:02:25.69] DEBUG sync: No data column sidecars constructed from the execution client iteration=2 proposerIndex=1093657 root=0x64c2f6c31e369cd45f2edaf5524b64f4869e8148cd29fb84b5b8866be529eea3 slot=1928962 type=DataColumnSidecar
```
<img width="1581" height="957" alt="image"
src="https://github.com/user-attachments/assets/514dbdae-ef14-47e2-9127-502ac6d26bc0"
/>
<img width="1596" height="916" alt="image"
src="https://github.com/user-attachments/assets/343d4710-4191-49e8-98be-afe70d5ffe1c"
/>
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
This pull request removes the `NUMBER_OF_COLUMNS` and
`MAX_CELLS_IN_EXTENDED_MATRIX` configuration values.
**Other notes for review**
Please read commit by commit, with commit messages.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
| Feature | Semi-Supernode | Supernode |
| ----------------------- | ------------------------- | ------------------------ |
| **Custody Groups** | 64 | 128 |
| **Data Columns** | 64 | 128 |
| **Storage** | ~50% | ~100% |
| **Blob Reconstruction** | Yes (via Reed-Solomon) | No reconstruction needed |
| **Flag** | `--semi-supernode` | `--supernode` |
| **Can serve all blobs** | Yes (with reconstruction) | Yes (directly) |
**Note:** if your validators' total effective balance results in a higher
custody requirement than the semi-supernode's, that higher requirement
overrides it.
cgc=64 rationale (from @nalepae):
Pro:
- We are useful to the network
- Less disconnection likelihood
- Straightforward to implement
Con:
- We cannot revert to a full node
- We have to serve incoming RPC requests corresponding to 64 columns
Tested the following using this kurtosis setup:
```
participants:
# Super-nodes
- el_type: geth
el_image: ethpandaops/geth:master
cl_type: prysm
vc_image: gcr.io/offchainlabs/prysm/validator:latest
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
cl_extra_params:
- --supernode
vc_extra_params:
- --verbosity=debug
# Full-nodes
- el_type: geth
el_image: ethpandaops/geth:master
cl_type: prysm
vc_image: gcr.io/offchainlabs/prysm/validator:latest
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
validator_count: 1
cl_extra_params:
- --semi-supernode
vc_extra_params:
- --verbosity=debug
additional_services:
- dora
- spamoor
spamoor_params:
image: ethpandaops/spamoor:master
max_mem: 4000
spammers:
- scenario: eoatx
config:
throughput: 200
- scenario: blobs
config:
throughput: 20
network_params:
fulu_fork_epoch: 0
withdrawal_type: "0x02"
preset: mainnet
global_log_level: debug
```
```
curl -H "Accept: application/json" http://127.0.0.1:32961/eth/v1/node/identity
{"data":{"peer_id":"16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","enr":"enr:-Ni4QIH5u2NQz17_pTe9DcCfUyG8TidDJJjIeBpJRRm4ACQzGBpCJdyUP9eGZzwwZ2HS1TnB9ACxFMQ5LP5njnMDLm-GAZqZEXjih2F0dG5ldHOIAAAAAAAwAACDY2djQIRldGgykLZy_whwAAA4__________-CaWSCdjSCaXCErBAAE4NuZmSEAAAAAIRxdWljgjLIiXNlY3AyNTZrMaECulJrXpSOBmCsQWcGYzQsst7r3-Owlc9iZbEcJTDkB6qIc3luY25ldHMFg3RjcIIyyIN1ZHCCLuA","p2p_addresses":["/ip4/172.16.0.19/tcp/13000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","/ip4/172.16.0.19/udp/13000/quic-v1/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"discovery_addresses":["/ip4/172.16.0.19/udp/12000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"metadata":{"seq_number":"3","attnets":"0x0000000000300000","syncnets":"0x05","custody_group_count":"64"}}}
```
```
curl -s http://127.0.0.1:32961/eth/v1/debug/beacon/data_column_sidecars/head | jq '.data | length'
64
```
```
curl -X 'GET' \
'http://127.0.0.1:32961/eth/v1/beacon/blobs/head' \
-H 'accept: application/json'
```
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for reviewers to understand this PR.
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: james-prysm <jhe@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
This PR integrates state-diff into `HasState()`.
One thing to note: we assume that, for a given block root that has
either a state summary or a block in the DB and that also falls in the
state diff tree, a state must exist. This function could therefore
return true even when no actual state was saved because of some error.
But this is fine, because we rely on that assumption throughout the
whole state diff feature.
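A minimal sketch of that assumption in code, with the lookup functions
standing in for the real DB checks (they are assumptions, not Prysm's actual
API):
```
package statediff // illustrative

import "context"

// hasState reports whether a state can be produced for the given block root
// when the state-diff path is enabled.
func hasState(
	ctx context.Context,
	root [32]byte,
	hasStateSummary, hasBlock, inDiffTree func(context.Context, [32]byte) bool,
) bool {
	// A state summary or a stored block is a prerequisite.
	if !hasStateSummary(ctx, root) && !hasBlock(ctx, root) {
		return false
	}
	// If the root falls inside the state-diff tree, assume a state exists.
	// This can be a false positive if an earlier write failed; the state-diff
	// feature accepts that assumption throughout.
	return inDiffTree(ctx, root)
}
```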
**Review after #16033 is merged**
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
This PR refactors the code that finds the slot corresponding to a given
block root, using the state summary or the block itself, into its own
function `SlotByBlockRoot(ctx, blockroot)`.
Note that a function `slotByBlockRoot(ctx context.Context,
tx *bolt.Tx, blockRoot []byte)` already exists immediately below the new
function. That old function has drawbacks which led to the creation of
the new one:
- The old function requires a boltdb tx, which is not necessarily
available to the caller.
- The old function does NOT make use of the state summary cache.
Edit:
- The old function also uses the state bucket to retrieve the state and
its slot. This is not something we want in the state diff feature,
since there is no state bucket.
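A rough sketch of the new helper's shape; the two lookup callbacks stand in
for the state summary (cache/DB) read and the block read, and are assumptions
rather than the actual implementation:
```
package kv // illustrative

import (
	"context"
	"errors"
)

// slotByRoot resolves the slot of a block root from the state summary first
// (no bolt tx, served from the cache when possible) and falls back to the
// block itself; the state bucket is never touched.
func slotByRoot(
	ctx context.Context,
	root [32]byte,
	summarySlot, blockSlot func(context.Context, [32]byte) (uint64, bool),
) (uint64, error) {
	if slot, ok := summarySlot(ctx, root); ok {
		return slot, nil
	}
	if slot, ok := blockSlot(ctx, root); ok {
		return slot, nil
	}
	return 0, errors.New("no state summary or block found for root")
}
```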
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
This PR integrates the state diff path into the `State()` function from
`db/kv`, which allows reading states via the state diff DB when the
`EnableStateDiff` flag is enabled.
**Notes for reviewers:**
Files `kv/state_diff_test.go` and `config/features/config.go` only
contain renamings:
- `kv/state_diff_test.go`: rename `setDefaultExponents()` to
`setDefaultStateDiffExponents()` to be less vague.
- `config/features/config.go`: rename `enableStateDiff` to
`EnableStateDiff` to make it public.
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
This PR fixes a bug in the state diff `getBaseAndDiffChain()` method. In
the process of finding the diff chain indices, there could be a scenario
where multiple levels return the same diff slot number not equal to the
base (full snapshot) slot number.
This resulted in multiple diff items being added to the diff chain for
the same slot, but on different levels, which caused errors when reading
the diff.
Fix: we keep a `lastSeenAnchorSlot`, initialized to the
`BaseAnchorSlot`, and update it every time we see a new anchor slot. We
skip a level if the anchor slot found there equals `lastSeenAnchorSlot`.
Scenario example:
- exponents: [20, 14, 10, 7, 5]
- offset: 0
- slots saved: 2^11 and (2^11 + 2^5)
- slot to read: (2^11 + 2^5)
- resulting list of anchor slots (diff chain indices): [0, 0, 2^11, 2^11,
2^11 + 2^5]
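A minimal sketch of the de-duplication described above (illustrative names,
not the actual method):
```
package statediff // illustrative

// buildDiffChain drops anchor slots that repeat across levels, so the same
// diff is never added to the chain twice.
func buildDiffChain(baseAnchorSlot uint64, anchorSlots []uint64) []uint64 {
	lastSeenAnchorSlot := baseAnchorSlot
	chain := make([]uint64, 0, len(anchorSlots))
	for _, anchorSlot := range anchorSlots {
		if anchorSlot == lastSeenAnchorSlot {
			// Same anchor slot as the base or a previous level: skip it.
			continue
		}
		lastSeenAnchorSlot = anchorSlot
		chain = append(chain, anchorSlot)
	}
	return chain
}
```
With the scenario above, the anchor slots [0, 0, 2^11, 2^11, 2^11 + 2^5] with
base 0 reduce to [2^11, 2^11 + 2^5], so each diff appears only once.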
* Implement `AvailableBlocks`.
* `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
* `dataColumnSidecarByRootRPCHandler`: Do not do extra work if only needed for TRACE logging.
* `TestDataColumnSidecarsByRootRPCHandler`: Re-arrange (no functional change).
* `TestDataColumnSidecarsByRootRPCHandler`: Save blocks corresponding to sidecars into DB.
* `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
* Add changelog
* `TestDataColumnSidecarsByRootRPCHandler`: Use `assert` instead of `require` in goroutines (see the sketch below).
https://github.com/stretchr/testify?tab=readme-ov-file#require-package
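For context, `require` calls `t.FailNow`, which must only run on the goroutine
running the test; inside spawned goroutines, `assert` records the failure
without aborting. A minimal sketch (not the actual test):
```
package rpctest // illustrative

import (
	"sync"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestWorkersUseAssert(t *testing.T) {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// assert marks the test as failed without calling t.FailNow, which is
		// only safe from the test's own goroutine.
		assert.Equal(t, 4, 2+2)
	}()
	wg.Wait()
}
```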
* default new blob storage layouts to by-epoch
also, do not log migration message until we see a directory that needs to be migrated
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* manu feedback
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update Earliest available slot when pruning
* bazel run //:gazelle -- fix
* custodyUpdater interface to avoid import cycle
* bazel run //:gazelle -- fix
* simplify test
* separation of concerns
* debug log for updating eas
* UpdateEarliestAvailableSlot function in CustodyManager
* fix test
* UpdateEarliestAvailableSlot function for FakeP2P
* lint
* UpdateEarliestAvailableSlot instead of UpdateCustodyInfo + check for Fulu
* fix test and lint
* bugfix: enforce minimum retention period in pruner
* remove MinEpochsForBlockRequests function and use from config
* remove modifying earliest_available_slot after data column pruning
* correct earliestAvailableSlot validation: allow backfill decrease but prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS
* lint
* bazel run //:gazelle -- fix
* lint and remove unwanted debug logs
* Return a wrapped error, and let the caller decide what to do
* fix tests because updateEarliestSlot returns error now
* avoid re-doing computation in the test function
* lint and correct changelog
* custody updater should be a mandatory part of the pruner service
* ensure never increase eas if we are in the block requests window
* slot level granularity edge case
* update the value stored in the DB
* log tidy up
* use errNoCustodyInfo
* allow earliestAvailableSlot edit when custodyGroupCount doesn't change
* undo the minimal config change
* add context to CustodyGroupCount after merging from develop
* cosmetic change
* shift responsibility from caller to callee, protection for updateEarliestSlot. UpdateEarliestAvailableSlot returns cgc
* allow increase in earliestAvailableSlot only when custodyGroupCount also increases
* remove CustodyGroupCount as it is no longer needed as UpdateEarliestAvailableSlot returns cgc now
* proper place for log and name refactor
* test for Nil custody info
* allow decreasing earliest slot in DB (just like in memory)
* invert if statement to make more readable
* UpdateEarliestAvailableSlot for DB (equivalent of p2p's UpdateEarliestAvailableSlot) & undo changes made to UpdateCustodyInfo
* in UpdateEarliestAvailableSlot, no need to return unused values
* no need to log stored group count
* log.WithField instead of log.WithFields
* Fixed metadata extraction on Windows by correctly splitting file paths
* `TestExtractFileMetadata`: Refactor a bit.
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions.
* Fix CI
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Change wrap messages to avoid the "could not ...: could not ...: could not ..." effect.
Reference: https://github.com/uber-go/guide/blob/master/style.md#error-wrapping.
* Log: remove the period at the end of the last sentence.
* Dirty quick fix to ensure that the custody group count is set at P2P service start.
A real fix would involve a channel to implement a proper synchronization scheme.
* Add changelog.
* `reconstructSaveBroadcastDataColumnSidecars`: Use `s.columnIndicesToSample` instead of re-coding its content.
* Rename `custodyColumns` ==> `columnIndicesToSample`.
* `DataColumnStorage.Save`: Remove wrong godoc.
* Implement `receiveDataColumnSidecars` and transform `receiveDataColumnSidecar` as a subcase of the plural version.
* `dataColumnSubscriber`: Add godoc and remove only once used variable.
* `processDataColumnSidecarsFromExecution`: Use single flight directly in the function,
so the caller no longer has the responsibility of dealing with multiple simultaneous calls (see the singleflight sketch after this commit list).
* `processDataColumnSidecarsFromReconstruction`: Guard with a single flight.
In `dataColumnSubscriber`, trigger `processDataColumnSidecarsFromReconstruction` and `processDataColumnSidecarsFromExecution` in parallel.
Stop when the first of them succeeds.
* `processDataColumnSidecarsFromExecution`: Use `receiveDataColumnSidecars` instead of `receiveDataColumnSidecar`.
* Implement and use `broadcastAndReceiveUnseenDataColumnSidecars`.
* Add changelog.
* Fix James' comment.
* Fix James' comment.
* `processDataColumnSidecarsFromReconstruction`: Log reconstruction duration.
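For reference, the single-flight guarding mentioned above can be expressed
with `golang.org/x/sync/singleflight`; this is a hedged sketch with
placeholder names, not the PR's actual code:
```
package sync // illustrative

import (
	"context"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// processOnce collapses concurrent calls for the same block root into a single
// execution; the other callers wait for and share its result. `process` stands
// in for the real sidecar construction.
func processOnce(ctx context.Context, root [32]byte, process func(context.Context) error) error {
	_, err, _ := group.Do(string(root[:]), func() (interface{}, error) {
		return nil, process(ctx)
	})
	return err
}
```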
* initialize genesis data asap at node start
* add genesis validation tests with embedded state verification
* Add test for hardcoded mainnet genesis validator root and time from init() function
* Add test for UnmarshalState in encoding/ssz/detect/configfork.go
* Add tests for genesis.Initialize
* Move genesis/embedded to genesis/internal/embedded
* Gazelle / BUILD fix
* James feedback
* Fix lint
* Revert lock
---------
Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Add entry for sequence number in chain-metadata bucket & Basic getter/setter
* Mark p2p-metadata flag as deprecated
* Fix metaDataFromConfig: use DB instead to get seqnum
* Save sequence number after updating the metadata
* Fix beacon-chain/p2p unit tests: add DB in config
* Add changelog
* Add ReadOnlyDatabaseWithSeqNum
* Code suggestion from Manu
* Remove seqnum getter at interface
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* Convert genesis times from seconds to time.Time
* Fixing failed forkchoice tests in a new commit so it doesn't get worse
Fixing failed spectest tests in a new commit so it doesn't get worse
Fixing forkchoice tests, then spectests
* Fixing forkchoice tests, then spectests. Now asking for help...
* Fix TestForkChoice_GetProposerHead
* Fix broken build
* Resolve TODO(preston) items
* Changelog fragment
* Resolve TODO(preston) items again
* Resolve lint issues
* Use consistent field names for sinceSlotStart (no spaces)
* Manu's feedback
* Renamed StartTime -> UnsafeStartTime, marked as deprecated because it doesn't handle overflow scenarios.
Renamed SlotTime -> StartTime
Renamed SlotAt -> At
Handled the error in cases where StartTime was used.
@james-prysm feedback
* Revert beacon-chain/blockchain/receive_block_test.go from 1b7844de
* Fixing issues after rebase
* Accepted suggestions from @potuz
* Remove CanonicalHeadSlot from merge conflicts
---------
Co-authored-by: potuz <potuz@prysmaticlabs.com>
* Add log capitalization analyzer and apply fixes across codebase
Implements a new nogo analyzer to enforce proper log message capitalization and applies the fixes to all affected log statements throughout the beacon chain, validator, and supporting components (a minimal sketch of such an analyzer follows below).
Co-Authored-By: Claude <noreply@anthropic.com>
* Radek's feedback
---------
Co-authored-by: Claude <noreply@anthropic.com>
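A minimal sketch of what such a capitalization analyzer can look like with
`golang.org/x/tools/go/analysis`; the detection logic here is simplified and
illustrative, not the PR's actual analyzer:
```
package logcap // illustrative

import (
	"go/ast"
	"go/token"
	"strings"
	"unicode"

	"golang.org/x/tools/go/analysis"
)

var Analyzer = &analysis.Analyzer{
	Name: "logcap",
	Doc:  "checks that log messages start with an uppercase letter",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			call, ok := n.(*ast.CallExpr)
			if !ok || len(call.Args) == 0 {
				return true
			}
			sel, ok := call.Fun.(*ast.SelectorExpr)
			if !ok || !isLogMethod(sel.Sel.Name) {
				return true
			}
			lit, ok := call.Args[0].(*ast.BasicLit)
			if !ok || lit.Kind != token.STRING {
				return true
			}
			// Strip the surrounding quotes/backticks of the string literal.
			msg := []rune(strings.Trim(lit.Value, "`\""))
			if len(msg) > 0 && unicode.IsLower(msg[0]) {
				pass.Reportf(lit.Pos(), "log message should start with an uppercase letter")
			}
			return true
		})
	}
	return nil, nil
}

// isLogMethod naively treats any method with a logrus-style name as a log call.
func isLogMethod(name string) bool {
	switch name {
	case "Info", "Infof", "Warn", "Warnf", "Error", "Errorf", "Debug", "Debugf", "Trace", "Tracef":
		return true
	}
	return false
}
```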
* Add the new Fulu state with the new field
* fix the hasher for the fulu state
* Fix ToProto() and ToProtoUnsafe()
* Add the fields as shared
* Add epoch transition code
* short circuit the proposer cache to use the state
* Marshal the state JSON
* update spectests to 1.6.0-alpha.1
* Remove deneb and electra entries from blob schedule
This was cherry picked from PR #15364
and edited to remove the minimal cases
* Fix minimal tests
* Increase deadline for processing blocks in spectests
* Preston's review
* review
---------
Co-authored-by: terence tsao <terence@prysmaticlabs.com>