Replace the proposer indices cache usage in data column sidecar
verification with direct state lookahead access. Since data column
sidecars require the Fulu fork, the state always has a ProposerLookahead
field that provides O(1) proposer index lookups for current and next
epoch.
This simplifies SidecarProposerExpected() by removing:
- Checkpoint-based proposer cache lookup
- Singleflight wrapper (not needed for O(1) access)
- Target root computation for cache keys
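Below is a minimal Go sketch of the idea, with simplified types and hypothetical names (the real lookahead field and index math live in Prysm's state package): the expected proposer becomes a direct slice index into the lookahead window instead of a checkpoint-keyed cache lookup.

```go
package main

import "errors"

// expectedProposer is a sketch: after Fulu the state carries a proposer
// lookahead covering the current and next epoch, so finding the expected
// proposer for a sidecar slot is a direct O(1) slice index.
func expectedProposer(lookahead []uint64, slot, currentEpochStartSlot uint64) (uint64, error) {
	if slot < currentEpochStartSlot {
		return 0, errors.New("slot is before the lookahead window")
	}
	offset := slot - currentEpochStartSlot
	if offset >= uint64(len(lookahead)) {
		return 0, errors.New("slot is after the lookahead window")
	}
	return lookahead[offset], nil
}
```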
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
Before this PR, all `.sszs` files containing the data column sidecars
were read and processed sequentially, taking some time.
After this PR, every `.sszs` file of a given epoch (so, up to 32 files
with the current `SLOTS_PER_EPOCH` value) is processed in parallel.
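A minimal sketch of the approach, assuming a hypothetical `processFile` helper (this is not the actual Prysm warm-up code): the files of one epoch are handed to an errgroup, so the warm-up waits only for the slowest file instead of the sum of all of them.

```go
package main

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// warmUpEpoch processes every .sszs file of a single epoch concurrently.
func warmUpEpoch(ctx context.Context, files []string, processFile func(context.Context, string) error) error {
	eg, ctx := errgroup.WithContext(ctx)
	for _, f := range files {
		f := f // capture the loop variable for the goroutine
		eg.Go(func() error { return processFile(ctx, f) })
	}
	return eg.Wait() // first error, if any
}
```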
**Which issues(s) does this PR fix?**
- https://github.com/OffchainLabs/prysm/issues/16204
Tested on - [Netcup VPS 4000 G11](https://www.netcup.com/en/server/vps).
**Before this PR (3 trials)**:
```
[2026-01-02 08:55:12.71] INFO filesystem: Data column filesystem cache warm-up complete elapsed=1m22.894007534s
[2026-01-02 12:59:33.62] INFO filesystem: Data column filesystem cache warm-up complete elapsed=42.346732863s
[2026-01-02 13:03:13.65] INFO filesystem: Data column filesystem cache warm-up complete elapsed=56.143565960s
```
**After this PR (3 trials)**:
```
[2026-01-02 12:50:07.53] INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.019424193s
[2026-01-02 12:52:01.34] INFO filesystem: Data column filesystem cache warm-up complete elapsed=1.960671225s
[2026-01-02 12:53:34.66] INFO filesystem: Data column filesystem cache warm-up complete elapsed=2.549555363s
```
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
There is a race condition introduced in #16149 in which the update to
the NSC happens with a context that may be cancelled by the time the
routine is called. This PR starts a new context with a deadline to call
the routine in the background.
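A minimal sketch of the fix, with a hypothetical update function and an arbitrary timeout value: the background goroutine gets a fresh context with its own deadline instead of reusing the caller's context, which may already be cancelled by the time the goroutine runs.

```go
package main

import (
	"context"
	"log"
	"time"
)

// updateNSCInBackground runs the next-slot-cache update with its own deadline.
func updateNSCInBackground(update func(context.Context) error) {
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Second)
		defer cancel()
		if err := update(ctx); err != nil {
			log.Printf("could not update next slot cache: %v", err)
		}
	}()
}
```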
Fixes #16205
This PR introduces several simplifications to block processing.
It calls to notify the engine in the background when forkchoice needs to
be updated.
It no longer updates the caches and process epoch transition before
computing payload attributes, since this is no longer needed after Fulu.
It removes a complicated second call to FCU with the same head after
processing the last slot of the epoch.
Some checks for reviewers:
- The single caller of sendFCU held a lock on forkchoice. Since the call
is now made in the background, this helper can acquire the lock itself.
- All paths to handleEpochBoundary are now **NOT** locked. This allows
the lock needed to get the target root to be taken locally, in place.
- The checkpoint cache is completely useless and thus the target root
call could be removed. But removing the proposer ID cache is more
complicated and out of scope for this PR.
- lateBlockTasks has pre- and post-Fulu cases; we could remove the
pre-Fulu checks and defer to the update function if deemed cleaner.
- Conversely, postBlockProcess does not have this casing, so pre-Fulu
blocks received on gossip may fail to be proposed correctly because the
proposer is not correctly computed.
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
Validation of `--backfill-oldest-slot` fails for values > 1056767,
because the validation code is comparing the slot/32 to
`MIN_EPOCHS_FOR_BLOCK_REQUESTS` (33024), instead of comparing it to
`current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS`.
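A minimal sketch with simplified types: the reference value the flag's epoch should be compared against is the retention boundary derived from the current epoch, not the constant on its own.

```go
package main

// retentionBoundaryEpoch returns current_epoch - MIN_EPOCHS_FOR_BLOCK_REQUESTS,
// clamped at genesis; the flag's slot/32 should be compared against this value.
func retentionBoundaryEpoch(currentEpoch, minEpochsForBlockRequests uint64) uint64 {
	if currentEpoch <= minEpochsForBlockRequests {
		return 0 // the chain is younger than the retention window
	}
	return currentEpoch - minEpochsForBlockRequests
}
```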
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
If the target checkpoint of an attestation is for the previous epoch and
the head state has the same dependent root at that epoch, the head state
can be used to validate it. This guarantees that both the seed and the
active validator indices are the same at the checkpoint's epoch, from
the point of view of the attester (even on a different branch) and of
the head view.
**What type of PR is this?**
Tooling
**What does this PR do? Why is it needed?**
Renames `httperror` analyzer to `httpwriter` and extends it to the
following functions:
- `WriteError`
- `WriteJson`
- `WriteSsz`
_**NOTE: The PR is currently red because the fix in
https://github.com/OffchainLabs/prysm/pull/16175 must be merged first**_
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
SortedForkSchedule should never be empty for a properly initialized
network schedule, but the handler already had a branch to support an
empty result. Without an early return, we wrote a JSON response and then
still accessed schedule[0], which could panic and double-write the HTTP
response in misconfigured setups.
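A minimal sketch with stand-in types (not the actual handler): the empty-schedule branch writes its response and returns immediately, so `schedule[0]` is never reached and the response is written exactly once.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Fork is a stand-in for the real fork schedule entry type.
type Fork struct {
	Epoch string `json:"epoch"`
}

func writeForkSchedule(w http.ResponseWriter, schedule []Fork) {
	if len(schedule) == 0 {
		_ = json.NewEncoder(w).Encode(struct {
			Data []Fork `json:"data"`
		}{Data: []Fork{}})
		return // without this, the code below would index schedule[0] and write a second response
	}
	// Build the full response from schedule and write it exactly once.
	_ = json.NewEncoder(w).Encode(struct {
		Data []Fork `json:"data"`
	}{Data: schedule})
}
```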
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
**What type of PR is this?**
Performance
**What does this PR do? Why is it needed?**
`PendingBalanceToWithdraw` was used to compute `bal` only to check later
whether `bal` is greater than 0. There is no need to calculate the full
balance: we can just check whether any pending balance exists using the
existing function `HasPendingBalanceToWithdraw`, which removes some
unnecessary computation.
`HasPendingBalanceToWithdraw` returns immediately on the first non-zero
entry, while `PendingBalanceToWithdraw` always iterates through all
entries to compute the sum.
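A minimal sketch with simplified types illustrating the difference: the "has" check can return at the first non-zero entry, whereas the sum must visit every entry.

```go
package main

// hasPendingBalanceToWithdraw stops at the first non-zero pending amount.
func hasPendingBalanceToWithdraw(pendingAmounts []uint64) bool {
	for _, amount := range pendingAmounts {
		if amount > 0 {
			return true // early exit: the full sum is never needed
		}
	}
	return false
}
```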
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
When computing payload attributes post-Fulu, we do not need to process
slots, nor copy the state, to find out whether the node is proposing in
the next slot. This prevents an immediate epoch processing after block
31 is processed unless we are actually proposing.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
When an (potentially aggregated) attestation is received **before** the
block being voted for, Prysm queues this attestation, then processes the
queue when the block has been received.
This behavior is consistent with the [Phase0 specification
](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/p2p-interface.md#beacon_attestation_subnet_id).
> [IGNORE] The block being voted for
(attestation.data.beacon_block_root) has been seen (via gossip or
non-gossip sources) (a client MAY queue attestations for processing once
block is retrieved).
Once the block being voted for is processed, previously queued
(potentially aggregated) attestations are then processed, and
broadcasted.
Processing (potentially aggregated) attestations takes non-negligible
time. For this reason, (potentially aggregated) attestations are
deduplicated before being introduced into the pending queue, to avoid
processing duplicates later.
Before this PR, two aggregated attestations were considered duplicates
if all of the following conditions were met:
1. Attestations have the same version,
2. **Attestations have the same aggregator index (i.e., the same
validator aggregated them)**,
3. Attestations have the same slot,
4. Attestations have the same committee index, and
5. Attestations have the same aggregation bits
Aggregated attestations are then broadcasted.
The final purpose of aggregated attestations is to be packed into the
next block by the next proposer.
When packing attestations, the aggregator index is not used any more.
This pull request modifies the deduplication function used in the
pending aggregated attestations queue by considering that multiple
aggregated attestations differing only by the aggregator index are
equivalent (removing `2.` from the previous list).
As a consequence, the count of aggregated attestations introduced into
the pending queue is reduced from one aggregated attestation per
aggregator to, in the best case,
[MAX_COMMITTEES_PER_SLOT=64](https://github.com/ethereum/consensus-specs/blob/master/specs/phase0/beacon-chain.md#misc-1).
Also, only a single aggregated attestation for a given version, slot,
committee index and aggregation bits will be re-broadcasted. This is a
correct behavior, since no data to be included in a block will be lost.
(We can even say that this will slightly reduce total networking
volume.)
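A minimal sketch with simplified types and hypothetical names of what the deduplication key looks like once the aggregator index is dropped:

```go
package main

// pendingAggregateKey identifies equivalent aggregated attestations in the
// pending queue; the aggregator index is intentionally absent.
type pendingAggregateKey struct {
	version         int
	slot            uint64
	committeeIndex  uint64
	aggregationBits string // serialized bitfield
}

func keyFor(version int, slot, committeeIndex uint64, aggregationBits []byte) pendingAggregateKey {
	return pendingAggregateKey{
		version:         version,
		slot:            slot,
		committeeIndex:  committeeIndex,
		aggregationBits: string(aggregationBits),
	}
}
```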
**How to test**:
1. Start a beacon node (preferably, on a slow computer) from a
checkpoint.
2. Filter logs containing `Synced new block` and `Verified and saved
pending attestations to pool`. (You can pipe logs into `grep -E "Synced
new block|Verified and saved pending attestations to pool"`.)
- In `Synced new block` logs, monitor the `sinceSlotStartTime` value.
This should monotonically decrease.
- In `Verified and saved pending attestations to pool`, monitor the
`pendingAggregateAttAndProofCount` value. It should be an "honest"
value. "Honest" is not really quantifiable here, since it depends on the
aggregators, but it is likely to be less than
`5*MAX_COMMITTEES_PER_SLOT=320`.
**Which issues(s) does this PR fix?**
Partially fixes:
- https://github.com/OffchainLabs/prysm/issues/16160
**Other notes for review**
Please read commit by commit, with commit messages.
The important commit is b748c04a67.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
When we receive data column sidecars via gossip, if a sidecar does not
respect the validation rules, a scary ERROR log is displayed. We can't
do anything about it, since the error comes from an invalid incoming
sidecar, so there is no need to print an ERROR message.
Note: As with all REJECTED gossip messages, a DEBUG log is also always
displayed.
Example of ERROR log:
```
[2025-12-18 15:38:26.46] ERROR sync: Failed to decode message error=invalid ssz encoding. first variable element offset indexes into fixed value data
[2025-12-18 15:38:26.46] DEBUG sync: Gossip message was rejected agent=erigon/caplin error=invalid ssz encoding. first variable element offset indexes into fixed value data gossipScore=0 multiaddress=/ip4/141.147.32.105/tcp/9000 peerID=16Uiu2HAmHu88k97iBist1vJg7cPNuTjJFRARKvDF7yaH3Pv3Vmso topic=/eth2/c6ecb76c/data_column_sidecar_30/ssz_snappy
```
(After this PR, the DEBUG one will still be printed.)
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
Prysm started throwing this error `Could not write response message"
error="write tcp 10.104.92.212:5052->10.104.92.196:41876: write: broken
pipe` because a validator got attestation data from a synced node and
submitted the attestation to a syncing node. When the syncing node
couldn't replay the state, the validator's context deadline expired and
it disconnected, but when the writer finally responded it hit this
broken pipe error.
This applies to `/eth/v2/beacon/pool/attestations` and
`/eth/v1/beacon/pool/sync_committees`.
The solution has two parts:
1. We shouldn't allow submission of an attestation if the node is
syncing, because we can't save the attestation without the state
information.
2. In REST we were doing the expensive state call before broadcast; now
it matches gRPC, where it happens afterward in its own goroutine.
Tested manually running kurtosis with rest validators
```
participants:
# Super-nodes
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
supernode: true
cl_extra_params:
- --subscribe-all-subnets
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
# Full-nodes
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
validator_count: 63
cl_extra_params:
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
cl_extra_params:
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
validator_count: 13
additional_services:
- dora
- spamoor
spamoor_params:
image: ethpandaops/spamoor:master
max_mem: 4000
spammers:
- scenario: eoatx
config:
throughput: 200
- scenario: blobs
config:
throughput: 20
network_params:
fulu_fork_epoch: 2
bpo_1_epoch: 8
bpo_1_max_blobs: 21
withdrawal_type: "0x02"
preset: mainnet
seconds_per_slot: 6
global_log_level: debug
```
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
This PR fixes the LC p2p `fork version not recognized` bug. It adds
object mappings for the LC types for Fulu, and fixes tests to cover such
cases in the future.
**What type of PR is this?**
Optimisation
**What does this PR do? Why is it needed?**
While constructing data column sidecars from the execution layer is very
cheap compared to reconstructing them from other data column sidecars,
it is still worthwhile to run this construction in parallel.
(**Reminder:** Using `getBlobsV2`, all the cell proofs are present, but
only 64 (out of 128) cells are present. Recomputing the missing cells is
cheap, while reconstructing the missing proofs is expensive.)
This PR:
- adds some metrics
- ensures the construction is done in parallel
**Other notes for review**
Please read commit by commit
The red vertical lines mark the boundary between before and after this
pull request:
<img width="1575" height="603" alt="image"
src="https://github.com/user-attachments/assets/24811b1b-8e3c-4bf5-ac82-f920d385573a"
/>
The last commit turns the bottom-right histogram into a summary, since
it no longer makes sense to have a histogram for these values.
Please check "hide whitespace" so this PR is easier to review:
<img width="229" height="196" alt="image"
src="https://github.com/user-attachments/assets/548cb2f4-b6f4-41d1-b3b3-4d4c8554f390"
/>
Updated metrics:
Now, for every **non-missed slot** whose block has **at least one
commitment**, we have either:
```
[2025-12-10 10:02:12.93] DEBUG sync: Constructed data column sidecars from the execution client count=118 indices=0-5,7-16,18-27,29-35,37-46,48-49,51-82,84-100,102-106,108-125,127 iteration=0 proposerIndex=855082 root=0xf8f44e7d4cbc209b2ff2796c07fcf91e85ab45eebe145c4372017a18b25bf290 slot=1928961 type=BeaconBlock
```
or
```
[2025-12-10 10:02:25.69] DEBUG sync: No data column sidecars constructed from the execution client iteration=2 proposerIndex=1093657 root=0x64c2f6c31e369cd45f2edaf5524b64f4869e8148cd29fb84b5b8866be529eea3 slot=1928962 type=DataColumnSidecar
```
<img width="1581" height="957" alt="image"
src="https://github.com/user-attachments/assets/514dbdae-ef14-47e2-9127-502ac6d26bc0"
/>
<img width="1596" height="916" alt="image"
src="https://github.com/user-attachments/assets/343d4710-4191-49e8-98be-afe70d5ffe1c"
/>
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Tests
**What does this PR do? Why is it needed?**
```
--- PASS: TestEndToEnd_MinimalConfig/chain_started (0.50s)
--
--- PASS: TestEndToEnd_MinimalConfig/finished_syncing_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/all_nodes_have_same_head_0 (0.00s)
--- PASS: TestEndToEnd_MinimalConfig/validators_active_epoch_0 (0.00s)
--- FAIL: TestEndToEnd_MinimalConfig/validator_sync_participation_0 (0.01s)
--- PASS: TestEndToEnd_MinimalConfig/peers_connect_epoch_0 (0.11s)
```
This PR attempts to reduce flakes on validator sync participation
failures by skipping the first slot of the block after startup
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
If there is a context deadline updating the committee cache, but the
indices have been computed correctly, do not error out but rather return
the indices and log the error.
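A minimal sketch with hypothetical names: when only the context deadline failed and the indices themselves were computed, the error is logged and the indices are still returned.

```go
package main

import (
	"context"
	"errors"
	"log"
)

// activeIndicesWithCacheUpdate returns the computed indices even if updating
// the committee cache ran out of time.
func activeIndicesWithCacheUpdate(ctx context.Context, indices []uint64, updateCache func(context.Context, []uint64) error) ([]uint64, error) {
	if err := updateCache(ctx, indices); err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			log.Printf("skipping committee cache update: %v", err)
			return indices, nil // the indices are already correct, do not fail the caller
		}
		return nil, err
	}
	return indices, nil
}
```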
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
`s.Stater.StateBySlot` may replay states when the requested slot is in
the current epoch, since it is intended for values in the DB. If we are
in the current epoch, we should instead try to get the head slot and use
the cache. The proposer duties endpoint was already doing this, but the
other two duties endpoints were not. This PR aligns all three and
introduces a new `statebyepoch` helper that wraps the approach.
I tested by running the kurtosis config below with and without the fix,
checking that the replays stop, the blockchain progresses, and the
"upgraded to fulu" log is not printed multiple times.
```
participants:
# Super-nodes
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
supernode: true
cl_extra_params:
- --subscribe-all-subnets
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
# Full-nodes
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
validator_count: 63
cl_extra_params:
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
- el_type: nethermind
cl_type: prysm
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
cl_extra_params:
- --verbosity=debug
vc_extra_params:
- --enable-beacon-rest-api
- --verbosity=debug
validator_count: 13
additional_services:
- dora
- spamoor
spamoor_params:
image: ethpandaops/spamoor:master
max_mem: 4000
spammers:
- scenario: eoatx
config:
throughput: 200
- scenario: blobs
config:
throughput: 20
network_params:
fulu_fork_epoch: 2
bpo_1_epoch: 8
bpo_1_max_blobs: 21
withdrawal_type: "0x02"
preset: mainnet
seconds_per_slot: 6
global_log_level: debug
```
**Which issues(s) does this PR fix?**
Fixes https://github.com/OffchainLabs/prysm/issues/16135
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Tooling
**What does the PR do?**
Every call to `httputil.HandleError` must be followed by a `return`
statement. It's easy to miss this during reviews, so having a static
analyzer that enforces this will make our life easier.
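A minimal sketch of the pattern the analyzer enforces, using the stdlib `http.Error` as a stand-in for `httputil.HandleError`, with a hypothetical handler and lookup helper:

```go
package main

import (
	"encoding/json"
	"net/http"
)

func lookupThing(r *http.Request) (any, error) { return nil, nil } // placeholder

func getSomething(w http.ResponseWriter, r *http.Request) {
	thing, err := lookupThing(r)
	if err != nil {
		http.Error(w, "could not look up thing: "+err.Error(), http.StatusInternalServerError)
		return // the analyzer flags call sites where this return is missing
	}
	_ = json.NewEncoder(w).Encode(thing)
}
```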
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
`validateWithKzgBatchVerifier` could time out (12s), and once it times
out, because `resChan` is unbuffered, the verifier gets stuck at the
following line in `verifyKzgBatch`, waiting for someone to grab the
result from `resChan`:
```
for _, verifier := range kzgBatch {
verifier.resChan <- verificationErr
}
```
The fix is to make KZG batch verification non-blocking on timeouts by
giving each request's result channel a buffer of size 1.
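A minimal sketch with hypothetical types showing the change:

```go
package main

// kzgVerifyRequest carries the verification result back to the requester.
type kzgVerifyRequest struct {
	resChan chan error
}

// newKzgVerifyRequest buffers the result channel so the batch verifier's send
// never blocks, even when the requester has already timed out.
func newKzgVerifyRequest() kzgVerifyRequest {
	return kzgVerifyRequest{resChan: make(chan error, 1)}
}
```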
Ensure SubmitAttesterSlashingsV2 returns immediately when the
Eth-Consensus-Version header is missing. Without this early return the
handler calls version.FromString with an empty value and writes a second
JSON error to the response, producing invalid JSON and duplicating error
output. This change aligns the handler with the error-handling pattern
used in other endpoints that validate the version header.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
Calls to `Stater.StateBySlot` and `Stater.State` should be followed by
`shared.WriteStateFetchError` to provide the most robust error handling.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
**What type of PR is this?**
Design Doc
**What does this PR do? Why is it needed?**
This PR adds a design doc for adding graffiti. The idea is to have it
populated judiciously so that we can get proper information about the
EL, CL and their corresponding version info. At the same time being
flexible enough with the user input.
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
This pull request modifies the `PULL_REQUEST_TEMPLATE.md` to ensure the
developer checked that their PR works as expected.
Some contributors push some changes, without even running the modified
client once to see if their changes work as expected.
Avoidable back-and-forth trips between the contributor and the reviewers
could be prevented thanks to running the modified client.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
**Which issues(s) does this PR fix?**
**Other notes for review**
Did not delete the fragments as they are still needed to generate v7.1.0
release notes. This release is all cherry-picks which would be included
in v7.1.0
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
A TOCTOU issue was reported by EF security in which two attestations
being validated at the same time may result in both of them being
forwarded. The spec says that we need to forward only the first one.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
- Remove unnecessary `Copy()` call in `Eth1DataHasEnoughSupport`
- `data.Copy()` was called on every iteration of the vote counting loop,
even though `AreEth1DataEqual` only reads the data and never mutates it.
- Additionally, `Eth1DataVotes()` already returns copies of all votes,
so state is protected regardless.
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
The for loop in the MigrateToCold function was brute force in nature. It
could be improved by directly jumping by `slotsPerArchivedPoint` rather
than going over every single slot.
```
for slot := oldFSlot; slot < fSlot; slot++ {
...
if slot%s.slotsPerArchivedPoint == 0 && slot != 0 {
```
No need to do the modulo for every single slot.
We could just find the correct starting point and jump by
slotsPerArchivedPoint at a time.
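A minimal sketch with simplified types of the suggested loop: compute the first archived point at or after the old finalized slot, then advance by `slotsPerArchivedPoint`, so the per-slot modulo check disappears.

```go
package main

// archivedPoints returns the archived-point slots in [oldFSlot, fSlot),
// excluding slot 0, without testing every slot.
func archivedPoints(oldFSlot, fSlot, slotsPerArchivedPoint uint64) []uint64 {
	start := oldFSlot
	if rem := start % slotsPerArchivedPoint; rem != 0 {
		start += slotsPerArchivedPoint - rem // round up to the next archived point
	}
	if start == 0 {
		start = slotsPerArchivedPoint // slot 0 is excluded, as in the original loop
	}
	var points []uint64
	for slot := start; slot < fSlot; slot += slotsPerArchivedPoint {
		points = append(points, slot)
	}
	return points
}
```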
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
---------
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
The constructor `NewStateRootNotFoundError` incorrectly returned
`StateNotFoundError`. This prevented handlers that rely on
errors.As(err, *lookup.StateRootNotFoundError) from matching and mapping
the error to HTTP 404. The function now returns
StateRootNotFoundError and constructs that type, restoring the intended
behavior for “state root not found” cases.
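A minimal sketch with hypothetical type definitions of the corrected constructor, so that `errors.As` targets of type `*StateRootNotFoundError` match:

```go
package main

// StateRootNotFoundError is a stand-in for the lookup package's error type.
type StateRootNotFoundError struct{ msg string }

func (e *StateRootNotFoundError) Error() string { return e.msg }

// NewStateRootNotFoundError now constructs and returns StateRootNotFoundError
// (previously it returned the unrelated StateNotFoundError type).
func NewStateRootNotFoundError(msg string) *StateRootNotFoundError {
	return &StateRootNotFoundError{msg: msg}
}
```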
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
This PR introduces flag `--ignore-unviable-attestations` (replaces and
deprecates `--disable-last-epoch-targets`) to drop attestations whose
target state is not viable; default remains to process them unless
explicitly enabled.
In some cases, the head state is guaranteed to have the same shuffling
and active indices, namely when the previous dependent root coincides
with the target checkpoint's.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
Move the "Not enough connected peers" (for a given subnet) from WARN to
DEBUG
**Rationale:**
The log shown below is (potentially) printed every 5 minutes.
<img width="1839" height="31" alt="image"
src="https://github.com/user-attachments/assets/44dbdc8d-3e37-42ee-967b-75a7a1fbcafb"
/>
Every 5 minutes, the BN checks if, for a given subnet, the actual count
of peers is at least equal to a minimum one.
If not, this kind of log is printed.
When validators are connected and selected to be aggregators in the next
epoch, the BN needs to subscribe and find new peers in the corresponding
attestation subnets.
If the "5 min ticker" ticks right after the beacon node has subscribed
(but before it has had time to find peers), this warning log is
displayed, even if the slot for which the validator is selected as an
aggregator is still minutes away.
For this reason, this log is moved from WARN to DEBUG.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
This pull request removes `NUMBER_OF_COLUMNS` and
`MAX_CELLS_IN_EXTENDED_MATRIX` configuration.
**Other notes for review**
Please read commit by commit, with commit messages.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
- Added log prefix to the `genesis` package.
- Added log prefix to the `params` package.
- `WithGenesisValidatorsRoot`: Use camelCase for log field param.
- Move `Origin checkpoint found in db` log from WARN to INFO, since it
is the expected behaviour.
**Other notes for review**
Please read commit by commit
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
Allows starting e2e tests from Electra or another specific fork of
interest again. It does not fix the missing execution requests tests;
Nishant reverted that.
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
The slot parameter in blobCacheEntry.filter was unused and redundant.
All slot/epoch-sensitive checks happen before filter
(commitmentsToCheck), and disk availability is handled via
BlobStorageSummary (epoch-aware).
Changes:
- Drop slot from blobCacheEntry.filter signature.
- Update call sites in availability_blobs.go and blob_cache_test.go.
Mirrors the data_column_cache.filter API (which does not take slot),
reduces API noise, and removes dead code without changing behavior.
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
| Feature | Semi-Supernode | Supernode |
| ----------------------- | ------------------------- | ------------------------ |
| **Custody Groups** | 64 | 128 |
| **Data Columns** | 64 | 128 |
| **Storage** | ~50% | ~100% |
| **Blob Reconstruction** | Yes (via Reed-Solomon) | No reconstruction needed |
| **Flag** | `--semi-supernode` | `--supernode` |
| **Can serve all blobs** | Yes (with reconstruction) | Yes (directly) |
**Note:** if your validators' total effective balance results in a
higher custody requirement than the semi-supernode's, it will override
those requirements.
cgc=64 (from @nalepae)
Pro:
- We are useful to the network
- Lower likelihood of disconnection
- Straightforward to implement
Con:
- We cannot revert to a full node
- We have to serve incoming RPC requests corresponding to 64 columns
Tested the following using this kurtosis setup
```
participants:
# Super-nodes
- el_type: geth
el_image: ethpandaops/geth:master
cl_type: prysm
vc_image: gcr.io/offchainlabs/prysm/validator:latest
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
cl_extra_params:
- --supernode
vc_extra_params:
- --verbosity=debug
# Full-nodes
- el_type: geth
el_image: ethpandaops/geth:master
cl_type: prysm
vc_image: gcr.io/offchainlabs/prysm/validator:latest
cl_image: gcr.io/offchainlabs/prysm/beacon-chain:latest
count: 2
validator_count: 1
cl_extra_params:
- --semi-supernode
vc_extra_params:
- --verbosity=debug
additional_services:
- dora
- spamoor
spamoor_params:
image: ethpandaops/spamoor:master
max_mem: 4000
spammers:
- scenario: eoatx
config:
throughput: 200
- scenario: blobs
config:
throughput: 20
network_params:
fulu_fork_epoch: 0
withdrawal_type: "0x02"
preset: mainnet
global_log_level: debug
```
```
curl -H "Accept: application/json" http://127.0.0.1:32961/eth/v1/node/identity
{"data":{"peer_id":"16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","enr":"enr:-Ni4QIH5u2NQz17_pTe9DcCfUyG8TidDJJjIeBpJRRm4ACQzGBpCJdyUP9eGZzwwZ2HS1TnB9ACxFMQ5LP5njnMDLm-GAZqZEXjih2F0dG5ldHOIAAAAAAAwAACDY2djQIRldGgykLZy_whwAAA4__________-CaWSCdjSCaXCErBAAE4NuZmSEAAAAAIRxdWljgjLIiXNlY3AyNTZrMaECulJrXpSOBmCsQWcGYzQsst7r3-Owlc9iZbEcJTDkB6qIc3luY25ldHMFg3RjcIIyyIN1ZHCCLuA","p2p_addresses":["/ip4/172.16.0.19/tcp/13000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw","/ip4/172.16.0.19/udp/13000/quic-v1/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"discovery_addresses":["/ip4/172.16.0.19/udp/12000/p2p/16Uiu2HAm7xzhnGwea8gkcxRSC6fzUkvryP6d9HdWNkoeTkj6RSqw"],"metadata":{"seq_number":"3","attnets":"0x0000000000300000","syncnets":"0x05","custody_group_count":"64"}}}
```
```
curl -s http://127.0.0.1:32961/eth/v1/debug/beacon/data_column_sidecars/head | jq '.data | length'
64
```
```
curl -X 'GET' \
'http://127.0.0.1:32961/eth/v1/beacon/blobs/head' \
-H 'accept: application/json'
```
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for reviewers to understand this PR.
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: james-prysm <jhe@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
I am seeing massive metrics cardinality on error cases.
Example:
```
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682952",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682953",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682954",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682955",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682956",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682957",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682958",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682959",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682960",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682961",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682962",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682966",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682967",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682968",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682969",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682970",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682971",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682972",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682973",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682974",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682975",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682976",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682977",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682978",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682980",method="GET"} 2
http_error_count{code="Not Found",endpoint="/eth/v1/beacon/blob_sidecars/1682983",method="GET"} 2
```
Now it looks like this:
```
# TYPE http_error_count counter
http_error_count{code="Not Found",endpoint="beacon.GetBlockV2",method="GET"} 606
http_error_count{code="Not Found",endpoint="blob.Blobs",method="GET"} 4304
```
**Which issues(s) does this PR fix?**
**Other notes for review**
Other uses of http metrics use the endpoint name rather than the request
URL.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
This change ensures FlagOptions in cmd/beacon-chain/execution/options.go
appends only one endpoint option depending on whether a JWT secret is
present. Previously the code always appended WithHttpEndpoint and then
conditionally appended WithHttpEndpointAndJWTSecret which overwrote the
first option, adding unnecessary allocations and cognitive overhead.
Since WithHttpEndpointAndJWTSecret fully configures the endpoint,
including URL and Bearer auth needed by the Engine API, the initial
WithHttpEndpoint is redundant when a JWT is supplied. The refactor
preserves behavior while simplifying option composition and avoiding
redundant state churn.
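A minimal, self-contained sketch of the option composition with stand-in types (the real option constructors live in the execution package and their signatures may differ): exactly one endpoint option is appended, depending on whether a JWT secret is present.

```go
package main

// Option is a stand-in for the execution service's functional option type.
type Option func(cfg *config)

type config struct {
	endpoint  string
	jwtSecret []byte
}

// Stand-ins for the option constructors named in the description.
func WithHttpEndpoint(endpoint string) Option {
	return func(c *config) { c.endpoint = endpoint }
}

func WithHttpEndpointAndJWTSecret(endpoint string, secret []byte) Option {
	return func(c *config) { c.endpoint, c.jwtSecret = endpoint, secret }
}

// endpointOptions appends exactly one endpoint option instead of always
// appending WithHttpEndpoint and then overriding it.
func endpointOptions(endpoint string, jwtSecret []byte) []Option {
	if len(jwtSecret) > 0 {
		return []Option{WithHttpEndpointAndJWTSecret(endpoint, jwtSecret)}
	}
	return []Option{WithHttpEndpoint(endpoint)}
}
```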
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
When unmarshaling a block with fastssz, if the target block's
`ExecutionRequests` field is nil, it will not get populated
```
if b.ExecutionRequests == nil {
b.ExecutionRequests = new(v1.ExecutionRequests)
}
if err = b.ExecutionRequests.UnmarshalSSZ(buf); err != nil {
return err
}
```
This is true for other fields and that's why we initialize them in our
gossip data maps. There is no bug at the moment because even if
execution requests are nil, we initialize them in
`consensus-types/blocks/proto.go`
```
er := pb.ExecutionRequests
if er == nil {
er = &enginev1.ExecutionRequests{}
}
```
However, since we initialize other fields in the data map, it's safer to
do it for execution requests too, to avoid a bug in case the code in
`consensus-types/blocks/proto.go` changes in the future.
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
This PR integrates state diff into `HasState()`.
One thing to note: we assume that if a given block root has either a
state summary or a block in the DB, and also falls in the state diff
tree, then a state must exist. This function could return true even when
no actual state was saved due to some error.
But this is fine, because we make that assumption throughout the whole
state diff feature.
**Review after #16033 is merged**
**What type of PR is this?**
Other
**What does this PR do? Why is it needed?**
This PR refactors the code to find the corresponding slot of a given
block root using state summary or the block itself, into its own
function `SlotByBlockRoot(ctx, blockroot)`.
Note that there exists a function `slotByBlockRoot(ctx context.Context,
tx *bolt.Tx, blockRoot []byte)` immediately below the new function. Also
note that this function has two drawbacks, which led to the creation of
the new function:
- the old function requires a boltdb tx, which is not necessarily
available to the caller.
- the old function does NOT make use of the state summary cache.
Edit:
- the old function also uses the state bucket to retrieve the state and
its slot. This is not something we want in the state diff feature, since
there is no state bucket.
**What type of PR is this?**
Feature
**What does this PR do? Why is it needed?**
This PR integrates the state diff path into the `State()` function from
`db/kv`, which allows reading of states using the state diff db, when
the `EnableStateDiff` flag is enabled.
**Notes for reviewers:**
Files `kv/state_diff_test.go` and `config/features/config.go` only
contain renamings:
- `kv/state_diff_test.go`: rename `setDefaultExponents()` to
`setDefaultStateDiffExponents()` to be less vague.
- `config/features/config.go`: rename `enableStateDiff` to
`EnableStateDiff` to make it public.
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
Previously, JWT secrets longer than 256 bits could cause client
compatibility issues. For example, Prysm would accept longer secrets
while Geth strictly requires exactly 32 bytes, causing Geth startup
failures when using the same secret file.
This change enforces the Engine API specification requirement that JWT
secrets must be exactly 256 bits (32 bytes), ensuring consistent
behavior across different client implementations.
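A minimal sketch of the enforced check (not the actual Prysm parsing code): the hex-encoded secret must decode to exactly 32 bytes, matching what Geth requires.

```go
package main

import (
	"encoding/hex"
	"errors"
	"strings"
)

// parseJWTSecret enforces the Engine API requirement of a 256-bit (32-byte) secret.
func parseJWTSecret(hexSecret string) ([]byte, error) {
	secret, err := hex.DecodeString(strings.TrimPrefix(strings.TrimSpace(hexSecret), "0x"))
	if err != nil {
		return nil, err
	}
	if len(secret) != 32 {
		return nil, errors.New("JWT secret must be exactly 32 bytes (256 bits)")
	}
	return secret, nil
}
```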
**Which issues(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
Replaces fixed `time.Sleep(time.Second)` delays in `TestLifecycle` with
active polling to wait for service readiness/shutdown. This improves
test reliability and reduces execution time by eliminating unnecessary
waits when services start/stop faster than expected.
**Which issues(s) does this PR fix?**
N/A - Minor test improvement
**Other notes for review**
- Uses 50ms polling interval with 3s timeout for both startup and
shutdown checks
- Maintains same test logic while making it more efficient and less
flaky
- No functional changes to the service itself
**Acknowledgements**
- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description to this PR with sufficient context for
reviewers to understand this PR.
This PR removes the last production usage of `LastArchivedRoot` by
- extending the P2P config to accept a `StateGen` dependency and wiring it up
from the beacon node
- updating gossip scoring to read the active validator count via stategen
using the last finalized block root
Note: I think the correct implementation should process the last finalized
state to the current slot, but that's a bigger change I don't want to make in
this PR; I just want to remove usages of `LastArchivedRoot`.
**What type of PR is this?**
Bug fix
**What does this PR do? Why is it needed?**
This PR fixes a bug in the state diff `getBaseAndDiffChain()` method. While
finding the diff chain indices, multiple levels could return the same diff
slot number, different from the base (full snapshot) slot number.
This resulted in multiple diff items being added to the diff chain for
the same slot, but on different levels, which resulted in errors reading
the diff.
Fix: we keep a `lastSeenAnchorSlot`, initialized to the `BaseAnchorSlot`, and
update it every time we see a new anchor slot. If the anchor slot found at the
current level equals the `lastSeenAnchorSlot`, we skip it.
Scenario example:
- exponents: [20, 14, 10, 7, 5]
- offset: 0
- slots saved: slot 2^11, and slot (2^11 + 2^5)
- slot to read: slot (2^11 + 2^5)
- resulting list of anchor slots (diff chain indices): [0, 0, 2^11, 2^11, 2^11 + 2^5]
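To make the fix concrete, here is an illustrative sketch of the deduplication (not the actual `getBaseAndDiffChain()` code). Assuming the base anchor slot in the scenario above is 0, it would keep [2^11, 2^11 + 2^5] as the diff chain:
```
// diffChainAnchors drops levels whose anchor slot repeats the previously seen
// one, so at most one diff entry is kept per anchor slot.
func diffChainAnchors(anchorSlots []uint64, baseAnchorSlot uint64) []uint64 {
	chain := make([]uint64, 0, len(anchorSlots))
	lastSeenAnchorSlot := baseAnchorSlot
	for _, s := range anchorSlots {
		if s == lastSeenAnchorSlot {
			// Same anchor slot as a shallower level: adding it again would
			// produce duplicate diff items for the same slot.
			continue
		}
		lastSeenAnchorSlot = s
		chain = append(chain, s)
	}
	return chain
}
```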
* stop emitting payload attribute events during late block handling when we are not proposing the next slot
* Change the behavior to not even enter FCU if we are not proposing next slot
* Vendor go-bip39 dependency locally to third_party/
The github.com/tyler-smith/go-bip39 repository has been deleted from GitHub
but is still needed for BIP-39 mnemonic functionality in the validator wallet
system. This change vendors v1.1.0 of the library into third_party/go-bip39/
to ensure continued availability.
Changes:
- Copy go-bip39 v1.1.0 source from Go module cache to third_party/go-bip39/
- Create BUILD.bazel files for main package and wordlists subpackage
- Update 5 BUILD.bazel files to reference local vendored version instead of external dependency
- Remove go-bip39 from go.mod and deps.bzl
- All builds and tests pass successfully
The vendored package includes all 9 language wordlists (English, Chinese Simplified/Traditional,
Czech, French, Italian, Japanese, Korean, Spanish) and maintains the original import paths for
compatibility.
* Changelog fragment
* use go mod replace for vendored lib
* Run gazelle
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* init
* reverting some functions
* rolling back a change and fixing linting
* wip
* wip
* fixing test
* breaking up proofs and cells for cleaner code
* fixing test and type
* fixing safe conversion
* fixing test
* fixing more tests
* fixing even more tests
* fix the 0 indices option
* adding a test for coverage
* small test update
* changelog
* radek's suggestions
* Update beacon-chain/core/peerdas/validator.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* addressing comments on kzg package
* addressing suggestions for reconstruction
* more manu feedback items
* removing unneeded files
* removing unneeded setter
---------
Co-authored-by: james-prysm <jhe@offchainlabs.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Reverted all config.FuluForkEpoch = config.FarFutureEpoch from 8b6f187b15
* Fix tests by referencing electra epoch / slot values in requests and test setup
* Changelog fragment
* wip
* fixing tests
* adding script to update workspace for eth clients
* updating test sepc to 1.6.0 and fixing broadcaster test
* fix specrefs
* more ethspecify fixes
* still trying to fix ethspecify
* fixing attestation tests
* fixing sha for consensus specs
* removing script for now until i have something more standard
* fixing more p2p tests
* fixing discovery tests
* attempting to fix discovery test flakeyness
* attempting to fix port binding issue
* more attempts to fix flakey tests
* Revert "more attempts to fix flakey tests"
This reverts commit 25e8183703.
* Revert "attempting to fix port binding issue"
This reverts commit 583df8000d.
* Revert "attempting to fix discovery test flakeyness"
This reverts commit 3c76525870.
* Revert "fixing discovery tests"
This reverts commit 8c701bf3b9.
* Revert "fixing more p2p tests"
This reverts commit 140d5db203.
* Revert "fixing attestation tests"
This reverts commit 26ded244cb.
* fixing attestation tests
* fixing more p2p tests
* fixing discovery tests
* attempting to fix discovery test flakeyness
* attempting to fix port binding issue
* more attempts to fix flakey tests
* changelog
* fixing import
* adding some missing dependencies, but TestService_BroadcastAttestationWithDiscoveryAttempts is still failing
* attempting to fix test
* reverting test as it migrated to other pr
* reverting test
* fixing test from merge
* Fix `TestService_BroadcastAttestationWithDiscoveryAttempts`.
* Fix again `TestService_Start_OnlyStartsOnce`.
* fixing TestListenForNewNodes
* removing manual set of fulu epoch
* missed a few
* fixing subnet test
* Update beacon-chain/rpc/eth/config/handlers_test.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* removing a few more missed spots of reverting fulu epoch setting
* updating test name based on feedback
* fixing rest apis, they actually need the setting of the epoch due to the guard
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Remove Beacon API endpoints that were deprecated in Electra
* changelog <3
* build fix
* remove more stuff
* fix post-submit e2e and remove structs
* list endpoints in the changelog
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* Use head for block validation when possible
When validating blocks for pubsub, we always copy a state and advance it, even
though in most cases we only need a read-only beacon state without a copy,
since the head state normally works.
* fix test
* fix tests
* fix more tests
* fix more tests
* Add nil check to be safe
* fix more tests
* add test case
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
* Only use head if it's compatible with target
* Allow blocks from the previous epoch to be viable for checkpoints
* Add feature flag to make it configurable
* fix tests
* @satushh's review
* Manu's nit
* Use fields in logs
* Set default value of `--blob-batch-limit` to 384.
So, using default values, `--blob-batch-limit * --blob-batch-limit-burst-factor = 384*3 = MAX_REQUEST_BLOB_SIDECARS = 1152.`
* `blobSidecarByRootRPCHandler`: Add rate limiting.
Because the rate limiter validation is now done before the request validation,
adapt `TestBlobsByRootValidation` accordingly and add new specific tests for
`validateBlobByRootRequest` to cover the now-untested case.
* Set default value of `--data-column-batch-limit-burst-factor` to 4.
So, using default values, `--data-column-batch-limit * --data-column-batch-limit-burst-factor = 4096*4 = MAX_REQUEST_DATA_COLUMN_SIDECARS_ELECTRA = 16384`.
* `validateDataColumnsByRootRequest`: Take a count instead of idents.
* `dataColumnSidecarByRootRPCHandler`: Add rate limiting.
* `SidecarProposerExpected`: Add the slot in the single flight key.
* Fix Kasey's comment.
* Revert "Fix Kasey's comment."
This reverts commit 9e3b4b7acf.
* `signatureData`: Add `string` function.
* `RODataColumnsVerifier.ValidProposerSignature`: Ensure the expensive signature verification is only performed once for concurrent requests for the same signature data.
Share the singleflight group.
* `parentState` ==> `state`.
* `RODataColumnsVerifier.SidecarProposerExpected`: Ensure the expensive index computation is only performed once for concurrent requests.
* Add `wrapAttestationError`
* Fix Kasey's comment.
* Fix Terence's comment.
* updated path processing data types, refactored ParsePath and fixed tests
* updated generalized index accordingly, changed input parameter path type from []PathElemen to Path
* updated query.go accordingly, changed input parameter path type from []PathElemen to Path
* added descriptive changelog
* Update encoding/ssz/query/path.go
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
* Added documentation for Path struct and renamed to for clarity
* Update encoding/ssz/query/path.go
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
* updated changelog to its correct type: Changed
* updated outdated comment in generalized_index.go and removed test in generalized_index_test.go as this one belongs in path_test.go
* Added validateRawPath with strict raw-path validation only - no raw-path fixing is added. Added test suite covering
* added extra tests for wrongly formated paths
---------
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Add a lock for p2p computation of active validator count and limit only to topics that need it.
* Changelog fragment
* Update gossip_scoring_params.go
Wrap errors
* Implement `AvailableBlocks`.
* `blobSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
* `dataColumnSidecarByRootRPCHandler`: Do not do extra work if only needed for TRACE logging.
* `TestDataColumnSidecarsByRootRPCHandler`: Re-arrange (no functional change).
* `TestDataColumnSidecarsByRootRPCHandler`: Save blocks corresponding to sidecars into DB.
* `dataColumnSidecarByRootRPCHandler`: Do not serve a sidecar if the corresponding block is not available.
* Add changelog
* `TestDataColumnSidecarsByRootRPCHandler`: Use `assert` instead of `require` in goroutines.
https://github.com/stretchr/testify?tab=readme-ov-file#require-package
* added tests for calculating generalized indices
* added first version of GI calculation walking the specified path with no recursion. Extended test coverage for bitlist and bitvectors.
vectors need more testing
* refactored code. Detached PathElement processing, currently done at the beginning. Swap to regex to gain flexibility.
* added an updateRoot function with the GI formula. more refactoring
* added changelog
* replaced TODO tag
* udpated some comments
* simplified code - removed duplicated code in processingLengthField function
* run gazelle
* merging all input path processing into path.go
* reviewed Jun's feedback
* removed unnecessary idx pointer var + fixed error with length data type (uint64 instead of uint8)
* refactored path.go after merging path elements from generalized_indices.go
* re-computed GIs for tests as VariableTestContainer added a new field.
* added minor comment - rawPath MUST be snake case
removed extractFieldName func.
* fixed vector GI calculation - updated tests GIs
* removed updateRoot function in favor of inline code
* path input data enforced to be snake case
* added sanity checks for accessing outbound element indices - checked against vector.length/list.limit
* fixed issues triggered after merging develop
* Removed redundant comment
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
* removed unreachable condition as `strings.Split` always return a slice with length >= 1
If s does not contain sep and sep is not empty, Split returns a slice of
length 1 whose only element is s.
* added tests to cover edge cases + cleaned code (toLower is no longer needed in the extractFieldName function)
* added Jun's feedback + more testing
* postponed snake case conversion to do it on a per-element-basis. Added more testing focused mainly in snake case conversion
* addressed several Jun's comments.
* added sanity check to prevent length of a multi-dimensional array. added more tests with extended paths
* Update encoding/ssz/query/generalized_index.go
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
* Update encoding/ssz/query/generalized_index.go
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
* Update encoding/ssz/query/generalized_index.go
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
* placed constant bitsPerChunk in the right place. Exported BitsPerChunk and BytesPerChunk and updated code that use them
* added helpers for computing GI of each data type
* changed %q in favor of %s
* Update encoding/ssz/query/path.go
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
* removed the least restrictive condition isBasicType
* replaced length of containerInfo.order for containerInfo.fields for clarity
* removed outdated comment
* removed toSnakeCase conversion.
* moved isBasicType func to its natural place, SSZType
* cosmetic refactor
- renamed itemLengthFromInfo to itemLength (same name is in spec).
- arranged all SSZ helpers.
* cleaned tests
* renamed "root" to "index"
* removed unnecessary check for negative integers. Replaced %q for %s.
* refactored regex variables and prevented re-assignment
* added length regex explanation
* added more testing for stressing regex for path processing
* renamed currentIndex to parentIndex for clarity and documented the returns from calculate<Type>GeneralizedIndex functions
* Update encoding/ssz/query/generalized_index.go
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
* run gazelle
* fixed never asserted error. Updated error message
---------
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
Co-authored-by: Radosław Kapka <radoslaw.kapka@gmail.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Define TCP and QUIC as `InternetProtocol` (no functional change).
* Group types. (No functional changes)
* Rename variables and use range syntax.
* Add `p2pMaxPeers` and `p2pPeerCountDirectionType` metrics
* `p2p_subscribed_topic_peer_total`: Reset to avoid dangling values.
* `validateConfig`:
- Use `Warning` with fields instead of `Warnf`.
- Avoid both modifying the input value in place and returning it.
* Add `p2p_minimum_peers_per_subnet` metric.
* `beaconConfig` => `cfg`.
https://github.com/OffchainLabs/prysm/pull/15880#discussion_r2436826215
* Add changelog
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* default new blob storage layouts to by-epoch
also, do not log migration message until we see a directory that needs to be migrated
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* manu feedback
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update Earliest available slot when pruning
* bazel run //:gazelle -- fix
* custodyUpdater interface to avoid import cycle
* bazel run //:gazelle -- fix
* simplify test
* separation of concerns
* debug log for updating eas
* UpdateEarliestAvailableSlot function in CustodyManager
* fix test
* UpdateEarliestAvailableSlot function for FakeP2P
* lint
* UpdateEarliestAvailableSlot instead of UpdateCustodyInfo + check for Fulu
* fix test and lint
* bugfix: enforce minimum retention period in pruner
* remove MinEpochsForBlockRequests function and use from config
* remove modifying earliest_available_slot after data column pruning
* correct earliestAvailableSlot validation: allow backfill decrease but prevent increase within MIN_EPOCHS_FOR_BLOCK_REQUESTS
* lint
* bazel run //:gazelle -- fix
* lint and remove unwanted debug logs
* Return a wrapped error, and let the caller decide what to do
* fix tests because updateEarliestSlot returns error now
* avoid re-doing computation in the test function
* lint and correct changelog
* custody updater should be a mandatory part of the pruner service
* ensure never increase eas if we are in the block requests window
* slot level granularity edge case
* update the value stored in the DB
* log tidy up
* use errNoCustodyInfo
* allow earliestAvailableSlot edit when custodyGroupCount doesn't change
* undo the minimal config change
* add context to CustodyGroupCount after merging from develop
* cosmetic change
* shift responsibility from caller to callee, protection for updateEarliestSlot. UpdateEarliestAvailableSlot returns cgc
* allow increase in earliestAvailableSlot only when custodyGroupCount also increases
* remove CustodyGroupCount as it is no longer needed as UpdateEarliestAvailableSlot returns cgc now
* proper place for log and name refactor
* test for Nil custody info
* allow decreasing earliest slot in DB (just like in memory)
* invert if statement to make more readable
* UpdateEarliestAvailableSlot for DB (equivalent of p2p's UpdateEarliestAvailableSlot) & undo changes made to UpdateCustodyInfo
* in UpdateEarliestAvailableSlot, no need to return unused values
* no need to log stored group count
* log.WithField instead of log.WithFields
* Add serialization code for state diffs
Adds serialization code for state diffs.
Adds code to create and apply state diffs
Adds fuzz tests and benchmarks for serialization/deserialization
Co-authored-by: Claude <noreply@anthropic.com>
* Add Fulu support
* Review #1
* gazelle
* Fix some fuzzers
* Failing cases from the fuzzers in consensus-types/hdiff
* Fix more fuzz tests
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* add comparison tests
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Use ConvertToElectra in UpgradeToElectra
* Add comments on constants
* Fix readEth1Data
* remove colons from error messages
* Add design doc
* Apply suggestions from code review
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
* Move ssz_query objects into testing folder (ensuring test objects only used in test environment)
* Add containers for response
* Export sszInfo
* Add QueryBeaconState/Block
* Add comments and few refactor
* Fix merge conflict issues
* Return 500 when calculate offset fails
* Add test for QueryBeaconState
* Add test for QueryBeaconBlock
* Changelog :)
* Rename `QuerySSZRequest` to `SSZQueryRequest`
* Fix middleware hooks for RPC to accept JSON from client and return SSZ
* Convert to `SSZObject` directly from proto
* Move marshalling/calculating hash tree root part after `CalculateOffsetAndLength`
* Make nogo happy
* Add informing comment for using proto unsafe conversion
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Fixed metadata extraction on Windows by correctly splitting file paths
* `TestExtractFileMetadata`: Refactor a bit.
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* `VerifiedRODataColumnError`: Don't reuse Blob error.
* `VerifiedRODataColumnFromDisk`: Use a specific error when the count of read bytes is lower than expected.
* Add changelog.
* Add basic parsing feature for accessing by index
* Add more tests for 2d byte vector
* Add List case for access indexing
* Handle 2D bytes List example
* Fix misleading cases for CalculateOffsetAndLength
* Use elementSizes[index] if it is the last path element
* Add variable_container_list field for mocking attester_slashings in BeaconBlockBody
* Remove redundant protobuf message
* Better documentation
* Changelog
* Fix `expectedSize` of `VariableTestContainer`: as we added `variable_container_list` here
* Apply reviews from Radek
* reject committee index >= committees_per_slot in unaggregated attestation validation
* Create phrwlk_fix-attestation-committee-index-bound.md
* add a unit test
* fix test
* fixing test
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
h/t to the NuConstruct team for reporting this. The event feed
incorrectly sends the epoch transition flag on head events when the first
slot of the epoch is missing (or when reorging across an epoch transition).
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* `TestDataColumnSidecarsByRangeRPCHandler`: Remove commented code.
* Remove double import
* `dataColumnSidecarsByRangeRPCHandler`: Gracefully close the stream if no data to return.
* Tests: Change `require` to `assert` in goroutines in tests.
https://pkg.go.dev/github.com/stretchr/testify/require#hdr-Assertions
* Add changelog.
* Fix `/eth/v1/beacon/blob_sidecars/` beacon API when the Fulu fork epoch is set to the far future epoch.
* Fix Terence's comment.
* adding a test
---------
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* Add SizeSSZ as a member of SSZObject
* Temporarily rename dereferencePointer function
* Fix analyzeType: use reflect.Value for analyzing
* Fix PopulateVariableLengthInfo: change function signature & reset pointer
* Remove Container arm for Size function as it'll be handled in the previous branch
* Remove OffsetBytes function in listInfo
* Refactor and document codes
* Remove misleading "fixedSize" concept & Add Uint8...64 SSZTypes
* Add size testing
* Move TestSSZObject_Batch and rename it as TestHashTreeRoot
* Changelog :)
* Rename endOffset to fixedOffset
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* stored CL object to enable the usage of Fastssz's HashTreeRoot(). added basic test
* refactorization - using interfaces instead of storing original object
* added tests covering ssz custom types
* renamed hash_tree_root to ssz_interface as it contains MarshalSSZ and UnmarshalSSZ functions
* run gazelle
* renamed test and improved comments
* refactored test and extend to marshalSSZ and UnmarshalSSZ
* added changelog
* updated comment
* Changed SSZIface name to SSZObject. Removed MarshalSSZ and UnmarshalSSZ function signatures from interface as they are not used yet. Refactored tests.
* renamed file ssz_interface.go to ssz_object.go. merge test from ssz_interface_test.go into query_test.go.
reordered source SSZObject field from sszInfo struct
* sticked SSZObject interface to HashTreeRoot() function, the only one needed so far
* run gazelle :)
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* fix allocation size of proofs in ComputeCellsAndProofsFromStructured
the preallocated slice for KZG Proofs was 48x bigger than it needed to
be.
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* Do not verify block data when calculating rewards
* remove `Get` from function names
* changelog <3
* do not verify sync committee sig in handler
* Revert "remove `Get` from function names"
This reverts commit 770a89d990.
* typo fix
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* feature: Use service context and continue on slasher attestation errors
* Create Galoretka_feature-slasher-feed-use-service-ctx
* Rename Galoretka_feature-slasher-feed-use-service-ctx to Galoretka_feature-slasher-feed-use-service-ctx.md
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* make registerSubscribers idempotent
* clean up debugging changes
* test fix
* rm unused var
* sobbing noises
* naming feedback and separate test for digestActionDone
* gazelle
* manu's feedback
* refactor to enable immediate sub after init sync
* preston comment re panic causing db corruption risk
* ensure we check that we're 1 epoch past the fork
* manu feedback
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* `findPeersWithSubnets`: If the `filter` function returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.
* `computeIndicesByRootByPeer`: If the loop returns an error for a given peer, log an error and skip the peer instead of aborting the whole function.
* Add changelog.
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* `buildStatusFromStream`: Use parent context.
* Status tests: Use `t.Context` everywhere.
* `buildStatusFromStream`: Respond statusV2 only if Fulu is enabled.
Without doing so, the earliest available slot is never defined, and `s.cfg.p2p.EarliestAvailableSlot` will block until the context is canceled.
* Send our real earliest available slot when sending a Status request post-Fulu, instead of `0`.
* Add changelog.
* Prettify logs for byRange/byRoot data column sidecar requests.
* Moving byRoot/byRange data column sidecars requests from peers to TRACE level.
* Move "Peer requested blob sidecar by root not found in db" in TRACE.
* Add changelog.
* Fix Kasey's comment.
* Apply Kasey's suggestion.
* Revert "`createLocalNode`: Wait before retrying to retrieve the custody group count if not present. (#15735)"
This reverts commit 4585cdc932.
* Revert "Fix no custody info available at start (#15732)"
This reverts commit 80eba4e6dd.
* Add context to `EarliestAvailableSlot` and `CustodyGroupCount` (no functional change).
* Remove double imports.
* `EarliestAvailableSlot` and `CustodyGroupCount`: Wait for custody info to be initialized.
* exclude unscheduled forks from the schedule
* update tests that relied on fulu being far future
* don't mess with the version->epoch mapping
* LastFork cheats the tests by filtering by far_future_epoch
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* ignore version/digest mismatch if far future
* bonus: this log generates a lot of noise, bump it down to trace
* unit test
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* Avoid unnecessary calls to ExitInformation()
ExitInformation runs a loop over the whole validator set. This is needed
in case there are slashings or exits to be processed in a block (though we
could cache this or avoid it entirely post-Electra). This PR removes these
calls to this function during the normal state transition. h/t to
@terencechain for finding this bug.
In addition, when processing withdrawal requests and registry updates, we
kept recomputing the exit information at the same time that the state is
updated, even though the function that updates the state already takes care
of tracking and updating the right exit information. So this PR removes the
calls that compute this exit information in a loop. Notice that this bug
has been present even before we had a function `ExitInformation()`, so I
will document it here to help the reviewer.
Our previous behavior was to do this in a loop
```
st, err = validators.InitiateValidatorExit(ctx, st, vIdx, validators.ExitInformation(st))
```
This is a bit problematic since `ExitInformation` loops over the whole validator set to compute the exit information (and the total active balance), and then the function `InitiateValidatorExit` recomputes the total active balance, looping again over the whole validator set and overwriting the pointer returned by `ExitInformation`.
On the other hand, the function `InitiateValidatorExit` does mutate the state `st` itself. So each call to `ExitInformation(st)` may actually return a different pointer.
The function ExitInformation computes as follows
```
err := s.ReadFromEveryValidator(func(idx int, val state.ReadOnlyValidator) error {
	e := val.ExitEpoch()
	if e != farFutureEpoch {
		if e > exitInfo.HighestExitEpoch {
			exitInfo.HighestExitEpoch = e
			exitInfo.Churn = 1
		} else if e == exitInfo.HighestExitEpoch {
			exitInfo.Churn++
		}
	}
	return nil
})
```
So it simply increases the churn for each validator that has epoch equal to the highest exit epoch.
The function `InitiateValidatorExit` mutates this pointer in the following way:
if the state is post-Electra, it completely disregards this pointer, computes the highest exit epoch, and updates the churn unconditionally, so `exitInfo.HighestExitEpoch` will always have the right value and does not even need to be computed beforehand. We could even avoid the first loop entirely. If the state is pre-Electra, then the function itself correctly updates the exit info for the next iteration.
* Only care about exits pre-Electra
* Update beacon-chain/core/transition/transition_no_verify_sig.go
Co-authored-by: terence <terence@prysmaticlabs.com>
* Radek's review
---------
Co-authored-by: terence <terence@prysmaticlabs.com>
* attempting to improve duties v2
* removing go routine
* changelog
* unnecessary variable
* fixing test
* small optimization exiting early in the CommitteeAssignments function
* fixing small bug
* fixes performance issues with duties v2
* fixed changelog
* gofmt
* Sort sidecars by index before calling `RecoverCellsAndKZGProofs`.
Reason: starting with `c-kzg-4844 v2.1.2`, the library requires its input to be sorted (see the sketch after this list).
* Update `c-kzg-4844` to `v2.1.3`
* Update `c-kzg-4844` to `v2.1.5`
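A hedged sketch of the ordering requirement mentioned above (the sidecar type here is a stand-in; field and function names are assumptions, not the actual Prysm code):
```
package peerdas

import "sort"

// sidecar is an illustrative stand-in for a data column sidecar; only the
// column index matters for the ordering requirement.
type sidecar struct {
	ColumnIndex uint64
}

// sortSidecarsByIndex orders sidecars by column index before handing them to
// cell/proof recovery, which c-kzg-4844 v2.1.2+ requires.
func sortSidecarsByIndex(sidecars []sidecar) {
	sort.Slice(sidecars, func(i, j int) bool {
		return sidecars[i].ColumnIndex < sidecars[j].ColumnIndex
	})
}
```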
* additional log information around invalid payloads
* fix test with reversed require.ErrorIs args
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* slasherkv: Set a 1 minute timeout on PruneAttestationOnEpoch operations to prevent very large bolt transactions.
* Fix CI
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* fix: replace fmt.Printf with proper test error handling in web3signer test
* Update keymanager_test.go
* Create DeVikingMark_fix-web3signer-test-error-handling.md
* Change wrap message to avoid the could not...: could not...: could not... effect.
Reference: https://github.com/uber-go/guide/blob/master/style.md#error-wrapping.
* Log: remove period at the end of the last sentence.
* Dirty quick fix to ensure that the custody group count is set at P2P service start.
A real fix would involve a channel to implement a proper synchronization scheme.
* Add changelog.
* Update rules_go to v0.54.1
* Fix NotEmpty assertion for new protobuf private fields.
* Update rules_go to v0.55.0
* Update protobuf to 28.3
* Update rules_go to v0.57.0
* Update go to v1.25.0
* Changelog fragment
* Update go to v1.25.1
* Update generated protobuf and ssz files
* Support Fulu genesis block
* `NewGenesisBlockForState`: Factorize Electra and Fulu, which share the same `BeaconBlock`.
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* `Broadcasted data column sidecar` log: Add `blobCount`.
* `broadcastAndReceiveDataColumns`: Broadcast and receive data columns in parallel.
* `ProposeBeaconBlock`: First broadcast/receive block, and then sidecars.
* `broadcastReceiveBlock`: Add log.
* Add changelog
* Fix deadlock-option 1.
* Fix deadlock-option 2.
* Take notifier out of the critical section
* only compute common info once, for all sidecars
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* Add bitvector field for FixedTestContainer
* Handle Bitvector type using isBitfield flag
* Add Bitvector case for Stringify
* Add bitlist field for VariableTestContainer
* Add bitlistInfo
* Changelog
* Add bitvectorInfo
* Remove analyzeBit* functions and just inline them
* Fix misleading comments
* Add comments for bitlistInfo's Size
* Apply reviews from Radek
* Fix misleading log msg on shutdown
gRPCServer.GracefulStop blocks until the server has been shut down. Logging
"Initiated graceful stop" after it has completed is misleading.
Names are added to the message to distinguish services. Also, a minimal test
is added, mainly to verify the change made with this commit (see the sketch after this list).
* Add changelog fragment file
* Capitalize log messages
* Update endtoend test for fixed log messages
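As referenced above, a minimal sketch of the corrected log ordering (the function, logger, and service-name handling are assumptions, not the actual Prysm code):
```
package gateway

import (
	"github.com/sirupsen/logrus"
	"google.golang.org/grpc"
)

// stopGRPC logs before the blocking call: GracefulStop only returns once all
// pending RPCs have finished and the server has shut down.
func stopGRPC(log *logrus.Entry, name string, server *grpc.Server) {
	log.WithField("service", name).Info("Initiating graceful stop of gRPC server")
	server.GracefulStop()
	log.WithField("service", name).Info("gRPC server stopped")
}
```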
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* `reconstructSaveBroadcastDataColumnSidecars`: Use `s.columnIndicesToSample` instead of recomputing its content.
* Rename `custodyColumns` ==> `columnIndicesToSample`.
* `DataColumnStorage.Save`: Remove wrong godoc.
* Implement `receiveDataColumnSidecars` and transform `receiveDataColumnSidecar` as a subcase of the plural version.
* `dataColumnSubscriber`: Add godoc and remove only once used variable.
* `processDataColumnSidecarsFromExecution`: Use single flight directly in the function.
So the caller no longer has the responsibility to deal with multiple simultaneous calls.
* `processDataColumnSidecarsFromReconstruction`: Guard with a single flight.
In `dataColumnSubscriber`, trigger `processDataColumnSidecarsFromReconstruction` and `processDataColumnSidecarsFromExecution` in parallel.
Stop when the first of them succeeds.
* `processDataColumnSidecarsFromExecution`: Use `receiveDataColumnSidecars` instead of `receiveDataColumnSidecar`.
* Implement and use `broadcastAndReceiveUnseenDataColumnSidecars`.
* Add changelog.
* Fix James' comment.
* Fix James' comment.
* `processDataColumnSidecarsFromReconstruction`: Log reconstruction duration.
* fix: use assigned committee index in attestation data after Electra
* update change log
* update add unit test
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* Change InsertChain
InsertChain has used `ROBlock` since #14571, which allows it to insert the
last block of the chain as well. We change the semantics of InsertChain
to include all blocks and take them in increasing order.
* Fix tests
* Use slices.Reverse
* PeerDAS: Implement sync
* Fix Potuz's comment.
* Fix Potuz's comment.
* Fix Potuz's comment.
* Fix Potuz's comment.
* Fix Potuz's comment.
* Implement `TestFetchDataColumnSidecarsFromPeers`.
* Implement `TestSelectPeers`.
* Fix James' comment.
* Fix flakiness in `TestSelectPeers`.
* Revert "Fix Potuz's comment."
This reverts commit c45230b455.
* Revert "Fix James' comment."
This reverts commit a3f919205a.
* `selectPeers`: Avoid map with key but empty value.
* Fix Potuz's comment.
* Add DataColumnStorage and SubscribeAllDataSubnets flag.
* getBlobsV2: retry if reconstruction isnt successful
* test: engine client and sync package, metrics
* lint: fmt and log capitalisation
* lint: return error when it is not nil
* config: make retry interval configurable
* sidecar: recover function and different context for retrying
* lint: remove unused field
* beacon: default retry interval
* reconstruct: load once, correctly deliver the result to all waiting goroutines
* reconstruct: simplify multi goroutine case and avoid race condition
* engine: remove isDataAlreadyAvailable function
* sync: no goroutine, getblobsv2 in absence of block as well, wrap error
* exec: hardcode retry interval
* da: non blocking checks
* sync: remove unwanted checks
* execution: fix test
* execution: retry atomicity test
* da: updated IsDataAvailable
* sync: remove unwanted tests
* bazel: bazel run //:gazelle -- fix
* blockchain: fix CustodyGroupCount return
* lint: formatting
* lint: lint and use unused metrics
* execution: retry logic inside ReconstructDataColumnSidecars itself
* lint: format
* execution: ensure the retry actually happens when it needs to
* execution: ensure single responsibility, execution should not do DA check
* sync: don't call ReconstructDataColumnSidecars if not required
* blockchain: move IsDataAvailable interface to blockchain package
* execution: make reconstructSingleflight part of the service struct
* blockchain: cleaner DA check
* lint: formatting and remove confusing comment
* sync: fix lint, test and add extra test for when data is actually not available
* sync: new appropriate mock service
* execution: edge case - delete activeRetries on success
* execution: use service context instead of function's for retry
* blockchain: get variable samplesPerSlot only when required
* remove redundant function and fix name
* fix test
* fix more tests
* put samplesPerSlot at appropriate place
* tidy up IsDataAvailable
* correct bad merge
* fix bad merge
* remove redundant flag option
* refactor to deduplicate sidecar construction code
* - Add godocs
- Rename some functions to be closer to the spec
- Add err in return of commitments
* Replace the mutating public (but only internally used) method `Populate` with the private, non-mutating method `extract`.
* Implement a single `processDataColumnSidecarsFromExecution` instead of 2 separate functions (from block and from sidecar).
* `ReceiveBlock`: Wrap errors.
* Remove useless tests.
* `ConstructionPopulator`: Add tests.
* Fix tests
* Move functions to be consistent with blobs.
* `fetchCellsAndProofsFromExecution`: Avoid useless flattening.
* `processDataColumnSidecarsFromExecution`: Stop using DB cache.
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* create lc cache to track branches
* save lc stuff
* remove finalized data from LC cache on finalization
* read lc stuff
* edit tests
* changelog
* linter
* address commments
* address commments 2
* address commments 3
* address commments 4
* lint
* address commments 5 x_x
* set beacon lcStore to mimick registrable services
* clean up the error propagation
* pass the state to saveLCBootstrap since it's not saved in db yet
* remove "experimental" from backfill flag name
* backwards-compatible alias
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* `computeIndicesByRootByPeer`: Add 1 slack epoch regarding peer head slot.
* `FetchDataColumnSidecars`: Switch mode.
Before this commit, this function returned an error as soon as at least ONE requested sidecar was not retrieved.
Now, this function retrieves what it can (best-effort mode) and returns an additional value: the map of sidecars still missing after running this function.
It is now the role of the caller to check this extra returned value and decide what to do in case some requested sidecars are still missing.
* `fetchOriginDataColumnSidecars`: Optimize
Before this commit, when running `fetchOriginDataColumnSidecars`, all the missing sidecars had to be retrieved in a single shot for the sidecars to be considered available. The issue was that if, for example, `sync.FetchDataColumnSidecars` returned all but one sidecar, the returned sidecars were NOT saved, and on the next iteration all the previously fetched sidecars had to be requested again (from peers).
After this commit, we greedily save all fetched sidecars, solving this issue.
* Initial sync: Do not fetch data column sidecars before the retention period.
* Implement perfect peerdas syncing.
* Add changelog.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update after Potuz's comment.
* Fix Potuz's commit.
* Fix James' comment.
---------
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Add flag for colocation whitelisting. --p2p-ip-colocation-whitelist
This change updates the peer IP colocation checking to respect the
configured CIDR whitelist (--p2p-ip-colocation-whitelist flag).
Changes:
- Added IPColocationWhitelist field to peers.StatusConfig
- Added ipColocationWhitelist field to Status struct to store parsed IPNets
- Parse CIDR strings into net.IPNet in NewStatus constructor
- Updated isfromBadIP method to skip colocation limits for whitelisted IPs
- Pass IPColocationWhitelist from Service config when creating Status
The IP colocation whitelist allows operators to exempt specific IP ranges
from the colocation limit, useful for deployments with known trusted
address ranges or legitimate node clustering.
Only check if an IP is in the whitelist when the colocation limit
is actually exceeded, rather than checking for every IP. This is
more efficient and matches the intended behavior.
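An illustrative sketch of that check order (not the actual `isfromBadIP` implementation; the function name and signature are assumptions):
```
package peers

import "net"

// exceedsColocationLimit reports whether a peer's IP violates the colocation
// limit. The CIDR whitelist is consulted only once the limit is exceeded.
func exceedsColocationLimit(ip net.IP, peersFromSameIP, limit int, whitelist []*net.IPNet) bool {
	if peersFromSameIP <= limit {
		return false
	}
	for _, cidr := range whitelist {
		if cidr.Contains(ip) {
			return false // whitelisted ranges are exempt from the limit
		}
	}
	return true
}
```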
* Changelog fragment
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* @kasey feedback: Move IP colocation parsing to the node construction
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Implement KZG proof batch verification for data column gossip validation
* Manu's feedback
* Add tests
* Update beacon-chain/sync/batch_verifier.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/batch_verifier.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Fix tests
* Kasey's feedback
* `validateWithKzgBatchVerifier`: Give up after a full slot.
Before this commit:
After 100 ms, an un-batched verification is launched concurrently to the batched one.
As a result, a stressed node could start to be even more stressed by the multiple verifications.
Also, it is always hard to choose a correct timeout value.
100ms may be OK for a given node with a given BPO version, and not ok for the same node with a BPO version with 10x more blobs.
However, we know this gossip validation won't be useful after a full slot duration.
After this commit:
After a full slot duration, we just ignore the incoming gossip message.
It's important to ignore it and not to reject it, since rejecting it would downscore the peer sending this message.
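A small sketch of the post-commit behavior, assuming libp2p pubsub validation results (surrounding verifier code omitted):
```
package sync

import (
	"time"

	pubsub "github.com/libp2p/go-libp2p-pubsub"
)

// giveUpResult is the validation result to use once a full slot has elapsed:
// IGNORE rather than REJECT, so the sending peer is not downscored.
const giveUpResult = pubsub.ValidationIgnore

// pastFullSlot reports whether more than one slot duration has elapsed since
// the sidecar's slot started, i.e. the point after which validating it is no
// longer useful.
func pastFullSlot(slotStart time.Time, slotDuration time.Duration) bool {
	return time.Since(slotStart) > slotDuration
}
```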
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Calculate max epoch and churn for slashing once
* calculate once for proposer and attester slashings
* changelog <3
* introduce struct
* check if err is nil in ProcessVoluntaryExits
* rename exitData to exitInfo and return from functions
* cleanup + tests
* cleanup after rebase
* Potuz's review
* pre-calculate total active balance
* remove `slashValidatorFunc` closure
* Avoid a second validator loop
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* remove balance parameter from slashing functions
---------
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: potuz <potuz@prysmaticlabs.com>
* adding web 3 signer changes for fulu
* missed adding fulu values
* add accounting for fulu version
* updated web3signer version and fixing linting
* Update validator/keymanager/remote-web3signer/types/requests_test.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/keymanager/remote-web3signer/types/mock/mocks.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek suggestions
* removing redundant check and removing old function, changed changelog to reflect
* gaz
* radek's suggestion
* fixing test as v1 proof is no longer used
* fixing more tests
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
When peers return invalid data during initial sync, log the specific
validation failure reason. This helps identify:
- Whether peer exceeded requested block count
- Whether peer exceeded MAX_REQUEST_BLOCKS protocol limit
- Whether blocks are outside the requested slot range
- Whether blocks are out of order (not increasing or wrong step)
Each log includes the specific condition that failed, making it easier
to debug whether the issue is with peer implementations or request
validation logic.
* Add vectorInfo
* Add 2D bytes field for test
* Add tag_parser for parsing SSZ tags
* Integrate tag parser with analyzer
* Add ByteList test case
* Changelog
* Better printing feature with Stringer implementation
* Return error for non-determined case without printing other values
* Update tag_parser.go: handle Vector and List mutually exclusive (inspired by OffchainLabs/fastssz)
* Make linter happy
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Subscription: Get fanout peers in all data column subnets when at least one validator is connected
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Updated test structure to @nalepae suggestion
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Add VariableTestContainer in ssz_query.proto
* Add listInfo
* Use errors.New for making an error with a static string literal
* Add listInfo field when analyzing the List type
* Persist the field order in the container
* Add actualOffset and goFieldName at fieldInfo
* Add PopulateFromValue function & update test runner
* Handle slice of ssz object for marshalling
* Add CalculateOffsetAndLength test
* Add comments for better doc
* Changelog :)
* Apply reviews from Radek
* Remove actualOffset and update offset field instead
* Add Nested container of variable-sized for testing nested path
* Fix offset adding logics: for variable-sized field, always add 4 instead of its fixed size
* Fix multiple import issue
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Fix next epoch proposer duties
* Do not update state's slot when computing the proposer
Also do not call Fulu's proposer lookahead if the requested epoch is not
current or next.
* retract Terence's test
* Fix tests
* removing epoch check to pass spec test
* reverting rollback and fixing test setup
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* propose block changes from peerdas branch
* breaking out broadcast code into its own helper, changing fulu broadcast for rest api to properly send datasidecars
* renamed validate blobsidecars to validate blobs, and added check for max blobs
* gofmt
* adding in batch verification for blobs"
* changelog
* adding kzg tests, moving new kzg functions to validation.go
* linting and other small fixes
* fixing linting issues and adding some proposer tests
* missing dependencies
* fixing test
* fixing more tests
* gaz
* removed return on broadcast data columns
* more cleanup and unit test adjustments
* missed removal of unneeded field
* adding data column receiver initialization
* Update beacon-chain/rpc/eth/beacon/handlers.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* partial review feedback from manu
* gaz
* reverting some code to peerdas as I don't believe the broadcast code needs to be reused
* missed removal of build dependency
* fixing tests and adding another test based on manu's suggestion
* fixing linting
* Update beacon-chain/rpc/eth/beacon/handlers.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update beacon-chain/blockchain/kzg/validation.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek's review changes
* adding missed test
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Swap the wrong arguments in a call
I saw that the names of the passed arguments and the ones of the
function parameters don't match, so I suspect that it's a bug.
* Add changelog
* Add validation for the fillInForkChocieMissingBlocks checkpoints.
* Add test for checkpoint epoch validation in fillInForkChoiceMissingBlocks.
* Use a sentinel error rather than error string
---------
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
By default, when starting a node, we load the finalized checkpoint from
the DB and set it as head. When the chain has not been finalizing for a
while and the user does not start from the latest head, it may still be
beneficial to start from the latest justified checkpoint, which must be
a descendant of the finalized one.
* `FetchDataColumnSidecars`: If possible, try to reconstruct after retrieving sidecars from peers if some are still missing.
* `randomPeer`: Deterministic randomness.
Before this commit, `randomPeer` was non-deterministic, even with a deterministic random source. The reason is that we iterated over a map (whose iteration order is random) and then stopped the iteration at a chosen random index (which can be deterministic if the random source is deterministic).
After this commit, `randomPeer` and all its callers are fully deterministic when using a deterministic random source (see the sketch after this list).
* Fix Potuz's comment.
* Fix James' comment.
* `tryReconstructFromStorageAndPeers`: Improve godoc.
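A minimal sketch of the deterministic selection mentioned in the `randomPeer` item above (names and types are assumptions): sort the map keys before indexing with the seeded random source, since Go map iteration order is randomized.
```
package sync

import (
	"math/rand"
	"sort"
)

// randomPeerID picks a peer deterministically for a given seeded random source.
func randomPeerID(peers map[string]struct{}, rng *rand.Rand) (string, bool) {
	if len(peers) == 0 {
		return "", false
	}
	ids := make([]string, 0, len(peers))
	for id := range peers {
		ids = append(ids, id)
	}
	sort.Strings(ids) // remove the map's random iteration order
	return ids[rng.Intn(len(ids))], true
}
```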
* Add basic PathElement
* Add ssz_type.go
* Add basic sszInfo
* Add containerInfo
* Add basic analyzer without analyzing list/vector
* Add analyzer for homogeneous collection types
* Add offset/length calculator
* Add testutil package in encoding/ssz/query
* Add first round trip test for IndexedAttestationElectra
* Go mod tidy
* Add Print function for debugging purpose
* Add changelog
* Add testonly flag for testutil package & Nit for nogo
* Apply reviews from Radek
* Replace fastssz with prysmaticlabs one
* Add proto/ssz_query package for testing purpose
* Update encoding/ssz/query tests to decouple with beacon types
* Use require.* instead of assert.*
* Fix import name for proto ssz_query package
* Remove uint8/uint16 and some byte arrays in FixedTestContainer
* Add newline for files
* Fix comment about byte array in ssz_query.proto
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* `startBaseServices`: Warm data column storage cache.
* `TestFindPeers_NodeDeduplication`: Use `t.Context`.
* `BUILD.bazel`: Move `# gazelle.ignore` to the top of the file.
Rationale: this directive is applied to the whole file, regardless of its position in the file.
* Improve `TestConstructGenericBeaconBlock`: Courtesy of Terence
* Add `TestDataColumnStoragePath_FlagSpecified`.
* `appFlags`: Move `flags.SubscribeAllDataSubnets` (cosmetic).
* `appFlags`: Add `storage.DataColumnStoragePathFlag`.
* Add changelog.
* Fix subnet peer discovery
Currently computeAllNeededSubnets is called only once when the subnets
are subscribed. It should have been called regularly.
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* adding what I think could be a fix for find peer
* removing uneeded comment
* unit tests
* linting
* gofmt
* changelog
* Update beacon-chain/p2p/discovery_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update changelog/james-prysm_fix-find-peers.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fixing test import
* applying suggestions
* fixing typo
* manu feedback
* accidentally checked in files
* addressing manu's edgecase, old bug
* moving tests from service-test.go to subnets_test.go and adding coverage for receiving bad existing node with higher seq
* cleanup
* updating for clarity
* missingPeerCount should increment if we are removing the peer from map
* manu's recommendations on defective subnet rollback edge case
* rollback introduced too much complication as well as a new bug so we are removing it
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests
* Apply suggestion from @prestonvanloon
Co-Authored-By: Preston Van Loon <pvanloon@offchainlabs.com>
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* refactor: use auto-generated HashTreeRoot functions in htrutil.go
* refactor: use type alias for Transaction & use SliceRoot for TransactionsRoot
* changelog
* fix: TransactionsRoot receives raw 2d bytes as an argument
* fix: handle nil argument
* test: add nil test for fork and checkpoint
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* initialize genesis data asap at node start
* add genesis validation tests with embedded state verification
* Add test for hardcoded mainnet genesis validator root and time from init() function
* Add test for UnmarshalState in encoding/ssz/detect/configfork.go
* Add tests for genesis.Initialize
* Move genesis/embedded to genesis/internal/embedded
* Gazelle / BUILD fix
* James feedback
* Fix lint
* Revert lock
---------
Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Fix validateConsensus
Reported by NuConstruct
The stater package looks for a state root using the head state from the
blockchain package. However, this state is very unlikely to have the
post-state root, since that's only added after slot processing. I assume
that essentially any REST endpoint that uses this mechanism to get head
is broken if it needs to gather a state by state root.
This PR is a placeholder to verify that this is the issue; here I just check
if the NSC already has the post-state, since that will already have the
processed state cached.
* Add changelog
* add fallback
* Fix tests
* Add entry for sequence number in chain-metadata bucket & Basic getter/setter
* Mark p2p-metadata flag as deprecated
* Fix metaDataFromConfig: use DB instead to get seqnum
* Save sequence number after updating the metadata
* Fix beacon-chain/p2p unit tests: add DB in config
* Add changelog
* Add ReadOnlyDatabaseWithSeqNum
* Code suggestion from Manu
* Remove seqnum getter at interface
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* Fix race on ReceiveBlock
In the event two routines for `ReceiveBlock` are triggered with the same
block (it may happen if one routine is triggered over gossip and the
other in init-sync) it may happen that the second routine believes it's
syncing the block for the first time. This is because the cache on
`blocksBeingSynced` is not checked to be set and the block may still not
be put in forkchoice by the first routine.
In the normal case this would not cause any trouble as the second
forkchoice insertion is a noop by design. However, if the second routine
times out or has any error in processing (for example the engine will
return an error if we try to send FCU to an older head) then the second
routine will attempt to remove the inserted block from forkchoice and
this bricks the node: forkchoice refuses to remove a valid node, the root
is removed unconditionally from the DB, and the node ends up with a root
that is not in the DB but remains in forkchoice.
This PR just prevents the race (see the sketch after this list).
As a followup perhaps we can gate the rollback function from db to nodes
that are effectively not in forkchoice, alternatively, force removal
from forkchoice when rolling back from db (although this version is
complicated due to possible accounting issues on forkchoice).
* Fix lint
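A sketch of the kind of check-and-set guard referenced in the race-fix description above (assumed names; the actual fix in `ReceiveBlock` may differ): only the routine that first marks the root as being synced proceeds with insertion and, crucially, with any rollback.
```
package blockchain

import "sync"

// blockSyncGuard tracks block roots currently being processed so that a second
// concurrent ReceiveBlock call for the same root does not treat itself as the
// first (and roll back state it does not own).
type blockSyncGuard struct {
	mu      sync.Mutex
	syncing map[[32]byte]struct{}
}

func (g *blockSyncGuard) tryAcquire(root [32]byte) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	if _, busy := g.syncing[root]; busy {
		return false // another routine already owns this block
	}
	if g.syncing == nil {
		g.syncing = make(map[[32]byte]struct{})
	}
	g.syncing[root] = struct{}{}
	return true
}

func (g *blockSyncGuard) release(root [32]byte) {
	g.mu.Lock()
	defer g.mu.Unlock()
	delete(g.syncing, root)
}
```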
* Assign max_cell_proofs_length the correct value
* Add changelog fragment
* Run update-go-pbs.sh
* Run update-go-ssz.sh
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Currently the payload attribute event is triggered in
`forkchoiceUpdateWithExecution`. However, when we import an early block,
we do not call this function; we make two calls to FCU instead. The first one
is on a locked path at the end of `postBlockProcess`, and this call is made
without any payload attributes to avoid updating the shuffling caches.
The second call is made on `handleSecondFCUCall` which calls directly
`notifyForkchoiceUpdate` bypassing the call to
`forkchoiceUpdateWithExecution`, but this call is the one that actually
computes the payload attributes. So the event handler is never called
with the new attributes.
This PR moves the event trigger to the same place where we actually call
FCU with the computed payload attributes.
Some considerations with the forkchoice locking logic: since the calls are
always in a goroutine, the routine will wait for forkchoice to be unlocked
before proceeding.
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* fix: submitPoolSyncCommitteeSignatures response inconsistent
* update: bazel build file
* update: add changelog fragment file
* update api/server/structs/BUILD.bazel format
* update the unit test
* update: the error format
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
This commit adds a Prometheus histogram metric to measure the processing
duration of the PublishBlockV2 beacon API endpoint in milliseconds.
The metric covers the complete request processing time including:
- Request validation and parsing
- Block decoding (SSZ/JSON)
- Broadcast validation checks
- Block proposal through ProposeBeaconBlock
- All synchronous operations and awaited goroutines
Background operations that run in goroutines (block broadcasting, blob
sidecar processing) are included in the timing since the main function
waits for their completion before returning.
Files changed:
- beacon-chain/rpc/eth/beacon/metrics.go: New metric definition
- beacon-chain/rpc/eth/beacon/handlers.go: Timing instrumentation
- beacon-chain/rpc/eth/beacon/BUILD.bazel: Added metrics.go and Prometheus deps
- changelog/potuz_add_publishv2_metric.md: Changelog entry
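A minimal sketch of what such a histogram and its instrumentation could look like; the metric name, buckets, and the `Server` type here are assumptions, not the exact code added in this PR:
```
package beacon

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// Server is a placeholder for the real beacon API server type.
type Server struct{}

// publishBlockV2Duration is a hypothetical metric; the real name and buckets may differ.
var publishBlockV2Duration = promauto.NewHistogram(prometheus.HistogramOpts{
	Name:    "publish_block_v2_duration_milliseconds",
	Help:    "Processing time of the PublishBlockV2 endpoint in milliseconds.",
	Buckets: prometheus.ExponentialBuckets(1, 2, 12),
})

// PublishBlockV2 records the full request processing time, including the
// synchronous work and any goroutines the handler waits on before returning.
func (s *Server) PublishBlockV2(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	defer func() {
		publishBlockV2Duration.Observe(float64(time.Since(start).Milliseconds()))
	}()
	// ... request validation, block decoding, broadcast checks, block proposal ...
}
```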
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <noreply@anthropic.com>
* Log when downscoring a peer.
* `validateSequenceNumber`: Downscore peer in function, clarify and add logs
* `AddConnectionHandler`: Move the majority of the code to the outer scope (no functional change).
* `disconnectBadPeer`: Improve log.
* `sendRPCStatusRequest`: Improve log.
* `findPeersWithSubnets`: Add preventive peer filtering.
(As done in `s.findPeers`.)
* `Stop`: Use one `defer` for the whole function.
Reminder: `defer`s are executed backwards.
* `Stop`: Send a goodbye message to all connected peers when stopping the service.
Before this commit, stopping the service did not send a goodbye message to connected peers. The issue with this approach is that each peer still thinks we are alive and keeps trying to communicate with us. Unfortunately, because we are offline, we cannot respond. Because of that, the peer starts to downscore us, and then bans us. As a consequence, when we restart, the peer refuses our connection request.
By sending a goodbye message when stopping the service, we ensure the peer stops expecting anything from us. When we restart, everything is all right.
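A rough sketch of the idea, assuming a hypothetical `send` callback in place of Prysm's actual goodbye RPC:
```
package p2p

import (
	"context"
	"sync"
	"time"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
)

// broadcastGoodbye tells every currently connected peer that we are shutting
// down, best effort and bounded by a short timeout so Stop is not delayed.
func broadcastGoodbye(h host.Host, send func(context.Context, peer.ID) error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	var wg sync.WaitGroup
	for _, pid := range h.Network().Peers() {
		wg.Add(1)
		go func(pid peer.ID) {
			defer wg.Done()
			_ = send(ctx, pid) // ignore errors: the peer may already be gone
		}(pid)
	}
	wg.Wait()
}
```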
* `ConnectedF` and `DisconnectedF`: Work around a very probable libp2p bug by preventing outbound connections to very recently disconnected peers.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* `AddDisconnectionHandler`: Handle multiple close calls to `DisconnectedF` for the same peer.
* removing ssz-only flag
* gaz
* reverting other uses of sszonly
* gaz
* adding kasey and radek's suggestions
* update changelog
* adding test
* radek advice with new headers and tests
* adding logs and fixing comments
* adding logs and fixing comments
* gaz
* Update validator/client/beacon-api/rest_handler_client.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update api/apiutil/header.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update api/apiutil/header.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek's comments
* adding another failing case based on radek's suggestion
* another unit test
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Subnets subscription: Avoid dynamic subscribing blocking in case not enough peers per subnets are found.
* `subscribeWithParameters`: Use struct to avoid too many function parameters (no functional changes).
* Optimise subnets search.
Currently, when we are looking for peers in, say, data column sidecar subnets 3, 6 and 7, we first look for peers in subnet 3.
If, during the crawl, we meet some peers advertising subnet 6, we discard them (because we are exclusively looking for peers in subnet 3).
Once we are happy, we start again for peers in subnet 6.
This commit optimizes that by looking for peers that satisfy our constraints in a single pass.
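A simplified sketch of the single-pass idea; the types here are placeholders, not the actual discovery code:
```
package p2p

// wantedSubnets tracks, per subnet index, how many peers we still need.
type wantedSubnets map[uint64]int

// keepNode reports whether a discovered node advertising the given subnets is
// useful for any subnet we still need, and decrements the missing counts.
// A single crawl can therefore satisfy subnets 3, 6 and 7 at once instead of
// running one crawl per subnet and discarding otherwise useful peers.
func (w wantedSubnets) keepNode(advertised []uint64) bool {
	useful := false
	for _, s := range advertised {
		if missing, ok := w[s]; ok && missing > 0 {
			w[s] = missing - 1
			useful = true
		}
	}
	return useful
}

// satisfied reports whether every wanted subnet already has enough peers.
func (w wantedSubnets) satisfied() bool {
	for _, missing := range w {
		if missing > 0 {
			return false
		}
	}
	return true
}
```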
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James's comment.
* Simplify following James' comment.
* Fix James' comment.
* Update beacon-chain/sync/rpc_goodbye.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Update config/params/config.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Update beacon-chain/sync/subscriber.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Fix Preston's comment.
* Fix Preston's comment.
* `TestService_BroadcastDataColumn`: Re-add sleep 50 ms.
* Fix Preston's comment.
* Update beacon-chain/p2p/subnets.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Convert genesis times from seconds to time.Time
* Fixing failed forkchoice tests in a new commit so it doesn't get worse
Fixing failed spectest tests in a new commit so it doesn't get worse
Fixing forkchoice tests, then spectests
* Fixing forkchoice tests, then spectests. Now asking for help...
* Fix TestForkChoice_GetProposerHead
* Fix broken build
* Resolve TODO(preston) items
* Changelog fragment
* Resolve TODO(preston) items again
* Resolve lint issues
* Use consistent field names for sinceSlotStart (no spaces)
* Manu's feedback
* Renamed StartTime -> UnsafeStartTime, marked as deprecated because it doesn't handle overflow scenarios.
Renamed SlotTime -> StartTime
Renamed SlotAt -> At
Handled the error in cases where StartTime was used.
@james-prysm feedback
* Revert beacon-chain/blockchain/receive_block_test.go from 1b7844de
* Fixing issues after rebase
* Accepted suggestions from @potuz
* Remove CanonicalHeadSlot from merge conflicts
---------
Co-authored-by: potuz <potuz@prysmaticlabs.com>
* Reword DV selection proofs
* Add changelog
* Expect 1 call to DomainData in DV update duties test
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* moving the ticker from chain start to right before the main loop and also after the wait for activation edge case
* fixing unit test
* adding in a unit test
* adding in comment based on potuz's feedback
* Add log capitalization analyzer and apply fixes across codebase
Implements a new nogo analyzer to enforce proper log message capitalization and applies the fixes to all affected log statements throughout the beacon chain, validator, and supporting components.
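A minimal sketch of what such an analyzer can look like with `golang.org/x/tools/go/analysis`; the real analyzer's rules, exclusions, and package name will likely differ:
```
package logcapitalization

import (
	"go/ast"
	"go/token"
	"unicode"

	"golang.org/x/tools/go/analysis"
)

// Analyzer flags log calls whose message literal starts with a lowercase letter.
var Analyzer = &analysis.Analyzer{
	Name: "logcapitalization",
	Doc:  "enforces capitalized log messages",
	Run:  run,
}

func run(pass *analysis.Pass) (interface{}, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			call, ok := n.(*ast.CallExpr)
			if !ok || len(call.Args) == 0 {
				return true
			}
			sel, ok := call.Fun.(*ast.SelectorExpr)
			if !ok || !isLogMethod(sel.Sel.Name) {
				return true
			}
			lit, ok := call.Args[0].(*ast.BasicLit)
			if !ok || lit.Kind != token.STRING || len(lit.Value) < 3 {
				return true
			}
			// lit.Value includes the quotes; simplified ASCII check of the first character.
			if unicode.IsLower(rune(lit.Value[1])) {
				pass.Reportf(lit.Pos(), "log message should start with a capital letter")
			}
			return true
		})
	}
	return nil, nil
}

func isLogMethod(name string) bool {
	switch name {
	case "Info", "Warn", "Error", "Debug", "Infof", "Warnf", "Errorf", "Debugf":
		return true
	}
	return false
}
```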
Co-Authored-By: Claude <noreply@anthropic.com>
* Radek's feedback
---------
Co-authored-by: Claude <noreply@anthropic.com>
* always init service through NewService
* move head state cache to service struct
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* poc changes for safe validator shutdown
* simplifying health routine and adding safe shutdown after max restarts reached
* fixing health tests
* fixing tests
* changelog
* gofmt
* fixing runner
* improve how runner times out
* improvements to ux on logs
* linting
* adding in max healthcheck flag
* changelog
* Update james-prysm_safe-validator-shutdown.md
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/runner.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/service.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/runner.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/runner.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* addressing some feedback from radek
* addressing some more feedback
* fixing name based on feedback
* fixing mistake on max health checks
* conflict accidentally checked in
* go 1.23 no longer requires you to stop the ticker
* Update flags.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* wip no unit test for recursive healthy host find
* rework healthcheck
* gaz
* fixing bugs and improving logs with new monitor
* removing health tracker, fixing runner tests, and adding placeholder for monitor tests
* fixing event stream check
* linting
* adding in health monitor tests
* gaz
* improving test
* removing some log.fatals
* forgot to remove comment
* missed fatal removal
* doppleganger should exit the node safely now
* Update validator/client/health_monitor.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek review
* Update validator/client/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/health_monitor.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/health_monitor.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/health_monitor.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update validator/client/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek feedback
* read up on more suggestions and making fixes to channel
* suggested updates after more reading
* reverting some of this because it froze the validator after healthcheck failed
* fully reverting
* some improvements I found during testing
* Update cmd/validator/flags/flags.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* preston's feedback
* clarifications on changelog
* converted to using an event feed instead of my own channel publishing implementation, adding relevant logs
* preston log suggestion
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Topic mapping: Group `const` and `var`.
Cosmetic change, no functional change.
* `TopicFromMessage`: No longer assume that all Fulu-specific topics are V3 only.
* Proto: Remove unused `DataColumnIdentifier` and add new `StatusV2`.
`DataColumnIdentifier` was removed in the spec here: https://github.com/ethereum/consensus-specs/pull/4284. Eventually, we stopped using it in Prysm, but never removed the corresponding proto message.
The new `StatusV2` is introduced in the spec here: https://github.com/ethereum/consensus-specs/pull/4374
* `readChunkedDataColumnSideCar` ==> `readChunkedDataColumnSidecar`.
* `rpc_send_request.go`: Reorganize file (no functional change).
* `readChunkedDataColumnSidecar`: Add `validationFunctions` parameter and add tests.
* `dataColumnsRPCMinValidSlot`: Add new test case.
* peerDAS: Implement `dataColumnSidecarByRangeRPCHandler`.
* `rateLimitingAmount`: Define outside of the function.
* Fix James' comment.
* Fix James' comment.
* Add the new Fulu state with the new field
* fix the hasher for the fulu state
* Fix ToProto() and ToProtoUnsafe()
* Add the fields as shared
* Add epoch transition code
* short circuit the proposer cache to use the state
* Marshal the state JSON
* update spectests to 1.6.0-alpha.1
* Remove deneb and electra entries from blob schedule
This was cherry picked from PR #15364
and edited to remove the minimal cases
* Fix minimal tests
* Increase deadline for processing blocks in spectests
* Preston's review
* review
---------
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
* remove usages of params from proto packages
* make it harder to mess up the order of request limit args
* remove errant edit (Terence review)
* fix missed updates after sig change
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* `verifyBlobCommitmentCount`: Print max allowed blob count in error message.
* `TestPersist`: Use `fieldparams.RootLength` instead of `32`.
* `TestDataColumnSidecarsByRootReq_Marshal`: Remove blank line.
* `ConvertPeerIDToNodeID`: Improve readability by using one line per field.
* `parseIndices`: Return `[]int` instead of `[]uint64`.
Rationale: These indices are used in
`func (m *SparseMerkleTrie) MerkleProof(index int) ([][]byte, error)`
that requires `int` and not `uint64`.
This `MerkleProof` function is used in a lot of places in the codebase.
==> Changing the signature of `parseIndices` is simpler than changing the signature of `MerkleProof`.
* adding in proto for getdutiesv2
* adding in GetDutiesV2
* updating changelog and mock
* fixing tests
* breaking up function into smaller functions for gocognit
* reverting some changes so we can break it into a separate PR for validator client only
* reverted too much adding mock change back in
* removing reserved based on preston's feedback
* adding duties tests back in
* Update beacon-chain/rpc/prysm/v1alpha1/validator/duties.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* updating based on preston's feedback
* updating build for new files
* maybe there's some flake with how the state is used, trying with it moved
* reverting config override
* reverting all changes to get duties v1 based on preston's feedback
* Update proto/prysm/v1alpha1/validator.proto
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* addressing partial feedback
* adding small comment
* reorganizing function based on feedback
* removing unused field
* adding some more unit tests
* adding attribute for pubkeys to span
* preston's feedback
* fixing current to next
* probably safer to register current period now
* Update beacon-chain/core/helpers/beacon_committee.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* adding in preston's comment
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* fix getCommittees when slot not in given epoch
* add changelog file
* add unit test & fix changelog file format
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Flatten spectests directory: Move all spectests to a single directory per network.
Commands ran:
```
cd testing/spectest/
```
Then for each network, I ran the following command twice.
```
find . -type f -name '*_test.go' -exec bash -c '
for file; do
dir=$(dirname "$file")
base=$(basename "$file" _test.go)
new_name="./${dir#./}__${base}_test.go"
git mv "$file" "$new_name"
done
' bash {} +
```
Then updated the packages with a command like
```
sed -i 's/package [a-zA-Z0-9_]\+/package mainnet/g' *.go
```
Updated commit from 5edadd7b to address @Kasey's feedback.
* Fix panic when checking types. String is not compatible with DeepEqual.
* Docs: add commentary on the filename convention
* Add a section about nightly tests to the spectest readme. Ref https://github.com/OffchainLabs/prysm/pull/15312
* Set shard_count to optimal value... one!
* Changelog fragment
* use latest unclog release
* Update spectest build instructions after #9122
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* wip
* more wip
* commenting out builder redefinition of payload types
* removing unneeded commented out code and unused type conversions
* fixing unit tests and removing irrelevant unit tests
* gaz
* adding changelog
* Update api/server/structs/conversions_block_execution.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Update api/server/structs/conversions_block_execution.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Revert some changes to test files, change the minimum to support the new type changes
* Remove commented block
* fixing import
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* PeerDAS: Implement the validation pipeline for data column sidecars received via gossip
* Fix Terence's comment
* Fix Terence's comment.
* Fix Terence's comment.
* Add blob schedule support
* Feedback
* Fix e2e test
* adding in temporary fix until web3signer is supported for blob schedule in config
* Add fallback
* Uncomment blob schedule from placeholder fields for test
* Log blob limit increase
* Sort
---------
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* Added a symlink in the Docker image from /bin/sh -> /bin/bash
* Added changelog fragment
---------
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Force duties update on received blocks.
- Change the context on UpdateDuties to be passed by the calling
function.
- Change the context passed to UpdateDuties to not be dependent on a
slot context.
- Force the deadline to span an entire epoch (see the sketch after this list).
- Force duties to be initialized when receiving a HeadEvent if they
aren't already.
- Adds a read lock on the event handling
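A minimal sketch of an epoch-wide deadline; the slot timing constants are assumed for illustration (the real code reads them from the beacon config):
```
package validator

import (
	"context"
	"time"
)

// Assumed values for illustration; the real code derives them from the config.
const (
	slotsPerEpoch  = 32
	secondsPerSlot = 12 * time.Second
)

// epochContext derives a context whose deadline spans a full epoch, so a duties
// update started late in a slot is not cut short by a per-slot context.
func epochContext(parent context.Context) (context.Context, context.CancelFunc) {
	return context.WithTimeout(parent, slotsPerEpoch*secondsPerSlot)
}
```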
* review
* Add deadlines at start and healthyagain
* cancel once
* Use otelgrpc for tracing the gRPC server and client (see the sketch after this commit's notes).
* Changelog fragment
* gofmt
* Use context in prometheus service
* Remove async start of prometheus service
* Use random port to reduce the probability of concurrent tests using the same port
* Remove comment
* fix lint error
---------
Co-authored-by: Bastin <bastin.m@proton.me>
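A minimal sketch of wiring otelgrpc stats handlers, as referenced in the tracing note above; the helper names are assumptions and the exact options Prysm uses may differ:
```
package tracing

import (
	"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
	"google.golang.org/grpc"
)

// newTracedServer returns a gRPC server whose RPCs are traced via otelgrpc.
func newTracedServer(opts ...grpc.ServerOption) *grpc.Server {
	opts = append(opts, grpc.StatsHandler(otelgrpc.NewServerHandler()))
	return grpc.NewServer(opts...)
}

// newTracedClient opens a client connection whose outgoing RPCs are traced.
func newTracedClient(target string, opts ...grpc.DialOption) (*grpc.ClientConn, error) {
	opts = append(opts, grpc.WithStatsHandler(otelgrpc.NewClientHandler()))
	return grpc.NewClient(target, opts...)
}
```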
* include block in attr event and use stategen
* use no-copy state cache for proposal in same epoch
* only advance to the start of epoch
---------
Co-authored-by: Kasey <kasey@users.noreply.github.com>
* Add equivocation detection logic; broadcast a slashing immediately on equivocation (see the sketch after these notes)
* nit: comments
* move equivocation detection to validateBeaconBlockPubSub
* include broadcasting logic within the helper function
* fix lint
* Add unit tests for equivocation detection
* remove comments that are not required
* Add changelog file
* Add descriptive comment for detectAndBroadcastEquivocation
* use head block instead of block cache for equivocation detection
* add more equivocation unit tests; update a mock to include HeadState error
* update the order of the checks
* move slashing before state fetch; update Tests
* update changelog
* use verifyProposerSlashing to verify and reject block; remove verifySlashableBlock; update tests
* Update changelog
* nit: cleaner error check
* nit: clean up
* revert code logic; update string check; add a unit test
* improve errors; merge tests
* Update a unit test
* fix lint
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
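A simplified sketch of the equivocation check described in the notes above; the header type is a placeholder for the real signed block header:
```
package sync

// signedHeader is a placeholder for the real signed beacon block header type.
type signedHeader struct {
	Slot          uint64
	ProposerIndex uint64
	Root          [32]byte
}

// isEquivocation reports whether two headers from the same proposer at the same
// slot commit to different blocks, i.e. a slashable double proposal that should
// trigger building and broadcasting a proposer slashing immediately.
func isEquivocation(a, b signedHeader) bool {
	return a.Slot == b.Slot &&
		a.ProposerIndex == b.ProposerIndex &&
		a.Root != b.Root
}
```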
* Implement all needed KZG wrappers for peerDAS in the `kzg` package.
This way, if we need to change the KZG backend, the only package to
modify is the `kzg` package.
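A sketch of the wrapper shape only; the type sizes, names, and the backend interface here are assumptions, and the real package delegates to a concrete KZG library:
```
package kzg

type (
	Blob  [131072]byte
	Cell  [2048]byte
	Proof [48]byte
)

// backend is whatever KZG implementation the package is currently built on.
type backend interface {
	ComputeCellsAndProofs(blob *Blob) ([]Cell, []Proof, error)
	VerifyCellProof(cell Cell, proof Proof) (bool, error)
}

// impl is set once at init time to the chosen library's adapter.
var impl backend

// ComputeCellsAndProofs is the symbol callers depend on; swapping the KZG
// backend means changing impl, not every call site in the codebase.
func ComputeCellsAndProofs(blob *Blob) ([]Cell, []Proof, error) {
	return impl.ComputeCellsAndProofs(blob)
}

// VerifyCellProof forwards to the backend as well.
func VerifyCellProof(cell Cell, proof Proof) (bool, error) {
	return impl.VerifyCellProof(cell, proof)
}
```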
* `.bazelrc`: Add `build --compilation_mode=opt`
* Remove --compilation_mode=opt, use supranational blst headers.
* Fix Terence's comment.
* Fix Terence's comments.
---------
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* separate block/blob peer scoring
* Preston's test coverage feedback
* test to ensure we don't combine distinct errors
---------
Co-authored-by: Kasey <kasey@users.noreply.github.com>
* adding in guard and tests
* adding in review from slack
* reverting error wrapping
* fixing tests
* fixing more tests
* removing defer cleanups
* Update beacon-chain/rpc/eth/validator/handlers_test.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Update beacon-chain/rpc/eth/beacon/handlers_pool.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* fixing test and adding some coverage for validator_test.go for function
* gaz
* adding happy path test
* fixing buildkite removing test helper dependency
* fixing tests
* gaz
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* Add support for electra fork epoch
* Fix tests
* fixing tests without needing to override electra fork config
* reverting electra fork override on blob test too, replacing with custom timefetcher
* reverting old changelog
* Fix genesis time
* Move mainnet test into mainnet_config_test.go
* Update spec test to v1.5.0-beta.5
---------
Co-authored-by: james-prysm <james@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Migrate Prysm repo to Offchain Labs organization ahead of Pectra upgrade v6
* Replace prysmaticlabs with OffchainLabs on general markdowns
* Update mock
* Gazelle and add mock.go to excluded generated mock file
* add lcStore to Node
* changelog entry
* add atomic getters and setters for the store
* change store fields visibility to private
* refactor method names and add tests
* remove get from getters
* Trigger Consolidation
* Finally have it working
* Fix Build
* Revert Change
* Fix Context
* Finally have consolidations working
* Get Evaluator Working
* Get Withdrawals Working
* Changelog
* Finally Passes
* New line
* Try again
* fmt
* fmt
* Fix Test
* removing redundant loop
* Update beacon-chain/rpc/prysm/v1alpha1/validator/proposer_attestations_electra.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* removing unused import
* replacing with more used function
* resolving Unsafe cast from uint64 to int error
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* cleanup
* fixing optimal sort order for struct
* reverting clictx change based on kasey's feedback
* adding in fix for old trace file flag
* more cleanup for clarity
* optimizing if statement check and fixing wallet open on web enable
* optimizing if statement check and fixing wallet open on web enable
* some more cleanup and bug fix on open wallet
* reverting debug.go changes will handle in a separate PR
* removing useless comment
* changing useWeb to enableAPI
* fixing tests and linting
* manu feedback and one optimization removing auth token check
* gaz
* wip
* fixing unit tests
* changing is aggregator function
* wip
* fully removing the use of committee from validator client, adding a wrapper type for duties
* fixing tests
* fixing linting
* fixing more tests
* changelog
* adding some more tests
* Update proto/prysm/v1alpha1/validator.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek's feedback
* removing accidentally checked in
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Implement static analysis to prevent panics
* Add nopanic to nogo
* Fix violations and add exclusions
Fix violations and add exclusions for all
* Changelog fragment
* Use pass.Report instead of pass.Reportf
* Remove strings.ToLower for checking init method name
* Add exclusion for herumi init
* Move api/client/beacon template function to init and its own file
* Fix nopanic testcase
* feat(event-stream): Add block_gossip topic support
* feat(event-stream): Add block_gossip topic support
* feat(event-stream): Add block_gossip topic support
* feat: add block gossip topic support to beacon api event stream
* fix: sync_fuzz_test panic
* fix: check for nil operationNotifier before sending block gossip
The operationNotifier was not being checked for nil before being used,
which could lead to a panic if it was not initialized. This commit adds
a nil check to prevent the panic.
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
* adding in check for non prysm node and moving the prysm endpoint to the prysm beacon client
* fixing a bug connecting to a non prysm client and moving the prysm api call to the prysm beacon client
* changelog
* fixing linting
* Update validator/client/metrics.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update seen unaggregated att cache to Electra
* changelog <3
* pass full att
* revert extracting length check
* check if 1 bit set
* test fix
* adding end to end unit test saving unaggregated electra att and then verifying that it's already seen in the cache
* Terence's feedback
* return false on errors
---------
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* Add feature flag to start from any beacon block in db
The new feature flag, --sync-from, takes a string with one of the following values:
- `head` or
- a 0x-prefixed hex encoded beacon block root.
The beacon block root or the head block root has to be known in db and
has to be a descendant of the current justified checkpoint.
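A minimal sketch of parsing this flag value; the helper name and error messages are illustrative, not the actual flag handling code:
```
package flags

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// parseSyncFrom interprets the --sync-from value described above: either the
// literal "head" or a 0x-prefixed, 32-byte hex-encoded block root.
func parseSyncFrom(v string) (useHead bool, root [32]byte, err error) {
	if v == "head" {
		return true, root, nil
	}
	if !strings.HasPrefix(v, "0x") {
		return false, root, fmt.Errorf("expected 'head' or a 0x-prefixed root, got %q", v)
	}
	b, err := hex.DecodeString(strings.TrimPrefix(v, "0x"))
	if err != nil {
		return false, root, err
	}
	if len(b) != 32 {
		return false, root, fmt.Errorf("root must be 32 bytes, got %d", len(b))
	}
	copy(root[:], b)
	return false, root, nil
}
```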
* Fix Bugs In Sync From Head (#15006)
* Fix Bugs
* Remove log
* missing save
* add tests
* Kasey review #1
* Kasey's review #2
* Kasey's review #3
---------
Co-authored-by: Nishant Das <nishdas93@gmail.com>
* Return the genesis block root from last validated checkpoint if zero
When starting a node we load the last validated checkpoint. In tests or on a new node this checkpoint can have the zero block root (it returns the finalized checkpoint). This PR ensures that it returns the genesis block root instead.
It can't affect running code, since the root is only used at startup in `setup_forkchoice`. But it may affect tests, because now `OptimisticForRoot` will error out if there is no genesis block root set in the db.
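A tiny sketch of the fallback; the database accessor names here are assumptions:
```
package startup

import "context"

// checkpointDB stands in for the real database interface; method names are assumed.
type checkpointDB interface {
	LastValidatedCheckpointRoot(ctx context.Context) ([32]byte, error)
	GenesisBlockRoot(ctx context.Context) ([32]byte, error)
}

// lastValidatedRoot falls back to the genesis block root when the stored
// checkpoint root is all zeroes, as happens on a fresh node or in tests.
func lastValidatedRoot(ctx context.Context, db checkpointDB) ([32]byte, error) {
	root, err := db.LastValidatedCheckpointRoot(ctx)
	if err != nil {
		return [32]byte{}, err
	}
	if root == ([32]byte{}) {
		return db.GenesisBlockRoot(ctx)
	}
	return root, nil
}
```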
* Terence review
* fix test
* clean up block-slot-indices on block deletion
* also remove parent root index entry
* treat parent root index as packed key (like slot idx)
* fix bug where input slice is modified, with test
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* Update rules_go to v0.53.0
* Update staticcheck to v0.6.0
* Update to go 1.24.0
* Update github.com/trailofbits/go-mutexasserts to latest
* Use rules_go @ cf3c3af34bd869b864f5f2b98e2f41c2b220d6c9
* Provide the go binary to SszGen.
https://github.com/bazel-contrib/rules_go/pull/4173
* Unskip SA9003
* Update github ci checks to go1.24.0
* CI: Update gosec to v2.22.1 and golangci to v1.64.5
* Temporarily disable usetesting lint check for go1.24
* gosec: Disable G115 - integer overflow conversion
* gosec: Ignore G407 for "hardcoded" IV. It's not hardcoded.
* Fix uses of rand.Seed. This is a no-op in go1.24 and deprecated since go1.20.
* Changelog fragment
* Decompose Electra block attestations
* comments
* changelog <3
* remove redundant comparison
* typo in comment
* fix changelog section name
* review from James and Sammy
* continue pruning on error
* only prune when committees are cached
* fix bad assignments in test
* fix pruner timing issue with batch pruning
* add changelog entry
* add batchSize and number of slots deleted to debug logs
* fix lint
* prune in small batches in one prune cycle
* remove noisy logs
* fix number of batches to prune and return early if there's nothing to delete
* use context with timeout
* fix lint by ignoring nil err return
* Beacon flags: Comment on deprecated section
* Beacon flags: Organize flags relevant to logging, comment on logging section
* Beacon flags: Organize flags relevant to p2p, comment on p2p section
* Beacon flags: Introduce db flag section, organize flags relevant to db, comment on db section
* Beacon flags: Introduce builder flag section, organize flags relevant to builder, comment on builder section
* Beacon flags: Introduce sync flag section, organize flags relevant to sync, comment on sync section
* Beacon flags: Introduce execution layer flag section, organize flags relevant to execution layer, comment on execution layer section
* Beacon flags: Introduce monitoring flag section, organize flags relevant to monitoring, comment on monitoring section
* Beacon flags: Organizing remaining flags in cmd and beacon-chain sections
* Beacon flags: Introduce slasher flag section, organize flags relevant to slasher, comment on slasher section
* Move slasher flag from features to the slasher section
* Changelog fragment
* Beacon flags: Reorganize sections
* Move MaxGoroutines to debug section
* bundle handlers test
* ssz support for optimistic and finality updates APIs
* changelog PR link
* delete helpers
---------
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Fixed otelhttp client setups.
Note: This may not be the best solution, as the http client is defined in many places. There should be a canonical http client with the proper setup.
* Changelog fragment
* go mod tidy and gazelle
* created new conversions_execution files moving the from and to consensus functions for execution-related items, also created a placeholder for block_execution with the intent of adding execution types there
* moving execution types from block.go to block_execution.go
* migrating more types and logic
* adding all the to consensus functions for payloads
* changelog
* linting
* updating unit tests for conversions
* fixing linting
* forgot to fix test
* updating name based on feedback
* wip electra e2e
* add Deneb state to `validatorsParticipating`
* Run bazel
* add Electra state to `validatorsParticipating`
* fixing some e2e issues
* more evaluator fixes and changelog
* adding in special condition to pass electra epoch participation
* fixing typo
* missed updating forks for e2e tests
* reverting change current release fork
* missed updating e2e config for test
* updating to latest devnet 5 to fix unit tests
* go mod tidy
* fixing branch, temporary will need to update geth version later
* update to goethereum v1.15.0
* changing changelog to reflect update in geth dependency
* fixing test failures
* adding fix for range request limit during transition period between forks
* enabling validator rest for Electra
* rolling back error message
* adding fixed change logs
* fixing dependencies based on nishant's comments, deps.bzl should be updated not workspace
* partially reverting incorrect change
* removing fixes from change log, handled in separate prs
* removing comment
* updating update fraction field to the corrected spec value from prague
* Update testing/endtoend/evaluators/fork.go
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
---------
Co-authored-by: rkapka <radoslaw.kapka@gmail.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Nishant Das <nishdas93@gmail.com>
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* refactored code and added in checks for blinded endpoints
* changelog
* cleaning up some comments and error messages
* fixing linting
* adding clarifying comment
* Add tests for TestSendBlobsByRangeRequest. Currently not working with sequential blob validation.
* Copy Root First
* Allow Test For Maximum Amount of Blobs
* Fails with the Same error
* Fix Last Test Assertion
* Add in Fix
* Changelog
---------
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Add metrics for pruned proofs & pending deposits
* Add PruneAllProofs & PruneAllPendingDeposits
* Add simple unit tests
* Add DepositPruner interface
* Add pruning logic at post finalization task
* Move pruner logic into new file(deposit_pruner.go)
Rationale:
As deposit_fetcher.go contains all pruning logic, it would be better to separate its concerns into fetcher/inserter/pruner.
* Gofmt
* Add reference link for deprecating eth1 polling
* Add changelog
* Apply reviews from nisdas and james
* add pre and post deposit request tests
* nishant's comment
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* organize blob directories by period and epoch
* changelog
* remove Indices and replace with Summary
* old PR feedback
* log to advise about the speed of blob migration
* rename level->layer (hoping term is more clear)
* assert path in tests for increased legibility
* lint
* lint
* remove test covering a newly impossible error
* improve feedback from flag validation failure
* Try to clean dangling dirs epoch->flat migration
* lint
* Preston feedback
* try all layouts and short-circuit if base not found
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* Update Beacon API events to Electra
* changelog <3
* fix issues
* send notifications from pending att queue
* Revert "send notifications from pending att queue"
This reverts commit 545408f6cf.
* change field IDs in `AggregateAttestationAndProofElectra`
* fix typo in `validator.proto`
* correct slashing indices length and slashings length
* check length in indexed attestation's `ToConsensus` method
* use `fieldparams.BLSSignatureLength`
* Add length checks for execution request
* fix typo in `beacon_state.proto`
* fix typo in `ssz_proto_library.bzl`
* fix error messages about incorrect types in block factory
* add Electra case to `BeaconBlockContainerToSignedBeaconBlock`
* move PeerDAS config items to PeerDAS section
* remove redundant params
* rename `PendingDepositLimit` to `PendingDepositsLimit`
* improve requests unmarshaling errors
* rename `get_validator_max_effective_balance` to `get_max_effective_balance`
* fix typo in `consolidations.go`
* rename `index` to `validator_index` in `PendingPartialWithdrawal`
* rename `randomByte` to `randomBytes` in `validators.go`
* fix for version in a comment in `validator.go`
* changelog <3
* Revert "rename `index` to `validator_index` in `PendingPartialWithdrawal`"
This reverts commit 87e4da0ea2.
* wip skip eth1data voting after electra
* updating technique
* adding fix for electra eth1 voting
* fixing linting on test
* seeing if reversing genesis state fixes problem
* increasing safety of legacy check
* review feedback
* forgot to fix tests
* nishant's feedback
* nishant's feedback
* rename function a little
* Update beacon-chain/core/helpers/legacy.go
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
* fixing naming
---------
Co-authored-by: Jun Song <87601811+syjn99@users.noreply.github.com>
* EIP-7549: slasher
* update chunks and detection
* update tests
* encode+decode
* timer
* test fixes
* testing the timer
* Decouple pool from service
* update mock
* cleanup
* make review easier
* comments and changelog
* Don't mark blocks as invalid on context deadlines
When processing the state transition, if the error is due to a context
deadline, do not mark the block as invalid.
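A minimal sketch of the distinction; the helper name is hypothetical:
```
package blockchain

import (
	"context"
	"errors"
)

// shouldMarkInvalid reports whether a state-transition error should cause the
// block to be marked invalid. Deadline or cancellation errors are our own
// timeouts, not evidence that the block is bad, so they are excluded.
func shouldMarkInvalid(err error) bool {
	if err == nil {
		return false
	}
	if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
		return false
	}
	return true
}
```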
* review
* fix changelog
* fix: early return for gathering local deposits when EIP-6110 is applied
* Add an entry on CHANGELOG.md
* Fix weird indent at CHANGELOG.md
* Add changelog
* Fix CHANGELOG.md
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers to understand this PR.
- [ ] I have tested that my changes work as expected and I added a testing plan to the PR description (if applicable).
@@ -4,7 +4,7 @@ Note: The latest and most up-to-date documentation can be found on our [docs por
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PR’s after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ) drop us a line there if you want to get more involved or have any questions on our implementation!
You can explore our [Open Issues](https://github.com/OffchainLabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PR’s after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/prysm) drop us a line there if you want to get more involved or have any questions on our implementation!
> [!IMPORTANT]
> Please, **do not send pull requests for trivial changes**, such as typos, these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.
@@ -15,15 +15,15 @@ You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues)
**2. Fork the Prysm repo.**
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/OffchainLabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
$ git remote -v (you should see myrepo and prysm in the list of remotes)
```
**6. Find an issue to work on.**
Check out open issues at https://github.com/prysmaticlabs/prysm/issues and pick one. Leave a comment to let the development team know that you would like to work on it. Or examine the code for areas that can be improved and leave a comment to the development team to ask if they would like you to work on it.
Check out open issues at https://github.com/OffchainLabs/prysm/issues and pick one. Leave a comment to let the development team know that you would like to work on it. Or examine the code for areas that can be improved and leave a comment to the development team to ask if they would like you to work on it.
**7. Create a local branch with a name that clearly identifies what you will be working on.**
@@ -129,7 +129,7 @@ All PRs must must include a changelog fragment file in the `changelog` directory
**17. Create a pull request.**
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
Navigate your browser to https://github.com/OffchainLabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/OffchainLabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
</div>
### Getting Started
---
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by our [Discord](https://discord.gg/prysmaticlabs).
## 📖 Overview
### Staking on Mainnet
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com).
To participate in staking, you can join the [official eth2 launchpad](https://launchpad.ethereum.org). The launchpad is the only recommended way to become a validator on mainnet. You can explore validator rewards/penalties via Bitfly's block explorer: [beaconcha.in](https://beaconcha.in), and follow the latest blocks added to the chain on [beaconscan](https://beaconscan.com).
See the [Changelog](https://github.com/OffchainLabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
---
## 🚀 Getting Started
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the **[official documentation portal](https://docs.prylabs.network)**.
💬 **Need help?** Join our **[Discord Community](https://discord.gg/prysm)** for support.
---
## 🏆 Staking on Mainnet
To participate in staking, you can join the **[official Ethereum launchpad](https://launchpad.ethereum.org)**. The launchpad is the **only recommended** way to become a validator on mainnet.
🔍 Explore validator rewards/penalties:
- **[beaconcha.in](https://beaconcha.in)**
- **[beaconscan](https://beaconscan.com)**
---
## 🤝 Contributing
### 🔥 Branches
## Contributing
### Branches
Prysm maintains two permanent branches:
* [master](https://github.com/prysmaticlabs/prysm/tree/master): This points to the latest stable release. It is ideal for most users.
* [develop](https://github.com/prysmaticlabs/prysm/tree/develop): This is used for development, it contains the latest PRs. Developers should base their PRs on this branch.
-**[`master`](https://github.com/OffchainLabs/prysm/tree/master)** - This points to the latest stable release. It is ideal for most users.
-**[`develop`](https://github.com/OffchainLabs/prysm/tree/develop)** - This is used for development and contains the latest PRs. Developers should base their PRs on this branch.
### Guide
Want to get involved? Check out our [Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/) to learn more!
### 🛠 Contribution Guide
## License
Want to get involved? Check out our **[Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/)** to learn more!
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)
[Releases](https://github.com/prysmaticlabs/prysm/releases/) contains all available releases. We recommend using the [most recently released version](https://github.com/prysmaticlabs/prysm/releases/latest).
[Releases](https://github.com/OffchainLabs/prysm/releases/) contains all available releases. We recommend using the [most recently released version](https://github.com/OffchainLabs/prysm/releases/latest).
## Reporting a Vulnerability
Please see our signed [security.txt](https://github.com/prysmaticlabs/prysm/blob/develop/.well-known/security.txt) for preferred encryption and reporting destination.
Please see our signed [security.txt](https://github.com/OffchainLabs/prysm/blob/develop/.well-known/security.txt) for preferred encryption and reporting destination.
**Please do not file a public ticket** mentioning the vulnerability, as doing so could increase the likelihood of the vulnerability being used before a fix has been created, released and installed on the network.
@@ -15,3 +15,10 @@ var ErrBadRequest = errors.Wrap(ErrNotOK, "recv 400 BadRequest response from API
// ErrNoContent specifically means that a '204 - No Content' response was received from the API.
// Typically, a 204 is a success but in this case for the Header API means No header is available
var ErrNoContent = errors.New("recv 204 no content response from API, No header is available")
// ErrUnsupportedMediaType specifically means that a '415 - Unsupported Media Type' was received from the API.
var ErrUnsupportedMediaType = errors.Wrap(ErrNotOK, "The media type in \"Content-Type\" header is unsupported, and the request has been rejected. This occurs when a HTTP request supplies a payload in a content-type that the server is not able to handle.")
// ErrNotAcceptable specifically means that a '406 - Not Acceptable' was received from the API.
var ErrNotAcceptable = errors.Wrap(ErrNotOK, "The accept header value is not acceptable")
var ErrBadGateway = errors.Wrap(ErrNotOK, "recv 502 BadGateway response from API")