* create lc cache to track branches
* save lc stuff
* remove finalized data from LC cache on finalization
* read lc stuff
* edit tests
* changelog
* linter
* address comments
* address comments 2
* address comments 3
* address comments 4
* lint
* address comments 5 x_x
* set beacon lcStore to mimic registrable services
* clean up the error propagation
* pass the state to saveLCBootstrap since it's not saved in db yet
* remove "experimental" from backfill flag name
* backwards-compatible alias
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* `computeIndicesByRootByPeer`: Add 1 slack epoch regarding peer head slot.
* `FetchDataColumnSidecars`: Switch mode.
Before this commit, this function returned an error if at least ONE requested sidecar could not be retrieved.
Now, this function retrieves what it can (best-effort mode) and returns an additional value: the map of sidecars still missing after the function has run.
It is now the caller's responsibility to check this extra return value and decide what to do if some requested sidecars are still missing.
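A minimal sketch of the best-effort shape described above, using hypothetical `SidecarID`/`Sidecar` types and a `fetch` callback rather than Prysm's actual signatures:

```go
package sync

import "context"

// SidecarID and Sidecar are hypothetical stand-ins for the real data column
// sidecar identifiers and values.
type SidecarID struct {
	Root        [32]byte
	ColumnIndex uint64
}

type Sidecar struct{ ID SidecarID }

// fetchBestEffort retrieves what it can and reports what is still missing,
// leaving the decision about missing sidecars to the caller.
func fetchBestEffort(
	ctx context.Context,
	requested map[SidecarID]bool,
	fetch func(context.Context, SidecarID) (*Sidecar, error),
) ([]*Sidecar, map[SidecarID]bool) {
	retrieved := make([]*Sidecar, 0, len(requested))
	missing := make(map[SidecarID]bool)
	for id := range requested {
		sc, err := fetch(ctx, id)
		if err != nil || sc == nil {
			missing[id] = true // best effort: record the gap and keep going
			continue
		}
		retrieved = append(retrieved, sc)
	}
	return retrieved, missing
}
```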
* `fetchOriginDataColumnSidecars`: Optimize
Before this commit, when running `fetchOriginDataColumnSidecars`, all missing sidecars had to be retrieved in a single shot for the sidecars to be considered available. The issue: if, for example, `sync.FetchDataColumnSidecars` returned all but one sidecar, the returned sidecars were NOT saved, and on the next iteration all the previously fetched sidecars had to be requested again (from peers).
After this commit, we greedily save all fetched sidecars, solving this issue.
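And a sketch of the greedy-save loop on the caller side, reusing the hypothetical types from the previous sketch:

```go
package sync

import "context"

// fetchAndSaveGreedily saves every sidecar it manages to fetch, so the next
// iteration only has to request the sidecars that are still missing.
func fetchAndSaveGreedily(
	ctx context.Context,
	missing map[SidecarID]bool,
	fetch func(context.Context, map[SidecarID]bool) ([]*Sidecar, map[SidecarID]bool),
	save func(context.Context, *Sidecar) error,
) (map[SidecarID]bool, error) {
	retrieved, stillMissing := fetch(ctx, missing)
	for _, sc := range retrieved {
		if err := save(ctx, sc); err != nil {
			return stillMissing, err
		}
	}
	// Whatever is left can be requested again later, without re-fetching
	// what was already saved.
	return stillMissing, nil
}
```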
* Initial sync: Do not fetch data column sidecars before the retention period.
* Implement perfect peerdas syncing.
* Add changelog.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Fix James' comment.
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update beacon-chain/sync/data_column_sidecars.go
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Update after Potuz's comment.
* Fix Potuz's commit.
* Fix James' comment.
---------
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
* Add flag for colocation whitelisting. --p2p-ip-colocation-whitelist
This change updates the peer IP colocation checking to respect the
configured CIDR whitelist (--p2p-ip-colocation-whitelist flag).
Changes:
- Added IPColocationWhitelist field to peers.StatusConfig
- Added ipColocationWhitelist field to Status struct to store parsed IPNets
- Parse CIDR strings into net.IPNet in NewStatus constructor
- Updated isfromBadIP method to skip colocation limits for whitelisted IPs
- Pass IPColocationWhitelist from Service config when creating Status
The IP colocation whitelist allows operators to exempt specific IP ranges
from the colocation limit, useful for deployments with known trusted
address ranges or legitimate node clustering.
Only check if an IP is in the whitelist when the colocation limit
is actually exceeded, rather than checking for every IP. This is
more efficient and matches the intended behavior.
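A minimal sketch, with hypothetical names, of how the whitelist described above can be parsed once at construction time and consulted only after the colocation limit is exceeded:

```go
package peers

import (
	"fmt"
	"net"
)

// colocationChecker is a hypothetical stand-in for the Status fields described
// above: CIDR strings are parsed into net.IPNet once, in the constructor.
type colocationChecker struct {
	whitelist []*net.IPNet
	limit     int
}

func newColocationChecker(cidrs []string, limit int) (*colocationChecker, error) {
	c := &colocationChecker{limit: limit}
	for _, cidr := range cidrs {
		_, ipNet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, fmt.Errorf("invalid colocation whitelist entry %q: %w", cidr, err)
		}
		c.whitelist = append(c.whitelist, ipNet)
	}
	return c, nil
}

// exceedsColocationLimit only consults the whitelist once the limit is
// actually exceeded, matching the behavior described above.
func (c *colocationChecker) exceedsColocationLimit(ip net.IP, peersAtIP int) bool {
	if peersAtIP <= c.limit {
		return false
	}
	for _, ipNet := range c.whitelist {
		if ipNet.Contains(ip) {
			return false // whitelisted ranges are exempt from the limit
		}
	}
	return true
}
```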
* Changelog fragment
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* @kasey feedback: Move IP colocation parsing to the node construction
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Implement KZG proof batch verification for data column gossip validation
* Manu's feedback
* Add tests
* Update beacon-chain/sync/batch_verifier.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/batch_verifier.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Update beacon-chain/sync/kzg_batch_verifier_test.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Fix tests
* Kasey's feedback
* `validateWithKzgBatchVerifier`: Give up after a full slot.
Before this commit:
After 100 ms, an un-batched verification is launched concurrently with the batched one.
As a result, an already stressed node could be stressed even more by the multiple verifications.
Also, it is always hard to choose a correct timeout value:
100 ms may be OK for a given node with a given BPO version, and not OK for the same node with a BPO version that has 10x more blobs.
However, we know this gossip validation won't be useful after a full slot duration.
After this commit:
After a full slot duration, we just ignore the incoming gossip message.
It's important to ignore it and not to reject it, since rejecting it would downscore the peer sending this message.
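A minimal sketch of the give-up behavior described above, using a local result type rather than the real gossipsub validation types:

```go
package sync

import (
	"context"
	"errors"
	"time"
)

// validationResult mirrors the usual gossip outcomes; hypothetical local type.
type validationResult int

const (
	validationAccept validationResult = iota
	validationIgnore
	validationReject
)

// waitForBatchResult waits for the batch verifier's answer for at most one
// full slot. If the slot elapses, the message is ignored (not rejected), so
// the sending peer is not downscored for our own slowness.
func waitForBatchResult(ctx context.Context, slotDuration time.Duration, result <-chan error) validationResult {
	ctx, cancel := context.WithTimeout(ctx, slotDuration)
	defer cancel()

	select {
	case err := <-result:
		if err != nil {
			return validationReject // the KZG proofs really are invalid
		}
		return validationAccept
	case <-ctx.Done():
		if errors.Is(ctx.Err(), context.DeadlineExceeded) {
			return validationIgnore // too late to be useful; do not penalize the peer
		}
		return validationIgnore
	}
}
```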
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Calculate max epoch and churn for slashing once
* calculate once for proposer and attester slashings
* changelog <3
* introduce struct
* check if err is nil in ProcessVoluntaryExits
* rename exitData to exitInfo and return from functions
* cleanup + tests
* cleanup after rebase
* Potuz's review
* pre-calculate total active balance
* remove `slashValidatorFunc` closure
* Avoid a second validator loop
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* remove balance parameter from slashing functions
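A minimal sketch, with hypothetical fields and inputs, of the compute-once idea in the bullets above: the exit-churn values and the total active balance are derived in a single pass and then reused for every slashing or exit:

```go
package core

// exitInfo gathers values that previously were recomputed for every slashing
// or voluntary exit, so they are calculated once and passed around.
type exitInfo struct {
	maxExitEpoch       uint64 // highest exit epoch among already-exiting validators
	churn              uint64 // validators exiting at that epoch
	totalActiveBalance uint64 // pre-calculated total active balance
}

// computeExitInfo walks the inputs once instead of once per operation.
// exitEpochs holds the exit epochs of validators that are already exiting.
func computeExitInfo(exitEpochs []uint64, activeBalances []uint64) exitInfo {
	info := exitInfo{}
	for _, e := range exitEpochs {
		switch {
		case e > info.maxExitEpoch:
			info.maxExitEpoch, info.churn = e, 1
		case e == info.maxExitEpoch:
			info.churn++
		}
	}
	for _, b := range activeBalances {
		info.totalActiveBalance += b
	}
	return info
}
```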
---------
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: potuz <potuz@prysmaticlabs.com>
When peers return invalid data during initial sync, log the specific
validation failure reason. This helps identify:
- Whether peer exceeded requested block count
- Whether peer exceeded MAX_REQUEST_BLOCKS protocol limit
- Whether blocks are outside the requested slot range
- Whether blocks are out of order (not increasing or wrong step)
Each log includes the specific condition that failed, making it easier
to debug whether the issue is with peer implementations or request
validation logic.
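A hedged sketch of the idea: one sentinel error per failure mode, so the initial-sync log can name the exact condition. The names and checks below are illustrative rather than Prysm's actual code, and the sketch assumes `step >= 1`:

```go
package initialsync

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel errors: one per validation failure, so the log line
// names the exact condition instead of a generic "invalid data" message.
var (
	errExceedsRequestedCount = errors.New("peer returned more blocks than requested")
	errExceedsProtocolLimit  = errors.New("peer exceeded MAX_REQUEST_BLOCKS")
	errOutsideRequestedRange = errors.New("block outside the requested slot range")
	errBlocksNotOrdered      = errors.New("blocks are not increasing by the requested step")
)

// validateBlockSlots checks a peer response against the original request.
func validateBlockSlots(slots []uint64, startSlot, count, step, maxRequestBlocks uint64) error {
	if uint64(len(slots)) > count {
		return errExceedsRequestedCount
	}
	if uint64(len(slots)) > maxRequestBlocks {
		return errExceedsProtocolLimit
	}
	prev := uint64(0)
	for i, s := range slots {
		if s < startSlot || s >= startSlot+count*step {
			return fmt.Errorf("%w: slot %d", errOutsideRequestedRange, s)
		}
		if i > 0 && (s <= prev || (s-prev)%step != 0) {
			return fmt.Errorf("%w: slot %d after %d", errBlocksNotOrdered, s, prev)
		}
		prev = s
	}
	return nil
}
```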
* Subscription: Get fanout peers in all data column subnets when at least one validator is connected
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Apply suggestion from @nalepae
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Updated test structure to @nalepae suggestion
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* Fix next epoch proposer duties
* Do not update state's slot when computing the proposer
Also do not call Fulu's proposer lookahead if the requested epoch is not
current or next.
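A tiny sketch of the epoch guard described above (hypothetical helper, not Prysm's API):

```go
package validator

// lookaheadEpochAllowed mirrors the guard described above: the Fulu proposer
// lookahead is only consulted for the current or next epoch.
func lookaheadEpochAllowed(requestedEpoch, currentEpoch uint64) bool {
	return requestedEpoch == currentEpoch || requestedEpoch == currentEpoch+1
}
```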
* retract Terence's test
* Fix tests
* removing epoch check to pass spec test
* reverting rollback and fixing test setup
---------
Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
Co-authored-by: james-prysm <james@prysmaticlabs.com>
* propose block changes from peerdas branch
* breaking out broadcast code into its own helper, changing fulu broadcast for the REST API to properly send data sidecars
* renamed validate blobsidecars to validate blobs, and added check for max blobs
* gofmt
* adding in batch verification for blobs"
* changelog
* adding kzg tests, moving new kzg functions to validation.go
* linting and other small fixes
* fixing linting issues and adding some proposer tests
* missing dependencies
* fixing test
* fixing more tests
* gaz
* removed return on broadcast data columns
* more cleanup and unit test adjustments
* missed removal of unneeded field
* adding data column receiver initialization
* Update beacon-chain/rpc/eth/beacon/handlers.go
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
* partial review feedback from manu
* gaz
* reverting some code to peerdas as I don't believe the broadcast code needs to be reused
* missed removal of build dependency
* fixing tests and adding another test based on manu's suggestion
* fixing linting
* Update beacon-chain/rpc/eth/beacon/handlers.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Update beacon-chain/blockchain/kzg/validation.go
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* radek's review changes
* adding missed test
---------
Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
Co-authored-by: Radosław Kapka <rkapka@wp.pl>
* Swap the wrong arguments in a call
I noticed that the names of the passed arguments don't match the names of
the corresponding function parameters, so I suspect it's a bug.
* Add changelog
* Add validation for the fillInForkChoiceMissingBlocks checkpoints.
* Add test for checkpoint epoch validation in fillInForkChoiceMissingBlocks.
* Use a sentinel error rather than error string
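A sketch of the sentinel-error pattern mentioned above; the exact epoch condition is illustrative, not necessarily the real check:

```go
package blockchain

import (
	"errors"
	"fmt"
)

// errInvalidCheckpoints is a sentinel error, so callers can match it with
// errors.Is instead of comparing error strings.
var errInvalidCheckpoints = errors.New("invalid checkpoints for fillInForkChoiceMissingBlocks")

// validateCheckpointEpochs illustrates the pattern: wrap the sentinel with
// context, keep the sentinel matchable.
func validateCheckpointEpochs(justifiedEpoch, finalizedEpoch uint64) error {
	if justifiedEpoch < finalizedEpoch {
		return fmt.Errorf("%w: justified epoch %d < finalized epoch %d",
			errInvalidCheckpoints, justifiedEpoch, finalizedEpoch)
	}
	return nil
}
```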
---------
Co-authored-by: kasey <489222+kasey@users.noreply.github.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
By default when starting a node, we load the finalized checkpoint from
the db and set it as head. When the chain has not been finalizing for a
while and the user does not start from the latest head, it may still be
beneficial to start from the latest justified checkpoint, which has to be
a descendant of the finalized one.
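A hedged sketch of that startup choice, with a hypothetical `isDescendant` helper standing in for the real forkchoice ancestry query:

```go
package node

// pickStartCheckpoint prefers the latest justified checkpoint when it is a
// descendant of the finalized one, otherwise keeps the finalized checkpoint.
func pickStartCheckpoint(
	finalizedRoot, justifiedRoot [32]byte,
	isDescendant func(ancestor, descendant [32]byte) (bool, error),
) ([32]byte, error) {
	ok, err := isDescendant(finalizedRoot, justifiedRoot)
	if err != nil || !ok {
		return finalizedRoot, err
	}
	return justifiedRoot, nil
}
```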
* `FetchDataColumnSidecars`: If some sidecars are still missing after retrieving from peers, try to reconstruct them when possible.
* `randomPeer`: Deterministic randomness.
Before this commit, `randomPeer` was non-deterministic, even with a deterministic random source. The reason is that we iterated over a map (whose iteration order is random) and stopped the iteration at a chosen random index (which can be deterministic if the random source is deterministic).
After this commit, `randomPeer` and all its callers are fully deterministic when using a deterministic random source.
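A minimal sketch of how determinism can be restored: sort the map keys before indexing with the seeded source (peer IDs are simplified to strings here):

```go
package sync

import (
	"math/rand"
	"sort"
)

// randomPeerDeterministic picks a peer reproducibly for a given seed: Go map
// iteration order is randomized, so the keys are sorted before indexing with
// the (possibly deterministic) random source.
func randomPeerDeterministic(src *rand.Rand, peers map[string]bool) (string, bool) {
	if len(peers) == 0 {
		return "", false
	}
	ids := make([]string, 0, len(peers))
	for id := range peers {
		ids = append(ids, id)
	}
	sort.Strings(ids) // fixed order, independent of map iteration
	return ids[src.Intn(len(ids))], true
}
```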
* Fix Potuz's comment.
* Fix James' comment.
* `tryReconstructFromStorageAndPeers`: Improve godoc.
* `startBaseServices`: Warm data column storage cache.
* `TestFindPeers_NodeDeduplication`: Use `t.context`.
* `BUILD.bazel`: Move `# gazelle.ignore` to the top of the file.
Rationale: This directive is applied to the whole file, regardless of its position in the file.
* Improve `TestConstructGenericBeaconBlock`: Courtesy of Terence
* Add `TestDataColumnStoragePath_FlagSpecified`.
* `appFlags`: Move `flags.SubscribeAllDataSubnets` (cosmetic).
* `appFlags`: Add `storage.DataColumnStoragePathFlag`.
* Add changelog.
* Fix subnet peer discovery
Currently, computeAllNeededSubnets is called only once, when the subnets
are subscribed. It should instead be called regularly.
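A minimal sketch of the periodic recomputation, with `compute` standing in for the real `computeAllNeededSubnets` call:

```go
package p2p

import (
	"context"
	"time"
)

// recomputeSubnetsPeriodically recomputes the needed subnets on every tick
// until the context is cancelled, instead of computing them only once at
// subscription time.
func recomputeSubnetsPeriodically(ctx context.Context, every time.Duration, compute func()) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	compute() // initial computation, as before
	for {
		select {
		case <-ticker.C:
			compute() // refresh so newly needed subnets also get peers
		case <-ctx.Done():
			return
		}
	}
}
```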
* changelog
---------
Co-authored-by: Kasey Kirkham <kasey@users.noreply.github.com>
* adding what I think could be a fix for find peer
* removing unneeded comment
* unit tests
* linting
* gofmt
* changelog
* Update beacon-chain/p2p/discovery_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update changelog/james-prysm_fix-find-peers.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fixing test import
* applying suggestions
* fixing typo
* manu feedback
* accidentally checked in files
* addressing manu's edge case, an old bug
* moving tests from service-test.go to subnets_test.go and adding coverage for receiving bad existing node with higher seq
* cleanup
* updating for clarity
* missingPeerCount should increment if we are removing the peer from the map
* manu's recommendations on defective subnet rollback edge case
* rollback introduced too much complication as well as a new bug so we are removing it
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update consensus spec to v1.6.0-alpha.4 and implement data column support for forkchoice spectests
* Apply suggestion from @prestonvanloon
Co-Authored-By: Preston Van Loon <pvanloon@offchainlabs.com>
---------
Co-authored-by: Preston Van Loon <pvanloon@offchainlabs.com>
* initialize genesis data asap at node start
* add genesis validation tests with embedded state verification
* Add test for hardcoded mainnet genesis validator root and time from init() function
* Add test for UnmarshalState in encoding/ssz/detect/configfork.go
* Add tests for genesis.Initialize
* Move genesis/embedded to genesis/internal/embedded
* Gazelle / BUILD fix
* James feedback
* Fix lint
* Revert lock
---------
Co-authored-by: Kasey <kasey@users.noreply.github.com>
Co-authored-by: terence tsao <terence@prysmaticlabs.com>
Co-authored-by: Preston Van Loon <preston@pvl.dev>
* Fix validateConsensus
Reported by NuConstruct
The stater package looks for a state root using the head state from the
blockchain package. However, this state is very unlikely to have the
post-state root, since that is only added after slot processing. I assume
that essentially any REST endpoint that uses this mechanism to get head
is broken if it needs to gather a state by state root.
This PR is a placeholder to verify this is the issue; here I just check
whether the NSC already has the post-state, since it will already have
the processed state cached.
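A hedged sketch of the lookup order this PR moves toward, with hypothetical getters standing in for the NSC and head-state lookups:

```go
package rpc

// State is a placeholder for the beacon state type.
type State struct{}

// stateByStateRoot consults the next-slot cache (NSC) first, because the
// post-state root is typically only present after slot processing, and falls
// back to the head-state lookup otherwise.
func stateByStateRoot(
	root [32]byte,
	fromNSC func([32]byte) (*State, bool),
	fromHead func([32]byte) (*State, error),
) (*State, error) {
	if st, ok := fromNSC(root); ok {
		return st, nil
	}
	return fromHead(root)
}
```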
* Add changelog
* add fallback
* Fix tests