Compare commits

..

75 Commits

Author SHA1 Message Date
terence tsao
1ccb625762 Set slot 2024-11-08 21:13:47 +08:00
terence tsao
eaa6c6ef90 Log ptc attestations (don't forget to set interop genesis state to 4096) 2024-11-08 10:29:28 +08:00
Potuz
5f5544a32b pass epbs as version 2024-11-07 15:58:55 -03:00
Potuz
01e66eb809 payload attribute and withdrawal fixes 2024-11-07 15:40:21 -03:00
terence tsao
7fe4b1835c Interop 2024-11-07 11:08:49 -03:00
Potuz
ce0130a7b8 get right parentblockhash on local blocks 2024-11-07 11:05:13 -03:00
Potuz
b22febf463 Update the NSC on epbs (#14626) 2024-11-07 09:13:06 -03:00
Potuz
05ba83b67b Epbs payload (#14619)
* Send FCU on epbs

* handle late block tasks

* deal with reorgs by attestations

* save head on regular sync

* don't sleep 1 sec
2024-11-07 08:16:44 -03:00
Potuz
a316053688 update header when computing the payload state root (#14625) 2024-11-07 07:35:54 -03:00
Potuz
7c6a92e0ef Get the right state when processing blocks 2024-11-06 17:07:14 -03:00
Potuz
2f3cf1e3f7 fix pruning 2024-11-06 07:21:15 -03:00
Potuz
42fa9b8b61 add ePBS to getPayloadAttribute 2024-11-05 06:32:21 -03:00
Potuz
56dfaf8bbd Process blocks after ePBS (#14611)
These are some of the things that are left to be done

    - Process the payload
    - Change stategen to get the poststate of the block and the payload
      separately
    - Change the next slot cache to be safe for full/empty (see the sketch
      after this commit)
2024-11-04 07:07:47 -03:00
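A minimal sketch of what the last item could mean in practice, assuming a hypothetical cache shape (none of these names are the actual Prysm types): the same block root maps to two post-states depending on payload presence.

```go
// Hypothetical sketch: a next-slot cache keyed by (root, full) so the
// post-state of an empty block and the post-state after its payload
// can coexist. Illustrative only; not the actual Prysm cache.
package cache

import "sync"

type nscKey struct {
	root [32]byte
	full bool // true once the payload has been applied
}

// NextSlotCache stores serialized post-states per (root, full) pair.
type NextSlotCache struct {
	mu     sync.RWMutex
	states map[nscKey][]byte
}

func (c *NextSlotCache) Get(root [32]byte, full bool) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	s, ok := c.states[nscKey{root, full}]
	return s, ok
}

func (c *NextSlotCache) Put(root [32]byte, full bool, st []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.states == nil {
		c.states = make(map[nscKey][]byte)
	}
	c.states[nscKey{root, full}] = st
}
```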
Potuz
2e107f49df Share resources between empty and full nodes (#14517)
* Share resources between empty and full nodes

- Share a block structure within the forkchoice node. The surrounding
  envelope contains information about the payload presence and the
  children links; the inner structure contains the usual FFG and parent
  links (see the sketch after this commit).
- Reworked setOptimistictoInvalid
- Changed the PTC vote logic to have validity handled outside of
  forkchoice and have forkchoice only keep the total count of votes.

* Fix tests

* gazelle

* Update head twice pre-epbs

* only update best descendants without computing head

* skip forkchoice tests

* fix some blockchain tests

* Nil optimistic sync fix

* only count weight of empty nodes
2024-11-04 07:06:24 -03:00
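A rough sketch of the layout this commit describes, with purely illustrative names (not the actual Prysm types): the envelope owns payload presence, child links, and the PTC tally, while the shared inner structure keeps the FFG and parent links.

```go
// Illustrative sketch of sharing one block structure between the empty
// and full versions of a node. Not the actual Prysm forkchoice types.
package forkchoice

// blockNode is the shared inner structure: usual FFG info and parent link.
type blockNode struct {
	root           [32]byte
	parent         *envelope
	justifiedEpoch uint64
	finalizedEpoch uint64
}

// envelope wraps the shared block and carries payload presence, the
// children links, and only the total count of PTC votes; vote validity
// is checked outside forkchoice.
type envelope struct {
	block        *blockNode
	payloadHash  [32]byte // zero hash => empty node
	children     []*envelope
	ptcVoteCount uint64
}

// isFull reports whether this envelope carries an execution payload.
func (e *envelope) isFull() bool {
	return e.payloadHash != [32]byte{}
}
```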
terence
be4c426130 Add support to generate genesis state for epbs (#14594) 2024-11-04 07:06:24 -03:00
Potuz
1040f5a132 Remove invalid tests 2024-11-04 07:06:24 -03:00
Potuz
1fb1789b83 Fix pending balance deposits 2024-11-04 07:06:24 -03:00
Potuz
fa8b1f1709 Use wait for ptc duty in submit payload attestation message (#14491) 2024-11-04 07:06:24 -03:00
terence
2acf161184 Initialize execution requests for tests (#14489) 2024-11-04 07:06:24 -03:00
Potuz
35e6126896 fix PayloadEnvelopeStateTransition test 2024-11-04 07:06:24 -03:00
Potuz
baae0c015a Fix compute field roots with hasher 2024-11-04 07:06:24 -03:00
Potuz
af9c08fb45 export random execution request 2024-11-04 07:06:24 -03:00
Potuz
887b1bd3b5 fix build 2024-11-04 07:06:24 -03:00
Potuz
3613931b46 Prysm rpc: submit signed execution payload header (#14441) 2024-11-04 07:06:24 -03:00
terence
305f545088 Update attestor and aggregator to respect epbs intervals (#14454) 2024-11-04 07:06:24 -03:00
Potuz
1e14c0df6f Provide a helper to compute the state root after a payload envelope (#14450) 2024-11-04 07:06:24 -03:00
Potuz
a927ca8b1c Fix pubkeyToIndex usage 2024-11-04 07:06:24 -03:00
terence
e889572c26 Prysm rpc: Get payload attestation data (#14380) 2024-11-04 07:06:24 -03:00
Potuz
ef21d64ed9 Handle execution payload insertion in forkchoice (#14422) 2024-11-04 07:06:24 -03:00
Potuz
b17e331f1e Add GetPTCVote helpers (#14420) 2024-11-04 07:06:24 -03:00
terence
3beda885d2 Add wait until PTC duty helper function (#14419)
Add wait until PTC duty
2024-11-04 07:06:24 -03:00
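As a rough illustration of such a helper (the 3/4-slot offset and every name here are assumptions, not spec or Prysm values), it boils down to sleeping until a fixed cut-off within the duty slot:

```go
// Hypothetical sketch of a wait-until-PTC-duty helper: block until a
// fixed offset into the duty slot, or until the context is cancelled.
package validator

import (
	"context"
	"time"
)

func waitUntilPTCDuty(ctx context.Context, genesis time.Time, slot, secondsPerSlot uint64) error {
	// Assumed cut-off: three quarters into the slot.
	offset := time.Duration(secondsPerSlot) * time.Second * 3 / 4
	deadline := genesis.Add(time.Duration(slot*secondsPerSlot) * time.Second).Add(offset)
	t := time.NewTimer(time.Until(deadline))
	defer t.Stop()
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-t.C:
		return nil
	}
}
```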
Potuz
4470a4ed52 Prysm rpc: submit execution payload envelope (#14395) 2024-11-04 07:06:24 -03:00
Potuz
5c1285f9c0 Remove Changelog workflow 2024-11-04 07:06:24 -03:00
Potuz
9134802943 Prysm rpc: Submit payload attestation data (#14381) 2024-11-04 07:06:24 -03:00
Potuz
25e86efa3d Receive ptc message (#14394)
* Handle incoming ptc attestation messages in the chain package

* fix double import

* remove unused error
2024-11-04 07:06:24 -03:00
Potuz
d75bcf673a Add ePBS fork schedule to config (#14383) 2024-11-04 07:06:24 -03:00
Potuz
31c3b4668f [ePBS] implement UpdateVotesOnPayloadAttestation (#14308) 2024-11-04 07:06:24 -03:00
Potuz
87a88ebe66 Signed execution payload header for sync (#14363)
* Signed execution payload header for sync

* Use RO state

* SignedExecutionPayloadHeader by hash and root

* Fix execution headers cache
2024-11-04 07:06:24 -03:00
Potuz
36f51234ca Refactor currentlySyncingPayload cache (#14350) 2024-11-04 07:06:24 -03:00
JihoonSong
f8c8530b73 Add unit tests of ExecutionPayloadEnvelope verification (#14373)
* Correct requirement list of EnvelopeVerifier

* Add unit tests of ExecutionPayloadEnvelope verification
2024-11-04 07:06:24 -03:00
terence
23b32ffe39 Enable validator client to sign execution payload envelope (#14346)
* Enable validator client to sign execution payload envelope

* Update comment

Co-authored-by: JihoonSong <jihoonsong@users.noreply.github.com>

---------

Co-authored-by: JihoonSong <jihoonsong@users.noreply.github.com>
2024-11-04 07:06:24 -03:00
Potuz
4879a2d0c7 Sync changes to process execution payload envelopes 2024-11-04 07:06:24 -03:00
Potuz
81272e8102 Process withdrawal (#14297)
* process_withdrawal_fn and isParentfull test

* suggestions applied

* minor change

* removed

* lint

* lint fix

* removed Latestheader

* test added with nil error

* tests passing

* IsParentNode Test added

* lint

* fix test

* updated godoc

* fix in godoc

* comment removed

* fixed braces

* removed var

* removed var

* Update beacon-chain/core/blocks/withdrawals.go

* Update beacon-chain/core/blocks/withdrawals_test.go

* gazelle

* test added and removed previous changes in Testprocesswithdrawal

* added check for nil state

* decrease cyclomatic complexity

---------

Co-authored-by: Potuz <potuz@potuz.net>
Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2024-11-04 07:06:24 -03:00
Potuz
cc5794b1fa Enable validator client to sign execution header (#14333)
* Enable validator client to sign execution header

* Update proto/prysm/v1alpha1/validator-client/keymanager.proto

---------

Co-authored-by: Potuz <potuz@prysmaticlabs.com>
2024-11-04 07:06:24 -03:00
terence
5fc4688584 Initialize payload att message verifier in sync (#14323) 2024-11-04 07:06:24 -03:00
terence
836474450d Add getter for payload attestation cache (#14328)
* Add getter for payload attestation cache

* Check against status

* Feedback #1
2024-11-04 07:06:24 -03:00
Potuz
fafbb54dc1 Payload Attestation Sync package changes (#13989)
* Payload Attestation Sync package changes

* With verifier

* change idx back to uint64

* subscribe to topic

* add back error

---------

Co-authored-by: terence tsao <terence@prysmaticlabs.com>
2024-11-04 07:06:24 -03:00
Potuz
ea0b439748 Process Execution Payload Envelope in Chain Service (#14295)
Adds the processing of execution payload envelope
Corrects the protos for attestations and slashings in Electra versions
Adds generators of full blocks for Electra
2024-11-04 07:06:24 -03:00
Md Amaan
79eeef162b Indexed payload attestation test (#14299)
* test-added

* nil check fix

* randomized inputs

* hardcoded inputs

* suggestions applied

* minor-typo fixed

* deleted
2024-11-04 07:06:24 -03:00
JihoonSong
976b9e9a9e Add execution_payload and payload_attestation_message topics (#14304)
* Add `execution_payload` and `payload_attestation_message` topics

* Set `SourcePubkey` to 48 bytes long

* Add randomly populated `PayloadAttestationMessage` object

* Add tests for `execution_payload` and `payload_attestation_message` topics
2024-11-04 07:06:24 -03:00
terence
81e5e5519c Broadcast signed execution payload header to peer (#14300) 2024-11-04 07:06:24 -03:00
terence
ff1483888b Read only payload attestation message with Verifier (#14222)
* Read only payload attestation message with verifier

* Payload attestation tests (#14242)

* Payload attestation in verification package

* Feedback #1

---------

Co-authored-by: Md Amaan <114795592+Redidacove@users.noreply.github.com>
2024-11-04 07:06:24 -03:00
Potuz
4c3e68ea40 Allow nodes with and without payload in forkchoice (#14288)
* Allow nodes with and without payload in forkchoice

    This PR takes care of adding nodes to forkchoice that may or may not
    have a corresponding payload. The rationale is as follows

    - The node structure is kept almost the same as today.
    - A zero payload hash is treated as if the node were empty (except for
      the tree root).
    - When inserting a node we check what the right parent node would be,
      depending on whether the parent had a payload or not (see the sketch
      after this commit).
    - For pre-epbs forks all nodes are full; no logic changes except a new
      step to gather the parent hash that is needed for block insertion.

    This PR had to change some core consensus types and interfaces.
    - It removed the ROBlockEPBS interface and added the corresponding ePBS
      fields to the ReadOnlyBeaconBlockBody
    - It moved the setters and getters to epbs dedicated files.

    It also added a checker for `IsParentFull` on forkchoice that simply
    checks for the parent hash of the parent node.

* review
2024-11-04 07:06:24 -03:00
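A condensed sketch of that parent-resolution rule, under illustrative types (not the actual forkchoice API): look the parent up by its payload hash first, then fall back to its empty version.

```go
// Hypothetical sketch of inserting a node whose parent may be full or
// empty. Nodes are keyed by (root, payloadHash); a zero hash marks an
// empty node. Illustrative only.
package forkchoice

import "errors"

type node struct {
	root        [32]byte
	payloadHash [32]byte
	parent      *node
}

type store struct {
	nodes map[[64]byte]*node
}

func key(root, hash [32]byte) (k [64]byte) {
	copy(k[:32], root[:])
	copy(k[32:], hash[:])
	return k
}

// insert attaches the new block to the full version of its parent when
// the parent's payload was revealed, otherwise to the empty version.
// Pre-ePBS every node is full, so the first lookup always succeeds.
func (s *store) insert(root, parentRoot, parentHash, payloadHash [32]byte) error {
	parent, ok := s.nodes[key(parentRoot, parentHash)]
	if !ok {
		parent, ok = s.nodes[key(parentRoot, [32]byte{})]
		if !ok {
			return errors.New("unknown parent")
		}
	}
	s.nodes[key(root, payloadHash)] = &node{root: root, payloadHash: payloadHash, parent: parent}
	return nil
}
```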
Potuz
a2e66899b2 Use BeaconCommittees helper to get the ptc (#14286) 2024-11-04 07:06:24 -03:00
JihoonSong
cc8135aa5b Add payload attestation helper functions (#14258)
* Add `IndexedPayloadAttestation` container

* Add `GetPayloadAttestingIndices` and its unit test (see the sketch after this commit)

* Add `GetIndexedPayloadAttestation` and its unit test

* Add `is_valid_indexed_payload_attestation` and its unit test

* Create a smaller set of validators for faster unit test

* Pass context to `GetPayloadTimelinessCommittee`

* Iterate `ValidatorsReadOnly` instead of copying all validators
2024-11-04 07:06:24 -03:00
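A small sketch of what the attesting-indices helper could look like, with illustrative types (the real implementation works against the spec's bitvector and state types): filter the PTC by the aggregation bits.

```go
// Hypothetical sketch: return the validator indices of PTC members
// whose aggregation bit is set. Types are illustrative stand-ins.
package helpers

// PayloadAttestation carries one aggregation bit per PTC member.
type PayloadAttestation struct {
	AggregationBits []bool
}

// GetPayloadAttestingIndices filters the PTC by the attestation's bits.
func GetPayloadAttestingIndices(ptc []uint64, att PayloadAttestation) []uint64 {
	indices := make([]uint64, 0, len(ptc))
	for i, v := range ptc {
		if i < len(att.AggregationBits) && att.AggregationBits[i] {
			indices = append(indices, v)
		}
	}
	return indices
}
```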
Potuz
56a64f5dfb Use slot for latest message in forkchoice (#14279) 2024-11-04 07:06:24 -03:00
Potuz
72bf445e6b Ensure epbs state getters & setters check versions (#14276)
* Ensure EPBS state getters and setters check versions

* Rename to LatestExecutionPayloadHeaderEPBS

* Add minimal beacon state
2024-11-04 07:06:24 -03:00
JihoonSong
293d083a4e Add remove_flag and its unit test (#14260)
* Add `remove_flag` and its unit test

* Add a test case trying to remove a flag that is not set
2024-11-04 07:06:24 -03:00
JihoonSong
a737a44cc0 Modify get_ptc function to follow the Python spec (#14256)
* Modify `get_ptc` function to follow the Python spec

* Assign PTC members from the beginning of beacon committee array
2024-11-04 07:06:24 -03:00
Potuz
6d9b7054da Remove inclusion list from epbs (#14188) 2024-11-04 07:06:24 -03:00
Potuz
6c0c4ae49a Enable validator client to submit payload attestation message (#14064) 2024-11-04 07:06:24 -03:00
Potuz
8bc3793822 Add PTC assignment support for Duty endpoint (#14032) 2024-11-04 07:06:24 -03:00
Potuz
ad16d4a030 use Keymanager() in validator client 2024-11-04 07:06:24 -03:00
Potuz
92f980c679 Change gwei math to primitives package for ePBS state 2024-11-04 07:06:24 -03:00
terence
3ab3366787 Fix GetPayloadTimelinessCommittee to return correct PTC size (#14012) 2024-11-04 07:06:23 -03:00
Potuz
c61f1299e3 Add ePBS to db (#13971)
* Add ePBS to db
2024-11-04 07:06:23 -03:00
Potuz
619a8cfd1d Add EPBS slashing params 2024-11-04 07:06:23 -03:00
Potuz
35d9817781 Implement get_ptc
This implements a helper to get the ptc committee from a state. It uses
the cached beacon committees if possible.

It also implements a helper to compute the largest power of two of a
uint64, and a helper to test for nil payload attestation messages.
2024-11-04 07:06:23 -03:00
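The largest-power-of-two helper, for instance, can be done with a single leading-zeros count; a minimal sketch (function name illustrative):

```go
// Minimal sketch of a largest-power-of-two helper for uint64.
package mathutil

import "math/bits"

// LargestPowerOfTwo returns the largest power of two less than or
// equal to n, and 0 for n == 0.
func LargestPowerOfTwo(n uint64) uint64 {
	if n == 0 {
		return 0
	}
	return 1 << (63 - bits.LeadingZeros64(n))
}
```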
Potuz
6a5d13df60 Add ePBS to state (#13926) 2024-11-04 07:06:23 -03:00
Potuz
12803fb6ce Add testing utility methods to return randomly populated ePBS objects 2024-11-04 07:06:23 -03:00
Potuz
5451ca836a Add ePBS stuff to consensus-types: block 2024-11-04 07:06:23 -03:00
Potuz
3ede8bec91 Helper for Payload Attestation Signing (#13901) 2024-11-04 07:06:23 -03:00
Potuz
23c4609293 ePBS configuration constants 2024-11-04 07:06:23 -03:00
Potuz
f46a118e39 Add ePBS beacon state proto 2024-11-04 07:06:23 -03:00
Potuz
732ce21ef4 Add protos for ePBS except state 2024-11-04 07:06:23 -03:00
637 changed files with 39861 additions and 30729 deletions

View File

@@ -1,33 +0,0 @@
name: CI
on:
  pull_request:
    branches:
      - develop
jobs:
  changed_files:
    runs-on: ubuntu-latest
    name: Check CHANGELOG.md
    steps:
      - uses: actions/checkout@v4
      - name: changelog modified
        id: changelog-modified
        uses: tj-actions/changed-files@v45
        with:
          files: CHANGELOG.md
      - name: List all changed files
        env:
          ALL_CHANGED_FILES: ${{ steps.changelog-modified.outputs.all_changed_files }}
        run: |
          if [[ ${ALL_CHANGED_FILES[*]} =~ (^|[[:space:]])"CHANGELOG.md"($|[[:space:]]) ]];
          then
            echo "CHANGELOG.md was modified.";
            exit 0;
          else
            echo "CHANGELOG.md was not modified.";
            echo "Please see CHANGELOG.md and follow the instructions to add your changes to that file."
            echo "In some rare scenarios, a changelog entry is not required and this CI check can be ignored."
            exit 1;
          fi

View File

@@ -54,7 +54,7 @@ jobs:
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.56.1
version: v1.55.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:

View File

@@ -73,7 +73,6 @@ linters:
- promlinter
- protogetter
- revive
- spancheck
- staticcheck
- stylecheck
- tagalign

View File

@@ -4,95 +4,20 @@ All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.2.0...HEAD)
## [Unreleased](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...HEAD)
### Added
- Added proper gas limit check for header from the builder.
- Added an error field to log `Finished building block`.
- Implemented a new `EmptyExecutionPayloadHeader` function.
- `Finished building block`: Display error only if not nil.
- Added support to update target and max blob count to different values per hard fork config.
- Log before blob filesystem cache warm-up.
- New design for the attestation pool. [PR](https://github.com/prysmaticlabs/prysm/pull/14324)
- Add field param placeholder for Electra blob target and max to pass spec tests.
- Add EIP-7691: Blob throughput increase.
- SSZ files generation: Remove the `// Hash: ...` header.
### Changed
- Process light client finality updates only for new finalized epochs instead of doing it for every block.
- Refactor subnets subscriptions.
- Refactor RPC handlers subscriptions.
- Go deps upgrade, from `ioutil` to `io`
- Move successfully registered validator(s) on builder log to debug.
- Update some test files to use `crypto/rand` instead of `math/rand`
- Enforce Compound prefix (0x02) for target when processing pending consolidation request.
- Limit consolidating by validator's effective balance.
- Use 16-bit random value for proposer and sync committee selection filter.
- Re-organize the content of the `*.proto` files (No functional change).
### Deprecated
### Removed
### Fixed
- Added check to prevent nil pointer dereference or out of bounds array access when validating the BLSToExecutionChange on an impossibly nil validator.
- EIP-7691: Ensure new blobs subnets are subscribed on epoch in advance.
### Security
## [v5.2.0](https://github.com/prysmaticlabs/prysm/compare/v5.1.2...v5.2.0)
Updating to this release is highly recommended, especially for users running v5.1.1 or v5.1.2.
This release is **mandatory** for all validator clients using mev-boost with a gas limit increase.
Without upgrading to this release, validator clients will default to using local execution blocks
when the gas limit starts to increase.
This release has several fixes and new features. In this release, we have enabled QUIC protocol by
default, which uses port 13000 for `--p2p-quic-port`. This may be a [breaking change](https://github.com/prysmaticlabs/prysm/pull/14688#issuecomment-2516713826)
if you're using port 13000 already. This release has some improvements for raising the gas limit,
but there are [known issues](https://hackmd.io/@ttsao/prysm-gas-limit) with the proposer settings
file provided gas limit not being respected for mev-boost outsourced blocks. Signalling an increase
for the gas limit works perfectly for local block production as of this release. See [pumpthegas.org](https://pumpthegas.org) for more info on raising the gas limit on L1.
Notable features:
- Prysm can reuse blobs from the EL via engine_getBlobsV1, [potentially saving bandwidth](https://hackmd.io/@ttsao/get-blobs-early-results).
- QUIC is enabled by default. This is a UDP based networking protocol with default port 13000.
### Added
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430).
- Electra EIP6110: Queue deposit [pr](https://github.com/prysmaticlabs/prysm/pull/14430)
- Add Bellatrix tests for light client functions.
- Add Discovery Rebooter Feature.
- Added GetBlockAttestationsV2 endpoint.
- Light client support: Consensus types for Electra.
- Light client support: Consensus types for Electra
- Added SubmitPoolAttesterSlashingV2 endpoint.
- Added SubmitAggregateAndProofsRequestV2 endpoint.
- Updated the `beacon-chain/monitor` package to Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14562)
- Added ListAttestationsV2 endpoint.
- Add ability to rollback node's internal state during processing.
- Change how unsafe protobuf state is created to prevent unnecessary copies.
- Added benchmarks for process slots for Capella, Deneb, Electra.
- Add helper to cast bytes to string without allocating memory.
- Added GetAggregatedAttestationV2 endpoint.
- Added SubmitAttestationsV2 endpoint.
- Validator REST mode Electra block support.
- Added validator index label to `validator_statuses` metric.
- Added Validator REST mode use of Attestation V2 endpoints and Electra attestations.
- PeerDAS: Added proto for `DataColumnIdentifier`, `DataColumnSidecar`, `DataColumnSidecarsByRangeRequest` and `MetadataV2`.
- Better attestation packing for Electra. [PR](https://github.com/prysmaticlabs/prysm/pull/14534)
- P2P: Add logs when a peer is (dis)connected. Add the reason of the disconnection when we initiate it.
- Added a Prometheus error counter metric for HTTP requests to track beacon node requests.
- Added a Prometheus error counter metric for SSE requests.
- Save light client updates and bootstraps in DB.
- Added more comprehensive tests for `BlockToLightClientHeader`. [PR](https://github.com/prysmaticlabs/prysm/pull/14699)
- Added light client feature flag check to RPC handlers. [PR](https://github.com/prysmaticlabs/prysm/pull/14736)
- Light client: Add better error handling. [PR](https://github.com/prysmaticlabs/prysm/pull/14749)
### Changed
@@ -114,42 +39,18 @@ Notable features:
- Simplified `ExitedValidatorIndices`.
- Simplified `EjectedValidatorIndices`.
- `engine_newPayloadV4`, `engine_getPayloadV4` are changed due to new execution request serialization decisions, [PR](https://github.com/prysmaticlabs/prysm/pull/14580)
- Fixed various small things in state-native code.
- Use ROBlock earlier in block syncing pipeline.
- Changed the signature of `ProcessPayload`.
- Only Build the Protobuf state once during serialization.
- Capella blocks are execution.
- Fixed panic when http request to subscribe to event stream fails.
- Return early for blob reconstructor during capella fork.
- Updated block endpoint from V1 to V2.
- Rename instances of "deposit receipts" to "deposit requests".
- Non-blocking payload attribute event handling in beacon api [pr](https://github.com/prysmaticlabs/prysm/pull/14644).
- Updated light client protobufs. [PR](https://github.com/prysmaticlabs/prysm/pull/14650)
- Added `Eth-Consensus-Version` header to `ListAttestationsV2` and `GetAggregateAttestationV2` endpoints.
- Updated light client consensus types. [PR](https://github.com/prysmaticlabs/prysm/pull/14652)
- Update earliest exit epoch for upgrade to electra
- Add missed exit checks to consolidation processing
- Fixed pending deposits processing on Electra.
- Modified `ListAttestationsV2`, `GetAttesterSlashingsV2` and `GetAggregateAttestationV2` endpoints to use slot to determine fork version.
- Improvements to HTTP response handling. [pr](https://github.com/prysmaticlabs/prysm/pull/14673)
- Updated `Blobs` endpoint to return additional metadata fields.
- Made QUIC the default method to connect with peers.
- Check kzg commitments align with blobs and proofs for beacon api end point.
- Revert "Proposer checks gas limit before accepting builder's bid".
- Updated quic-go to v0.48.2.
- Changed the signature of `ProcessPayload`
- Only Build the Protobuf state once during serialization
### Deprecated
- `/eth/v1alpha1/validator/activation/stream` grpc wait for activation stream is deprecated. [pr](https://github.com/prysmaticlabs/prysm/pull/14514)
- `--interop-genesis-time` and `--interop-num-validators` have been deprecated in the beacon node as the functionality has been removed. These flags have no effect.
### Removed
- Removed finalized validator index cache, no longer needed.
- Removed validator queue position log on key reload and wait for activation.
- Removed outdated spectest exclusions for EIP-6110.
- Removed support for starting a beacon node with a deterministic interop genesis state via interop flags. Alternatively, create a genesis state with prysmctl and use `--genesis-state`. This removes about 9Mb (~11%) of unnecessary code and dependencies from the final production binary.
- Removed kzg proof check from blob reconstructor.
### Fixed
@@ -164,20 +65,7 @@ Notable features:
- Fix keymanager API so that get keys returns an empty response instead of a 500 error when using an unsupported keystore.
- Small log improvement, removing some redundant or duplicate logs
- EIP7521 - Fixes withdrawal bug by accounting for pending partial withdrawals and deducting already withdrawn amounts from the sweep balance. [PR](https://github.com/prysmaticlabs/prysm/pull/14578)
- unskip electra merkle spec test
- Fix panic in validator REST mode when checking status after removing all keys
- Fix panic on attestation interface since we call data before validation
- corrects nil check on some interface attestation types
- temporary solution to handling electra attestation and attester_slashing events. [pr](14655)
- Diverse log improvements and comment additions.
- Validate that each committee bitfield in an aggregate contains at least one non-zero bit
- P2P: Avoid infinite loop when looking for peers in small networks.
- Fixed another rollback bug due to a context deadline.
- Fix checkpoint sync bug on holesky. [pr](https://github.com/prysmaticlabs/prysm/pull/14689)
- Fix proposer boost spec tests being flakey by adjusting start time from 3 to 2s into slot.
- Fix segmentation fault in E2E when light-client feature flag is enabled. [PR](https://github.com/prysmaticlabs/prysm/pull/14699)
- Fix `searchForPeers` infinite loop in small networks.
- Fix slashing pool behavior to enforce MaxAttesterSlashings limit in Electra version.
### Security
@@ -291,7 +179,6 @@ Updating to this release is recommended at your convenience.
- Light client support: fix light client attested header execution fields' wrong version bug.
- Testing: added custom matcher for better push settings testing.
- Registered `GetDepositSnapshot` Beacon API endpoint.
- Fix rolling back of a block due to a context deadline.
### Security
@@ -502,7 +389,6 @@ block profit. If you want to preserve the existing behavior, set --local-block-v
- Set default LocalBlockValueBoost to 10
- Add bid value metrics
- REST VC metrics
- `startDB`: Add log when checkpoint sync.
### Changed

View File

@@ -129,7 +129,7 @@ If your change is user facing, you must include a CHANGELOG.md entry. See the [M
**17. Create a pull request.**
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base develop”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
**18. Respond to comments by Core Contributors.**

View File

@@ -2,21 +2,18 @@
This README details how to set up Prysm for interop testing for usage with other Ethereum consensus clients.
> [!IMPORTANT]
> This guide is likely to be outdated. The Prysm team does not have capacity to troubleshoot
> outdated interop guides or instructions. If you experience issues with this guide, please file an
> issue for visibility and propose fixes, if possible.
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
2. `git clone https://github.com/prysmaticlabs/prysm && cd prysm`
3. `bazel build //cmd/...`
3. `bazel build //...`
## Starting from Genesis
Prysm can be started from a built-in mainnet genesis state, or started with a provided genesis state by
using the `--genesis-state` flag and providing a path to the genesis.ssz file.
Prysm supports a few ways to quickly launch a beacon node from basic configurations:
- `NumValidators + GenesisTime`: Launches a beacon node by deterministically generating a state from a num-validators flag along with a genesis time **(Recommended)**
- `SSZ Genesis`: Launches a beacon node from a .ssz file containing a SSZ-encoded, genesis beacon state
## Generating a Genesis State
@@ -24,34 +21,21 @@ To setup the necessary files for these quick starts, Prysm provides a tool to ge
a deterministically generated set of validator private keys following the official interop YAML format
[here](https://github.com/ethereum/eth2.0-pm/blob/master/interop/mocked_start).
You can use `prysmctl` to create a deterministic genesis state for interop.
You can use `bazel run //tools/genesis-state-gen` to create a deterministic genesis state for interop.
```sh
# Download (or create) a chain config file.
curl https://raw.githubusercontent.com/ethereum/consensus-specs/refs/heads/dev/configs/minimal.yaml -o /tmp/minimal.yaml
### Usage
- **--genesis-time** uint: Unix timestamp used as the genesis time in the generated genesis state (defaults to now)
- **--num-validators** int: Number of validators to deterministically include in the generated genesis state
- **--output-ssz** string: Output filename of the SSZ marshaling of the generated genesis state
- **--config-name=interop** string: name of the beacon chain config to use when generating the state. ex mainnet|minimal|interop
The example below creates 64 validator keys, instantiates a genesis state with those 64 validators and with genesis unix timestamp 1567542540,
and finally writes an SSZ-encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section. When using the `--interop-*` flags, the beacon node will assume the `interop` config should be used, unless a different config is specified on the command line.
# Run prysmctl to generate genesis with a 2 minute genesis delay and 256 validators.
bazel run //cmd/prysmctl --config=minimal -- \
testnet generate-genesis \
--genesis-time-delay=120 \
--num-validators=256 \
--output-ssz=/tmp/genesis.ssz \
--chain-config-file=/tmp/minimal.yaml
```
The flags are explained below:
- `bazel run //cmd/prysmctl` is the bazel command to compile and run prysmctl.
- `--config=minimal` is a bazel build time configuration flag to compile Prysm with minimal state constants.
- `--` is an argument divider to tell bazel that everything after this divider should be passed as arguments to prysmctl. Without this divider, it isn't clear to bazel if the arguments are meant to be build time arguments or runtime arguments so the operation complains and fails to build without this divider.
- `testnet` is the primary command argument for prysmctl.
- `generate-genesis` is the subcommand to `testnet` in prysmctl.
- `--genesis-time-delay` uint: The number of seconds in the future to define genesis. Example: a value of 60 will set the genesis time to 1 minute in the future. This should be sufficiently large enough to allow for you to start the beacon node before the genesis time.
- `--num-validators` int: Number of validators to deterministically include in the generated genesis state
- `--output-ssz` string: Output filename of the SSZ marshaling of the generated genesis state
- `--chain-config-file` string: Filepath to a chain config yaml file.
Note: This guide saves items to the `/tmp/` directory which will not persist if your machine is
restarted. Consider tweaking the arguments if persistence is needed.
bazel run //tools/genesis-state-gen -- --config-name interop --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
```
## Launching a Beacon Node + Validator Client
@@ -60,33 +44,45 @@ restarted. Consider tweaking the arguments if persistence is needed.
Open up two terminal windows, run:
```
bazel run //cmd/beacon-chain --config=minimal -- \
--minimal-config \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-minimal-devnet \
--force-clear-db \
--min-sync-peers=0 \
--genesis-state=/tmp/genesis.ssz \
--chain-config-file=/tmp/minimal.yaml
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-num-validators 64 \
--interop-eth1data-votes
```
This will start the system with 256 validators. The flags used can be explained as such:
- `bazel run //cmd/beacon-chain --config=minimal` builds and runs the beacon node in minimal build configuration.
- `--` is a flag divider to distinguish between bazel flags and flags that should be passed to the application. All flags and arguments after this divider are passed to the beacon chain.
- `--minimal-config` tells the beacon node to use minimal network configuration. This is different from the compile time state configuration flag `--config=minimal` and both are required.
- `--bootstrap-node=` disables the default bootstrap nodes. This prevents the client from attempting to peer with mainnet nodes.
- `--datadir=/tmp/beacon-chain-minimal-devnet` sets the data directory in a temporary location. Change this to your preferred destination.
- `--force-clear-db` will delete the beaconchain.db file without confirming with the user. This is helpful for iteratively running local devnets without changing the datadir, but less helpful for one off runs where there was no database in the data directory.
- `--min-sync-peers=0` allows the beacon node to skip initial sync without peers. This is essential because Prysm expects at least a few peers to start the blockchain.
- `--genesis-state=/tmp/genesis.ssz` defines the path to the generated genesis ssz file. The beacon node will use this as the initial genesis state.
- `--chain-config-file=/tmp/minimal.yaml` defines the path to the yaml file with the chain configuration.
As soon as the beacon node has started, start the validator in the other terminal window.
This will deterministically generate a beacon genesis state and start
the system with 64 validators and the genesis time set to the current unix timestamp.
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //cmd/validator --config=minimal -- --datadir=/tmp/validator --interop-num-validators=256 --minimal-config --suggested-fee-recipient=0x8A04d14125D0FDCDc742F4A05C051De07232EDa4
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 256 validators performing their duties accordingly.
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
### Launching from `genesis.ssz`
Assuming you generated a `genesis.ssz` file with 64 validators, open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.

View File

@@ -227,7 +227,7 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.5.0-alpha.10"
consensus_spec_version = "v1.5.0-alpha.8"
bls_test_version = "v0.1.1"
@@ -243,7 +243,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-NtWIhbO/mVMb1edq5jqABL0o8R1tNFiuG8PCMAsUHcs=",
integrity = "sha256-BsGIbEyJuYrzhShGl0tHhR4lP5Qwno8R3k8a6YBR/DA=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
)
@@ -259,7 +259,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-DFlFlnzls1bBrDm+/xD8NK2ivvkhxR+rSNVLLqScVKc=",
integrity = "sha256-DkdvhPP2KiqUOpwFXQIFDCWCwsUDIC/xhTBD+TZevm0=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
)
@@ -275,7 +275,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-G9ENPF8udZL/BqRHbi60GhFPnZDPZAH6UjcjRiOlvbk=",
integrity = "sha256-vkZqV0HB8A2Uc56C1Us/p5G57iaHL+zw2No93Xt6M/4=",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
)
@@ -290,7 +290,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-ClOLKkmAcEi8/uKi6LDeqthask5+E3sgxVoA0bqmQ0c=",
integrity = "sha256-D/HPAW61lKqjoWwl7N0XvhdX+67dCEFAy8JxVzqBGtU=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -12,10 +12,8 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",

View File

@@ -14,7 +14,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -177,7 +176,7 @@ func (c *Client) do(ctx context.Context, method string, path string, body io.Rea
err = non200Err(r)
return
}
res, err = io.ReadAll(io.LimitReader(r.Body, client.MaxBodySize))
res, err = io.ReadAll(r.Body)
if err != nil {
err = errors.Wrap(err, "error reading http response body from builder server")
return
@@ -282,7 +281,7 @@ func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValid
if err != nil {
return err
}
log.WithField("registrationCount", len(svr)).Debug("Successfully registered validator(s) on builder")
log.WithField("num_registrations", len(svr)).Info("successfully registered validator(s) on builder")
return nil
}
@@ -359,7 +358,7 @@ func (c *Client) Status(ctx context.Context) error {
}
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(io.LimitReader(response.Body, client.MaxErrBodySize))
bodyBytes, err := io.ReadAll(response.Body)
var errMessage ErrorMessage
var body string
if err != nil {

View File

@@ -9,7 +9,6 @@ import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
@@ -1014,7 +1013,7 @@ func (bb *BuilderBidDeneb) ToProto() (*eth.BuilderBidDeneb, error) {
if err != nil {
return nil, err
}
if len(bb.BlobKzgCommitments) > params.BeaconConfig().DeprecatedMaxBlobsPerBlock {
if len(bb.BlobKzgCommitments) > fieldparams.MaxBlobsPerBlock {
return nil, fmt.Errorf("too many blob commitments: %d", len(bb.BlobKzgCommitments))
}
kzgCommitments := make([][]byte, len(bb.BlobKzgCommitments))

View File

@@ -10,18 +10,11 @@ import (
"github.com/pkg/errors"
)
const (
MaxBodySize int64 = 1 << 23 // 8MB default, WithMaxBodySize can override
MaxBodySizeState int64 = 1 << 29 // 512MB
MaxErrBodySize int64 = 1 << 17 // 128KB
)
// Client is a wrapper object around the HTTP client.
type Client struct {
hc *http.Client
baseURL *url.URL
token string
maxBodySize int64
hc *http.Client
baseURL *url.URL
token string
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
@@ -33,9 +26,8 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
maxBodySize: MaxBodySize,
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
@@ -80,7 +72,7 @@ func (c *Client) NodeURL() string {
// Get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), http.NoBody)
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
@@ -97,7 +89,7 @@ func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byt
if r.StatusCode != http.StatusOK {
return nil, Non200Err(r)
}
b, err := io.ReadAll(io.LimitReader(r.Body, c.maxBodySize))
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body")
}

View File

@@ -25,16 +25,16 @@ var ErrInvalidNodeVersion = errors.New("invalid node version response")
var ErrConnectionIssue = errors.New("could not connect")
// Non200Err is a function that parses an HTTP response to handle responses that are not 200 with a formatted error.
func Non200Err(r *http.Response) error {
b, err := io.ReadAll(io.LimitReader(r.Body, MaxErrBodySize))
func Non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(b)
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", r.StatusCode, r.Request.URL, body)
switch r.StatusCode {
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case http.StatusNotFound:
return errors.Wrap(ErrNotFound, msg)
default:

View File

@@ -93,7 +93,6 @@ func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, client.ErrConnectionIssue.Error()).Error()),
}
return
}
defer func() {

View File

@@ -40,7 +40,7 @@ func TestNewEventStream(t *testing.T) {
func TestEventStream(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, _ *http.Request) {
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, r *http.Request) {
flusher, ok := w.(http.Flusher)
require.Equal(t, true, ok)
for i := 1; i <= 3; i++ {
@@ -79,23 +79,3 @@ func TestEventStream(t *testing.T) {
}
}
}
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)
require.NoError(t, err)
// error will happen when request is made, should be received over events channel
go stream.Subscribe(eventsChannel)
event := <-eventsChannel
if event.EventType != EventConnectionError {
t.Errorf("Expected event type %q, got %q", EventConnectionError, event.EventType)
}
}

View File

@@ -46,10 +46,3 @@ func WithAuthenticationToken(token string) ClientOpt {
c.token = token
}
}
// WithMaxBodySize overrides the default max body size of 8MB.
func WithMaxBodySize(size int64) ClientOpt {
return func(c *Client) {
c.maxBodySize = size
}
}

View File

@@ -36,8 +36,9 @@ go_library(
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/migration:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",

View File

@@ -1546,10 +1546,3 @@ func EventChainReorgFromV1(event *ethv1.EventChainReorg) *ChainReorgEvent {
ExecutionOptimistic: event.ExecutionOptimistic,
}
}
func SyncAggregateFromConsensus(sa *eth.SyncAggregate) *SyncAggregate {
return &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(sa.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(sa.SyncCommitteeSignature),
}
}

View File

@@ -3,227 +3,125 @@ package structs
import (
"encoding/json"
"fmt"
"strconv"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
v1 "github.com/prysmaticlabs/prysm/v5/proto/eth/v1"
v2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
"github.com/prysmaticlabs/prysm/v5/proto/migration"
)
func LightClientUpdateFromConsensus(update interfaces.LightClientUpdate) (*LightClientUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
func LightClientUpdateFromConsensus(update *v2.LightClientUpdate) (*LightClientUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderToJSON(update.FinalizedHeader())
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
var scBranch [][32]byte
var finalityBranch [][32]byte
if update.Version() >= version.Electra {
scb, err := update.NextSyncCommitteeBranchElectra()
if err != nil {
return nil, err
}
scBranch = scb[:]
fb, err := update.FinalityBranchElectra()
if err != nil {
return nil, err
}
finalityBranch = fb[:]
} else {
scb, err := update.NextSyncCommitteeBranch()
if err != nil {
return nil, err
}
scBranch = scb[:]
fb, err := update.FinalityBranch()
if err != nil {
return nil, err
}
finalityBranch = fb[:]
}
return &LightClientUpdate{
AttestedHeader: attestedHeader,
NextSyncCommittee: SyncCommitteeFromConsensus(update.NextSyncCommittee()),
NextSyncCommitteeBranch: branchToJSON(scBranch),
NextSyncCommittee: SyncCommitteeFromConsensus(migration.V2SyncCommitteeToV1Alpha1(update.NextSyncCommittee)),
NextSyncCommitteeBranch: branchToJSON(update.NextSyncCommitteeBranch),
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(finalityBranch),
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func LightClientFinalityUpdateFromConsensus(update interfaces.LightClientFinalityUpdate) (*LightClientFinalityUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
func LightClientFinalityUpdateFromConsensus(update *v2.LightClientFinalityUpdate) (*LightClientFinalityUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
finalizedHeader, err := lightClientHeaderToJSON(update.FinalizedHeader())
finalizedHeader, err := lightClientHeaderContainerToJSON(update.FinalizedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal finalized light client header")
}
var finalityBranch [][32]byte
if update.Version() >= version.Electra {
b, err := update.FinalityBranchElectra()
if err != nil {
return nil, err
}
finalityBranch = b[:]
} else {
b, err := update.FinalityBranch()
if err != nil {
return nil, err
}
finalityBranch = b[:]
}
return &LightClientFinalityUpdate{
AttestedHeader: attestedHeader,
FinalizedHeader: finalizedHeader,
FinalityBranch: branchToJSON(finalityBranch),
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
FinalityBranch: branchToJSON(update.FinalityBranch),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func LightClientOptimisticUpdateFromConsensus(update interfaces.LightClientOptimisticUpdate) (*LightClientOptimisticUpdate, error) {
attestedHeader, err := lightClientHeaderToJSON(update.AttestedHeader())
func LightClientOptimisticUpdateFromConsensus(update *v2.LightClientOptimisticUpdate) (*LightClientOptimisticUpdate, error) {
attestedHeader, err := lightClientHeaderContainerToJSON(update.AttestedHeader)
if err != nil {
return nil, errors.Wrap(err, "could not marshal attested light client header")
}
return &LightClientOptimisticUpdate{
AttestedHeader: attestedHeader,
SyncAggregate: SyncAggregateFromConsensus(update.SyncAggregate()),
SignatureSlot: fmt.Sprintf("%d", update.SignatureSlot()),
SyncAggregate: syncAggregateToJSON(update.SyncAggregate),
SignatureSlot: strconv.FormatUint(uint64(update.SignatureSlot), 10),
}, nil
}
func branchToJSON[S [][32]byte](branchBytes S) []string {
func branchToJSON(branchBytes [][]byte) []string {
if branchBytes == nil {
return nil
}
branch := make([]string, len(branchBytes))
for i, root := range branchBytes {
branch[i] = hexutil.Encode(root[:])
branch[i] = hexutil.Encode(root)
}
return branch
}
func lightClientHeaderToJSON(header interfaces.LightClientHeader) (json.RawMessage, error) {
func syncAggregateToJSON(input *v1.SyncAggregate) *SyncAggregate {
return &SyncAggregate{
SyncCommitteeBits: hexutil.Encode(input.SyncCommitteeBits),
SyncCommitteeSignature: hexutil.Encode(input.SyncCommitteeSignature),
}
}
func lightClientHeaderContainerToJSON(container *v2.LightClientHeaderContainer) (json.RawMessage, error) {
// In the case that a finalizedHeader is nil.
if header == nil {
if container == nil {
return nil, nil
}
var result any
beacon, err := container.GetBeacon()
if err != nil {
return nil, errors.Wrap(err, "could not get beacon block header")
}
switch v := header.Version(); v {
case version.Altair:
result = &LightClientHeader{Beacon: BeaconBlockHeaderFromConsensus(header.Beacon())}
case version.Capella:
exInterface, err := header.Execution()
var header any
switch t := (container.Header).(type) {
case *v2.LightClientHeaderContainer_HeaderAltair:
header = &LightClientHeader{Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon))}
case *v2.LightClientHeaderContainer_HeaderCapella:
execution, err := ExecutionPayloadHeaderCapellaFromConsensus(t.HeaderCapella.Execution)
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderCapella)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderCapella{})
}
execution, err := ExecutionPayloadHeaderCapellaFromConsensus(ex)
if err != nil {
return nil, err
}
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderCapella{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
header = &LightClientHeaderCapella{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
Execution: execution,
ExecutionBranch: branchToJSON(executionBranch[:]),
ExecutionBranch: branchToJSON(t.HeaderCapella.ExecutionBranch),
}
case version.Deneb:
exInterface, err := header.Execution()
case *v2.LightClientHeaderContainer_HeaderDeneb:
execution, err := ExecutionPayloadHeaderDenebFromConsensus(t.HeaderDeneb.Execution)
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderDeneb)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderDeneb{})
}
execution, err := ExecutionPayloadHeaderDenebFromConsensus(ex)
if err != nil {
return nil, err
}
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
header = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(migration.V1HeaderToV1Alpha1(beacon)),
Execution: execution,
ExecutionBranch: branchToJSON(executionBranch[:]),
}
case version.Electra:
exInterface, err := header.Execution()
if err != nil {
return nil, err
}
ex, ok := exInterface.Proto().(*enginev1.ExecutionPayloadHeaderElectra)
if !ok {
return nil, fmt.Errorf("execution data is not %T", &enginev1.ExecutionPayloadHeaderElectra{})
}
execution, err := ExecutionPayloadHeaderElectraFromConsensus(ex)
if err != nil {
return nil, err
}
executionBranch, err := header.ExecutionBranch()
if err != nil {
return nil, err
}
result = &LightClientHeaderDeneb{
Beacon: BeaconBlockHeaderFromConsensus(header.Beacon()),
Execution: execution,
ExecutionBranch: branchToJSON(executionBranch[:]),
ExecutionBranch: branchToJSON(t.HeaderDeneb.ExecutionBranch),
}
default:
return nil, fmt.Errorf("unsupported header version %s", version.String(v))
return nil, fmt.Errorf("unsupported header type %T", t)
}
return json.Marshal(result)
}
func LightClientBootstrapFromConsensus(bootstrap interfaces.LightClientBootstrap) (*LightClientBootstrap, error) {
header, err := lightClientHeaderToJSON(bootstrap.Header())
if err != nil {
return nil, errors.Wrap(err, "could not marshal light client header")
}
var scBranch [][32]byte
if bootstrap.Version() >= version.Electra {
b, err := bootstrap.CurrentSyncCommitteeBranchElectra()
if err != nil {
return nil, err
}
scBranch = b[:]
} else {
b, err := bootstrap.CurrentSyncCommitteeBranch()
if err != nil {
return nil, err
}
scBranch = b[:]
}
return &LightClientBootstrap{
Header: header,
CurrentSyncCommittee: SyncCommitteeFromConsensus(bootstrap.CurrentSyncCommittee()),
CurrentSyncCommitteeBranch: branchToJSON(scBranch),
}, nil
return json.Marshal(header)
}

View File

@@ -26,7 +26,7 @@ type ListAttestationsResponse struct {
}
type SubmitAttestationsRequest struct {
Data json.RawMessage `json:"data"`
Data []*Attestation `json:"data"`
}
type ListVoluntaryExitsResponse struct {

View File

@@ -1,10 +1,7 @@
package structs
type SidecarsResponse struct {
Version string `json:"version"`
Data []*Sidecar `json:"data"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data []*Sidecar `json:"data"`
}
type Sidecar struct {

View File

@@ -7,8 +7,7 @@ import (
)
type AggregateAttestationResponse struct {
Version string `json:"version,omitempty"`
Data json.RawMessage `json:"data"`
Data *Attestation `json:"data"`
}
type SubmitContributionAndProofsRequest struct {

View File

@@ -6,9 +6,11 @@ go_library(
"chain_info.go",
"chain_info_forkchoice.go",
"currently_syncing_block.go",
"currently_syncing_execution_payload_envelope.go",
"defragment.go",
"error.go",
"execution_engine.go",
"execution_engine_epbs.go",
"forkchoice_update_execution.go",
"head.go",
"head_sync_committee_info.go",
@@ -25,6 +27,8 @@ go_library(
"receive_attestation.go",
"receive_blob.go",
"receive_block.go",
"receive_execution_payload_envelope.go",
"receive_payload_attestation_message.go",
"service.go",
"tracked_proposer.go",
"weak_subjectivity_checks.go",
@@ -43,6 +47,7 @@ go_library(
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/epbs:go_default_library",
"//beacon-chain/core/epoch/precompute:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
@@ -84,6 +89,7 @@ go_library(
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/eth/v2:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
@@ -96,6 +102,7 @@ go_library(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_prometheus_client_golang//prometheus/promauto:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@io_opencensus_go//trace:go_default_library",
"@org_golang_x_sync//errgroup:go_default_library",
],
)
@@ -108,6 +115,7 @@ go_test(
"chain_info_norace_test.go",
"chain_info_test.go",
"checktags_test.go",
"epbs_test.go",
"error_test.go",
"execution_engine_test.go",
"forkchoice_update_execution_test.go",
@@ -123,6 +131,7 @@ go_test(
"process_block_test.go",
"receive_attestation_test.go",
"receive_block_test.go",
"receive_execution_payload_envelope_test.go",
"service_norace_test.go",
"service_test.go",
"setup_test.go",
@@ -140,7 +149,6 @@ go_test(
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/light-client:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
@@ -177,6 +185,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",

View File

@@ -43,7 +43,7 @@ type ForkchoiceFetcher interface {
GetProposerHead() [32]byte
SetForkChoiceGenesisTime(uint64)
UpdateHead(context.Context, primitives.Slot)
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte)
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
@@ -51,6 +51,7 @@ type ForkchoiceFetcher interface {
ProposerBoost() [32]byte
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
IsCanonical(ctx context.Context, blockRoot [32]byte) (bool, error)
GetPTCVote(root [32]byte) primitives.PTCStatus
}
// TimeFetcher retrieves the Ethereum consensus data that's related to time.
@@ -119,6 +120,12 @@ type OptimisticModeFetcher interface {
IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error)
}
// ExecutionPayloadFetcher defines a common interface that exposes forkchoice
// information about payload block hashes.
type ExecutionPayloadFetcher interface {
HashInForkchoice([32]byte) bool
}
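Since the blockchain service implements this interface below, a conventional compile-time assertion could make that explicit; the following line is illustrative and not part of the diff:

package blockchain

// Illustrative compile-time check that *Service satisfies the new interface.
var _ ExecutionPayloadFetcher = (*Service)(nil)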
// FinalizedCheckpt returns the latest finalized checkpoint from chain store.
func (s *Service) FinalizedCheckpt() *ethpb.Checkpoint {
s.cfg.ForkChoiceStore.RLock()
@@ -400,6 +407,14 @@ func (s *Service) InForkchoice(root [32]byte) bool {
return s.cfg.ForkChoiceStore.HasNode(root)
}
// HashInForkchoice returns true if the given payload block hash is found in
// forkchoice
func (s *Service) HashInForkchoice(hash [32]byte) bool {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HasHash(hash)
}
// IsOptimisticForRoot takes the root as argument instead of the current head
// and returns true if it is optimistic.
func (s *Service) IsOptimisticForRoot(ctx context.Context, root [32]byte) (bool, error) {

View File

@@ -6,6 +6,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
@@ -30,11 +31,11 @@ func (s *Service) SetForkChoiceGenesisTime(timestamp uint64) {
s.cfg.ForkChoiceStore.SetGenesisTime(timestamp)
}
// HighestReceivedBlockSlot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlot() primitives.Slot {
// HighestReceivedBlockSlotRoot returns the corresponding value from forkchoice
func (s *Service) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlot()
return s.cfg.ForkChoiceStore.HighestReceivedBlockSlotRoot()
}
// ReceivedBlocksLastEpoch returns the corresponding value from forkchoice
@@ -100,3 +101,30 @@ func (s *Service) ParentRoot(root [32]byte) ([32]byte, error) {
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ParentRoot(root)
}
// GetPTCVote wraps a call to the corresponding method in forkchoice and also
// checks the currently syncing status of the payload.
// Warning: this method returns the current PTC status regardless of
// timeliness. A client MUST call this method when about to submit a PTC
// attestation, that is, exactly at the threshold for submitting the attestation.
func (s *Service) GetPTCVote(root [32]byte) primitives.PTCStatus {
s.cfg.ForkChoiceStore.RLock()
f := s.cfg.ForkChoiceStore.GetPTCVote(root)
s.cfg.ForkChoiceStore.RUnlock()
if f != primitives.PAYLOAD_ABSENT {
return f
}
f, isSyncing := s.payloadBeingSynced.isSyncing(root)
if isSyncing {
return f
}
return primitives.PAYLOAD_ABSENT
}
// insertPayloadEnvelope wraps a locked call to the corresponding method in
// forkchoice
func (s *Service) insertPayloadEnvelope(envelope interfaces.ROExecutionPayloadEnvelope) error {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.InsertPayloadEnvelope(envelope)
}
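A minimal sketch of how a PTC duty might consume GetPTCVote, assuming a caller-supplied broadcast function; the signer and broadcaster are not part of this changeset:

package blockchain

import (
	"context"

	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

// submitPTCVoteSketch reads the PTC vote exactly at the attestation threshold,
// since GetPTCVote reflects the current status regardless of timeliness, and
// hands it to a caller-supplied broadcast function (hypothetical).
func submitPTCVoteSketch(ctx context.Context, s *Service, root [32]byte, broadcast func(context.Context, [32]byte, primitives.PTCStatus) error) error {
	status := s.GetPTCVote(root) // PAYLOAD_ABSENT, PAYLOAD_PRESENT or PAYLOAD_WITHHELD
	return broadcast(ctx, root, status)
}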

View File

@@ -13,6 +13,7 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
@@ -36,9 +37,10 @@ func prepareForkchoiceState(
blockRoot [32]byte,
parentRoot [32]byte,
payloadHash [32]byte,
parentHash [32]byte,
justified *ethpb.Checkpoint,
finalized *ethpb.Checkpoint,
) (state.BeaconState, blocks.ROBlock, error) {
) (state.BeaconState, consensus_blocks.ROBlock, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
@@ -60,7 +62,7 @@ func prepareForkchoiceState(
base.BlockRoots[0] = append(base.BlockRoots[0], blockRoot[:]...)
st, err := state_native.InitializeFromProtoBellatrix(base)
if err != nil {
return nil, blocks.ROBlock{}, err
return nil, consensus_blocks.ROBlock{}, err
}
blk := &ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
@@ -68,16 +70,17 @@ func prepareForkchoiceState(
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: &enginev1.ExecutionPayload{
BlockHash: payloadHash[:],
BlockHash: payloadHash[:],
ParentHash: parentHash[:],
},
},
},
}
signed, err := blocks.NewSignedBeaconBlock(blk)
if err != nil {
return nil, blocks.ROBlock{}, err
return nil, consensus_blocks.ROBlock{}, err
}
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
roblock, err := consensus_blocks.NewROBlockWithRoot(signed, blockRoot)
return st, roblock, err
}
@@ -141,7 +144,7 @@ func TestUnrealizedJustifiedBlockHash(t *testing.T) {
service := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: []byte{'j'}}
ofc := &ethpb.Checkpoint{Root: []byte{'f'}}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
service.cfg.ForkChoiceStore.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
@@ -335,22 +338,22 @@ func TestService_ChainHeads(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'a'}, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'b'}, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -432,10 +435,10 @@ func TestService_IsOptimistic(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -468,10 +471,10 @@ func TestService_IsOptimisticForRoot(t *testing.T) {
c := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, head: &head{root: [32]byte{'b'}}}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, ojc, ofc)
st, roblock, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, c.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))

View File

@@ -0,0 +1,36 @@
package blockchain
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
type currentlySyncingPayload struct {
sync.Mutex
roots map[[32]byte]primitives.PTCStatus
}
func (b *currentlySyncingPayload) set(envelope interfaces.ROExecutionPayloadEnvelope) {
b.Lock()
defer b.Unlock()
if envelope.PayloadWithheld() {
b.roots[envelope.BeaconBlockRoot()] = primitives.PAYLOAD_WITHHELD
} else {
b.roots[envelope.BeaconBlockRoot()] = primitives.PAYLOAD_PRESENT
}
}
func (b *currentlySyncingPayload) unset(root [32]byte) {
b.Lock()
defer b.Unlock()
delete(b.roots, root)
}
func (b *currentlySyncingPayload) isSyncing(root [32]byte) (status primitives.PTCStatus, isSyncing bool) {
b.Lock()
defer b.Unlock()
status, isSyncing = b.roots[root]
return
}
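A sketch of the intended lifecycle for this map, assuming a hypothetical wrapper around the real envelope processing path: the root is marked as syncing before the payload is handed over, and always cleared afterwards so GetPTCVote falls back to forkchoice:

package blockchain

import (
	"context"

	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)

// syncEnvelopeSketch is illustrative only; the real call sites live in the
// envelope receiving path.
func (s *Service) syncEnvelopeSketch(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) error {
	s.payloadBeingSynced.set(e)                           // record PRESENT or WITHHELD for this root
	defer s.payloadBeingSynced.unset(e.BeaconBlockRoot()) // always clear once processing ends
	return s.insertPayloadEnvelope(e)
}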

View File

@@ -0,0 +1,18 @@
package blockchain
import (
"testing"
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestServiceGetPTCVote(t *testing.T) {
c := &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)}
s := &Service{cfg: &config{ForkChoiceStore: doublylinkedtree.New()}, payloadBeingSynced: c}
r := [32]byte{'r'}
require.Equal(t, primitives.PAYLOAD_ABSENT, s.GetPTCVote(r))
c.roots[r] = primitives.PAYLOAD_WITHHELD
require.Equal(t, primitives.PAYLOAD_WITHHELD, s.GetPTCVote(r))
}

View File

@@ -30,6 +30,9 @@ var (
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
// ErrNilHead is returned when no head is present in the blockchain service.
ErrNilHead = errors.New("nil head")
// errInvalidValidatorIndex is returned when a validator index is
// invalid or unexpected
errInvalidValidatorIndex = errors.New("invalid validator index")
)
var errMaxBlobsExceeded = errors.New("expected commitments in block exceed MAX_BLOBS_PER_BLOCK")

View File

@@ -2,15 +2,13 @@ package blockchain
import (
"context"
"crypto/sha256"
"fmt"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -30,6 +28,8 @@ import (
"github.com/sirupsen/logrus"
)
const blobCommitmentVersionKZG uint8 = 0x01
var defaultLatestValidHash = bytesutil.PadTo([]byte{0xff}, 32)
// notifyForkchoiceUpdate signals the execution engine about fork choice updates. The execution engine should:
@@ -72,7 +72,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
if arg.attributes == nil {
arg.attributes = payloadattribute.EmptyWithVersion(headBlk.Version())
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), arg)
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, arg.attributes)
if err != nil {
switch {
@@ -95,6 +94,14 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
log.WithError(err).Error("Could not set head root to invalid")
return nil, nil
}
if len(invalidRoots) == 0 {
log.WithFields(logrus.Fields{
"slot": headBlk.Slot(),
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot[:])),
}).Warn("Invalid payload")
return nil, nil
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
log.WithError(err).Error("Could not remove invalid block and state")
return nil, nil
@@ -109,6 +116,7 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
}).Warn("Pruned invalid blocks, could not update head root")
return nil, invalidBlock{error: ErrInvalidPayload, root: arg.headRoot, invalidAncestorRoots: invalidRoots}
}
b, err := s.getBlock(ctx, r)
if err != nil {
log.WithError(err).Error("Could not get head block")
@@ -171,38 +179,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
return payloadID, nil
}
func firePayloadAttributesEvent(ctx context.Context, f event.SubscriberSender, cfg *fcuConfig) {
pidx, err := helpers.BeaconProposerIndex(ctx, cfg.headState)
if err != nil {
log.WithError(err).
WithField("head_root", cfg.headRoot[:]).
Error("Could not get proposer index for PayloadAttributes event")
return
}
evd := payloadattribute.EventData{
ProposerIndex: pidx,
ProposalSlot: cfg.headState.Slot(),
ParentBlockRoot: cfg.headRoot[:],
Attributer: cfg.attributes,
HeadRoot: cfg.headRoot,
HeadState: cfg.headState,
HeadBlock: cfg.headBlock,
}
if cfg.headBlock != nil && !cfg.headBlock.IsNil() {
headPayload, err := cfg.headBlock.Block().Body().Execution()
if err != nil {
log.WithError(err).Error("Could not get execution payload for head block")
return
}
evd.ParentBlockHash = headPayload.BlockHash()
evd.ParentBlockNumber = headPayload.BlockNumber()
}
f.Send(&feed.Event{
Type: statefeed.PayloadAttributes,
Data: evd,
})
}
// getPayloadHash returns the payload hash given the block root.
// If the block is before the Bellatrix fork epoch, it returns the zero hash.
func (s *Service) getPayloadHash(ctx context.Context, root []byte) ([32]byte, error) {
@@ -364,7 +340,7 @@ func (s *Service) getPayloadAttribute(ctx context.Context, st state.BeaconState,
var attr payloadattribute.Attributer
switch st.Version() {
case version.Deneb, version.Electra:
case version.Deneb, version.Electra, version.EPBS:
withdrawals, _, err := st.ExpectedWithdrawals()
if err != nil {
log.WithError(err).Error("Could not get expected withdrawals to get payload attribute")
@@ -443,7 +419,13 @@ func kzgCommitmentsToVersionedHashes(body interfaces.ReadOnlyBeaconBlockBody) ([
versionedHashes := make([]common.Hash, len(commitments))
for i, commitment := range commitments {
versionedHashes[i] = primitives.ConvertKzgCommitmentToVersionedHash(commitment)
versionedHashes[i] = ConvertKzgCommitmentToVersionedHash(commitment)
}
return versionedHashes, nil
}
// ConvertKzgCommitmentToVersionedHash computes the EIP-4844 versioned hash of a
// KZG commitment: the SHA-256 of the commitment with its first byte replaced by
// the KZG version tag.
func ConvertKzgCommitmentToVersionedHash(commitment []byte) common.Hash {
versionedHash := sha256.Sum256(commitment)
versionedHash[0] = blobCommitmentVersionKZG
return versionedHash
}
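A quick property check of the versioned-hash form (SHA-256 with the version byte overwritten), written as an example test; illustrative only:

package blockchain

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// ExampleConvertKzgCommitmentToVersionedHash demonstrates that the versioned
// hash is sha256(commitment) with the first byte set to blobCommitmentVersionKZG.
func ExampleConvertKzgCommitmentToVersionedHash() {
	c := make([]byte, 48) // a zero commitment, purely for illustration
	vh := ConvertKzgCommitmentToVersionedHash(c)
	sum := sha256.Sum256(c)
	fmt.Println(vh[0] == blobCommitmentVersionKZG, bytes.Equal(vh[1:], sum[1:]))
	// Output: true true
}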

View File

@@ -0,0 +1,62 @@
package blockchain
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/config/features"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/sirupsen/logrus"
)
// notifyForkchoiceUpdateEPBS signals the execution engine about fork choice updates under ePBS. The execution engine should:
// 1. Re-organize the execution payload chain and corresponding state to make head_block_hash the head.
// 2. Apply finality to the execution state: it irreversibly persists the chain of all execution payloads and corresponding state, up to and including finalized_block_hash.
func (s *Service) notifyForkchoiceUpdateEPBS(ctx context.Context, blockhash [32]byte, attributes payloadattribute.Attributer) (*enginev1.PayloadIDBytes, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyForkchoiceUpdateEPBS")
defer span.End()
finalizedHash := s.cfg.ForkChoiceStore.FinalizedPayloadBlockHash()
justifiedHash := s.cfg.ForkChoiceStore.UnrealizedJustifiedPayloadBlockHash()
fcs := &enginev1.ForkchoiceState{
HeadBlockHash: blockhash[:],
SafeBlockHash: justifiedHash[:],
FinalizedBlockHash: finalizedHash[:],
}
if attributes == nil {
attributes = payloadattribute.EmptyWithVersion(version.EPBS)
}
payloadID, lastValidHash, err := s.cfg.ExecutionEngineCaller.ForkchoiceUpdated(ctx, fcs, attributes)
if err != nil {
switch {
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
forkchoiceUpdatedOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"headPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(blockhash[:])),
"finalizedPayloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(finalizedHash[:])),
}).Info("Called fork choice updated with optimistic block")
return payloadID, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
log.WithError(err).Info("Forkchoice updated to invalid block")
return nil, invalidBlock{error: ErrInvalidPayload, root: [32]byte(lastValidHash)}
default:
log.WithError(err).Error(ErrUndefinedExecutionEngineError)
return nil, nil
}
}
forkchoiceUpdatedValidNodeCount.Inc()
// If the forkchoice update call carried attributes, a VALID response is expected to include a payload ID.
hasAttr := attributes != nil && !attributes.IsEmpty()
if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{
"blockHash": fmt.Sprintf("%#x", blockhash[:]),
}).Error("Received nil payload ID on VALID engine response")
}
return payloadID, nil
}
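A sketch of how this might be invoked when the ePBS head payload changes; the wrapper below is an assumption for illustration, not code in this changeset:

package blockchain

import (
	"context"

	payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
)

// fcuOnNewHeadPayloadSketch forwards the new head payload hash to the engine
// and, on a VALID response with attributes, would cache the returned payload ID
// for the upcoming proposal (caching elided here).
func (s *Service) fcuOnNewHeadPayloadSketch(ctx context.Context, headPayloadHash [32]byte, attr payloadattribute.Attributer) {
	pid, err := s.notifyForkchoiceUpdateEPBS(ctx, headPayloadHash, attr)
	if err != nil {
		log.WithError(err).Error("Could not run forkchoice update for ePBS head")
		return
	}
	if pid != nil {
		_ = pid // a real caller would store this in the payload ID cache
	}
}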

View File

@@ -46,13 +46,13 @@ func Test_NotifyForkchoiceUpdate_GetPayloadAttrErrorCanContinue(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -104,13 +104,13 @@ func Test_NotifyForkchoiceUpdate(t *testing.T) {
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -287,16 +287,16 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -316,10 +316,8 @@ func Test_NotifyForkchoiceUpdate_NIlLVH(t *testing.T) {
headRoot: brd,
}
_, err = service.notifyForkchoiceUpdate(ctx, a)
require.Equal(t, true, IsInvalidBlock(err))
require.Equal(t, brd, InvalidBlockRoot(err))
require.Equal(t, brd, InvalidAncestorRoots(err)[0])
require.Equal(t, 1, len(InvalidAncestorRoots(err)))
// The incoming block is not invalid because the empty node is still valid on ePBS.
require.Equal(t, false, IsInvalidBlock(err))
}
//
@@ -398,28 +396,28 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{}))
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, bra, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
bState, _ := util.DeterministicGenesisState(t, 10)
require.NoError(t, beaconDB.SaveState(ctx, bState, bra))
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, brb, bra, [32]byte{'B'}, [32]byte{'A'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 3, brc, brb, [32]byte{'C'}, [32]byte{'B'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 4, brd, brc, [32]byte{'D'}, [32]byte{'C'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 5, bre, brb, [32]byte{'E'}, [32]byte{'D'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 6, brf, bre, [32]byte{'F'}, [32]byte{'E'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 7, brg, bre, [32]byte{'G'}, [32]byte{'F'}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -512,10 +510,10 @@ func Test_NotifyNewPayload(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, r, [32]byte{}, [32]byte{'A'}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -692,7 +690,7 @@ func Test_NotifyNewPayload(t *testing.T) {
}
service.cfg.ExecutionEngineCaller = e
root := [32]byte{'a'}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, root, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
postVersion, postHeader, err := getStateVersionAndPayload(tt.postState)
@@ -759,17 +757,17 @@ func Test_reportInvalidBlock(t *testing.T) {
service, tr := minimalTestService(t)
ctx, _, fcs := tr.ctx, tr.db, tr.fcs
jcp := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, jcp, jcp)
st, root, err := prepareForkchoiceState(ctx, 0, [32]byte{'A'}, [32]byte{}, [32]byte{'a'}, [32]byte{}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 1, [32]byte{'B'}, [32]byte{'A'}, [32]byte{'b'}, [32]byte{'a'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'C'}, [32]byte{'B'}, [32]byte{'c'}, [32]byte{'b'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, jcp, jcp)
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'d'}, [32]byte{'c'}, jcp, jcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
@@ -931,7 +929,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
fjc := &forkchoicetypes.Checkpoint{Epoch: 0, Root: params.BeaconConfig().ZeroHash}
require.NoError(t, fcs.UpdateJustifiedCheckpoint(ctx, fjc))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(fjc))
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, genesisRoot, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
fcs.SetOriginRoot(genesisRoot)
@@ -965,7 +963,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, opStateSummary))
tenjc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
tenfc := &ethpb.Checkpoint{Epoch: 10, Root: genesisRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, tenjc, tenfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 320, opRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, tenjc, tenfc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
assert.NoError(t, beaconDB.SaveGenesisBlockRoot(ctx, opRoot))
@@ -994,7 +992,7 @@ func Test_UpdateLastValidatedCheckpoint(t *testing.T) {
require.NoError(t, beaconDB.SaveStateSummary(ctx, validSummary))
twentyjc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
twentyfc := &ethpb.Checkpoint{Epoch: 20, Root: validRoot[:]}
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, twentyjc, twentyfc)
state, blkRoot, err = prepareForkchoiceState(ctx, 640, validRoot, genesisRoot, params.BeaconConfig().ZeroHash, [32]byte{}, twentyjc, twentyfc)
require.NoError(t, err)
fcs.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) { return []uint64{}, nil })
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -1056,8 +1054,8 @@ func TestService_removeInvalidBlockAndState(t *testing.T) {
require.NoError(t, service.removeInvalidBlockAndState(ctx, [][32]byte{r1, r2}))
require.Equal(t, false, service.hasBlock(ctx, r1))
require.Equal(t, false, service.hasBlock(ctx, r2))
require.Equal(t, false, service.chainHasBlock(ctx, r1))
require.Equal(t, false, service.chainHasBlock(ctx, r2))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r1))
require.Equal(t, false, service.cfg.BeaconDB.HasStateSummary(ctx, r2))
has, err := service.cfg.StateGen.HasState(ctx, r1)

View File

@@ -122,13 +122,13 @@ func TestService_forkchoiceUpdateWithExecution_SameHeadRootNewProposer(t *testin
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 1, altairBlkRoot, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 2, bellatrixBlkRoot, altairBlkRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, state, blkRoot))
@@ -164,10 +164,10 @@ func TestShouldOverrideFCU(t *testing.T) {
headRoot := [32]byte{'b'}
parentRoot := [32]byte{'a'}
ojc := &ethpb.Checkpoint{}
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
st, root, err := prepareForkchoiceState(ctx, 1, parentRoot, [32]byte{}, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, ojc, ojc)
st, root, err = prepareForkchoiceState(ctx, 2, headRoot, parentRoot, [32]byte{}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, root))

View File

@@ -11,7 +11,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -405,19 +404,13 @@ func (s *Service) saveOrphanedOperations(ctx context.Context, orphanedRoot [32]b
if a.GetData().Slot+params.BeaconConfig().SlotsPerEpoch < s.CurrentSlot() {
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.AttestationCache.Add(a); err != nil {
if helpers.IsAggregated(a) {
if err := s.cfg.AttPool.SaveAggregatedAttestation(a); err != nil {
return err
}
} else {
if a.IsAggregated() {
if err = s.cfg.AttPool.SaveAggregatedAttestation(a); err != nil {
return err
}
} else {
if err = s.cfg.AttPool.SaveUnaggregatedAttestation(a); err != nil {
return err
}
if err := s.cfg.AttPool.SaveUnaggregatedAttestation(a); err != nil {
return err
}
}
saveOrphanedAttCount.Inc()

View File

@@ -48,7 +48,7 @@ func TestSaveHead_Different(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -63,11 +63,11 @@ func TestSaveHead_Different(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -101,7 +101,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
require.NoError(t, err)
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, oldBlock.Block().Slot(), oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
service.head = &head{
@@ -110,7 +110,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
}
reorgChainParent := [32]byte{'B'}
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, 0, reorgChainParent, oldRoot, oldBlock.Block().ParentRoot(), [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
@@ -122,7 +122,7 @@ func TestSaveHead_Different_Reorg(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -238,11 +238,11 @@ func TestRetrieveHead_ReadOnly(t *testing.T) {
wsb := util.SaveBlock(t, context.Background(), service.cfg.BeaconDB, newHeadSignedBlock)
newRoot, err := newHeadBlock.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, slots.PrevSlot(wsb.Block().Slot()), wsb.Block().ParentRoot(), service.cfg.ForkChoiceStore.CachedHeadRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, ojc, ofc)
state, blkRoot, err = prepareForkchoiceState(ctx, wsb.Block().Slot(), newRoot, wsb.Block().ParentRoot(), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
headState, err := util.NewBeaconState()
@@ -304,7 +304,7 @@ func TestSaveOrphanedAtts(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -381,7 +381,7 @@ func TestSaveOrphanedOps(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -451,7 +451,7 @@ func TestSaveOrphanedAtts_CanFilter(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlockCapella{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -509,7 +509,7 @@ func TestSaveOrphanedAtts_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk3, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -568,7 +568,7 @@ func TestSaveOrphanedAtts_CanFilter_DoublyLinkedTrie(t *testing.T) {
for _, blk := range []*ethpb.SignedBeaconBlock{blkG, blk1, blk2, blk4} {
r, err := blk.Block.HashTreeRoot()
require.NoError(t, err)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, blk.Block.Slot, r, bytesutil.ToBytes32(blk.Block.ParentRoot), [32]byte{}, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
util.SaveBlock(t, ctx, beaconDB, blk)
@@ -583,7 +583,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
ctx, beaconDB, fcs := tr.ctx, tr.db, tr.fcs
ojp := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, [32]byte{}, [32]byte{}, ojp, ojp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
@@ -603,7 +603,7 @@ func TestUpdateHead_noSavedChanges(t *testing.T) {
headRoot := service.headRoot()
require.Equal(t, [32]byte{}, headRoot)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, fcp, fcp)
st, blkRoot, err = prepareForkchoiceState(ctx, 0, bellatrixBlkRoot, [32]byte{}, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, fcs.InsertNode(ctx, st, blkRoot))
fcs.SetBalancesByRooter(func(context.Context, [32]byte) ([]uint64, error) { return []uint64{1, 2}, nil })

View File

@@ -26,7 +26,8 @@ go_test(
deps = [
"//consensus-types/blocks:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_consensys_gnark_crypto//ecc/bls12-381/fr:go_default_library",
"@com_github_crate_crypto_go_kzg_4844//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

View File

@@ -1,14 +1,51 @@
package kzg
import (
"bytes"
"crypto/sha256"
"encoding/binary"
"testing"
"github.com/consensys/gnark-crypto/ecc/bls12-381/fr"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/sirupsen/logrus"
)
func deterministicRandomness(seed int64) [32]byte {
// Converts an int64 to a byte slice
buf := new(bytes.Buffer)
err := binary.Write(buf, binary.BigEndian, seed)
if err != nil {
logrus.WithError(err).Error("Failed to write int64 to bytes buffer")
return [32]byte{}
}
bytes := buf.Bytes()
return sha256.Sum256(bytes)
}
// Returns a serialized random field element in big-endian
func GetRandFieldElement(seed int64) [32]byte {
bytes := deterministicRandomness(seed)
var r fr.Element
r.SetBytes(bytes[:])
return GoKZG.SerializeScalar(r)
}
// Returns a random blob using the passed seed as entropy
func GetRandBlob(seed int64) GoKZG.Blob {
var blob GoKZG.Blob
bytesPerBlob := GoKZG.ScalarsPerBlob * GoKZG.SerializedScalarSize
for i := 0; i < bytesPerBlob; i += GoKZG.SerializedScalarSize {
fieldElementBytes := GetRandFieldElement(seed + int64(i))
copy(blob[i:i+GoKZG.SerializedScalarSize], fieldElementBytes[:])
}
return blob
}
func GenerateCommitmentAndProof(blob GoKZG.Blob) (GoKZG.KZGCommitment, GoKZG.KZGProof, error) {
commitment, err := kzgContext.BlobToKZGCommitment(blob, 0)
if err != nil {
@@ -37,7 +74,7 @@ func TestBytesToAny(t *testing.T) {
}
func TestGenerateCommitmentAndProof(t *testing.T) {
blob := util.GetRandBlob(123)
blob := GetRandBlob(123)
commitment, proof, err := GenerateCommitmentAndProof(blob)
require.NoError(t, err)
expectedCommitment := GoKZG.KZGCommitment{180, 218, 156, 194, 59, 20, 10, 189, 186, 254, 132, 93, 7, 127, 104, 172, 238, 240, 237, 70, 83, 89, 1, 152, 99, 0, 165, 65, 143, 62, 20, 215, 230, 14, 205, 95, 28, 245, 54, 25, 160, 16, 178, 31, 232, 207, 38, 85}

View File

@@ -45,28 +45,44 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
}
log = log.WithField("syncBitsCount", agg.SyncCommitteeBits.Count())
}
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if b.Version() >= version.EPBS {
sh, err := b.Body().SignedExecutionPayloadHeader()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
header, err := sh.Header()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
return err
}
log = log.WithFields(logrus.Fields{"payloadHash": fmt.Sprintf("%#x", header.BlockHash()),
"builderIndex": header.BuilderIndex(),
"value": header.Value(),
"blobKzgCommitmentsRoot": fmt.Sprintf("%#x", header.BlobKzgCommitmentsRoot()),
})
} else {
if b.Version() >= version.Bellatrix {
p, err := b.Body().Execution()
if err != nil {
return err
}
log = log.WithField("payloadHash", fmt.Sprintf("%#x", bytesutil.Trunc(p.BlockHash())))
txs, err := p.Transactions()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
case err != nil:
return err
default:
log = log.WithField("txCount", len(txs))
txsPerSlotCount.Set(float64(len(txs)))
}
}
if b.Version() >= version.Deneb {
kzgs, err := b.Body().BlobKzgCommitments()
if err != nil {
log.WithError(err).Error("Failed to get blob KZG commitments")
} else if len(kzgs) > 0 {
log = log.WithField("kzgCommitmentCount", len(kzgs))
}
}
}
log.Info("Finished applying state transition")
@@ -112,6 +128,9 @@ func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte
// logPayload logs payload-related data every slot.
func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
if block.Version() >= version.EPBS {
return nil
}
isExecutionBlk, err := blocks.IsExecutionBlock(block.Body())
if err != nil {
return errors.Wrap(err, "could not determine if block is execution block")

View File

@@ -182,6 +182,10 @@ var (
Name: "chain_service_processing_milliseconds",
Help: "Total time to call a chain service in ReceiveBlock()",
})
executionEngineProcessingTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "execution_engine_processing_milliseconds",
Help: "Total time to process an execution payload envelope in ReceiveExecutionPayloadEnvelope()",
})
dataAvailWaitedTime = promauto.NewSummary(prometheus.SummaryOpts{
Name: "da_waited_time_milliseconds",
Help: "Total time spent waiting for a data availability check in ReceiveBlock()",

View File

@@ -1,6 +1,8 @@
package blockchain
import (
"sync"
"github.com/prysmaticlabs/prysm/v5/async/event"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
@@ -69,6 +71,22 @@ func WithDepositCache(c cache.DepositCache) Option {
}
}
// WithPayloadAttestationCache for payload attestation cache.
func WithPayloadAttestationCache(c *cache.PayloadAttestationCache) Option {
return func(s *Service) error {
s.cfg.PayloadAttestationCache = c
return nil
}
}
// WithPayloadEnvelopeCache for payload envelope cache.
func WithPayloadEnvelopeCache(c *sync.Map) Option {
return func(s *Service) error {
s.cfg.PayloadEnvelopeCache = c
return nil
}
}
// WithPayloadIDCache for payload ID cache.
func WithPayloadIDCache(c *cache.PayloadIDCache) Option {
return func(s *Service) error {
@@ -85,14 +103,6 @@ func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
}
}
// WithAttestationCache for attestation lifecycle after chain inclusion.
func WithAttestationCache(c *cache.AttestationCache) Option {
return func(s *Service) error {
s.cfg.AttestationCache = c
return nil
}
}
// WithAttestationPool for attestation lifecycle after chain inclusion.
func WithAttestationPool(p attestations.Pool) Option {
return func(s *Service) error {
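A minimal wiring sketch for the new options, assuming the service's usual variadic-option constructor NewService and zero-value caches (the node package may use dedicated constructors):

package blockchain

import (
	"context"
	"sync"

	"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
)

// newEPBSServiceSketch shows where the payload attestation and envelope caches
// would be injected; illustrative only.
func newEPBSServiceSketch(ctx context.Context, opts ...Option) (*Service, error) {
	base := []Option{
		WithPayloadAttestationCache(&cache.PayloadAttestationCache{}),
		WithPayloadEnvelopeCache(&sync.Map{}),
	}
	return NewService(ctx, append(base, opts...)...)
}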

View File

@@ -97,7 +97,7 @@ func (s *Service) OnAttestation(ctx context.Context, a ethpb.Att, disparity time
// We assume trusted attestation in this function has verified signature.
// Update forkchoice store with the new attestation for updating weight.
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Target.Epoch)
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indexedAtt.GetAttestingIndices(), bytesutil.ToBytes32(a.GetData().BeaconBlockRoot), a.GetData().Slot)
return nil
}

View File

@@ -32,7 +32,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
util.SaveBlock(t, ctx, beaconDB, blkWithoutState)
cp := &ethpb.Checkpoint{}
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
st, roblock, err := prepareForkchoiceState(ctx, 0, [32]byte{}, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
@@ -41,7 +41,7 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
r, err := blkWithStateBadAtt.Block.HashTreeRoot()
require.NoError(t, err)
cp = &ethpb.Checkpoint{Root: r[:]}
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, cp, cp)
st, roblock, err = prepareForkchoiceState(ctx, blkWithStateBadAtt.Block.Slot, r, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, cp, cp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
util.SaveBlock(t, ctx, beaconDB, blkWithStateBadAtt)
@@ -92,12 +92,12 @@ func TestStore_OnAttestation_ErrorConditions(t *testing.T) {
{
name: "process nil attestation",
a: nil,
wantedErr: "attestation is nil",
wantedErr: "attestation can't be nil",
},
{
name: "process nil field (a.Data) in attestation",
a: &ethpb.Attestation{},
wantedErr: "attestation is nil",
wantedErr: "attestation's data can't be nil",
},
{
name: "process nil field (a.Target) in attestation",
@@ -139,7 +139,7 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ojc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
ofc := &ethpb.Checkpoint{Epoch: 0, Root: tRoot[:]}
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, roblock, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, roblock))
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
@@ -170,7 +170,7 @@ func TestService_GetRecentPreState(t *testing.T) {
err = s.SetFinalizedCheckpoint(cp0)
require.NoError(t, err)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, cp0, cp0)
st, root, err := prepareForkchoiceState(ctx, 31, [32]byte(ckRoot), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp0, cp0)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
service.head = &head{
@@ -202,7 +202,7 @@ func TestService_GetAttPreState_Concurrency(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: ckRoot}))
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -259,7 +259,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)}))
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, [32]byte{}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
s1, err := service.getAttPreState(ctx, cp1)
@@ -273,7 +273,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
_, err = service.getAttPreState(ctx, cp2)
require.ErrorContains(t, "epoch 2 root 0x4200000000000000000000000000000000000000000000000000000000000000: not a checkpoint in forkchoice", err)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte(cp2.Root), [32]byte(cp1.Root), [32]byte{'R'}, [32]byte{}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -298,7 +298,7 @@ func TestStore_SaveCheckpointState(t *testing.T) {
cp3 := &ethpb.Checkpoint{Epoch: 1, Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'C'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: bytesutil.PadTo([]byte{'C'}, fieldparams.RootLength)}))
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, cp2, cp2)
st, root, err = prepareForkchoiceState(ctx, 31, [32]byte(cp3.Root), [32]byte(cp2.Root), [32]byte{'P'}, [32]byte{}, cp2, cp2)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
@@ -318,7 +318,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
checkpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r1[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(checkpoint.Root)))
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, checkpoint, checkpoint)
st, roblock, err := prepareForkchoiceState(ctx, blk.Block.Slot, r1, [32]byte{}, params.BeaconConfig().ZeroHash, [32]byte{}, checkpoint, checkpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err := service.getAttPreState(ctx, checkpoint)
@@ -336,7 +336,7 @@ func TestStore_UpdateCheckpointState(t *testing.T) {
require.NoError(t, err)
newCheckpoint := &ethpb.Checkpoint{Epoch: epoch, Root: r2[:]}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, baseState, bytesutil.ToBytes32(newCheckpoint.Root)))
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, newCheckpoint, newCheckpoint)
st, roblock, err = prepareForkchoiceState(ctx, blk.Block.Slot, r2, r1, params.BeaconConfig().ZeroHash, [32]byte{}, newCheckpoint, newCheckpoint)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, roblock))
returned, err = service.getAttPreState(ctx, newCheckpoint)

View File

@@ -7,6 +7,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
@@ -15,6 +17,7 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
@@ -64,20 +67,17 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
fcuArgs := &fcuConfig{}
if s.inRegularSync() {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
if features.Get().EnableLightClient && slots.ToEpoch(s.CurrentSlot()) >= params.BeaconConfig().AltairForkEpoch {
defer s.processLightClientUpdates(cfg)
defer s.saveLightClientUpdate(cfg)
if cfg.roblock.Version() < version.EPBS {
defer s.handleSecondFCUCall(cfg, fcuArgs)
}
}
defer s.sendLightClientFeeds(cfg)
defer s.sendStateFeedOnBlock(cfg)
defer reportProcessingTime(startTime)
defer reportAttestationInclusion(cfg.roblock.Block())
err := s.cfg.ForkChoiceStore.InsertNode(ctx, cfg.postState, cfg.roblock)
if err != nil {
// Do not use the parent context in case its deadline has already passed
ctx = trace.NewContext(context.Background(), span)
s.rollbackBlock(ctx, cfg.roblock.Root())
return errors.Wrapf(err, "could not insert block %d to fork choice store", cfg.roblock.Block().Slot())
}
@@ -101,6 +101,18 @@ func (s *Service) postBlockProcess(cfg *postBlockProcessConfig) error {
s.logNonCanonicalBlockReceived(cfg.roblock.Root(), cfg.headRoot)
return nil
}
if cfg.roblock.Version() >= version.EPBS {
if err := s.saveHead(ctx, cfg.headRoot, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("could not save head")
}
if err := s.pruneAttsFromPool(cfg.roblock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
// update the NSC and handle epoch boundaries here since we do
// not send FCU at all
return s.updateCachesPostBlockProcessing(cfg)
}
if err := s.getFCUArgs(cfg, fcuArgs); err != nil {
log.WithError(err).Error("Could not get forkchoice update argument")
return nil
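The ePBS early return added in this hunk is worth spelling out: because no forkchoice update is sent to the execution engine for ePBS blocks, head persistence, attestation-pool pruning, and the next-slot-cache/epoch-boundary work all happen inline before returning. A sketch of that control flow, with stand-in names rather than the Prysm API:

package main

import (
	"errors"
	"fmt"
)

// postBlockTail sketches the version gate above: ePBS blocks take the
// inline bookkeeping path and never reach the FCU argument assembly.
func postBlockTail(blockVersion, epbsVersion int,
	saveHead, pruneAtts, updateCaches, sendFCU func() error) error {
	if blockVersion >= epbsVersion {
		if err := saveHead(); err != nil {
			fmt.Println("could not save head:", err) // logged, not fatal
		}
		if err := pruneAtts(); err != nil {
			fmt.Println("could not prune attestations from pool:", err)
		}
		// The next-slot cache and epoch boundary are handled here because
		// no forkchoice update is sent to the engine at all under ePBS.
		return updateCaches()
	}
	return sendFCU()
}

func main() {
	ok := func() error { return nil }
	fail := func() error { return errors.New("unreachable for ePBS") }
	fmt.Println(postBlockTail(7, 7, ok, ok, ok, fail)) // <nil>
}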
@@ -377,12 +389,8 @@ func (s *Service) handleBlockAttestations(ctx context.Context, blk interfaces.Re
}
r := bytesutil.ToBytes32(a.GetData().BeaconBlockRoot)
if s.cfg.ForkChoiceStore.HasNode(r) {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Target.Epoch)
} else if features.Get().EnableExperimentalAttestationPool {
if err = s.cfg.AttestationCache.Add(a); err != nil {
return err
}
} else if err = s.cfg.AttPool.SaveBlockAttestation(a); err != nil {
s.cfg.ForkChoiceStore.ProcessAttestation(ctx, indices, r, a.GetData().Slot)
} else if err := s.cfg.AttPool.SaveBlockAttestation(a); err != nil {
return err
}
}
@@ -410,9 +418,10 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
return errors.Wrapf(err, "could not save block from slot %d", b.Block().Slot())
}
if err := s.cfg.StateGen.SaveState(ctx, r, st); err != nil {
// Do not use the parent context in case its deadline has already passed
ctx = trace.NewContext(context.Background(), span)
s.rollbackBlock(ctx, r)
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
return errors.Wrap(err, "could not save state")
}
return nil
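Both sides of this hunk preserve the same invariant, just with different helpers: if the post-state cannot be written, the block saved immediately before it must be removed so the database never holds a block without a retrievable state. A self-contained sketch of that invariant (store is a stand-in for the beacon DB/stategen pair):

package main

import (
	"errors"
	"fmt"
)

type store struct {
	blocks map[[32]byte]bool
	states map[[32]byte]bool
}

// savePostStateInfo sketches the rollback: on a failed state save, delete
// the block that was just written and surface the error.
func (s *store) savePostStateInfo(root [32]byte, stateSaveFails bool) error {
	s.blocks[root] = true
	if stateSaveFails {
		delete(s.blocks, root) // roll back the block insertion
		return errors.New("could not save state")
	}
	s.states[root] = true
	return nil
}

func main() {
	st := &store{blocks: map[[32]byte]bool{}, states: map[[32]byte]bool{}}
	fmt.Println(st.savePostStateInfo([32]byte{'r'}, true), st.blocks[[32]byte{'r'}])
}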
@@ -422,11 +431,7 @@ func (s *Service) savePostStateInfo(ctx context.Context, r [32]byte, b interface
func (s *Service) pruneAttsFromPool(headBlock interfaces.ReadOnlySignedBeaconBlock) error {
atts := headBlock.Block().Body().Attestations()
for _, att := range atts {
if features.Get().EnableExperimentalAttestationPool {
if err := s.cfg.AttestationCache.DeleteCovered(att); err != nil {
return errors.Wrap(err, "could not delete attestation")
}
} else if att.IsAggregated() {
if helpers.IsAggregated(att) {
if err := s.cfg.AttPool.DeleteAggregatedAttestation(att); err != nil {
return err
}
@@ -487,9 +492,15 @@ func (s *Service) runLateBlockTasks() {
attThreshold := params.BeaconConfig().SecondsPerSlot / 3
ticker := slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
epbs := params.BeaconConfig().EPBSForkEpoch
for {
select {
case <-ticker.C():
case slot := <-ticker.C():
if slots.ToEpoch(slot) == epbs && slot%32 == 0 {
ticker.Done()
attThreshold := params.BeaconConfig().SecondsPerSlot / 4
ticker = slots.NewSlotTickerWithOffset(s.genesisTime, time.Duration(attThreshold)*time.Second, params.BeaconConfig().SecondsPerSlot)
}
s.lateBlockTasks(s.ctx)
case <-s.ctx.Done():
log.Debug("Context closed, exiting routine")
@@ -503,15 +514,14 @@ func (s *Service) runLateBlockTasks() {
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte, slot primitives.Slot) (map[uint64]struct{}, error) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte) (map[uint64]struct{}, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > maxBlobsPerBlock {
if len(expected) > fieldparams.MaxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices, err := bs.Indices(root, slot)
indices, err := bs.Indices(root)
if err != nil {
return nil, err
}
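This side of the diff drops the per-slot lookups: the blob cap comes from the compile-time fieldparams.MaxBlobsPerBlock rather than a slot-dependent config call, and bs.Indices no longer needs the slot. The core computation documented above is unchanged; a self-contained sketch:

package main

import (
	"errors"
	"fmt"
)

var errMaxBlobsExceeded = errors.New("expected commitments exceed max blobs per block")

// missingIndices sketches the function above: given how many commitments a
// block claims and which indices are already on disk (present models
// bs.Indices), return the outstanding indices.
func missingIndices(expected int, present map[uint64]struct{}, maxPerBlock int) (map[uint64]struct{}, error) {
	if expected == 0 {
		return nil, nil
	}
	if expected > maxPerBlock {
		return nil, errMaxBlobsExceeded
	}
	missing := make(map[uint64]struct{})
	for i := uint64(0); i < uint64(expected); i++ {
		if _, ok := present[i]; !ok {
			missing[i] = struct{}{}
		}
	}
	return missing, nil
}

func main() {
	m, err := missingIndices(6, map[uint64]struct{}{1: {}, 2: {}}, 6)
	fmt.Println(len(m), err) // 4 <nil>
}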
@@ -560,7 +570,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
return nil
}
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments, block.Slot())
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return err
}
@@ -571,7 +581,7 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
// The gossip handler for blobs writes the index of each verified blob referencing the given
// root to the channel returned by blobNotifiers.forRoot.
nc := s.blobNotifiers.forRoot(root, block.Slot())
nc := s.blobNotifiers.forRoot(root)
// Log for DA checks that cross over into the next slot; helpful for debugging.
nextSlot := slots.BeginsAt(signed.Block().Slot()+1, s.genesisTime)
@@ -628,6 +638,9 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
if !s.inRegularSync() {
return
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.MissedSlot,
})
s.headLock.RLock()
headRoot := s.headRoot()
headState := s.headState(ctx)
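The addition above sends a MissedSlot event on the state feed as soon as the late-block deadline passes while in regular sync, before any head or payload work; which consumers react to it (the events API, presumably) is not shown in this diff. A tiny sketch of the guard and the send:

package main

import "fmt"

// event mirrors feed.Event; missedSlot stands in for statefeed.MissedSlot.
type event struct{ Type string }

const missedSlot = "MissedSlot"

// lateBlockTasks sketches the new notification: skip during initial sync,
// otherwise announce the missed slot on the feed.
func lateBlockTasks(inRegularSync bool, feed chan<- event) {
	if !inRegularSync {
		return // matches the guard above
	}
	feed <- event{Type: missedSlot}
}

func main() {
	ch := make(chan event, 1)
	lateBlockTasks(true, ch)
	fmt.Println(<-ch)
}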
@@ -655,34 +668,39 @@ func (s *Service) lateBlockTasks(ctx context.Context) {
attribute := s.getPayloadAttribute(ctx, headState, s.CurrentSlot()+1, headRoot[:])
// return early if we are not proposing next slot
if attribute.IsEmpty() {
return
}
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attribute)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
} else {
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: nil,
headBlock: headBlock,
attributes: attribute,
}
go firePayloadAttributesEvent(ctx, s.cfg.StateNotifier.StateFeed(), fcuArgs)
return
}
s.headLock.RLock()
headBlock, err := s.headBlock()
if err != nil {
s.headLock.RUnlock()
log.WithError(err).Debug("could not perform late block tasks: failed to retrieve head block")
return
}
s.headLock.RUnlock()
fcuArgs := &fcuConfig{
headState: headState,
headRoot: headRoot,
headBlock: headBlock,
attributes: attribute,
}
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
_, err = s.notifyForkchoiceUpdate(ctx, fcuArgs)
if err != nil {
log.WithError(err).Debug("could not perform late block tasks: failed to update forkchoice with engine")
}
}
}
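The fork in lateBlockTasks above splits on the head state's version: post-ePBS the engine is pointed at the latest execution block hash recorded in the head state (no beacon-root-based FCU at all), while pre-ePBS the usual head-block FCU arguments are assembled, now with the actual head block instead of nil. A sketch with stand-in names:

package main

import (
	"errors"
	"fmt"
)

// headState is a stand-in exposing the one accessor this path needs.
type headState struct{ latest []byte }

func (h headState) LatestBlockHash() ([]byte, error) {
	if len(h.latest) != 32 {
		return nil, errors.New("no latest block hash")
	}
	return h.latest, nil
}

// lateFCU sketches the branch above; notifyByHash and notifyByHead are
// stand-ins for notifyForkchoiceUpdateEPBS and notifyForkchoiceUpdate.
func lateFCU(st headState, epbs bool, notifyByHash func([32]byte) error, notifyByHead func() error) error {
	if epbs {
		bh, err := st.LatestBlockHash()
		if err != nil {
			return fmt.Errorf("failed to retrieve latest block hash: %w", err)
		}
		return notifyByHash([32]byte(bh))
	}
	return notifyByHead()
}

func main() {
	st := headState{latest: make([]byte, 32)}
	fmt.Println(lateFCU(st, true,
		func(h [32]byte) error { fmt.Printf("fcu by hash %x\n", h[:4]); return nil },
		func() error { return nil }))
}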

View File

@@ -1,13 +1,12 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"strings"
"time"
lightclient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
@@ -17,6 +16,7 @@ import (
doublylinkedtree "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/doubly-linked-tree"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
@@ -25,6 +25,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
mathutil "github.com/prysmaticlabs/prysm/v5/math"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
ethpbv2 "github.com/prysmaticlabs/prysm/v5/proto/eth/v2"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
@@ -115,147 +116,76 @@ func (s *Service) sendStateFeedOnBlock(cfg *postBlockProcessConfig) {
})
}
func (s *Service) processLightClientUpdates(cfg *postBlockProcessConfig) {
if err := s.processLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client optimistic update")
}
if err := s.processLightClientFinalityUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to process light client finality update")
// sendLightClientFeeds sends the light client feeds when the feature flag is enabled.
func (s *Service) sendLightClientFeeds(cfg *postBlockProcessConfig) {
if features.Get().EnableLightClient {
if _, err := s.sendLightClientOptimisticUpdate(cfg.ctx, cfg.roblock, cfg.postState); err != nil {
log.WithError(err).Error("Failed to send light client optimistic update")
}
// Get the finalized checkpoint
finalized := s.ForkChoicer().FinalizedCheckpoint()
// LightClientFinalityUpdate needs super majority
s.tryPublishLightClientFinalityUpdate(cfg.ctx, cfg.roblock, finalized, cfg.postState)
}
}
// saveLightClientUpdate saves the light client update for this block
// if it is better than the one already saved, when the feature flag is enabled.
func (s *Service) saveLightClientUpdate(cfg *postBlockProcessConfig) {
attestedRoot := cfg.roblock.Block().ParentRoot()
attestedBlock, err := s.getBlock(cfg.ctx, attestedRoot)
func (s *Service) tryPublishLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock, finalized *forkchoicetypes.Checkpoint, postState state.BeaconState) {
if finalized.Epoch <= s.lastPublishedLightClientEpoch {
return
}
config := params.BeaconConfig()
if finalized.Epoch < config.AltairForkEpoch {
return
}
syncAggregate, err := signed.Block().Body().SyncAggregate()
if err != nil || syncAggregate == nil {
return
}
// LightClientFinalityUpdate needs super majority
if syncAggregate.SyncCommitteeBits.Count()*3 < config.SyncCommitteeSize*2 {
return
}
_, err = s.sendLightClientFinalityUpdate(ctx, signed, postState)
if err != nil {
log.WithError(err).Errorf("Saving light client update failed: Could not get attested block for root %#x", attestedRoot)
return
}
if attestedBlock == nil || attestedBlock.IsNil() {
log.Error("Saving light client update failed: Attested block is nil")
return
}
attestedState, err := s.cfg.StateGen.StateByRoot(cfg.ctx, attestedRoot)
if err != nil {
log.WithError(err).Errorf("Saving light client update failed: Could not get attested state for root %#x", attestedRoot)
return
}
if attestedState == nil || attestedState.IsNil() {
log.Error("Saving light client update failed: Attested state is nil")
return
}
finalizedRoot := attestedState.FinalizedCheckpoint().Root
finalizedBlock, err := s.getBlock(cfg.ctx, [32]byte(finalizedRoot))
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping saving light client update: Finalized block is nil for root %#x", finalizedRoot)
} else {
log.WithError(err).Errorf("Saving light client update failed: Could not get finalized block for root %#x", finalizedRoot)
}
return
}
update, err := lightclient.NewLightClientUpdateFromBeaconState(
cfg.ctx,
s.CurrentSlot(),
cfg.postState,
cfg.roblock,
attestedState,
attestedBlock,
finalizedBlock,
)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not create light client update")
return
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(attestedState.Slot()))
oldUpdate, err := s.cfg.BeaconDB.LightClientUpdate(cfg.ctx, period)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not get current light client update")
return
}
if oldUpdate == nil {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
return
}
isNewUpdateBetter, err := lightclient.IsBetterUpdate(update, oldUpdate)
if err != nil {
log.WithError(err).Error("Saving light client update failed: Could not compare light client updates")
return
}
if isNewUpdateBetter {
if err := s.cfg.BeaconDB.SaveLightClientUpdate(cfg.ctx, period, update); err != nil {
log.WithError(err).Error("Saving light client update failed: Could not save light client update")
} else {
log.WithField("period", period).Debug("Saving light client update: Saved new update")
}
log.WithError(err).Error("Failed to send light client finality update")
} else {
log.WithField("period", period).Debug("Saving light client update: New update is not better than the current one. Skipping save.")
s.lastPublishedLightClientEpoch = finalized.Epoch
}
}
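The publish gate in tryPublishLightClientFinalityUpdate requires sync-committee participation of at least two thirds: with the mainnet SyncCommitteeSize of 512, 341 set bits fail (341*3 = 1023 < 1024 = 512*2) and 342 pass (1026 >= 1024). A sketch of the check:

package main

import "fmt"

// hasSupermajority mirrors the gate above: a finality update is published
// only when bitsSet*3 >= committeeSize*2.
func hasSupermajority(bitsSet, committeeSize uint64) bool {
	return bitsSet*3 >= committeeSize*2
}

func main() {
	fmt.Println(hasSupermajority(341, 512)) // false: 1023 < 1024
	fmt.Println(hasSupermajority(342, 512)) // true:  1026 >= 1024
}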
// saveLightClientBootstrap saves a light client bootstrap for this block
// when the feature flag is enabled.
func (s *Service) saveLightClientBootstrap(cfg *postBlockProcessConfig) {
blockRoot := cfg.roblock.Root()
bootstrap, err := lightclient.NewLightClientBootstrapFromBeaconState(cfg.ctx, s.CurrentSlot(), cfg.postState, cfg.roblock)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not create light client bootstrap")
return
}
err = s.cfg.BeaconDB.SaveLightClientBootstrap(cfg.ctx, blockRoot[:], bootstrap)
if err != nil {
log.WithError(err).Error("Saving light client bootstrap failed: Could not save light client bootstrap in DB")
}
}
func (s *Service) processLightClientFinalityUpdate(
ctx context.Context,
signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState,
) error {
// sendLightClientFinalityUpdate sends a light client finality update notification to the state feed.
func (s *Service) sendLightClientFinalityUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
return 0, errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
return 0, errors.Wrap(err, "could not get attested state")
}
finalizedCheckpoint := attestedState.FinalizedCheckpoint()
// Check if the finalized checkpoint has changed
if finalizedCheckpoint == nil || bytes.Equal(finalizedCheckpoint.GetRoot(), postState.FinalizedCheckpoint().Root) {
return nil
}
finalizedRoot := bytesutil.ToBytes32(finalizedCheckpoint.Root)
finalizedBlock, err := s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
if errors.Is(err, errBlockNotFoundInCacheOrDB) {
log.Debugf("Skipping processing light client finality update: Finalized block is nil for root %#x", finalizedRoot)
return nil
// Get finalized block
var finalizedBlock interfaces.ReadOnlySignedBeaconBlock
finalizedCheckPoint := attestedState.FinalizedCheckpoint()
if finalizedCheckPoint != nil {
finalizedRoot := bytesutil.ToBytes32(finalizedCheckPoint.Root)
finalizedBlock, err = s.cfg.BeaconDB.Block(ctx, finalizedRoot)
if err != nil {
finalizedBlock = nil
}
return errors.Wrapf(err, "could not get finalized block for root %#x", finalizedRoot)
}
update, err := lightclient.NewLightClientFinalityUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
@@ -264,31 +194,38 @@ func (s *Service) processLightClientFinalityUpdate(
)
if err != nil {
return errors.Wrap(err, "could not create light client finality update")
return 0, errors.Wrap(err, "could not create light client update")
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
// Wrap the update with its fork version
result := &ethpbv2.LightClientFinalityUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
}
// Send event
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientFinalityUpdate,
Data: update,
})
return nil
Data: result,
}), nil
}
func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) error {
// sendLightClientOptimisticUpdate sends a light client optimistic update notification to the state feed.
func (s *Service) sendLightClientOptimisticUpdate(ctx context.Context, signed interfaces.ReadOnlySignedBeaconBlock,
postState state.BeaconState) (int, error) {
// Get attested state
attestedRoot := signed.Block().ParentRoot()
attestedBlock, err := s.cfg.BeaconDB.Block(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested block for root %#x", attestedRoot)
return 0, errors.Wrap(err, "could not get attested block")
}
attestedState, err := s.cfg.StateGen.StateByRoot(ctx, attestedRoot)
if err != nil {
return errors.Wrapf(err, "could not get attested state for root %#x", attestedRoot)
return 0, errors.Wrap(err, "could not get attested state")
}
update, err := lightclient.NewLightClientOptimisticUpdateFromBeaconState(
ctx,
postState.Slot(),
postState,
signed,
attestedState,
@@ -296,19 +233,19 @@ func (s *Service) processLightClientOptimisticUpdate(ctx context.Context, signed
)
if err != nil {
if strings.Contains(err.Error(), lightclient.ErrNotEnoughSyncCommitteeBits) {
log.WithError(err).Debug("Skipping processing light client optimistic update")
return nil
}
return errors.Wrap(err, "could not create light client optimistic update")
return 0, errors.Wrap(err, "could not create light client update")
}
s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: update,
})
// Wrap the update with its fork version
result := &ethpbv2.LightClientOptimisticUpdateWithVersion{
Version: ethpbv2.Version(signed.Version()),
Data: update,
}
return nil
return s.cfg.StateNotifier.StateFeed().Send(&feed.Event{
Type: statefeed.LightClientOptimisticUpdate,
Data: result,
}), nil
}
// updateCachesPostBlockProcessing updates the next slot cache and handles the epoch
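The rewritten senders return (int, error), where the int is whatever StateFeed().Send returns. In the go-ethereum event package this feed API is modeled on, Send reports how many subscribers received the value, so tests can assert a nonzero count to prove the event actually went out; treating the int that way here is an assumption about intent. A minimal stand-in:

package main

import "fmt"

// feed delivers a value to every subscriber and reports how many received
// it, mirroring the semantics of event.Feed.Send.
type feed struct{ subs []chan any }

func (f *feed) Subscribe() chan any {
	c := make(chan any, 1) // buffered so Send never blocks in this sketch
	f.subs = append(f.subs, c)
	return c
}

func (f *feed) Send(v any) int {
	for _, c := range f.subs {
		c <- v
	}
	return len(f.subs)
}

func main() {
	f := &feed{}
	sub := f.Subscribe()
	n := f.Send("LightClientOptimisticUpdate")
	fmt.Println(n, <-sub) // 1 LightClientOptimisticUpdate
}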
@@ -368,7 +305,26 @@ func (s *Service) getBlockPreState(ctx context.Context, b interfaces.ReadOnlyBea
return nil, err
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, b.ParentRoot())
parentRoot := b.ParentRoot()
if b.Version() >= version.EPBS {
s.ForkChoicer().RLock()
parentHash := s.ForkChoicer().HashForBlockRoot(parentRoot)
s.ForkChoicer().RUnlock()
signedBid, err := b.Body().SignedExecutionPayloadHeader()
if err != nil {
return nil, errors.Wrap(err, "could not get signed execution payload header")
}
bid, err := signedBid.Header()
if err != nil {
return nil, errors.Wrap(err, "could not get execution payload header")
}
if parentHash == bid.BlockHash() {
// It's based on full, use the state by hash
parentRoot = parentHash
}
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, parentRoot)
if err != nil {
return nil, errors.Wrapf(err, "could not get pre state for slot %d", b.Slot())
}
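The ePBS branch added to getBlockPreState resolves which parent variant a block builds on: when the builder bid's parent block hash equals the hash forkchoice tracks for the parent root, the block extends the full parent, and the pre-state lookup is keyed by that hash (the assignment parentRoot = parentHash feeding StateByRoot suggests full-parent states are stored under the payload hash; that is an inference from this hunk). Otherwise the usual by-root lookup serves the empty parent. A sketch of the decision:

package main

import "fmt"

// preStateKey sketches the branch above: full parent when the tracked hash
// matches the bid's parent hash, empty parent otherwise.
func preStateKey(parentRoot, trackedHash, bidParentHash [32]byte) [32]byte {
	if trackedHash == bidParentHash {
		return trackedHash // full parent: state keyed by payload hash
	}
	return parentRoot // empty parent: usual state-by-root lookup
}

func main() {
	root := [32]byte{'r'}
	hash := [32]byte{'h'}
	fmt.Printf("%c %c\n", preStateKey(root, hash, hash)[0], preStateKey(root, hash, [32]byte{'x'})[0]) // h r
}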

View File

@@ -14,7 +14,6 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
lightClient "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/light-client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
@@ -41,7 +40,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/v5/time"
"github.com/prysmaticlabs/prysm/v5/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -142,7 +140,7 @@ func TestFillForkChoiceMissingBlocks_CanSave(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -184,7 +182,7 @@ func TestFillForkChoiceMissingBlocks_RootsMatch(t *testing.T) {
// the parent of the last block inserted is the tree node.
fcp := &ethpb.Checkpoint{Epoch: 0, Root: service.originBlockRoot[:]}
r0 := bytesutil.ToBytes32(roots[0])
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, fcp, fcp)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, r0, service.originBlockRoot, [32]byte{}, [32]byte{}, fcp, fcp)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
fcp2 := &forkchoicetypes.Checkpoint{Epoch: 0, Root: r0}
@@ -464,7 +462,7 @@ func TestAncestor_CanUseForkchoice(t *testing.T) {
beaconBlock.Block.ParentRoot = bytesutil.PadTo(b.Block.ParentRoot, 32)
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), b.Block.Slot, r, bytesutil.ToBytes32(b.Block.ParentRoot), params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
}
@@ -504,7 +502,7 @@ func TestAncestor_CanUseDB(t *testing.T) {
util.SaveBlock(t, context.Background(), beaconDB, beaconBlock)
}
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, ojc, ofc)
st, blkRoot, err := prepareForkchoiceState(context.Background(), 200, r200, r200, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
@@ -1111,7 +1109,7 @@ func TestOnBlock_ProcessBlocksParallel(t *testing.T) {
logHook := logTest.NewGlobal()
for i := 0; i < 10; i++ {
fc := &ethpb.Checkpoint{}
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, fc, fc)
st, blkRoot, err := prepareForkchoiceState(ctx, 0, wsb1.Block().ParentRoot(), [32]byte{}, [32]byte{}, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, blkRoot))
var wg sync.WaitGroup
@@ -2206,23 +2204,23 @@ func TestMissingIndices(t *testing.T) {
},
{
name: "expected exceeds max",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0) + 1),
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock + 1),
err: errMaxBlobsExceeded,
},
{
name: "first missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{1, 2, 3, 4, 5},
result: fakeResult([]uint64{0}),
},
{
name: "all missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
result: fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
},
{
name: "none missing",
expected: fakeCommitments(params.BeaconConfig().MaxBlobsPerBlock(0)),
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{0, 1, 2, 3, 4, 5},
result: fakeResult([]uint64{}),
},
@@ -2256,7 +2254,7 @@ func TestMissingIndices(t *testing.T) {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, c.present...))
missing, err := missingIndices(bs, c.root, c.expected, 0)
missing, err := missingIndices(bs, c.root, c.expected)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
@@ -2354,141 +2352,6 @@ func TestRollbackBlock(t *testing.T) {
require.Equal(t, false, hasState)
}
func TestRollbackBlock_SavePostStateInfo_ContextDeadline(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 128)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
// Save state summaries so that the cache is flushed and saved to disk
// later.
for i := 1; i <= 127; i++ {
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{
Slot: primitives.Slot(i),
Root: bytesutil.Bytes32(uint64(i)),
}))
}
// Set deadlined context when saving block and state
cancCtx, canc := context.WithCancel(ctx)
canc()
require.ErrorContains(t, context.Canceled.Error(), service.savePostStateInfo(cancCtx, root, wsb, postState))
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err := service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, false, hasState)
}
func TestRollbackBlock_ContextDeadline(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx
st, keys := util.DeterministicGenesisState(t, 64)
stateRoot, err := st.HashTreeRoot(ctx)
require.NoError(t, err, "Could not hash genesis state")
require.NoError(t, service.saveGenesisData(ctx, st))
genesis := blocks.NewGenesisBlock(stateRoot[:])
wsb, err := consensusblocks.NewSignedBeaconBlock(genesis)
require.NoError(t, err)
require.NoError(t, service.cfg.BeaconDB.SaveBlock(ctx, wsb), "Could not save genesis block")
parentRoot, err := genesis.Block.HashTreeRoot()
require.NoError(t, err, "Could not get signing root")
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, st, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveHeadBlockRoot(ctx, parentRoot), "Could not save genesis state")
require.NoError(t, service.cfg.BeaconDB.SaveJustifiedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
require.NoError(t, service.cfg.BeaconDB.SaveFinalizedCheckpoint(ctx, &ethpb.Checkpoint{Root: parentRoot[:]}))
st, err = service.HeadState(ctx)
require.NoError(t, err)
b, err := util.GenerateFullBlock(st, keys, util.DefaultBlockGenConfig(), 33)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err := b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err := service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
roblock, err := consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
b, err = util.GenerateFullBlock(postState, keys, util.DefaultBlockGenConfig(), 34)
require.NoError(t, err)
wsb, err = consensusblocks.NewSignedBeaconBlock(b)
require.NoError(t, err)
root, err = b.Block.HashTreeRoot()
require.NoError(t, err)
preState, err = service.getBlockPreState(ctx, wsb.Block())
require.NoError(t, err)
postState, err = service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, root, wsb, postState))
require.Equal(t, true, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err := service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, true, hasState)
// Set deadlined context when processing the block
cancCtx, canc := context.WithCancel(context.Background())
canc()
roblock, err = consensusblocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
parentRoot = roblock.Block().ParentRoot()
cj := &ethpb.Checkpoint{}
cj.Epoch = 1
cj.Root = parentRoot[:]
require.NoError(t, postState.SetCurrentJustifiedCheckpoint(cj))
require.NoError(t, postState.SetFinalizedCheckpoint(cj))
// Rollback block insertion into db and caches.
require.ErrorContains(t, "context canceled", service.postBlockProcess(&postBlockProcessConfig{cancCtx, roblock, [32]byte{}, postState, false}))
// The block should no longer exist.
require.Equal(t, false, service.cfg.BeaconDB.HasBlock(ctx, root))
hasState, err = service.cfg.StateGen.HasState(ctx, root)
require.NoError(t, err)
require.Equal(t, false, hasState)
}
func fakeCommitments(n int) [][]byte {
f := make([][]byte, n)
for i := range f {
@@ -2504,605 +2367,3 @@ func fakeResult(missing []uint64) map[uint64]struct{} {
}
return r
}
func TestSaveLightClientUpdate(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
s, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Altair)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Altair)
})
})
t.Run("Capella", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Capella)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Capella)
})
})
t.Run("Deneb", func(t *testing.T) {
t.Run("No old update", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientUpdate(cfg)
// Check that the light client update is saved
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("New update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
attestedStateRoot, err := l.AttestedState.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, attestedStateRoot, [32]byte(u.AttestedHeader().Beacon().StateRoot))
require.Equal(t, u.Version(), version.Deneb)
})
t.Run("Old update is better", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
err := s.cfg.BeaconDB.SaveBlock(ctx, l.AttestedBlock)
require.NoError(t, err)
attestedBlockRoot, err := l.AttestedBlock.Block().HashTreeRoot()
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.AttestedState, attestedBlockRoot)
require.NoError(t, err)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, l.FinalizedBlock)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
period := slots.SyncCommitteePeriod(slots.ToEpoch(l.AttestedState.Slot()))
// create and save old update
oldUpdate, err := lightClient.CreateDefaultLightClientUpdate(s.CurrentSlot(), l.AttestedState)
require.NoError(t, err)
scb := make([]byte, 64)
for i := 0; i < 5; i++ {
scb[i] = 0x01
}
oldUpdate.SetSyncAggregate(&ethpb.SyncAggregate{
SyncCommitteeBits: scb,
SyncCommitteeSignature: make([]byte, 96),
})
err = s.cfg.BeaconDB.SaveLightClientUpdate(ctx, period, oldUpdate)
require.NoError(t, err)
s.saveLightClientUpdate(cfg)
u, err := s.cfg.BeaconDB.LightClientUpdate(ctx, period)
require.NoError(t, err)
require.NotNil(t, u)
require.DeepEqual(t, oldUpdate, u)
require.Equal(t, u.Version(), version.Deneb)
})
})
reset()
}
func TestSaveLightClientBootstrap(t *testing.T) {
featCfg := &features.Flags{}
featCfg.EnableLightClient = true
reset := features.InitWithReset(featCfg)
s, tr := minimalTestService(t)
ctx := tr.ctx
t.Run("Altair", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestAltair()
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().AltairForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Altair)
})
t.Run("Capella", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestCapella(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().CapellaForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Capella)
})
t.Run("Deneb", func(t *testing.T) {
l := util.NewTestLightClient(t).SetupTestDeneb(false)
s.genesisTime = time.Unix(time.Now().Unix()-(int64(params.BeaconConfig().DenebForkEpoch)*int64(params.BeaconConfig().SlotsPerEpoch)*int64(params.BeaconConfig().SecondsPerSlot)), 0)
currentBlockRoot, err := l.Block.Block().HashTreeRoot()
require.NoError(t, err)
roblock, err := consensusblocks.NewROBlockWithRoot(l.Block, currentBlockRoot)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveBlock(ctx, roblock)
require.NoError(t, err)
err = s.cfg.BeaconDB.SaveState(ctx, l.State, currentBlockRoot)
require.NoError(t, err)
cfg := &postBlockProcessConfig{
ctx: ctx,
roblock: roblock,
postState: l.State,
isValidPayload: true,
}
s.saveLightClientBootstrap(cfg)
// Check that the light client bootstrap is saved
b, err := s.cfg.BeaconDB.LightClientBootstrap(ctx, currentBlockRoot[:])
require.NoError(t, err)
require.NotNil(t, b)
stateRoot, err := l.State.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, stateRoot, [32]byte(b.Header().Beacon().StateRoot))
require.Equal(t, b.Version(), version.Deneb)
})
reset()
}

View File

@@ -9,8 +9,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/config/params"
payloadattribute "github.com/prysmaticlabs/prysm/v5/consensus-types/payload-attribute"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
@@ -148,15 +148,35 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
return
}
newAttHeadElapsedTime.Observe(float64(time.Since(start).Milliseconds()))
var attributes payloadattribute.Attributer
if s.inRegularSync() {
attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if headState.Version() >= version.EPBS {
bh, err := headState.LatestBlockHash()
if err != nil {
log.WithError(err).Error("could not get latest block hash")
return
}
_, err = s.notifyForkchoiceUpdateEPBS(ctx, [32]byte(bh), attributes)
if err != nil {
log.WithError(err).Error("could not notify forkchoice update")
}
if err := s.saveHead(ctx, newHeadRoot, headBlock, headState); err != nil {
log.WithError(err).Error("could not save head")
return
}
if err := s.pruneAttsFromPool(headBlock); err != nil {
log.WithError(err).Error("could not prune attestations from pool")
}
return
}
fcuArgs := &fcuConfig{
headState: headState,
headRoot: newHeadRoot,
headBlock: headBlock,
proposingSlot: proposingSlot,
}
if s.inRegularSync() {
fcuArgs.attributes = s.getPayloadAttribute(ctx, headState, proposingSlot, newHeadRoot[:])
}
if fcuArgs.attributes != nil && s.shouldOverrideFCU(newHeadRoot, proposingSlot) {
return
}
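Note the sequencing the ePBS branch adds to UpdateHead: engine notification by latest block hash first, then head persistence, then attestation pruning. Only a failed head save aborts the remaining steps; the other failures are logged and the node keeps making progress. A sketch of that ordering with stand-in functions:

package main

import (
	"errors"
	"fmt"
)

// updateHeadEPBS sketches the ordering above; only the head save is treated
// as fatal to the remaining steps.
func updateHeadEPBS(notify, saveHead, pruneAtts func() error) {
	if err := notify(); err != nil {
		fmt.Println("could not notify forkchoice update:", err) // continue
	}
	if err := saveHead(); err != nil {
		fmt.Println("could not save head:", err)
		return // matches the early return above
	}
	if err := pruneAtts(); err != nil {
		fmt.Println("could not prune attestations from pool:", err)
	}
}

func main() {
	updateHeadEPBS(
		func() error { return errors.New("engine offline") },
		func() error { return nil },
		func() error { return nil },
	)
}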
@@ -167,13 +187,7 @@ func (s *Service) UpdateHead(ctx context.Context, proposingSlot primitives.Slot)
// This processes fork choice attestations from the pool to account for validator votes and fork choice.
func (s *Service) processAttestations(ctx context.Context, disparity time.Duration) {
var atts []ethpb.Att
if features.Get().EnableExperimentalAttestationPool {
atts = s.cfg.AttestationCache.ForkchoiceAttestations()
} else {
atts = s.cfg.AttPool.ForkchoiceAttestations()
}
atts := s.cfg.AttPool.ForkchoiceAttestations()
for _, a := range atts {
// Based on the spec, don't process the attestation until the subsequent slot.
// This delays consideration in the fork choice until their slot is in the past.
@@ -184,16 +198,12 @@ func (s *Service) processAttestations(ctx context.Context, disparity time.Durati
}
hasState := s.cfg.BeaconDB.HasStateSummary(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.hasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
hasBlock := s.chainHasBlock(ctx, bytesutil.ToBytes32(a.GetData().BeaconBlockRoot))
if !(hasState && hasBlock) {
continue
}
if features.Get().EnableExperimentalAttestationPool {
if err := s.cfg.AttestationCache.DeleteForkchoiceAttestation(a); err != nil {
log.WithError(err).Error("Could not delete fork choice attestation in pool")
}
} else if err := s.cfg.AttPool.DeleteForkchoiceAttestation(a); err != nil {
if err := s.cfg.AttPool.DeleteForkchoiceAttestation(a); err != nil {
log.WithError(err).Error("Could not delete fork choice attestation in pool")
}
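Beyond swapping hasBlock for chainHasBlock and dropping the experimental-pool branch on this side, the loop keeps the spec rule quoted above: an attestation from slot N is deferred until slot N+1 has begun, and it is only consumed (and deleted from the pool) once both its state summary and block are known. A sketch of the slot gate, omitting the clock-disparity allowance the real check includes:

package main

import "fmt"

// readyForForkchoice mirrors the rule above: defer an attestation from slot
// N until slot N+1 has begun.
func readyForForkchoice(attSlot, currentSlot uint64) bool {
	return currentSlot >= attSlot+1
}

func main() {
	fmt.Println(readyForForkchoice(10, 10)) // false: wait one more slot
	fmt.Println(readyForForkchoice(10, 11)) // true
}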

View File

@@ -10,6 +10,7 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
@@ -42,11 +43,11 @@ func TestVerifyLMDFFGConsistent(t *testing.T) {
f := service.cfg.ForkChoiceStore
fc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, fc, fc)
state, r32, err := prepareForkchoiceState(ctx, 32, [32]byte{'a'}, params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r32))
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, fc, fc)
state, r33, err := prepareForkchoiceState(ctx, 33, [32]byte{'b'}, r32.Root(), params.BeaconConfig().ZeroHash, [32]byte{}, fc, fc)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, r33))
@@ -82,11 +83,13 @@ func TestProcessAttestations_Ok(t *testing.T) {
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, copied, tRoot))
ofc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
ojc := &ethpb.Checkpoint{Root: params.BeaconConfig().ZeroHash[:]}
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, ojc, ofc)
state, blkRoot, err := prepareForkchoiceState(ctx, 0, tRoot, tRoot, params.BeaconConfig().ZeroHash, [32]byte{}, ojc, ofc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
attsToSave := make([]ethpb.Att, len(atts))
copy(attsToSave, atts)
for i, a := range atts {
attsToSave[i] = a
}
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
service.processAttestations(ctx, 0)
require.Equal(t, 0, len(service.cfg.AttPool.ForkchoiceAttestations()))
@@ -116,7 +119,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
roblock, err := consensus_blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
copied, err = service.cfg.StateGen.StateByRoot(ctx, tRoot)
@@ -128,7 +131,9 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
atts, err := util.GenerateAttestations(copied, pks, 1, 1, false)
require.NoError(t, err)
attsToSave := make([]ethpb.Att, len(atts))
copy(attsToSave, atts)
for i, a := range atts {
attsToSave[i] = a
}
require.NoError(t, service.cfg.AttPool.SaveForkchoiceAttestations(attsToSave))
// Verify the target is in forkchoice
require.Equal(t, true, fcs.HasNode(bytesutil.ToBytes32(atts[0].GetData().BeaconBlockRoot)))
@@ -142,7 +147,7 @@ func TestService_ProcessAttestationsAndUpdateHead(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())
@@ -176,7 +181,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
postState, err := service.validateStateTransition(ctx, preState, wsb)
require.NoError(t, err)
require.NoError(t, service.savePostStateInfo(ctx, tRoot, wsb, postState))
roblock, err := blocks.NewROBlockWithRoot(wsb, tRoot)
roblock, err := consensus_blocks.NewROBlockWithRoot(wsb, tRoot)
require.NoError(t, err)
require.NoError(t, service.postBlockProcess(&postBlockProcessConfig{ctx, roblock, [32]byte{}, postState, false}))
require.Equal(t, 2, fcs.NodeCount())
@@ -191,7 +196,7 @@ func TestService_UpdateHead_NoAtts(t *testing.T) {
r, err := b.Block.HashTreeRoot()
require.NoError(t, err)
util.SaveBlock(t, ctx, service.cfg.BeaconDB, b)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, ojc, ojc)
state, blkRoot, err := prepareForkchoiceState(ctx, 2, r, service.originBlockRoot, [32]byte{'b'}, [32]byte{}, ojc, ojc)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, state, blkRoot))
require.Equal(t, 3, fcs.NodeCount())

View File
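Editor's note: the test hunks above all thread one extra `[32]byte{}` argument through `prepareForkchoiceState`. Its role is not shown in this diff; given the ePBS forkchoice changes it is presumably a payload-hash field (likely the parent payload hash) now required on every node. An annotated call, as a sketch under that assumption:

```go
// Sketch only: argument roles are assumptions inferred from the test updates.
st, root, err := prepareForkchoiceState(
	ctx,
	32,                             // slot
	[32]byte{'a'},                  // block root
	params.BeaconConfig().ZeroHash, // parent root
	params.BeaconConfig().ZeroHash, // payload block hash
	[32]byte{},                     // new ePBS argument, likely the parent payload hash
	fc,                             // justified checkpoint
	fc,                             // finalized checkpoint
)
```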

@@ -4,13 +4,12 @@ import (
"context"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
// sendNewBlobEvent sends a message on the BlobNotifier channel indicating
// that the blob for the block root `root` is ready in the database
func (s *Service) sendNewBlobEvent(root [32]byte, index uint64, slot primitives.Slot) {
s.blobNotifiers.notifyIndex(root, index, slot)
func (s *Service) sendNewBlobEvent(root [32]byte, index uint64) {
s.blobNotifiers.notifyIndex(root, index)
}
// ReceiveBlob saves the blob to database and sends the new event
@@ -19,6 +18,6 @@ func (s *Service) ReceiveBlob(ctx context.Context, b blocks.VerifiedROBlob) erro
return err
}
s.sendNewBlobEvent(b.BlockRoot(), b.Index, b.Slot())
s.sendNewBlobEvent(b.BlockRoot(), b.Index)
return nil
}

View File
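Editor's note: the hunk above drops the `slot` parameter from the blob notification path, since channel capacity no longer depends on a per-slot blob limit but on the fixed `fieldparams.MaxBlobsPerBlock`. A minimal consumer-side sketch of how such a notifier channel might be drained (locking and context handling omitted; `waitForBlobs` is illustrative, not part of the diff):

```go
// waitForBlobs drains a per-root notifier channel until `expected` distinct
// indices have been announced. Sketch only; the real service also handles
// context cancellation and channel cleanup.
func waitForBlobs(c chan uint64, expected int) map[uint64]bool {
	seen := make(map[uint64]bool)
	for len(seen) < expected {
		seen[<-c] = true
	}
	return seen
}
```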

@@ -17,6 +17,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/features"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -39,18 +41,30 @@ var epochsSinceFinalityExpandCache = primitives.Epoch(4)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte, avs das.AvailabilityStore) error
ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROSignedExecutionPayloadEnvelope, avs das.AvailabilityStore) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
}
// PayloadAttestationReceiver defines methods of the chain service for receiving
// and processing new payload attestations and payload attestation messages
type PayloadAttestationReceiver interface {
ReceivePayloadAttestationMessage(ctx context.Context, a *ethpb.PayloadAttestationMessage) error
}
// BlobReceiver interface defines the methods of chain service for receiving new
// blobs
type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
}
// ExecutionPayloadReceiver interface defines the methods of chain service for receiving `ROExecutionPayloadEnvelope`.
type ExecutionPayloadReceiver interface {
ReceiveExecutionPayloadEnvelope(ctx context.Context, envelope interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
type SlashingReceiver interface {
ReceiveAttesterSlashing(ctx context.Context, slashing ethpb.AttSlashing)
@@ -83,18 +97,34 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
}
currentCheckpoints := s.saveCurrentCheckpoints(preState)
roblock, err := blocks.NewROBlockWithRoot(blockCopy, blockRoot)
roblock, err := consensus_blocks.NewROBlockWithRoot(blockCopy, blockRoot)
if err != nil {
return err
}
postState, isValidPayload, err := s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err := s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
var postState state.BeaconState
var isValidPayload bool
var daWaitedTime time.Duration
if blockCopy.Version() >= version.EPBS {
postState, err = s.validateStateTransition(ctx, preState, roblock)
if err != nil {
return errors.Wrap(err, "could not validate state transition")
}
optimistic, err := s.IsOptimisticForRoot(ctx, roblock.Block().ParentRoot())
if err != nil {
return errors.Wrap(err, "could not check if parent is optimistic")
}
// If the parent is not optimistic, the payload status is already known,
// so this block can be marked as not optimistic.
isValidPayload = !optimistic
} else {
postState, isValidPayload, err = s.validateExecutionAndConsensus(ctx, preState, roblock)
if err != nil {
return err
}
daWaitedTime, err = s.handleDA(ctx, blockCopy, blockRoot, avs)
if err != nil {
return err
}
}
// Defragment the state before continuing block processing.
s.defragmentState(postState)
@@ -188,7 +218,7 @@ func (s *Service) updateCheckpoints(
func (s *Service) validateExecutionAndConsensus(
ctx context.Context,
preState state.BeaconState,
block blocks.ROBlock,
block consensusblocks.ROBlock,
) (state.BeaconState, bool, error) {
preStateVersion, preStateHeader, err := getStateVersionAndPayload(preState)
if err != nil {
@@ -558,7 +588,7 @@ func (s *Service) sendBlockAttestationsToSlasher(signed interfaces.ReadOnlySigne
}
// validateExecutionOnBlock notifies the engine of the incoming block execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, block blocks.ROBlock) (bool, error) {
func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header interfaces.ExecutionData, block consensusblocks.ROBlock) (bool, error) {
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, block)
if err != nil {
s.cfg.ForkChoiceStore.Lock()

View File
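Editor's note: the receiver hunk above splits the chain service's intake surface into narrow interfaces (`BlockReceiver`, `PayloadAttestationReceiver`, `BlobReceiver`, `ExecutionPayloadReceiver`). A sketch of the consumer side with simplified stand-in types, showing why the split helps: a handler can depend on a single method and be tested with any stub.

```go
package sketch

import "context"

// Stand-in types; the real code uses interfaces.ROSignedExecutionPayloadEnvelope
// and das.AvailabilityStore.
type Envelope struct{ BeaconBlockRoot [32]byte }
type AvailabilityStore interface{}

// ExecutionPayloadReceiver mirrors the narrow interface introduced above.
type ExecutionPayloadReceiver interface {
	ReceiveExecutionPayloadEnvelope(ctx context.Context, e *Envelope, avs AvailabilityStore) error
}

// handleEnvelopeGossip depends only on the one method it needs, so tests can
// pass any stub instead of a full blockchain.Service.
func handleEnvelopeGossip(ctx context.Context, r ExecutionPayloadReceiver, e *Envelope) error {
	return r.ReceiveExecutionPayloadEnvelope(ctx, e, nil)
}
```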

@@ -0,0 +1,241 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"time"
"github.com/ethereum/go-ethereum/common"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/sirupsen/logrus"
"go.opencensus.io/trace"
"golang.org/x/sync/errgroup"
)
// ReceiveExecutionPayloadEnvelope defines the operations (minus pubsub)
// that are performed on a received execution payload envelope. The operations consist of:
//  1. Validate the payload and apply the state transition.
//  2. Apply fork choice to the processed payload.
//  3. Save the latest head info.
func (s *Service) ReceiveExecutionPayloadEnvelope(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
receivedTime := time.Now()
envelope, err := signed.Envelope()
if err != nil {
return err
}
root := envelope.BeaconBlockRoot()
s.payloadBeingSynced.set(envelope)
defer s.payloadBeingSynced.unset(root)
preState, err := s.getPayloadEnvelopePrestate(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not get prestate")
}
eg, _ := errgroup.WithContext(ctx)
eg.Go(func() error {
if err := epbs.ValidatePayloadStateTransition(ctx, preState, envelope); err != nil {
return errors.Wrap(err, "failed to validate consensus state transition function")
}
return nil
})
var isValidPayload bool
eg.Go(func() error {
var err error
isValidPayload, err = s.validateExecutionOnEnvelope(ctx, envelope)
if err != nil {
return errors.Wrap(err, "could not notify the engine of the new payload")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
daStartTime := time.Now()
// TODO: Add DA check
daWaitedTime := time.Since(daStartTime)
dataAvailWaitedTime.Observe(float64(daWaitedTime.Milliseconds()))
if err := s.savePostPayload(ctx, signed, preState); err != nil {
return err
}
if err := s.insertPayloadEnvelope(envelope); err != nil {
return errors.Wrap(err, "could not insert payload to forkchoice")
}
if isValidPayload {
s.ForkChoicer().Lock()
if err := s.ForkChoicer().SetOptimisticToValid(ctx, root); err != nil {
s.ForkChoicer().Unlock()
return errors.Wrap(err, "could not set optimistic payload to valid")
}
s.ForkChoicer().Unlock()
}
headRoot, err := s.HeadRoot(ctx)
if err != nil {
log.WithError(err).Error("could not get headroot to compute attributes")
return nil
}
if bytes.Equal(headRoot, root[:]) {
attr := s.getPayloadAttribute(ctx, preState, envelope.Slot()+1, headRoot)
execution, err := envelope.Execution()
if err != nil {
log.WithError(err).Error("could not get execution data")
return nil
}
blockHash := [32]byte(execution.BlockHash())
payloadID, err := s.notifyForkchoiceUpdateEPBS(ctx, blockHash, attr)
if err != nil {
if IsInvalidBlock(err) {
// TODO handle the lvh here
return err
}
return nil
}
if attr != nil && !attr.IsEmpty() && payloadID != nil {
var pid [8]byte
copy(pid[:], payloadID[:])
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(headRoot)),
"headSlot": envelope.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.PayloadIDCache.Set(envelope.Slot()+1, root, pid)
}
headBlk, err := s.HeadBlock(ctx)
if err != nil {
log.WithError(err).Error("could not get head block")
}
if err := s.saveHead(ctx, root, headBlk, preState); err != nil {
log.WithError(err).Error("could not save new head")
}
// update the NSC with the hash for the full block
if err := transition.UpdateNextSlotCache(ctx, blockHash[:], preState); err != nil {
log.WithError(err).Error("could not update next slot cache with payload")
}
}
timeWithoutDaWait := time.Since(receivedTime) - daWaitedTime
executionEngineProcessingTime.Observe(float64(timeWithoutDaWait.Milliseconds()))
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", root),
"stateRoot": fmt.Sprintf("%#x", envelope.StateRoot()),
"builder": envelope.BuilderIndex(),
}).Info("Successfully processed execution payload envelope")
return nil
}
// notifyNewEnvelope signals the execution engine of a new payload envelope.
// It returns true if the EL has returned VALID for the payload
func (s *Service) notifyNewEnvelope(ctx context.Context, envelope interfaces.ROExecutionPayloadEnvelope) (bool, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.notifyNewEnvelope")
defer span.End()
payload, err := envelope.Execution()
if err != nil {
return false, errors.Wrap(err, "could not get execution payload")
}
versionedHashes := envelope.VersionedHashes()
root := envelope.BeaconBlockRoot()
parentRoot, err := s.ParentRoot(root)
if err != nil {
return false, errors.Wrap(err, "could not get parent block root")
}
pr := common.Hash(parentRoot)
requests := envelope.ExecutionRequests()
lastValidHash, err := s.cfg.ExecutionEngineCaller.NewPayload(ctx, payload, versionedHashes, &pr, requests)
switch {
case err == nil:
newPayloadValidNodeCount.Inc()
return true, nil
case errors.Is(err, execution.ErrAcceptedSyncingPayloadStatus):
newPayloadOptimisticNodeCount.Inc()
log.WithFields(logrus.Fields{
"payloadBlockHash": fmt.Sprintf("%#x", bytesutil.Trunc(payload.BlockHash())),
}).Info("Called new payload with optimistic block")
return false, nil
case errors.Is(err, execution.ErrInvalidPayloadStatus):
lvh := bytesutil.ToBytes32(lastValidHash)
return false, invalidBlock{
error: ErrInvalidPayload,
lastValidHash: lvh,
}
default:
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
}
// validateExecutionOnEnvelope notifies the engine of the incoming execution payload and returns true if the payload is valid
func (s *Service) validateExecutionOnEnvelope(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (bool, error) {
isValidPayload, err := s.notifyNewEnvelope(ctx, e)
if err == nil {
return isValidPayload, nil
}
blockRoot := e.BeaconBlockRoot()
parentRoot, rootErr := s.ParentRoot(blockRoot)
if rootErr != nil {
return false, errors.Wrap(rootErr, "could not get parent block root")
}
s.cfg.ForkChoiceStore.Lock()
err = s.handleInvalidExecutionError(ctx, err, blockRoot, parentRoot)
s.cfg.ForkChoiceStore.Unlock()
return false, err
}
func (s *Service) getPayloadEnvelopePrestate(ctx context.Context, e interfaces.ROExecutionPayloadEnvelope) (state.BeaconState, error) {
ctx, span := trace.StartSpan(ctx, "blockChain.getPayloadEnvelopePreState")
defer span.End()
// Verify incoming payload has a valid pre state.
root := e.BeaconBlockRoot()
// Verify the referred block is known to forkchoice
if !s.InForkchoice(root) {
return nil, errors.New("Cannot import execution payload envelope for unknown block")
}
if err := s.verifyBlkPreState(ctx, root); err != nil {
return nil, errors.Wrap(err, "could not verify payload prestate")
}
preState, err := s.cfg.StateGen.StateByRoot(ctx, root)
if err != nil {
return nil, errors.Wrap(err, "could not get pre state")
}
if preState == nil || preState.IsNil() {
return nil, errors.Wrap(err, "nil pre state")
}
return preState, nil
}
func (s *Service) savePostPayload(ctx context.Context, signed interfaces.ROSignedExecutionPayloadEnvelope, st state.BeaconState) error {
if err := s.cfg.BeaconDB.SaveBlindPayloadEnvelope(ctx, signed); err != nil {
return err
}
envelope, err := signed.Envelope()
if err != nil {
return err
}
execution, err := envelope.Execution()
if err != nil {
return err
}
r := envelope.BeaconBlockRoot()
if err := s.cfg.StateGen.SaveState(ctx, [32]byte(execution.BlockHash()), st); err != nil {
log.Warnf("Rolling back insertion of block with root %#x", r)
if err := s.cfg.BeaconDB.DeleteBlock(ctx, r); err != nil {
log.WithError(err).Errorf("Could not delete block with block root %#x", r)
}
return errors.Wrap(err, "could not save state")
}
return nil
}

View File
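Editor's note: the core pattern in `ReceiveExecutionPayloadEnvelope` above is running the consensus state transition and the engine's `newPayload` call concurrently with an `errgroup`, then acting on both results (note the original discards the derived context with `eg, _ :=`). A self-contained sketch of just that pattern, with stand-in validators:

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// validateConsensus and validateExecution are stand-ins for
// epbs.ValidatePayloadStateTransition and validateExecutionOnEnvelope.
func validateConsensus(context.Context) error         { return nil }
func validateExecution(context.Context) (bool, error) { return true, nil }

func validateEnvelope(ctx context.Context) (bool, error) {
	eg, ctx := errgroup.WithContext(ctx)
	eg.Go(func() error { return validateConsensus(ctx) })

	var isValidPayload bool
	eg.Go(func() error {
		var err error
		// Safe to read after eg.Wait, which establishes a happens-before.
		isValidPayload, err = validateExecution(ctx)
		return err
	})
	if err := eg.Wait(); err != nil {
		return false, err
	}
	return isValidPayload, nil
}

func main() { fmt.Println(validateEnvelope(context.Background())) }
```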

@@ -0,0 +1,100 @@
package blockchain
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
mockExecution "github.com/prysmaticlabs/prysm/v5/beacon-chain/execution/testing"
forkchoicetypes "github.com/prysmaticlabs/prysm/v5/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func Test_getPayloadEnvelopePrestate(t *testing.T) {
service, tr := minimalTestService(t)
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
_, err = service.getPayloadEnvelopePrestate(ctx, e)
require.NoError(t, err)
}
func Test_notifyNewEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.notifyNewEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_validateExecutionOnEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
p := random.ExecutionPayloadEnvelope(t)
p.BeaconBlockRoot = service.originBlockRoot[:]
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
isValidPayload, err := service.validateExecutionOnEnvelope(ctx, e)
require.NoError(t, err)
require.Equal(t, true, isValidPayload)
}
func Test_ReceiveExecutionPayloadEnvelope(t *testing.T) {
service, tr := minimalTestService(t, WithPayloadIDCache(cache.NewPayloadIDCache()))
ctx, fcs := tr.ctx, tr.fcs
gs, _ := util.DeterministicGenesisStateEpbs(t, 32)
require.NoError(t, service.saveGenesisData(ctx, gs))
require.NoError(t, fcs.UpdateFinalizedCheckpoint(&forkchoicetypes.Checkpoint{Root: service.originBlockRoot}))
post := gs.Copy()
p := &enginev1.ExecutionPayloadEnvelope{
Payload: &enginev1.ExecutionPayloadElectra{
ParentHash: make([]byte, 32),
BlockHash: make([]byte, 32),
},
BeaconBlockRoot: service.originBlockRoot[:],
BlobKzgCommitments: make([][]byte, 0),
StateRoot: make([]byte, 32),
ExecutionRequests: &enginev1.ExecutionRequests{},
}
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
das := &das.MockAvailabilityStore{}
blockHeader := post.LatestBlockHeader()
prevStateRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
blockHeader.StateRoot = prevStateRoot[:]
require.NoError(t, post.SetLatestBlockHeader(blockHeader))
stRoot, err := post.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = stRoot[:]
engine := &mockExecution.EngineClient{}
service.cfg.ExecutionEngineCaller = engine
require.NoError(t, service.ReceiveExecutionPayloadEnvelope(ctx, e, avs))
}

View File

@@ -0,0 +1,33 @@
package blockchain
import (
"context"
"slices"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
func (s *Service) ReceivePayloadAttestationMessage(ctx context.Context, a *eth.PayloadAttestationMessage) error {
if err := helpers.ValidateNilPayloadAttestationMessage(a); err != nil {
return err
}
root := [32]byte(a.Data.BeaconBlockRoot)
st, err := s.HeadStateReadOnly(ctx)
if err != nil {
return err
}
ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, st, a.Data.Slot)
if err != nil {
return err
}
idx := slices.Index(ptc, a.ValidatorIndex)
if idx == -1 {
return errInvalidValidatorIndex
}
if s.cfg.PayloadAttestationCache.Seen(root, uint64(primitives.ValidatorIndex(idx))) {
return nil
}
return s.cfg.PayloadAttestationCache.Add(a, uint64(idx))
}

View File
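Editor's note: one subtlety above is that the index recorded in the cache is the validator's position inside the payload timeliness committee (`slices.Index(ptc, a.ValidatorIndex)`), not the global validator index. A runnable sketch:

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	// ptc stands in for the output of helpers.GetPayloadTimelinessCommittee:
	// the committee listed as global validator indices.
	ptc := []uint64{512, 77, 3041}
	validator := uint64(3041)

	pos := slices.Index(ptc, validator)
	if pos == -1 {
		fmt.Println("validator not in the PTC: reject the message")
		return
	}
	// The cache keys votes by this committee position, not by 3041.
	fmt.Println("record vote at committee position", pos) // prints 2
}
```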

@@ -33,8 +33,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v5/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
consensus_blocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -47,24 +49,26 @@ import (
// Service represents a service that handles the internal
// logic of managing the full PoS beacon chain.
type Service struct {
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
cfg *config
ctx context.Context
cancel context.CancelFunc
genesisTime time.Time
head *head
headLock sync.RWMutex
originBlockRoot [32]byte // genesis root, or weak subjectivity checkpoint root, depending on how the node is initialized
boundaryRoots [][32]byte
checkpointStateCache *cache.CheckpointStateCache
initSyncBlocks map[[32]byte]interfaces.ReadOnlySignedBeaconBlock
initSyncBlocksLock sync.RWMutex
wsVerifier *WeakSubjectivityVerifier
clockSetter startup.ClockSetter
clockWaiter startup.ClockWaiter
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
payloadBeingSynced *currentlySyncingPayload
blobStorage *filesystem.BlobStorage
lastPublishedLightClientEpoch primitives.Epoch
}
// config options for the service.
@@ -73,9 +77,10 @@ type config struct {
ChainStartFetcher execution.ChainStartFetcher
BeaconDB db.HeadAccessDatabase
DepositCache cache.DepositCache
PayloadAttestationCache *cache.PayloadAttestationCache
PayloadEnvelopeCache *sync.Map
PayloadIDCache *cache.PayloadIDCache
TrackedValidatorsCache *cache.TrackedValidatorsCache
AttestationCache *cache.AttestationCache
AttPool attestations.Pool
ExitPool voluntaryexits.PoolManager
SlashingPool slashings.PoolManager
@@ -105,22 +110,18 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][]bool
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitives.Slot) {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
if idx >= uint64(maxBlobsPerBlock) {
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
bn.Lock()
seen := bn.seenIndex[root]
if seen == nil {
seen = make([]bool, maxBlobsPerBlock)
}
if seen[idx] {
bn.Unlock()
return
@@ -131,7 +132,7 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
@@ -140,13 +141,12 @@ func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64, slot primitive
c <- idx
}
func (bn *blobNotifierMap) forRoot(root [32]byte, slot primitives.Slot) chan uint64 {
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlock(slot)
func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
bn.Lock()
defer bn.Unlock()
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, maxBlobsPerBlock)
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
return c
@@ -172,7 +172,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
}
srv := &Service{
ctx: ctx,
@@ -183,6 +183,7 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
blobNotifiers: bn,
cfg: &config{},
blockBeingSynced: &currentlySyncingBlock{roots: make(map[[32]byte]struct{})},
payloadBeingSynced: &currentlySyncingPayload{roots: make(map[[32]byte]primitives.PTCStatus)},
}
for _, opt := range opts {
if err := opt(srv); err != nil {
@@ -311,7 +312,7 @@ func (s *Service) StartFromSavedState(saved state.BeaconState) error {
if err != nil {
return errors.Wrap(err, "could not get finalized checkpoint block")
}
roblock, err := blocks.NewROBlockWithRoot(finalizedBlock, fRoot)
roblock, err := consensus_blocks.NewROBlockWithRoot(finalizedBlock, fRoot)
if err != nil {
return err
}
@@ -527,7 +528,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
gb, err := blocks.NewROBlockWithRoot(genesisBlk, genesisBlkRoot)
gb, err := consensus_blocks.NewROBlockWithRoot(genesisBlk, genesisBlkRoot)
if err != nil {
return err
}
@@ -558,7 +559,7 @@ func (s *Service) saveGenesisData(ctx context.Context, genesisState state.Beacon
// 2.) Check DB.
// Checking 1.) is ten times faster than checking 2.)
// this function requires a lock in forkchoice
func (s *Service) hasBlock(ctx context.Context, root [32]byte) bool {
func (s *Service) chainHasBlock(ctx context.Context, root [32]byte) bool {
if s.cfg.ForkChoiceStore.HasNode(root) {
return true
}

View File
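Editor's note: `seenIndex` changes from `map[[32]byte][]bool` to `map[[32]byte][fieldparams.MaxBlobsPerBlock]bool`. Because Go arrays are value types, a map lookup now yields a copy that must be stored back after mutation (the write-back falls outside the hunk shown above). A minimal demonstration of that semantics:

```go
package main

import "fmt"

const maxBlobsPerBlock = 6 // stand-in for fieldparams.MaxBlobsPerBlock

func main() {
	seenIndex := make(map[[32]byte][maxBlobsPerBlock]bool)
	root := [32]byte{'r'}

	// The lookup returns a zero-value array copy even for an absent key,
	// so the old nil check and make() are no longer needed.
	seen := seenIndex[root]
	seen[1] = true

	fmt.Println(seenIndex[root][1]) // false: the copy was never stored back
	seenIndex[root] = seen
	fmt.Println(seenIndex[root][1]) // true
}
```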

@@ -386,8 +386,8 @@ func TestHasBlock_ForkChoiceAndDB_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
require.NoError(t, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, roblock))
assert.Equal(t, false, s.hasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.hasBlock(ctx, r), "Should have block")
assert.Equal(t, false, s.chainHasBlock(ctx, [32]byte{}), "Should not have block")
assert.Equal(t, true, s.chainHasBlock(ctx, r), "Should have block")
}
func TestServiceStop_SaveCachedBlocks(t *testing.T) {
@@ -587,7 +587,7 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][]bool),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
notifiers: make(map[[32]byte]chan uint64),
}
@@ -596,7 +596,7 @@ func TestNotifyIndex(t *testing.T) {
copy(root[:], "exampleRoot")
// Test notifying a new index
bn.notifyIndex(root, 1, 1)
bn.notifyIndex(root, 1)
if !bn.seenIndex[root][1] {
t.Errorf("Index was not marked as seen")
}
@@ -607,13 +607,13 @@ func TestNotifyIndex(t *testing.T) {
}
// Test notifying an already seen index
bn.notifyIndex(root, 1, 1)
bn.notifyIndex(root, 1)
if len(bn.notifiers[root]) > 1 {
t.Errorf("Notifier channel should not receive multiple messages for the same index")
}
// Test notifying a new index again
bn.notifyIndex(root, 2, 1)
bn.notifyIndex(root, 2)
if !bn.seenIndex[root][2] {
t.Errorf("Index was not marked as seen")
}

View File

@@ -3,7 +3,10 @@ load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
testonly = True,
srcs = ["mock.go"],
srcs = [
"mock.go",
"mock_epbs.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing",
visibility = [
"//beacon-chain:__subpackages__",

View File

@@ -37,44 +37,50 @@ var ErrNilState = errors.New("nil state")
// ChainService defines the mock interface for testing
type ChainService struct {
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
NotFinalized bool
Optimistic bool
ValidAttestation bool
ValidatorsRoot [32]byte
PublicKey [fieldparams.BLSPubkeyLength]byte
FinalizedCheckPoint *ethpb.Checkpoint
CurrentJustifiedCheckPoint *ethpb.Checkpoint
PreviousJustifiedCheckPoint *ethpb.Checkpoint
Slot *primitives.Slot // Pointer because 0 is a useful value, so checking against it can be incorrect.
Balance *precompute.Balance
CanonicalRoots map[[32]byte]bool
Fork *ethpb.Fork
ETH1Data *ethpb.Eth1Data
InitSyncBlockRoots map[[32]byte]bool
DB db.Database
State state.BeaconState
Block interfaces.ReadOnlySignedBeaconBlock
ExecutionPayloadEnvelope interfaces.ROExecutionPayloadEnvelope
VerifyBlkDescendantErr error
stateNotifier statefeed.Notifier
BlocksReceived []interfaces.ReadOnlySignedBeaconBlock
SyncCommitteeIndices []primitives.CommitteeIndex
blockNotifier blockfeed.Notifier
opNotifier opfeed.Notifier
Root []byte
SyncCommitteeDomain []byte
SyncSelectionProofDomain []byte
SyncContributionProofDomain []byte
SyncCommitteePubkeys [][]byte
Genesis time.Time
ForkChoiceStore forkchoice.ForkChoicer
ReceiveBlockMockErr error
ReceiveEnvelopeMockErr error
OptimisticCheckRootReceived [32]byte
FinalizedRoots map[[32]byte]bool
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
TargetRoot [32]byte
HighestReceivedSlot primitives.Slot
HighestReceivedRoot [32]byte
PayloadStatus primitives.PTCStatus
ReceivePayloadAttestationMessageErr error
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -641,12 +647,12 @@ func (s *ChainService) ReceivedBlocksLastEpoch() (uint64, error) {
return 0, nil
}
// HighestReceivedBlockSlot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlot() primitives.Slot {
// HighestReceivedBlockSlotRoot mocks the same method in the chain service
func (s *ChainService) HighestReceivedBlockSlotRoot() (primitives.Slot, [32]byte) {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.HighestReceivedBlockSlot()
return s.ForkChoiceStore.HighestReceivedBlockSlotRoot()
}
return 0
return s.HighestReceivedSlot, s.HighestReceivedRoot
}
// InsertNode mocks the same method in the chain service
@@ -706,3 +712,17 @@ func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) e
func (c *ChainService) TargetRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil
}
// HashInForkchoice mocks the same method in the chain service
func (c *ChainService) HashInForkchoice([32]byte) bool {
return false
}
// ReceivePayloadAttestationMessage mocks the same method in the chain service
func (c *ChainService) ReceivePayloadAttestationMessage(_ context.Context, _ *ethpb.PayloadAttestationMessage) error {
return c.ReceivePayloadAttestationMessageErr
}
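// GetPTCVote mocks the same method in the chain service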
func (c *ChainService) GetPTCVote(root [32]byte) primitives.PTCStatus {
return c.PayloadStatus
}

View File
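Editor's note: the new mock fields (`ReceiveEnvelopeMockErr`, `PayloadStatus`, `ReceivePayloadAttestationMessageErr`, ...) let sync tests stub ePBS behavior without a real chain service. A sketch of typical wiring; the constant name `PAYLOAD_ABSENT` is an assumption based on the `PTCStatus` family referenced elsewhere in this diff:

```go
package sync_test

import (
	"context"
	"errors"
	"testing"

	mock "github.com/prysmaticlabs/prysm/v5/beacon-chain/blockchain/testing"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestHandlerSeesStubbedPTCState(t *testing.T) {
	chain := &mock.ChainService{
		ReceivePayloadAttestationMessageErr: errors.New("bad message"),
		PayloadStatus:                       primitives.PAYLOAD_ABSENT, // assumed constant
	}
	err := chain.ReceivePayloadAttestationMessage(context.Background(), nil)
	require.ErrorContains(t, "bad message", err)
	require.Equal(t, primitives.PAYLOAD_ABSENT, chain.GetPTCVote([32]byte{}))
}
```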

@@ -0,0 +1,25 @@
package testing
import (
"context"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
)
// ReceiveExecutionPayloadEnvelope mocks the method in chain service.
func (s *ChainService) ReceiveExecutionPayloadEnvelope(ctx context.Context, env interfaces.ROExecutionPayloadEnvelope, _ das.AvailabilityStore) error {
if s.ReceiveEnvelopeMockErr != nil {
return s.ReceiveEnvelopeMockErr
}
if s.State == nil {
return ErrNilState
}
if s.State.Slot() == env.Slot() {
if err := s.State.SetLatestFullSlot(s.State.Slot()); err != nil {
return err
}
}
s.ExecutionPayloadEnvelope = env
return nil
}

View File

@@ -5,7 +5,6 @@ go_library(
srcs = [
"active_balance.go",
"active_balance_disabled.go", # keep
"attestation.go",
"attestation_data.go",
"balance_cache_key.go",
"checkpoint_state.go",
@@ -16,11 +15,13 @@ go_library(
"doc.go",
"error.go",
"interfaces.go",
"payload_attestation.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
"proposer_indices_type.go",
"registration.go",
"signed_execution_header.go",
"skip_slot_cache.go",
"subnet_ids.go",
"sync_committee.go",
@@ -37,7 +38,6 @@ go_library(
],
deps = [
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/operations/attestations/attmap:go_default_library",
"//beacon-chain/state:go_default_library",
"//cache/lru:go_default_library",
"//config/fieldparams:go_default_library",
@@ -50,8 +50,8 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
@@ -70,15 +70,16 @@ go_test(
srcs = [
"active_balance_test.go",
"attestation_data_test.go",
"attestation_test.go",
"cache_test.go",
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"payload_attestation_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",
"registration_test.go",
"signed_execution_header_test.go",
"skip_slot_cache_test.go",
"subnet_ids_test.go",
"sync_committee_head_state_test.go",
@@ -93,17 +94,16 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls/blst:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/attestation:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_hashicorp_golang_lru//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_stretchr_testify//require:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],

View File

@@ -1,275 +0,0 @@
package cache
import (
"sync"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/operations/attestations/attmap"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
log "github.com/sirupsen/logrus"
)
type attGroup struct {
slot primitives.Slot
atts []ethpb.Att
}
// AttestationCache holds a map of attGroup items that group together all attestations for a single slot.
// When we add an attestation to the cache by calling Add, we either create a new group with this attestation
// (if this is the first attestation for some slot) or two things can happen:
//
// - If the attestation is unaggregated, we add its attestation bit to attestation bits of the first
// attestation in the group.
// - If the attestation is aggregated, we append it to the group. There should be no redundancy
// in the list because we ignore redundant aggregates in gossip.
//
// The first bullet point above means that we keep one aggregate attestation to which we keep appending bits
// as new single-bit attestations arrive. This means that at any point during seconds 0-4 of a slot
// we will have only one attestation for this slot in the cache.
//
// NOTE: This design in principle can result in worse aggregates since we lose the ability to aggregate some
// single bit attestations in case of overlaps with incoming aggregates.
//
// The cache also keeps forkchoice attestations in a separate struct. These attestations are used for
// forkchoice-related operations.
type AttestationCache struct {
atts map[attestation.Id]*attGroup
sync.RWMutex
forkchoiceAtts *attmap.Attestations
}
// NewAttestationCache creates a new cache instance.
func NewAttestationCache() *AttestationCache {
return &AttestationCache{
atts: make(map[attestation.Id]*attGroup),
forkchoiceAtts: attmap.New(),
}
}
// Add does one of two things:
//
// - For unaggregated attestations, it adds the attestation bit to attestation bits of the running aggregate,
// which is the first aggregate for the slot.
// - For aggregated attestations, it appends the attestation to the existing list of attestations for the slot.
func (c *AttestationCache) Add(att ethpb.Att) error {
if att.IsNil() {
log.Debug("Attempted to add a nil attestation to the attestation cache")
return nil
}
if len(att.GetAggregationBits().BitIndices()) == 0 {
log.Debug("Attempted to add an attestation with 0 bits set to the attestation cache")
return nil
}
c.Lock()
defer c.Unlock()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrapf(err, "could not create attestation ID")
}
group := c.atts[id]
if group == nil {
group = &attGroup{
slot: att.GetData().Slot,
atts: []ethpb.Att{att},
}
c.atts[id] = group
return nil
}
if att.IsAggregated() {
group.atts = append(group.atts, att.Clone())
return nil
}
// This should never happen because we return early for a new group.
if len(group.atts) == 0 {
log.Error("Attestation group contains no attestations, skipping insertion")
return nil
}
a := group.atts[0]
// Indexing is safe because we have guarded against 0 bits set.
bit := att.GetAggregationBits().BitIndices()[0]
if a.GetAggregationBits().BitAt(uint64(bit)) {
return nil
}
sig, err := aggregateSig(a, att)
if err != nil {
return errors.Wrapf(err, "could not aggregate signatures")
}
a.GetAggregationBits().SetBitAt(uint64(bit), true)
a.SetSignature(sig)
return nil
}
// GetAll returns all attestations in the cache, excluding forkchoice attestations.
func (c *AttestationCache) GetAll() []ethpb.Att {
c.RLock()
defer c.RUnlock()
var result []ethpb.Att
for _, group := range c.atts {
result = append(result, group.atts...)
}
return result
}
// Count returns the number of all attestations in the cache, excluding forkchoice attestations.
func (c *AttestationCache) Count() int {
c.RLock()
defer c.RUnlock()
count := 0
for _, group := range c.atts {
count += len(group.atts)
}
return count
}
// DeleteCovered removes all attestations whose attestation bits are a proper subset of the passed-in attestation.
func (c *AttestationCache) DeleteCovered(att ethpb.Att) error {
if att.IsNil() {
return nil
}
c.Lock()
defer c.Unlock()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return errors.Wrapf(err, "could not create attestation ID")
}
group := c.atts[id]
if group == nil {
return nil
}
idx := 0
for _, a := range group.atts {
if covered, err := att.GetAggregationBits().Contains(a.GetAggregationBits()); err != nil {
return err
} else if !covered {
group.atts[idx] = a
idx++
}
}
group.atts = group.atts[:idx]
if len(group.atts) == 0 {
delete(c.atts, id)
}
return nil
}
// PruneBefore removes all attestations whose slot is earlier than the passed-in slot.
func (c *AttestationCache) PruneBefore(slot primitives.Slot) uint64 {
c.Lock()
defer c.Unlock()
var pruneCount int
for id, group := range c.atts {
if group.slot < slot {
pruneCount += len(group.atts)
delete(c.atts, id)
}
}
return uint64(pruneCount)
}
// AggregateIsRedundant checks whether all attestation bits of the passed-in aggregate
// are already included by any aggregate in the cache.
func (c *AttestationCache) AggregateIsRedundant(att ethpb.Att) (bool, error) {
if att.IsNil() {
return true, nil
}
c.RLock()
defer c.RUnlock()
id, err := attestation.NewId(att, attestation.Data)
if err != nil {
return true, errors.Wrapf(err, "could not create attestation ID")
}
group := c.atts[id]
if group == nil {
return false, nil
}
for _, a := range group.atts {
if redundant, err := a.GetAggregationBits().Contains(att.GetAggregationBits()); err != nil {
return true, err
} else if redundant {
return true, nil
}
}
return false, nil
}
// SaveForkchoiceAttestations saves forkchoice attestations.
func (c *AttestationCache) SaveForkchoiceAttestations(att []ethpb.Att) error {
return c.forkchoiceAtts.SaveMany(att)
}
// ForkchoiceAttestations returns all forkchoice attestations.
func (c *AttestationCache) ForkchoiceAttestations() []ethpb.Att {
return c.forkchoiceAtts.GetAll()
}
// DeleteForkchoiceAttestation deletes a forkchoice attestation.
func (c *AttestationCache) DeleteForkchoiceAttestation(att ethpb.Att) error {
return c.forkchoiceAtts.Delete(att)
}
// GetBySlotAndCommitteeIndex returns all attestations in the cache that match the provided slot
// and committee index. Forkchoice attestations are not returned.
//
// NOTE: This function cannot be declared as a method on the AttestationCache because it is a generic function.
func GetBySlotAndCommitteeIndex[T ethpb.Att](c *AttestationCache, slot primitives.Slot, committeeIndex primitives.CommitteeIndex) []T {
c.RLock()
defer c.RUnlock()
var result []T
for _, group := range c.atts {
if len(group.atts) > 0 {
// We can safely compare the first attestation because all attestations in a group
// must have the same slot and committee index, since they are under the same key.
a, ok := group.atts[0].(T)
if ok && a.GetData().Slot == slot && a.CommitteeBitsVal().BitAt(uint64(committeeIndex)) {
for _, a := range group.atts {
a, ok := a.(T)
if ok {
result = append(result, a)
}
}
}
}
}
return result
}
func aggregateSig(agg ethpb.Att, att ethpb.Att) ([]byte, error) {
aggSig, err := bls.SignatureFromBytesNoValidation(agg.GetSignature())
if err != nil {
return nil, err
}
attSig, err := bls.SignatureFromBytesNoValidation(att.GetSignature())
if err != nil {
return nil, err
}
return bls.AggregateSignatures([]bls.Signature{aggSig, attSig}).Marshal(), nil
}

View File
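Editor's note: the deleted cache's doc comment describes its key idea — keep a single running aggregate per slot and fold each incoming single-bit attestation into it. A stdlib-only sketch of that idea (a uint64 bitmask stands in for the bitlist; the real code also aggregates the BLS signatures):

```go
package main

import "fmt"

// runningAggregate folds single-bit attestations into one aggregate by
// OR-ing aggregation bits, mirroring the removed cache's Add path.
type runningAggregate struct {
	bits uint64
}

// add records the attester at position idx and reports whether the bit was
// new; a duplicate bit is skipped so its signature is never aggregated twice.
func (r *runningAggregate) add(idx uint) bool {
	mask := uint64(1) << idx
	if r.bits&mask != 0 {
		return false
	}
	r.bits |= mask
	return true
}

func main() {
	var agg runningAggregate
	fmt.Println(agg.add(3), agg.add(5), agg.add(3)) // true true false
}
```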

@@ -15,24 +15,24 @@ type AttestationConsensusData struct {
Source forkchoicetypes.Checkpoint
}
// AttestationDataCache stores cached results of AttestationData requests.
type AttestationDataCache struct {
// AttestationCache stores cached results of AttestationData requests.
type AttestationCache struct {
a *AttestationConsensusData
sync.RWMutex
}
// NewAttestationDataCache creates a new instance of AttestationDataCache.
func NewAttestationDataCache() *AttestationDataCache {
return &AttestationDataCache{}
// NewAttestationCache creates a new instance of AttestationCache.
func NewAttestationCache() *AttestationCache {
return &AttestationCache{}
}
// Get retrieves cached attestation data, recording a cache hit or miss. This method is lock free.
func (c *AttestationDataCache) Get() *AttestationConsensusData {
func (c *AttestationCache) Get() *AttestationConsensusData {
return c.a
}
// Put adds a response to the cache. This method is lock free.
func (c *AttestationDataCache) Put(a *AttestationConsensusData) error {
func (c *AttestationCache) Put(a *AttestationConsensusData) error {
if a == nil {
return errors.New("attestation cannot be nil")
}

View File

@@ -9,7 +9,7 @@ import (
)
func TestAttestationCache_RoundTrip(t *testing.T) {
c := cache.NewAttestationDataCache()
c := cache.NewAttestationCache()
a := c.Get()
require.Nil(t, a)

View File

@@ -1,353 +0,0 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls/blst"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/attestation"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestAdd(t *testing.T) {
k, err := blst.RandKey()
require.NoError(t, err)
sig := k.Sign([]byte{'X'})
t.Run("new ID", func(t *testing.T) {
t.Run("first ID ever", func(t *testing.T) {
c := NewAttestationCache()
ab := bitfield.NewBitlist(8)
ab.SetBitAt(0, true)
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: ab,
Signature: sig.Marshal(),
}
id, err := attestation.NewId(att, attestation.Data)
require.NoError(t, err)
require.NoError(t, c.Add(att))
require.Equal(t, 1, len(c.atts))
group, ok := c.atts[id]
require.Equal(t, true, ok)
assert.Equal(t, primitives.Slot(123), group.slot)
require.Equal(t, 1, len(group.atts))
assert.DeepEqual(t, group.atts[0], att)
})
t.Run("other ID exists", func(t *testing.T) {
c := NewAttestationCache()
ab := bitfield.NewBitlist(8)
ab.SetBitAt(0, true)
existingAtt := &ethpb.Attestation{
Data: &ethpb.AttestationData{BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: ab,
Signature: sig.Marshal(),
}
existingId, err := attestation.NewId(existingAtt, attestation.Data)
require.NoError(t, err)
c.atts[existingId] = &attGroup{slot: existingAtt.Data.Slot, atts: []ethpb.Att{existingAtt}}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: ab,
Signature: sig.Marshal(),
}
id, err := attestation.NewId(att, attestation.Data)
require.NoError(t, err)
require.NoError(t, c.Add(att))
require.Equal(t, 2, len(c.atts))
group, ok := c.atts[id]
require.Equal(t, true, ok)
assert.Equal(t, primitives.Slot(123), group.slot)
require.Equal(t, 1, len(group.atts))
assert.DeepEqual(t, group.atts[0], att)
})
})
t.Run("aggregated", func(t *testing.T) {
c := NewAttestationCache()
existingAtt := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
id, err := attestation.NewId(existingAtt, attestation.Data)
require.NoError(t, err)
c.atts[id] = &attGroup{slot: existingAtt.Data.Slot, atts: []ethpb.Att{existingAtt}}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(1, true)
require.NoError(t, c.Add(att))
require.Equal(t, 1, len(c.atts))
group, ok := c.atts[id]
require.Equal(t, true, ok)
assert.Equal(t, primitives.Slot(123), group.slot)
require.Equal(t, 2, len(group.atts))
assert.DeepEqual(t, group.atts[0], existingAtt)
assert.DeepEqual(t, group.atts[1], att)
})
t.Run("unaggregated - existing bit", func(t *testing.T) {
c := NewAttestationCache()
existingAtt := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
existingAtt.AggregationBits.SetBitAt(0, true)
id, err := attestation.NewId(existingAtt, attestation.Data)
require.NoError(t, err)
c.atts[id] = &attGroup{slot: existingAtt.Data.Slot, atts: []ethpb.Att{existingAtt}}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
require.NoError(t, c.Add(att))
require.Equal(t, 1, len(c.atts))
group, ok := c.atts[id]
require.Equal(t, true, ok)
assert.Equal(t, primitives.Slot(123), group.slot)
require.Equal(t, 1, len(group.atts))
assert.DeepEqual(t, []int{0}, group.atts[0].GetAggregationBits().BitIndices())
})
t.Run("unaggregated - new bit", func(t *testing.T) {
c := NewAttestationCache()
existingAtt := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
existingAtt.AggregationBits.SetBitAt(0, true)
id, err := attestation.NewId(existingAtt, attestation.Data)
require.NoError(t, err)
c.atts[id] = &attGroup{slot: existingAtt.Data.Slot, atts: []ethpb.Att{existingAtt}}
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(1, true)
require.NoError(t, c.Add(att))
require.Equal(t, 1, len(c.atts))
group, ok := c.atts[id]
require.Equal(t, true, ok)
assert.Equal(t, primitives.Slot(123), group.slot)
require.Equal(t, 1, len(group.atts))
assert.DeepEqual(t, []int{0, 1}, group.atts[0].GetAggregationBits().BitIndices())
})
}
func TestGetAll(t *testing.T) {
c := NewAttestationCache()
c.atts[bytesutil.ToBytes32([]byte("id1"))] = &attGroup{atts: []ethpb.Att{&ethpb.Attestation{}, &ethpb.Attestation{}}}
c.atts[bytesutil.ToBytes32([]byte("id2"))] = &attGroup{atts: []ethpb.Att{&ethpb.Attestation{}}}
assert.Equal(t, 3, len(c.GetAll()))
}
func TestCount(t *testing.T) {
c := NewAttestationCache()
c.atts[bytesutil.ToBytes32([]byte("id1"))] = &attGroup{atts: []ethpb.Att{&ethpb.Attestation{}, &ethpb.Attestation{}}}
c.atts[bytesutil.ToBytes32([]byte("id2"))] = &attGroup{atts: []ethpb.Att{&ethpb.Attestation{}}}
assert.Equal(t, 3, c.Count())
}
func TestDeleteCovered(t *testing.T) {
k, err := blst.RandKey()
require.NoError(t, err)
sig := k.Sign([]byte{'X'})
att1 := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att1.AggregationBits.SetBitAt(0, true)
att2 := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att2.AggregationBits.SetBitAt(1, true)
att2.AggregationBits.SetBitAt(2, true)
att3 := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att3.AggregationBits.SetBitAt(1, true)
att3.AggregationBits.SetBitAt(3, true)
att3.AggregationBits.SetBitAt(4, true)
c := NewAttestationCache()
id, err := attestation.NewId(att1, attestation.Data)
require.NoError(t, err)
c.atts[id] = &attGroup{slot: att1.Data.Slot, atts: []ethpb.Att{att1, att2, att3}}
t.Run("no matching group", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 456, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(1, true)
att.AggregationBits.SetBitAt(2, true)
att.AggregationBits.SetBitAt(3, true)
att.AggregationBits.SetBitAt(4, true)
require.NoError(t, c.DeleteCovered(att))
assert.Equal(t, 3, len(c.atts[id].atts))
})
t.Run("covered atts deleted", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(1, true)
att.AggregationBits.SetBitAt(3, true)
att.AggregationBits.SetBitAt(4, true)
require.NoError(t, c.DeleteCovered(att))
atts := c.atts[id].atts
require.Equal(t, 1, len(atts))
assert.DeepEqual(t, att2, atts[0])
})
t.Run("last att in group deleted", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(1, true)
att.AggregationBits.SetBitAt(2, true)
att.AggregationBits.SetBitAt(3, true)
att.AggregationBits.SetBitAt(4, true)
require.NoError(t, c.DeleteCovered(att))
assert.Equal(t, 0, len(c.atts))
})
}
func TestPruneBefore(t *testing.T) {
c := NewAttestationCache()
c.atts[bytesutil.ToBytes32([]byte("id1"))] = &attGroup{slot: 1, atts: []ethpb.Att{&ethpb.Attestation{}, &ethpb.Attestation{}}}
c.atts[bytesutil.ToBytes32([]byte("id2"))] = &attGroup{slot: 3, atts: []ethpb.Att{&ethpb.Attestation{}}}
c.atts[bytesutil.ToBytes32([]byte("id3"))] = &attGroup{slot: 2, atts: []ethpb.Att{&ethpb.Attestation{}}}
count := c.PruneBefore(3)
require.Equal(t, 1, len(c.atts))
_, ok := c.atts[bytesutil.ToBytes32([]byte("id2"))]
assert.Equal(t, true, ok)
assert.Equal(t, uint64(3), count)
}
func TestAggregateIsRedundant(t *testing.T) {
k, err := blst.RandKey()
require.NoError(t, err)
sig := k.Sign([]byte{'X'})
c := NewAttestationCache()
existingAtt := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 123, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
existingAtt.AggregationBits.SetBitAt(0, true)
existingAtt.AggregationBits.SetBitAt(1, true)
id, err := attestation.NewId(existingAtt, attestation.Data)
require.NoError(t, err)
c.atts[id] = &attGroup{slot: existingAtt.Data.Slot, atts: []ethpb.Att{existingAtt}}
t.Run("no matching group", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: 456, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
redundant, err := c.AggregateIsRedundant(att)
require.NoError(t, err)
assert.Equal(t, false, redundant)
})
t.Run("redundant", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: existingAtt.Data.Slot, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
redundant, err := c.AggregateIsRedundant(att)
require.NoError(t, err)
assert.Equal(t, true, redundant)
})
t.Run("not redundant", func(t *testing.T) {
t.Run("strictly better", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: existingAtt.Data.Slot, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(1, true)
att.AggregationBits.SetBitAt(2, true)
redundant, err := c.AggregateIsRedundant(att)
require.NoError(t, err)
assert.Equal(t, false, redundant)
})
t.Run("overlapping and new bits", func(t *testing.T) {
att := &ethpb.Attestation{
Data: &ethpb.AttestationData{Slot: existingAtt.Data.Slot, BeaconBlockRoot: make([]byte, 32), Source: &ethpb.Checkpoint{Root: make([]byte, 32)}, Target: &ethpb.Checkpoint{Root: make([]byte, 32)}},
AggregationBits: bitfield.NewBitlist(8),
Signature: sig.Marshal(),
}
att.AggregationBits.SetBitAt(0, true)
att.AggregationBits.SetBitAt(2, true)
redundant, err := c.AggregateIsRedundant(att)
require.NoError(t, err)
assert.Equal(t, false, redundant)
})
})
}
func TestGetBySlotAndCommitteeIndex(t *testing.T) {
c := NewAttestationCache()
c.atts[bytesutil.ToBytes32([]byte("id1"))] = &attGroup{slot: 1, atts: []ethpb.Att{&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}}, &ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 1, CommitteeIndex: 1}}}}
c.atts[bytesutil.ToBytes32([]byte("id2"))] = &attGroup{slot: 2, atts: []ethpb.Att{&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 2}}}}
c.atts[bytesutil.ToBytes32([]byte("id3"))] = &attGroup{slot: 1, atts: []ethpb.Att{&ethpb.Attestation{Data: &ethpb.AttestationData{Slot: 2, CommitteeIndex: 2}}}}
// committeeIndex has to be small enough to fit in the bitvector
atts := GetBySlotAndCommitteeIndex[*ethpb.Attestation](c, 1, 1)
require.Equal(t, 2, len(atts))
assert.Equal(t, primitives.Slot(1), atts[0].Data.Slot)
assert.Equal(t, primitives.Slot(1), atts[1].Data.Slot)
assert.Equal(t, primitives.CommitteeIndex(1), atts[0].Data.CommitteeIndex)
assert.Equal(t, primitives.CommitteeIndex(1), atts[1].Data.CommitteeIndex)
}

View File

@@ -0,0 +1,132 @@
package cache
import (
"errors"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
var errNilPayloadAttestationMessage = errors.New("nil payload attestation message")
// PayloadAttestationCache keeps the already aggregated PTC votes for the
// latest seen beacon block root, one aggregate per payload status.
type PayloadAttestationCache struct {
root [32]byte
attestations [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation
sync.Mutex
}
// Seen returns true if a vote for the given Beacon Block Root has already been processed
// for this Payload Timeliness Committee index. This will return true even if
// the Payload status differs.
func (p *PayloadAttestationCache) Seen(root [32]byte, idx uint64) bool {
p.Lock()
defer p.Unlock()
if p.root != root {
return false
}
for _, agg := range p.attestations {
if agg == nil {
continue
}
if agg.AggregationBits.BitAt(idx) {
return true
}
}
return false
}
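A subscriber would consult Seen before doing any expensive work on a gossiped message; a hedged sketch of that flow (handlePayloadAttestation and its validation step are illustrative names, not the actual sync code):
func handlePayloadAttestation(c *PayloadAttestationCache, msg *eth.PayloadAttestationMessage, ptcIdx uint64) error {
root := [32]byte(msg.Data.BeaconBlockRoot)
if c.Seen(root, ptcIdx) {
return nil // this PTC index already voted for the root; skip
}
// full validation (signature, slot, PTC membership) would run here
return c.Add(msg, ptcIdx)
}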
// messageToPayloadAttestation creates a PayloadAttestation from the passed
// PayloadAttestationMessage, with a single aggregation bit set at the given index.
func messageToPayloadAttestation(att *eth.PayloadAttestationMessage, idx uint64) *eth.PayloadAttestation {
bits := primitives.NewPayloadAttestationAggregationBits()
bits.SetBitAt(idx, true)
data := &eth.PayloadAttestationData{
BeaconBlockRoot: bytesutil.SafeCopyBytes(att.Data.BeaconBlockRoot),
Slot: att.Data.Slot,
PayloadStatus: att.Data.PayloadStatus,
}
return &eth.PayloadAttestation{
AggregationBits: bits,
Data: data,
Signature: bytesutil.SafeCopyBytes(att.Signature),
}
}
// aggregateSigFromMessage returns the aggregate signature obtained by adding
// the signature of the passed PayloadAttestationMessage to the one already in
// the given PayloadAttestation. No signature validation is performed.
func aggregateSigFromMessage(aggregated *eth.PayloadAttestation, message *eth.PayloadAttestationMessage) ([]byte, error) {
aggSig, err := bls.SignatureFromBytesNoValidation(aggregated.Signature)
if err != nil {
return nil, err
}
sig, err := bls.SignatureFromBytesNoValidation(message.Signature)
if err != nil {
return nil, err
}
return bls.AggregateSignatures([]bls.Signature{aggSig, sig}).Marshal(), nil
}
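Because Add only ever folds one new signature into the running aggregate, the stored signature stays verifiable with a fast aggregate verify over the voters' public keys, since all PTC members sign the same data. A minimal sketch with prysm's bls package (error handling elided):
k1, _ := bls.RandKey()
k2, _ := bls.RandKey()
var msg [32]byte // stands in for the attestation data's signing root
agg := bls.AggregateSignatures([]bls.Signature{k1.Sign(msg[:]), k2.Sign(msg[:])})
// ok is true: both keys signed the identical message
ok := agg.FastAggregateVerify([]bls.PublicKey{k1.PublicKey(), k2.PublicKey()}, msg)
_ = ok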
// Add adds a PayloadAttestationMessage to the internal cache of aggregated
// PayloadAttestations.
// If the index has already been seen for this payload status, the function does nothing.
// If the root is not the cached root, the function first clears the previous cache.
// This function assumes that the message has already been validated; in
// particular, that the signature is valid and that the block root corresponds
// to the given slot in the attestation data.
func (p *PayloadAttestationCache) Add(att *eth.PayloadAttestationMessage, idx uint64) error {
if att == nil || att.Data == nil || att.Data.BeaconBlockRoot == nil {
return errNilPayloadAttestationMessage
}
p.Lock()
defer p.Unlock()
root := [32]byte(att.Data.BeaconBlockRoot)
if p.root != root {
p.root = root
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
agg := p.attestations[att.Data.PayloadStatus]
if agg == nil {
p.attestations[att.Data.PayloadStatus] = messageToPayloadAttestation(att, idx)
return nil
}
if agg.AggregationBits.BitAt(idx) {
return nil
}
sig, err := aggregateSigFromMessage(agg, att)
if err != nil {
return err
}
agg.Signature = sig
agg.AggregationBits.SetBitAt(idx, true)
return nil
}
// Get returns the aggregated PayloadAttestation for the given root and status.
// If the root is not cached or the status is out of range, it returns nil.
func (p *PayloadAttestationCache) Get(root [32]byte, status primitives.PTCStatus) *eth.PayloadAttestation {
p.Lock()
defer p.Unlock()
if p.root != root {
return nil
}
if status >= primitives.PAYLOAD_INVALID_STATUS {
return nil
}
return eth.CopyPayloadAttestation(p.attestations[status])
}
// Clear resets the cached root and all aggregated attestations.
func (p *PayloadAttestationCache) Clear() {
p.Lock()
defer p.Unlock()
p.root = [32]byte{}
p.attestations = [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{}
}
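Taken together, the lifecycle for a single block root is: the first Add for a new root resets the cache, the first vote for a status creates its aggregate, and later votes only fold in their signature and bit. A compressed sketch (values illustrative):
p := &PayloadAttestationCache{}
root := [32]byte{'r'}
msg := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{BeaconBlockRoot: root[:], Slot: 1, PayloadStatus: primitives.PAYLOAD_PRESENT},
}
_ = p.Add(msg, 5) // creates the PAYLOAD_PRESENT aggregate with bit 5 set
_ = p.Add(msg, 5) // no-op: bit 5 is already set for this status
agg := p.Get(root, primitives.PAYLOAD_PRESENT) // returns a copy of the aggregate
_ = agg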

View File

@@ -0,0 +1,143 @@
package cache
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestPayloadAttestationCache(t *testing.T) {
p := &PayloadAttestationCache{}
// Test Seen before any vote is added
root := [32]byte{'r'}
idx := uint64(5)
require.Equal(t, false, p.Seen(root, idx))
// Test Add
msg := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_PRESENT,
},
}
// Add new root
require.NoError(t, p.Add(msg, idx))
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, root, p.root)
att := p.attestations[primitives.PAYLOAD_PRESENT]
indices := att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
singleSig := bytesutil.SafeCopyBytes(msg.Signature)
require.DeepEqual(t, singleSig, att.Signature)
// Test Seen
require.Equal(t, true, p.Seen(root, idx))
require.Equal(t, false, p.Seen(root, idx+1))
// Add another attestation on the same data
msg2 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: att.Data,
}
idx2 := uint64(7)
require.NoError(t, p.Add(msg2, idx2))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepNotEqual(t, att.Signature, msg.Signature)
// Add the same index again; the aggregate should be unchanged
require.NoError(t, p.Add(msg2, idx2))
att2 := p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx), int(idx2)}, indices)
require.DeepEqual(t, att, att2)
// Test Seen
require.Equal(t, true, p.Seen(root, idx2))
require.Equal(t, false, p.Seen(root, idx2+1))
// Add an attestation for the same root with a different payload status
msg3 := &eth.PayloadAttestationMessage{
Signature: bls.NewAggregateSignature().Marshal(),
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root[:],
Slot: 1,
PayloadStatus: primitives.PAYLOAD_WITHHELD,
},
}
idx3 := uint64(17)
require.NoError(t, p.Add(msg3, idx3))
att3 := p.attestations[primitives.PAYLOAD_WITHHELD]
indices3 := att3.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx3)}, indices3)
require.DeepEqual(t, singleSig, att3.Signature)
// Add a different root
root2 := [32]byte{'s'}
msg.Data.BeaconBlockRoot = root2[:]
require.NoError(t, p.Add(msg, idx))
require.Equal(t, root2, p.root)
require.Equal(t, true, p.Seen(root2, idx))
require.Equal(t, false, p.Seen(root, idx))
att = p.attestations[primitives.PAYLOAD_PRESENT]
indices = att.AggregationBits.BitIndices()
require.DeepEqual(t, []int{int(idx)}, indices)
}
func TestPayloadAttestationCache_Get(t *testing.T) {
root := [32]byte{1, 2, 3}
wrongRoot := [32]byte{4, 5, 6}
status := primitives.PAYLOAD_PRESENT
invalidStatus := primitives.PAYLOAD_INVALID_STATUS
cache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{
{
Signature: []byte{1},
},
{
Signature: []byte{2},
},
{
Signature: []byte{3},
},
},
}
t.Run("valid root and status", func(t *testing.T) {
result := cache.Get(root, status)
require.NotNil(t, result, "Expected a non-nil result")
require.DeepEqual(t, cache.attestations[status], result)
})
t.Run("invalid root", func(t *testing.T) {
result := cache.Get(wrongRoot, status)
require.IsNil(t, result)
})
t.Run("status out of bound", func(t *testing.T) {
result := cache.Get(root, invalidStatus)
require.IsNil(t, result)
})
t.Run("no attestation", func(t *testing.T) {
emptyCache := &PayloadAttestationCache{
root: root,
attestations: [primitives.PAYLOAD_INVALID_STATUS]*eth.PayloadAttestation{},
}
result := emptyCache.Get(root, status)
require.IsNil(t, result)
})
}

View File

@@ -0,0 +1,76 @@
package cache
import (
"bytes"
"sync"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ExecutionPayloadHeaders is used by the sync service to store signed execution payload headers after they pass validation,
// filtering out subsequent headers with a lower value.
// A signed header from this cache can be used by the proposer when proposing for the next slot.
type ExecutionPayloadHeaders struct {
headers map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader
sync.RWMutex
}
func NewExecutionPayloadHeaders() *ExecutionPayloadHeaders {
return &ExecutionPayloadHeaders{
headers: make(map[primitives.Slot][]*enginev1.SignedExecutionPayloadHeader),
}
}
// SaveSignedExecutionPayloadHeader saves the signed execution payload header to the cache.
// The cache keeps headers for at most two slots: the incoming header's slot and the slot before it; anything older is evicted.
// Only the highest-value header for a given parent block hash and parent block root is stored per slot.
// This function assumes the caller has already checked that the header's slot is the current or next slot; it performs no slot validation itself.
func (c *ExecutionPayloadHeaders) SaveSignedExecutionPayloadHeader(header *enginev1.SignedExecutionPayloadHeader) {
c.Lock()
defer c.Unlock()
for s := range c.headers {
if s+1 < header.Message.Slot {
delete(c.headers, s)
}
}
// Add or update the header in the map
if _, ok := c.headers[header.Message.Slot]; !ok {
c.headers[header.Message.Slot] = []*enginev1.SignedExecutionPayloadHeader{header}
} else {
found := false
for i, h := range c.headers[header.Message.Slot] {
if bytes.Equal(h.Message.ParentBlockHash, header.Message.ParentBlockHash) && bytes.Equal(h.Message.ParentBlockRoot, header.Message.ParentBlockRoot) {
if header.Message.Value > h.Message.Value {
c.headers[header.Message.Slot][i] = header
}
found = true
break
}
}
if !found {
c.headers[header.Message.Slot] = append(c.headers[header.Message.Slot], header)
}
}
}
// SignedExecutionPayloadHeader returns the signed payload header for the given slot, parent block hash, and parent block root.
// It returns nil if no matching header is found.
// This should be used when the caller needs the header to match both the parent block hash and the parent block root, such as a proposer choosing a header to propose on.
func (c *ExecutionPayloadHeaders) SignedExecutionPayloadHeader(slot primitives.Slot, parentBlockHash []byte, parentBlockRoot []byte) *enginev1.SignedExecutionPayloadHeader {
c.RLock()
defer c.RUnlock()
if headers, ok := c.headers[slot]; ok {
for _, header := range headers {
if bytes.Equal(header.Message.ParentBlockHash, parentBlockHash) && bytes.Equal(header.Message.ParentBlockRoot, parentBlockRoot) {
return header
}
}
}
return nil
}
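On the proposer side the lookup keys on the exact (slot, parent hash, parent root) triple of the head the proposer builds on; a hedged sketch of that flow (slot, parentHash, and parentRoot come from the caller's view of the head):
c := NewExecutionPayloadHeaders()
// headers arrive from gossip via c.SaveSignedExecutionPayloadHeader(...)
h := c.SignedExecutionPayloadHeader(slot, parentHash[:], parentRoot[:])
if h == nil {
// no builder bid matches the current head; fall back to a local payload
}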

View File

@@ -0,0 +1,243 @@
package cache
import (
"testing"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func Test_SaveSignedExecutionPayloadHeader(t *testing.T) {
t.Run("First header should be added to cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
require.Equal(t, 1, len(c.headers))
require.Equal(t, header, c.headers[1][0])
})
t.Run("Second header with higher slot should be added, and both slots should be in cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header1, c.headers[1][0])
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Third header with higher slot should replace the oldest slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
require.Equal(t, 2, len(c.headers))
require.Equal(t, header2, c.headers[2][0])
require.Equal(t, header3, c.headers[3][0])
})
t.Run("Header with same slot but higher value should replace the existing one", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 1, len(c.headers[2]))
require.Equal(t, header2, c.headers[2][0])
})
t.Run("Header with different parent block hash should be appended to the same slot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
require.Equal(t, 2, len(c.headers[2]))
require.Equal(t, header1, c.headers[2][0])
require.Equal(t, header2, c.headers[2][1])
})
}
func TestSignedExecutionPayloadHeader(t *testing.T) {
t.Run("Return header when slot and parentBlockHash match", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte("root1"))
require.NotNil(t, result)
require.Equal(t, header, result)
})
t.Run("Return nil when no matching slot and parentBlockHash", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte("root1"))
require.IsNil(t, result)
})
t.Run("Return nil when no matching slot and parentBlockRoot", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
ParentBlockRoot: []byte("root1"),
Value: 100,
},
}
c.SaveSignedExecutionPayloadHeader(header)
result := c.SignedExecutionPayloadHeader(2, []byte("parent1"), []byte("root2"))
require.IsNil(t, result)
})
t.Run("Return header when there are two slots in the cache and a match is found", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
// Check for the first header
result1 := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.NotNil(t, result1)
require.Equal(t, header1, result1)
// Check for the second header
result2 := c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result2)
require.Equal(t, header2, result2)
})
t.Run("Return nil when slot is evicted from cache", func(t *testing.T) {
c := NewExecutionPayloadHeaders()
header1 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 1,
ParentBlockHash: []byte("parent1"),
Value: 100,
},
}
header2 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 2,
ParentBlockHash: []byte("parent2"),
Value: 200,
},
}
header3 := &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
Slot: 3,
ParentBlockHash: []byte("parent3"),
Value: 300,
},
}
c.SaveSignedExecutionPayloadHeader(header1)
c.SaveSignedExecutionPayloadHeader(header2)
c.SaveSignedExecutionPayloadHeader(header3)
// The first slot should be evicted, so result should be nil
result := c.SignedExecutionPayloadHeader(1, []byte("parent1"), []byte{})
require.IsNil(t, result)
// The second slot should still be present
result = c.SignedExecutionPayloadHeader(2, []byte("parent2"), []byte{})
require.NotNil(t, result)
require.Equal(t, header2, result)
// The third slot should be present
result = c.SignedExecutionPayloadHeader(3, []byte("parent3"), []byte{})
require.NotNil(t, result)
require.Equal(t, header3, result)
})
}

View File

@@ -2,7 +2,6 @@ package altair
import (
"context"
"encoding/binary"
goErrors "errors"
"fmt"
"time"
@@ -23,6 +22,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
const maxRandomByte = uint64(1<<8 - 1)
var (
ErrTooLate = errors.New("sync message is too late")
)
@@ -90,22 +91,19 @@ func NextSyncCommittee(ctx context.Context, s state.BeaconState) (*ethpb.SyncCom
// """
// epoch = Epoch(get_current_epoch(state) + 1)
//
// MAX_RANDOM_BYTE = 2**8 - 1
// active_validator_indices = get_active_validator_indices(state, epoch)
// active_validator_count = uint64(len(active_validator_indices))
// seed = get_seed(state, epoch, DOMAIN_SYNC_COMMITTEE)
// i = 0
// sync_committee_indices: List[ValidatorIndex] = []
// while len(sync_committee_indices) < SYNC_COMMITTEE_SIZE:
// shuffled_index = compute_shuffled_index(uint64(i % active_validator_count), active_validator_count, seed)
// candidate_index = active_validator_indices[shuffled_index]
// random_byte = hash(seed + uint_to_bytes(uint64(i // 32)))[i % 32]
// effective_balance = state.validators[candidate_index].effective_balance
// if effective_balance * MAX_RANDOM_BYTE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_byte:
// sync_committee_indices.append(candidate_index)
// i += 1
// return sync_committee_indices
@@ -125,11 +123,12 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
cIndices := make([]primitives.ValidatorIndex, 0, syncCommitteeSize)
hashFunc := hash.CustomSHA256Hasher()
maxEB := cfg.MaxEffectiveBalanceElectra
if s.Version() < version.Electra {
maxEB = cfg.MaxEffectiveBalance
}
for i := primitives.ValidatorIndex(0); uint64(len(cIndices)) < params.BeaconConfig().SyncCommitteeSize; i++ {
if ctx.Err() != nil {
return nil, ctx.Err()
}
@@ -138,30 +137,18 @@ func NextSyncCommitteeIndices(ctx context.Context, s state.BeaconState) ([]primi
if err != nil {
return nil, err
}
b := append(seed[:], bytesutil.Bytes8(uint64(i.Div(32)))...)
randomByte := hashFunc(b)[i%32]
cIndex := indices[sIndex]
v, err := s.ValidatorAtIndexReadOnly(cIndex)
if err != nil {
return nil, err
}
effectiveBal := v.EffectiveBalance()
if effectiveBal*maxRandomByte >= maxEB*uint64(randomByte) {
cIndices = append(cIndices, cIndex)
}
}
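The acceptance test effectiveBal*maxRandomByte >= maxEB*uint64(randomByte) keeps a candidate with probability of roughly effectiveBal/maxEB, because randomByte is uniform over 0..255. A worked check for a half-weight validator (constants illustrative):
effectiveBal, maxEB := uint64(16_000_000_000), uint64(32_000_000_000)
accepted := 0
for rb := uint64(0); rb < 256; rb++ {
if effectiveBal*maxRandomByte >= maxEB*rb {
accepted++
}
}
// accepted == 128 of 256 draws, i.e. a selection probability of 1/2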

View File

@@ -107,6 +107,7 @@ go_test(
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time/slots:go_default_library",
"@com_github_google_gofuzz//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -448,7 +448,6 @@ func TestValidateIndexedAttestation_AboveMaxLength(t *testing.T) {
Target: &ethpb.Checkpoint{
Epoch: primitives.Epoch(i),
},
Source: &ethpb.Checkpoint{},
}
}
@@ -490,7 +489,6 @@ func TestValidateIndexedAttestation_BadAttestationsSignatureSet(t *testing.T) {
Target: &ethpb.Checkpoint{
Root: []byte{},
},
Source: &ethpb.Checkpoint{},
},
Signature: sig.Marshal(),
AggregationBits: list,

View File

@@ -1,5 +1,5 @@
package blocks
var ProcessBLSToExecutionChange = processBLSToExecutionChange
var ErrInvalidBLSPrefix = errInvalidBLSPrefix
var VerifyBlobCommitmentCount = verifyBlobCommitmentCount

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
@@ -222,6 +223,42 @@ func NewGenesisBlockForState(ctx context.Context, st state.BeaconState) (interfa
},
Signature: params.BeaconConfig().EmptySignature[:],
})
case *ethpb.BeaconStateEPBS:
kzgs := make([][]byte, 0)
kzgRoot, err := ssz.KzgCommitmentsRoot(kzgs)
if err != nil {
return nil, err
}
return blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockEpbs{
Block: &ethpb.BeaconBlockEpbs{
ParentRoot: params.BeaconConfig().ZeroHash[:],
StateRoot: root[:],
Body: &ethpb.BeaconBlockBodyEpbs{
RandaoReveal: make([]byte, 96),
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
},
Graffiti: make([]byte, 32),
SyncAggregate: &ethpb.SyncAggregate{
SyncCommitteeBits: make([]byte, fieldparams.SyncCommitteeLength/8),
SyncCommitteeSignature: make([]byte, fieldparams.BLSSignatureLength),
},
SignedExecutionPayloadHeader: &enginev1.SignedExecutionPayloadHeader{
Message: &enginev1.ExecutionPayloadHeaderEPBS{
ParentBlockHash: make([]byte, 32),
ParentBlockRoot: make([]byte, 32),
BlockHash: make([]byte, 32),
BlobKzgCommitmentsRoot: kzgRoot[:],
},
Signature: make([]byte, 96),
},
BlsToExecutionChanges: make([]*ethpb.SignedBLSToExecutionChange, 0),
PayloadAttestations: make([]*ethpb.PayloadAttestation, 0),
},
},
Signature: params.BeaconConfig().EmptySignature[:],
})
default:
return nil, ErrUnrecognizedState
}

View File

@@ -8,11 +8,10 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
@@ -59,12 +58,12 @@ func IsMergeTransitionComplete(st state.BeaconState) (bool, error) {
//
// return block.body.execution_payload != ExecutionPayload()
func IsExecutionBlock(body interfaces.ReadOnlyBeaconBlockBody) (bool, error) {
if body == nil {
return false, errors.New("nil block body")
}
payload, err := body.Execution()
switch {
case errors.Is(err, consensus_types.ErrUnsupportedField):
@@ -211,7 +210,7 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
if err != nil {
return err
}
if err := verifyBlobCommitmentCount(body); err != nil {
return err
}
if err := ValidatePayloadWhenMergeCompletes(st, payload); err != nil {
@@ -226,7 +225,7 @@ func ProcessPayload(st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBod
return nil
}
func verifyBlobCommitmentCount(body interfaces.ReadOnlyBeaconBlockBody) error {
if body.Version() < version.Deneb {
return nil
}
@@ -234,8 +233,7 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
if err != nil {
return err
}
if len(kzgs) > field_params.MaxBlobsPerBlock {
return fmt.Errorf("too many kzg commitments in block: %d", len(kzgs))
}
return nil
@@ -244,12 +242,47 @@ func verifyBlobCommitmentCount(slot primitives.Slot, body interfaces.ReadOnlyBea
// GetBlockPayloadHash returns the hash of the execution payload of the block
func GetBlockPayloadHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var payloadHash [32]byte
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return payloadHash, err
}
payload, err := header.Header()
if err != nil {
return payloadHash, err
}
return payload.BlockHash(), nil
}
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return payloadHash, err
}
return bytesutil.ToBytes32(payload.BlockHash()), nil
}
return payloadHash, nil
}
// GetBlockParentHash returns the hash of the parent execution payload
func GetBlockParentHash(blk interfaces.ReadOnlyBeaconBlock) ([32]byte, error) {
var parentHash [32]byte
if blk.Version() >= version.EPBS {
header, err := blk.Body().SignedExecutionPayloadHeader()
if err != nil {
return parentHash, err
}
payload, err := header.Header()
if err != nil {
return parentHash, err
}
return payload.ParentBlockHash(), nil
}
if blk.Version() >= version.Bellatrix {
payload, err := blk.Body().Execution()
if err != nil {
return parentHash, err
}
return bytesutil.ToBytes32(payload.ParentHash()), nil
}
return parentHash, nil
}
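Both helpers exist so callers can obtain the execution block hash and its parent hash without caring about the fork: pre-Bellatrix they return the zero hash, post-Bellatrix they read the embedded payload, and post-ePBS they read the signed header committed in the block. A hedged usage sketch:
blockHash, err := blocks.GetBlockPayloadHash(blk)
if err != nil {
return err
}
parentHash, err := blocks.GetBlockParentHash(blk)
if err != nil {
return err
}
// (blockHash, parentHash) can now link this block's payload to its parent
// payload, e.g. when inserting into forkchoice.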

View File

@@ -9,7 +9,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
@@ -254,8 +253,7 @@ func Test_IsExecutionBlockCapella(t *testing.T) {
require.NoError(t, err)
got, err := blocks.IsExecutionBlock(wrappedBlock.Body())
require.NoError(t, err)
require.Equal(t, false, got)
}
func Test_IsExecutionEnabled(t *testing.T) {
@@ -924,10 +922,10 @@ func TestVerifyBlobCommitmentCount(t *testing.T) {
b := &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{}}
rb, err := consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.NoError(t, blocks.VerifyBlobCommitmentCount(rb.Body()))
b = &ethpb.BeaconBlockDeneb{Body: &ethpb.BeaconBlockBodyDeneb{BlobKzgCommitments: make([][]byte, fieldparams.MaxBlobsPerBlock+1)}}
rb, err = consensusblocks.NewBeaconBlock(b)
require.NoError(t, err)
require.ErrorContains(t, fmt.Sprintf("too many kzg commitments in block: %d", fieldparams.MaxBlobsPerBlock+1), blocks.VerifyBlobCommitmentCount(rb.Body()))
}

View File

@@ -14,8 +14,8 @@ import (
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/hash"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
@@ -100,11 +100,8 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
if err != nil {
return nil, err
}
if val == nil {
return nil, errors.Wrap(errInvalidWithdrawalCredentials, "validator is nil") // This should not be possible.
}
cred := val.WithdrawalCredentials
if cred[0] != params.BeaconConfig().BLSWithdrawalPrefixByte {
return nil, errInvalidBLSPrefix
}
@@ -118,14 +115,97 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
return val, nil
}
func checkWithdrawalsAgainstPayload(
executionData interfaces.ExecutionData,
numExpected int,
expectedRoot [32]byte,
) error {
var wdRoot [32]byte
if executionData.IsBlinded() {
r, err := executionData.WithdrawalsRoot()
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
copy(wdRoot[:], r)
} else {
wds, err := executionData.Withdrawals()
if err != nil {
return errors.Wrap(err, "could not get withdrawals")
}
if len(wds) != numExpected {
return fmt.Errorf("execution payload header has %d withdrawals when %d were expected", len(wds), numExpected)
}
wdRoot, err = ssz.WithdrawalSliceRoot(wds, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return errors.Wrap(err, "could not get withdrawals root")
}
}
if expectedRoot != wdRoot {
return fmt.Errorf("expected withdrawals root %#x, got %#x", expectedRoot, wdRoot)
}
return nil
}
func processWithdrawalStateTransition(
st state.BeaconState,
expectedWithdrawals []*enginev1.Withdrawal,
partialWithdrawalsCount uint64,
) (err error) {
for _, withdrawal := range expectedWithdrawals {
err := helpers.DecreaseBalance(st, withdrawal.ValidatorIndex, withdrawal.Amount)
if err != nil {
return errors.Wrap(err, "could not decrease balance")
}
}
if st.Version() >= version.Electra {
if err := st.DequeuePartialWithdrawals(partialWithdrawalsCount); err != nil {
return fmt.Errorf("unable to dequeue partial withdrawals from state: %w", err)
}
}
if len(expectedWithdrawals) > 0 {
if err := st.SetNextWithdrawalIndex(expectedWithdrawals[len(expectedWithdrawals)-1].Index + 1); err != nil {
return errors.Wrap(err, "could not set next withdrawal index")
}
}
var nextValidatorIndex primitives.ValidatorIndex
if uint64(len(expectedWithdrawals)) < params.BeaconConfig().MaxWithdrawalsPerPayload {
nextValidatorIndex, err = st.NextWithdrawalValidatorIndex()
if err != nil {
return errors.Wrap(err, "could not get next withdrawal validator index")
}
nextValidatorIndex += primitives.ValidatorIndex(params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep)
nextValidatorIndex = nextValidatorIndex % primitives.ValidatorIndex(st.NumValidators())
} else {
nextValidatorIndex = expectedWithdrawals[len(expectedWithdrawals)-1].ValidatorIndex + 1
if nextValidatorIndex == primitives.ValidatorIndex(st.NumValidators()) {
nextValidatorIndex = 0
}
}
if err := st.SetNextWithdrawalValidatorIndex(nextValidatorIndex); err != nil {
return errors.Wrap(err, "could not set next withdrawal validator index")
}
return nil
}
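The cursor arithmetic at the end is what the ePBS withdrawal tests later in this diff assert: when fewer than MaxWithdrawalsPerPayload withdrawals are produced, the sweep cursor advances a full MaxValidatorsPerWithdrawalsSweep, modulo the validator count. A worked instance matching the "success no withdrawals" control values (128 validators, sweep of 80, cursor at 10):
numValidators := uint64(128)
maxSweep := uint64(80)
cursor := uint64(10)
next := (cursor + maxSweep) % numValidators
// next == 90, the expected NextWithdrawalValidatorIndex in that test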
// ProcessWithdrawals processes the validator withdrawals from the provided execution payload
// into the beacon state.
//
// Spec pseudocode definition:
//
// def process_withdrawals(state: BeaconState, payload: ExecutionPayload) -> None:
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// if not is_parent_block_full(state): # [New in EIP-7732]
// return
//
// expected_withdrawals, partial_withdrawals_count = get_expected_withdrawals(state) # [Modified in Electra:EIP7251]
//
// if state.fork.current_version >= EIP7732_FORK_VERSION :
// state.latest_withdrawals_root = hash_tree_root(expected_withdrawals) # [New in EIP-7732]
// else :
// assert len(payload.withdrawals) == len(expected_withdrawals)
//
@@ -152,76 +232,39 @@ func ValidateBLSToExecutionChange(st state.ReadOnlyBeaconState, signed *ethpb.Si
// next_validator_index = ValidatorIndex(next_index % len(state.validators))
// state.next_withdrawal_validator_index = next_validator_index
func ProcessWithdrawals(st state.BeaconState, executionData interfaces.ExecutionData) (state.BeaconState, error) {
if st.Version() >= version.EPBS {
isParentBlockFull, err := st.IsParentBlockFull()
if err != nil {
return nil, errors.Wrap(err, "could not check if parent block is full")
}
if !isParentBlockFull {
return st, nil
}
}
expectedWithdrawals, partialWithdrawalsCount, err := st.ExpectedWithdrawals()
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals")
}
expectedRoot, err := ssz.WithdrawalSliceRoot(expectedWithdrawals, fieldparams.MaxWithdrawalsPerPayload)
if err != nil {
return nil, errors.Wrap(err, "could not get expected withdrawals root")
}
if st.Version() >= version.EPBS {
err = st.SetLastWithdrawalsRoot(expectedRoot[:])
if err != nil {
return nil, errors.Wrap(err, "could not set withdrawals root")
}
} else {
if err := checkWithdrawalsAgainstPayload(executionData, len(expectedWithdrawals), expectedRoot); err != nil {
return nil, err
}
}
if err := processWithdrawalStateTransition(st, expectedWithdrawals, partialWithdrawalsCount); err != nil {
return nil, err
}
return st, nil
}

View File

@@ -22,6 +22,7 @@ import (
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
@@ -113,42 +114,7 @@ func TestProcessBLSToExecutionChange(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, digest[:], val.WithdrawalCredentials)
})
t.Run("nil validator does not panic", func(t *testing.T) {
priv, err := bls.RandKey()
require.NoError(t, err)
pubkey := priv.PublicKey().Marshal()
message := &ethpb.BLSToExecutionChange{
ToExecutionAddress: []byte{0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10, 0x11, 0x12, 0x13},
ValidatorIndex: 0,
FromBlsPubkey: pubkey,
}
registry := []*ethpb.Validator{
nil,
}
st, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
Validators: registry,
Fork: &ethpb.Fork{
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
Slot: params.BeaconConfig().SlotsPerEpoch * 5,
})
require.NoError(t, err)
signature, err := signing.ComputeDomainAndSign(st, time.CurrentEpoch(st), message, params.BeaconConfig().DomainBLSToExecutionChange, priv)
require.NoError(t, err)
signed := &ethpb.SignedBLSToExecutionChange{
Message: message,
Signature: signature,
}
_, err = blocks.ValidateBLSToExecutionChange(st, signed)
// The state should return an empty validator, even when the validator object in the registry is
// nil. This error should return when the withdrawal credentials are invalid or too short.
require.ErrorIs(t, err, blocks.ErrInvalidBLSPrefix)
})
t.Run("non-existent validator", func(t *testing.T) {
priv, err := bls.RandKey()
require.NoError(t, err)
@@ -1167,13 +1133,407 @@ func TestProcessWithdrawals(t *testing.T) {
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
})
}
}
func TestProcessWithdrawalsEPBS(t *testing.T) {
const (
currentEpoch = primitives.Epoch(10)
epochInFuture = primitives.Epoch(12)
epochInPast = primitives.Epoch(8)
numValidators = 128
notWithdrawableIndex = 127
notPartiallyWithdrawable = 126
maxSweep = uint64(80)
)
maxEffectiveBalance := params.BeaconConfig().MaxEffectiveBalance
type args struct {
Name string
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
FullWithdrawalIndices []primitives.ValidatorIndex
PendingPartialWithdrawalIndices []primitives.ValidatorIndex
Withdrawals []*enginev1.Withdrawal
PendingPartialWithdrawals []*ethpb.PendingPartialWithdrawal // Electra
LatestBlockHash []byte // EIP-7732
}
type control struct {
NextWithdrawalValidatorIndex primitives.ValidatorIndex
NextWithdrawalIndex uint64
Balances map[uint64]uint64
}
type Test struct {
Args args
Control control
}
executionAddress := func(i primitives.ValidatorIndex) []byte {
wc := make([]byte, 20)
wc[19] = byte(i)
return wc
}
withdrawalAmount := func(i primitives.ValidatorIndex) uint64 {
return maxEffectiveBalance + uint64(i)*100000
}
fullWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i),
}
}
PendingPartialWithdrawal := func(i primitives.ValidatorIndex, idx uint64) *enginev1.Withdrawal {
return &enginev1.Withdrawal{
Index: idx,
ValidatorIndex: i,
Address: executionAddress(i),
Amount: withdrawalAmount(i) - maxEffectiveBalance,
}
}
tests := []Test{
{
Args: args{
Name: "success no withdrawals",
NextWithdrawalValidatorIndex: 10,
NextWithdrawalIndex: 3,
},
Control: control{
NextWithdrawalValidatorIndex: 90,
NextWithdrawalIndex: 3,
},
},
{
Args: args{
Name: "success one full withdrawal",
NextWithdrawalIndex: 3,
NextWithdrawalValidatorIndex: 5,
FullWithdrawalIndices: []primitives.ValidatorIndex{70},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(70, 3),
},
},
Control: control{
NextWithdrawalValidatorIndex: 85,
NextWithdrawalIndex: 4,
Balances: map[uint64]uint64{70: 0},
},
},
{
Args: args{
Name: "success one partial withdrawal",
NextWithdrawalIndex: 21,
NextWithdrawalValidatorIndex: 120,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 21),
},
},
Control: control{
NextWithdrawalValidatorIndex: 72,
NextWithdrawalIndex: 22,
Balances: map[uint64]uint64{7: maxEffectiveBalance},
},
},
{
Args: args{
Name: "success many full withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{7: 0, 19: 0, 28: 0},
},
},
{
Args: args{
Name: "less than max sweep at end",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{80, 81, 82, 83},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(80, 22), fullWithdrawal(81, 23), fullWithdrawal(82, 24),
fullWithdrawal(83, 25),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 26,
Balances: map[uint64]uint64{80: 0, 81: 0, 82: 0, 83: 0},
},
},
{
Args: args{
Name: "less than max sweep and beginning",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{4, 5, 6},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(4, 22), fullWithdrawal(5, 23), fullWithdrawal(6, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{4: 0, 5: 0, 6: 0},
},
},
{
Args: args{
Name: "success many partial withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(7, 22), PendingPartialWithdrawal(19, 23), PendingPartialWithdrawal(28, 24),
},
},
Control: control{
NextWithdrawalValidatorIndex: 84,
NextWithdrawalIndex: 25,
Balances: map[uint64]uint64{
7: maxEffectiveBalance,
19: maxEffectiveBalance,
28: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success many withdrawals with pending partial withdrawals in state",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 88,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28},
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{2, 1, 89, 15},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(89, 22), PendingPartialWithdrawal(1, 23), PendingPartialWithdrawal(2, 24),
fullWithdrawal(7, 25), PendingPartialWithdrawal(15, 26), fullWithdrawal(19, 27),
fullWithdrawal(28, 28),
},
PendingPartialWithdrawals: []*ethpb.PendingPartialWithdrawal{
{
Index: 11,
Amount: withdrawalAmount(11) - maxEffectiveBalance,
},
},
},
Control: control{
NextWithdrawalValidatorIndex: 40,
NextWithdrawalIndex: 29,
Balances: map[uint64]uint64{
7: 0, 19: 0, 28: 0,
2: maxEffectiveBalance, 1: maxEffectiveBalance, 89: maxEffectiveBalance,
15: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "success more than max fully withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
FullWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(1, 22), fullWithdrawal(2, 23), fullWithdrawal(3, 24),
fullWithdrawal(4, 25), fullWithdrawal(5, 26), fullWithdrawal(6, 27),
fullWithdrawal(7, 28), fullWithdrawal(8, 29), fullWithdrawal(9, 30),
fullWithdrawal(21, 31), fullWithdrawal(22, 32), fullWithdrawal(23, 33),
fullWithdrawal(24, 34), fullWithdrawal(25, 35), fullWithdrawal(26, 36),
fullWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0,
21: 0, 22: 0, 23: 0, 24: 0, 25: 0, 26: 0, 27: 0,
},
},
},
{
Args: args{
Name: "success more than max partially withdrawals",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 0,
PendingPartialWithdrawalIndices: []primitives.ValidatorIndex{1, 2, 3, 4, 5, 6, 7, 8, 9, 21, 22, 23, 24, 25, 26, 27, 29, 35, 89},
Withdrawals: []*enginev1.Withdrawal{
PendingPartialWithdrawal(1, 22), PendingPartialWithdrawal(2, 23), PendingPartialWithdrawal(3, 24),
PendingPartialWithdrawal(4, 25), PendingPartialWithdrawal(5, 26), PendingPartialWithdrawal(6, 27),
PendingPartialWithdrawal(7, 28), PendingPartialWithdrawal(8, 29), PendingPartialWithdrawal(9, 30),
PendingPartialWithdrawal(21, 31), PendingPartialWithdrawal(22, 32), PendingPartialWithdrawal(23, 33),
PendingPartialWithdrawal(24, 34), PendingPartialWithdrawal(25, 35), PendingPartialWithdrawal(26, 36),
PendingPartialWithdrawal(27, 37),
},
},
Control: control{
NextWithdrawalValidatorIndex: 28,
NextWithdrawalIndex: 38,
Balances: map[uint64]uint64{
1: maxEffectiveBalance,
2: maxEffectiveBalance,
3: maxEffectiveBalance,
4: maxEffectiveBalance,
5: maxEffectiveBalance,
6: maxEffectiveBalance,
7: maxEffectiveBalance,
8: maxEffectiveBalance,
9: maxEffectiveBalance,
21: maxEffectiveBalance,
22: maxEffectiveBalance,
23: maxEffectiveBalance,
24: maxEffectiveBalance,
25: maxEffectiveBalance,
26: maxEffectiveBalance,
27: maxEffectiveBalance,
},
},
},
{
Args: args{
Name: "Parent Node is not full",
NextWithdrawalIndex: 22,
NextWithdrawalValidatorIndex: 4,
FullWithdrawalIndices: []primitives.ValidatorIndex{7, 19, 28, 1},
Withdrawals: []*enginev1.Withdrawal{
fullWithdrawal(7, 22), fullWithdrawal(19, 23), fullWithdrawal(28, 24),
},
LatestBlockHash: []byte{1, 2, 3},
},
},
}
checkPostState := func(t *testing.T, expected control, st state.BeaconState) {
l, err := st.NextWithdrawalValidatorIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalValidatorIndex, l)
n, err := st.NextWithdrawalIndex()
require.NoError(t, err)
require.Equal(t, expected.NextWithdrawalIndex, n)
balances := st.Balances()
for idx, bal := range expected.Balances {
require.Equal(t, bal, balances[idx])
}
}
prepareValidators := func(st state.BeaconState, arguments args) error {
validators := make([]*ethpb.Validator, numValidators)
if err := st.SetBalances(make([]uint64, numValidators)); err != nil {
return err
}
for i := range validators {
v := &ethpb.Validator{}
v.EffectiveBalance = maxEffectiveBalance
v.WithdrawableEpoch = epochInFuture
v.WithdrawalCredentials = make([]byte, 32)
v.WithdrawalCredentials[31] = byte(i)
if err := st.UpdateBalancesAtIndex(primitives.ValidatorIndex(i), v.EffectiveBalance-uint64(rand.Intn(1000))); err != nil {
return err
}
validators[i] = v
}
for _, idx := range arguments.FullWithdrawalIndices {
if idx != notWithdrawableIndex {
validators[idx].WithdrawableEpoch = epochInPast
}
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
}
for _, idx := range arguments.PendingPartialWithdrawalIndices {
validators[idx].WithdrawalCredentials[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
if err := st.UpdateBalancesAtIndex(idx, withdrawalAmount(idx)); err != nil {
return err
}
}
return st.SetValidators(validators)
}
for _, test := range tests {
t.Run(test.Args.Name, func(t *testing.T) {
saved := params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = maxSweep
if test.Args.Withdrawals == nil {
test.Args.Withdrawals = make([]*enginev1.Withdrawal, 0)
}
if test.Args.FullWithdrawalIndices == nil {
test.Args.FullWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
if test.Args.PendingPartialWithdrawalIndices == nil {
test.Args.PendingPartialWithdrawalIndices = make([]primitives.ValidatorIndex, 0)
}
slot, err := slots.EpochStart(currentEpoch)
require.NoError(t, err)
var st state.BeaconState
var p interfaces.ExecutionData
spb := &ethpb.BeaconStateEPBS{
Slot: slot,
NextWithdrawalValidatorIndex: test.Args.NextWithdrawalValidatorIndex,
NextWithdrawalIndex: test.Args.NextWithdrawalIndex,
PendingPartialWithdrawals: test.Args.PendingPartialWithdrawals,
LatestExecutionPayloadHeader: &enginev1.ExecutionPayloadHeaderEPBS{
BlockHash: []byte{},
},
LatestBlockHash: test.Args.LatestBlockHash,
}
st, err = state_native.InitializeFromProtoUnsafeEpbs(spb)
require.NoError(t, err)
env := random.ExecutionPayloadEnvelope(t)
env.Payload.Withdrawals = test.Args.Withdrawals
wp, err := consensusblocks.WrappedROExecutionPayloadEnvelope(env)
require.NoError(t, err)
p, err = wp.Execution()
require.NoError(t, err)
err = prepareValidators(st, test.Args)
require.NoError(t, err)
post, err := blocks.ProcessWithdrawals(st, p)
if test.Args.Name == "Parent Node is not full" {
require.IsNil(t, post)
require.IsNil(t, err)
} else {
require.NoError(t, err)
checkPostState(t, test.Control, post)
}
params.BeaconConfig().MaxValidatorsPerWithdrawalsSweep = saved
})
}
}
func TestProcessBLSToExecutionChanges(t *testing.T) {
spb := &ethpb.BeaconStateCapella{
Fork: &ethpb.Fork{

View File

@@ -10,7 +10,6 @@ go_library(
"effective_balance_updates.go",
"registry_updates.go",
"transition.go",
"transition_no_verify_sig.go",
"upgrade.go",
"validator.go",
"withdrawals.go",
@@ -41,7 +40,6 @@ go_library(
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_ethereum_go_ethereum//common/math:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],

View File

@@ -20,7 +20,7 @@ func createValidatorsWithTotalActiveBalance(totalBal primitives.Gwei) []*eth.Val
vals := make([]*eth.Validator, num)
for i := range vals {
wd := make([]byte, 32)
wd[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wd[31] = byte(i)
vals[i] = &eth.Validator{

View File

@@ -5,7 +5,6 @@ import (
"context"
"fmt"
"github.com/ethereum/go-ethereum/common/math"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
@@ -37,7 +36,8 @@ import (
// break
//
// # Calculate the consolidated balance
// max_effective_balance = get_max_effective_balance(source_validator)
// source_effective_balance = min(state.balances[pending_consolidation.source_index], max_effective_balance)
//
// # Move active balance to target. Excess balance is withdrawable.
// decrease_balance(state, pending_consolidation.source_index, source_effective_balance)
@@ -77,7 +77,7 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
if err != nil {
return err
}
b := min(validatorBalance, helpers.ValidatorMaxEffectiveBalance(sourceValidator))
if err := helpers.DecreaseBalance(st, pc.SourceIndex, b); err != nil {
return err
@@ -140,8 +140,8 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// if not (has_correct_credential and is_correct_source_address):
// return
//
// # Verify that target has execution withdrawal credentials
// if not has_execution_withdrawal_credential(target_validator):
// return
//
// # Verify the source and the target are active
@@ -156,13 +156,6 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// if target_validator.exit_epoch != FAR_FUTURE_EPOCH:
// return
//
// # Verify the source has been active long enough
// if current_epoch < source_validator.activation_epoch + SHARD_COMMITTEE_PERIOD:
// return
//
// # Verify the source has no pending withdrawals in the queue
// if get_pending_balance_to_withdraw(state, source_index) > 0:
// return
// # Initiate source validator exit and append pending consolidation
// source_validator.exit_epoch = compute_consolidation_epoch_and_update_churn(
// state, source_validator.effective_balance
@@ -174,6 +167,10 @@ func ProcessPendingConsolidations(ctx context.Context, st state.BeaconState) err
// source_index=source_index,
// target_index=target_index
// ))
//
// # Churn any target excess active balance of target and raise its max
// if has_eth1_withdrawal_credential(target_validator):
// switch_to_compounding_validator(state, target_index)
func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, reqs []*enginev1.ConsolidationRequest) error {
if len(reqs) == 0 || st == nil {
return nil
@@ -248,7 +245,7 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
}
// Target validator must have their withdrawal credentials set appropriately.
if !helpers.HasCompoundingWithdrawalCredential(tgtV) {
if !helpers.HasExecutionWithdrawalCredentials(tgtV) {
continue
}
@@ -261,23 +258,6 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
continue
}
e, overflow := math.SafeAdd(uint64(srcV.ActivationEpoch), uint64(params.BeaconConfig().ShardCommitteePeriod))
if overflow {
log.Error("Overflow when adding activation epoch and shard committee period")
continue
}
if uint64(curEpoch) < e {
continue
}
bal, err := st.PendingBalanceToWithdraw(srcIdx)
if err != nil {
log.WithError(err).Error("failed to fetch pending balance to withdraw")
continue
}
if bal > 0 {
continue
}
// Initiate the exit of the source validator.
exitEpoch, err := ComputeConsolidationEpochAndUpdateChurn(ctx, st, primitives.Gwei(srcV.EffectiveBalance))
if err != nil {
@@ -293,6 +273,13 @@ func ProcessConsolidationRequests(ctx context.Context, st state.BeaconState, req
if err := st.AppendPendingConsolidation(&eth.PendingConsolidation{SourceIndex: srcIdx, TargetIndex: tgtIdx}); err != nil {
return fmt.Errorf("failed to append pending consolidation: %w", err) // This should never happen.
}
if helpers.HasETH1WithdrawalCredential(tgtV) {
if err := SwitchToCompoundingValidator(st, tgtIdx); err != nil {
log.WithError(err).Error("failed to switch to compounding validator")
continue
}
}
}
return nil

View File

@@ -46,7 +46,6 @@ func TestProcessPendingConsolidations(t *testing.T) {
Validators: []*eth.Validator{
{
WithdrawalCredentials: []byte{0x01, 0xFF},
EffectiveBalance: params.BeaconConfig().MinActivationBalance,
},
{
WithdrawalCredentials: []byte{0x01, 0xAB},
@@ -214,22 +213,15 @@ func TestProcessConsolidationRequests(t *testing.T) {
name: "one valid request",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Slot: params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod)),
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
}
// Validator scenario setup. See comments in reqs section.
st.Validators[3].WithdrawalCredentials = bytesutil.Bytes32(0)
st.Validators[8].WithdrawalCredentials = bytesutil.Bytes32(1)
st.Validators[8].WithdrawalCredentials = bytesutil.Bytes32(0)
st.Validators[9].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
st.Validators[12].ActivationEpoch = params.BeaconConfig().FarFutureEpoch
st.Validators[13].ExitEpoch = 10
st.Validators[16].ExitEpoch = 10
st.PendingPartialWithdrawals = []*eth.PendingPartialWithdrawal{
{
Index: 17,
Amount: 100,
},
}
s, err := state_native.InitializeFromProtoElectra(st)
require.NoError(t, err)
return s
@@ -247,7 +239,7 @@ func TestProcessConsolidationRequests(t *testing.T) {
SourcePubkey: []byte("val_5"),
TargetPubkey: []byte("val_6"),
},
// Target does not have their withdrawal credentials set appropriately. (Using eth1 address prefix)
// Target does not have their withdrawal credentials set appropriately.
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(7)),
SourcePubkey: []byte("val_7"),
@@ -295,12 +287,6 @@ func TestProcessConsolidationRequests(t *testing.T) {
SourcePubkey: []byte("val_0"),
TargetPubkey: []byte("val_0"),
},
// Has pending partial withdrawal
{
SourceAddress: append(bytesutil.PadTo(nil, 19), byte(0)),
SourcePubkey: []byte("val_17"),
TargetPubkey: []byte("val_1"),
},
// Valid consolidation request. This should be last to ensure invalid requests do
// not end the processing early.
{
@@ -361,7 +347,6 @@ func TestProcessConsolidationRequests(t *testing.T) {
name: "pending consolidations limit reached during processing",
state: func() state.BeaconState {
st := &eth.BeaconStateElectra{
Slot: params.BeaconConfig().SlotsPerEpoch.Mul(uint64(params.BeaconConfig().ShardCommitteePeriod)),
Validators: createValidatorsWithTotalActiveBalance(32000000000000000), // 32M ETH
PendingConsolidations: make([]*eth.PendingConsolidation, params.BeaconConfig().PendingConsolidationsLimit-1),
}

View File

@@ -386,14 +386,8 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
return errors.Wrap(err, "batch signature verification failed")
}
pubKeyMap := make(map[[48]byte]struct{}, len(pendingDeposits))
// Process each deposit individually
for _, pendingDeposit := range pendingDeposits {
_, found := pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)]
if !found {
pubKeyMap[bytesutil.ToBytes48(pendingDeposit.PublicKey)] = struct{}{}
}
validSignature := allSignaturesVerified
// If batch verification failed, check the individual deposit signature
@@ -411,16 +405,9 @@ func batchProcessNewPendingDeposits(ctx context.Context, state state.BeaconState
// Add validator to the registry if the signature is valid
if validSignature {
if found {
index, _ := state.ValidatorIndexByPubkey(bytesutil.ToBytes48(pendingDeposit.PublicKey))
if err := helpers.IncreaseBalance(state, index, pendingDeposit.Amount); err != nil {
return errors.Wrap(err, "could not increase balance")
}
} else {
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
err = AddValidatorToRegistry(state, pendingDeposit.PublicKey, pendingDeposit.WithdrawalCredentials, pendingDeposit.Amount)
if err != nil {
return errors.Wrap(err, "failed to add validator to registry")
}
}
}
@@ -573,7 +560,7 @@ func ProcessDepositRequests(ctx context.Context, beaconState state.BeaconState,
return beaconState, nil
}
// processDepositRequest processes the specific deposit request
// processDepositRequest processes the specific deposit receipt
// def process_deposit_request(state: BeaconState, deposit_request: DepositRequest) -> None:
//
// # Set deposit request start index
@@ -603,8 +590,8 @@ func processDepositRequest(beaconState state.BeaconState, request *enginev1.Depo
}
if err := beaconState.AppendPendingDeposit(&ethpb.PendingDeposit{
PublicKey: bytesutil.SafeCopyBytes(request.Pubkey),
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Amount: request.Amount,
WithdrawalCredentials: bytesutil.SafeCopyBytes(request.WithdrawalCredentials),
Signature: bytesutil.SafeCopyBytes(request.Signature),
Slot: beaconState.Slot(),
}); err != nil {

View File

@@ -22,40 +22,6 @@ import (
"github.com/prysmaticlabs/prysm/v5/testing/util"
)
func TestProcessPendingDepositsMultiplesSameDeposits(t *testing.T) {
st := stateWithActiveBalanceETH(t, 1000)
deps := make([]*eth.PendingDeposit, 2) // Make same deposit twice
validators := st.Validators()
sk, err := bls.RandKey()
require.NoError(t, err)
for i := 0; i < len(deps); i += 1 {
wc := make([]byte, 32)
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(i)
validators[i].PublicKey = sk.PublicKey().Marshal()
validators[i].WithdrawalCredentials = wc
deps[i] = stateTesting.GeneratePendingDeposit(t, sk, 32, bytesutil.ToBytes32(wc), 0)
}
require.NoError(t, st.SetPendingDeposits(deps))
err = electra.ProcessPendingDeposits(context.TODO(), st, 10000)
require.NoError(t, err)
val := st.Validators()
seenPubkeys := make(map[string]struct{})
for i := 0; i < len(val); i += 1 {
if len(val[i].PublicKey) == 0 {
continue
}
_, ok := seenPubkeys[string(val[i].PublicKey)]
if ok {
t.Fatalf("duplicated pubkeys")
} else {
seenPubkeys[string(val[i].PublicKey)] = struct{}{}
}
}
}
func TestProcessPendingDeposits(t *testing.T) {
tests := []struct {
name string
@@ -319,7 +285,7 @@ func TestBatchProcessNewPendingDeposits(t *testing.T) {
wc[0] = params.BeaconConfig().ETH1AddressWithdrawalPrefixByte
wc[31] = byte(0)
validDep := stateTesting.GeneratePendingDeposit(t, sk, params.BeaconConfig().MinActivationBalance, bytesutil.ToBytes32(wc), 0)
invalidDep := &eth.PendingDeposit{PublicKey: make([]byte, 48)}
invalidDep := &eth.PendingDeposit{}
// have a combination of valid and invalid deposits
deps := []*eth.PendingDeposit{validDep, invalidDep}
require.NoError(t, electra.BatchProcessNewPendingDeposits(context.Background(), st, deps))

View File

@@ -29,6 +29,7 @@ var (
ProcessParticipationFlagUpdates = altair.ProcessParticipationFlagUpdates
ProcessSyncCommitteeUpdates = altair.ProcessSyncCommitteeUpdates
AttestationsDelta = altair.AttestationsDelta
ProcessSyncAggregate = altair.ProcessSyncAggregate
)
// ProcessEpoch describes the per epoch operations that are performed on the beacon state.

View File

@@ -16,20 +16,36 @@ import (
)
// UpgradeToElectra takes a generic pre-Electra beacon state and returns the state upgraded to the Electra fork version.
//
// nolint:dupword
// Spec code:
// def upgrade_to_electra(pre: deneb.BeaconState) -> BeaconState:
//
// epoch = deneb.get_current_epoch(pre)
// latest_execution_payload_header = pre.latest_execution_payload_header
// latest_execution_payload_header = ExecutionPayloadHeader(
// parent_hash=pre.latest_execution_payload_header.parent_hash,
// fee_recipient=pre.latest_execution_payload_header.fee_recipient,
// state_root=pre.latest_execution_payload_header.state_root,
// receipts_root=pre.latest_execution_payload_header.receipts_root,
// logs_bloom=pre.latest_execution_payload_header.logs_bloom,
// prev_randao=pre.latest_execution_payload_header.prev_randao,
// block_number=pre.latest_execution_payload_header.block_number,
// gas_limit=pre.latest_execution_payload_header.gas_limit,
// gas_used=pre.latest_execution_payload_header.gas_used,
// timestamp=pre.latest_execution_payload_header.timestamp,
// extra_data=pre.latest_execution_payload_header.extra_data,
// base_fee_per_gas=pre.latest_execution_payload_header.base_fee_per_gas,
// block_hash=pre.latest_execution_payload_header.block_hash,
// transactions_root=pre.latest_execution_payload_header.transactions_root,
// withdrawals_root=pre.latest_execution_payload_header.withdrawals_root,
// blob_gas_used=pre.latest_execution_payload_header.blob_gas_used,
// excess_blob_gas=pre.latest_execution_payload_header.excess_blob_gas,
// deposit_requests_root=Root(), # [New in Electra:EIP6110]
// withdrawal_requests_root=Root(), # [New in Electra:EIP7002],
// consolidation_requests_root=Root(), # [New in Electra:EIP7251]
// )
//
// earliest_exit_epoch = compute_activation_exit_epoch(get_current_epoch(pre))
// for validator in pre.validators:
// if validator.exit_epoch != FAR_FUTURE_EPOCH:
// if validator.exit_epoch > earliest_exit_epoch:
// earliest_exit_epoch = validator.exit_epoch
// earliest_exit_epoch += Epoch(1)
// exit_epochs = [v.exit_epoch for v in pre.validators if v.exit_epoch != FAR_FUTURE_EPOCH]
// if not exit_epochs:
// exit_epochs = [get_current_epoch(pre)]
// earliest_exit_epoch = max(exit_epochs) + 1
//
// post = BeaconState(
// # Versioning
@@ -104,20 +120,7 @@ import (
// ))
//
// for index in pre_activation:
// balance = post.balances[index]
// post.balances[index] = 0
// validator = post.validators[index]
// validator.effective_balance = 0
// validator.activation_eligibility_epoch = FAR_FUTURE_EPOCH
// # Use bls.G2_POINT_AT_INFINITY as a signature field placeholder
// # and GENESIS_SLOT to distinguish from a pending deposit request
// post.pending_deposits.append(PendingDeposit(
// pubkey=validator.pubkey,
// withdrawal_credentials=validator.withdrawal_credentials,
// amount=balance,
// signature=bls.G2_POINT_AT_INFINITY,
// slot=GENESIS_SLOT,
// ))
// queue_entire_balance_and_reset_validator(post, ValidatorIndex(index))
//
// # Ensure early adopters of compounding credentials go through the activation churn
// for index, validator in enumerate(post.validators):
@@ -184,7 +187,7 @@ func UpgradeToElectra(beaconState state.BeaconState) (state.BeaconState, error)
}
// [New in Electra:EIP7251]
earliestExitEpoch := helpers.ActivationExitEpoch(time.CurrentEpoch(beaconState))
earliestExitEpoch := time.CurrentEpoch(beaconState)
preActivationIndices := make([]primitives.ValidatorIndex, 0)
compoundWithdrawalIndices := make([]primitives.ValidatorIndex, 0)
if err = beaconState.ReadFromEveryValidator(func(index int, val state.ReadOnlyValidator) error {

View File

@@ -159,7 +159,7 @@ func TestUpgradeToElectra(t *testing.T) {
eee, err := mSt.EarliestExitEpoch()
require.NoError(t, err)
require.Equal(t, helpers.ActivationExitEpoch(primitives.Epoch(1)), eee)
require.Equal(t, primitives.Epoch(1), eee)
cbtc, err := mSt.ConsolidationBalanceToConsume()
require.NoError(t, err)

View File

@@ -0,0 +1,52 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"attestation.go",
"execution_payload_envelope.go",
"execution_payload_header.go",
"operations.go",
"payload_attestation.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/core/blocks:go_default_library",
"//beacon-chain/core/electra:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/validators:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//network/forks:go_default_library",
"//proto/engine/v1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"attestation_test.go",
"execution_payload_envelope_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/core/altair:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
],
)

View File

@@ -0,0 +1,13 @@
package epbs
import (
"fmt"
)
// RemoveValidatorFlag removes validator flag from existing one.
func RemoveValidatorFlag(flag, flagPosition uint8) (uint8, error) {
if flagPosition > 7 {
return flag, fmt.Errorf("flag position %d exceeds length", flagPosition)
}
return flag & ^(1 << flagPosition), nil
}
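Since RemoveValidatorFlag is the complement of altair.AddValidatorFlag, a set/clear round-trip makes the bit arithmetic concrete. A minimal standalone sketch (addFlag and removeFlag are local stand-ins, not the Prysm helpers):

package main

import "fmt"

// addFlag sets the bit at flagPosition, standing in for altair.AddValidatorFlag.
func addFlag(flag, flagPosition uint8) uint8 {
    return flag | (1 << flagPosition)
}

// removeFlag clears the bit, mirroring epbs.RemoveValidatorFlag above.
func removeFlag(flag, flagPosition uint8) uint8 {
    return flag & ^(1 << flagPosition)
}

func main() {
    flag := addFlag(0, 1)                     // set position 1
    flag = addFlag(flag, 2)                   // set position 2
    fmt.Printf("%08b\n", removeFlag(flag, 1)) // 00000100: only position 2 survives
}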

View File

@@ -0,0 +1,93 @@
package epbs_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/epbs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestValidatorFlag_Remove(t *testing.T) {
tests := []struct {
name string
add []uint8
remove []uint8
expectedTrue []uint8
expectedFalse []uint8
}{
{
name: "none",
add: []uint8{},
remove: []uint8{},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex},
remove: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelyTargetFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
{
name: "source, target, head",
add: []uint8{params.BeaconConfig().TimelySourceFlagIndex, params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
remove: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
expectedTrue: []uint8{params.BeaconConfig().TimelySourceFlagIndex},
expectedFalse: []uint8{params.BeaconConfig().TimelyTargetFlagIndex, params.BeaconConfig().TimelyHeadFlagIndex},
},
}
var err error
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
flag := uint8(0)
// Add flags.
for _, flagPosition := range test.add {
flag, err = altair.AddValidatorFlag(flag, flagPosition)
require.NoError(t, err)
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
// Remove flags.
for _, flagPosition := range test.remove {
flag, err = epbs.RemoveValidatorFlag(flag, flagPosition)
require.NoError(t, err)
}
// Check if flags are set correctly.
for _, flagPosition := range test.expectedTrue {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, true, has)
}
for _, flagPosition := range test.expectedFalse {
has, err := altair.HasValidatorFlag(flag, flagPosition)
require.NoError(t, err)
require.Equal(t, false, has)
}
})
}
}
func TestValidatorFlag_Remove_ExceedsLength(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 8)
require.ErrorContains(t, "flag position 8 exceeds length", err)
}
func TestValidatorFlag_Remove_NotSet(t *testing.T) {
_, err := epbs.RemoveValidatorFlag(0, 1)
require.NoError(t, err)
}

View File

@@ -0,0 +1,126 @@
package epbs
import (
"context"
"fmt"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
)
// ValidatePayloadStateTransition performs the process_execution_payload
// function.
func ValidatePayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
if err := UpdateHeaderAndVerify(ctx, preState, envelope); err != nil {
return err
}
committedHeader, err := preState.LatestExecutionPayloadHeaderEPBS()
if err != nil {
return err
}
if err := validateAgainstCommittedBid(committedHeader, envelope); err != nil {
return err
}
if err := ProcessPayloadStateTransition(ctx, preState, envelope); err != nil {
return err
}
return checkPostStateRoot(ctx, preState, envelope)
}
func ProcessPayloadStateTransition(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
er := envelope.ExecutionRequests()
preState, err := electra.ProcessDepositRequests(ctx, preState, er.Deposits)
if err != nil {
return errors.Wrap(err, "could not process deposit receipts")
}
preState, err = electra.ProcessWithdrawalRequests(ctx, preState, er.Withdrawals)
if err != nil {
return errors.Wrap(err, "could not process execution layer withdrawal requests")
}
if err := electra.ProcessConsolidationRequests(ctx, preState, er.Consolidations); err != nil {
return errors.Wrap(err, "could not process consolidation requests")
}
payload, err := envelope.Execution()
if err != nil {
return errors.Wrap(err, "could not get execution payload")
}
if err := preState.SetLatestBlockHash(payload.BlockHash()); err != nil {
return err
}
return preState.SetLatestFullSlot(preState.Slot())
}
func UpdateHeaderAndVerify(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
blockHeader := preState.LatestBlockHeader()
if blockHeader == nil {
return errors.New("invalid nil latest block header")
}
if len(blockHeader.StateRoot) == 0 || [32]byte(blockHeader.StateRoot) == [32]byte{} {
prevStateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return errors.Wrap(err, "could not compute previous state root")
}
blockHeader.StateRoot = prevStateRoot[:]
if err := preState.SetLatestBlockHeader(blockHeader); err != nil {
return errors.Wrap(err, "could not set latest block header")
}
}
blockHeaderRoot, err := blockHeader.HashTreeRoot()
if err != nil {
return err
}
beaconBlockRoot := envelope.BeaconBlockRoot()
if blockHeaderRoot != beaconBlockRoot {
return fmt.Errorf("beacon block root does not match previous header, got: %#x wanted: %#x", beaconBlockRoot, blockHeaderRoot)
}
return nil
}
func validateAgainstCommittedBid(
committedHeader *enginev1.ExecutionPayloadHeaderEPBS,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
builderIndex := envelope.BuilderIndex()
if committedHeader.BuilderIndex != builderIndex {
return errors.New("builder index does not match committed header")
}
kzgRoot, err := envelope.BlobKzgCommitmentsRoot()
if err != nil {
return err
}
if [32]byte(committedHeader.BlobKzgCommitmentsRoot) != kzgRoot {
return errors.New("blob KZG commitments root does not match committed header")
}
return nil
}
func checkPostStateRoot(
ctx context.Context,
preState state.BeaconState,
envelope interfaces.ROExecutionPayloadEnvelope,
) error {
stateRoot, err := preState.HashTreeRoot(ctx)
if err != nil {
return err
}
envelopeStateRoot := envelope.StateRoot()
if stateRoot != envelopeStateRoot {
return errors.New("state root mismatch")
}
return nil
}
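Order matters in the pipeline above: the builder computed the envelope's state root against the post-transition state, so checkPostStateRoot only makes sense after ProcessPayloadStateTransition has mutated the pre-state. A toy, self-contained illustration of that ordering, with sha256 over a struct standing in for the SSZ hash tree root (all names hypothetical):

package main

import (
    "crypto/sha256"
    "fmt"
)

// toyState is a hypothetical stand-in for the beacon state.
type toyState struct {
    latestBlockHash [32]byte
    latestFullSlot  byte
}

// root hashes the fields, standing in for HashTreeRoot.
func (s toyState) root() [32]byte {
    return sha256.Sum256(append(s.latestBlockHash[:], s.latestFullSlot))
}

func main() {
    pre := toyState{}
    // The builder commits to the post-transition state in the envelope.
    envelopeRoot := toyState{latestBlockHash: [32]byte{'h'}, latestFullSlot: 3}.root()
    // Comparing before applying the payload effects fails...
    fmt.Println(pre.root() == envelopeRoot) // false
    // ...so apply the transition first, then compare roots.
    pre.latestBlockHash = [32]byte{'h'}
    pre.latestFullSlot = 3
    fmt.Println(pre.root() == envelopeRoot) // true
}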

View File

@@ -0,0 +1,109 @@
package epbs
import (
"context"
"testing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
enginev1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
)
func TestProcessPayloadStateTransition(t *testing.T) {
bh := [32]byte{'h'}
p := random.ExecutionPayloadEnvelope(t)
p.Payload.BlockHash = bh[:]
p.ExecutionRequests = &enginev1.ExecutionRequests{}
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
validators := make([]*ethpb.Validator, 0)
stpb := &ethpb.BeaconStateEPBS{Slot: 3, Validators: validators}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
lbh, err := st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, [32]byte{}, [32]byte(lbh))
require.NoError(t, ProcessPayloadStateTransition(ctx, st, e))
lbh, err = st.LatestBlockHash()
require.NoError(t, err)
require.Equal(t, bh, [32]byte(lbh))
lfs, err := st.LatestFullSlot()
require.NoError(t, err)
require.Equal(t, lfs, st.Slot())
}
func Test_UpdateHeaderAndVerify(t *testing.T) {
bh := [32]byte{'h'}
payload := &enginev1.ExecutionPayloadElectra{BlockHash: bh[:]}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
stpb := &ethpb.BeaconStateEPBS{Slot: 3}
st, err := state_native.InitializeFromProtoUnsafeEpbs(stpb)
require.NoError(t, err)
ctx := context.Background()
require.ErrorContains(t, "invalid nil latest block header", UpdateHeaderAndVerify(ctx, st, e))
prest, _ := util.DeterministicGenesisStateEpbs(t, 64)
br := [32]byte{'r'}
p.BeaconBlockRoot = br[:]
require.ErrorContains(t, "beacon block root does not match previous header", UpdateHeaderAndVerify(ctx, prest, e))
header := prest.LatestBlockHeader()
require.NoError(t, err)
headerRoot, err := header.HashTreeRoot()
require.NoError(t, err)
p.BeaconBlockRoot = headerRoot[:]
require.NoError(t, UpdateHeaderAndVerify(ctx, prest, e))
}
func Test_validateAgainstCommittedBid(t *testing.T) {
payload := &enginev1.ExecutionPayloadElectra{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
h := &enginev1.ExecutionPayloadHeaderEPBS{}
require.ErrorContains(t, "builder index does not match committed header", validateAgainstCommittedBid(h, e))
h.BuilderIndex = 1
p.BlobKzgCommitments = make([][]byte, 6)
for i := range p.BlobKzgCommitments {
p.BlobKzgCommitments[i] = make([]byte, 48)
}
h.BlobKzgCommitmentsRoot = make([]byte, 32)
require.ErrorContains(t, "blob KZG commitments root does not match committed header", validateAgainstCommittedBid(h, e))
root, err := e.BlobKzgCommitmentsRoot()
require.NoError(t, err)
h.BlobKzgCommitmentsRoot = root[:]
require.NoError(t, validateAgainstCommittedBid(h, e))
}
func TestCheckPostStateRoot(t *testing.T) {
payload := &enginev1.ExecutionPayloadElectra{}
p := random.ExecutionPayloadEnvelope(t)
p.Payload = payload
p.BuilderIndex = 1
e, err := blocks.WrappedROExecutionPayloadEnvelope(p)
require.NoError(t, err)
ctx := context.Background()
st, _ := util.DeterministicGenesisStateEpbs(t, 64)
p.StateRoot = make([]byte, 32)
require.ErrorContains(t, "state root mismatch", checkPostStateRoot(ctx, st, e))
root, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
p.StateRoot = root[:]
require.NoError(t, checkPostStateRoot(ctx, st, e))
}

View File

@@ -0,0 +1,50 @@
package epbs
import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/network/forks"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
// ValidatePayloadHeaderSignature validates the signature of the execution payload header.
func ValidatePayloadHeaderSignature(st state.ReadOnlyBeaconState, sh interfaces.ROSignedExecutionPayloadHeader) error {
h, err := sh.Header()
if err != nil {
return err
}
pubkey := st.PubkeyAtIndex(h.BuilderIndex())
pub, err := bls.PublicKeyFromBytes(pubkey[:])
if err != nil {
return err
}
s := sh.Signature()
sig, err := bls.SignatureFromBytes(s[:])
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(h.Slot())
f, err := forks.Fork(currentEpoch)
if err != nil {
return err
}
domain, err := signing.Domain(f, currentEpoch, params.BeaconConfig().DomainBeaconBuilder, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := sh.SigningRoot(domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}
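ValidatePayloadHeaderSignature reduces to a single BLS verification of the builder's registered key over the header's signing root. The primitive in isolation, sketched with Prysm's bls wrapper and a random key standing in for the builder:

package main

import (
    "fmt"

    "github.com/prysmaticlabs/prysm/v5/crypto/bls"
)

func main() {
    sk, err := bls.RandKey()
    if err != nil {
        panic(err)
    }
    root := [32]byte{'r'} // stand-in for the header signing root
    sig := sk.Sign(root[:])
    fmt.Println(sig.Verify(sk.PublicKey(), root[:])) // true
}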

View File

@@ -1,4 +1,4 @@
package electra
package epbs
import (
"context"
@@ -6,9 +6,11 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/electra"
v "github.com/prysmaticlabs/prysm/v5/beacon-chain/core/validators"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
var (
@@ -62,11 +64,11 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process altair attester slashing")
}
st, err = ProcessAttestationsNoVerifySignature(ctx, st, block)
st, err = electra.ProcessAttestationsNoVerifySignature(ctx, st, block)
if err != nil {
return nil, errors.Wrap(err, "could not process altair attestation")
}
if _, err := ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
if _, err := electra.ProcessDeposits(ctx, st, bb.Deposits()); err != nil { // new in electra
return nil, errors.Wrap(err, "could not process altair deposit")
}
st, err = ProcessVoluntaryExits(ctx, st, bb.VoluntaryExits())
@@ -77,20 +79,27 @@ func ProcessOperations(
if err != nil {
return nil, errors.Wrap(err, "could not process bls-to-execution changes")
}
// new in ePBS
if block.Version() >= version.EPBS {
if err := ProcessPayloadAttestations(st, bb); err != nil {
return nil, err
}
return st, nil
}
// new in electra
requests, err := bb.ExecutionRequests()
if err != nil {
return nil, errors.Wrap(err, "could not get execution requests")
}
st, err = ProcessDepositRequests(ctx, st, requests.Deposits)
st, err = electra.ProcessDepositRequests(ctx, st, requests.Deposits)
if err != nil {
return nil, errors.Wrap(err, "could not process deposit requests")
return nil, errors.Wrap(err, "could not process deposit receipts")
}
st, err = ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
st, err = electra.ProcessWithdrawalRequests(ctx, st, requests.Withdrawals)
if err != nil {
return nil, errors.Wrap(err, "could not process withdrawal requests")
return nil, errors.Wrap(err, "could not process execution layer withdrawal requests")
}
if err := ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
if err := electra.ProcessConsolidationRequests(ctx, st, requests.Consolidations); err != nil {
return nil, fmt.Errorf("could not process consolidation requests: %w", err)
}
return st, nil

View File

@@ -0,0 +1,125 @@
package epbs
import (
"bytes"
"context"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/altair"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)
func ProcessPayloadAttestations(state state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
atts, err := body.PayloadAttestations()
if err != nil {
return err
}
if len(atts) == 0 {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
defer cancel()
lbh := state.LatestBlockHeader()
proposerIndex := lbh.ProposerIndex
var participation []byte
if state.Slot()%params.BeaconConfig().SlotsPerEpoch == 0 {
participation, err = state.PreviousEpochParticipation()
} else {
participation, err = state.CurrentEpochParticipation()
}
if err != nil {
return err
}
totalBalance, err := helpers.TotalActiveBalance(state)
if err != nil {
return err
}
baseReward, err := altair.BaseRewardWithTotalBalance(state, proposerIndex, totalBalance)
if err != nil {
return err
}
lfs, err := state.LatestFullSlot()
if err != nil {
return err
}
cfg := params.BeaconConfig()
sourceFlagIndex := cfg.TimelySourceFlagIndex
targetFlagIndex := cfg.TimelyTargetFlagIndex
headFlagIndex := cfg.TimelyHeadFlagIndex
penaltyNumerator := uint64(0)
rewardNumerator := uint64(0)
rewardDenominator := (cfg.WeightDenominator - cfg.ProposerWeight) * cfg.WeightDenominator / cfg.ProposerWeight
for _, att := range atts {
data := att.Data
if !bytes.Equal(data.BeaconBlockRoot, lbh.ParentRoot) {
return errors.New("invalid beacon block root in payload attestation data")
}
if data.Slot+1 != state.Slot() {
return errors.New("invalid data slot")
}
indexed, err := helpers.GetIndexedPayloadAttestation(ctx, state, data.Slot, att)
if err != nil {
return err
}
valid, err := helpers.IsValidIndexedPayloadAttestation(state, indexed)
if err != nil {
return err
}
if !valid {
return errors.New("invalid payload attestation")
}
payloadWasPresent := data.Slot == lfs
votedPresent := data.PayloadStatus == primitives.PAYLOAD_PRESENT
if votedPresent != payloadWasPresent {
for _, idx := range indexed.GetAttestingIndices() {
flags := participation[idx]
has, err := altair.HasValidatorFlag(flags, targetFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyTargetWeight
}
has, err = altair.HasValidatorFlag(flags, sourceFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelySourceWeight
}
has, err = altair.HasValidatorFlag(flags, headFlagIndex)
if err != nil {
return err
}
if has {
penaltyNumerator += baseReward * cfg.TimelyHeadWeight
}
participation[idx] = 0
}
} else {
for _, idx := range indexed.GetAttestingIndices() {
participation[idx] = (1 << headFlagIndex) | (1 << sourceFlagIndex) | (1 << targetFlagIndex)
rewardNumerator += baseReward * (cfg.TimelyHeadWeight + cfg.TimelySourceWeight + cfg.TimelyTargetWeight)
}
}
}
if penaltyNumerator > 0 {
if err := helpers.DecreaseBalance(state, proposerIndex, penaltyNumerator/rewardDenominator); err != nil {
return err
}
}
if rewardNumerator > 0 {
if err := helpers.IncreaseBalance(state, proposerIndex, rewardNumerator/rewardDenominator); err != nil {
return err
}
}
return nil
}
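The proposer reward scaling mirrors the sync-aggregate formula: rewardDenominator = (WEIGHT_DENOMINATOR - PROPOSER_WEIGHT) * WEIGHT_DENOMINATOR / PROPOSER_WEIGHT. Assuming the mainnet Altair incentive constants (WEIGHT_DENOMINATOR = 64, PROPOSER_WEIGHT = 8), that works out to 448. A quick arithmetic check:

package main

import "fmt"

func main() {
    // Mainnet incentive weights (assumed; see the Altair spec constants).
    const weightDenominator = 64
    const proposerWeight = 8
    rewardDenominator := (weightDenominator - proposerWeight) * weightDenominator / proposerWeight
    fmt.Println(rewardDenominator) // (64-8)*64/8 = 448
}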

View File

@@ -31,8 +31,6 @@ const (
LightClientFinalityUpdate
// LightClientOptimisticUpdate event
LightClientOptimisticUpdate
// PayloadAttributes events are fired upon a missed slot or new head.
PayloadAttributes
)
// BlockProcessedData is the data sent with BlockProcessed events.

View File

@@ -8,6 +8,7 @@ go_library(
"block.go",
"genesis.go",
"metrics.go",
"payload_attestation.go",
"randao.go",
"rewards_penalties.go",
"shuffle.go",
@@ -20,11 +21,13 @@ go_library(
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
@@ -53,6 +56,8 @@ go_test(
"attestation_test.go",
"beacon_committee_test.go",
"block_test.go",
"exports_test.go",
"payload_attestation_test.go",
"private_access_fuzz_noop_test.go", # keep
"private_access_test.go",
"randao_test.go",
@@ -69,21 +74,27 @@ go_test(
tags = ["CI_race_detection"],
deps = [
"//beacon-chain/cache:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//beacon-chain/state/state-native:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/epbs:go_default_library",
"//consensus-types/primitives:go_default_library",
"//container/slice:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/hash:go_default_library",
"//crypto/rand:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//testing/util/random:go_default_library",
"//time:go_default_library",
"//time/slots:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",

View File

@@ -23,8 +23,11 @@ var (
// Access to these nil fields will result in run time panic,
// it is recommended to run these checks as first line of defense.
func ValidateNilAttestation(attestation ethpb.Att) error {
if attestation == nil || attestation.IsNil() {
return errors.New("attestation is nil")
if attestation == nil {
return errors.New("attestation can't be nil")
}
if attestation.GetData() == nil {
return errors.New("attestation's data can't be nil")
}
if attestation.GetData().Source == nil {
return errors.New("attestation's source can't be nil")
@@ -67,6 +70,12 @@ func IsAggregator(committeeCount uint64, slotSig []byte) (bool, error) {
return binary.LittleEndian.Uint64(b[:8])%modulo == 0, nil
}
// IsAggregated returns true if the attestation is an aggregated attestation,
// false otherwise.
func IsAggregated(attestation ethpb.Att) bool {
return attestation.GetAggregationBits().Count() > 1
}
// ComputeSubnetForAttestation returns the subnet for which the provided attestation will be broadcasted to.
// This differs from the spec definition by instead passing in the active validators indices in the attestation's
// given epoch.

View File

@@ -260,12 +260,12 @@ func TestValidateNilAttestation(t *testing.T) {
{
name: "nil attestation",
attestation: nil,
errString: "attestation is nil",
errString: "attestation can't be nil",
},
{
name: "nil attestation data",
attestation: &ethpb.Attestation{},
errString: "attestation is nil",
errString: "attestation's data can't be nil",
},
{
name: "nil attestation source",

View File

@@ -213,6 +213,7 @@ type CommitteeAssignment struct {
Committee []primitives.ValidatorIndex
AttesterSlot primitives.Slot
CommitteeIndex primitives.CommitteeIndex
PtcSlot primitives.Slot
}
// verifyAssignmentEpoch verifies if the given epoch is valid for assignment based on the provided state.
@@ -294,7 +295,7 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
if err := verifyAssignmentEpoch(epoch, state); err != nil {
return nil, err
}
startSlot, err := slots.EpochStart(epoch)
slot, err := slots.EpochStart(epoch)
if err != nil {
return nil, err
}
@@ -303,14 +304,17 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
vals[v] = struct{}{}
}
assignments := make(map[primitives.ValidatorIndex]*CommitteeAssignment)
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
ptcPerSlot, ptcMembersPerCommittee := PtcAllocation(len(committees))
// Compute committee assignments for each slot in the epoch.
for slot := startSlot; slot < startSlot+params.BeaconConfig().SlotsPerEpoch; slot++ {
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
endSlot := slot + params.BeaconConfig().SlotsPerEpoch
for {
for j, committee := range committees {
for _, vIndex := range committee {
for i, vIndex := range committee {
if _, ok := vals[vIndex]; !ok { // Skip if the validator is not in the provided validators slice.
continue
}
@@ -320,8 +324,19 @@ func CommitteeAssignments(ctx context.Context, state state.BeaconState, epoch pr
assignments[vIndex].Committee = committee
assignments[vIndex].AttesterSlot = slot
assignments[vIndex].CommitteeIndex = primitives.CommitteeIndex(j)
if uint64(j) < ptcPerSlot && uint64(i) < ptcMembersPerCommittee {
assignments[vIndex].PtcSlot = slot
}
}
}
slot++
if slot == endSlot {
break
}
committees, err = BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not compute beacon committees")
}
}
return assignments, nil
}
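The restructured loop fetches the first slot's committees once before entering, assigns duties, then advances and recomputes only for the remaining slots of the epoch, so no slot is shuffled twice; the PTC assignment piggybacks on the same pass by tagging the leading members of the first ptcPerSlot committees. A bare skeleton of the iteration pattern (fetchCommittees is a hypothetical stand-in for BeaconCommittees):

package main

import "fmt"

// fetchCommittees is a hypothetical stand-in for BeaconCommittees.
func fetchCommittees(slot uint64) [][]int {
    return [][]int{{int(slot)}}
}

func main() {
    slot, slotsPerEpoch := uint64(32), uint64(32)
    endSlot := slot + slotsPerEpoch
    committees := fetchCommittees(slot) // first slot, computed once up front
    for {
        _ = committees // ... assign duties for this slot ...
        slot++
        if slot == endSlot {
            break
        }
        committees = fetchCommittees(slot) // recompute only for subsequent slots
    }
    fmt.Println("processed", slotsPerEpoch, "slots")
}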

View File

@@ -3,6 +3,7 @@ package helpers_test
import (
"context"
"fmt"
"slices"
"strconv"
"testing"
@@ -10,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/time"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
field_params "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/container/slice"
@@ -722,15 +724,26 @@ func TestCommitteeIndices(t *testing.T) {
assert.DeepEqual(t, []primitives.CommitteeIndex{0, 1, 3}, indices)
}
func TestAttestationCommittees(t *testing.T) {
validators := make([]*ethpb.Validator, params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().TargetCommitteeSize))
func TestCommitteeAssignments_PTC(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
validatorIndices := make([]primitives.ValidatorIndex, validatorCount)
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
PublicKey: k,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
validatorIndices[i] = primitives.ValidatorIndex(i)
}
state, err := state_native.InitializeFromProtoPhase0(&ethpb.BeaconState{
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
@@ -754,6 +767,31 @@ func TestAttestationCommittees(t *testing.T) {
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[0])))
assert.Equal(t, params.BeaconConfig().TargetCommitteeSize, uint64(len(committees[1])))
})
as, err := helpers.CommitteeAssignments(context.Background(), state, 1, validatorIndices)
require.NoError(t, err)
// Capture all the slots and all the validator index that belonged in a PTC using a map for verification later.
slotValidatorMap := make(map[primitives.Slot][]primitives.ValidatorIndex)
for i, a := range as {
slotValidatorMap[a.PtcSlot] = append(slotValidatorMap[a.PtcSlot], i)
}
// Verify that all the slots have the correct number of PTC.
for s, v := range slotValidatorMap {
if s == 0 {
continue
}
// Make sure all the PTC are the correct size from the map.
require.Equal(t, len(v), field_params.PTCSize)
// Get the actual PTC from the beacon state using the helper function
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, s)
require.NoError(t, err)
for _, index := range ptc {
i := slices.Index(v, index)
require.NotEqual(t, -1, i) // PTC not found from the assignment map
}
}
}
func TestBeaconCommittees(t *testing.T) {

View File

@@ -0,0 +1,12 @@
package helpers
var (
ErrNilMessage = errNilMessage
ErrNilData = errNilData
ErrNilBeaconBlockRoot = errNilBeaconBlockRoot
ErrNilPayloadAttestation = errNilPayloadAttestation
ErrNilSignature = errNilSignature
ErrNilAggregationBits = errNilAggregationBits
ErrPreEPBSState = errPreEPBSState
ErrCommitteeOverflow = errCommitteeOverflow
)

View File

@@ -0,0 +1,296 @@
package helpers
import (
"context"
"slices"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
var (
errNilMessage = errors.New("nil PayloadAttestationMessage")
errNilData = errors.New("nil PayloadAttestationData")
errNilBeaconBlockRoot = errors.New("nil BeaconBlockRoot")
errNilPayloadAttestation = errors.New("nil PayloadAttestation")
errNilSignature = errors.New("nil Signature")
errNilAggregationBits = errors.New("nil AggregationBits")
errPreEPBSState = errors.New("beacon state pre ePBS fork")
errCommitteeOverflow = errors.New("beacon committee of insufficient size")
)
// ValidateNilPayloadAttestationData checks if any composite field of the
// payload attestation data is nil
func ValidateNilPayloadAttestationData(data *eth.PayloadAttestationData) error {
if data == nil {
return errNilData
}
if data.BeaconBlockRoot == nil {
return errNilBeaconBlockRoot
}
return nil
}
// ValidateNilPayloadAttestationMessage checks if any composite field of the
// payload attestation message is nil
func ValidateNilPayloadAttestationMessage(att *eth.PayloadAttestationMessage) error {
if att == nil {
return errNilMessage
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// ValidateNilPayloadAttestation checks if any composite field of the
// payload attestation is nil
func ValidateNilPayloadAttestation(att *eth.PayloadAttestation) error {
if att == nil {
return errNilPayloadAttestation
}
if att.AggregationBits == nil {
return errNilAggregationBits
}
if att.Signature == nil {
return errNilSignature
}
return ValidateNilPayloadAttestationData(att.Data)
}
// InPayloadTimelinessCommittee returns whether the given index belongs to the
// PTC computed from the passed state.
func InPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, idx primitives.ValidatorIndex) (bool, error) {
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return false, err
}
for _, i := range ptc {
if i == idx {
return true, nil
}
}
return false, nil
}
// GetPayloadTimelinessCommittee returns the PTC for the given slot, computed from the passed state as in the
// spec function `get_ptc`.
func GetPayloadTimelinessCommittee(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
committees, err := BeaconCommittees(ctx, state, slot)
if err != nil {
return nil, errors.Wrap(err, "could not get beacon committees")
}
committeesPerSlot, membersPerCommittee := PtcAllocation(len(committees))
for i, committee := range committees {
if uint64(i) >= committeesPerSlot {
return
}
if uint64(len(committee)) < membersPerCommittee {
return nil, errCommitteeOverflow
}
indices = append(indices, committee[:membersPerCommittee]...)
}
return
}
// PtcAllocation returns:
// 1. The number of beacon committees that PTC will borrow from in a slot.
// 2. The number of validators that PTC will borrow from in a beacon committee.
func PtcAllocation(slotCommittees int) (committeesPerSlot, membersPerCommittee uint64) {
committeesPerSlot = math.LargestPowerOfTwo(math.Min(uint64(slotCommittees), fieldparams.PTCSize))
membersPerCommittee = fieldparams.PTCSize / committeesPerSlot
return
}
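Concretely, assuming PTC_SIZE = 512 (the tests below use 512-bit aggregation vectors): 10 committees in a slot gives committeesPerSlot = largest_power_of_two(min(10, 512)) = 8 and membersPerCommittee = 512/8 = 64, exactly the split the tests rely on. A standalone sketch using math/bits for the power-of-two step:

package main

import (
    "fmt"
    "math/bits"
)

const ptcSize = 512 // assumed PTC_SIZE

// largestPowerOfTwo returns the largest power of two <= n (n > 0).
func largestPowerOfTwo(n uint64) uint64 {
    return 1 << (bits.Len64(n) - 1)
}

func ptcAllocation(slotCommittees uint64) (committeesPerSlot, membersPerCommittee uint64) {
    committeesPerSlot = largestPowerOfTwo(min(slotCommittees, ptcSize))
    membersPerCommittee = ptcSize / committeesPerSlot
    return
}

func main() {
    fmt.Println(ptcAllocation(10)) // 8 64
}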
// GetPayloadAttestingIndices returns the set of attester indices corresponding to the given PayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_payload_attesting_indices(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> Set[ValidatorIndex]:
// """
// Return the set of attesting indices corresponding to ``payload_attestation``.
// """
// ptc = get_ptc(state, slot)
// return set(index for i, index in enumerate(ptc) if payload_attestation.aggregation_bits[i])
func GetPayloadAttestingIndices(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (indices []primitives.ValidatorIndex, err error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
ptc, err := GetPayloadTimelinessCommittee(ctx, state, slot)
if err != nil {
return nil, err
}
for i, validatorIndex := range ptc {
if att.AggregationBits.BitAt(uint64(i)) {
indices = append(indices, validatorIndex)
}
}
return
}
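For intuition: if the PTC is [10, 42, 7, ...] and only bits 0 and 2 of the aggregation vector are set, the attesting indices are {10, 7}. A minimal sketch of that mapping using the same go-bitfield package the tests use:

package main

import (
    "fmt"

    "github.com/prysmaticlabs/go-bitfield"
)

func main() {
    ptc := []uint64{10, 42, 7} // toy committee; real PTCs have PTC_SIZE members
    bits := bitfield.NewBitvector512()
    bits.SetBitAt(0, true)
    bits.SetBitAt(2, true)
    var indices []uint64
    for i, v := range ptc {
        if bits.BitAt(uint64(i)) {
            indices = append(indices, v)
        }
    }
    fmt.Println(indices) // [10 7]
}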
// GetIndexedPayloadAttestation replaces a PayloadAttestation's AggregationBits with sorted AttestingIndices and returns an IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def get_indexed_payload_attestation(state: BeaconState, slot: Slot,
// payload_attestation: PayloadAttestation) -> IndexedPayloadAttestation:
// """
// Return the indexed payload attestation corresponding to ``payload_attestation``.
// """
// attesting_indices = get_payload_attesting_indices(state, slot, payload_attestation)
//
// return IndexedPayloadAttestation(
// attesting_indices=sorted(attesting_indices),
// data=payload_attestation.data,
// signature=payload_attestation.signature,
// )
func GetIndexedPayloadAttestation(ctx context.Context, state state.ReadOnlyBeaconState, slot primitives.Slot, att *eth.PayloadAttestation) (*epbs.IndexedPayloadAttestation, error) {
if state.Version() < version.EPBS {
return nil, errPreEPBSState
}
attestingIndices, err := GetPayloadAttestingIndices(ctx, state, slot, att)
if err != nil {
return nil, err
}
slices.Sort(attestingIndices)
return &epbs.IndexedPayloadAttestation{
AttestingIndices: attestingIndices,
Data: att.Data,
Signature: att.Signature,
}, nil
}
// IsValidIndexedPayloadAttestation validates the given IndexedPayloadAttestation.
//
// Spec pseudocode definition:
//
// def is_valid_indexed_payload_attestation(
// state: BeaconState,
// indexed_payload_attestation: IndexedPayloadAttestation) -> bool:
// """
// Check if ``indexed_payload_attestation`` is not empty, has sorted and unique indices and has
// a valid aggregate signature.
// """
// # Verify the data is valid
// if indexed_payload_attestation.data.payload_status >= PAYLOAD_INVALID_STATUS:
// return False
//
// # Verify indices are sorted and unique
// indices = indexed_payload_attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(set(indices)):
// return False
//
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_PTC_ATTESTER, None)
// signing_root = compute_signing_root(indexed_payload_attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, indexed_payload_attestation.signature)
func IsValidIndexedPayloadAttestation(state state.ReadOnlyBeaconState, att *epbs.IndexedPayloadAttestation) (bool, error) {
if state.Version() < version.EPBS {
return false, errPreEPBSState
}
// Verify the data is valid.
if att.Data.PayloadStatus >= primitives.PAYLOAD_INVALID_STATUS {
return false, nil
}
// Verify indices are sorted and unique. Clone before sorting so the
// attestation's own slice is not mutated, which would make the
// equality check pass vacuously.
indices := att.AttestingIndices
sorted := slices.Clone(indices)
slices.Sort(sorted)
sorted = slices.Compact(sorted)
if len(indices) == 0 || !slices.Equal(indices, sorted) {
return false, nil
}
// Verify aggregate signature.
publicKeys := make([]bls.PublicKey, len(indices))
for i, index := range indices {
validator, err := state.ValidatorAtIndexReadOnly(index)
if err != nil {
return false, err
}
publicKeyBytes := validator.PublicKey()
publicKey, err := bls.PublicKeyFromBytes(publicKeyBytes[:])
if err != nil {
return false, err
}
publicKeys[i] = publicKey
}
domain, err := signing.Domain(
state.Fork(),
slots.ToEpoch(state.Slot()),
params.BeaconConfig().DomainPTCAttester,
state.GenesisValidatorsRoot(),
)
if err != nil {
return false, err
}
signingRoot, err := signing.ComputeSigningRoot(att.Data, domain)
if err != nil {
return false, err
}
signature, err := bls.SignatureFromBytes(att.Signature)
if err != nil {
return false, err
}
return signature.FastAggregateVerify(publicKeys, signingRoot), nil
}
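The sorted-and-unique gate mirrors the spec's `indices == sorted(set(indices))`; cloning before sorting keeps the attestation itself untouched, and slices.Compact drops adjacent duplicates after the sort. The idiom in isolation:

package main

import (
    "fmt"
    "slices"
)

func sortedAndUnique(indices []uint64) bool {
    sorted := slices.Clone(indices)
    slices.Sort(sorted)
    sorted = slices.Compact(sorted) // adjacent duplicates collapse after sorting
    return len(indices) > 0 && slices.Equal(indices, sorted)
}

func main() {
    fmt.Println(sortedAndUnique([]uint64{1, 2, 5})) // true
    fmt.Println(sortedAndUnique([]uint64{2, 1, 5})) // false: unsorted
    fmt.Println(sortedAndUnique([]uint64{1, 1, 5})) // false: duplicate
}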
// ValidatePayloadAttestationMessageSignature verifies the signature of a
// payload attestation message.
func ValidatePayloadAttestationMessageSignature(ctx context.Context, st state.ReadOnlyBeaconState, msg *eth.PayloadAttestationMessage) error {
if err := ValidateNilPayloadAttestationMessage(msg); err != nil {
return err
}
val, err := st.ValidatorAtIndex(msg.ValidatorIndex)
if err != nil {
return err
}
pub, err := bls.PublicKeyFromBytes(val.PublicKey)
if err != nil {
return err
}
sig, err := bls.SignatureFromBytes(msg.Signature)
if err != nil {
return err
}
currentEpoch := slots.ToEpoch(st.Slot())
domain, err := signing.Domain(st.Fork(), currentEpoch, params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
if err != nil {
return err
}
root, err := signing.ComputeSigningRoot(msg.Data, domain)
if err != nil {
return err
}
if !sig.Verify(pub, root[:]) {
return signing.ErrSigFailedToVerify
}
return nil
}

View File

@@ -0,0 +1,363 @@
package helpers_test
import (
"context"
"slices"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/signing"
state_native "github.com/prysmaticlabs/prysm/v5/beacon-chain/state/state-native"
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/epbs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
"github.com/prysmaticlabs/prysm/v5/math"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/testing/util/random"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
func TestValidateNilPayloadAttestation(t *testing.T) {
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationData(nil))
data := &eth.PayloadAttestationData{}
require.ErrorIs(t, helpers.ErrNilBeaconBlockRoot, helpers.ValidateNilPayloadAttestationData(data))
data.BeaconBlockRoot = make([]byte, 32)
require.NoError(t, helpers.ValidateNilPayloadAttestationData(data))
require.ErrorIs(t, helpers.ErrNilMessage, helpers.ValidateNilPayloadAttestationMessage(nil))
message := &eth.PayloadAttestationMessage{}
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestationMessage(message))
message.Signature = make([]byte, 96)
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestationMessage(message))
message.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestationMessage(message))
require.ErrorIs(t, helpers.ErrNilPayloadAttestation, helpers.ValidateNilPayloadAttestation(nil))
att := &eth.PayloadAttestation{}
require.ErrorIs(t, helpers.ErrNilAggregationBits, helpers.ValidateNilPayloadAttestation(att))
att.AggregationBits = bitfield.NewBitvector512()
require.ErrorIs(t, helpers.ErrNilSignature, helpers.ValidateNilPayloadAttestation(att))
att.Signature = message.Signature
require.ErrorIs(t, helpers.ErrNilData, helpers.ValidateNilPayloadAttestation(att))
att.Data = data
require.NoError(t, helpers.ValidateNilPayloadAttestation(att))
}
func TestGetPayloadTimelinessCommittee(t *testing.T) {
helpers.ClearCache()
// Create 10 committees
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
k := make([]byte, 48)
copy(k, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: k,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
state, err := state_native.InitializeFromProtoEpbs(random.BeaconState(t))
require.NoError(t, err)
require.NoError(t, state.SetValidators(validators))
require.NoError(t, state.SetSlot(200))
ctx := context.Background()
indices, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 1)
require.NoError(t, err)
require.Equal(t, 128, len(indices))
epoch := slots.ToEpoch(state.Slot())
activeCount, err := helpers.ActiveValidatorCount(ctx, state, epoch)
require.NoError(t, err)
require.Equal(t, uint64(40960), activeCount)
computedCommitteeCount := helpers.SlotCommitteeCount(activeCount)
require.Equal(t, committeeCount, computedCommitteeCount)
committeesPerSlot := math.LargestPowerOfTwo(math.Min(committeeCount, fieldparams.PTCSize))
require.Equal(t, uint64(8), committeesPerSlot)
ptc, err := helpers.GetPayloadTimelinessCommittee(ctx, state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
committee1, err := helpers.BeaconCommitteeFromState(ctx, state, state.Slot(), 0)
require.NoError(t, err)
require.DeepEqual(t, committee1[:64], ptc[:64])
}
func Test_PtcAllocation(t *testing.T) {
tests := []struct {
committeeCount int
memberPerCommittee uint64
committeesPerSlot uint64
}{
{1, 512, 1},
{4, 128, 4},
{128, 4, 128},
{512, 1, 512},
{1024, 1, 512},
}
for _, test := range tests {
committeesPerSlot, memberPerCommittee := helpers.PtcAllocation(test.committeeCount)
if memberPerCommittee != test.memberPerCommittee {
t.Errorf("memberPerCommittee(%d) = %d; expected %d", test.committeeCount, memberPerCommittee, test.memberPerCommittee)
}
if committeesPerSlot != test.committeesPerSlot {
t.Errorf("committeesPerSlot(%d) = %d; expected %d", test.committeeCount, committeesPerSlot, test.committeesPerSlot)
}
}
}
func TestGetPayloadAttestingIndices(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
pubkey := make([]byte, 48)
copy(pubkey, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: pubkey,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Get PTC.
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
// Generate random indices. PTC members at the corresponding indices are considered to have attested.
randGen := rand.NewDeterministicGenerator()
attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
slices.Sort(indices)
require.Equal(t, attesterCount, len(indices))
// Create a PayloadAttestation whose AggregationBits are set at those indices.
aggregationBits := bitfield.NewBitvector512()
for _, index := range indices {
aggregationBits.SetBitAt(uint64(index), true)
}
payloadAttestation := &eth.PayloadAttestation{
AggregationBits: aggregationBits,
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
// Get attesting indices.
attesters, err := helpers.GetPayloadAttestingIndices(context.Background(), state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(attesters))
// Check that each attester equals the PTC member at the corresponding index.
for i, index := range indices {
require.Equal(t, attesters[i], ptc[index])
}
}
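
// A minimal sketch of the mapping this test checks: the attesting indices
// appear to be exactly the PTC members whose aggregation bit is set, in bit
// order. Hypothetical illustration, not the
// helpers.GetPayloadAttestingIndices source.
func payloadAttestingIndicesSketch(ptc []primitives.ValidatorIndex, att *eth.PayloadAttestation) []primitives.ValidatorIndex {
	attesters := make([]primitives.ValidatorIndex, 0, len(ptc))
	for i, member := range ptc {
		if att.AggregationBits.BitAt(uint64(i)) {
			attesters = append(attesters, member)
		}
	}
	return attesters
}
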
func TestGetIndexedPayloadAttestation(t *testing.T) {
helpers.ClearCache()
// Create 10 committees. Total 40960 validators.
committeeCount := uint64(10)
validatorCount := committeeCount * params.BeaconConfig().TargetCommitteeSize * uint64(params.BeaconConfig().SlotsPerEpoch)
validators := make([]*ethpb.Validator, validatorCount)
for i := 0; i < len(validators); i++ {
publicKey := make([]byte, 48)
copy(publicKey, strconv.Itoa(i))
validators[i] = &ethpb.Validator{
PublicKey: publicKey,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Get PTC.
ptc, err := helpers.GetPayloadTimelinessCommittee(context.Background(), state, state.Slot())
require.NoError(t, err)
require.Equal(t, fieldparams.PTCSize, len(ptc))
// Generate random indices. PTC members at the corresponding indices are considered to have attested.
randGen := rand.NewDeterministicGenerator()
attesterCount := randGen.Intn(fieldparams.PTCSize) + 1
indices := randGen.Perm(fieldparams.PTCSize)[:attesterCount]
slices.Sort(indices)
require.Equal(t, attesterCount, len(indices))
// Create a PayloadAttestation whose AggregationBits are set at those indices.
aggregationBits := bitfield.NewBitvector512()
for _, index := range indices {
aggregationBits.SetBitAt(uint64(index), true)
}
payloadAttestation := &eth.PayloadAttestation{
AggregationBits: aggregationBits,
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, 32),
},
Signature: make([]byte, 96),
}
// Get attesting indices.
ctx := context.Background()
attesters, err := helpers.GetPayloadAttestingIndices(ctx, state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(attesters))
// Get an IndexedPayloadAttestation.
indexedPayloadAttestation, err := helpers.GetIndexedPayloadAttestation(ctx, state, state.Slot(), payloadAttestation)
require.NoError(t, err)
require.Equal(t, len(indices), len(indexedPayloadAttestation.AttestingIndices))
require.DeepEqual(t, payloadAttestation.Data, indexedPayloadAttestation.Data)
require.DeepEqual(t, payloadAttestation.Signature, indexedPayloadAttestation.Signature)
// Check that the attesting indices match.
slices.Sort(attesters) // GetIndexedPayloadAttestation sorts attesting indices.
require.DeepEqual(t, attesters, indexedPayloadAttestation.AttestingIndices)
}
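
// A minimal sketch of the conversion the assertions above imply: sort the
// attesting indices ascending and carry Data and Signature over unchanged.
// Hypothetical illustration, not the helpers.GetIndexedPayloadAttestation
// source.
func indexedPayloadAttestationSketch(attesters []primitives.ValidatorIndex, att *eth.PayloadAttestation) *epbs.IndexedPayloadAttestation {
	sorted := make([]primitives.ValidatorIndex, len(attesters))
	copy(sorted, attesters)
	slices.Sort(sorted)
	return &epbs.IndexedPayloadAttestation{
		AttestingIndices: sorted,
		Data:             att.Data,
		Signature:        att.Signature,
	}
}
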
func TestIsValidIndexedPayloadAttestation(t *testing.T) {
helpers.ClearCache()
// Create validators.
validatorCount := uint64(350)
validators := make([]*ethpb.Validator, validatorCount)
_, secretKeys, err := util.DeterministicDepositsAndKeys(validatorCount)
require.NoError(t, err)
for i := 0; i < len(validators); i++ {
validators[i] = &ethpb.Validator{
PublicKey: secretKeys[i].PublicKey().Marshal(),
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
// Create a beacon state.
state, err := state_native.InitializeFromProtoEpbs(&ethpb.BeaconStateEPBS{
Validators: validators,
Fork: &ethpb.Fork{
Epoch: 0,
CurrentVersion: params.BeaconConfig().GenesisForkVersion,
PreviousVersion: params.BeaconConfig().GenesisForkVersion,
},
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
})
require.NoError(t, err)
// Define test cases.
tests := []struct {
attestation *epbs.IndexedPayloadAttestation
}{
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{1},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{13, 19},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{123, 234, 345},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{38, 46, 54, 62, 70, 78, 86, 194},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
{
attestation: &epbs.IndexedPayloadAttestation{
AttestingIndices: []primitives.ValidatorIndex{5},
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: make([]byte, fieldparams.RootLength),
},
Signature: make([]byte, fieldparams.BLSSignatureLength),
},
},
}
// Run test cases.
for _, test := range tests {
signatures := make([]bls.Signature, len(test.attestation.AttestingIndices))
for i, index := range test.attestation.AttestingIndices {
signedBytes, err := signing.ComputeDomainAndSign(
state,
slots.ToEpoch(test.attestation.Data.Slot),
test.attestation.Data,
params.BeaconConfig().DomainPTCAttester,
secretKeys[index],
)
require.NoError(t, err)
signature, err := bls.SignatureFromBytes(signedBytes)
require.NoError(t, err)
signatures[i] = signature
}
aggregatedSignature := bls.AggregateSignatures(signatures)
test.attestation.Signature = aggregatedSignature.Marshal()
isValid, err := helpers.IsValidIndexedPayloadAttestation(state, test.attestation)
require.NoError(t, err)
require.Equal(t, true, isValid)
}
}
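
// A minimal sketch of the verification path this test exercises, assuming
// Prysm's signing.Domain and signing.ComputeSigningRoot helpers and the BLS
// FastAggregateVerify primitive (the function and its parameters are
// hypothetical; pubkeys must be the marshaled keys of att.AttestingIndices in
// order). Illustrative only, not the helpers.IsValidIndexedPayloadAttestation
// source.
func verifyIndexedPayloadAttestationSketch(fork *ethpb.Fork, genesisValidatorsRoot []byte, pubkeys [][]byte, att *epbs.IndexedPayloadAttestation) (bool, error) {
	// Compute the PTC attester domain for the attestation's epoch.
	domain, err := signing.Domain(fork, slots.ToEpoch(att.Data.Slot), params.BeaconConfig().DomainPTCAttester, genesisValidatorsRoot)
	if err != nil {
		return false, err
	}
	// Hash the attestation data into the signing root.
	root, err := signing.ComputeSigningRoot(att.Data, domain)
	if err != nil {
		return false, err
	}
	keys := make([]bls.PublicKey, len(pubkeys))
	for i, pk := range pubkeys {
		key, err := bls.PublicKeyFromBytes(pk)
		if err != nil {
			return false, err
		}
		keys[i] = key
	}
	sig, err := bls.SignatureFromBytes(att.Signature)
	if err != nil {
		return false, err
	}
	// Every attester signs the same root, so fast aggregate verification applies.
	return sig.FastAggregateVerify(keys, root), nil
}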
