Compare commits


30 Commits

Author SHA1 Message Date
potuz
c9671c15cb Add InsertPayload to gloas forkchoice 2026-02-11 14:24:31 -03:00
potuz
306053d194 review 1 2026-02-11 13:01:26 -03:00
potuz
1891117c1d copy on allConsensusChildren 2026-02-11 13:01:26 -03:00
potuz
0417482237 Pass right header 2026-02-11 13:01:26 -03:00
potuz
a7c93b5ee4 fix some tests 2026-02-11 13:01:26 -03:00
potuz
83033ebbfd add changelog 2026-02-11 13:01:26 -03:00
potuz
df08bb006a Fix engine tests 2026-02-11 13:01:26 -03:00
potuz
93244f8839 fix remaining tests 2026-02-11 13:01:26 -03:00
potuz
f363d34df3 fix reorg late blocks logic 2026-02-11 13:01:26 -03:00
potuz
29466255fe Fix more tests 2026-02-11 13:01:26 -03:00
potuz
b495d3f65d fix optimistic sync pruning tests 2026-02-11 13:01:26 -03:00
potuz
ee9116d187 fix some tests 2026-02-11 13:01:26 -03:00
potuz
821b5e5f20 Compile tests 2026-02-11 13:01:26 -03:00
potuz
11a03f50a9 hash -> root 2026-02-11 13:01:26 -03:00
potuz
172299c578 Gloas Initial forkchoice implementation
This is the bare minimum to make the **current fork** logic compatible
with having full/empty slots in forkchoice. That is, forkchoice is
designed as if we had the option of full/empty slots, but we still
create the full node when we import a block pre-gloas.

No PTC and no ordering between empty/full is implemented yet; it just
always chooses the full child of the two.

The *Node structure corresponds to the `PAYLOAD_PENDING_STATUS` in the
spec, while the enclosing *PayloadNode structures are the full
forkchoice nodes that have either full or empty status.
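
Roughly, the shape described is the following (a hedged sketch; field
names are illustrative, not the actual Prysm types):

```go
// Node is the consensus-only node; its payload is still PAYLOAD_PENDING_STATUS.
type Node struct {
	root     [32]byte
	parent   *PayloadNode
	children []*PayloadNode
}

// PayloadNode encloses a Node and resolves its payload status.
type PayloadNode struct {
	node *Node
	full bool // true if the payload was revealed (full), false if empty
}
```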

The most complicated path to review in this PR is the workings of
`SetOptimisticToInvalid`, which handles invalid block notifications from
the EL. There are many cases: the passed root may or may not exist in
forkchoice, and the invalid node may have been built on top of a full or
an empty node. This is particularly ugly because, if the node is not in
forkchoice, we still need to find out whether its parent was full or
empty.
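
In outline, the case analysis looks roughly like this (illustrative
only, not the real code; names are hypothetical):

```go
// handleInvalid sketches the shape of the case split in SetOptimisticToInvalid.
func handleInvalid(rootInForkchoice, parentInForkchoice bool) {
	switch {
	case rootInForkchoice:
		// Invalidate the payload node for root and all of its full descendants.
	case parentInForkchoice:
		// The invalid block never made it into forkchoice: use the parent hash
		// reported by the EL to decide whether it was built on the parent's
		// full or empty payload node, then invalidate from there.
	default:
		// Neither node is known; nothing to prune.
	}
}
```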

- [x] compiling beacon chain
- [ ] fix tests
2026-02-11 13:01:25 -03:00
james-prysm
eaf3aa3e8e simplify get and post block parsing for REST (#16307)
**What type of PR is this?**

 Feature


**What does this PR do? Why is it needed?**

This PR attempts to remove code duplication by processing the get and
post block APIs through a map of configurations. This should improve
maintainability.

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-10 18:28:55 +00:00
terence
9c49bb484c Add gossip for payload attestation (#16333)
This PR implements gossip validation and subscription for payload
attestations.
2026-02-10 16:17:22 +00:00
terence
bb0f70ad60 gloas: add read only wrapper for payload envelope (#16339)
This PR adds a read-only wrapper for the execution payload envelope.
2026-02-09 17:23:12 +00:00
satushh
dc66f8872d Close libp2p host (#16313)
**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

Close the libp2p host to save resources.
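
A minimal sketch of the shutdown path (the `Service` wrapper is
hypothetical; libp2p's `host.Host` implements `io.Closer`):

```go
import "github.com/libp2p/go-libp2p/core/host"

// Service is a hypothetical wrapper owning a libp2p host.
type Service struct {
	host host.Host
}

// Stop releases the host's network resources on shutdown.
func (s *Service) Stop() error {
	if s.host != nil {
		return s.host.Close()
	}
	return nil
}
```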

**Which issues(s) does this PR fix?**

Fixes #

**Other notes for review**

**Acknowledgements**

- [ ] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [ ] I have added a description with sufficient context for reviewers
to understand this PR.
- [ ] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: Bastin <43618253+Inspector-Butters@users.noreply.github.com>
2026-02-09 14:40:07 +00:00
Preston Van Loon
db2bb5505c db: Copy byte slices that live outside of the view transaction (#16332)
**What type of PR is this?**

Bug fix

**What does this PR do? Why is it needed?**

The bbolt documentation suggests that byte slices obtained during a
View transaction should not be used outside of the transaction, and
that mutating those slices could break things.
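
A minimal sketch of the safe pattern, assuming a hypothetical
`"blocks"` bucket and key (not the actual Prysm code):

```go
import (
	"bytes"

	bolt "go.etcd.io/bbolt"
)

// readBlock copies the value out of the View transaction before returning it,
// since bbolt's byte slices are only valid for the life of the transaction.
func readBlock(db *bolt.DB, key []byte) ([]byte, error) {
	var out []byte
	err := db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("blocks"))
		if b == nil {
			return nil
		}
		if v := b.Get(key); v != nil {
			out = bytes.Clone(v) // copy: v is invalid after the transaction ends
		}
		return nil
	})
	return out, err
}
```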

**Which issues(s) does this PR fix?**

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-07 14:10:07 +00:00
terence
14f01bbc6c gloas: move kzg commitments to bid (#16309)
This PR moves kzg commitments to the bid. The rationale behind this is
captured in this [issue](https://github.com/ethereum/consensus-specs/issues/4870)
* Moves blob KZG commitments to the earlier point where builder intent
is known
* Removes duplicated commitments from data column sidecars, which saves
decent bandwidth per slot
* Enables nodes to start fetching blobs via getBlobs as soon as the bid
is received
* Slightly increases bid size and may add minor bidding channel latency
but the tradeoff favors lower network load and simpler DA handling
2026-02-06 23:07:54 +00:00
Potuz
c3e74e4a5d Remove unused method in forkchoice (#16337) 2026-02-06 19:21:58 +00:00
Potuz
e7ae6a004b Remove unused method in forkchoice (#16331) 2026-02-06 15:30:50 +00:00
Bastin
862fb2eb4a Fix gen-logs.sh - gitignore bug (#16328)
**What does this PR do? Why is it needed?**
`gen-logs.sh` was skipping `cmd/beacon-chain/execution/` due to a rule
in `.gitignore`.
Added a fix in `gen-logs.sh` to ignore `.gitignore` entries by
specification.
2026-02-05 14:06:42 +00:00
Potuz
bb80a9c832 Remove unused map in forkchoice (#16329)
nodeByPayload was not being used.
2026-02-05 13:25:06 +00:00
Bastin
c1b668a50a Fix logging issue (#16322)
**What does this PR do? Why is it needed?**
This PR, in an attempt to fix the logging issue described in #16314,
does the following:
- Adds a new field `Identifier` to the `WriterHook` struct, and filters
out log entries that have the key `log_target` and the value of the
hook's `Identifier`. For now the identifiers are `ephemeral` and `user`,
differentiating between the user facing terminal/log file, and the
debugger facing ephemeral log file.
- Stores the value of the `--verbosity` and `--log.vmodule` flags in
`io/logs`, so it can be accessed by packages that need to know the
verbosity they're logging with. (note that since #16272 each package can
have a different verbosity, so verbosity is now defined per package
instead of globally)
- Improves the calculation of the global logging level by ignoring the
`ephemeralLogFileVerbosity` when the `--disable-ephemeral-log-file` flag
is enabled.
- Uses this added logic to fix the problem in
`logStateTransitionData()` (described in #16314)

Note: since we're saving this new data in `io/logs`, we should refactor
`prefixFormatter` to read the data from there, but that creates a
circular import error. I will try to fix this and refactor the formatter
in a future PR.
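
A rough sketch of the filtering described above (one plausible reading;
type and field names beyond `Identifier` and `log_target` are
illustrative, not the actual `io/logs` code):

```go
import (
	"io"

	"github.com/sirupsen/logrus"
)

// WriterHook writes entries to one destination and skips entries that are
// explicitly targeted at a different destination via the log_target field.
type WriterHook struct {
	Writer     io.Writer
	Identifier string // "ephemeral" or "user"
}

func (h *WriterHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *WriterHook) Fire(entry *logrus.Entry) error {
	if target, ok := entry.Data["log_target"].(string); ok && target != h.Identifier {
		return nil // tagged for another destination; drop it here
	}
	line, err := entry.String()
	if err != nil {
		return err
	}
	_, err = h.Writer.Write([]byte(line))
	return err
}
```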

---------

Co-authored-by: Manu NALEPA <enalepa@offchainlabs.com>
2026-02-05 10:24:50 +00:00
Justin Traglia
fab687d96d Improve ethspecify integration (#16304)
**What type of PR is this?**

Documentation

**What does this PR do? Why is it needed?**

* Move the ethspecify config from `/specrefs/.ethspecify` to
`/.ethspecify`.
* This allows developers to use inline specrefs (e.g. spec functions in
godoc comments).
* To do this, simply add a spec tag and run `ethspecify` to populate it.
* Clean up specref exceptions; organize by upgrade & put items in the
correct section.
* Update a few godoc comments to use the new inline specref feature.
* Update check-specrefs GitHub action so that it enforces up-to-date
godocs.
* Standardize specref naming, requiring a `#fork` tag for everything.
* Add new specrefs (not yet implemented) that were missing.

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).

---------

Co-authored-by: james-prysm <90280386+james-prysm@users.noreply.github.com>
2026-02-04 18:44:01 +00:00
james-prysm
cf94ccbf72 node fallback cleanup (#16316)
**What type of PR is this?**

 Other

**What does this PR do? Why is it needed?**

Follow-up to https://github.com/OffchainLabs/prysm/pull/16215: this PR
improves logging, fixes stuttering in package naming, adds additional
unit tests, and deduplicates fallback node code.

**Which issues(s) does this PR fix?**

Fixes a potential race when reconnecting very quickly to the same host
while it still has a stale connection.

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-04 15:59:42 +00:00
Aarsh Shah
75895c1e0b fix: Set Beacon Node Options after reading the config file (#16320)
**What type of PR is this?**
Bug fix

**What does this PR do? Why is it needed?**
This PR ensures that we set the beacon node options AFTER reading the
config file (if one is given to override the defaults).

**Which issues(s) does this PR fix?**
It fixes the issue that Barnabas reported around the
"MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS" override not being
respected (and potentially other issues resulting from setting the
options before reading the config).

Fixes #

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-03 16:51:01 +00:00
Preston Van Loon
d1b9281677 golangci-lint: Remove test exclusion from formatting (#16318)
**What type of PR is this?**

> Other

**What does this PR do? Why is it needed?**

**Which issues(s) does this PR fix?**

Follow up to #16311

**Other notes for review**

**Acknowledgements**

- [x] I have read
[CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [x] I have included a uniquely named [changelog fragment
file](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md#maintaining-changelogmd).
- [x] I have added a description with sufficient context for reviewers
to understand this PR.
- [x] I have tested that my changes work as expected and I added a
testing plan to the PR description (if applicable).
2026-02-02 17:42:05 +00:00
236 changed files with 9984 additions and 5048 deletions

View File

@@ -1,25 +1,39 @@
version: v1.7.0-alpha.1
version: v1.7.0-alpha.2
style: full
specrefs:
search_root: ..
search_root: .
auto_standardize_names: true
auto_add_missing_entries: true
require_exceptions_have_fork: true
files:
- configs.yml
- constants.yml
- containers.yml
- dataclasses.yml
- functions.yml
- presets.yml
- specrefs/configs.yml
- specrefs/constants.yml
- specrefs/containers.yml
- specrefs/dataclasses.yml
- specrefs/functions.yml
- specrefs/presets.yml
exceptions:
presets:
# Not implemented: gloas (future fork)
# gloas
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
- MAX_PAYLOAD_ATTESTATIONS#gloas
- PTC_SIZE#gloas
constants:
# Constants in the KZG library
# phase0
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
- MAX_CONCURRENT_REQUESTS#phase0
- UINT64_MAX#phase0
- UINT64_MAX_SQRT#phase0
# altair
- PARTICIPATION_FLAG_WEIGHTS#altair
# bellatrix
- SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY#bellatrix
# deneb
- BLS_MODULUS#deneb
- BYTES_PER_COMMITMENT#deneb
- BYTES_PER_FIELD_ELEMENT#deneb
@@ -33,18 +47,9 @@ exceptions:
- PRIMITIVE_ROOT_OF_UNITY#deneb
- RANDOM_CHALLENGE_KZG_BATCH_DOMAIN#deneb
- RANDOM_CHALLENGE_KZG_CELL_BATCH_DOMAIN#fulu
# Not implemented
- BASIS_POINTS#phase0
- ENDIANNESS#phase0
- MAX_CONCURRENT_REQUESTS#phase0
- PARTICIPATION_FLAG_WEIGHTS#altair
- SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY#bellatrix
# fulu
- UINT256_MAX#fulu
- UINT64_MAX#phase0
- UINT64_MAX_SQRT#phase0
# Not implemented: gloas (future fork)
# gloas
- BUILDER_PAYMENT_THRESHOLD_DENOMINATOR#gloas
- BUILDER_PAYMENT_THRESHOLD_NUMERATOR#gloas
- BUILDER_WITHDRAWAL_PREFIX#gloas
@@ -61,61 +66,62 @@ exceptions:
- PTC_TIMELINESS_INDEX#gloas
configs:
# Not implemented: gloas (future fork)
# gloas
- AGGREGATE_DUE_BPS_GLOAS#gloas
- ATTESTATION_DUE_BPS_GLOAS#gloas
- CONTRIBUTION_DUE_BPS_GLOAS#gloas
- GLOAS_FORK_EPOCH#gloas
- GLOAS_FORK_VERSION#gloas
- MAX_REQUEST_PAYLOADS#gloas
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
- PAYLOAD_ATTESTATION_DUE_BPS#gloas
- SYNC_MESSAGE_DUE_BPS_GLOAS#gloas
- MIN_BUILDER_WITHDRAWABILITY_DELAY#gloas
ssz_objects:
# Not implemented
# phase0
- Eth1Block#phase0
- MatrixEntry#fulu
# Not implemented: capella
# capella
- LightClientBootstrap#capella
- LightClientFinalityUpdate#capella
- LightClientOptimisticUpdate#capella
- LightClientUpdate#capella
# Not implemented: gloas (future fork)
# fulu
- MatrixEntry#fulu
# gloas
- BeaconBlockBody#gloas
- BeaconState#gloas
- Builder#gloas
- BuilderPendingPayment#gloas
- BuilderPendingWithdrawal#gloas
- DataColumnSidecar#gloas
- ExecutionPayloadEnvelope#gloas
- ExecutionPayloadBid#gloas
- ExecutionPayloadEnvelope#gloas
- ForkChoiceNode#gloas
- IndexedPayloadAttestation#gloas
- PayloadAttestation#gloas
- PayloadAttestationData#gloas
- PayloadAttestationMessage#gloas
- SignedExecutionPayloadEnvelope#gloas
- SignedExecutionPayloadBid#gloas
- Builder#gloas
- ProposerPreferences#gloas
- SignedExecutionPayloadBid#gloas
- SignedExecutionPayloadEnvelope#gloas
- SignedProposerPreferences#gloas
dataclasses:
# Not implemented
- BlobParameters#fulu
- ExpectedWithdrawals#capella
- ExpectedWithdrawals#electra
# phase0
- LatestMessage#phase0
- LightClientStore#altair
- OptimisticStore#bellatrix
- Store#phase0
# Not implemented: capella
# altair
- LightClientStore#altair
# bellatrix
- OptimisticStore#bellatrix
# capella
- ExpectedWithdrawals#capella
- LightClientStore#capella
# Not implemented: gloas (future fork)
# electra
- ExpectedWithdrawals#electra
# fulu
- BlobParameters#fulu
# gloas
- ExpectedWithdrawals#gloas
- LatestMessage#gloas
- Store#gloas
@@ -140,7 +146,6 @@ exceptions:
- g1_lincomb#deneb
- hash_to_bls_field#deneb
- is_power_of_two#deneb
- multi_exp#deneb
- reverse_bits#deneb
- validate_kzg_g1#deneb
- verify_blob_kzg_proof#deneb
@@ -175,7 +180,12 @@ exceptions:
- verify_cell_kzg_proof_batch#fulu
- verify_cell_kzg_proof_batch_impl#fulu
# Not implemented: phase0
# phase0
- update_proposer_boost_root#phase0
- is_proposer_equivocation#phase0
- record_block_timeliness#phase0
- compute_proposer_score#phase0
- get_attestation_score#phase0
- calculate_committee_fraction#phase0
- compute_fork_version#phase0
- compute_pulled_up_tip#phase0
@@ -221,8 +231,7 @@ exceptions:
- validate_on_attestation#phase0
- validate_target_epoch_against_current_time#phase0
- xor#phase0
# Not implemented: altair
# altair
- compute_merkle_proof#altair
- compute_sync_committee_period_at_slot#altair
- get_contribution_and_proof#altair
@@ -244,27 +253,29 @@ exceptions:
- process_sync_committee_contributions#altair
- set_or_append_list#altair
- validate_light_client_update#altair
# Not implemented: bellatrix
# bellatrix
- get_execution_payload#bellatrix
- is_merge_transition_block#bellatrix
- is_optimistic_candidate_block#bellatrix
- latest_verified_ancestor#bellatrix
- prepare_execution_payload#bellatrix
# Not implemented: capella
# capella
- apply_withdrawals#capella
- get_balance_after_withdrawals#capella
- get_lc_execution_root#capella
- get_validators_sweep_withdrawals#capella
- is_valid_light_client_header#capella
- prepare_execution_payload#capella
- process_epoch#capella
- update_next_withdrawal_index#capella
- update_next_withdrawal_validator_index#capella
- upgrade_lc_bootstrap_to_capella#capella
- upgrade_lc_finality_update_to_capella#capella
- upgrade_lc_header_to_capella#capella
- upgrade_lc_optimistic_update_to_capella#capella
- upgrade_lc_store_to_capella#capella
- upgrade_lc_update_to_capella#capella
# Not implemented: deneb
# deneb
- get_lc_execution_root#deneb
- is_valid_light_client_header#deneb
- prepare_execution_payload#deneb
@@ -274,33 +285,34 @@ exceptions:
- upgrade_lc_optimistic_update_to_deneb#deneb
- upgrade_lc_store_to_deneb#deneb
- upgrade_lc_update_to_deneb#deneb
# Not implemented: electra
# electra
- compute_weak_subjectivity_period#electra
- current_sync_committee_gindex_at_slot#electra
- finalized_root_gindex_at_slot#electra
- get_eth1_vote#electra
- get_lc_execution_root#electra
- get_pending_partial_withdrawals#electra
- get_validators_sweep_withdrawals#electra
- is_compounding_withdrawal_credential#electra
- is_eligible_for_partial_withdrawals#electra
- is_within_weak_subjectivity_period#electra
- next_sync_committee_gindex_at_slot#electra
- normalize_merkle_branch#electra
- prepare_execution_payload#electra
- update_pending_partial_withdrawals#electra
- upgrade_lc_bootstrap_to_electra#electra
- upgrade_lc_finality_update_to_electra#electra
- upgrade_lc_header_to_electra#electra
- upgrade_lc_optimistic_update_to_electra#electra
- upgrade_lc_store_to_electra#electra
- upgrade_lc_update_to_electra#electra
# Not implemented: fulu
# fulu
- compute_matrix#fulu
- get_blob_parameters#fulu
- get_data_column_sidecars_from_block#fulu
- get_data_column_sidecars_from_column_sidecar#fulu
- recover_matrix#fulu
# Not implemented: gloas (future fork)
# gloas
- compute_balance_weighted_acceptance#gloas
- compute_balance_weighted_selection#gloas
- compute_fork_version#gloas
@@ -368,49 +380,42 @@ exceptions:
- verify_execution_payload_bid_signature#gloas
- add_builder_to_registry#gloas
- apply_deposit_for_builder#gloas
- apply_withdrawals#capella
- apply_withdrawals#gloas
- can_builder_cover_bid#gloas
- compute_proposer_score#phase0
- convert_builder_index_to_validator_index#gloas
- convert_validator_index_to_builder_index#gloas
- get_attestation_score#gloas
- get_attestation_score#phase0
- get_balance_after_withdrawals#capella
- get_builder_from_deposit#gloas
- get_builder_withdrawals#gloas
- get_builders_sweep_withdrawals#gloas
- get_index_for_new_builder#gloas
- get_pending_balance_to_withdraw_for_builder#gloas
- get_pending_partial_withdrawals#electra
- get_proposer_preferences_signature#gloas
- get_upcoming_proposal_slots#gloas
- get_validators_sweep_withdrawals#capella
- get_validators_sweep_withdrawals#electra
- initiate_builder_exit#gloas
- is_active_builder#gloas
- is_builder_index#gloas
- is_data_available#gloas
- is_eligible_for_partial_withdrawals#electra
- is_head_late#gloas
- is_head_weak#gloas
- is_parent_strong#gloas
- is_proposer_equivocation#phase0
- is_valid_proposal_slot#gloas
- onboard_builders_from_pending_deposits#gloas
- process_deposit_request#gloas
- process_voluntary_exit#gloas
- record_block_timeliness#gloas
- record_block_timeliness#phase0
- verify_data_column_sidecar_kzg_proofs#gloas
- should_apply_proposer_boost#gloas
- update_builder_pending_withdrawals#gloas
- update_next_withdrawal_builder_index#gloas
- update_next_withdrawal_index#capella
- update_next_withdrawal_validator_index#capella
- update_payload_expected_withdrawals#gloas
- update_pending_partial_withdrawals#electra
- update_proposer_boost_root#gloas
- update_proposer_boost_root#phase0
presets:
# gloas
- BUILDER_PENDING_WITHDRAWALS_LIMIT#gloas
- BUILDER_REGISTRY_LIMIT#gloas
- MAX_BUILDERS_PER_WITHDRAWALS_SWEEP#gloas

View File

@@ -12,11 +12,11 @@ jobs:
- name: Check version consistency
run: |
WORKSPACE_VERSION=$(grep 'consensus_spec_version = ' WORKSPACE | sed 's/.*"\(.*\)"/\1/')
ETHSPECIFY_VERSION=$(grep '^version:' specrefs/.ethspecify.yml | sed 's/version: //')
ETHSPECIFY_VERSION=$(grep '^version:' .ethspecify.yml | sed 's/version: //')
if [ "$WORKSPACE_VERSION" != "$ETHSPECIFY_VERSION" ]; then
echo "Version mismatch between WORKSPACE and ethspecify"
echo " WORKSPACE: $WORKSPACE_VERSION"
echo " specrefs/.ethspecify.yml: $ETHSPECIFY_VERSION"
echo " .ethspecify.yml: $ETHSPECIFY_VERSION"
exit 1
else
echo "Versions match: $WORKSPACE_VERSION"
@@ -26,7 +26,7 @@ jobs:
run: python3 -mpip install ethspecify
- name: Update spec references
run: ethspecify process --path=specrefs
run: ethspecify
- name: Check for differences
run: |
@@ -40,4 +40,4 @@ jobs:
fi
- name: Check spec references
run: ethspecify check --path=specrefs
run: ethspecify check

View File

@@ -2,7 +2,7 @@ name: Go
on:
push:
branches: [ master ]
branches: [ master, develop ]
pull_request:
branches: [ '*' ]
merge_group:

View File

@@ -33,9 +33,8 @@ formatters:
generated: lax
paths:
- validator/web/site_data.go
- .*_test.go
- proto
- tools/analyzers
- third_party$
- builtin$
- examples$
- examples$

View File

@@ -273,16 +273,16 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.7.0-alpha.1"
consensus_spec_version = "v1.7.0-alpha.2"
load("@prysm//tools:download_spectests.bzl", "consensus_spec_tests")
consensus_spec_tests(
name = "consensus_spec_tests",
flavors = {
"general": "sha256-j5R3jA7Oo4OSDMTvpMuD+8RomaCByeFSwtfkq6fL0Zg=",
"minimal": "sha256-tdTqByoyswOS4r6OxFmo70y2BP7w1TgEok+gf4cbxB0=",
"mainnet": "sha256-5gB4dt6SnSDKzdBc06VedId3NkgvSYyv9n9FRxWKwYI=",
"general": "sha256-iGQsGZ1cHah+2CSod9jC3kN8Ku4n6KO0hIwfINrn/po=",
"minimal": "sha256-TgcYt8N8sXSttdHTGvOa+exUZ1zn1UzlAMz0V7i37xc=",
"mainnet": "sha256-LnXyiLoJtrvEvbqLDSAAqpLMdN/lXv92SAgYG8fNjCs=",
},
version = consensus_spec_version,
)
@@ -298,7 +298,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
integrity = "sha256-J+43DrK1pF658kTXTwMS6zGf4KDjvas++m8w2a8swpg=",
integrity = "sha256-Y/67Dg393PksZj5rTFNLntiJ6hNdB7Rxbu5gZE2gebY=",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

19
api/fallback/BUILD.bazel Normal file
View File

@@ -0,0 +1,19 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"fallback.go",
"log.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/api/fallback",
visibility = ["//visibility:public"],
deps = ["@com_github_sirupsen_logrus//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["fallback_test.go"],
embed = [":go_default_library"],
deps = ["//testing/assert:go_default_library"],
)

66
api/fallback/fallback.go Normal file
View File

@@ -0,0 +1,66 @@
package fallback
import (
"context"
"github.com/sirupsen/logrus"
)
// HostProvider is the subset of connection-provider methods that EnsureReady
// needs. Both grpc.GrpcConnectionProvider and rest.RestConnectionProvider
// satisfy this interface.
type HostProvider interface {
Hosts() []string
CurrentHost() string
SwitchHost(index int) error
}
// ReadyChecker can report whether the current endpoint is ready.
// iface.NodeClient satisfies this implicitly.
type ReadyChecker interface {
IsReady(ctx context.Context) bool
}
// EnsureReady iterates through the configured hosts and returns true as soon as
// one responds as ready. It starts from the provider's current host and wraps
// around using modular arithmetic, performing failover when a host is not ready.
func EnsureReady(ctx context.Context, provider HostProvider, checker ReadyChecker) bool {
hosts := provider.Hosts()
numHosts := len(hosts)
startingHost := provider.CurrentHost()
var attemptedHosts []string
// Find current index
currentIdx := 0
for i, h := range hosts {
if h == startingHost {
currentIdx = i
break
}
}
for i := range numHosts {
if checker.IsReady(ctx) {
if len(attemptedHosts) > 0 {
log.WithFields(logrus.Fields{
"previous": startingHost,
"current": provider.CurrentHost(),
"tried": attemptedHosts,
}).Info("Switched to responsive beacon node")
}
return true
}
attemptedHosts = append(attemptedHosts, provider.CurrentHost())
// Try next host if not the last iteration
if i < numHosts-1 {
nextIdx := (currentIdx + i + 1) % numHosts
if err := provider.SwitchHost(nextIdx); err != nil {
log.WithError(err).Error("Failed to switch host")
}
}
}
log.WithField("tried", attemptedHosts).Warn("No responsive beacon node found")
return false
}

View File

@@ -0,0 +1,94 @@
package fallback
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/testing/assert"
)
// mockHostProvider is a minimal HostProvider for unit tests.
type mockHostProvider struct {
hosts []string
hostIndex int
}
func (m *mockHostProvider) Hosts() []string { return m.hosts }
func (m *mockHostProvider) CurrentHost() string {
return m.hosts[m.hostIndex%len(m.hosts)]
}
func (m *mockHostProvider) SwitchHost(index int) error { m.hostIndex = index; return nil }
// mockReadyChecker records per-call IsReady results in sequence.
type mockReadyChecker struct {
results []bool
idx int
}
func (m *mockReadyChecker) IsReady(_ context.Context) bool {
if m.idx >= len(m.results) {
return false
}
r := m.results[m.idx]
m.idx++
return r
}
func TestEnsureReady_SingleHostReady(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}
func TestEnsureReady_SingleHostNotReady(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_SingleHostError(t *testing.T) {
provider := &mockHostProvider{hosts: []string{"http://host1:3500"}, hostIndex: 0}
checker := &mockReadyChecker{results: []bool{false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_MultipleHostsFirstReady(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}
func TestEnsureReady_MultipleHostsFailoverToSecond(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{false, true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 1, provider.hostIndex)
}
func TestEnsureReady_MultipleHostsNoneReady(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host1:3500", "http://host2:3500", "http://host3:3500"},
hostIndex: 0,
}
checker := &mockReadyChecker{results: []bool{false, false, false}}
assert.Equal(t, false, EnsureReady(t.Context(), provider, checker))
}
func TestEnsureReady_WrapAroundFromNonZeroIndex(t *testing.T) {
provider := &mockHostProvider{
hosts: []string{"http://host0:3500", "http://host1:3500", "http://host2:3500"},
hostIndex: 1,
}
// host1 (start) fails, host2 fails, host0 succeeds
checker := &mockReadyChecker{results: []bool{false, false, true}}
assert.Equal(t, true, EnsureReady(t.Context(), provider, checker))
assert.Equal(t, 0, provider.hostIndex)
}

9
api/fallback/log.go Normal file
View File

@@ -0,0 +1,9 @@
// Code generated by hack/gen-logs.sh; DO NOT EDIT.
// This file is created and regenerated automatically. Anything added here might get removed.
package fallback
import "github.com/sirupsen/logrus"
// The prefix for logs from this package will be the text after the last slash in the package path.
// If you wish to change this, you should add your desired name in the runtime/logging/logrus-prefixed-formatter/prefix-replacement.go file.
var log = logrus.WithField("package", "api/fallback")

View File

@@ -25,6 +25,11 @@ type GrpcConnectionProvider interface {
// SwitchHost switches to the endpoint at the given index.
// The new connection is created lazily on next CurrentConn() call.
SwitchHost(index int) error
// ConnectionCounter returns a monotonically increasing counter that increments
// each time SwitchHost changes the active endpoint. This allows consumers to
// detect connection changes even when the host string returns to a previous value
// (e.g., host0 → host1 → host0).
ConnectionCounter() uint64
// Close closes the current connection.
Close()
}
@@ -38,6 +43,7 @@ type grpcConnectionProvider struct {
// Current connection state (protected by mutex)
currentIndex uint64
conn *grpc.ClientConn
connCounter uint64
mu sync.Mutex
closed bool
@@ -138,6 +144,7 @@ func (p *grpcConnectionProvider) SwitchHost(index int) error {
p.conn = nil // Clear immediately - new connection created lazily
p.currentIndex = uint64(index)
p.connCounter++
// Close old connection asynchronously to avoid blocking the caller
if oldConn != nil {
@@ -155,6 +162,12 @@ func (p *grpcConnectionProvider) SwitchHost(index int) error {
return nil
}
func (p *grpcConnectionProvider) ConnectionCounter() uint64 {
p.mu.Lock()
defer p.mu.Unlock()
return p.connCounter
}
func (p *grpcConnectionProvider) Close() {
p.mu.Lock()
defer p.mu.Unlock()
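
The counter makes endpoint switches observable even when the host string
repeats. A hypothetical consumer (not part of this diff, assuming the
GrpcConnectionProvider interface above) might poll it like this:

```go
// connectionChanged reports whether the provider switched endpoints since the
// last observed counter value, even across host0 -> host1 -> host0 cycles.
func connectionChanged(p GrpcConnectionProvider, lastSeen *uint64) bool {
	if c := p.ConnectionCounter(); c != *lastSeen {
		*lastSeen = c
		return true
	}
	return false
}
```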

View File

@@ -4,17 +4,24 @@ import "google.golang.org/grpc"
// MockGrpcProvider implements GrpcConnectionProvider for testing.
type MockGrpcProvider struct {
MockConn *grpc.ClientConn
MockHosts []string
MockConn *grpc.ClientConn
MockHosts []string
CurrentIndex int
ConnCounter uint64
}
func (m *MockGrpcProvider) CurrentConn() *grpc.ClientConn { return m.MockConn }
func (m *MockGrpcProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[0]
return m.MockHosts[m.CurrentIndex]
}
return ""
}
func (m *MockGrpcProvider) Hosts() []string { return m.MockHosts }
func (m *MockGrpcProvider) SwitchHost(int) error { return nil }
func (m *MockGrpcProvider) Close() {}
func (m *MockGrpcProvider) Hosts() []string { return m.MockHosts }
func (m *MockGrpcProvider) SwitchHost(idx int) error {
m.CurrentIndex = idx
m.ConnCounter++
return nil
}
func (m *MockGrpcProvider) ConnectionCounter() uint64 { return m.ConnCounter }
func (m *MockGrpcProvider) Close() {}

View File

@@ -9,13 +9,13 @@ import (
// MockRestProvider implements RestConnectionProvider for testing.
type MockRestProvider struct {
MockClient *http.Client
MockHandler RestHandler
MockHandler Handler
MockHosts []string
HostIndex int
}
func (m *MockRestProvider) HttpClient() *http.Client { return m.MockClient }
func (m *MockRestProvider) RestHandler() RestHandler { return m.MockHandler }
func (m *MockRestProvider) Handler() Handler { return m.MockHandler }
func (m *MockRestProvider) CurrentHost() string {
if len(m.MockHosts) > 0 {
return m.MockHosts[m.HostIndex%len(m.MockHosts)]
@@ -25,25 +25,22 @@ func (m *MockRestProvider) CurrentHost() string {
func (m *MockRestProvider) Hosts() []string { return m.MockHosts }
func (m *MockRestProvider) SwitchHost(index int) error { m.HostIndex = index; return nil }
// MockRestHandler implements RestHandler for testing.
type MockRestHandler struct {
MockHost string
MockClient *http.Client
// MockHandler implements Handler for testing.
type MockHandler struct {
MockHost string
}
func (m *MockRestHandler) Get(_ context.Context, _ string, _ any) error { return nil }
func (m *MockRestHandler) GetStatusCode(_ context.Context, _ string) (int, error) {
func (m *MockHandler) Get(_ context.Context, _ string, _ any) error { return nil }
func (m *MockHandler) GetStatusCode(_ context.Context, _ string) (int, error) {
return http.StatusOK, nil
}
func (m *MockRestHandler) GetSSZ(_ context.Context, _ string) ([]byte, http.Header, error) {
func (m *MockHandler) GetSSZ(_ context.Context, _ string) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockRestHandler) Post(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer, _ any) error {
func (m *MockHandler) Post(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer, _ any) error {
return nil
}
func (m *MockRestHandler) PostSSZ(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer) ([]byte, http.Header, error) {
func (m *MockHandler) PostSSZ(_ context.Context, _ string, _ map[string]string, _ *bytes.Buffer) ([]byte, http.Header, error) {
return nil, nil, nil
}
func (m *MockRestHandler) HttpClient() *http.Client { return m.MockClient }
func (m *MockRestHandler) Host() string { return m.MockHost }
func (m *MockRestHandler) SwitchHost(host string) { m.MockHost = host }
func (m *MockHandler) Host() string { return m.MockHost }

View File

@@ -17,8 +17,8 @@ import (
type RestConnectionProvider interface {
// HttpClient returns the configured HTTP client with headers, timeout, and optional tracing.
HttpClient() *http.Client
// RestHandler returns the REST handler for making API requests.
RestHandler() RestHandler
// Handler returns the REST handler for making API requests.
Handler() Handler
// CurrentHost returns the current REST API endpoint URL.
CurrentHost() string
// Hosts returns all configured REST API endpoint URLs.
@@ -54,7 +54,7 @@ func WithTracing() RestConnectionProviderOption {
type restConnectionProvider struct {
endpoints []string
httpClient *http.Client
restHandler RestHandler
restHandler *handler
currentIndex atomic.Uint64
timeout time.Duration
headers map[string][]string
@@ -96,7 +96,7 @@ func NewRestConnectionProvider(endpoint string, opts ...RestConnectionProviderOp
}
// Create the REST handler with the HTTP client and initial host
p.restHandler = newRestHandler(*p.httpClient, endpoints[0])
p.restHandler = newHandler(*p.httpClient, endpoints[0])
log.WithFields(logrus.Fields{
"endpoints": endpoints,
@@ -124,7 +124,7 @@ func (p *restConnectionProvider) HttpClient() *http.Client {
return p.httpClient
}
func (p *restConnectionProvider) RestHandler() RestHandler {
func (p *restConnectionProvider) Handler() Handler {
return p.restHandler
}

View File

@@ -21,32 +21,35 @@ import (
type reqOption func(*http.Request)
// RestHandler defines the interface for making REST API requests.
type RestHandler interface {
// Handler defines the interface for making REST API requests.
type Handler interface {
Get(ctx context.Context, endpoint string, resp any) error
GetStatusCode(ctx context.Context, endpoint string) (int, error)
GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error)
Post(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer, resp any) error
PostSSZ(ctx context.Context, endpoint string, headers map[string]string, data *bytes.Buffer) ([]byte, http.Header, error)
HttpClient() *http.Client
Host() string
SwitchHost(host string)
}
type restHandler struct {
type handler struct {
client http.Client
host string
reqOverrides []reqOption
}
// newRestHandler returns a RestHandler (internal use)
func newRestHandler(client http.Client, host string) RestHandler {
return NewRestHandler(client, host)
// newHandler returns a *handler for internal use within the rest package.
func newHandler(client http.Client, host string) *handler {
rh := &handler{
client: client,
host: host,
}
rh.appendAcceptOverride()
return rh
}
// NewRestHandler returns a RestHandler
func NewRestHandler(client http.Client, host string) RestHandler {
rh := &restHandler{
// NewHandler returns a Handler
func NewHandler(client http.Client, host string) Handler {
rh := &handler{
client: client,
host: host,
}
@@ -57,7 +60,7 @@ func NewRestHandler(client http.Client, host string) RestHandler {
// appendAcceptOverride enables the Accept header to be customized at runtime via an environment variable.
// This is specified as an env var because it is a niche option that prysm may use for performance testing or debugging,
// but which users are unlikely to need. Using an env var keeps the set of user-facing flags cleaner.
func (c *restHandler) appendAcceptOverride() {
func (c *handler) appendAcceptOverride() {
if accept := os.Getenv(params.EnvNameOverrideAccept); accept != "" {
c.reqOverrides = append(c.reqOverrides, func(req *http.Request) {
req.Header.Set("Accept", accept)
@@ -66,18 +69,18 @@ func (c *restHandler) appendAcceptOverride() {
}
// HttpClient returns the underlying HTTP client of the handler
func (c *restHandler) HttpClient() *http.Client {
func (c *handler) HttpClient() *http.Client {
return &c.client
}
// Host returns the underlying HTTP host
func (c *restHandler) Host() string {
func (c *handler) Host() string {
return c.host
}
// Get sends a GET request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *restHandler) Get(ctx context.Context, endpoint string, resp any) error {
func (c *handler) Get(ctx context.Context, endpoint string, resp any) error {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -100,7 +103,7 @@ func (c *restHandler) Get(ctx context.Context, endpoint string, resp any) error
// GetStatusCode sends a GET request and returns only the HTTP status code.
// This is useful for endpoints like /eth/v1/node/health that communicate status via HTTP codes
// (200 = ready, 206 = syncing, 503 = unavailable) rather than response bodies.
func (c *restHandler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
func (c *handler) GetStatusCode(ctx context.Context, endpoint string) (int, error) {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -119,7 +122,7 @@ func (c *restHandler) GetStatusCode(ctx context.Context, endpoint string) (int,
return httpResp.StatusCode, nil
}
func (c *restHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
func (c *handler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http.Header, error) {
url := c.host + endpoint
req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
if err != nil {
@@ -174,7 +177,7 @@ func (c *restHandler) GetSSZ(ctx context.Context, endpoint string) ([]byte, http
// Post sends a POST request and decodes the response body as a JSON object into the passed in object.
// If an HTTP error is returned, the body is decoded as a DefaultJsonError JSON object and returned as the first return value.
func (c *restHandler) Post(
func (c *handler) Post(
ctx context.Context,
apiEndpoint string,
headers map[string]string,
@@ -210,7 +213,7 @@ func (c *restHandler) Post(
}
// PostSSZ sends a POST request and prefers an SSZ (application/octet-stream) response body.
func (c *restHandler) PostSSZ(
func (c *handler) PostSSZ(
ctx context.Context,
apiEndpoint string,
headers map[string]string,
@@ -311,6 +314,6 @@ func decodeResp(httpResp *http.Response, resp any) error {
return nil
}
func (c *restHandler) SwitchHost(host string) {
func (c *handler) SwitchHost(host string) {
c.host = host
}

View File

@@ -509,17 +509,17 @@ func (s *SignedBlindedBeaconBlockFulu) SigString() string {
// ----------------------------------------------------------------------------
type ExecutionPayloadBid struct {
ParentBlockHash string `json:"parent_block_hash"`
ParentBlockRoot string `json:"parent_block_root"`
BlockHash string `json:"block_hash"`
PrevRandao string `json:"prev_randao"`
FeeRecipient string `json:"fee_recipient"`
GasLimit string `json:"gas_limit"`
BuilderIndex string `json:"builder_index"`
Slot string `json:"slot"`
Value string `json:"value"`
ExecutionPayment string `json:"execution_payment"`
BlobKzgCommitmentsRoot string `json:"blob_kzg_commitments_root"`
ParentBlockHash string `json:"parent_block_hash"`
ParentBlockRoot string `json:"parent_block_root"`
BlockHash string `json:"block_hash"`
PrevRandao string `json:"prev_randao"`
FeeRecipient string `json:"fee_recipient"`
GasLimit string `json:"gas_limit"`
BuilderIndex string `json:"builder_index"`
Slot string `json:"slot"`
Value string `json:"value"`
ExecutionPayment string `json:"execution_payment"`
BlobKzgCommitments []string `json:"blob_kzg_commitments"`
}
type SignedExecutionPayloadBid struct {

View File

@@ -2939,18 +2939,22 @@ func SignedExecutionPayloadBidFromConsensus(b *eth.SignedExecutionPayloadBid) *S
}
func ExecutionPayloadBidFromConsensus(b *eth.ExecutionPayloadBid) *ExecutionPayloadBid {
blobKzgCommitments := make([]string, len(b.BlobKzgCommitments))
for i := range b.BlobKzgCommitments {
blobKzgCommitments[i] = hexutil.Encode(b.BlobKzgCommitments[i])
}
return &ExecutionPayloadBid{
ParentBlockHash: hexutil.Encode(b.ParentBlockHash),
ParentBlockRoot: hexutil.Encode(b.ParentBlockRoot),
BlockHash: hexutil.Encode(b.BlockHash),
PrevRandao: hexutil.Encode(b.PrevRandao),
FeeRecipient: hexutil.Encode(b.FeeRecipient),
GasLimit: fmt.Sprintf("%d", b.GasLimit),
BuilderIndex: fmt.Sprintf("%d", b.BuilderIndex),
Slot: fmt.Sprintf("%d", b.Slot),
Value: fmt.Sprintf("%d", b.Value),
ExecutionPayment: fmt.Sprintf("%d", b.ExecutionPayment),
BlobKzgCommitmentsRoot: hexutil.Encode(b.BlobKzgCommitmentsRoot),
ParentBlockHash: hexutil.Encode(b.ParentBlockHash),
ParentBlockRoot: hexutil.Encode(b.ParentBlockRoot),
BlockHash: hexutil.Encode(b.BlockHash),
PrevRandao: hexutil.Encode(b.PrevRandao),
FeeRecipient: hexutil.Encode(b.FeeRecipient),
GasLimit: fmt.Sprintf("%d", b.GasLimit),
BuilderIndex: fmt.Sprintf("%d", b.BuilderIndex),
Slot: fmt.Sprintf("%d", b.Slot),
Value: fmt.Sprintf("%d", b.Value),
ExecutionPayment: fmt.Sprintf("%d", b.ExecutionPayment),
BlobKzgCommitments: blobKzgCommitments,
}
}
@@ -3187,22 +3191,30 @@ func (b *ExecutionPayloadBid) ToConsensus() (*eth.ExecutionPayloadBid, error) {
if err != nil {
return nil, server.NewDecodeError(err, "ExecutionPayment")
}
blobKzgCommitmentsRoot, err := bytesutil.DecodeHexWithLength(b.BlobKzgCommitmentsRoot, fieldparams.RootLength)
err = slice.VerifyMaxLength(b.BlobKzgCommitments, fieldparams.MaxBlobCommitmentsPerBlock)
if err != nil {
return nil, server.NewDecodeError(err, "BlobKzgCommitmentsRoot")
return nil, server.NewDecodeError(err, "BlobKzgCommitments")
}
blobKzgCommitments := make([][]byte, len(b.BlobKzgCommitments))
for i, commitment := range b.BlobKzgCommitments {
kzg, err := bytesutil.DecodeHexWithLength(commitment, fieldparams.BLSPubkeyLength)
if err != nil {
return nil, server.NewDecodeError(err, fmt.Sprintf("BlobKzgCommitments[%d]", i))
}
blobKzgCommitments[i] = kzg
}
return &eth.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash,
ParentBlockRoot: parentBlockRoot,
BlockHash: blockHash,
PrevRandao: prevRandao,
FeeRecipient: feeRecipient,
GasLimit: gasLimit,
BuilderIndex: primitives.BuilderIndex(builderIndex),
Slot: primitives.Slot(slot),
Value: primitives.Gwei(value),
ExecutionPayment: primitives.Gwei(executionPayment),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot,
ParentBlockHash: parentBlockHash,
ParentBlockRoot: parentBlockRoot,
BlockHash: blockHash,
PrevRandao: prevRandao,
FeeRecipient: feeRecipient,
GasLimit: gasLimit,
BuilderIndex: primitives.BuilderIndex(builderIndex),
Slot: primitives.Slot(slot),
Value: primitives.Gwei(value),
ExecutionPayment: primitives.Gwei(executionPayment),
BlobKzgCommitments: blobKzgCommitments,
}, nil
}

View File

@@ -27,6 +27,7 @@ go_library(
"receive_blob.go",
"receive_block.go",
"receive_data_column.go",
"receive_payload_attestation_message.go",
"service.go",
"setup_forkchoice.go",
"tracked_proposer.go",
@@ -85,6 +86,7 @@ go_library(
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
"//io/logs:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",

View File

@@ -46,6 +46,7 @@ type ForkchoiceFetcher interface {
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, consensus_blocks.ROBlock) error
InsertPayload(context.Context, interfaces.ROExecutionPayloadEnvelope) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
NewSlot(context.Context, primitives.Slot) error
ProposerBoost() [32]byte

View File

@@ -7,6 +7,7 @@ import (
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/runtime/version"
@@ -55,6 +56,13 @@ func (s *Service) InsertNode(ctx context.Context, st state.BeaconState, block co
return s.cfg.ForkChoiceStore.InsertNode(ctx, st, block)
}
// InsertPayload is a wrapper for payload insertion which is self locked
func (s *Service) InsertPayload(ctx context.Context, pe interfaces.ROExecutionPayloadEnvelope) error {
s.cfg.ForkChoiceStore.Lock()
defer s.cfg.ForkChoiceStore.Unlock()
return s.cfg.ForkChoiceStore.InsertPayload(ctx, pe)
}
// ForkChoiceDump returns the corresponding value from forkchoice
func (s *Service) ForkChoiceDump(ctx context.Context) (*forkchoice.Dump, error) {
s.cfg.ForkChoiceStore.RLock()

View File

@@ -101,11 +101,16 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *fcuConfig) (*
if len(lastValidHash) == 0 {
lastValidHash = defaultLatestValidHash
}
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, headRoot, headBlk.ParentRoot(), bytesutil.ToBytes32(lastValidHash))
// this call is guaranteed to have the `headRoot` with its payload in forkchoice.
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, headRoot, headBlk.ParentRoot(), bytesutil.ToBytes32(headPayload.ParentHash()), bytesutil.ToBytes32(lastValidHash))
if err != nil {
log.WithError(err).Error("Could not set head root to invalid")
return nil, nil
}
// TODO: Gloas, we should not include the head root in this call
if len(invalidRoots) == 0 || invalidRoots[0] != headRoot {
invalidRoots = append([][32]byte{headRoot}, invalidRoots...)
}
if err := s.removeInvalidBlockAndState(ctx, invalidRoots); err != nil {
log.WithError(err).Error("Could not remove invalid block and state")
return nil, nil
@@ -290,10 +295,10 @@ func (s *Service) notifyNewPayload(ctx context.Context, stVersion int, header in
return false, errors.WithMessage(ErrUndefinedExecutionEngineError, err.Error())
}
// reportInvalidBlock deals with the event that an invalid block was detected by the execution layer
func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, lvh [32]byte) error {
// pruneInvalidBlock deals with the event that an invalid block was detected by the execution layer
func (s *Service) pruneInvalidBlock(ctx context.Context, root, parentRoot, parentHash [32]byte, lvh [32]byte) error {
newPayloadInvalidNodeCount.Inc()
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parentRoot, lvh)
invalidRoots, err := s.cfg.ForkChoiceStore.SetOptimisticToInvalid(ctx, root, parentRoot, parentHash, lvh)
if err != nil {
return err
}

View File

@@ -465,9 +465,9 @@ func Test_NotifyForkchoiceUpdateRecursive_DoublyLinkedTree(t *testing.T) {
require.NoError(t, err)
require.Equal(t, brd, headRoot)
// Ensure F and G where removed but their parent E wasn't
require.Equal(t, false, fcs.HasNode(brf))
require.Equal(t, false, fcs.HasNode(brg))
// Ensure F and G's full nodes were removed but their empty (consensus) nodes remain, as does E
require.Equal(t, true, fcs.HasNode(brf))
require.Equal(t, true, fcs.HasNode(brg))
require.Equal(t, true, fcs.HasNode(bre))
}
@@ -703,14 +703,13 @@ func Test_reportInvalidBlock(t *testing.T) {
require.NoError(t, fcs.InsertNode(ctx, st, root))
require.NoError(t, fcs.SetOptimisticToValid(ctx, [32]byte{'A'}))
err = service.pruneInvalidBlock(ctx, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'a'})
err = service.pruneInvalidBlock(ctx, [32]byte{'D'}, [32]byte{'C'}, [32]byte{'c'}, [32]byte{'a'})
require.Equal(t, IsInvalidBlock(err), true)
require.Equal(t, InvalidBlockLVH(err), [32]byte{'a'})
invalidRoots := InvalidAncestorRoots(err)
require.Equal(t, 3, len(invalidRoots))
require.Equal(t, 2, len(invalidRoots))
require.Equal(t, [32]byte{'D'}, invalidRoots[0])
require.Equal(t, [32]byte{'C'}, invalidRoots[1])
require.Equal(t, [32]byte{'B'}, invalidRoots[2])
}
func Test_GetPayloadAttribute(t *testing.T) {
@@ -785,7 +784,7 @@ func Test_GetPayloadAttributeV2(t *testing.T) {
}
func Test_GetPayloadAttributeV3(t *testing.T) {
var testCases = []struct {
testCases := []struct {
name string
st bstate.BeaconState
}{

View File

@@ -10,6 +10,7 @@ import (
consensus_types "github.com/OffchainLabs/prysm/v7/consensus-types"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/io/logs"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/runtime/version"
prysmTime "github.com/OffchainLabs/prysm/v7/time"
@@ -87,36 +88,45 @@ func logStateTransitionData(b interfaces.ReadOnlyBeaconBlock) error {
func logBlockSyncStatus(block interfaces.ReadOnlyBeaconBlock, blockRoot [32]byte, justified, finalized *ethpb.Checkpoint, receivedTime time.Time, genesis time.Time, daWaitedTime time.Duration) error {
startTime, err := slots.StartTime(genesis, block.Slot())
if err != nil {
return err
return errors.Wrap(err, "failed to get slot start time")
}
level := log.Logger.GetLevel()
parentRoot := block.ParentRoot()
blkRoot := fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8])
finalizedRoot := fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8])
sinceSlotStartTime := prysmTime.Now().Sub(startTime)
lessFields := logrus.Fields{
"slot": block.Slot(),
"block": blkRoot,
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": finalizedRoot,
"epoch": slots.ToEpoch(block.Slot()),
"sinceSlotStartTime": sinceSlotStartTime,
}
moreFields := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": blkRoot,
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": finalizedRoot,
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": sinceSlotStartTime,
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
level := logs.PackageVerbosity("beacon-chain/blockchain")
if level >= logrus.DebugLevel {
parentRoot := block.ParentRoot()
lf := logrus.Fields{
"slot": block.Slot(),
"slotInEpoch": block.Slot() % params.BeaconConfig().SlotsPerEpoch,
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"epoch": slots.ToEpoch(block.Slot()),
"justifiedEpoch": justified.Epoch,
"justifiedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(justified.Root)[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"parentRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(parentRoot[:])[:8]),
"version": version.String(block.Version()),
"sinceSlotStartTime": prysmTime.Now().Sub(startTime),
"chainServiceProcessedTime": prysmTime.Now().Sub(receivedTime) - daWaitedTime,
"dataAvailabilityWaitedTime": daWaitedTime,
}
log.WithFields(lf).Debug("Synced new block")
} else {
log.WithFields(logrus.Fields{
"slot": block.Slot(),
"block": fmt.Sprintf("0x%s...", hex.EncodeToString(blockRoot[:])[:8]),
"finalizedEpoch": finalized.Epoch,
"finalizedRoot": fmt.Sprintf("0x%s...", hex.EncodeToString(finalized.Root)[:8]),
"epoch": slots.ToEpoch(block.Slot()),
}).Info("Synced new block")
log.WithFields(moreFields).Info("Synced new block")
return nil
}
log.WithFields(lessFields).WithField(logs.LogTargetField, logs.LogTargetUser).Info("Synced new block")
log.WithFields(moreFields).WithField(logs.LogTargetField, logs.LogTargetEphemeral).Info("Synced new block")
return nil
}

View File

@@ -232,7 +232,8 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
postVersionAndHeaders[i].version,
postVersionAndHeaders[i].header, b)
if err != nil {
return s.handleInvalidExecutionError(ctx, err, root, b.Block().ParentRoot())
// this call does not have the root in forkchoice yet.
return s.handleInvalidExecutionError(ctx, err, root, b.Block().ParentRoot(), [32]byte(postVersionAndHeaders[i].header.ParentHash()))
}
if isValidPayload {
if err := s.validateMergeTransitionBlock(ctx, preVersionAndHeaders[i].version,
@@ -992,9 +993,9 @@ func (s *Service) waitForSync() error {
}
}
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [fieldparams.RootLength]byte) error {
func (s *Service) handleInvalidExecutionError(ctx context.Context, err error, blockRoot, parentRoot [32]byte, parentHash [32]byte) error {
if IsInvalidBlock(err) && InvalidBlockLVH(err) != [32]byte{} {
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, InvalidBlockLVH(err))
return s.pruneInvalidBlock(ctx, blockRoot, parentRoot, parentHash, InvalidBlockLVH(err))
}
return err
}

View File

@@ -2006,6 +2006,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
// Check that we have justified the second epoch
jc := service.cfg.ForkChoiceStore.JustifiedCheckpoint()
require.Equal(t, primitives.Epoch(2), jc.Epoch)
time.Sleep(20 * time.Millisecond) // wait for async forkchoice update to be processed
// import block 19 to find out that the whole chain 13--18 was in fact
// invalid

View File

@@ -633,7 +633,7 @@ func (s *Service) validateExecutionOnBlock(ctx context.Context, ver int, header
isValidPayload, err := s.notifyNewPayload(ctx, ver, header, block)
if err != nil {
s.cfg.ForkChoiceStore.Lock()
err = s.handleInvalidExecutionError(ctx, err, block.Root(), block.Block().ParentRoot())
err = s.handleInvalidExecutionError(ctx, err, block.Root(), block.Block().ParentRoot(), [32]byte(header.BlockHash()))
s.cfg.ForkChoiceStore.Unlock()
return false, err
}

View File

@@ -0,0 +1,19 @@
package blockchain
import (
"context"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
)
// PayloadAttestationReceiver interface defines the methods of chain service for receiving
// validated payload attestation messages.
type PayloadAttestationReceiver interface {
ReceivePayloadAttestationMessage(context.Context, *ethpb.PayloadAttestationMessage) error
}
// ReceivePayloadAttestationMessage accepts a payload attestation message.
func (s *Service) ReceivePayloadAttestationMessage(ctx context.Context, a *ethpb.PayloadAttestationMessage) error {
// TODO: Handle payload attestation message processing once Gloas is fully wired.
return nil
}

View File

@@ -700,6 +700,14 @@ func (s *ChainService) InsertNode(ctx context.Context, st state.BeaconState, blo
return nil
}
// InsertPayload mocks the same method in the chain service
func (s *ChainService) InsertPayload(ctx context.Context, pe interfaces.ROExecutionPayloadEnvelope) error {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.InsertPayload(ctx, pe)
}
return nil
}
// ForkChoiceDump mocks the same method in the chain service
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, error) {
if s.ForkChoiceStore != nil {
@@ -757,6 +765,11 @@ func (c *ChainService) ReceiveDataColumns(dcs []blocks.VerifiedRODataColumn) err
return nil
}
// ReceivePayloadAttestationMessage implements the same method in the chain service.
func (c *ChainService) ReceivePayloadAttestationMessage(_ context.Context, _ *ethpb.PayloadAttestationMessage) error {
return nil
}
// DependentRootForEpoch mocks the same method in the chain service
func (c *ChainService) DependentRootForEpoch(_ [32]byte, _ primitives.Epoch) ([32]byte, error) {
return c.TargetRoot, nil

View File

@@ -17,6 +17,7 @@ go_library(
"error.go",
"interfaces.go",
"log.go",
"payload_attestation.go",
"payload_id.go",
"proposer_indices.go",
"proposer_indices_disabled.go", # keep
@@ -76,6 +77,7 @@ go_test(
"checkpoint_state_test.go",
"committee_fuzz_test.go",
"committee_test.go",
"payload_attestation_test.go",
"payload_id_test.go",
"private_access_test.go",
"proposer_indices_test.go",

View File

@@ -0,0 +1,53 @@
package cache
import (
"sync"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
// PayloadAttestationCache tracks seen payload attestation messages for a single slot.
type PayloadAttestationCache struct {
slot primitives.Slot
seen map[primitives.ValidatorIndex]struct{}
mu sync.RWMutex
}
// Seen returns true if a vote for the given slot has already been
// processed for this validator index.
func (p *PayloadAttestationCache) Seen(slot primitives.Slot, idx primitives.ValidatorIndex) bool {
p.mu.RLock()
defer p.mu.RUnlock()
if p.slot != slot {
return false
}
if p.seen == nil {
return false
}
_, ok := p.seen[idx]
return ok
}
// Add marks the given slot and validator index as seen.
// This function assumes that the message has already been validated.
func (p *PayloadAttestationCache) Add(slot primitives.Slot, idx primitives.ValidatorIndex) error {
p.mu.Lock()
defer p.mu.Unlock()
if p.slot != slot {
p.slot = slot
p.seen = make(map[primitives.ValidatorIndex]struct{})
}
if p.seen == nil {
p.seen = make(map[primitives.ValidatorIndex]struct{})
}
p.seen[idx] = struct{}{}
return nil
}
// Clear clears the internal cache.
func (p *PayloadAttestationCache) Clear() {
p.mu.Lock()
defer p.mu.Unlock()
p.slot = 0
p.seen = nil
}
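
For context, a minimal usage sketch of this cache from a hypothetical gossip-validation path (dedupe is an illustrative helper, not part of this change): one vote per validator is admitted per slot, and moving to a new slot implicitly resets the seen set, since Add discards the map whenever the slot changes.

import (
	"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
	"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)

// dedupe admits a (slot, validator) pair the first time it is observed
// and rejects repeats. Add never fails in the current implementation,
// so its error is ignored here.
func dedupe(c *cache.PayloadAttestationCache, slot primitives.Slot, idx primitives.ValidatorIndex) bool {
	if c.Seen(slot, idx) {
		return false
	}
	_ = c.Add(slot, idx)
	return true
}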

View File

@@ -0,0 +1,48 @@
package cache_test
import (
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/stretchr/testify/require"
)
func TestPayloadAttestationCache_SeenAndAdd(t *testing.T) {
var c cache.PayloadAttestationCache
slot1 := primitives.Slot(1)
slot2 := primitives.Slot(2)
idx1 := primitives.ValidatorIndex(3)
idx2 := primitives.ValidatorIndex(4)
require.False(t, c.Seen(slot1, idx1))
require.NoError(t, c.Add(slot1, idx1))
require.True(t, c.Seen(slot1, idx1))
require.False(t, c.Seen(slot1, idx2))
require.False(t, c.Seen(slot2, idx1))
require.NoError(t, c.Add(slot1, idx2))
require.True(t, c.Seen(slot1, idx1))
require.True(t, c.Seen(slot1, idx2))
require.NoError(t, c.Add(slot2, idx1))
require.True(t, c.Seen(slot2, idx1))
require.False(t, c.Seen(slot1, idx1))
require.False(t, c.Seen(slot1, idx2))
}
func TestPayloadAttestationCache_Clear(t *testing.T) {
var c cache.PayloadAttestationCache
slot := primitives.Slot(10)
idx := primitives.ValidatorIndex(42)
require.NoError(t, c.Add(slot, idx))
require.True(t, c.Seen(slot, idx))
c.Clear()
require.False(t, c.Seen(slot, idx))
require.NoError(t, c.Add(slot, idx))
require.True(t, c.Seen(slot, idx))
}

View File

@@ -17,27 +17,56 @@ import (
)
// ProcessExecutionPayloadBid processes a signed execution payload bid in the Gloas fork.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_execution_payload_bid(state: BeaconState, block: BeaconBlock):
//
// signed_bid = block.body.signed_execution_payload_bid
// bid = signed_bid.message
// builder_index = bid.builder_index
// amount = bid.value
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert amount == 0
// assert signed_bid.signature == G2_POINT_AT_INFINITY
// else:
// assert is_active_builder(state, builder_index)
// assert can_builder_cover_bid(state, builder_index, amount)
// assert verify_execution_payload_bid_signature(state, signed_bid)
// assert bid.slot == block.slot
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
// if amount > 0:
// state.builder_pending_payments[...] = BuilderPendingPayment(weight=0, withdrawal=BuilderPendingWithdrawal(fee_recipient=bid.fee_recipient, amount=amount, builder_index=builder_index))
// state.latest_execution_payload_bid = bid
// <spec fn="process_execution_payload_bid" fork="gloas" hash="823c9f3a">
// def process_execution_payload_bid(state: BeaconState, block: BeaconBlock) -> None:
// signed_bid = block.body.signed_execution_payload_bid
// bid = signed_bid.message
// builder_index = bid.builder_index
// amount = bid.value
//
// # For self-builds, amount must be zero regardless of withdrawal credential prefix
// if builder_index == BUILDER_INDEX_SELF_BUILD:
// assert amount == 0
// assert signed_bid.signature == bls.G2_POINT_AT_INFINITY
// else:
// # Verify that the builder is active
// assert is_active_builder(state, builder_index)
// # Verify that the builder has funds to cover the bid
// assert can_builder_cover_bid(state, builder_index, amount)
// # Verify that the bid signature is valid
// assert verify_execution_payload_bid_signature(state, signed_bid)
//
// # Verify commitments are under limit
// assert (
// len(bid.blob_kzg_commitments)
// <= get_blob_parameters(get_current_epoch(state)).max_blobs_per_block
// )
//
// # Verify that the bid is for the current slot
// assert bid.slot == block.slot
// # Verify that the bid is for the right parent block
// assert bid.parent_block_hash == state.latest_block_hash
// assert bid.parent_block_root == block.parent_root
// assert bid.prev_randao == get_randao_mix(state, get_current_epoch(state))
//
// # Record the pending payment if there is some payment
// if amount > 0:
// pending_payment = BuilderPendingPayment(
// weight=0,
// withdrawal=BuilderPendingWithdrawal(
// fee_recipient=bid.fee_recipient,
// amount=amount,
// builder_index=builder_index,
// ),
// )
// state.builder_pending_payments[SLOTS_PER_EPOCH + bid.slot % SLOTS_PER_EPOCH] = (
// pending_payment
// )
//
// # Cache the signed execution payload bid
// state.latest_execution_payload_bid = bid
// </spec>
func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyBeaconBlock) error {
signedBid, err := block.Body().SignedExecutionPayloadBid()
if err != nil {
@@ -86,6 +115,12 @@ func ProcessExecutionPayloadBid(st state.BeaconState, block interfaces.ReadOnlyB
}
}
maxBlobsPerBlock := params.BeaconConfig().MaxBlobsPerBlockAtEpoch(slots.ToEpoch(block.Slot()))
commitmentCount := bid.BlobKzgCommitmentCount()
if commitmentCount > uint64(maxBlobsPerBlock) {
return fmt.Errorf("bid has %d blob KZG commitments over max %d", commitmentCount, maxBlobsPerBlock)
}
if err := validateBidConsistency(st, bid, block); err != nil {
return errors.Wrap(err, "bid consistency validation failed")
}

View File

@@ -184,6 +184,28 @@ func signBid(t *testing.T, sk common.SecretKey, bid *ethpb.ExecutionPayloadBid,
return out
}
func blobCommitmentsForSlot(slot primitives.Slot, count int) [][]byte {
max := int(params.BeaconConfig().MaxBlobsPerBlockAtEpoch(slots.ToEpoch(slot)))
if count > max {
count = max
}
commitments := make([][]byte, count)
for i := range commitments {
commitments[i] = bytes.Repeat([]byte{0xEE}, 48)
}
return commitments
}
func tooManyBlobCommitmentsForSlot(slot primitives.Slot) [][]byte {
max := int(params.BeaconConfig().MaxBlobsPerBlockAtEpoch(slots.ToEpoch(slot)))
count := max + 1
commitments := make([][]byte, count)
for i := range commitments {
commitments[i] = bytes.Repeat([]byte{0xEE}, 48)
}
return commitments
}
func TestProcessExecutionPayloadBid_SelfBuildSuccess(t *testing.T) {
slot := primitives.Slot(12)
proposerIdx := primitives.ValidatorIndex(0)
@@ -194,17 +216,17 @@ func TestProcessExecutionPayloadBid_SelfBuildSuccess(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 0,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
@@ -236,16 +258,16 @@ func TestProcessExecutionPayloadBid_SelfBuildNonZeroAmountFails(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, [48]byte{})
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
@@ -280,17 +302,17 @@ func TestProcessExecutionPayloadBid_PendingPaymentAndCacheBid(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, balance, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 500_000,
ExecutionPayment: 1,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 500_000,
ExecutionPayment: 1,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
@@ -341,17 +363,17 @@ func TestProcessExecutionPayloadBid_BuilderNotActive(t *testing.T) {
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x03}, 32),
BlockHash: bytes.Repeat([]byte{0x04}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x05}, 32),
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x03}, 32),
BlockHash: bytes.Repeat([]byte{0x04}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0x06}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
@@ -394,17 +416,17 @@ func TestProcessExecutionPayloadBid_CannotCoverBid(t *testing.T) {
state = stateIface.(*state_native.BeaconState)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 25,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 25,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
@@ -436,17 +458,17 @@ func TestProcessExecutionPayloadBid_InvalidSignature(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xEE}, 32),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 10,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
// Use an invalid signature.
invalidSig := [96]byte{1}
@@ -463,6 +485,42 @@ func TestProcessExecutionPayloadBid_InvalidSignature(t *testing.T) {
require.ErrorContains(t, "bid signature validation failed", err)
}
func TestProcessExecutionPayloadBid_TooManyBlobCommitments(t *testing.T) {
slot := primitives.Slot(9)
proposerIdx := primitives.ValidatorIndex(0)
builderIdx := params.BeaconConfig().BuilderIndexSelfBuild
randao := [32]byte(bytes.Repeat([]byte{0xAA}, 32))
latestHash := [32]byte(bytes.Repeat([]byte{0xBB}, 32))
pubKey := [48]byte{}
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinActivationBalance+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xCC}, 32),
BlockHash: bytes.Repeat([]byte{0xDD}, 32),
PrevRandao: randao[:],
BuilderIndex: builderIdx,
Slot: slot,
BlobKzgCommitments: tooManyBlobCommitmentsForSlot(slot),
FeeRecipient: bytes.Repeat([]byte{0xFF}, 20),
}
signed := &ethpb.SignedExecutionPayloadBid{
Message: bid,
Signature: common.InfiniteSignature[:],
}
block := stubBlock{
slot: slot,
proposer: proposerIdx,
parentRoot: bytesutil.ToBytes32(bid.ParentBlockRoot),
body: stubBlockBody{signedBid: signed},
v: version.Gloas,
}
err := ProcessExecutionPayloadBid(state, block)
require.ErrorContains(t, "blob KZG commitments over max", err)
}
func TestProcessExecutionPayloadBid_SlotMismatch(t *testing.T) {
slot := primitives.Slot(10)
builderIdx := primitives.BuilderIndex(1)
@@ -478,17 +536,17 @@ func TestProcessExecutionPayloadBid_SlotMismatch(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot + 1, // mismatch
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0xCC}, 32),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0xAA}, 32),
BlockHash: bytes.Repeat([]byte{0xBB}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot + 1, // mismatch
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0xDD}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
@@ -520,17 +578,17 @@ func TestProcessExecutionPayloadBid_ParentHashMismatch(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32), // mismatch
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
ParentBlockHash: bytes.Repeat([]byte{0x11}, 32), // mismatch
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
@@ -563,17 +621,17 @@ func TestProcessExecutionPayloadBid_ParentRootMismatch(t *testing.T) {
parentRoot := bytes.Repeat([]byte{0x22}, 32)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: parentRoot,
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: parentRoot,
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: randao[:],
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)
@@ -605,17 +663,17 @@ func TestProcessExecutionPayloadBid_PrevRandaoMismatch(t *testing.T) {
state := buildGloasState(t, slot, proposerIdx, builderIdx, params.BeaconConfig().MinDepositAmount+1000, randao, latestHash, pubKey)
bid := &ethpb.ExecutionPayloadBid{
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: bytes.Repeat([]byte{0x01}, 32), // mismatch
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitmentsRoot: bytes.Repeat([]byte{0x44}, 32),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
ParentBlockHash: latestHash[:],
ParentBlockRoot: bytes.Repeat([]byte{0x22}, 32),
BlockHash: bytes.Repeat([]byte{0x33}, 32),
PrevRandao: bytes.Repeat([]byte{0x01}, 32), // mismatch
GasLimit: 1,
BuilderIndex: builderIdx,
Slot: slot,
Value: 1,
ExecutionPayment: 0,
BlobKzgCommitments: blobCommitmentsForSlot(slot, 1),
FeeRecipient: bytes.Repeat([]byte{0x55}, 20),
}
genesis := bytesutil.ToBytes32(state.GenesisValidatorsRoot())
sig := signBid(t, sk, bid, state.Fork(), genesis)

View File

@@ -24,14 +24,21 @@ import (
)
// ProcessPayloadAttestations validates payload attestations in a block body.
// Spec v1.7.0-alpha.0 (pseudocode):
// process_payload_attestation(state: BeaconState, payload_attestation: PayloadAttestation):
//
// data = payload_attestation.data
// assert data.beacon_block_root == state.latest_block_header.parent_root
// assert data.slot + 1 == state.slot
// indexed = get_indexed_payload_attestation(state, data.slot, payload_attestation)
// assert is_valid_indexed_payload_attestation(state, indexed)
// <spec fn="process_payload_attestation" fork="gloas" hash="f46bf0b0">
// def process_payload_attestation(
// state: BeaconState, payload_attestation: PayloadAttestation
// ) -> None:
// data = payload_attestation.data
//
// # Check that the attestation is for the parent beacon block
// assert data.beacon_block_root == state.latest_block_header.parent_root
// # Check that the attestation is for the previous slot
// assert data.slot + 1 == state.slot
// # Verify signature
// indexed_payload_attestation = get_indexed_payload_attestation(state, payload_attestation)
// assert is_valid_indexed_payload_attestation(state, indexed_payload_attestation)
// </spec>
func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body interfaces.ReadOnlyBeaconBlockBody) error {
atts, err := body.PayloadAttestations()
if err != nil {
@@ -70,7 +77,7 @@ func ProcessPayloadAttestations(ctx context.Context, st state.BeaconState, body
// indexedPayloadAttestation converts a payload attestation into its indexed form.
func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState, att *eth.PayloadAttestation) (*consensus_types.IndexedPayloadAttestation, error) {
committee, err := payloadCommittee(ctx, st, att.Data.Slot)
committee, err := PayloadCommittee(ctx, st, att.Data.Slot)
if err != nil {
return nil, err
}
@@ -89,19 +96,26 @@ func indexedPayloadAttestation(ctx context.Context, st state.ReadOnlyBeaconState
}, nil
}
// payloadCommittee returns the payload timeliness committee for a given slot for the state.
// Spec v1.7.0-alpha.0 (pseudocode):
// get_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// PayloadCommittee returns the payload timeliness committee for the given slot and state.
//
// epoch = compute_epoch_at_slot(slot)
// seed = hash(get_seed(state, epoch, DOMAIN_PTC_ATTESTER) + uint_to_bytes(slot))
// indices = []
// committees_per_slot = get_committee_count_per_slot(state, epoch)
// for i in range(committees_per_slot):
// committee = get_beacon_committee(state, slot, CommitteeIndex(i))
// indices.extend(committee)
// return compute_balance_weighted_selection(state, indices, seed, size=PTC_SIZE, shuffle_indices=False)
func payloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
// <spec fn="get_ptc" fork="gloas" hash="ae15f761">
// def get_ptc(state: BeaconState, slot: Slot) -> Vector[ValidatorIndex, PTC_SIZE]:
// """
// Get the payload timeliness committee for the given ``slot``.
// """
// epoch = compute_epoch_at_slot(slot)
// seed = hash(get_seed(state, epoch, DOMAIN_PTC_ATTESTER) + uint_to_bytes(slot))
// indices: List[ValidatorIndex] = []
// # Concatenate all committees for this slot in order
// committees_per_slot = get_committee_count_per_slot(state, epoch)
// for i in range(committees_per_slot):
// committee = get_beacon_committee(state, slot, CommitteeIndex(i))
// indices.extend(committee)
// return compute_balance_weighted_selection(
// state, indices, seed, size=PTC_SIZE, shuffle_indices=False
// )
// </spec>
func PayloadCommittee(ctx context.Context, st state.ReadOnlyBeaconState, slot primitives.Slot) ([]primitives.ValidatorIndex, error) {
epoch := slots.ToEpoch(slot)
seed, err := ptcSeed(st, epoch, slot)
if err != nil {
@@ -152,17 +166,35 @@ func ptcSeed(st state.ReadOnlyBeaconState, epoch primitives.Epoch, slot primitiv
}
// selectByBalanceFill selects a balance-weighted subset of input candidates.
// Spec v1.7.0-alpha.0 (pseudocode):
// compute_balance_weighted_selection(state, indices, seed, size, shuffle_indices):
// Note: shuffle_indices is false for PTC.
//
// total = len(indices); selected = []; i = 0
// while len(selected) < size:
// next = i % total
// if shuffle_indices: next = compute_shuffled_index(next, total, seed)
// if compute_balance_weighted_acceptance(state, indices[next], seed, i):
// selected.append(indices[next])
// i += 1
// <spec fn="compute_balance_weighted_selection" fork="gloas" hash="2c9f1c23">
// def compute_balance_weighted_selection(
// state: BeaconState,
// indices: Sequence[ValidatorIndex],
// seed: Bytes32,
// size: uint64,
// shuffle_indices: bool,
// ) -> Sequence[ValidatorIndex]:
// """
// Return ``size`` indices sampled by effective balance, using ``indices``
// as candidates. If ``shuffle_indices`` is ``True``, candidate indices
// are themselves sampled from ``indices`` by shuffling it, otherwise
// ``indices`` is traversed in order.
// """
// total = uint64(len(indices))
// assert total > 0
// selected: List[ValidatorIndex] = []
// i = uint64(0)
// while len(selected) < size:
// next_index = i % total
// if shuffle_indices:
// next_index = compute_shuffled_index(next_index, total, seed)
// candidate_index = indices[next_index]
// if compute_balance_weighted_acceptance(state, candidate_index, seed, i):
// selected.append(candidate_index)
// i += 1
// return selected
// </spec>
func selectByBalanceFill(
ctx context.Context,
st state.ReadOnlyBeaconState,
@@ -199,15 +231,22 @@ func selectByBalanceFill(
}
// acceptByBalance determines if a validator is accepted based on its effective balance.
// Spec v1.7.0-alpha.0 (pseudocode):
// compute_balance_weighted_acceptance(state, index, seed, i):
//
// MAX_RANDOM_VALUE = 2**16 - 1
// random_bytes = hash(seed + uint_to_bytes(i // 16))
// offset = i % 16 * 2
// random_value = bytes_to_uint64(random_bytes[offset:offset+2])
// effective_balance = state.validators[index].effective_balance
// return effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value
// <spec fn="compute_balance_weighted_acceptance" fork="gloas" hash="9954dcd0">
// def compute_balance_weighted_acceptance(
// state: BeaconState, index: ValidatorIndex, seed: Bytes32, i: uint64
// ) -> bool:
// """
// Return whether to accept the selection of the validator ``index``, with probability
// proportional to its ``effective_balance``, and randomness given by ``seed`` and ``i``.
// """
// MAX_RANDOM_VALUE = 2**16 - 1
// random_bytes = hash(seed + uint_to_bytes(i // 16))
// offset = i % 16 * 2
// random_value = bytes_to_uint64(random_bytes[offset : offset + 2])
// effective_balance = state.validators[index].effective_balance
// return effective_balance * MAX_RANDOM_VALUE >= MAX_EFFECTIVE_BALANCE_ELECTRA * random_value
// </spec>
func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex, seedBuf []byte, hashFunc func([]byte) [32]byte, maxBalance uint64, round uint64) (bool, error) {
// Reuse the seed buffer by overwriting the last 8 bytes with the round counter.
binary.LittleEndian.PutUint64(seedBuf[len(seedBuf)-8:], round/16)
@@ -224,16 +263,26 @@ func acceptByBalance(st state.ReadOnlyBeaconState, idx primitives.ValidatorIndex
}
// validIndexedPayloadAttestation verifies the signature of an indexed payload attestation.
// Spec v1.7.0-alpha.0 (pseudocode):
// is_valid_indexed_payload_attestation(state: BeaconState, indexed_payload_attestation: IndexedPayloadAttestation) -> bool:
//
// indices = indexed_payload_attestation.attesting_indices
// return len(indices) > 0 and indices == sorted(indices) and
// bls.FastAggregateVerify(
// [state.validators[i].pubkey for i in indices],
// compute_signing_root(indexed_payload_attestation.data, get_domain(state, DOMAIN_PTC_ATTESTER, compute_epoch_at_slot(attestation.data.slot)),
// indexed_payload_attestation.signature,
// )
// <spec fn="is_valid_indexed_payload_attestation" fork="gloas" hash="d76e0f89">
// def is_valid_indexed_payload_attestation(
// state: BeaconState, attestation: IndexedPayloadAttestation
// ) -> bool:
// """
// Check if ``attestation`` is non-empty, has sorted indices, and has
// a valid aggregate signature.
// """
// # Verify indices are non-empty and sorted
// indices = attestation.attesting_indices
// if len(indices) == 0 or not indices == sorted(indices):
// return False
//
// # Verify aggregate signature
// pubkeys = [state.validators[i].pubkey for i in indices]
// domain = get_domain(state, DOMAIN_PTC_ATTESTER, compute_epoch_at_slot(attestation.data.slot))
// signing_root = compute_signing_root(attestation.data, domain)
// return bls.FastAggregateVerify(pubkeys, signing_root, attestation.signature)
// </spec>
func validIndexedPayloadAttestation(st state.ReadOnlyBeaconState, att *consensus_types.IndexedPayloadAttestation) error {
indices := att.AttestingIndices
if len(indices) == 0 || !slices.IsSorted(indices) {

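To make the selection loop in the spec comments above concrete, here is a self-contained sketch of compute_balance_weighted_selection with shuffle_indices fixed to false (the PTC case), the beacon-state lookup replaced by a plain balance slice, spec constants inlined, and SHA-256 standing in for the spec's hash function. This is illustrative only: the repository version threads a read-only state, a reusable seed buffer, and context cancellation through selectByBalanceFill and acceptByBalance, and it assumes at least one candidate has a balance large enough to ever be accepted (otherwise the loop would not terminate). Acceptance probability is effective_balance / MAX_EFFECTIVE_BALANCE_ELECTRA, so a 32 ETH validator passes roughly one round in 64.

import (
	"crypto/sha256"
	"encoding/binary"
)

const (
	maxRandomValue      = uint64(1<<16 - 1)         // MAX_RANDOM_VALUE = 2**16 - 1
	maxEffectiveBalance = uint64(2_048_000_000_000) // MAX_EFFECTIVE_BALANCE_ELECTRA, in gwei
)

// selectByBalanceSketch returns `size` candidate positions, traversing
// candidates round-robin and accepting each with probability
// proportional to its effective balance.
func selectByBalanceSketch(balances []uint64, seed [32]byte, size int) []int {
	selected := make([]int, 0, size)
	buf := make([]byte, 40) // seed (32 bytes) || round counter (8 bytes)
	copy(buf, seed[:])
	for i := uint64(0); len(selected) < size; i++ {
		next := int(i % uint64(len(balances))) // no shuffling for the PTC
		binary.LittleEndian.PutUint64(buf[32:], i/16)
		h := sha256.Sum256(buf)
		offset := (i % 16) * 2
		randomValue := uint64(binary.LittleEndian.Uint16(h[offset : offset+2]))
		// Accept iff effective_balance * MAX_RANDOM_VALUE >= max balance * random_value.
		if balances[next]*maxRandomValue >= maxEffectiveBalance*randomValue {
			selected = append(selected, next)
		}
	}
	return selected
}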
View File

@@ -10,17 +10,21 @@ import (
)
// ProcessBuilderPendingPayments processes the builder pending payments from the previous epoch.
// Spec v1.7.0-alpha.0 (pseudocode):
// def process_builder_pending_payments(state: BeaconState) -> None:
//
// quorum = get_builder_payment_quorum_threshold(state)
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]:
// if payment.weight >= quorum:
// state.builder_pending_withdrawals.append(payment.withdrawal)
// <spec fn="process_builder_pending_payments" fork="gloas" hash="10da48dd">
// def process_builder_pending_payments(state: BeaconState) -> None:
// """
// Processes the builder pending payments from the previous epoch.
// """
// quorum = get_builder_payment_quorum_threshold(state)
// for payment in state.builder_pending_payments[:SLOTS_PER_EPOCH]:
// if payment.weight >= quorum:
// state.builder_pending_withdrawals.append(payment.withdrawal)
//
// old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:]
// new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
// state.builder_pending_payments = old_payments + new_payments
// old_payments = state.builder_pending_payments[SLOTS_PER_EPOCH:]
// new_payments = [BuilderPendingPayment() for _ in range(SLOTS_PER_EPOCH)]
// state.builder_pending_payments = old_payments + new_payments
// </spec>
func ProcessBuilderPendingPayments(state state.BeaconState) error {
quorum, err := builderQuorumThreshold(state)
if err != nil {
@@ -53,12 +57,16 @@ func ProcessBuilderPendingPayments(state state.BeaconState) error {
}
// builderQuorumThreshold calculates the quorum threshold for builder payments.
// Spec v1.7.0-alpha.0 (pseudocode):
// def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
//
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR)
// <spec fn="get_builder_payment_quorum_threshold" fork="gloas" hash="a64b7ffb">
// def get_builder_payment_quorum_threshold(state: BeaconState) -> uint64:
// """
// Calculate the quorum threshold for builder payments.
// """
// per_slot_balance = get_total_active_balance(state) // SLOTS_PER_EPOCH
// quorum = per_slot_balance * BUILDER_PAYMENT_THRESHOLD_NUMERATOR
// return uint64(quorum // BUILDER_PAYMENT_THRESHOLD_DENOMINATOR)
// </spec>
func builderQuorumThreshold(state state.ReadOnlyBeaconState) (primitives.Gwei, error) {
activeBalance, err := helpers.TotalActiveBalance(state)
if err != nil {

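The threshold arithmetic above is simple integer division; a sketch with the two spec constants left as parameters, since their concrete values come from params.BeaconConfig() and are not shown in this diff:

// quorumThresholdSketch mirrors get_builder_payment_quorum_threshold:
// the per-slot share of total active balance, scaled by the payment
// threshold fraction. All quantities are in gwei.
func quorumThresholdSketch(totalActiveBalance, slotsPerEpoch, numerator, denominator uint64) uint64 {
	perSlotBalance := totalActiveBalance / slotsPerEpoch
	return perSlotBalance * numerator / denominator
}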
View File

@@ -11,16 +11,20 @@ import (
)
// RemoveBuilderPendingPayment removes the pending builder payment for the proposal slot.
// Spec v1.7.0 (pseudocode):
//
// <spec fn="process_proposer_slashing" fork="gloas" lines="22-32" hash="4da721ef">
// # [New in Gloas:EIP7732]
// # Remove the BuilderPendingPayment corresponding to
// # this proposal if it is still in the 2-epoch window.
// slot = header_1.slot
// proposal_epoch = compute_epoch_at_slot(slot)
// if proposal_epoch == get_current_epoch(state):
// payment_index = SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// payment_index = SLOTS_PER_EPOCH + slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// elif proposal_epoch == get_previous_epoch(state):
// payment_index = slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// payment_index = slot % SLOTS_PER_EPOCH
// state.builder_pending_payments[payment_index] = BuilderPendingPayment()
// </spec>
func RemoveBuilderPendingPayment(st state.BeaconState, header *eth.BeaconBlockHeader) error {
proposalEpoch := slots.ToEpoch(header.Slot)
currentEpoch := time.CurrentEpoch(st)

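Both branches above rely on the same layout that process_execution_payload_bid and process_builder_pending_payments use: builder_pending_payments holds two epochs of entries, previous epoch first. A hypothetical helper showing the index computation (the repository inlines it at each call site):

// pendingPaymentIndex maps a proposal slot to its position in the
// 2*SLOTS_PER_EPOCH builder_pending_payments vector: the previous epoch
// occupies [0, SLOTS_PER_EPOCH) and the current epoch occupies
// [SLOTS_PER_EPOCH, 2*SLOTS_PER_EPOCH).
func pendingPaymentIndex(slot, slotsPerEpoch uint64, isCurrentEpoch bool) uint64 {
	idx := slot % slotsPerEpoch
	if isCurrentEpoch {
		idx += slotsPerEpoch
	}
	return idx
}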
View File

@@ -143,10 +143,11 @@ func ProcessSlot(ctx context.Context, state state.BeaconState) (state.BeaconStat
return nil, err
}
// Spec v1.6.1 (pseudocode):
// <spec fn="process_slot" fork="gloas" lines="11-13" hash="62b28839">
// # [New in Gloas:EIP7732]
// # Unset the next payload availability
// state.execution_payload_availability[(state.slot + 1) % SLOTS_PER_HISTORICAL_ROOT] = 0b0
// </spec>
if state.Version() >= version.Gloas {
index := uint64((state.Slot() + 1) % params.BeaconConfig().SlotsPerHistoricalRoot)
if err := state.UpdateExecutionPayloadAvailabilityAtIndex(index, 0x0); err != nil {

View File

@@ -78,7 +78,7 @@ func newGloasState(t *testing.T, slot primitives.Slot, availability []byte) stat
BlockHash: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitmentsRoot: make([]byte, 32),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Eth1Data: &ethpb.Eth1Data{
DepositRoot: make([]byte, 32),

View File

@@ -2,6 +2,7 @@ package kv
import (
"context"
"slices"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
@@ -33,6 +34,9 @@ func (s *Store) LastArchivedRoot(ctx context.Context) [32]byte {
if err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(stateSlotIndicesBucket)
_, blockRoot = bkt.Cursor().Last()
if len(blockRoot) > 0 {
blockRoot = slices.Clone(blockRoot)
}
return nil
}); err != nil { // This view never returns an error, but we'll handle anyway for sanity.
panic(err) // lint:nopanic -- View never returns an error.
@@ -51,6 +55,9 @@ func (s *Store) ArchivedPointRoot(ctx context.Context, slot primitives.Slot) [32
if err := s.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket(stateSlotIndicesBucket)
blockRoot = bucket.Get(bytesutil.SlotToBytesBigEndian(slot))
if len(blockRoot) > 0 {
blockRoot = slices.Clone(blockRoot)
}
return nil
}); err != nil { // This view never returns an error, but we'll handle anyway for sanity.
panic(err) // lint:nopanic -- View never returns an error.

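This and the following kv diffs apply the same fix: bbolt returns value slices backed by its memory-mapped database file, valid only for the life of the transaction, so any value that escapes the View closure must be copied first. A minimal sketch of the safe pattern, assuming the go.etcd.io/bbolt import aliased as bolt as in these files (readCloned is illustrative, not part of the change):

import (
	"slices"

	bolt "go.etcd.io/bbolt"
)

// readCloned fetches a value inside a read transaction and clones it so
// the caller can still use it after the transaction closes.
func readCloned(db *bolt.DB, bucket, key []byte) ([]byte, error) {
	var out []byte
	err := db.View(func(tx *bolt.Tx) error {
		bkt := tx.Bucket(bucket)
		if bkt == nil {
			return bolt.ErrBucketNotFound
		}
		if v := bkt.Get(key); len(v) > 0 {
			out = slices.Clone(v) // copy before the tx closes
		}
		return nil
	})
	return out, err
}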
View File

@@ -812,7 +812,10 @@ func (s *Store) FeeRecipientByValidatorID(ctx context.Context, id primitives.Val
var addr []byte
err := s.db.View(func(tx *bolt.Tx) error {
bkt := tx.Bucket(feeRecipientBucket)
addr = bkt.Get(bytesutil.Uint64ToBytesBigEndian(uint64(id)))
stored := bkt.Get(bytesutil.Uint64ToBytesBigEndian(uint64(id)))
if len(stored) > 0 {
addr = slices.Clone(stored)
}
// If the fee recipient is not found in the standard fee recipient bucket, then
// check the registration bucket. The fee recipient may be there.
// This is to resolve incompatibility until we fully migrate to the registration bucket.
@@ -826,7 +829,7 @@ func (s *Store) FeeRecipientByValidatorID(ctx context.Context, id primitives.Val
if err := decode(ctx, enc, reg); err != nil {
return err
}
addr = reg.FeeRecipient
addr = slices.Clone(reg.FeeRecipient)
}
return nil
})

View File

@@ -3,6 +3,7 @@ package kv
import (
"context"
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
"github.com/ethereum/go-ethereum/common"
@@ -17,7 +18,10 @@ func (s *Store) DepositContractAddress(ctx context.Context) ([]byte, error) {
var addr []byte
if err := s.db.View(func(tx *bolt.Tx) error {
chainInfo := tx.Bucket(chainMetadataBucket)
addr = chainInfo.Get(depositContractAddressKey)
stored := chainInfo.Get(depositContractAddressKey)
if len(stored) > 0 {
addr = slices.Clone(stored)
}
return nil
}); err != nil { // This view never returns an error, but we'll handle anyway for sanity.
panic(err) // lint:nopanic -- View never returns an error.

View File

@@ -199,7 +199,7 @@ func performValidatorStateMigration(ctx context.Context, bar *progressbar.Progre
func stateBucketKeys(stateBucket *bolt.Bucket) ([][]byte, error) {
var keys [][]byte
if err := stateBucket.ForEach(func(pubKey, v []byte) error {
keys = append(keys, pubKey)
keys = append(keys, bytes.Clone(pubKey))
return nil
}); err != nil {
return nil, err

View File

@@ -2,6 +2,7 @@ package kv
import (
"context"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/cmd/beacon-chain/flags"
@@ -187,20 +188,23 @@ func (s *Store) getDiff(lvl int, slot uint64) (hdiff.HdiffBytes, error) {
return bolt.ErrBucketNotFound
}
buf := append(key, stateSuffix...)
stateDiff = bucket.Get(buf)
if stateDiff == nil {
rawStateDiff := bucket.Get(buf)
if len(rawStateDiff) == 0 {
return errors.New("state diff not found")
}
stateDiff = slices.Clone(rawStateDiff)
buf = append(key, validatorSuffix...)
validatorDiff = bucket.Get(buf)
if validatorDiff == nil {
rawValidatorDiff := bucket.Get(buf)
if len(rawValidatorDiff) == 0 {
return errors.New("validator diff not found")
}
validatorDiff = slices.Clone(rawValidatorDiff)
buf = append(key, balancesSuffix...)
balancesDiff = bucket.Get(buf)
if balancesDiff == nil {
rawBalancesDiff := bucket.Get(buf)
if len(rawBalancesDiff) == 0 {
return errors.New("balances diff not found")
}
balancesDiff = slices.Clone(rawBalancesDiff)
return nil
})
@@ -224,10 +228,11 @@ func (s *Store) getFullSnapshot(slot uint64) (state.BeaconState, error) {
if bucket == nil {
return bolt.ErrBucketNotFound
}
enc = bucket.Get(key)
if enc == nil {
rawEnc := bucket.Get(key)
if rawEnc == nil {
return errors.New("state not found")
}
enc = slices.Clone(rawEnc)
return nil
})

View File

@@ -2,6 +2,7 @@ package kv
import (
"context"
"slices"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
@@ -47,7 +48,11 @@ func (s *Store) StateSummary(ctx context.Context, blockRoot [32]byte) (*ethpb.St
}
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
enc = tx.Bucket(stateSummaryBucket).Get(blockRoot[:])
rawEnc := tx.Bucket(stateSummaryBucket).Get(blockRoot[:])
if len(rawEnc) == 0 {
return nil
}
enc = slices.Clone(rawEnc)
return nil
}); err != nil {
return nil, err

View File

@@ -8,7 +8,6 @@ go_library(
"deposit.go",
"engine_client.go",
"errors.go",
"graffiti_info.go",
"log.go",
"log_processing.go",
"metrics.go",
@@ -90,7 +89,6 @@ go_test(
"engine_client_fuzz_test.go",
"engine_client_test.go",
"execution_chain_test.go",
"graffiti_info_test.go",
"init_test.go",
"log_processing_test.go",
"mock_test.go",

View File

@@ -61,17 +61,7 @@ var (
}
)
// ClientVersionV1 represents the response from engine_getClientVersionV1.
type ClientVersionV1 struct {
Code string `json:"code"`
Name string `json:"name"`
Version string `json:"version"`
Commit string `json:"commit"`
}
const (
// GetClientVersionMethod is the engine_getClientVersionV1 method for JSON-RPC.
GetClientVersionMethod = "engine_getClientVersionV1"
// NewPayloadMethod v1 request string for JSON-RPC.
NewPayloadMethod = "engine_newPayloadV1"
// NewPayloadMethodV2 v2 request string for JSON-RPC.
@@ -360,27 +350,6 @@ func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
return elSupportedEndpointsSlice, nil
}
// GetClientVersion calls engine_getClientVersionV1 to retrieve EL client information.
func (s *Service) GetClientVersion(ctx context.Context) ([]ClientVersionV1, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.GetClientVersion")
defer span.End()
// Per spec, we send our own client info as the parameter
clVersion := ClientVersionV1{
Code: CLCode,
Name: Name,
Version: version.SemanticVersion(),
Commit: version.GetCommitPrefix(),
}
var result []ClientVersionV1
err := s.rpcClient.CallContext(ctx, &result, GetClientVersionMethod, clVersion)
if err != nil {
return nil, handleRPCError(err)
}
return result, nil
}
// GetTerminalBlockHash returns the valid terminal block hash based on total difficulty.
//
// Spec code:

View File

@@ -1,126 +0,0 @@
package execution
import (
"sync"
"github.com/OffchainLabs/prysm/v7/runtime/version"
)
const (
// CLCode is the two-letter client code for Prysm.
CLCode = "PR"
Name = "Prysm"
)
// GraffitiInfo holds version information for generating block graffiti.
// It is thread-safe and can be updated by the execution service and read by the validator server.
type GraffitiInfo struct {
mu sync.RWMutex
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
logOnce sync.Once
}
// NewGraffitiInfo creates a new GraffitiInfo.
func NewGraffitiInfo() *GraffitiInfo {
return &GraffitiInfo{}
}
// UpdateFromEngine updates the EL client information.
func (g *GraffitiInfo) UpdateFromEngine(code, commit string) {
g.mu.Lock()
defer g.mu.Unlock()
g.elCode = code
g.elCommit = commit
}
// GenerateGraffiti generates graffiti using the flexible standard
// with the provided user graffiti from the validator client request.
// It places user graffiti first, then appends as much client info as space allows.
//
// Available Space | Format
// ≥12 bytes | user+EL(2)+commit(4)+CL(2)+commit(4) e.g. "SushiGEabcdPRxxxx"
// 8-11 bytes | user+EL(2)+commit(2)+CL(2)+commit(2) e.g. "SushiGEabPRxx"
// 4-7 bytes | user+EL(2)+CL(2) e.g. "SushiGEPR"
// 2-3 bytes | user+code(2) e.g. "SushiGE" or "SushiPR"
// <2 bytes | user only e.g. "Sushi"
func (g *GraffitiInfo) GenerateGraffiti(userGraffiti []byte) [32]byte {
g.mu.RLock()
defer g.mu.RUnlock()
var result [32]byte
userStr := string(userGraffiti)
// Trim trailing null bytes
for len(userStr) > 0 && userStr[len(userStr)-1] == 0 {
userStr = userStr[:len(userStr)-1]
}
available := 32 - len(userStr)
clCommit := version.GetCommitPrefix()
clCommit4 := truncateCommit(clCommit, 4)
clCommit2 := truncateCommit(clCommit, 2)
// If no EL info, clear EL commits but still include CL info
var elCommit4, elCommit2 string
if g.elCode != "" {
elCommit4 = truncateCommit(g.elCommit, 4)
elCommit2 = truncateCommit(g.elCommit, 2)
}
// Add a space separator between user graffiti and client info,
// but only if it won't reduce the space available for client version info.
space := func(minForTier int) string {
if len(userStr) > 0 && available >= minForTier+1 {
return " "
}
return ""
}
var graffiti string
switch {
case available >= 12:
// Full: user+EL(2)+commit(4)+CL(2)+commit(4)
graffiti = userStr + space(12) + g.elCode + elCommit4 + CLCode + clCommit4
case available >= 8:
// Reduced commits: user+EL(2)+commit(2)+CL(2)+commit(2)
graffiti = userStr + space(8) + g.elCode + elCommit2 + CLCode + clCommit2
case available >= 4:
// Codes only: user+EL(2)+CL(2)
graffiti = userStr + space(4) + g.elCode + CLCode
case available >= 2:
// Single code: user+code(2)
if g.elCode != "" {
graffiti = userStr + space(2) + g.elCode
} else {
graffiti = userStr + space(2) + CLCode
}
default:
// User graffiti only
graffiti = userStr
}
g.logOnce.Do(func() {
logGraffitiInfo(graffiti, available)
})
copy(result[:], graffiti)
return result
}
// logGraffitiInfo logs the graffiti that will be used.
func logGraffitiInfo(graffiti string, available int) {
if available >= 2 {
log.WithField("graffiti", graffiti).Info("Graffiti includes client version info appended after user graffiti")
} else {
log.WithField("graffiti", graffiti).Info("Prysm adds consensus and execution debugging information to the end of the graffiti field when possible. To prevent truncation, please consider using a shorter graffiti")
}
}
// truncateCommit returns the first n characters of the commit string.
func truncateCommit(commit string, n int) string {
if len(commit) <= n {
return commit
}
return commit[:n]
}

View File

@@ -1,235 +0,0 @@
package execution
import (
"testing"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestGraffitiInfo_GenerateGraffiti(t *testing.T) {
tests := []struct {
name string
elCode string
elCommit string
userGraffiti []byte
wantPrefix string // user graffiti appears first
wantSuffix string // client version info appended after
}{
// No EL info cases (CL info "PR" + commit still included when space allows)
{
name: "No EL - empty user graffiti",
elCode: "",
elCommit: "",
userGraffiti: []byte{},
wantPrefix: "PR", // Only CL code + commit (no user graffiti to prefix)
},
{
name: "No EL - short user graffiti",
elCode: "",
elCommit: "",
userGraffiti: []byte("my validator"),
wantPrefix: "my validator",
wantSuffix: " PR", // space + CL code appended
},
{
name: "No EL - 28 char user graffiti (4 bytes available)",
elCode: "",
elCommit: "",
userGraffiti: []byte("1234567890123456789012345678"), // 28 chars, 4 bytes available = codes only
wantPrefix: "1234567890123456789012345678",
wantSuffix: "PR", // CL code (no EL, so just PR)
},
{
name: "No EL - 30 char user graffiti (2 bytes available)",
elCode: "",
elCommit: "",
userGraffiti: []byte("123456789012345678901234567890"), // 30 chars, 2 bytes available = fits PR
wantPrefix: "123456789012345678901234567890",
wantSuffix: "PR",
},
{
name: "No EL - 31 char user graffiti (1 byte available)",
elCode: "",
elCommit: "",
userGraffiti: []byte("1234567890123456789012345678901"), // 31 chars, 1 byte available = not enough for code
wantPrefix: "1234567890123456789012345678901", // User only
},
{
name: "No EL - 32 char user graffiti (0 bytes available)",
elCode: "",
elCommit: "",
userGraffiti: []byte("12345678901234567890123456789012"),
wantPrefix: "12345678901234567890123456789012", // User only
},
// With EL info - flexible standard format cases
{
name: "With EL - full format (empty user graffiti)",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte{},
wantPrefix: "GEabcdPR", // No user graffiti, starts with client info
},
{
name: "With EL - full format (short user graffiti)",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("Bob"),
wantPrefix: "Bob",
wantSuffix: " GEabcdPR", // space + EL(2)+commit(4)+CL(2)+commit(4)
},
{
name: "With EL - full format (20 char user, 12 bytes available) - no space, would reduce tier",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("12345678901234567890"), // 20 chars, leaves exactly 12 bytes = full format, no room for space
wantPrefix: "12345678901234567890",
wantSuffix: "GEabcdPR",
},
{
name: "With EL - full format (19 char user, 13 bytes available) - space fits",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("1234567890123456789"), // 19 chars, leaves 13 bytes = full format + space
wantPrefix: "1234567890123456789",
wantSuffix: " GEabcdPR",
},
{
name: "With EL - reduced commits (24 char user, 8 bytes available) - no space, would reduce tier",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("123456789012345678901234"), // 24 chars, leaves exactly 8 bytes = reduced format, no room for space
wantPrefix: "123456789012345678901234",
wantSuffix: "GEabPR",
},
{
name: "With EL - reduced commits (23 char user, 9 bytes available) - space fits",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("12345678901234567890123"), // 23 chars, leaves 9 bytes = reduced format + space
wantPrefix: "12345678901234567890123",
wantSuffix: " GEabPR",
},
{
name: "With EL - codes only (28 char user, 4 bytes available) - no space, would reduce tier",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("1234567890123456789012345678"), // 28 chars, leaves exactly 4 bytes = codes only, no room for space
wantPrefix: "1234567890123456789012345678",
wantSuffix: "GEPR",
},
{
name: "With EL - codes only (27 char user, 5 bytes available) - space fits",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("123456789012345678901234567"), // 27 chars, leaves 5 bytes = codes only + space
wantPrefix: "123456789012345678901234567",
wantSuffix: " GEPR",
},
{
name: "With EL - EL code only (30 char user, 2 bytes available) - no space, would reduce tier",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("123456789012345678901234567890"), // 30 chars, leaves exactly 2 bytes = EL code only, no room for space
wantPrefix: "123456789012345678901234567890",
wantSuffix: "GE",
},
{
name: "With EL - EL code only (29 char user, 3 bytes available) - space fits",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("12345678901234567890123456789"), // 29 chars, leaves 3 bytes = EL code + space
wantPrefix: "12345678901234567890123456789",
wantSuffix: " GE",
},
{
name: "With EL - user only (31 char user, 1 byte available)",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("1234567890123456789012345678901"), // 31 chars, leaves 1 byte = not enough for code
wantPrefix: "1234567890123456789012345678901", // User only
},
{
name: "With EL - user only (32 char user, 0 bytes available)",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: []byte("12345678901234567890123456789012"),
wantPrefix: "12345678901234567890123456789012",
},
// Null byte handling
{
name: "Null bytes - input with trailing nulls",
elCode: "GE",
elCommit: "abcd1234",
userGraffiti: append([]byte("test"), 0, 0, 0),
wantPrefix: "test",
wantSuffix: " GEabcdPR",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
g := NewGraffitiInfo()
if tt.elCode != "" {
g.UpdateFromEngine(tt.elCode, tt.elCommit)
}
result := g.GenerateGraffiti(tt.userGraffiti)
resultStr := string(result[:])
trimmed := trimNullBytes(resultStr)
// Check prefix (user graffiti comes first)
require.Equal(t, true, len(trimmed) >= len(tt.wantPrefix), "Result too short for prefix check")
require.Equal(t, tt.wantPrefix, trimmed[:len(tt.wantPrefix)], "Prefix mismatch")
// Check suffix if specified (client version info appended)
if tt.wantSuffix != "" {
require.Equal(t, true, len(trimmed) >= len(tt.wantSuffix), "Result too short for suffix check")
// The suffix should appear somewhere after the prefix
afterPrefix := trimmed[len(tt.wantPrefix):]
require.Equal(t, true, len(afterPrefix) >= len(tt.wantSuffix), "Not enough room for suffix after prefix")
require.Equal(t, tt.wantSuffix, afterPrefix[:len(tt.wantSuffix)], "Suffix mismatch")
}
})
}
}
func TestGraffitiInfo_UpdateFromEngine(t *testing.T) {
g := NewGraffitiInfo()
// Initially no EL info - should still have CL info (PR + commit)
result := g.GenerateGraffiti([]byte{})
resultStr := trimNullBytes(string(result[:]))
require.Equal(t, "PR", resultStr[:2], "Expected CL info before update")
// Update with EL info
g.UpdateFromEngine("GE", "1234abcd")
result = g.GenerateGraffiti([]byte{})
resultStr = trimNullBytes(string(result[:]))
require.Equal(t, "GE1234PR", resultStr[:8], "Expected EL+CL info after update")
}
func TestTruncateCommit(t *testing.T) {
tests := []struct {
commit string
n int
want string
}{
{"abcd1234", 4, "abcd"},
{"ab", 4, "ab"},
{"", 4, ""},
{"abcdef", 2, "ab"},
}
for _, tt := range tests {
got := truncateCommit(tt.commit, tt.n)
require.Equal(t, tt.want, got)
}
}
func trimNullBytes(s string) string {
for len(s) > 0 && s[len(s)-1] == 0 {
s = s[:len(s)-1]
}
return s
}

View File

@@ -124,11 +124,3 @@ func WithVerifierWaiter(v *verification.InitializerWaiter) Option {
return nil
}
}
// WithGraffitiInfo sets the GraffitiInfo for client version tracking.
func WithGraffitiInfo(g *GraffitiInfo) Option {
return func(s *Service) error {
s.graffitiInfo = g
return nil
}
}

View File

@@ -162,7 +162,6 @@ type Service struct {
verifierWaiter *verification.InitializerWaiter
blobVerifier verification.NewBlobVerifier
capabilityCache *capabilityCache
graffitiInfo *GraffitiInfo
}
// NewService sets up a new instance with an ethclient when given a web3 endpoint as a string in the config.
@@ -319,28 +318,6 @@ func (s *Service) updateConnectedETH1(state bool) {
s.updateBeaconNodeStats()
}
// GraffitiInfo returns the GraffitiInfo struct for graffiti generation.
func (s *Service) GraffitiInfo() *GraffitiInfo {
return s.graffitiInfo
}
// updateGraffitiInfo fetches EL client version and updates the graffiti info.
func (s *Service) updateGraffitiInfo() {
if s.graffitiInfo == nil {
return
}
ctx, cancel := context.WithTimeout(s.ctx, time.Second)
defer cancel()
versions, err := s.GetClientVersion(ctx)
if err != nil {
log.WithError(err).Debug("Could not get execution client version for graffiti")
return
}
if len(versions) >= 1 {
s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
}
}
// refers to the latest eth1 block which follows the condition: eth1_timestamp +
// SECONDS_PER_ETH1_BLOCK * ETH1_FOLLOW_DISTANCE <= current_unix_time
func (s *Service) followedBlockHeight(ctx context.Context) (uint64, error) {
@@ -621,12 +598,6 @@ func (s *Service) run(done <-chan struct{}) {
chainstartTicker := time.NewTicker(logPeriod)
defer chainstartTicker.Stop()
// Update graffiti info 4 times per epoch (~96 seconds with 12s slots and 32 slots/epoch)
graffitiTicker := time.NewTicker(96 * time.Second)
defer graffitiTicker.Stop()
// Initial update
s.updateGraffitiInfo()
for {
select {
case <-done:
@@ -651,8 +622,6 @@ func (s *Service) run(done <-chan struct{}) {
continue
}
s.logTillChainStart(context.Background())
case <-graffitiTicker.C:
s.updateGraffitiInfo()
}
}
}

View File

@@ -20,6 +20,7 @@ go_library(
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],

View File

@@ -6,7 +6,7 @@ go_library(
"doc.go",
"errors.go",
"forkchoice.go",
"last_root.go",
"gloas.go",
"log.go",
"metrics.go",
"node.go",
@@ -33,6 +33,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/tracing/trace:go_default_library",
@@ -51,7 +52,7 @@ go_test(
srcs = [
"ffg_update_test.go",
"forkchoice_test.go",
"last_root_test.go",
"gloas_test.go",
"no_vote_test.go",
"node_test.go",
"on_tick_test.go",
@@ -71,6 +72,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",

View File

@@ -31,8 +31,8 @@ func New() *ForkChoice {
prevJustifiedCheckpoint: &forkchoicetypes.Checkpoint{},
finalizedCheckpoint: &forkchoicetypes.Checkpoint{},
proposerBoostRoot: [32]byte{},
nodeByRoot: make(map[[fieldparams.RootLength]byte]*Node),
nodeByPayload: make(map[[fieldparams.RootLength]byte]*Node),
emptyNodeByRoot: make(map[[fieldparams.RootLength]byte]*PayloadNode),
fullNodeByRoot: make(map[[fieldparams.RootLength]byte]*PayloadNode),
slashedIndices: make(map[primitives.ValidatorIndex]bool),
receivedBlocksLastEpoch: [fieldparams.SlotsPerEpoch]primitives.Slot{},
}
@@ -44,7 +44,7 @@ func New() *ForkChoice {
// NodeCount returns the current number of nodes in the Store.
func (f *ForkChoice) NodeCount() int {
return len(f.store.nodeByRoot)
return len(f.store.emptyNodeByRoot)
}
// Head returns the head root from fork choice store.
@@ -65,14 +65,14 @@ func (f *ForkChoice) Head(
return [32]byte{}, errors.Wrap(err, "could not apply proposer boost score")
}
if err := f.store.treeRootNode.applyWeightChanges(ctx); err != nil {
if err := f.store.applyWeightChangesConsensusNode(ctx, f.store.treeRootNode); err != nil {
return [32]byte{}, errors.Wrap(err, "could not apply weight changes")
}
jc := f.JustifiedCheckpoint()
fc := f.FinalizedCheckpoint()
currentEpoch := slots.EpochsSinceGenesis(f.store.genesisTime)
if err := f.store.treeRootNode.updateBestDescendant(ctx, jc.Epoch, fc.Epoch, currentEpoch); err != nil {
if err := f.store.updateBestDescendantConsensusNode(ctx, f.store.treeRootNode, jc.Epoch, fc.Epoch, currentEpoch); err != nil {
return [32]byte{}, errors.Wrap(err, "could not update best descendant")
}
return f.store.head(ctx)
@@ -119,14 +119,14 @@ func (f *ForkChoice) InsertNode(ctx context.Context, state state.BeaconState, ro
return errInvalidNilCheckpoint
}
finalizedEpoch := fc.Epoch
node, err := f.store.insert(ctx, roblock, justifiedEpoch, finalizedEpoch)
pn, err := f.store.insert(ctx, roblock, justifiedEpoch, finalizedEpoch)
if err != nil {
return err
}
jc, fc = f.store.pullTips(state, node, jc, fc)
jc, fc = f.store.pullTips(state, pn.node, jc, fc)
if err := f.updateCheckpoints(ctx, jc, fc); err != nil {
_, remErr := f.store.removeNode(ctx, node)
_, remErr := f.store.removeNode(ctx, pn)
if remErr != nil {
log.WithError(remErr).Error("Could not remove node")
}
@@ -149,49 +149,63 @@ func (f *ForkChoice) updateCheckpoints(ctx context.Context, jc, fc *ethpb.Checkp
if fc.Epoch <= f.store.finalizedCheckpoint.Epoch {
return nil
}
f.store.finalizedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: fc.Epoch,
Root: bytesutil.ToBytes32(fc.Root)}
f.store.finalizedCheckpoint = &forkchoicetypes.Checkpoint{
Epoch: fc.Epoch,
Root: bytesutil.ToBytes32(fc.Root),
}
return f.store.prune(ctx)
}
// HasNode returns true if the node exists in fork choice store,
// false else wise.
func (f *ForkChoice) HasNode(root [32]byte) bool {
_, ok := f.store.nodeByRoot[root]
_, ok := f.store.emptyNodeByRoot[root]
return ok
}
// IsCanonical returns true if the given root is part of the canonical chain.
func (f *ForkChoice) IsCanonical(root [32]byte) bool {
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
// It is fine to pick the empty node here since we only check whether the beacon block is canonical.
pn, ok := f.store.emptyNodeByRoot[root]
if !ok || pn == nil {
return false
}
if node.bestDescendant == nil {
if pn.node.bestDescendant == nil {
// The node doesn't have any children
if f.store.headNode.bestDescendant == nil {
return node == f.store.headNode
// headNode is itself the head.
return pn.node == f.store.headNode
}
return node == f.store.headNode.bestDescendant
// headNode has not been updated yet and there are descendants
return pn.node == f.store.headNode.bestDescendant
}
// The node has children
if f.store.headNode.bestDescendant == nil {
return node.bestDescendant == f.store.headNode
return pn.node.bestDescendant == f.store.headNode
}
return node.bestDescendant == f.store.headNode.bestDescendant
return pn.node.bestDescendant == f.store.headNode.bestDescendant
}
// IsOptimistic returns true if the given root has been optimistically synced.
// TODO (Gloas): the current implementation uses the result of the full block for
// the given root. In Gloas this would be incorrect; callers should specify the
// payload content, so we should expose full/empty versions of this call.
func (f *ForkChoice) IsOptimistic(root [32]byte) (bool, error) {
if f.store.allTipsAreInvalid {
return true, nil
}
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
en, ok := f.store.emptyNodeByRoot[root]
if !ok || en == nil {
return true, ErrNilNode
}
fn := f.store.fullNodeByRoot[root]
if fn != nil {
return fn.optimistic, nil
}
return node.optimistic, nil
return en.optimistic, nil
}
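
A minimal sketch of the full/empty-aware variant that the TODO above anticipates; the name, signature, and map-based store here are hypothetical, not part of this PR:

package sketch

import "errors"

var errNilNode = errors.New("nil node in forkchoice store")

type payloadNode struct{ optimistic bool }

// isOptimisticForContent reports the optimistic status of a specific payload
// version of a block: the caller states whether it asks about the full or the
// empty node, instead of defaulting to the full one when it exists.
func isOptimisticForContent(empty, full map[[32]byte]*payloadNode, root [32]byte, wantFull bool) (bool, error) {
	m := empty
	if wantFull {
		m = full
	}
	pn := m[root]
	if pn == nil {
		return true, errNilNode
	}
	return pn.optimistic, nil
}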
// AncestorRoot returns the ancestor root of input block root at a given slot.
@@ -199,17 +213,21 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot primi
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.AncestorRoot")
defer span.End()
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
pn, ok := f.store.emptyNodeByRoot[root]
if !ok || pn == nil {
return [32]byte{}, errors.Wrap(ErrNilNode, "could not determine ancestor root")
}
n := node
for n != nil && n.slot > slot {
n := pn.node
for n.slot > slot {
if ctx.Err() != nil {
return [32]byte{}, ctx.Err()
}
n = n.parent
if n.parent == nil {
n = nil
break
}
n = n.parent.node
}
if n == nil {
@@ -222,10 +240,11 @@ func (f *ForkChoice) AncestorRoot(ctx context.Context, root [32]byte, slot primi
// IsViableForCheckpoint returns whether the root passed is a checkpoint root for any
// known chain in forkchoice.
func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool, error) {
node, ok := f.store.nodeByRoot[cp.Root]
if !ok || node == nil {
pn, ok := f.store.emptyNodeByRoot[cp.Root]
if !ok || pn == nil {
return false, nil
}
node := pn.node
epochStart, err := slots.EpochStart(cp.Epoch)
if err != nil {
return false, err
@@ -234,10 +253,13 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
return false, nil
}
if len(node.children) == 0 {
// If it's the start of the epoch, it is a checkpoint
if node.slot == epochStart {
return true, nil
}
if node.slot == epochStart {
// If there are no descendants of this beacon block, it is viable as a checkpoint
children := f.store.allConsensusChildren(node)
if len(children) == 0 {
return true, nil
}
if !features.Get().IgnoreUnviableAttestations {
@@ -247,7 +269,8 @@ func (f *ForkChoice) IsViableForCheckpoint(cp *forkchoicetypes.Checkpoint) (bool
return true, nil
}
}
for _, child := range node.children {
// If some child is after the start of the epoch, the checkpoint is viable.
for _, child := range children {
if child.slot > epochStart {
return true, nil
}
@@ -288,7 +311,7 @@ func (f *ForkChoice) updateBalances() error {
if vote.currentRoot != vote.nextRoot || oldBalance != newBalance {
// Ignore the vote if the root is not in fork choice
// store, that means we have not seen the block before.
nextNode, ok := f.store.nodeByRoot[vote.nextRoot]
nextNode, ok := f.store.emptyNodeByRoot[vote.nextRoot]
if ok && vote.nextRoot != zHash {
// Protection against nil node
if nextNode == nil {
@@ -297,7 +320,7 @@ func (f *ForkChoice) updateBalances() error {
nextNode.balance += newBalance
}
currentNode, ok := f.store.nodeByRoot[vote.currentRoot]
currentNode, ok := f.store.emptyNodeByRoot[vote.currentRoot]
if ok && vote.currentRoot != zHash {
// Protection against nil node
if currentNode == nil {
@@ -338,13 +361,13 @@ func (f *ForkChoice) ProposerBoost() [fieldparams.RootLength]byte {
return f.store.proposerBoost()
}
// SetOptimisticToValid sets the node with the given root as a fully validated node
// SetOptimisticToValid sets the node with the given root as a fully validated node. The payload for this root MUST have been processed.
func (f *ForkChoice) SetOptimisticToValid(ctx context.Context, root [fieldparams.RootLength]byte) error {
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
fn, ok := f.store.fullNodeByRoot[root]
if !ok || fn == nil {
return errors.Wrap(ErrNilNode, "could not set node to valid")
}
return node.setNodeAndParentValidated(ctx)
return f.store.setNodeAndParentValidated(ctx, fn)
}
// PreviousJustifiedCheckpoint of fork choice store.
@@ -363,8 +386,8 @@ func (f *ForkChoice) FinalizedCheckpoint() *forkchoicetypes.Checkpoint {
}
// SetOptimisticToInvalid removes a block with an invalid execution payload from fork choice store
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, parentRoot, payloadHash [fieldparams.RootLength]byte) ([][32]byte, error) {
return f.store.setOptimisticToInvalid(ctx, root, parentRoot, payloadHash)
func (f *ForkChoice) SetOptimisticToInvalid(ctx context.Context, root, parentRoot, parentHash, payloadHash [fieldparams.RootLength]byte) ([][32]byte, error) {
return f.store.setOptimisticToInvalid(ctx, root, parentRoot, parentHash, payloadHash)
}
// InsertSlashedIndex adds the given slashed validator index to the
@@ -387,7 +410,7 @@ func (f *ForkChoice) InsertSlashedIndex(_ context.Context, index primitives.Vali
return
}
node, ok := f.store.nodeByRoot[f.votes[index].currentRoot]
node, ok := f.store.emptyNodeByRoot[f.votes[index].currentRoot]
if !ok || node == nil {
return
}
@@ -422,22 +445,30 @@ func (f *ForkChoice) UpdateFinalizedCheckpoint(fc *forkchoicetypes.Checkpoint) e
}
// CommonAncestor returns the common ancestor root and slot between the two block roots r1 and r2.
// This is payload aware. Consider the following situation
// [A,full] <--- [B, full] <---[C,pending]
//
// \---------[B, empty] <--[D, pending]
//
// Then even though C and D both descend from the beacon block B, their common ancestor is A.
// Note also that this function **requires** the two roots to be actual contending blocks;
// otherwise the behavior is undefined.
func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byte) ([32]byte, primitives.Slot, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.CommonAncestorRoot")
defer span.End()
n1, ok := f.store.nodeByRoot[r1]
if !ok || n1 == nil {
en1, ok := f.store.emptyNodeByRoot[r1]
if !ok || en1 == nil {
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
// Do nothing if the input roots are the same.
if r1 == r2 {
return r1, n1.slot, nil
return r1, en1.node.slot, nil
}
n2, ok := f.store.nodeByRoot[r2]
if !ok || n2 == nil {
en2, ok := f.store.emptyNodeByRoot[r2]
if !ok || en2 == nil {
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
@@ -445,23 +476,23 @@ func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byt
if ctx.Err() != nil {
return [32]byte{}, 0, ctx.Err()
}
if n1.slot > n2.slot {
n1 = n1.parent
if en1.node.slot > en2.node.slot {
en1 = en1.node.parent
// Reaches the end of the tree and unable to find common ancestor.
// This should not happen at runtime as the finalized
// node has to be a common ancestor
if n1 == nil {
if en1 == nil {
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
} else {
n2 = n2.parent
en2 = en2.node.parent
// Reaches the end of the tree and unable to find common ancestor.
if n2 == nil {
if en2 == nil {
return [32]byte{}, 0, forkchoice.ErrUnknownCommonAncestor
}
}
if n1 == n2 {
return n1.root, n1.slot, nil
if en1 == en2 {
return en1.node.root, en1.node.slot, nil
}
}
}
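
The loop above is the classic two-pointer ancestor walk, made payload aware because each step follows a parent link that points at a specific PayloadNode. A self-contained sketch of the same walk, collapsing the Node/PayloadNode indirection into one type for brevity:

package sketch

import "errors"

var errUnknownAncestor = errors.New("unknown common ancestor")

type caNode struct {
	root   [32]byte
	slot   uint64
	parent *caNode
}

// commonAncestor repeatedly steps the deeper (higher-slot) pointer to its
// parent until the two pointers meet. Since (B, full) and (B, empty) are
// distinct nodes, the walk meets at A in the diagram from the comment above.
func commonAncestor(n1, n2 *caNode) (*caNode, error) {
	for n1 != n2 {
		if n1 == nil || n2 == nil {
			return nil, errUnknownAncestor
		}
		if n1.slot > n2.slot {
			n1 = n1.parent
		} else {
			n2 = n2.parent
		}
	}
	return n1, nil
}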
@@ -508,35 +539,17 @@ func (f *ForkChoice) CachedHeadRoot() [32]byte {
// FinalizedPayloadBlockHash returns the hash of the payload at the finalized checkpoint
func (f *ForkChoice) FinalizedPayloadBlockHash() [32]byte {
root := f.FinalizedCheckpoint().Root
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
// This should not happen
return [32]byte{}
}
return node.payloadHash
return f.store.latestHashForRoot(f.FinalizedCheckpoint().Root)
}
// JustifiedPayloadBlockHash returns the hash of the payload at the justified checkpoint
func (f *ForkChoice) JustifiedPayloadBlockHash() [32]byte {
root := f.JustifiedCheckpoint().Root
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
// This should not happen
return [32]byte{}
}
return node.payloadHash
return f.store.latestHashForRoot(f.JustifiedCheckpoint().Root)
}
// UnrealizedJustifiedPayloadBlockHash returns the hash of the payload at the unrealized justified checkpoint
func (f *ForkChoice) UnrealizedJustifiedPayloadBlockHash() [32]byte {
root := f.store.unrealizedJustifiedCheckpoint.Root
node, ok := f.store.nodeByRoot[root]
if !ok || node == nil {
// This should not happen
return [32]byte{}
}
return node.payloadHash
return f.store.latestHashForRoot(f.store.unrealizedJustifiedCheckpoint.Root)
}
// ForkChoiceDump returns a full dump of forkchoice.
@@ -560,7 +573,7 @@ func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, err
nodes := make([]*forkchoice2.Node, 0, f.NodeCount())
var err error
if f.store.treeRootNode != nil {
nodes, err = f.store.treeRootNode.nodeTreeDump(ctx, nodes)
nodes, err = f.store.nodeTreeDump(ctx, f.store.treeRootNode, nodes)
if err != nil {
return nil, err
}
@@ -589,7 +602,7 @@ func (f *ForkChoice) SetBalancesByRooter(handler forkchoice.BalancesByRooter) {
// Weight returns the weight of the given root if found on the store
func (f *ForkChoice) Weight(root [32]byte) (uint64, error) {
n, ok := f.store.nodeByRoot[root]
n, ok := f.store.emptyNodeByRoot[root]
if !ok || n == nil {
return 0, ErrNilNode
}
@@ -617,11 +630,11 @@ func (f *ForkChoice) updateJustifiedBalances(ctx context.Context, root [32]byte)
// Slot returns the slot of the given root if it's known to forkchoice
func (f *ForkChoice) Slot(root [32]byte) (primitives.Slot, error) {
n, ok := f.store.nodeByRoot[root]
n, ok := f.store.emptyNodeByRoot[root]
if !ok || n == nil {
return 0, ErrNilNode
}
return n.slot, nil
return n.node.slot, nil
}
// DependentRoot returns the last root of the epoch prior to the requested epoch in the canonical chain.
@@ -629,7 +642,7 @@ func (f *ForkChoice) DependentRoot(epoch primitives.Epoch) ([32]byte, error) {
return f.DependentRootForEpoch(f.CachedHeadRoot(), epoch)
}
// DependentRootForEpoch return the last root of the epoch prior to the requested ecoch for the given root.
// DependentRootForEpoch returns the last root of the epoch prior to the requested epoch for the given root.
func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
tr, err := f.TargetRootForEpoch(root, epoch)
if err != nil {
@@ -638,18 +651,18 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
if tr == [32]byte{} {
return [32]byte{}, nil
}
node, ok := f.store.nodeByRoot[tr]
if !ok || node == nil {
en, ok := f.store.emptyNodeByRoot[tr]
if !ok || en == nil {
return [32]byte{}, ErrNilNode
}
if slots.ToEpoch(node.slot) >= epoch {
if node.parent != nil {
node = node.parent
if slots.ToEpoch(en.node.slot) >= epoch {
if en.node.parent != nil {
en = en.node.parent
} else {
return f.store.finalizedDependentRoot, nil
}
}
return node.root, nil
return en.node.root, nil
}
// TargetRootForEpoch returns the root of the target block for a given epoch.
@@ -661,46 +674,48 @@ func (f *ForkChoice) DependentRootForEpoch(root [32]byte, epoch primitives.Epoch
// which case we return the root of the checkpoint of the chain containing the
// passed root, at the given epoch
func (f *ForkChoice) TargetRootForEpoch(root [32]byte, epoch primitives.Epoch) ([32]byte, error) {
n, ok := f.store.nodeByRoot[root]
n, ok := f.store.emptyNodeByRoot[root]
if !ok || n == nil {
return [32]byte{}, ErrNilNode
}
nodeEpoch := slots.ToEpoch(n.slot)
node := n.node
nodeEpoch := slots.ToEpoch(node.slot)
if epoch > nodeEpoch {
return n.root, nil
return node.root, nil
}
if n.target == nil {
if node.target == nil {
return [32]byte{}, nil
}
targetRoot := n.target.root
targetRoot := node.target.root
if epoch == nodeEpoch {
return targetRoot, nil
}
targetNode, ok := f.store.nodeByRoot[targetRoot]
targetNode, ok := f.store.emptyNodeByRoot[targetRoot]
if !ok || targetNode == nil {
return [32]byte{}, ErrNilNode
}
// If slot 0 of the epoch was not missed, the target is in the same epoch, so take a previous block to go back at least one epoch
if nodeEpoch == slots.ToEpoch(targetNode.slot) {
targetNode = targetNode.parent
if nodeEpoch == slots.ToEpoch(targetNode.node.slot) {
targetNode = targetNode.node.parent
if targetNode == nil {
return [32]byte{}, ErrNilNode
}
}
return f.TargetRootForEpoch(targetNode.root, epoch)
return f.TargetRootForEpoch(targetNode.node.root, epoch)
}
// ParentRoot returns the block root of the parent node if it is in forkchoice.
// The exception is the finalized checkpoint root, for which we return the
// zero hash.
func (f *ForkChoice) ParentRoot(root [32]byte) ([32]byte, error) {
n, ok := f.store.nodeByRoot[root]
n, ok := f.store.emptyNodeByRoot[root]
if !ok || n == nil {
return [32]byte{}, ErrNilNode
}
// Return the zero hash for the tree root
if n.parent == nil {
parent := n.node.parent
if parent == nil {
return [32]byte{}, nil
}
return n.parent.root, nil
return parent.node.root, nil
}

View File

@@ -3,7 +3,6 @@ package doublylinkedtree
import (
"context"
"encoding/binary"
"errors"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/forkchoice"
@@ -104,9 +103,9 @@ func TestForkChoice_UpdateBalancesPositiveChange(t *testing.T) {
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
s := f.store
assert.Equal(t, uint64(10), s.nodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.nodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.nodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(10), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.emptyNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
@@ -122,9 +121,9 @@ func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
s := f.store
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
f.balances = []uint64{100, 100, 100}
f.votes = []Vote{
@@ -135,9 +134,9 @@ func TestForkChoice_UpdateBalancesNegativeChange(t *testing.T) {
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
assert.Equal(t, uint64(10), s.nodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.nodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.nodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(10), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(20), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(30), s.emptyNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_UpdateBalancesUnderflow(t *testing.T) {
@@ -153,9 +152,9 @@ func TestForkChoice_UpdateBalancesUnderflow(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
s := f.store
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
f.balances = []uint64{125, 125, 125}
f.votes = []Vote{
@@ -166,9 +165,9 @@ func TestForkChoice_UpdateBalancesUnderflow(t *testing.T) {
f.justifiedBalances = []uint64{10, 20, 30}
require.NoError(t, f.updateBalances())
assert.Equal(t, uint64(0), s.nodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(0), s.nodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(5), s.nodeByRoot[indexToHash(3)].balance)
assert.Equal(t, uint64(0), s.emptyNodeByRoot[indexToHash(1)].balance)
assert.Equal(t, uint64(0), s.emptyNodeByRoot[indexToHash(2)].balance)
assert.Equal(t, uint64(5), s.emptyNodeByRoot[indexToHash(3)].balance)
}
func TestForkChoice_IsCanonical(t *testing.T) {
@@ -224,12 +223,12 @@ func TestForkChoice_IsCanonicalReorg(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
f.store.nodeByRoot[[32]byte{'3'}].balance = 10
require.NoError(t, f.store.treeRootNode.applyWeightChanges(ctx))
require.Equal(t, uint64(10), f.store.nodeByRoot[[32]byte{'1'}].weight)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'2'}].weight)
f.store.emptyNodeByRoot[[32]byte{'3'}].balance = 10
require.NoError(t, f.store.applyWeightChangesConsensusNode(ctx, f.store.treeRootNode))
require.Equal(t, uint64(10), f.store.emptyNodeByRoot[[32]byte{'1'}].node.weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'2'}].node.weight)
require.NoError(t, f.store.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
require.NoError(t, f.store.updateBestDescendantConsensusNode(ctx, f.store.treeRootNode, 1, 1, 1))
require.DeepEqual(t, [32]byte{'3'}, f.store.treeRootNode.bestDescendant.root)
r1 := [32]byte{'1'}
@@ -260,7 +259,7 @@ func TestForkChoice_AncestorRoot(t *testing.T) {
st, roblock, err = prepareForkchoiceState(ctx, 5, indexToHash(3), indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
f.store.treeRootNode = f.store.nodeByRoot[indexToHash(1)]
f.store.treeRootNode = f.store.emptyNodeByRoot[indexToHash(1)].node
f.store.treeRootNode.parent = nil
r, err := f.AncestorRoot(ctx, indexToHash(3), 6)
@@ -342,21 +341,21 @@ func TestForkChoice_RemoveEquivocating(t *testing.T) {
// Process b's slashing, c is now head
f.InsertSlashedIndex(ctx, 1)
require.Equal(t, uint64(200), f.store.nodeByRoot[[32]byte{'b'}].balance)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].balance)
f.justifiedBalances = []uint64{100, 200, 200, 300}
head, err = f.Head(ctx)
require.Equal(t, uint64(200), f.store.nodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.nodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, head)
// Process b's slashing again, should be a noop
f.InsertSlashedIndex(ctx, 1)
require.Equal(t, uint64(200), f.store.nodeByRoot[[32]byte{'b'}].balance)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].balance)
f.justifiedBalances = []uint64{100, 200, 200, 300}
head, err = f.Head(ctx)
require.Equal(t, uint64(200), f.store.nodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.nodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(200), f.store.emptyNodeByRoot[[32]byte{'b'}].weight)
require.Equal(t, uint64(300), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, head)
@@ -514,58 +513,6 @@ func TestStore_CommonAncestor(t *testing.T) {
})
}
// a -- b -- c -- d
f = setup(0, 0)
st, roblock, err = prepareForkchoiceState(ctx, 0, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 1, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 2, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
st, roblock, err = prepareForkchoiceState(ctx, 3, [32]byte{'d'}, [32]byte{'c'}, [32]byte{}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
tests = []struct {
name string
r1 [32]byte
r2 [32]byte
wantRoot [32]byte
wantSlot primitives.Slot
}{
{
name: "Common ancestor between a and b is a",
r1: [32]byte{'a'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
{
name: "Common ancestor between b and d is b",
r1: [32]byte{'d'},
r2: [32]byte{'b'},
wantRoot: [32]byte{'b'},
wantSlot: 1,
},
{
name: "Common ancestor between d and a is a",
r1: [32]byte{'d'},
r2: [32]byte{'a'},
wantRoot: [32]byte{'a'},
wantSlot: 0,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotRoot, gotSlot, err := f.CommonAncestor(ctx, tc.r1, tc.r2)
require.NoError(t, err)
require.Equal(t, tc.wantRoot, gotRoot)
require.Equal(t, tc.wantSlot, gotSlot)
})
}
// Equal inputs should return the same root.
r, s, err := f.CommonAncestor(ctx, [32]byte{'b'}, [32]byte{'b'})
require.NoError(t, err)
@@ -588,10 +535,9 @@ func TestStore_CommonAncestor(t *testing.T) {
unrealizedJustifiedEpoch: 1,
finalizedEpoch: 1,
unrealizedFinalizedEpoch: 1,
optimistic: true,
}
f.store.nodeByRoot[[32]byte{'y'}] = n
f.store.emptyNodeByRoot[[32]byte{'y'}] = &PayloadNode{node: n, optimistic: true}
// broken link
_, _, err = f.CommonAncestor(ctx, [32]byte{'y'}, [32]byte{'a'})
require.ErrorIs(t, err, forkchoice.ErrUnknownCommonAncestor)
@@ -610,7 +556,8 @@ func TestStore_InsertChain(t *testing.T) {
require.NoError(t, err)
roblock, err := blocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
blks = append(blks, &forkchoicetypes.BlockAndCheckpoints{Block: roblock,
blks = append(blks, &forkchoicetypes.BlockAndCheckpoints{
Block: roblock,
JustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: params.BeaconConfig().ZeroHash[:]},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: params.BeaconConfig().ZeroHash[:]},
})
@@ -625,7 +572,8 @@ func TestStore_InsertChain(t *testing.T) {
require.NoError(t, err)
roblock, err := blocks.NewROBlockWithRoot(wsb, root)
require.NoError(t, err)
blks = append(blks, &forkchoicetypes.BlockAndCheckpoints{Block: roblock,
blks = append(blks, &forkchoicetypes.BlockAndCheckpoints{
Block: roblock,
JustifiedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: params.BeaconConfig().ZeroHash[:]},
FinalizedCheckpoint: &ethpb.Checkpoint{Epoch: 1, Root: params.BeaconConfig().ZeroHash[:]},
})
@@ -742,7 +690,7 @@ func TestWeight(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
n, ok := f.store.nodeByRoot[root]
n, ok := f.store.emptyNodeByRoot[root]
require.Equal(t, true, ok)
n.weight = 10
w, err := f.Weight(root)
@@ -914,16 +862,3 @@ func TestForkchoiceParentRoot(t *testing.T) {
require.NoError(t, err)
require.Equal(t, zeroHash, root)
}
func TestForkChoice_CleanupInserting(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
st, roblock, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 2, 2)
f.SetBalancesByRooter(func(_ context.Context, _ [32]byte) ([]uint64, error) {
return f.justifiedBalances, errors.New("mock err")
})
require.NoError(t, err)
require.NotNil(t, f.InsertNode(ctx, st, roblock))
require.Equal(t, false, f.HasNode(roblock.Root()))
}

View File

@@ -0,0 +1,327 @@
package doublylinkedtree
import (
"bytes"
"context"
"slices"
"time"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
forkchoice2 "github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/pkg/errors"
)
func (s *Store) resolveParentPayloadStatus(block interfaces.ReadOnlyBeaconBlock, parent **PayloadNode, blockHash *[32]byte) error {
sb, err := block.Body().SignedExecutionPayloadBid()
if err != nil {
return err
}
wb, err := blocks.WrappedROSignedExecutionPayloadBid(sb)
if err != nil {
return errors.Wrap(err, "failed to wrap signed bid")
}
bid, err := wb.Bid()
if err != nil {
return errors.Wrap(err, "failed to get bid from wrapped bid")
}
*blockHash = bid.BlockHash()
parentRoot := block.ParentRoot()
*parent = s.emptyNodeByRoot[parentRoot]
if *parent == nil {
// This is the tree root node.
return nil
}
if bid.ParentBlockHash() == (*parent).node.blockHash {
// block builds on full
*parent = s.fullNodeByRoot[(*parent).node.root]
}
return nil
}
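
The decision rule here is that a Gloas block commits to its parent's payload via the bid's parent block hash: if that hash equals the payload hash carried by the parent's consensus node, the child builds on (parent, full); otherwise it builds on (parent, empty). A sketch of just that rule, with simplified stand-ins for the store maps:

package sketch

type sNode struct{ root, blockHash [32]byte }

type sPayloadNode struct{ node *sNode }

// chooseParent selects between the empty and full wrappers of the parent root
// based on the bid's parent block hash.
func chooseParent(empty, full map[[32]byte]*sPayloadNode, parentRoot, bidParentHash [32]byte) *sPayloadNode {
	parent := empty[parentRoot]
	if parent == nil {
		return nil // tree root node
	}
	if bidParentHash == parent.node.blockHash {
		return full[parentRoot] // the child commits to the parent's payload
	}
	return parent // the child builds on the payload-less (empty) parent
}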
// applyWeightChangesConsensusNode recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node.
func (s *Store) applyWeightChangesConsensusNode(ctx context.Context, n *Node) error {
// Recursively calling the children to sum their weights.
en := s.emptyNodeByRoot[n.root]
if err := s.applyWeightChangesPayloadNode(ctx, en); err != nil {
return err
}
childrenWeight := en.weight
fn := s.fullNodeByRoot[n.root]
if fn != nil {
if err := s.applyWeightChangesPayloadNode(ctx, fn); err != nil {
return err
}
childrenWeight += fn.weight
}
if n.root == params.BeaconConfig().ZeroHash {
return nil
}
n.weight = n.balance + childrenWeight
return nil
}
// applyWeightChangesPayloadNode recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node.
func (s *Store) applyWeightChangesPayloadNode(ctx context.Context, n *PayloadNode) error {
// Recursively calling the children to sum their weights.
childrenWeight := uint64(0)
for _, child := range n.children {
if ctx.Err() != nil {
return ctx.Err()
}
if err := s.applyWeightChangesConsensusNode(ctx, child); err != nil {
return err
}
childrenWeight += child.weight
}
n.weight = n.balance + childrenWeight
return nil
}
// allConsensusChildren returns the list of all consensus blocks that build on the given node.
func (s *Store) allConsensusChildren(n *Node) []*Node {
en := s.emptyNodeByRoot[n.root]
fn, ok := s.fullNodeByRoot[n.root]
if ok {
return append(slices.Clone(en.children), fn.children...)
}
return en.children
}
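
The slices.Clone above matters: if en.children has spare capacity, appending fn.children directly would write into en.children's backing array, silently mutating state shared with the store. A small illustration of the safe pattern:

package sketch

import "slices"

// combinedChildren returns a fresh slice holding both child lists; cloning
// first guarantees the append cannot scribble over the first list's backing
// array.
func combinedChildren[T any](enChildren, fnChildren []T) []T {
	return append(slices.Clone(enChildren), fnChildren...)
}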
// setNodeAndParentValidated sets the current node and all the ancestors as validated (i.e. non-optimistic).
func (s *Store) setNodeAndParentValidated(ctx context.Context, pn *PayloadNode) error {
if ctx.Err() != nil {
return ctx.Err()
}
if !pn.optimistic {
return nil
}
pn.optimistic = false
if pn.full {
// set the empty node as valid as well
en := s.emptyNodeByRoot[pn.node.root]
en.optimistic = false
}
if pn.node.parent == nil {
return nil
}
return s.setNodeAndParentValidated(ctx, pn.node.parent)
}
// fullAncestor returns the latest ancestor with a full payload for the block with the
// given root. If there is a payload for the passed root, it returns that full
// node. Otherwise it returns the nearest full ancestor of the given root.
func (s *Store) fullAncestor(root [32]byte) *PayloadNode {
fn, ok := s.fullNodeByRoot[root]
if ok {
return fn
}
en := s.emptyNodeByRoot[root]
if en == nil {
return nil
}
return s.fullParent(en)
}
// fullParent returns the latest full node that this block builds on.
func (s *Store) fullParent(pn *PayloadNode) *PayloadNode {
parent := pn.node.parent
for parent != nil && !parent.full {
parent = parent.node.parent
}
return parent
}
// parentHash returns the payload hash of the latest full node that this block builds on.
func (s *Store) parentHash(pn *PayloadNode) [32]byte {
fullParent := s.fullParent(pn)
if fullParent == nil {
return [32]byte{}
}
return fullParent.node.blockHash
}
// latestHashForRoot returns the latest payload hash for the given block root.
func (s *Store) latestHashForRoot(root [32]byte) [32]byte {
// try to get the full node first
fn, ok := s.fullNodeByRoot[root]
if ok && fn != nil {
return fn.node.blockHash
}
en := s.emptyNodeByRoot[root]
if en == nil {
// This should not happen
return [32]byte{}
}
return s.parentHash(en)
}
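
Put together, latestHashForRoot resolves a block root to a payload hash as follows: if the root itself has a full node, its own payload hash is the latest; otherwise walk parent links until the first full ancestor and use its hash. A flattened sketch of that composition (the guard checks en directly, matching the fix above):

package sketch

type wNode struct {
	parent    *wPayloadNode
	blockHash [32]byte
}

type wPayloadNode struct {
	node *wNode
	full bool
}

// latestHash returns the block's own payload hash when a full node exists,
// and the hash of the nearest full ancestor otherwise.
func latestHash(fn, en *wPayloadNode) [32]byte {
	if fn != nil {
		return fn.node.blockHash
	}
	if en == nil {
		return [32]byte{} // should not happen for known roots
	}
	p := en.node.parent
	for p != nil && !p.full {
		p = p.node.parent
	}
	if p == nil {
		return [32]byte{}
	}
	return p.node.blockHash
}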
// updateBestDescendantPayloadNode updates the best descendant of this node and its
// children.
func (s *Store) updateBestDescendantPayloadNode(ctx context.Context, n *PayloadNode, justifiedEpoch, finalizedEpoch, currentEpoch primitives.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
}
var bestChild *Node
bestWeight := uint64(0)
for _, child := range n.children {
if child == nil {
return errors.Wrap(ErrNilNode, "could not update best descendant")
}
if err := s.updateBestDescendantConsensusNode(ctx, child, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
childLeadsToViableHead := child.leadsToViableHead(justifiedEpoch, currentEpoch)
if childLeadsToViableHead && bestChild == nil {
// The child leads to a viable head, but the current
// parent's best child doesn't.
bestWeight = child.weight
bestChild = child
} else if childLeadsToViableHead {
// If both are viable, compare their weights.
if child.weight == bestWeight {
// Tie-breaker of equal weights by root.
if bytes.Compare(child.root[:], bestChild.root[:]) > 0 {
bestChild = child
}
} else if child.weight > bestWeight {
bestChild = child
bestWeight = child.weight
}
}
}
if bestChild == nil {
n.bestDescendant = nil
} else {
if bestChild.bestDescendant == nil {
n.bestDescendant = bestChild
} else {
n.bestDescendant = bestChild.bestDescendant
}
}
return nil
}
// updateBestDescendantConsensusNode updates the best descendant of this node and its
// children.
func (s *Store) updateBestDescendantConsensusNode(ctx context.Context, n *Node, justifiedEpoch, finalizedEpoch, currentEpoch primitives.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
}
if len(s.allConsensusChildren(n)) == 0 {
n.bestDescendant = nil
return nil
}
en := s.emptyNodeByRoot[n.root]
if err := s.updateBestDescendantPayloadNode(ctx, en, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
fn := s.fullNodeByRoot[n.root]
if fn == nil {
n.bestDescendant = en.bestDescendant
return nil
}
// TODO (Gloas): pick between full and empty
if err := s.updateBestDescendantPayloadNode(ctx, fn, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
n.bestDescendant = fn.bestDescendant
return nil
}
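
The selection loop above prefers the heaviest child that leads to a viable head and, on equal weight, breaks ties toward the lexicographically larger root so every node picks the same winner deterministically. A self-contained sketch of just the selection logic:

package sketch

import "bytes"

type bNode struct {
	root   [32]byte
	weight uint64
	viable bool
}

// bestChild returns the heaviest viable child, tie-breaking by larger root.
func bestChild(children []*bNode) *bNode {
	var best *bNode
	bestWeight := uint64(0)
	for _, c := range children {
		if !c.viable {
			continue
		}
		switch {
		case best == nil:
			best, bestWeight = c, c.weight
		case c.weight > bestWeight:
			best, bestWeight = c, c.weight
		case c.weight == bestWeight && bytes.Compare(c.root[:], best.root[:]) > 0:
			best = c
		}
	}
	return best
}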
// choosePayloadContent chooses between the empty and full nodes for the passed consensus node. TODO (Gloas): use the PTC to choose.
func (s *Store) choosePayloadContent(n *Node) *PayloadNode {
if n == nil {
return nil
}
fn := s.fullNodeByRoot[n.root]
if fn != nil {
return fn
}
return s.emptyNodeByRoot[n.root]
}
// nodeTreeDump appends to the given list all the nodes descending from this one
func (s *Store) nodeTreeDump(ctx context.Context, n *Node, nodes []*forkchoice2.Node) ([]*forkchoice2.Node, error) {
if ctx.Err() != nil {
return nil, ctx.Err()
}
var parentRoot [32]byte
if n.parent != nil {
parentRoot = n.parent.node.root
}
target := [32]byte{}
if n.target != nil {
target = n.target.root
}
optimistic := false
if n.parent != nil {
optimistic = n.parent.optimistic
}
en := s.emptyNodeByRoot[n.root]
timestamp := en.timestamp
fn := s.fullNodeByRoot[n.root]
if fn != nil {
optimistic = fn.optimistic
timestamp = fn.timestamp
}
thisNode := &forkchoice2.Node{
Slot: n.slot,
BlockRoot: n.root[:],
ParentRoot: parentRoot[:],
JustifiedEpoch: n.justifiedEpoch,
FinalizedEpoch: n.finalizedEpoch,
UnrealizedJustifiedEpoch: n.unrealizedJustifiedEpoch,
UnrealizedFinalizedEpoch: n.unrealizedFinalizedEpoch,
Balance: n.balance,
Weight: n.weight,
ExecutionOptimistic: optimistic,
ExecutionBlockHash: n.blockHash[:],
Timestamp: timestamp,
Target: target[:],
}
if optimistic {
thisNode.Validity = forkchoice2.Optimistic
} else {
thisNode.Validity = forkchoice2.Valid
}
nodes = append(nodes, thisNode)
var err error
children := s.allConsensusChildren(n)
for _, child := range children {
nodes, err = s.nodeTreeDump(ctx, child, nodes)
if err != nil {
return nil, err
}
}
return nodes, nil
}
func (f *ForkChoice) InsertPayload(ctx context.Context, pe interfaces.ROExecutionPayloadEnvelope) error {
s := f.store
root := pe.BeaconBlockRoot()
en := s.emptyNodeByRoot[root]
if en == nil {
return errors.Wrap(ErrNilNode, "cannot insert full node without an empty one")
}
if _, ok := s.fullNodeByRoot[root]; ok {
// We don't import two payloads for the same root
return nil
}
fn := &PayloadNode{
node: en.node,
optimistic: true,
timestamp: time.Now(),
full: true,
children: make([]*Node, 0),
}
s.fullNodeByRoot[root] = fn
return nil
}

View File

@@ -0,0 +1,300 @@
package doublylinkedtree
import (
"context"
"testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
state_native "github.com/OffchainLabs/prysm/v7/beacon-chain/state/state-native"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
enginev1 "github.com/OffchainLabs/prysm/v7/proto/engine/v1"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/assert"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
)
func prepareGloasForkchoiceState(
_ context.Context,
slot primitives.Slot,
blockRoot [32]byte,
parentRoot [32]byte,
blockHash [32]byte,
parentBlockHash [32]byte,
justifiedEpoch primitives.Epoch,
finalizedEpoch primitives.Epoch,
) (state.BeaconState, blocks.ROBlock, error) {
blockHeader := &ethpb.BeaconBlockHeader{
ParentRoot: parentRoot[:],
}
justifiedCheckpoint := &ethpb.Checkpoint{
Epoch: justifiedEpoch,
}
finalizedCheckpoint := &ethpb.Checkpoint{
Epoch: finalizedEpoch,
}
builderPendingPayments := make([]*ethpb.BuilderPendingPayment, 64)
for i := range builderPendingPayments {
builderPendingPayments[i] = &ethpb.BuilderPendingPayment{
Withdrawal: &ethpb.BuilderPendingWithdrawal{
FeeRecipient: make([]byte, 20),
},
}
}
base := &ethpb.BeaconStateGloas{
Slot: slot,
RandaoMixes: make([][]byte, params.BeaconConfig().EpochsPerHistoricalVector),
CurrentJustifiedCheckpoint: justifiedCheckpoint,
FinalizedCheckpoint: finalizedCheckpoint,
LatestBlockHeader: blockHeader,
LatestExecutionPayloadBid: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: make([]byte, 32),
PrevRandao: make([]byte, 32),
FeeRecipient: make([]byte, 20),
BlobKzgCommitments: [][]byte{make([]byte, 48)},
},
Builders: make([]*ethpb.Builder, 0),
BuilderPendingPayments: builderPendingPayments,
ExecutionPayloadAvailability: make([]byte, 1024),
LatestBlockHash: make([]byte, 32),
PayloadExpectedWithdrawals: make([]*enginev1.Withdrawal, 0),
ProposerLookahead: make([]uint64, 64),
}
st, err := state_native.InitializeFromProtoUnsafeGloas(base)
if err != nil {
return nil, blocks.ROBlock{}, err
}
bid := util.HydrateSignedExecutionPayloadBid(&ethpb.SignedExecutionPayloadBid{
Message: &ethpb.ExecutionPayloadBid{
BlockHash: blockHash[:],
ParentBlockHash: parentBlockHash[:],
},
})
blk := util.HydrateSignedBeaconBlockGloas(&ethpb.SignedBeaconBlockGloas{
Block: &ethpb.BeaconBlockGloas{
Slot: slot,
ParentRoot: parentRoot[:],
Body: &ethpb.BeaconBlockBodyGloas{
SignedExecutionPayloadBid: bid,
},
},
})
signed, err := blocks.NewSignedBeaconBlock(blk)
if err != nil {
return nil, blocks.ROBlock{}, err
}
roblock, err := blocks.NewROBlockWithRoot(signed, blockRoot)
return st, roblock, err
}
func prepareGloasForkchoicePayload(
blockRoot [32]byte,
) (interfaces.ROExecutionPayloadEnvelope, error) {
env := &ethpb.ExecutionPayloadEnvelope{
BeaconBlockRoot: blockRoot[:],
Payload: &enginev1.ExecutionPayloadDeneb{},
}
return blocks.WrappedROExecutionPayloadEnvelope(env)
}
func TestInsertGloasBlock_EmptyNodeOnly(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Empty node should exist.
en := f.store.emptyNodeByRoot[root]
require.NotNil(t, en)
// Full node should NOT exist.
_, hasFull := f.store.fullNodeByRoot[root]
assert.Equal(t, false, hasFull)
// Parent should be the genesis full node.
genesisRoot := params.BeaconConfig().ZeroHash
genesisFull := f.store.fullNodeByRoot[genesisRoot]
require.NotNil(t, genesisFull)
assert.Equal(t, genesisFull, en.node.parent)
}
func TestInsertPayload_CreatesFullNode(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
require.Equal(t, 2, len(f.store.emptyNodeByRoot))
require.Equal(t, 1, len(f.store.fullNodeByRoot))
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(ctx, pe))
require.Equal(t, 2, len(f.store.fullNodeByRoot))
fn := f.store.fullNodeByRoot[root]
require.NotNil(t, fn)
en := f.store.emptyNodeByRoot[root]
require.NotNil(t, en)
// Empty and full share the same *Node.
assert.Equal(t, en.node, fn.node)
assert.Equal(t, true, fn.optimistic)
assert.Equal(t, true, fn.full)
}
func TestInsertPayload_DuplicateIsNoop(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(1)
blockHash := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, root, params.BeaconConfig().ZeroHash, blockHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(ctx, pe))
require.Equal(t, 2, len(f.store.fullNodeByRoot))
fn := f.store.fullNodeByRoot[root]
require.NotNil(t, fn)
// Insert again — should be a no-op.
require.NoError(t, f.InsertPayload(ctx, pe))
assert.Equal(t, fn, f.store.fullNodeByRoot[root])
require.Equal(t, 2, len(f.store.fullNodeByRoot))
}
func TestInsertPayload_WithoutEmptyNode_Errors(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
root := indexToHash(99)
pe, err := prepareGloasForkchoicePayload(root)
require.NoError(t, err)
err = f.InsertPayload(ctx, pe)
require.ErrorContains(t, ErrNilNode.Error(), err)
}
func TestGloasBlock_ChildBuildsOnEmpty(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert Gloas block B as child of (A, empty)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
nonMatchingParentHash := indexToHash(999)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, nonMatchingParentHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
emptyA := f.store.emptyNodeByRoot[rootA]
require.NotNil(t, emptyA)
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
require.Equal(t, emptyA, nodeB.node.parent)
}
func TestGloasBlock_ChildrenOfEmptyAndFull(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert payload for A
pe, err := prepareGloasForkchoicePayload(rootA)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(ctx, pe))
// Insert Gloas block B as child of (A, empty)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
nonMatchingParentHash := indexToHash(999)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, nonMatchingParentHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert Gloas block C as child of (A, full)
rootC := indexToHash(3)
blockHashC := indexToHash(201)
st, roblock, err = prepareGloasForkchoiceState(ctx, 3, rootC, rootA, blockHashC, blockHashA, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
emptyA := f.store.emptyNodeByRoot[rootA]
require.NotNil(t, emptyA)
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
require.Equal(t, emptyA, nodeB.node.parent)
nodeC := f.store.emptyNodeByRoot[rootC]
require.NotNil(t, nodeC)
fullA := f.store.fullNodeByRoot[rootA]
require.NotNil(t, fullA)
require.Equal(t, fullA, nodeC.node.parent)
}
func TestGloasBlock_ChildBuildsOnFull(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
// Insert Gloas block A (empty only).
rootA := indexToHash(1)
blockHashA := indexToHash(100)
st, roblock, err := prepareGloasForkchoiceState(ctx, 1, rootA, params.BeaconConfig().ZeroHash, blockHashA, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
// Insert payload for A → creates the full node.
pe, err := prepareGloasForkchoicePayload(rootA)
require.NoError(t, err)
require.NoError(t, f.InsertPayload(ctx, pe))
fullA := f.store.fullNodeByRoot[rootA]
require.NotNil(t, fullA)
// Child for (A, full)
rootB := indexToHash(2)
blockHashB := indexToHash(200)
st, roblock, err = prepareGloasForkchoiceState(ctx, 2, rootB, rootA, blockHashB, blockHashA, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, roblock))
nodeB := f.store.emptyNodeByRoot[rootB]
require.NotNil(t, nodeB)
assert.Equal(t, fullA, nodeB.node.parent)
}

View File

@@ -1,26 +0,0 @@
package doublylinkedtree
import (
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
// LastRoot returns the last canonical block root in the given epoch
func (f *ForkChoice) LastRoot(epoch primitives.Epoch) [32]byte {
head := f.store.headNode
headEpoch := slots.ToEpoch(head.slot)
epochEnd, err := slots.EpochEnd(epoch)
if err != nil {
return [32]byte{}
}
if headEpoch <= epoch {
return head.root
}
for head != nil && head.slot > epochEnd {
head = head.parent
}
if head == nil {
return [32]byte{}
}
return head.root
}

View File

@@ -1,38 +0,0 @@
package doublylinkedtree
import (
"testing"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/testing/require"
)
func TestLastRoot(t *testing.T) {
f := setup(0, 0)
ctx := t.Context()
st, root, err := prepareForkchoiceState(ctx, 1, [32]byte{'1'}, params.BeaconConfig().ZeroHash, [32]byte{'1'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 2, [32]byte{'2'}, [32]byte{'1'}, [32]byte{'2'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 3, [32]byte{'3'}, [32]byte{'1'}, [32]byte{'3'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 32, [32]byte{'4'}, [32]byte{'3'}, [32]byte{'4'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 33, [32]byte{'5'}, [32]byte{'2'}, [32]byte{'5'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
st, root, err = prepareForkchoiceState(ctx, 34, [32]byte{'6'}, [32]byte{'5'}, [32]byte{'6'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
headNode := f.store.nodeByRoot[[32]byte{'6'}]
f.store.headNode = headNode
require.Equal(t, [32]byte{'6'}, f.store.headNode.root)
require.Equal(t, [32]byte{'2'}, f.LastRoot(0))
require.Equal(t, [32]byte{'6'}, f.LastRoot(1))
require.Equal(t, [32]byte{'6'}, f.LastRoot(2))
}

View File

@@ -1,95 +1,17 @@
package doublylinkedtree
import (
"bytes"
"context"
"time"
"github.com/OffchainLabs/prysm/v7/config/params"
forkchoice2 "github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// ProcessAttestationsThreshold is the amount of time after which we
// process attestations for the current slot
const ProcessAttestationsThreshold = 10 * time.Second
// applyWeightChanges recomputes the weight of the node passed as an argument and all of its descendants,
// using the current balance stored in each node.
func (n *Node) applyWeightChanges(ctx context.Context) error {
// Recursively calling the children to sum their weights.
childrenWeight := uint64(0)
for _, child := range n.children {
if ctx.Err() != nil {
return ctx.Err()
}
if err := child.applyWeightChanges(ctx); err != nil {
return err
}
childrenWeight += child.weight
}
if n.root == params.BeaconConfig().ZeroHash {
return nil
}
n.weight = n.balance + childrenWeight
return nil
}
// updateBestDescendant updates the best descendant of this node and its
// children.
func (n *Node) updateBestDescendant(ctx context.Context, justifiedEpoch, finalizedEpoch, currentEpoch primitives.Epoch) error {
if ctx.Err() != nil {
return ctx.Err()
}
if len(n.children) == 0 {
n.bestDescendant = nil
return nil
}
var bestChild *Node
bestWeight := uint64(0)
hasViableDescendant := false
for _, child := range n.children {
if child == nil {
return errors.Wrap(ErrNilNode, "could not update best descendant")
}
if err := child.updateBestDescendant(ctx, justifiedEpoch, finalizedEpoch, currentEpoch); err != nil {
return err
}
childLeadsToViableHead := child.leadsToViableHead(justifiedEpoch, currentEpoch)
if childLeadsToViableHead && !hasViableDescendant {
// The child leads to a viable head, but the current
// parent's best child doesn't.
bestWeight = child.weight
bestChild = child
hasViableDescendant = true
} else if childLeadsToViableHead {
// If both are viable, compare their weights.
if child.weight == bestWeight {
// Tie-breaker of equal weights by root.
if bytes.Compare(child.root[:], bestChild.root[:]) > 0 {
bestChild = child
}
} else if child.weight > bestWeight {
bestChild = child
bestWeight = child.weight
}
}
}
if hasViableDescendant {
if bestChild.bestDescendant == nil {
n.bestDescendant = bestChild
} else {
n.bestDescendant = bestChild.bestDescendant
}
} else {
n.bestDescendant = nil
}
return nil
}
// viableForHead returns true if the node is viable to head.
// Any node with different finalized or justified epoch than
// the ones in fork choice store should not be viable to head.
@@ -110,30 +32,13 @@ func (n *Node) leadsToViableHead(justifiedEpoch, currentEpoch primitives.Epoch)
return n.bestDescendant.viableForHead(justifiedEpoch, currentEpoch)
}
// setNodeAndParentValidated sets the current node and all the ancestors as validated (i.e. non-optimistic).
func (n *Node) setNodeAndParentValidated(ctx context.Context) error {
if ctx.Err() != nil {
return ctx.Err()
}
if !n.optimistic {
return nil
}
n.optimistic = false
if n.parent == nil {
return nil
}
return n.parent.setNodeAndParentValidated(ctx)
}
// arrivedEarly returns whether this node was inserted before the first
// threshold to orphan a block.
// Note that genesisTime has seconds granularity, therefore we use a strict
// inequality < here. For example a block that arrives 3.9999 seconds into the
// slot will have secs = 3 below.
func (n *Node) arrivedEarly(genesis time.Time) (bool, error) {
sss, err := slots.SinceSlotStart(n.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 3.9999 seconds will have a value of 3.
func (n *PayloadNode) arrivedEarly(genesis time.Time) (bool, error) {
sss, err := slots.SinceSlotStart(n.node.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 3.9999 seconds will have a value of 3.
votingWindow := params.BeaconConfig().SlotComponentDuration(params.BeaconConfig().AttestationDueBPS)
return sss < votingWindow, err
}
@@ -143,52 +48,7 @@ func (n *Node) arrivedEarly(genesis time.Time) (bool, error) {
// Note that genesisTime has seconds granularity, therefore we use an
// inequality >= here. For example a block that arrives 10.00001 seconds into the
// slot will have secs = 10 below.
func (n *Node) arrivedAfterOrphanCheck(genesis time.Time) (bool, error) {
secs, err := slots.SinceSlotStart(n.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 10.00001 seconds will have a value of 10.
func (n *PayloadNode) arrivedAfterOrphanCheck(genesis time.Time) (bool, error) {
secs, err := slots.SinceSlotStart(n.node.slot, genesis, n.timestamp.Truncate(time.Second)) // Truncate such that 10.00001 seconds will have a value of 10.
return secs >= ProcessAttestationsThreshold, err
}
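
Both checks above truncate the arrival timestamp to whole seconds before measuring time since the slot start, because genesis time only has seconds granularity. A small sketch of the arithmetic, assuming the slot starts on a whole second (which seconds-granularity genesis implies):

package sketch

import "time"

// sinceSlotStartTruncated measures how far into the slot a block arrived,
// rounded down to whole seconds: an arrival 3.9999s into the slot measures
// as exactly 3s, so strict "<" keeps it inside the early-arrival window.
func sinceSlotStartTruncated(slotStart, arrival time.Time) time.Duration {
	return arrival.Truncate(time.Second).Sub(slotStart)
}

For instance, with slotStart on a second boundary, sinceSlotStartTruncated(slotStart, slotStart.Add(3999*time.Millisecond)) returns 3*time.Second.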
// nodeTreeDump appends to the given list all the nodes descending from this one
func (n *Node) nodeTreeDump(ctx context.Context, nodes []*forkchoice2.Node) ([]*forkchoice2.Node, error) {
if ctx.Err() != nil {
return nil, ctx.Err()
}
var parentRoot [32]byte
if n.parent != nil {
parentRoot = n.parent.root
}
target := [32]byte{}
if n.target != nil {
target = n.target.root
}
thisNode := &forkchoice2.Node{
Slot: n.slot,
BlockRoot: n.root[:],
ParentRoot: parentRoot[:],
JustifiedEpoch: n.justifiedEpoch,
FinalizedEpoch: n.finalizedEpoch,
UnrealizedJustifiedEpoch: n.unrealizedJustifiedEpoch,
UnrealizedFinalizedEpoch: n.unrealizedFinalizedEpoch,
Balance: n.balance,
Weight: n.weight,
ExecutionOptimistic: n.optimistic,
ExecutionBlockHash: n.payloadHash[:],
Timestamp: n.timestamp,
Target: target[:],
}
if n.optimistic {
thisNode.Validity = forkchoice2.Optimistic
} else {
thisNode.Validity = forkchoice2.Valid
}
nodes = append(nodes, thisNode)
var err error
for _, child := range n.children {
nodes, err = child.nodeTreeDump(ctx, nodes)
if err != nil {
return nil, err
}
}
return nodes, nil
}

View File

@@ -27,15 +27,15 @@ func TestNode_ApplyWeightChanges_PositiveChange(t *testing.T) {
// The updated balances of each node is 100
s := f.store
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
assert.NoError(t, s.treeRootNode.applyWeightChanges(ctx))
assert.NoError(t, s.applyWeightChangesConsensusNode(ctx, s.treeRootNode))
assert.Equal(t, uint64(300), s.nodeByRoot[indexToHash(1)].weight)
assert.Equal(t, uint64(200), s.nodeByRoot[indexToHash(2)].weight)
assert.Equal(t, uint64(100), s.nodeByRoot[indexToHash(3)].weight)
assert.Equal(t, uint64(300), s.emptyNodeByRoot[indexToHash(1)].node.weight)
assert.Equal(t, uint64(200), s.emptyNodeByRoot[indexToHash(2)].node.weight)
assert.Equal(t, uint64(100), s.emptyNodeByRoot[indexToHash(3)].node.weight)
}
func TestNode_ApplyWeightChanges_NegativeChange(t *testing.T) {
@@ -53,19 +53,19 @@ func TestNode_ApplyWeightChanges_NegativeChange(t *testing.T) {
// The updated balances of each node is 100
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 400
s.nodeByRoot[indexToHash(2)].weight = 400
s.nodeByRoot[indexToHash(3)].weight = 400
s.emptyNodeByRoot[indexToHash(1)].weight = 400
s.emptyNodeByRoot[indexToHash(2)].weight = 400
s.emptyNodeByRoot[indexToHash(3)].weight = 400
s.nodeByRoot[indexToHash(1)].balance = 100
s.nodeByRoot[indexToHash(2)].balance = 100
s.nodeByRoot[indexToHash(3)].balance = 100
s.emptyNodeByRoot[indexToHash(1)].balance = 100
s.emptyNodeByRoot[indexToHash(2)].balance = 100
s.emptyNodeByRoot[indexToHash(3)].balance = 100
assert.NoError(t, s.treeRootNode.applyWeightChanges(ctx))
assert.NoError(t, s.applyWeightChangesConsensusNode(ctx, s.treeRootNode))
assert.Equal(t, uint64(300), s.nodeByRoot[indexToHash(1)].weight)
assert.Equal(t, uint64(200), s.nodeByRoot[indexToHash(2)].weight)
assert.Equal(t, uint64(100), s.nodeByRoot[indexToHash(3)].weight)
assert.Equal(t, uint64(300), s.emptyNodeByRoot[indexToHash(1)].node.weight)
assert.Equal(t, uint64(200), s.emptyNodeByRoot[indexToHash(2)].node.weight)
assert.Equal(t, uint64(100), s.emptyNodeByRoot[indexToHash(3)].node.weight)
}
func TestNode_UpdateBestDescendant_NonViableChild(t *testing.T) {
@@ -78,7 +78,7 @@ func TestNode_UpdateBestDescendant_NonViableChild(t *testing.T) {
// Verify parent's best child and best descendant are `none`.
s := f.store
assert.Equal(t, 1, len(s.treeRootNode.children))
assert.Equal(t, 1, len(s.allConsensusChildren(s.treeRootNode)))
nilBestDescendant := s.treeRootNode.bestDescendant == nil
assert.Equal(t, true, nilBestDescendant)
}
@@ -92,8 +92,9 @@ func TestNode_UpdateBestDescendant_ViableChild(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blk))
s := f.store
assert.Equal(t, 1, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[0], s.treeRootNode.bestDescendant)
children := s.allConsensusChildren(s.treeRootNode)
assert.Equal(t, 1, len(children))
assert.Equal(t, children[0], s.treeRootNode.bestDescendant)
}
func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
@@ -108,32 +109,34 @@ func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blk))
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 100
s.nodeByRoot[indexToHash(2)].weight = 200
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
s.emptyNodeByRoot[indexToHash(1)].weight = 100
s.emptyNodeByRoot[indexToHash(2)].weight = 200
assert.NoError(t, s.updateBestDescendantConsensusNode(ctx, s.treeRootNode, 1, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[1], s.treeRootNode.bestDescendant)
children := s.allConsensusChildren(s.treeRootNode)
assert.Equal(t, 2, len(children))
assert.Equal(t, children[1], s.treeRootNode.bestDescendant)
}
func TestNode_UpdateBestDescendant_LowerWeightChild(t *testing.T) {
f := setup(1, 1)
ctx := t.Context()
// Input child is the best descendant
state, blk, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
state, blk, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, indexToHash(101), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
state, blk, err = prepareForkchoiceState(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
state, blk, err = prepareForkchoiceState(ctx, 2, indexToHash(2), params.BeaconConfig().ZeroHash, indexToHash(102), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
s := f.store
s.nodeByRoot[indexToHash(1)].weight = 200
s.nodeByRoot[indexToHash(2)].weight = 100
assert.NoError(t, s.treeRootNode.updateBestDescendant(ctx, 1, 1, 1))
s.emptyNodeByRoot[indexToHash(1)].node.weight = 200
s.emptyNodeByRoot[indexToHash(2)].node.weight = 100
assert.NoError(t, s.updateBestDescendantConsensusNode(ctx, s.treeRootNode, 1, 1, 1))
assert.Equal(t, 2, len(s.treeRootNode.children))
assert.Equal(t, s.treeRootNode.children[0], s.treeRootNode.bestDescendant)
children := s.allConsensusChildren(s.treeRootNode)
assert.Equal(t, 2, len(children))
assert.Equal(t, children[0], s.treeRootNode.bestDescendant)
}
func TestNode_ViableForHead(t *testing.T) {
@@ -176,44 +179,44 @@ func TestNode_LeadsToViableHead(t *testing.T) {
require.NoError(t, f.InsertNode(ctx, state, blk))
require.Equal(t, true, f.store.treeRootNode.leadsToViableHead(4, 5))
require.Equal(t, true, f.store.nodeByRoot[indexToHash(5)].leadsToViableHead(4, 5))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(2)].leadsToViableHead(4, 5))
require.Equal(t, false, f.store.nodeByRoot[indexToHash(4)].leadsToViableHead(4, 5))
require.Equal(t, true, f.store.emptyNodeByRoot[indexToHash(5)].node.leadsToViableHead(4, 5))
require.Equal(t, false, f.store.emptyNodeByRoot[indexToHash(2)].node.leadsToViableHead(4, 5))
require.Equal(t, false, f.store.emptyNodeByRoot[indexToHash(4)].node.leadsToViableHead(4, 5))
}
func TestNode_SetFullyValidated(t *testing.T) {
f := setup(1, 1)
ctx := t.Context()
storeNodes := make([]*Node, 6)
storeNodes[0] = f.store.treeRootNode
storeNodes := make([]*PayloadNode, 6)
storeNodes[0] = f.store.fullNodeByRoot[params.BeaconConfig().ZeroHash]
// insert blocks in the fork pattern (optimistic status in parentheses)
//
// 0 (false) -- 1 (false) -- 2 (false) -- 3 (true) -- 4 (true)
// \
// -- 5 (true)
//
state, blk, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
state, blk, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, indexToHash(101), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[1] = f.store.nodeByRoot[blk.Root()]
require.NoError(t, f.SetOptimisticToValid(ctx, params.BeaconConfig().ZeroHash))
state, blk, err = prepareForkchoiceState(ctx, 2, indexToHash(2), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[2] = f.store.nodeByRoot[blk.Root()]
storeNodes[1] = f.store.fullNodeByRoot[blk.Root()]
require.NoError(t, f.SetOptimisticToValid(ctx, indexToHash(1)))
state, blk, err = prepareForkchoiceState(ctx, 3, indexToHash(3), indexToHash(2), params.BeaconConfig().ZeroHash, 1, 1)
state, blk, err = prepareForkchoiceState(ctx, 2, indexToHash(2), indexToHash(1), indexToHash(102), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[3] = f.store.nodeByRoot[blk.Root()]
state, blk, err = prepareForkchoiceState(ctx, 4, indexToHash(4), indexToHash(3), params.BeaconConfig().ZeroHash, 1, 1)
storeNodes[2] = f.store.fullNodeByRoot[blk.Root()]
require.NoError(t, f.SetOptimisticToValid(ctx, indexToHash(2)))
state, blk, err = prepareForkchoiceState(ctx, 3, indexToHash(3), indexToHash(2), indexToHash(103), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[4] = f.store.nodeByRoot[blk.Root()]
state, blk, err = prepareForkchoiceState(ctx, 5, indexToHash(5), indexToHash(1), params.BeaconConfig().ZeroHash, 1, 1)
storeNodes[3] = f.store.fullNodeByRoot[blk.Root()]
state, blk, err = prepareForkchoiceState(ctx, 4, indexToHash(4), indexToHash(3), indexToHash(104), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[5] = f.store.nodeByRoot[blk.Root()]
storeNodes[4] = f.store.fullNodeByRoot[blk.Root()]
state, blk, err = prepareForkchoiceState(ctx, 5, indexToHash(5), indexToHash(1), indexToHash(105), 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blk))
storeNodes[5] = f.store.fullNodeByRoot[blk.Root()]
opt, err := f.IsOptimistic(indexToHash(5))
require.NoError(t, err)
@@ -223,7 +226,7 @@ func TestNode_SetFullyValidated(t *testing.T) {
require.NoError(t, err)
require.Equal(t, true, opt)
require.NoError(t, f.store.nodeByRoot[indexToHash(4)].setNodeAndParentValidated(ctx))
require.NoError(t, f.store.setNodeAndParentValidated(ctx, f.store.fullNodeByRoot[indexToHash(4)]))
// block 5 should still be optimistic
opt, err = f.IsOptimistic(indexToHash(5))
@@ -240,20 +243,20 @@ func TestNode_SetFullyValidated(t *testing.T) {
require.Equal(t, false, opt)
respNodes := make([]*forkchoice.Node, 0)
respNodes, err = f.store.treeRootNode.nodeTreeDump(ctx, respNodes)
respNodes, err = f.store.nodeTreeDump(ctx, f.store.treeRootNode, respNodes)
require.NoError(t, err)
require.Equal(t, len(respNodes), f.NodeCount())
for i, respNode := range respNodes {
require.Equal(t, storeNodes[i].slot, respNode.Slot)
require.DeepEqual(t, storeNodes[i].root[:], respNode.BlockRoot)
require.Equal(t, storeNodes[i].balance, respNode.Balance)
require.Equal(t, storeNodes[i].weight, respNode.Weight)
require.Equal(t, storeNodes[i].node.slot, respNode.Slot)
require.DeepEqual(t, storeNodes[i].node.root[:], respNode.BlockRoot)
require.Equal(t, storeNodes[i].node.balance, respNode.Balance)
require.Equal(t, storeNodes[i].node.weight, respNode.Weight)
require.Equal(t, storeNodes[i].optimistic, respNode.ExecutionOptimistic)
require.Equal(t, storeNodes[i].justifiedEpoch, respNode.JustifiedEpoch)
require.Equal(t, storeNodes[i].unrealizedJustifiedEpoch, respNode.UnrealizedJustifiedEpoch)
require.Equal(t, storeNodes[i].finalizedEpoch, respNode.FinalizedEpoch)
require.Equal(t, storeNodes[i].unrealizedFinalizedEpoch, respNode.UnrealizedFinalizedEpoch)
require.Equal(t, storeNodes[i].node.justifiedEpoch, respNode.JustifiedEpoch)
require.Equal(t, storeNodes[i].node.unrealizedJustifiedEpoch, respNode.UnrealizedJustifiedEpoch)
require.Equal(t, storeNodes[i].node.finalizedEpoch, respNode.FinalizedEpoch)
require.Equal(t, storeNodes[i].node.unrealizedFinalizedEpoch, respNode.UnrealizedFinalizedEpoch)
require.Equal(t, storeNodes[i].timestamp, respNode.Timestamp)
}
}
@@ -272,10 +275,10 @@ func TestNode_TimeStampsChecks(t *testing.T) {
headRoot, err := f.Head(ctx)
require.NoError(t, err)
require.Equal(t, root, headRoot)
early, err := f.store.headNode.arrivedEarly(f.store.genesisTime)
early, err := f.store.choosePayloadContent(f.store.headNode).arrivedEarly(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, true, early)
late, err := f.store.headNode.arrivedAfterOrphanCheck(f.store.genesisTime)
late, err := f.store.choosePayloadContent(f.store.headNode).arrivedAfterOrphanCheck(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, false, late)
@@ -289,10 +292,10 @@ func TestNode_TimeStampsChecks(t *testing.T) {
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, root, headRoot)
early, err = f.store.headNode.arrivedEarly(f.store.genesisTime)
early, err = f.store.choosePayloadContent(f.store.headNode).arrivedEarly(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, false, early)
late, err = f.store.headNode.arrivedAfterOrphanCheck(f.store.genesisTime)
late, err = f.store.choosePayloadContent(f.store.headNode).arrivedAfterOrphanCheck(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, false, late)
@@ -305,10 +308,10 @@ func TestNode_TimeStampsChecks(t *testing.T) {
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, root, headRoot)
early, err = f.store.headNode.arrivedEarly(f.store.genesisTime)
early, err = f.store.choosePayloadContent(f.store.headNode).arrivedEarly(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, false, early)
late, err = f.store.headNode.arrivedAfterOrphanCheck(f.store.genesisTime)
late, err = f.store.choosePayloadContent(f.store.headNode).arrivedAfterOrphanCheck(f.store.genesisTime)
require.NoError(t, err)
require.Equal(t, true, late)
@@ -320,10 +323,10 @@ func TestNode_TimeStampsChecks(t *testing.T) {
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, root, headRoot)
early, err = f.store.headNode.arrivedEarly(f.store.genesisTime)
early, err = f.store.choosePayloadContent(f.store.headNode).arrivedEarly(f.store.genesisTime)
require.ErrorContains(t, "invalid timestamp", err)
require.Equal(t, true, early)
late, err = f.store.headNode.arrivedAfterOrphanCheck(f.store.genesisTime)
late, err = f.store.choosePayloadContent(f.store.headNode).arrivedAfterOrphanCheck(f.store.genesisTime)
require.ErrorContains(t, "invalid timestamp", err)
require.Equal(t, false, late)
}
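choosePayloadContent appears throughout these timestamp checks but is not defined in this diff. Its assumed behavior, sketched for context (illustrative only): resolve a consensus node to one of its PayloadNodes, preferring the full variant when both exist.

func (s *Store) choosePayloadContent(n *Node) *PayloadNode {
    // Sketch: prefer the full payload node if the payload was present,
    // otherwise fall back to the empty one.
    if fn := s.fullNodeByRoot[n.root]; fn != nil {
        return fn
    }
    return s.emptyNodeByRoot[n.root]
}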

View File

@@ -7,93 +7,141 @@ import (
"github.com/pkg/errors"
)
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, lastValidHash [32]byte) ([][32]byte, error) {
// setOptimisticToInvalid removes invalid nodes from forkchoice. It does NOT remove the empty node for the passed root.
func (s *Store) setOptimisticToInvalid(ctx context.Context, root, parentRoot, parentHash, lastValidHash [32]byte) ([][32]byte, error) {
invalidRoots := make([][32]byte, 0)
node, ok := s.nodeByRoot[root]
if !ok {
node, ok = s.nodeByRoot[parentRoot]
if !ok || node == nil {
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
n := s.fullNodeByRoot[root]
if n == nil {
// The offending node with its payload is not in forkchoice. Try with the parent
n = s.emptyNodeByRoot[parentRoot]
if n == nil {
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid, could not find consensus parent")
}
// return early if the parent is LVH
if node.payloadHash == lastValidHash {
if n.node.blockHash == lastValidHash {
// The parent node must have been full, with a valid payload
return invalidRoots, nil
}
} else {
if node == nil {
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid")
if n.node.blockHash == parentHash {
// The parent was full and invalid
n = s.fullNodeByRoot[parentRoot]
if n == nil {
return invalidRoots, errors.Wrap(ErrNilNode, "could not set node to invalid, could not find full parent")
}
} else {
// The parent is empty and we don't yet know if it's valid or not
for n = n.node.parent; n != nil; n = n.node.parent {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
if n.node.blockHash == lastValidHash {
// The node built on empty and the whole chain was valid
return invalidRoots, nil
}
if n.node.blockHash == parentHash {
// The parent was full and invalid
break
}
}
if n == nil {
return nil, errors.Wrap(ErrNilNode, "could not set node to invalid, could not find full parent in ancestry")
}
}
if node.parent.root != parentRoot {
return invalidRoots, errInvalidParentRoot
} else {
// check consistency with the parent information
if n.node.parent == nil {
return nil, ErrNilNode
}
if n.node.parent.node.root != parentRoot {
return nil, errInvalidParentRoot
}
}
firstInvalid := node
for ; firstInvalid.parent != nil && firstInvalid.parent.payloadHash != lastValidHash; firstInvalid = firstInvalid.parent {
// n points to a full node with an invalid payload in forkchoice. We need to find the first node in the chain that is actually invalid.
startNode := n
fp := s.fullParent(n)
for ; fp != nil && fp.node.blockHash != lastValidHash; fp = s.fullParent(fp) {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
n = fp
}
// Deal with the case where the last valid payload is in a different fork.
// This means we are dealing with an EE that does not follow the spec
if firstInvalid.parent == nil {
if fp == nil {
// return early if the invalid node was not imported
if node.root == parentRoot {
if startNode.node.root != root {
return invalidRoots, nil
}
firstInvalid = node
// Remove just the imported invalid root
n = startNode
}
return s.removeNode(ctx, firstInvalid)
return s.removeNode(ctx, n)
}
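fullParent drives the ancestor walk above but is not shown in this diff. One plausible shape, assuming parent pointers link PayloadNodes through their shared consensus Node (hypothetical, for illustration):

func (s *Store) fullParent(pn *PayloadNode) *PayloadNode {
    // Sketch: step to the consensus parent and return its full variant, or
    // nil if the parent slot had no payload or we reached the tree root.
    if pn == nil || pn.node == nil || pn.node.parent == nil {
        return nil
    }
    return s.fullNodeByRoot[pn.node.parent.node.root]
}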
// removeNode removes the node with the given root and all of its children
// from the Fork Choice Store.
func (s *Store) removeNode(ctx context.Context, node *Node) ([][32]byte, error) {
func (s *Store) removeNode(ctx context.Context, pn *PayloadNode) ([][32]byte, error) {
invalidRoots := make([][32]byte, 0)
if node == nil {
if pn == nil {
return invalidRoots, errors.Wrap(ErrNilNode, "could not remove node")
}
if !node.optimistic || node.parent == nil {
if !pn.optimistic || pn.node.parent == nil {
return invalidRoots, errInvalidOptimisticStatus
}
children := node.parent.children
children := pn.node.parent.children
if len(children) == 1 {
node.parent.children = []*Node{}
pn.node.parent.children = []*Node{}
} else {
for i, n := range children {
if n == node {
if n == pn.node {
if i != len(children)-1 {
children[i] = children[len(children)-1]
}
node.parent.children = children[:len(children)-1]
pn.node.parent.children = children[:len(children)-1]
break
}
}
}
return s.removeNodeAndChildren(ctx, node, invalidRoots)
return s.removeNodeAndChildren(ctx, pn, invalidRoots)
}
// removeNodeAndChildren removes `node` and all of its descendant from the Store
func (s *Store) removeNodeAndChildren(ctx context.Context, node *Node, invalidRoots [][32]byte) ([][32]byte, error) {
func (s *Store) removeNodeAndChildren(ctx context.Context, pn *PayloadNode, invalidRoots [][32]byte) ([][32]byte, error) {
var err error
for _, child := range node.children {
// If we are removing an empty node, then remove the full node as well if it exists.
if !pn.full {
fn, ok := s.fullNodeByRoot[pn.node.root]
if ok {
invalidRoots, err = s.removeNodeAndChildren(ctx, fn, invalidRoots)
if err != nil {
return invalidRoots, err
}
}
}
// Now remove this node's own children.
for _, child := range pn.children {
if ctx.Err() != nil {
return invalidRoots, ctx.Err()
}
if invalidRoots, err = s.removeNodeAndChildren(ctx, child, invalidRoots); err != nil {
// We need to remove only the empty node here since the recursion will take care of the full one.
en := s.emptyNodeByRoot[child.root]
if invalidRoots, err = s.removeNodeAndChildren(ctx, en, invalidRoots); err != nil {
return invalidRoots, err
}
}
invalidRoots = append(invalidRoots, node.root)
if node.root == s.proposerBoostRoot {
s.proposerBoostRoot = [32]byte{}
// Only append the root for the empty nodes.
if pn.full {
delete(s.fullNodeByRoot, pn.node.root)
} else {
invalidRoots = append(invalidRoots, pn.node.root)
if pn.node.root == s.proposerBoostRoot {
s.proposerBoostRoot = [32]byte{}
}
if pn.node.root == s.previousProposerBoostRoot {
s.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
s.previousProposerBoostScore = 0
}
delete(s.emptyNodeByRoot, pn.node.root)
}
if node.root == s.previousProposerBoostRoot {
s.previousProposerBoostRoot = params.BeaconConfig().ZeroHash
s.previousProposerBoostScore = 0
}
delete(s.nodeByRoot, node.root)
delete(s.nodeByPayload, node.payloadHash)
return invalidRoots, nil
}
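To summarize the removal invariant encoded above, a hypothetical helper (invalidateSubtree is not a function in this diff): invalidating a root drops its empty node, its full sibling, and all descendants, while invalidRoots reports each beacon block root exactly once, appended only on the empty-node pass.

func invalidateSubtree(ctx context.Context, s *Store, root [32]byte) ([][32]byte, error) {
    // Sketch: removeNode on the empty node cascades to the full node and the
    // whole subtree via removeNodeAndChildren. Optimistic-status checks from
    // removeNode still apply.
    en := s.emptyNodeByRoot[root]
    if en == nil {
        return nil, ErrNilNode
    }
    return s.removeNode(ctx, en)
}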

View File

@@ -23,93 +23,35 @@ import (
// And every block in the Fork choice is optimistic.
func TestPruneInvalid(t *testing.T) {
tests := []struct {
name string
root [32]byte // the root of the new INVALID block
parentRoot [32]byte // the root of the parent block
payload [32]byte // the last valid hash
parentHash [32]byte // the execution hash of the parent block
lastValidHash [32]byte // the last valid execution hash
wantedNodeNumber int
wantedRoots [][32]byte
wantedErr error
}{
{ // Bogus LVH, root not in forkchoice
[32]byte{'x'},
[32]byte{'i'},
[32]byte{'R'},
13,
[][32]byte{},
nil,
name: "bogus LVH not in forkchoice",
root: [32]byte{'x'}, parentRoot: [32]byte{'i'}, parentHash: [32]byte{'I'}, lastValidHash: [32]byte{'R'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{ // Bogus LVH
name: "bogus LVH",
root: [32]byte{'i'}, parentRoot: [32]byte{'h'}, parentHash: [32]byte{'H'}, lastValidHash: [32]byte{'R'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{
// Bogus LVH
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'R'},
12,
[][32]byte{{'i'}},
nil,
name: "wanted j",
root: [32]byte{'j'}, parentRoot: [32]byte{'b'}, parentHash: [32]byte{'B'}, lastValidHash: [32]byte{'B'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{
[32]byte{'j'},
[32]byte{'b'},
[32]byte{'B'},
12,
[][32]byte{{'j'}},
nil,
},
{
[32]byte{'c'},
[32]byte{'b'},
[32]byte{'B'},
4,
[][32]byte{{'f'}, {'e'}, {'i'}, {'h'}, {'l'},
{'k'}, {'g'}, {'d'}, {'c'}},
nil,
},
{
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'H'},
12,
[][32]byte{{'i'}},
nil,
},
{
[32]byte{'h'},
[32]byte{'g'},
[32]byte{'G'},
11,
[][32]byte{{'i'}, {'h'}},
nil,
},
{
[32]byte{'g'},
[32]byte{'d'},
[32]byte{'D'},
8,
[][32]byte{{'i'}, {'h'}, {'l'}, {'k'}, {'g'}},
nil,
},
{
[32]byte{'i'},
[32]byte{'h'},
[32]byte{'D'},
8,
[][32]byte{{'i'}, {'h'}, {'l'}, {'k'}, {'g'}},
nil,
},
{
[32]byte{'f'},
[32]byte{'e'},
[32]byte{'D'},
11,
[][32]byte{{'f'}, {'e'}},
nil,
},
{
[32]byte{'h'},
[32]byte{'g'},
[32]byte{'C'},
5,
[][32]byte{
name: "wanted 5",
root: [32]byte{'c'}, parentRoot: [32]byte{'b'}, parentHash: [32]byte{'B'}, lastValidHash: [32]byte{'B'},
wantedNodeNumber: 5,
wantedRoots: [][32]byte{
{'f'},
{'e'},
{'i'},
@@ -119,106 +61,118 @@ func TestPruneInvalid(t *testing.T) {
{'g'},
{'d'},
},
nil,
},
{
[32]byte{'g'},
[32]byte{'d'},
[32]byte{'E'},
8,
[][32]byte{{'i'}, {'h'}, {'l'}, {'k'}, {'g'}},
nil,
name: "wanted i",
root: [32]byte{'i'}, parentRoot: [32]byte{'h'}, parentHash: [32]byte{'H'}, lastValidHash: [32]byte{'H'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{
[32]byte{'z'},
[32]byte{'j'},
[32]byte{'B'},
12,
[][32]byte{{'j'}},
nil,
name: "wanted i and h",
root: [32]byte{'h'}, parentRoot: [32]byte{'g'}, parentHash: [32]byte{'G'}, lastValidHash: [32]byte{'G'},
wantedNodeNumber: 12, wantedRoots: [][32]byte{{'i'}},
},
{
[32]byte{'z'},
[32]byte{'j'},
[32]byte{'J'},
13,
[][32]byte{},
nil,
name: "wanted i--g",
root: [32]byte{'g'}, parentRoot: [32]byte{'d'}, parentHash: [32]byte{'D'}, lastValidHash: [32]byte{'D'},
wantedNodeNumber: 9, wantedRoots: [][32]byte{{'i'}, {'h'}, {'l'}, {'k'}},
},
{
[32]byte{'j'},
[32]byte{'a'},
[32]byte{'B'},
0,
[][32]byte{},
errInvalidParentRoot,
name: "wanted 9",
root: [32]byte{'i'}, parentRoot: [32]byte{'h'}, parentHash: [32]byte{'H'}, lastValidHash: [32]byte{'D'},
wantedNodeNumber: 9, wantedRoots: [][32]byte{{'i'}, {'h'}, {'l'}, {'k'}},
},
{
[32]byte{'z'},
[32]byte{'h'},
[32]byte{'D'},
8,
[][32]byte{{'i'}, {'h'}, {'l'}, {'k'}, {'g'}},
nil,
name: "wanted f and e",
root: [32]byte{'f'}, parentRoot: [32]byte{'e'}, parentHash: [32]byte{'E'}, lastValidHash: [32]byte{'D'},
wantedNodeNumber: 12, wantedRoots: [][32]byte{{'f'}},
},
{
[32]byte{'z'},
[32]byte{'h'},
[32]byte{'D'},
8,
[][32]byte{{'i'}, {'h'}, {'l'}, {'k'}, {'g'}},
nil,
name: "wanted 6",
root: [32]byte{'h'}, parentRoot: [32]byte{'g'}, parentHash: [32]byte{'G'}, lastValidHash: [32]byte{'C'},
wantedNodeNumber: 6,
wantedRoots: [][32]byte{
{'f'}, {'e'}, {'i'}, {'h'}, {'l'}, {'k'}, {'g'},
},
},
{
name: "wanted 9 again",
root: [32]byte{'g'}, parentRoot: [32]byte{'d'}, parentHash: [32]byte{'D'}, lastValidHash: [32]byte{'E'},
wantedNodeNumber: 9, wantedRoots: [][32]byte{{'i'}, {'h'}, {'l'}, {'k'}},
},
{
name: "wanted 13",
root: [32]byte{'z'}, parentRoot: [32]byte{'j'}, parentHash: [32]byte{'J'}, lastValidHash: [32]byte{'B'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{
name: "wanted empty",
root: [32]byte{'z'}, parentRoot: [32]byte{'j'}, parentHash: [32]byte{'J'}, lastValidHash: [32]byte{'J'},
wantedNodeNumber: 13, wantedRoots: [][32]byte{},
},
{
name: "errInvalidParentRoot",
root: [32]byte{'j'}, parentRoot: [32]byte{'a'}, parentHash: [32]byte{'A'}, lastValidHash: [32]byte{'B'},
wantedErr: errInvalidParentRoot,
},
{
name: "root z",
root: [32]byte{'z'}, parentRoot: [32]byte{'h'}, parentHash: [32]byte{'H'}, lastValidHash: [32]byte{'D'},
wantedNodeNumber: 9, wantedRoots: [][32]byte{{'i'}, {'h'}, {'l'}, {'k'}},
},
}
for _, tc := range tests {
ctx := t.Context()
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
roots, err := f.store.setOptimisticToInvalid(t.Context(), tc.root, tc.parentRoot, tc.payload)
if tc.wantedErr == nil {
t.Run(tc.name, func(t *testing.T) {
ctx := t.Context()
f := setup(1, 1)
require.NoError(t, f.SetOptimisticToValid(ctx, [32]byte{}))
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.DeepEqual(t, tc.wantedRoots, roots)
require.Equal(t, tc.wantedNodeNumber, f.NodeCount())
} else {
require.ErrorIs(t, tc.wantedErr, err)
}
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'j'}, [32]byte{'b'}, [32]byte{'J'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'e'}, [32]byte{'d'}, [32]byte{'E'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 104, [32]byte{'g'}, [32]byte{'d'}, [32]byte{'G'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'f'}, [32]byte{'e'}, [32]byte{'F'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'h'}, [32]byte{'g'}, [32]byte{'H'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 105, [32]byte{'k'}, [32]byte{'g'}, [32]byte{'K'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'i'}, [32]byte{'h'}, [32]byte{'I'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 106, [32]byte{'l'}, [32]byte{'k'}, [32]byte{'L'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
roots, err := f.store.setOptimisticToInvalid(t.Context(), tc.root, tc.parentRoot, tc.parentHash, tc.lastValidHash)
if tc.wantedErr == nil {
require.NoError(t, err)
require.Equal(t, len(tc.wantedRoots), len(roots))
require.DeepEqual(t, tc.wantedRoots, roots)
require.Equal(t, tc.wantedNodeNumber, f.NodeCount())
} else {
require.ErrorIs(t, tc.wantedErr, err)
}
})
}
}
@@ -240,11 +194,40 @@ func TestSetOptimisticToInvalid_ProposerBoost(t *testing.T) {
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'b'}
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'A'})
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'B'}, [32]byte{'A'})
require.NoError(t, err)
// proposer boost is still applied to c
require.Equal(t, uint64(10), f.store.previousProposerBoostScore)
require.Equal(t, [32]byte{}, f.store.proposerBoostRoot)
require.Equal(t, [32]byte{'b'}, f.store.previousProposerBoostRoot)
}
func TestSetOptimisticToInvalid_ProposerBoost_Older(t *testing.T) {
ctx := t.Context()
f := setup(1, 1)
state, blkRoot, err := prepareForkchoiceState(ctx, 100, [32]byte{'a'}, params.BeaconConfig().ZeroHash, [32]byte{'A'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 101, [32]byte{'b'}, [32]byte{'a'}, [32]byte{'B'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
state, blkRoot, err = prepareForkchoiceState(ctx, 103, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'D'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
f.store.proposerBoostRoot = [32]byte{'d'}
f.store.previousProposerBoostScore = 10
f.store.previousProposerBoostRoot = [32]byte{'c'}
_, err = f.SetOptimisticToInvalid(ctx, [32]byte{'d'}, [32]byte{'c'}, [32]byte{'C'}, [32]byte{'A'})
require.NoError(t, err)
// both the current and previous proposer boosts are cleared
require.Equal(t, uint64(0), f.store.previousProposerBoostScore)
require.DeepEqual(t, [32]byte{}, f.store.proposerBoostRoot)
require.DeepEqual(t, params.BeaconConfig().ZeroHash, f.store.previousProposerBoostRoot)
require.Equal(t, [32]byte{}, f.store.proposerBoostRoot)
require.Equal(t, [32]byte{}, f.store.previousProposerBoostRoot)
}
// This is a regression test (10565)
@@ -272,10 +255,9 @@ func TestSetOptimisticToInvalid_CorrectChildren(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
_, err = f.store.setOptimisticToInvalid(ctx, [32]byte{'d'}, [32]byte{'a'}, [32]byte{'A'})
_, err = f.store.setOptimisticToInvalid(ctx, [32]byte{'d'}, [32]byte{'a'}, [32]byte{'A'}, [32]byte{'A'})
require.NoError(t, err)
require.Equal(t, 2, len(f.store.nodeByRoot[[32]byte{'a'}].children))
require.Equal(t, 2, len(f.store.fullNodeByRoot[[32]byte{'a'}].children))
}
// Pow | Pos
@@ -322,13 +304,13 @@ func TestSetOptimisticToInvalid_ForkAtMerge(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'x'}, [32]byte{'d'}, [32]byte{})
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'x'}, [32]byte{'d'}, [32]byte{'D'}, [32]byte{})
require.NoError(t, err)
require.Equal(t, 4, len(roots))
require.Equal(t, 3, len(roots))
sort.Slice(roots, func(i, j int) bool {
return bytesutil.BytesToUint64BigEndian(roots[i][:]) < bytesutil.BytesToUint64BigEndian(roots[j][:])
})
require.DeepEqual(t, roots, [][32]byte{{'b'}, {'c'}, {'d'}, {'e'}})
require.DeepEqual(t, roots, [][32]byte{{'c'}, {'d'}, {'e'}})
}
// Pow | Pos
@@ -375,13 +357,13 @@ func TestSetOptimisticToInvalid_ForkAtMerge_bis(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, root))
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'x'}, [32]byte{'d'}, [32]byte{})
roots, err := f.SetOptimisticToInvalid(ctx, [32]byte{'x'}, [32]byte{'d'}, [32]byte{'D'}, [32]byte{})
require.NoError(t, err)
require.Equal(t, 4, len(roots))
require.Equal(t, 3, len(roots))
sort.Slice(roots, func(i, j int) bool {
return bytesutil.BytesToUint64BigEndian(roots[i][:]) < bytesutil.BytesToUint64BigEndian(roots[j][:])
})
require.DeepEqual(t, roots, [][32]byte{{'b'}, {'c'}, {'d'}, {'e'}})
require.DeepEqual(t, roots, [][32]byte{{'c'}, {'d'}, {'e'}})
}
func TestSetOptimisticToValid(t *testing.T) {

View File

@@ -11,7 +11,7 @@ func (f *ForkChoice) applyProposerBoostScore() error {
s := f.store
proposerScore := uint64(0)
if s.previousProposerBoostRoot != params.BeaconConfig().ZeroHash {
previousNode, ok := s.nodeByRoot[s.previousProposerBoostRoot]
previousNode, ok := s.emptyNodeByRoot[s.previousProposerBoostRoot]
if !ok || previousNode == nil {
log.WithError(errInvalidProposerBoostRoot).Errorf("invalid prev root %#x", s.previousProposerBoostRoot)
} else {
@@ -20,7 +20,7 @@ func (f *ForkChoice) applyProposerBoostScore() error {
}
if s.proposerBoostRoot != params.BeaconConfig().ZeroHash {
currentNode, ok := s.nodeByRoot[s.proposerBoostRoot]
currentNode, ok := s.emptyNodeByRoot[s.proposerBoostRoot]
if !ok || currentNode == nil {
log.WithError(errInvalidProposerBoostRoot).Errorf("invalid current root %#x", s.proposerBoostRoot)
} else {

View File

@@ -166,14 +166,14 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// (1: 48) -> (2: 38) -> (3: 10)
// \--------------->(4: 18)
//
node1 := f.store.nodeByRoot[indexToHash(1)]
require.Equal(t, node1.weight, uint64(48))
node2 := f.store.nodeByRoot[indexToHash(2)]
require.Equal(t, node2.weight, uint64(38))
node3 := f.store.nodeByRoot[indexToHash(3)]
require.Equal(t, node3.weight, uint64(10))
node4 := f.store.nodeByRoot[indexToHash(4)]
require.Equal(t, node4.weight, uint64(18))
node1 := f.store.emptyNodeByRoot[indexToHash(1)]
require.Equal(t, node1.node.weight, uint64(48))
node2 := f.store.emptyNodeByRoot[indexToHash(2)]
require.Equal(t, node2.node.weight, uint64(38))
node3 := f.store.emptyNodeByRoot[indexToHash(3)]
require.Equal(t, node3.node.weight, uint64(10))
node4 := f.store.emptyNodeByRoot[indexToHash(4)]
require.Equal(t, node4.node.weight, uint64(18))
// Regression: process attestations for C, check that it
// becomes head, we need two attestations to have C.weight = 30 > 24 = D.weight

View File

@@ -34,22 +34,23 @@ const orphanLateBlockProposingEarly = 2
func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
override = false
// We only need to override FCU if our current head is from the current
// We only need to override FCU if our current consensusHead is from the current
// slot. This differs from the spec implementation in that we assume
// that we will call this function in the previous slot to proposing.
head := f.store.headNode
if head == nil {
consensusHead := f.store.headNode
if consensusHead == nil {
return
}
if head.slot != slots.CurrentSlot(f.store.genesisTime) {
if consensusHead.slot != slots.CurrentSlot(f.store.genesisTime) {
return
}
// Do not reorg on epoch boundaries
if (head.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
if (consensusHead.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
return
}
head := f.store.choosePayloadContent(consensusHead)
// Only reorg blocks that arrive late
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
@@ -61,15 +62,15 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
}
// Only reorg if we have been finalizing
finalizedEpoch := f.store.finalizedCheckpoint.Epoch
if slots.ToEpoch(head.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
if slots.ToEpoch(consensusHead.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
return
}
// Only orphan a single block
parent := head.parent
parent := consensusHead.parent
if parent == nil {
return
}
if head.slot > parent.slot+1 {
if consensusHead.slot > parent.node.slot+1 {
return
}
// Do not orphan a block that has higher justification than the parent
@@ -78,12 +79,12 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// }
// Only orphan a block if the head LMD vote is weak
if head.weight*100 > f.store.committeeWeight*params.BeaconConfig().ReorgHeadWeightThreshold {
if consensusHead.weight*100 > f.store.committeeWeight*params.BeaconConfig().ReorgHeadWeightThreshold {
return
}
// Return early if we are checking before 10 seconds into the slot
sss, err := slots.SinceSlotStart(head.slot, f.store.genesisTime, time.Now())
sss, err := slots.SinceSlotStart(consensusHead.slot, f.store.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("could not check current slot")
return true
@@ -92,7 +93,7 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
return true
}
// Only orphan a block if the parent LMD vote is strong
if parent.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
if parent.node.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
return
}
return true
@@ -106,60 +107,61 @@ func (f *ForkChoice) ShouldOverrideFCU() (override bool) {
// This function needs to be called only when proposing a block and all
// attestation processing has already happened.
func (f *ForkChoice) GetProposerHead() [32]byte {
head := f.store.headNode
if head == nil {
consensusHead := f.store.headNode
if consensusHead == nil {
return [32]byte{}
}
// Only reorg blocks from the previous slot.
currentSlot := slots.CurrentSlot(f.store.genesisTime)
if head.slot+1 != currentSlot {
return head.root
if consensusHead.slot+1 != currentSlot {
return consensusHead.root
}
// Do not reorg on epoch boundaries
if (head.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
return head.root
if (consensusHead.slot+1)%params.BeaconConfig().SlotsPerEpoch == 0 {
return consensusHead.root
}
// Only reorg blocks that arrive late
head := f.store.choosePayloadContent(consensusHead)
early, err := head.arrivedEarly(f.store.genesisTime)
if err != nil {
log.WithError(err).Error("could not check if block arrived early")
return head.root
return consensusHead.root
}
if early {
return head.root
return consensusHead.root
}
// Only reorg if we have been finalizing
finalizedEpoch := f.store.finalizedCheckpoint.Epoch
if slots.ToEpoch(head.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
return head.root
if slots.ToEpoch(consensusHead.slot+1) > finalizedEpoch+params.BeaconConfig().ReorgMaxEpochsSinceFinalization {
return consensusHead.root
}
// Only orphan a single block
parent := head.parent
parent := consensusHead.parent
if parent == nil {
return head.root
return consensusHead.root
}
if head.slot > parent.slot+1 {
return head.root
if consensusHead.slot > parent.node.slot+1 {
return consensusHead.root
}
// Only orphan a block if the head LMD vote is weak
if head.weight*100 > f.store.committeeWeight*params.BeaconConfig().ReorgHeadWeightThreshold {
return head.root
if consensusHead.weight*100 > f.store.committeeWeight*params.BeaconConfig().ReorgHeadWeightThreshold {
return consensusHead.root
}
// Only orphan a block if the parent LMD vote is strong
if parent.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
return head.root
if parent.node.weight*100 < f.store.committeeWeight*params.BeaconConfig().ReorgParentWeightThreshold {
return consensusHead.root
}
// Only reorg if we are proposing early
sss, err := slots.SinceSlotStart(currentSlot, f.store.genesisTime, time.Now())
if err != nil {
log.WithError(err).Error("could not check if proposing early")
return head.root
return consensusHead.root
}
if sss >= orphanLateBlockProposingEarly*time.Second {
return head.root
return consensusHead.root
}
return parent.root
return parent.node.root
}
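The consensusHead/head split above is the core of the refactor: consensus data stays on Node while arrival times live on the chosen PayloadNode. A hypothetical helper making the division explicit, using only calls shown in this diff:

func lateHeadInfo(f *ForkChoice) (uint64, bool, error) {
    consensusHead := f.store.headNode // *Node: slot, weight, parent links
    payloadHead := f.store.choosePayloadContent(consensusHead) // *PayloadNode: timing
    late, err := payloadHead.arrivedAfterOrphanCheck(f.store.genesisTime)
    return consensusHead.weight, late, err
}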

View File

@@ -38,7 +38,6 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
require.Equal(t, blk.Root(), headRoot)
t.Run("head is weak", func(t *testing.T) {
require.Equal(t, true, f.ShouldOverrideFCU())
})
t.Run("head is nil", func(t *testing.T) {
saved := f.store.headNode
@@ -60,10 +59,11 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
f.store.headNode.slot = saved
})
t.Run("head is early", func(t *testing.T) {
saved := f.store.headNode.timestamp
f.store.headNode.timestamp = saved.Add(-2 * time.Second)
fn := f.store.fullNodeByRoot[f.store.headNode.root]
saved := fn.timestamp
fn.timestamp = saved.Add(-2 * time.Second)
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.timestamp = saved
fn.timestamp = saved
})
t.Run("chain not finalizing", func(t *testing.T) {
saved := f.store.headNode.slot
@@ -74,10 +74,10 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
})
t.Run("Not single block reorg", func(t *testing.T) {
saved := f.store.headNode.parent.slot
f.store.headNode.parent.slot = 0
saved := f.store.headNode.parent.node.slot
f.store.headNode.parent.node.slot = 0
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent.slot = saved
f.store.headNode.parent.node.slot = saved
})
t.Run("parent is nil", func(t *testing.T) {
saved := f.store.headNode.parent
@@ -86,17 +86,17 @@ func TestForkChoice_ShouldOverrideFCU(t *testing.T) {
f.store.headNode.parent = saved
})
t.Run("parent is weak early call", func(t *testing.T) {
saved := f.store.headNode.parent.weight
f.store.headNode.parent.weight = 0
saved := f.store.headNode.parent.node.weight
f.store.headNode.parent.node.weight = 0
require.Equal(t, true, f.ShouldOverrideFCU())
f.store.headNode.parent.weight = saved
f.store.headNode.parent.node.weight = saved
})
t.Run("parent is weak late call", func(t *testing.T) {
saved := f.store.headNode.parent.weight
saved := f.store.headNode.parent.node.weight
driftGenesisTime(f, 2, 11*time.Second)
f.store.headNode.parent.weight = 0
f.store.headNode.parent.node.weight = 0
require.Equal(t, false, f.ShouldOverrideFCU())
f.store.headNode.parent.weight = saved
f.store.headNode.parent.node.weight = saved
driftGenesisTime(f, 2, orphanLateBlockFirstThreshold+time.Second)
})
t.Run("Head is strong", func(t *testing.T) {
@@ -135,7 +135,8 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
require.NoError(t, err)
require.Equal(t, blk.Root(), headRoot)
orphanLateBlockFirstThreshold := params.BeaconConfig().SlotComponentDuration(params.BeaconConfig().AttestationDueBPS)
f.store.headNode.timestamp.Add(-1 * (params.BeaconConfig().SlotDuration() - orphanLateBlockFirstThreshold))
fn := f.store.fullNodeByRoot[f.store.headNode.root]
fn.timestamp = fn.timestamp.Add(-1 * (params.BeaconConfig().SlotDuration() - orphanLateBlockFirstThreshold))
t.Run("head is weak", func(t *testing.T) {
require.Equal(t, parentRoot, f.GetProposerHead())
})
@@ -159,11 +160,12 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
f.store.headNode.slot = saved
})
t.Run("head is early", func(t *testing.T) {
saved := f.store.headNode.timestamp
fn := f.store.fullNodeByRoot[f.store.headNode.root]
saved := fn.timestamp
headTimeStamp := f.store.genesisTime.Add(time.Duration(uint64(f.store.headNode.slot)*params.BeaconConfig().SecondsPerSlot+1) * time.Second)
f.store.headNode.timestamp = headTimeStamp
fn.timestamp = headTimeStamp
require.Equal(t, childRoot, f.GetProposerHead())
f.store.headNode.timestamp = saved
fn.timestamp = saved
})
t.Run("chain not finalizing", func(t *testing.T) {
saved := f.store.headNode.slot
@@ -174,10 +176,10 @@ func TestForkChoice_GetProposerHead(t *testing.T) {
driftGenesisTime(f, 3, 1*time.Second)
})
t.Run("Not single block reorg", func(t *testing.T) {
saved := f.store.headNode.parent.slot
f.store.headNode.parent.slot = 0
saved := f.store.headNode.parent.node.slot
f.store.headNode.parent.node.slot = 0
require.Equal(t, childRoot, f.GetProposerHead())
f.store.headNode.parent.slot = saved
f.store.headNode.parent.node.slot = saved
})
t.Run("parent is nil", func(t *testing.T) {
saved := f.store.headNode.parent

View File

@@ -13,6 +13,7 @@ import (
"github.com/OffchainLabs/prysm/v7/runtime/version"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// head starts from justified root and then follows the best descendant links
@@ -26,13 +27,16 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
}
// JustifiedRoot has to be known
justifiedNode, ok := s.nodeByRoot[s.justifiedCheckpoint.Root]
if !ok || justifiedNode == nil {
var jn *Node
ej := s.emptyNodeByRoot[s.justifiedCheckpoint.Root]
if ej != nil {
jn = ej.node
} else {
// If the justifiedCheckpoint is from genesis, then the root is
// zeroHash. In this case it should be the root of forkchoice
// tree.
if s.justifiedCheckpoint.Epoch == params.BeaconConfig().GenesisEpoch {
justifiedNode = s.treeRootNode
jn = s.treeRootNode
} else {
return [32]byte{}, errors.WithMessage(errUnknownJustifiedRoot, fmt.Sprintf("%#x", s.justifiedCheckpoint.Root))
}
@@ -40,9 +44,9 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
// If the justified node doesn't have a best descendant,
// the best node is itself.
bestDescendant := justifiedNode.bestDescendant
bestDescendant := jn.bestDescendant
if bestDescendant == nil {
bestDescendant = justifiedNode
bestDescendant = jn
}
currentEpoch := slots.EpochsSinceGenesis(s.genesisTime)
if !bestDescendant.viableForHead(s.justifiedCheckpoint.Epoch, currentEpoch) {
@@ -66,29 +70,42 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
// It then updates the new node's parent with the best child and descendant node.
func (s *Store) insert(ctx context.Context,
roblock consensus_blocks.ROBlock,
justifiedEpoch, finalizedEpoch primitives.Epoch) (*Node, error) {
justifiedEpoch, finalizedEpoch primitives.Epoch,
) (*PayloadNode, error) {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.insert")
defer span.End()
root := roblock.Root()
block := roblock.Block()
slot := block.Slot()
parentRoot := block.ParentRoot()
var payloadHash [32]byte
if block.Version() >= version.Bellatrix {
execution, err := block.Body().Execution()
if err != nil {
return nil, err
}
copy(payloadHash[:], execution.BlockHash())
}
// Return if the block has been inserted into Store before.
if n, ok := s.nodeByRoot[root]; ok {
if n, ok := s.emptyNodeByRoot[root]; ok {
return n, nil
}
parent := s.nodeByRoot[parentRoot]
block := roblock.Block()
slot := block.Slot()
var parent *PayloadNode
blockHash := &[32]byte{}
if block.Version() >= version.Gloas {
if err := s.resolveParentPayloadStatus(block, &parent, blockHash); err != nil {
return nil, err
}
} else {
if block.Version() >= version.Bellatrix {
execution, err := block.Body().Execution()
if err != nil {
return nil, err
}
copy(blockHash[:], execution.BlockHash())
}
parentRoot := block.ParentRoot()
en := s.emptyNodeByRoot[parentRoot]
parent = s.fullNodeByRoot[parentRoot]
if parent == nil && en != nil {
// Pre-Gloas, only full parents are allowed.
return nil, errInvalidParentRoot
}
}
n := &Node{
slot: slot,
root: root,
@@ -97,32 +114,52 @@ func (s *Store) insert(ctx context.Context,
unrealizedJustifiedEpoch: justifiedEpoch,
finalizedEpoch: finalizedEpoch,
unrealizedFinalizedEpoch: finalizedEpoch,
optimistic: true,
payloadHash: payloadHash,
timestamp: time.Now(),
blockHash: *blockHash,
}
// Set the node's target checkpoint
if slot%params.BeaconConfig().SlotsPerEpoch == 0 {
n.target = n
} else if parent != nil {
if slots.ToEpoch(slot) == slots.ToEpoch(parent.slot) {
n.target = parent.target
if slots.ToEpoch(slot) == slots.ToEpoch(parent.node.slot) {
n.target = parent.node.target
} else {
n.target = parent
n.target = parent.node
}
}
var ret *PayloadNode
optimistic := true
if parent != nil {
optimistic = n.parent.optimistic
}
// Make the empty node. Its optimistic status equals its parent's status.
pn := &PayloadNode{
node: n,
optimistic: optimistic,
timestamp: time.Now(),
children: make([]*Node, 0),
}
s.emptyNodeByRoot[root] = pn
ret = pn
if block.Version() < version.Gloas {
// Also make the full node; it stays optimistic until the engine validates the execution payload.
fn := &PayloadNode{
node: n,
optimistic: true,
timestamp: time.Now(),
full: true,
}
ret = fn
s.fullNodeByRoot[root] = fn
}
s.nodeByPayload[payloadHash] = n
s.nodeByRoot[root] = n
if parent == nil {
if s.treeRootNode == nil {
s.treeRootNode = n
s.headNode = n
s.highestReceivedNode = n
} else {
delete(s.nodeByRoot, root)
delete(s.nodeByPayload, payloadHash)
delete(s.emptyNodeByRoot, root)
delete(s.fullNodeByRoot, root)
return nil, errInvalidParentRoot
}
} else {
@@ -130,7 +167,7 @@ func (s *Store) insert(ctx context.Context,
// Apply proposer boost
now := time.Now()
if now.Before(s.genesisTime) {
return n, nil
return ret, nil
}
currentSlot := slots.CurrentSlot(s.genesisTime)
sss, err := slots.SinceSlotStart(currentSlot, s.genesisTime, now)
@@ -146,17 +183,16 @@ func (s *Store) insert(ctx context.Context,
// Update best descendants
jEpoch := s.justifiedCheckpoint.Epoch
fEpoch := s.finalizedCheckpoint.Epoch
if err := s.treeRootNode.updateBestDescendant(ctx, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
_, remErr := s.removeNode(ctx, n)
if remErr != nil {
log.WithError(remErr).Error("could not remove node")
}
return nil, errors.Wrap(err, "could not update best descendants")
if err := s.updateBestDescendantConsensusNode(ctx, s.treeRootNode, jEpoch, fEpoch, slots.ToEpoch(currentSlot)); err != nil {
log.WithError(err).WithFields(logrus.Fields{
"slot": slot,
"root": root,
}).Error("Could not update best descendant")
}
}
// Update metrics.
processedBlockCount.Inc()
nodeCount.Set(float64(len(s.nodeByRoot)))
nodeCount.Set(float64(len(s.emptyNodeByRoot)))
// Only update the received block slot if it is within an epoch of the current time.
if slot+params.BeaconConfig().SlotsPerEpoch > slots.CurrentSlot(s.genesisTime) {
@@ -167,10 +203,10 @@ func (s *Store) insert(ctx context.Context,
s.highestReceivedNode = n
}
return n, nil
return ret, nil
}
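One consequence of insert worth spelling out: for a pre-Gloas block, both maps receive a PayloadNode wrapping the same *Node, so consensus fields (slot, weight, epochs) are shared while optimistic status and timestamps can diverge per payload status. A hypothetical invariant check, not part of the diff:

func sharesConsensusNode(s *Store, root [32]byte) bool {
    // Both payload variants of a pre-Gloas block point at one consensus Node.
    en, fn := s.emptyNodeByRoot[root], s.fullNodeByRoot[root]
    return en != nil && fn != nil && en.node == fn.node
}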
// pruneFinalizedNodeByRootMap prunes the `nodeByRoot` map
// pruneFinalizedNodeByRootMap prunes the `nodeByRoot` maps
// starting from `node` down to the finalized Node or to a leaf of the Fork
// choice store.
func (s *Store) pruneFinalizedNodeByRootMap(ctx context.Context, node, finalizedNode *Node) error {
@@ -183,45 +219,51 @@ func (s *Store) pruneFinalizedNodeByRootMap(ctx context.Context, node, finalized
}
return nil
}
for _, child := range node.children {
for _, child := range s.allConsensusChildren(node) {
if err := s.pruneFinalizedNodeByRootMap(ctx, child, finalizedNode); err != nil {
return err
}
}
node.children = nil
delete(s.nodeByRoot, node.root)
delete(s.nodeByPayload, node.payloadHash)
en := s.emptyNodeByRoot[node.root]
en.children = nil
delete(s.emptyNodeByRoot, node.root)
fn := s.fullNodeByRoot[node.root]
if fn != nil {
fn.children = nil
delete(s.fullNodeByRoot, node.root)
}
return nil
}
// prune prunes the fork choice store. It removes all nodes that compete with the finalized root.
// This function does not prune invalid optimistically synced nodes; it deals only with pruning upon finalization
// TODO: GLOAS, to ensure that chains up to a full node are found, we may want to consider pruning only up to the latest full block that was finalized
func (s *Store) prune(ctx context.Context) error {
ctx, span := trace.StartSpan(ctx, "doublyLinkedForkchoice.Prune")
defer span.End()
finalizedRoot := s.finalizedCheckpoint.Root
finalizedEpoch := s.finalizedCheckpoint.Epoch
finalizedNode, ok := s.nodeByRoot[finalizedRoot]
if !ok || finalizedNode == nil {
fen, ok := s.emptyNodeByRoot[finalizedRoot]
if !ok || fen == nil {
return errors.WithMessage(errUnknownFinalizedRoot, fmt.Sprintf("%#x", finalizedRoot))
}
fn := fen.node
// return early if we haven't changed the finalized checkpoint
if finalizedNode.parent == nil {
if fn.parent == nil {
return nil
}
// Save the new finalized dependent root because it will be pruned
s.finalizedDependentRoot = finalizedNode.parent.root
s.finalizedDependentRoot = fn.parent.node.root
// Prune nodeByRoot starting from root
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, finalizedNode); err != nil {
if err := s.pruneFinalizedNodeByRootMap(ctx, s.treeRootNode, fn); err != nil {
return err
}
finalizedNode.parent = nil
s.treeRootNode = finalizedNode
fn.parent = nil
s.treeRootNode = fn
prunedCount.Inc()
// Prune all children of the finalized checkpoint block that are incompatible with it
@@ -229,13 +271,13 @@ func (s *Store) prune(ctx context.Context) error {
if err != nil {
return errors.Wrap(err, "could not compute epoch start")
}
if finalizedNode.slot == checkpointMaxSlot {
if fn.slot == checkpointMaxSlot {
return nil
}
for _, child := range finalizedNode.children {
for _, child := range fen.children {
if child != nil && child.slot <= checkpointMaxSlot {
if err := s.pruneFinalizedNodeByRootMap(ctx, child, finalizedNode); err != nil {
if err := s.pruneFinalizedNodeByRootMap(ctx, child, fn); err != nil {
return errors.Wrap(err, "could not prune incompatible finalized child")
}
}
@@ -249,10 +291,10 @@ func (s *Store) tips() ([][32]byte, []primitives.Slot) {
var roots [][32]byte
var slots []primitives.Slot
for root, node := range s.nodeByRoot {
if len(node.children) == 0 {
for root, n := range s.emptyNodeByRoot {
if len(s.allConsensusChildren(n.node)) == 0 {
roots = append(roots, root)
slots = append(slots, node.slot)
slots = append(slots, n.node.slot)
}
}
return roots, slots
@@ -273,21 +315,6 @@ func (f *ForkChoice) HighestReceivedBlockSlot() primitives.Slot {
return f.store.highestReceivedNode.slot
}
// HighestReceivedBlockDelay returns the number of slots that the highest
// received block was late when receiving it. For example, a block was late by 12 slots,
// then this method is expected to return 12.
func (f *ForkChoice) HighestReceivedBlockDelay() primitives.Slot {
n := f.store.highestReceivedNode
if n == nil {
return 0
}
sss, err := slots.SinceSlotStart(n.slot, f.store.genesisTime, n.timestamp)
if err != nil {
return 0
}
return primitives.Slot(uint64(sss/time.Second) / params.BeaconConfig().SecondsPerSlot)
}
// ReceivedBlocksLastEpoch returns the number of blocks received in the last epoch
func (f *ForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
count := uint64(0)

View File

@@ -1,7 +1,6 @@
package doublylinkedtree
import (
"context"
"testing"
"time"
@@ -41,18 +40,18 @@ func TestStore_NodeByRoot(t *testing.T) {
state, blkRoot, err = prepareForkchoiceState(t.Context(), 2, indexToHash(2), indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
node0 := f.store.treeRootNode
node1 := node0.children[0]
node2 := node1.children[0]
node0 := f.store.emptyNodeByRoot[params.BeaconConfig().ZeroHash]
node1 := f.store.emptyNodeByRoot[indexToHash(1)]
node2 := f.store.emptyNodeByRoot[indexToHash(2)]
expectedRoots := map[[32]byte]*Node{
expectedRoots := map[[32]byte]*PayloadNode{
params.BeaconConfig().ZeroHash: node0,
indexToHash(1): node1,
indexToHash(2): node2,
}
require.Equal(t, 3, f.NodeCount())
for root, node := range f.store.nodeByRoot {
for root, node := range f.store.emptyNodeByRoot {
v, ok := expectedRoots[root]
require.Equal(t, ok, true)
require.Equal(t, v, node)
@@ -111,38 +110,28 @@ func TestStore_Head_BestDescendant(t *testing.T) {
require.Equal(t, h, indexToHash(4))
}
func TestStore_UpdateBestDescendant_ContextCancelled(t *testing.T) {
ctx, cancel := context.WithCancel(t.Context())
f := setup(0, 0)
state, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
cancel()
state, blkRoot, err = prepareForkchoiceState(ctx, 2, indexToHash(2), indexToHash(1), params.BeaconConfig().ZeroHash, 0, 0)
require.NoError(t, err)
err = f.InsertNode(ctx, state, blkRoot)
require.ErrorContains(t, "context canceled", err)
}
func TestStore_Insert(t *testing.T) {
// The new node does not have a parent.
treeRootNode := &Node{slot: 0, root: indexToHash(0)}
nodeByRoot := map[[32]byte]*Node{indexToHash(0): treeRootNode}
nodeByPayload := map[[32]byte]*Node{indexToHash(0): treeRootNode}
emptyRootPN := &PayloadNode{node: treeRootNode}
fullRootPN := &PayloadNode{node: treeRootNode, full: true, optimistic: true}
emptyNodeByRoot := map[[32]byte]*PayloadNode{indexToHash(0): emptyRootPN}
fullNodeByRoot := map[[32]byte]*PayloadNode{indexToHash(0): fullRootPN}
jc := &forkchoicetypes.Checkpoint{Epoch: 0}
fc := &forkchoicetypes.Checkpoint{Epoch: 0}
s := &Store{nodeByRoot: nodeByRoot, treeRootNode: treeRootNode, nodeByPayload: nodeByPayload, justifiedCheckpoint: jc, finalizedCheckpoint: fc, highestReceivedNode: &Node{}}
s := &Store{emptyNodeByRoot: emptyNodeByRoot, fullNodeByRoot: fullNodeByRoot, treeRootNode: treeRootNode, justifiedCheckpoint: jc, finalizedCheckpoint: fc, highestReceivedNode: &Node{}}
payloadHash := [32]byte{'a'}
ctx := t.Context()
_, blk, err := prepareForkchoiceState(ctx, 100, indexToHash(100), indexToHash(0), payloadHash, 1, 1)
require.NoError(t, err)
_, err = s.insert(ctx, blk, 1, 1)
require.NoError(t, err)
assert.Equal(t, 2, len(s.nodeByRoot), "Did not insert block")
assert.Equal(t, (*Node)(nil), treeRootNode.parent, "Incorrect parent")
assert.Equal(t, 1, len(treeRootNode.children), "Incorrect children number")
assert.Equal(t, payloadHash, treeRootNode.children[0].payloadHash, "Incorrect payload hash")
child := treeRootNode.children[0]
assert.Equal(t, 2, len(s.emptyNodeByRoot), "Did not insert block")
assert.Equal(t, (*PayloadNode)(nil), treeRootNode.parent, "Incorrect parent")
children := s.allConsensusChildren(treeRootNode)
assert.Equal(t, 1, len(children), "Incorrect children number")
assert.Equal(t, payloadHash, children[0].blockHash, "Incorrect payload hash")
child := children[0]
assert.Equal(t, primitives.Epoch(1), child.justifiedEpoch, "Incorrect justification")
assert.Equal(t, primitives.Epoch(1), child.finalizedEpoch, "Incorrect finalization")
assert.Equal(t, indexToHash(100), child.root, "Incorrect root")
@@ -167,7 +156,7 @@ func TestStore_Prune_MoreThanThreshold(t *testing.T) {
// Finalized root is at index 99 so everything before 99 should be pruned.
s.finalizedCheckpoint.Root = indexToHash(99)
require.NoError(t, s.prune(t.Context()))
assert.Equal(t, 1, len(s.nodeByRoot), "Incorrect nodes count")
assert.Equal(t, 1, len(s.emptyNodeByRoot), "Incorrect nodes count")
}
func TestStore_Prune_MoreThanOnce(t *testing.T) {
@@ -189,12 +178,12 @@ func TestStore_Prune_MoreThanOnce(t *testing.T) {
// Finalized root is at index 11 so everything before 11 should be pruned.
s.finalizedCheckpoint.Root = indexToHash(10)
require.NoError(t, s.prune(t.Context()))
assert.Equal(t, 90, len(s.nodeByRoot), "Incorrect nodes count")
assert.Equal(t, 90, len(s.emptyNodeByRoot), "Incorrect nodes count")
// One more time.
s.finalizedCheckpoint.Root = indexToHash(20)
require.NoError(t, s.prune(t.Context()))
assert.Equal(t, 80, len(s.nodeByRoot), "Incorrect nodes count")
assert.Equal(t, 80, len(s.emptyNodeByRoot), "Incorrect nodes count")
}
func TestStore_Prune_ReturnEarly(t *testing.T) {
@@ -237,8 +226,7 @@ func TestStore_Prune_NoDanglingBranch(t *testing.T) {
s := f.store
s.finalizedCheckpoint.Root = indexToHash(1)
require.NoError(t, s.prune(t.Context()))
require.Equal(t, len(s.nodeByRoot), 1)
require.Equal(t, len(s.nodeByPayload), 1)
require.Equal(t, len(s.emptyNodeByRoot), 1)
}
// This test starts with the following branching diagram
@@ -318,9 +306,7 @@ func TestStore_PruneMapsNodes(t *testing.T) {
s := f.store
s.finalizedCheckpoint.Root = indexToHash(1)
require.NoError(t, s.prune(t.Context()))
require.Equal(t, len(s.nodeByRoot), 1)
require.Equal(t, len(s.nodeByPayload), 1)
require.Equal(t, len(s.emptyNodeByRoot), 1)
}
func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
@@ -339,7 +325,6 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64
// Received block last epoch is 1
@@ -352,7 +337,6 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(1), count)
require.Equal(t, primitives.Slot(64), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(0), f.HighestReceivedBlockDelay())
// 64 65
// Received block last epoch is 2
@@ -365,7 +349,6 @@ func TestForkChoice_ReceivedBlocksLastEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, uint64(2), count)
require.Equal(t, primitives.Slot(65), f.HighestReceivedBlockSlot())
require.Equal(t, primitives.Slot(1), f.HighestReceivedBlockDelay())
// 64 65 66
// Received block last epoch is 3
@@ -717,17 +700,3 @@ func TestStore_CleanupInserting(t *testing.T) {
require.NotNil(t, f.InsertNode(ctx, st, blk))
require.Equal(t, false, f.HasNode(blk.Root()))
}
func TestStore_HighestReceivedBlockDelay(t *testing.T) {
f := ForkChoice{
store: &Store{
genesisTime: time.Unix(0, 0),
highestReceivedNode: &Node{
slot: 10,
timestamp: time.Unix(int64(((10 + 12) * params.BeaconConfig().SecondsPerSlot)), 0), // 12 slots late
},
},
}
require.Equal(t, primitives.Slot(12), f.HighestReceivedBlockDelay())
}

View File

@@ -21,24 +21,26 @@ type ForkChoice struct {
balancesByRoot forkchoice.BalancesByRooter // handler to obtain balances for the state with a given root
}
var _ forkchoice.ForkChoicer = (*ForkChoice)(nil)
// Store defines the fork choice store which includes block nodes and the last view of checkpoint information.
type Store struct {
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified epoch in store.
unrealizedJustifiedCheckpoint *forkchoicetypes.Checkpoint // best unrealized justified checkpoint in store.
unrealizedFinalizedCheckpoint *forkchoicetypes.Checkpoint // best unrealized finalized checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized epoch in store.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
nodeByRoot map[[fieldparams.RootLength]byte]*Node // nodes indexed by roots.
nodeByPayload map[[fieldparams.RootLength]byte]*Node // nodes indexed by payload Hash
slashedIndices map[primitives.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
justifiedCheckpoint *forkchoicetypes.Checkpoint // latest justified epoch in store.
unrealizedJustifiedCheckpoint *forkchoicetypes.Checkpoint // best unrealized justified checkpoint in store.
unrealizedFinalizedCheckpoint *forkchoicetypes.Checkpoint // best unrealized finalized checkpoint in store.
prevJustifiedCheckpoint *forkchoicetypes.Checkpoint // previous justified checkpoint in store.
finalizedCheckpoint *forkchoicetypes.Checkpoint // latest finalized epoch in store.
proposerBoostRoot [fieldparams.RootLength]byte // latest block root that was boosted after being received in a timely manner.
previousProposerBoostRoot [fieldparams.RootLength]byte // previous block root that was boosted after being received in a timely manner.
previousProposerBoostScore uint64 // previous proposer boosted root score.
finalizedDependentRoot [fieldparams.RootLength]byte // dependent root at finalized checkpoint.
committeeWeight uint64 // tracks the total active validator balance divided by the number of slots per Epoch.
treeRootNode *Node // the root node of the store tree.
headNode *Node // last head Node
emptyNodeByRoot map[[fieldparams.RootLength]byte]*PayloadNode // empty nodes (the payload was absent) indexed by beacon block root.
fullNodeByRoot map[[fieldparams.RootLength]byte]*PayloadNode // full nodes (the payload was present) indexed by beacon block root.
slashedIndices map[primitives.ValidatorIndex]bool // the list of equivocating validator indices
originRoot [fieldparams.RootLength]byte // The genesis block root
genesisTime time.Time
highestReceivedNode *Node // The highest slot node.
receivedBlocksLastEpoch [fieldparams.SlotsPerEpoch]primitives.Slot // Using `highestReceivedSlot`. The slot of blocks received in the last epoch.
@@ -50,19 +52,28 @@ type Store struct {
type Node struct {
slot primitives.Slot // slot of the block converted to the node.
root [fieldparams.RootLength]byte // root of the block converted to the node.
payloadHash [fieldparams.RootLength]byte // payloadHash of the block converted to the node.
parent *Node // parent index of this node.
blockHash [fieldparams.RootLength]byte // blockHash of the block converted to the node.
parent *PayloadNode // parent payload node of this node.
target *Node // target checkpoint node for this node.
children []*Node // the list of direct children of this Node
bestDescendant *Node // bestDescendant node of this node.
justifiedEpoch primitives.Epoch // justifiedEpoch of this node.
unrealizedJustifiedEpoch primitives.Epoch // the epoch that would be justified if the block would be advanced to the next epoch.
finalizedEpoch primitives.Epoch // finalizedEpoch of this node.
unrealizedFinalizedEpoch primitives.Epoch // the epoch that would be finalized if the block would be advanced to the next epoch.
balance uint64 // the balance that voted for this node directly
weight uint64 // weight of this node: the total balance including children
bestDescendant *Node // bestDescendant node of this node.
optimistic bool // whether the block has been fully validated or not
timestamp time.Time // The timestamp when the node was inserted.
}
// PayloadNode defines a full Forkchoice node after the Gloas fork, with the payload status either empty or full.
type PayloadNode struct {
optimistic bool // whether the block has been fully validated or not
full bool // whether this node represents a payload present or not
weight uint64 // weight of this node: the total balance including children
balance uint64 // the balance that voted for this node directly
bestDescendant *Node // bestDescendant node of this payload node.
node *Node // the consensus part of this full forkchoice node
timestamp time.Time // The timestamp when the node was inserted.
children []*Node // the list of direct children of this Node
}
// Vote defines an individual validator's vote.

View File

@@ -15,33 +15,34 @@ import (
)
func (s *Store) setUnrealizedJustifiedEpoch(root [32]byte, epoch primitives.Epoch) error {
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
en, ok := s.emptyNodeByRoot[root]
if !ok || en == nil {
return errors.Wrap(ErrNilNode, "could not set unrealized justified epoch")
}
if epoch < node.unrealizedJustifiedEpoch {
if epoch < en.node.unrealizedJustifiedEpoch {
return errInvalidUnrealizedJustifiedEpoch
}
node.unrealizedJustifiedEpoch = epoch
en.node.unrealizedJustifiedEpoch = epoch
return nil
}
func (s *Store) setUnrealizedFinalizedEpoch(root [32]byte, epoch primitives.Epoch) error {
node, ok := s.nodeByRoot[root]
if !ok || node == nil {
en, ok := s.emptyNodeByRoot[root]
if !ok || en == nil {
return errors.Wrap(ErrNilNode, "could not set unrealized finalized epoch")
}
if epoch < node.unrealizedFinalizedEpoch {
if epoch < en.node.unrealizedFinalizedEpoch {
return errInvalidUnrealizedFinalizedEpoch
}
node.unrealizedFinalizedEpoch = epoch
en.node.unrealizedFinalizedEpoch = epoch
return nil
}
// updateUnrealizedCheckpoints "realizes" the unrealized justified and finalized
// epochs stored within nodes. It should be called at the beginning of each epoch.
func (f *ForkChoice) updateUnrealizedCheckpoints(ctx context.Context) error {
for _, node := range f.store.nodeByRoot {
for _, en := range f.store.emptyNodeByRoot {
node := en.node
node.justifiedEpoch = node.unrealizedJustifiedEpoch
node.finalizedEpoch = node.unrealizedFinalizedEpoch
if node.justifiedEpoch > f.store.justifiedCheckpoint.Epoch {
@@ -62,16 +63,17 @@ func (s *Store) pullTips(state state.BeaconState, node *Node, jc, fc *ethpb.Chec
if node.parent == nil { // Nothing to do if the parent is nil.
return jc, fc
}
pn := node.parent.node
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.genesisTime))
stateSlot := state.Slot()
stateEpoch := slots.ToEpoch(stateSlot)
currJustified := node.parent.unrealizedJustifiedEpoch == currentEpoch
prevJustified := node.parent.unrealizedJustifiedEpoch+1 == currentEpoch
currJustified := pn.unrealizedJustifiedEpoch == currentEpoch
prevJustified := pn.unrealizedJustifiedEpoch+1 == currentEpoch
tooEarlyForCurr := slots.SinceEpochStarts(stateSlot)*3 < params.BeaconConfig().SlotsPerEpoch*2
// Exit early if it's justified or too early to be justified.
if currJustified || (stateEpoch == currentEpoch && prevJustified && tooEarlyForCurr) {
node.unrealizedJustifiedEpoch = node.parent.unrealizedJustifiedEpoch
node.unrealizedFinalizedEpoch = node.parent.unrealizedFinalizedEpoch
node.unrealizedJustifiedEpoch = pn.unrealizedJustifiedEpoch
node.unrealizedFinalizedEpoch = pn.unrealizedFinalizedEpoch
return jc, fc
}

View File

@@ -22,12 +22,12 @@ func TestStore_SetUnrealizedEpochs(t *testing.T) {
state, blkRoot, err = prepareForkchoiceState(ctx, 102, [32]byte{'c'}, [32]byte{'b'}, [32]byte{'C'}, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.Equal(t, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'b'}].unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'b'}].unrealizedFinalizedEpoch)
require.Equal(t, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'b'}].node.unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'b'}].node.unrealizedFinalizedEpoch)
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 2))
require.NoError(t, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 2))
require.Equal(t, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'b'}].unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'b'}].unrealizedFinalizedEpoch)
require.Equal(t, primitives.Epoch(2), f.store.emptyNodeByRoot[[32]byte{'b'}].node.unrealizedJustifiedEpoch)
require.Equal(t, primitives.Epoch(2), f.store.emptyNodeByRoot[[32]byte{'b'}].node.unrealizedFinalizedEpoch)
require.ErrorIs(t, errInvalidUnrealizedJustifiedEpoch, f.store.setUnrealizedJustifiedEpoch([32]byte{'b'}, 0))
require.ErrorIs(t, errInvalidUnrealizedFinalizedEpoch, f.store.setUnrealizedFinalizedEpoch([32]byte{'b'}, 0))
@@ -78,9 +78,9 @@ func TestStore_LongFork(t *testing.T) {
// Add an attestation to c, it is head
f.ProcessAttestation(ctx, []uint64{0}, [32]byte{'c'}, 1)
f.justifiedBalances = []uint64{100}
c := f.store.nodeByRoot[[32]byte{'c'}]
require.Equal(t, primitives.Epoch(2), slots.ToEpoch(c.slot))
driftGenesisTime(f, c.slot, 0)
c := f.store.emptyNodeByRoot[[32]byte{'c'}]
require.Equal(t, primitives.Epoch(2), slots.ToEpoch(c.node.slot))
driftGenesisTime(f, c.node.slot, 0)
headRoot, err := f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
@@ -91,15 +91,15 @@ func TestStore_LongFork(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
require.NoError(t, f.UpdateJustifiedCheckpoint(ctx, &forkchoicetypes.Checkpoint{Epoch: 2, Root: ha}))
d := f.store.nodeByRoot[[32]byte{'d'}]
require.Equal(t, primitives.Epoch(3), slots.ToEpoch(d.slot))
driftGenesisTime(f, d.slot, 0)
require.Equal(t, true, d.viableForHead(f.store.justifiedCheckpoint.Epoch, slots.ToEpoch(d.slot)))
d := f.store.emptyNodeByRoot[[32]byte{'d'}]
require.Equal(t, primitives.Epoch(3), slots.ToEpoch(d.node.slot))
driftGenesisTime(f, d.node.slot, 0)
require.Equal(t, true, d.node.viableForHead(f.store.justifiedCheckpoint.Epoch, slots.ToEpoch(d.node.slot)))
headRoot, err = f.Head(ctx)
require.NoError(t, err)
require.Equal(t, [32]byte{'c'}, headRoot)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'c'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'c'}].weight)
}
// Epoch 1 Epoch 2 Epoch 3
@@ -243,8 +243,8 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{'d'}, headRoot)
require.Equal(t, primitives.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'h'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].weight)
// Set current epoch to 3, and H's unrealized checkpoint. Check it's head
driftGenesisTime(f, 99, 0)
require.NoError(t, f.store.setUnrealizedJustifiedEpoch([32]byte{'h'}, 2))
@@ -252,8 +252,8 @@ func TestStore_ForkNextEpoch(t *testing.T) {
require.NoError(t, err)
require.Equal(t, [32]byte{'h'}, headRoot)
require.Equal(t, primitives.Epoch(2), f.JustifiedCheckpoint().Epoch)
require.Equal(t, uint64(0), f.store.nodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.nodeByRoot[[32]byte{'h'}].weight)
require.Equal(t, uint64(0), f.store.emptyNodeByRoot[[32]byte{'d'}].weight)
require.Equal(t, uint64(100), f.store.emptyNodeByRoot[[32]byte{'h'}].weight)
}
func TestStore_PullTips_Heuristics(t *testing.T) {
@@ -263,14 +263,14 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
st, root, err := prepareForkchoiceState(ctx, 65, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = primitives.Epoch(2)
f.store.emptyNodeByRoot[[32]byte{'p'}].node.unrealizedJustifiedEpoch = primitives.Epoch(2)
driftGenesisTime(f, 66, 0)
st, root, err = prepareForkchoiceState(ctx, 66, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedFinalizedEpoch)
require.Equal(tt, primitives.Epoch(2), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and too early for current", func(tt *testing.T) {
@@ -278,21 +278,21 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = primitives.Epoch(2)
f.store.emptyNodeByRoot[[32]byte{'p'}].node.unrealizedJustifiedEpoch = primitives.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 96, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
require.Equal(tt, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedFinalizedEpoch)
require.Equal(tt, primitives.Epoch(2), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedFinalizedEpoch)
})
t.Run("Previous Epoch is justified and not too early for current", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 95, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = primitives.Epoch(2)
f.store.emptyNodeByRoot[[32]byte{'p'}].node.unrealizedJustifiedEpoch = primitives.Epoch(2)
driftGenesisTime(f, 127, 0)
st, root, err = prepareForkchoiceState(ctx, 127, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
@@ -302,14 +302,14 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
// This test checks that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedJustifiedEpoch)
})
t.Run("Block from previous Epoch", func(tt *testing.T) {
f := setup(1, 1)
st, root, err := prepareForkchoiceState(ctx, 94, [32]byte{'p'}, [32]byte{}, [32]byte{}, 1, 1)
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
f.store.nodeByRoot[[32]byte{'p'}].unrealizedJustifiedEpoch = primitives.Epoch(2)
f.store.emptyNodeByRoot[[32]byte{'p'}].node.unrealizedJustifiedEpoch = primitives.Epoch(2)
driftGenesisTime(f, 96, 0)
st, root, err = prepareForkchoiceState(ctx, 95, [32]byte{'h'}, [32]byte{'p'}, [32]byte{}, 1, 1)
@@ -319,7 +319,7 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
// This test checks that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(1), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedJustifiedEpoch)
})
t.Run("Previous Epoch is not justified", func(tt *testing.T) {
f := setup(1, 1)
@@ -335,6 +335,6 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
// This test checks that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
require.Equal(tt, primitives.Epoch(2), f.store.emptyNodeByRoot[[32]byte{'h'}].node.unrealizedJustifiedEpoch)
})
}

View File

@@ -284,7 +284,7 @@ func TestVotes_CanFindHead(t *testing.T) {
// 9 10
f.store.finalizedCheckpoint.Root = indexToHash(5)
require.NoError(t, f.store.prune(t.Context()))
assert.Equal(t, 5, len(f.store.nodeByRoot), "Incorrect nodes length after prune")
assert.Equal(t, 5, len(f.store.emptyNodeByRoot), "Incorrect nodes length after prune")
// we pruned artificially the justified root.
f.store.justifiedCheckpoint.Root = indexToHash(5)

View File

@@ -9,6 +9,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
consensus_blocks "github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
forkchoice2 "github.com/OffchainLabs/prysm/v7/consensus-types/forkchoice"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
)
@@ -23,6 +24,7 @@ type ForkChoicer interface {
Unlock()
HeadRetriever // to compute head.
BlockProcessor // to track new block for fork choice.
PayloadProcessor // to track new payloads for fork choice.
AttestationProcessor // to track new attestation for fork choice.
Getter // to retrieve fork choice information.
Setter // to set fork choice information.
@@ -47,6 +49,11 @@ type BlockProcessor interface {
InsertChain(context.Context, []*forkchoicetypes.BlockAndCheckpoints) error
}
// PayloadProcessor processes a payload envelope
type PayloadProcessor interface {
InsertPayload(context.Context, interfaces.ROExecutionPayloadEnvelope) error
}
// AttestationProcessor processes the attestation that's used for accounting fork choice.
type AttestationProcessor interface {
ProcessAttestation(context.Context, []uint64, [32]byte, primitives.Epoch)
@@ -67,13 +74,11 @@ type FastGetter interface {
HasNode([32]byte) bool
HighestReceivedBlockSlot() primitives.Slot
HighestReceivedBlockRoot() [32]byte
HighestReceivedBlockDelay() primitives.Slot
IsCanonical(root [32]byte) bool
IsOptimistic(root [32]byte) (bool, error)
IsViableForCheckpoint(*forkchoicetypes.Checkpoint) (bool, error)
JustifiedCheckpoint() *forkchoicetypes.Checkpoint
JustifiedPayloadBlockHash() [32]byte
LastRoot(primitives.Epoch) [32]byte
NodeCount() int
PreviousJustifiedCheckpoint() *forkchoicetypes.Checkpoint
ProposerBoost() [fieldparams.RootLength]byte
@@ -91,7 +96,7 @@ type FastGetter interface {
// Setter allows to set forkchoice information
type Setter interface {
SetOptimisticToValid(context.Context, [fieldparams.RootLength]byte) error
SetOptimisticToInvalid(context.Context, [fieldparams.RootLength]byte, [fieldparams.RootLength]byte, [fieldparams.RootLength]byte) ([][32]byte, error)
SetOptimisticToInvalid(context.Context, [32]byte, [32]byte, [32]byte, [32]byte) ([][32]byte, error)
UpdateJustifiedCheckpoint(context.Context, *forkchoicetypes.Checkpoint) error
UpdateFinalizedCheckpoint(*forkchoicetypes.Checkpoint) error
SetGenesisTime(time.Time)

View File

@@ -121,13 +121,6 @@ func (ro *ROForkChoice) HighestReceivedBlockRoot() [32]byte {
return ro.getter.HighestReceivedBlockRoot()
}
// HighestReceivedBlockDelay delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) HighestReceivedBlockDelay() primitives.Slot {
ro.l.RLock()
defer ro.l.RUnlock()
return ro.getter.HighestReceivedBlockDelay()
}
// ReceivedBlocksLastEpoch delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.l.RLock()
@@ -163,13 +156,6 @@ func (ro *ROForkChoice) Slot(root [32]byte) (primitives.Slot, error) {
return ro.getter.Slot(root)
}
// LastRoot delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) LastRoot(e primitives.Epoch) [32]byte {
ro.l.RLock()
defer ro.l.RUnlock()
return ro.getter.LastRoot(e)
}
// DependentRoot delegates to the underlying forkchoice call, under a lock.
func (ro *ROForkChoice) DependentRoot(epoch primitives.Epoch) ([32]byte, error) {
ro.l.RLock()

View File

@@ -30,7 +30,6 @@ const (
nodeCountCalled
highestReceivedBlockSlotCalled
highestReceivedBlockRootCalled
highestReceivedBlockDelayCalled
receivedBlocksLastEpochCalled
weightCalled
isOptimisticCalled
@@ -118,11 +117,6 @@ func TestROLocking(t *testing.T) {
call: highestReceivedBlockSlotCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockSlot() },
},
{
name: "highestReceivedBlockDelayCalled",
call: highestReceivedBlockDelayCalled,
cb: func(g FastGetter) { g.HighestReceivedBlockDelay() },
},
{
name: "receivedBlocksLastEpochCalled",
call: receivedBlocksLastEpochCalled,
@@ -148,11 +142,6 @@ func TestROLocking(t *testing.T) {
call: slotCalled,
cb: func(g FastGetter) { _, err := g.Slot([32]byte{}); _discard(t, err) },
},
{
name: "lastRootCalled",
call: lastRootCalled,
cb: func(g FastGetter) { g.LastRoot(0) },
},
{
name: "targetRootForEpochCalled",
call: targetRootForEpochCalled,
@@ -265,11 +254,6 @@ func (ro *mockROForkchoice) HighestReceivedBlockRoot() [32]byte {
return [32]byte{}
}
func (ro *mockROForkchoice) HighestReceivedBlockDelay() primitives.Slot {
ro.calls = append(ro.calls, highestReceivedBlockDelayCalled)
return 0
}
func (ro *mockROForkchoice) ReceivedBlocksLastEpoch() (uint64, error) {
ro.calls = append(ro.calls, receivedBlocksLastEpochCalled)
return 0, nil
@@ -295,11 +279,6 @@ func (ro *mockROForkchoice) Slot(_ [32]byte) (primitives.Slot, error) {
return 0, nil
}
func (ro *mockROForkchoice) LastRoot(_ primitives.Epoch) [32]byte {
ro.calls = append(ro.calls, lastRootCalled)
return [32]byte{}
}
// DependentRoot implements FastGetter.
func (ro *mockROForkchoice) DependentRoot(_ primitives.Epoch) ([32]byte, error) {
ro.calls = append(ro.calls, dependentRootCalled)

View File

@@ -0,0 +1,95 @@
# Graffiti Version Info Implementation
## Summary
Add automatic EL+CL version info to block graffiti following [ethereum/execution-apis#517](https://github.com/ethereum/execution-apis/pull/517). Uses the [flexible standard](https://hackmd.io/@wmoBhF17RAOH2NZ5bNXJVg/BJX2c9gja) to pack client info into leftover space after user graffiti.
More details: https://github.com/ethereum/execution-apis/blob/main/src/engine/identification.md
## Implementation
### Core Component: GraffitiInfo Struct
Thread-safe struct holding version information:
```go
const clCode = "PR"
type GraffitiInfo struct {
mu sync.RWMutex
userGraffiti string // From --graffiti flag (set once at startup)
clCommit string // From version.GetCommitPrefix() helper function
elCode string // From engine_getClientVersionV1
elCommit string // From engine_getClientVersionV1
}
```
### Flow
1. **Startup**: Parse flags, create GraffitiInfo with the user graffiti and CL info (a constructor/setter sketch follows this list).
2. **Wiring**: Pass the struct to both the execution service and the RPC validator server.
3. **Runtime**: An execution service goroutine periodically calls `engine_getClientVersionV1` and updates the EL fields.
4. **Block Proposal**: The RPC validator server calls `GenerateGraffiti()` to get the formatted graffiti.
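As a rough illustration of steps 1 and 3, the constructor and the engine-driven setter could look like the sketch below. `NewGraffitiInfo`'s signature is an assumption made here for illustration; `UpdateFromEngine` is named by the update logic further down, but its body here is only a sketch.
```go
// Sketch only, not the final API. NewGraffitiInfo wires in the values
// that are fixed at startup; UpdateFromEngine is called later from the
// execution service goroutine and must be safe against concurrent reads.
func NewGraffitiInfo(userGraffiti, clCommit string) *GraffitiInfo {
	return &GraffitiInfo{
		userGraffiti: userGraffiti, // from the --graffiti flag
		clCommit:     clCommit,     // from version.GetCommitPrefix()
	}
}

// UpdateFromEngine records the EL client code and commit reported by
// engine_getClientVersionV1, under the write lock.
func (g *GraffitiInfo) UpdateFromEngine(elCode, elCommit string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.elCode = elCode
	g.elCommit = elCommit
}
```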
### Flexible Graffiti Format
Packs as much client info as space allows (after user graffiti):
| Available Space | Format | Example |
|----------------|--------|---------|
| ≥12 bytes | `EL(2)+commit(4)+CL(2)+commit(4)+user` | `GE168dPR63afBob` |
| 8-11 bytes | `EL(2)+commit(2)+CL(2)+commit(2)+user` | `GE16PR63my node here` |
| 4-7 bytes | `EL(2)+CL(2)+user` | `GEPRthis is my graffiti msg` |
| 2-3 bytes | `EL(2)+user` | `GEalmost full graffiti message` |
| <2 bytes | user only | `full 32 byte user graffiti here` |
```go
// prefix returns at most the first n characters of s.
func prefix(s string, n int) string {
	if len(s) > n {
		return s[:n]
	}
	return s
}

// GenerateGraffiti packs EL and CL client info into the space left
// after the user graffiti, per the table above.
func (g *GraffitiInfo) GenerateGraffiti() [32]byte {
	g.mu.RLock()
	defer g.mu.RUnlock()
	available := 32 - len(g.userGraffiti)
	elCommit2, elCommit4 := prefix(g.elCommit, 2), prefix(g.elCommit, 4)
	clCommit2, clCommit4 := prefix(g.clCommit, 2), prefix(g.clCommit, 4)
	if g.elCode == "" { // No EL info received yet: omit its commit bytes.
		elCommit2, elCommit4 = "", ""
	}
	var packed string
	switch {
	case available >= 12:
		packed = g.elCode + elCommit4 + clCode + clCommit4 + g.userGraffiti
	case available >= 8:
		packed = g.elCode + elCommit2 + clCode + clCommit2 + g.userGraffiti
	case available >= 4:
		packed = g.elCode + clCode + g.userGraffiti
	case available >= 2:
		packed = g.elCode + g.userGraffiti
	default:
		packed = g.userGraffiti
	}
	var graffiti [32]byte
	copy(graffiti[:], packed)
	return graffiti
}
```
### Update Logic
Single testable function in execution service:
```go
func (s *Service) updateGraffitiInfo(ctx context.Context) {
	versions, err := s.GetClientVersion(ctx)
	if err != nil {
		return // Keep the last good value.
	}
	if len(versions) == 1 {
		s.graffitiInfo.UpdateFromEngine(versions[0].Code, versions[0].Commit)
	}
}
```
A goroutine calls this whenever `slot % 8 == 4`, i.e. four times per epoch, staying clear of slot boundaries.
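A minimal sketch of that scheduling loop, assuming the service is handed a channel of slot numbers (the `runGraffitiUpdater` name and the slot channel are assumptions; the actual slot-ticker plumbing is not specified in this doc):
```go
// Sketch only: the slot channel and its producer are assumptions.
func (s *Service) runGraffitiUpdater(ctx context.Context, slotC <-chan primitives.Slot) {
	for {
		select {
		case <-ctx.Done():
			return
		case slot := <-slotC:
			// Fires at slots 4, 12, 20, 28 of each 32-slot epoch,
			// away from slot boundaries.
			if slot%8 == 4 {
				s.updateGraffitiInfo(ctx)
			}
		}
	}
}
```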
### File Changes Required
**New:**
- `beacon-chain/execution/graffiti_info.go` - The struct and methods
- `beacon-chain/execution/graffiti_info_test.go` - Unit tests
- `runtime/version/version.go` - Add `GetCommitPrefix()` helper that extracts first 4 hex chars from the git commit injected via Bazel ldflags at build time
**Modified:**
- `beacon-chain/execution/service.go` - Add goroutine + updateGraffitiInfo()
- `beacon-chain/execution/engine_client.go` - Add `GetClientVersion()` method that performs the `engine_getClientVersionV1` call
- `beacon-chain/rpc/.../validator/proposer.go` - Call GenerateGraffiti()
- `beacon-chain/node/node.go` - Wire GraffitiInfo to services
### Testing Strategy
- Unit test the GraffitiInfo methods (priority logic, thread safety); a sketch follows below.
- Unit test updateGraffitiInfo() with a mocked engine client.
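For the first bullet, a unit test along these lines could pin down the packing priority. The expected strings reuse the examples from the format table above; the test name and the use of the sketched `NewGraffitiInfo` constructor are assumptions:
```go
// Sketch only: asserts the two extreme rows of the format table.
func TestGenerateGraffiti_PacksByAvailableSpace(t *testing.T) {
	g := NewGraffitiInfo("Bob", "63af")
	g.UpdateFromEngine("GE", "168d")

	// >= 12 free bytes: 2-char codes plus 4-char commits, then user text.
	got := g.GenerateGraffiti()
	want := "GE168dPR63afBob"
	if string(got[:len(want)]) != want {
		t.Fatalf("got %q, want %q", got[:len(want)], want)
	}

	// < 2 free bytes: user graffiti is returned unchanged.
	user := "full 32 byte user graffiti here"
	g = NewGraffitiInfo(user, "63af")
	g.UpdateFromEngine("GE", "168d")
	got = g.GenerateGraffiti()
	if string(got[:len(user)]) != user {
		t.Fatalf("user graffiti should win when space is short")
	}
}
```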

View File

@@ -134,10 +134,20 @@ type BeaconNode struct {
// New creates a new node instance, sets up configuration options, and registers
// every required service to the node.
func New(cliCtx *cli.Context, cancel context.CancelFunc, opts ...Option) (*BeaconNode, error) {
func New(cliCtx *cli.Context, cancel context.CancelFunc, optFuncs []func(*cli.Context) ([]Option, error), opts ...Option) (*BeaconNode, error) {
if err := configureBeacon(cliCtx); err != nil {
return nil, errors.Wrap(err, "could not set beacon configuration options")
}
for _, of := range optFuncs {
ofo, err := of(cliCtx)
if err != nil {
return nil, err
}
if ofo != nil {
opts = append(opts, ofo...)
}
}
ctx := cliCtx.Context
beacon := &BeaconNode{
@@ -775,9 +785,6 @@ func (b *BeaconNode) registerPOWChainService() error {
return err
}
// Create GraffitiInfo for client version tracking in block graffiti
graffitiInfo := execution.NewGraffitiInfo()
// skipcq: CRT-D0001
opts := append(
b.serviceFlagOpts.executionChainFlagOpts,
@@ -790,7 +797,6 @@ func (b *BeaconNode) registerPOWChainService() error {
execution.WithFinalizedStateAtStartup(b.finalizedStateAtStartUp),
execution.WithJwtId(b.cliCtx.String(flags.JwtId.Name)),
execution.WithVerifierWaiter(b.verifyInitWaiter),
execution.WithGraffitiInfo(graffitiInfo),
)
web3Service, err := execution.NewService(b.ctx, opts...)
if err != nil {
@@ -997,7 +1003,6 @@ func (b *BeaconNode) registerRPCService(router *http.ServeMux) error {
TrackedValidatorsCache: b.trackedValidatorsCache,
PayloadIDCache: b.payloadIDCache,
LCStore: b.lcStore,
GraffitiInfo: web3Service.GraffitiInfo(),
})
return b.services.RegisterService(rpcService)

View File

@@ -59,7 +59,7 @@ func TestNodeClose_OK(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err)
node.Close()
@@ -87,7 +87,7 @@ func TestNodeStart_Ok(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err)
require.NotNil(t, node.lcStore)
node.services = &runtime.ServiceRegistry{}
@@ -116,7 +116,7 @@ func TestNodeStart_SyncChecker(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
node, err := New(ctx, cancel, options...)
node, err := New(ctx, cancel, nil, options...)
require.NoError(t, err)
go func() {
node.Start()
@@ -151,7 +151,7 @@ func TestClearDB(t *testing.T) {
WithDataColumnStorage(filesystem.NewEphemeralDataColumnStorage(t)),
}
_, err = New(context, cancel, options...)
_, err = New(context, cancel, nil, options...)
require.NoError(t, err)
require.LogsContain(t, hook, "Removing database")
}

View File

@@ -25,6 +25,7 @@ var gossipTopicMappings = map[string]func() proto.Message{
LightClientOptimisticUpdateTopicFormat: func() proto.Message { return &ethpb.LightClientOptimisticUpdateAltair{} },
LightClientFinalityUpdateTopicFormat: func() proto.Message { return &ethpb.LightClientFinalityUpdateAltair{} },
DataColumnSubnetTopicFormat: func() proto.Message { return &ethpb.DataColumnSidecar{} },
PayloadAttestationMessageTopicFormat: func() proto.Message { return &ethpb.PayloadAttestationMessage{} },
}
// GossipTopicMappings is a function to return the assigned data type
@@ -144,4 +145,7 @@ func init() {
// Specially handle Fulu objects.
GossipTypeMapping[reflect.TypeFor[*ethpb.SignedBeaconBlockFulu]()] = BlockSubnetTopicFormat
// Payload attestation messages.
GossipTypeMapping[reflect.TypeFor[*ethpb.PayloadAttestationMessage]()] = PayloadAttestationMessageTopicFormat
}

View File

@@ -138,6 +138,9 @@ func connect(a, b host.Host) error {
func (p *TestP2P) ReceiveRPC(topic string, msg proto.Message) {
h, err := libp2p.New(libp2p.ResourceManager(&network.NullResourceManager{}))
require.NoError(p.t, err)
p.t.Cleanup(func() {
require.NoError(p.t, h.Close())
})
if err := connect(h, p.BHost); err != nil {
p.t.Fatalf("Failed to connect two peers for RPC: %v", err)
}
@@ -169,6 +172,9 @@ func (p *TestP2P) ReceiveRPC(topic string, msg proto.Message) {
func (p *TestP2P) ReceivePubSub(topic string, msg proto.Message) {
h, err := libp2p.New(libp2p.ResourceManager(&network.NullResourceManager{}))
require.NoError(p.t, err)
p.t.Cleanup(func() {
require.NoError(p.t, h.Close())
})
ps, err := pubsub.NewFloodSub(context.Background(), h,
pubsub.WithMessageSigning(false),
pubsub.WithStrictSignatureVerification(false),

View File

@@ -46,6 +46,8 @@ const (
GossipLightClientOptimisticUpdateMessage = "light_client_optimistic_update"
// GossipDataColumnSidecarMessage is the name for the data column sidecar message type.
GossipDataColumnSidecarMessage = "data_column_sidecar"
// GossipPayloadAttestationMessage is the name for the payload attestation message type.
GossipPayloadAttestationMessage = "payload_attestation_message"
// Topic Formats
//
@@ -75,6 +77,8 @@ const (
LightClientOptimisticUpdateTopicFormat = GossipProtocolAndDigest + GossipLightClientOptimisticUpdateMessage
// DataColumnSubnetTopicFormat is the topic format for the data column subnet.
DataColumnSubnetTopicFormat = GossipProtocolAndDigest + GossipDataColumnSidecarMessage + "_%d"
// PayloadAttestationMessageTopicFormat is the topic format for payload attestation messages.
PayloadAttestationMessageTopicFormat = GossipProtocolAndDigest + GossipPayloadAttestationMessage
)
// topic is a struct representing a single gossipsub topic.
@@ -141,7 +145,7 @@ func (s *Service) allTopics() []topic {
cfg := params.BeaconConfig()
// bellatrix: no special topics; electra: blobs topics handled all together
genesis, altair, capella := cfg.GenesisEpoch, cfg.AltairForkEpoch, cfg.CapellaForkEpoch
deneb, fulu, future := cfg.DenebForkEpoch, cfg.FuluForkEpoch, cfg.FarFutureEpoch
deneb, fulu, gloas, future := cfg.DenebForkEpoch, cfg.FuluForkEpoch, cfg.GloasForkEpoch, cfg.FarFutureEpoch
// Templates are starter topics - they have a placeholder digest and the subnet is set to the maximum value
// for the subnet (see how this is used in allSubnetsBelow). These are not directly returned by the method,
// they are copied and modified for each digest where they apply based on the start and end epochs.
@@ -158,6 +162,7 @@ func (s *Service) allTopics() []topic {
newTopic(altair, future, empty, GossipLightClientOptimisticUpdateMessage),
newTopic(altair, future, empty, GossipLightClientFinalityUpdateMessage),
newTopic(capella, future, empty, GossipBlsToExecutionChangeMessage),
newTopic(gloas, future, empty, GossipPayloadAttestationMessage),
}
last := params.GetNetworkScheduleEntry(genesis)
schedule := []params.NetworkScheduleEntry{last}

View File

@@ -89,13 +89,7 @@ func (vs *Server) GetBeaconBlock(ctx context.Context, req *ethpb.BlockRequest) (
}
// Set slot, graffiti, randao reveal, and parent root.
sBlk.SetSlot(req.Slot)
// Generate graffiti with client version info using flexible standard
if vs.GraffitiInfo != nil {
graffiti := vs.GraffitiInfo.GenerateGraffiti(req.Graffiti)
sBlk.SetGraffiti(graffiti[:])
} else {
sBlk.SetGraffiti(req.Graffiti)
}
sBlk.SetGraffiti(req.Graffiti)
sBlk.SetRandaoReveal(req.RandaoReveal)
sBlk.SetParentRoot(parentRoot[:])

View File

@@ -83,7 +83,6 @@ type Server struct {
ClockWaiter startup.ClockWaiter
CoreService *core.Service
AttestationStateFetcher blockchain.AttestationStateFetcher
GraffitiInfo *execution.GraffitiInfo
}
// Deprecated: The gRPC API will remain the default and fully supported through v8 (expected in 2026) but will be eventually removed in favor of REST API.

View File

@@ -125,7 +125,6 @@ type Config struct {
TrackedValidatorsCache *cache.TrackedValidatorsCache
PayloadIDCache *cache.PayloadIDCache
LCStore *lightClient.Store
GraffitiInfo *execution.GraffitiInfo
}
// NewService instantiates a new RPC service instance that will
@@ -257,7 +256,6 @@ func NewService(ctx context.Context, cfg *Config) *Service {
TrackedValidatorsCache: s.cfg.TrackedValidatorsCache,
PayloadIDCache: s.cfg.PayloadIDCache,
AttestationStateFetcher: s.cfg.AttestationReceiver,
GraffitiInfo: s.cfg.GraffitiInfo,
}
s.validatorServer = validatorServer
nodeServer := &nodev1alpha1.Server{

View File

@@ -46,14 +46,20 @@ func (b *BeaconState) BuilderPubkey(builderIndex primitives.BuilderIndex) ([fiel
}
// IsActiveBuilder returns true if the builder placement is finalized and it has not initiated exit.
// Spec v1.7.0-alpha.0 (pseudocode):
// def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
//
// builder = state.builders[builder_index]
// return (
// builder.deposit_epoch < state.finalized_checkpoint.epoch
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH
// )
// <spec fn="is_active_builder" fork="gloas" hash="1a599fb2">
// def is_active_builder(state: BeaconState, builder_index: BuilderIndex) -> bool:
// """
// Check if the builder at ``builder_index`` is active for the given ``state``.
// """
// builder = state.builders[builder_index]
// return (
// # Placement in builder list is finalized
// builder.deposit_epoch < state.finalized_checkpoint.epoch
// # Has not initiated exit
// and builder.withdrawable_epoch == FAR_FUTURE_EPOCH
// )
// </spec>
func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("IsActiveBuilder", b.version)
@@ -72,15 +78,18 @@ func (b *BeaconState) IsActiveBuilder(builderIndex primitives.BuilderIndex) (boo
}
// CanBuilderCoverBid returns true if the builder has enough balance to cover the given bid amount.
// Spec v1.7.0-alpha.0 (pseudocode):
// def can_builder_cover_bid(state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei) -> bool:
//
// builder_balance = state.builders[builder_index].balance
// pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index)
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount
// if builder_balance < min_balance:
// return False
// return builder_balance - min_balance >= bid_amount
// <spec fn="can_builder_cover_bid" fork="gloas" hash="9e3f2d7c">
// def can_builder_cover_bid(
// state: BeaconState, builder_index: BuilderIndex, bid_amount: Gwei
// ) -> bool:
// builder_balance = state.builders[builder_index].balance
// pending_withdrawals_amount = get_pending_balance_to_withdraw_for_builder(state, builder_index)
// min_balance = MIN_DEPOSIT_AMOUNT + pending_withdrawals_amount
// if builder_balance < min_balance:
// return False
// return builder_balance - min_balance >= bid_amount
// </spec>
func (b *BeaconState) CanBuilderCoverBid(builderIndex primitives.BuilderIndex, bidAmount primitives.Gwei) (bool, error) {
if b.version < version.Gloas {
return false, errNotSupported("CanBuilderCoverBid", b.version)

View File

@@ -82,20 +82,20 @@ func (b *BeaconState) SetExecutionPayloadBid(h interfaces.ROExecutionPayloadBid)
parentBlockRoot := h.ParentBlockRoot()
blockHash := h.BlockHash()
randao := h.PrevRandao()
blobKzgCommitmentsRoot := h.BlobKzgCommitmentsRoot()
blobKzgCommitments := h.BlobKzgCommitments()
feeRecipient := h.FeeRecipient()
b.latestExecutionPayloadBid = &ethpb.ExecutionPayloadBid{
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: randao[:],
GasLimit: h.GasLimit(),
BuilderIndex: h.BuilderIndex(),
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitmentsRoot: blobKzgCommitmentsRoot[:],
FeeRecipient: feeRecipient[:],
ParentBlockHash: parentBlockHash[:],
ParentBlockRoot: parentBlockRoot[:],
BlockHash: blockHash[:],
PrevRandao: randao[:],
GasLimit: h.GasLimit(),
BuilderIndex: h.BuilderIndex(),
Slot: h.Slot(),
Value: h.Value(),
ExecutionPayment: h.ExecutionPayment(),
BlobKzgCommitments: blobKzgCommitments,
FeeRecipient: feeRecipient[:],
}
b.markFieldAsDirty(types.LatestExecutionPayloadBid)

View File

@@ -14,17 +14,17 @@ import (
)
type testExecutionPayloadBid struct {
parentBlockHash [32]byte
parentBlockRoot [32]byte
blockHash [32]byte
prevRandao [32]byte
blobKzgCommitmentsRoot [32]byte
feeRecipient [20]byte
gasLimit uint64
builderIndex primitives.BuilderIndex
slot primitives.Slot
value primitives.Gwei
executionPayment primitives.Gwei
parentBlockHash [32]byte
parentBlockRoot [32]byte
blockHash [32]byte
prevRandao [32]byte
blobKzgCommitments [][]byte
feeRecipient [20]byte
gasLimit uint64
builderIndex primitives.BuilderIndex
slot primitives.Slot
value primitives.Gwei
executionPayment primitives.Gwei
}
func (t testExecutionPayloadBid) ParentBlockHash() [32]byte { return t.parentBlockHash }
@@ -40,9 +40,12 @@ func (t testExecutionPayloadBid) Value() primitives.Gwei { return t.value }
func (t testExecutionPayloadBid) ExecutionPayment() primitives.Gwei {
return t.executionPayment
}
func (t testExecutionPayloadBid) BlobKzgCommitmentsRoot() [32]byte { return t.blobKzgCommitmentsRoot }
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func (t testExecutionPayloadBid) BlobKzgCommitments() [][]byte { return t.blobKzgCommitments }
func (t testExecutionPayloadBid) BlobKzgCommitmentCount() uint64 {
return uint64(len(t.blobKzgCommitments))
}
func (t testExecutionPayloadBid) FeeRecipient() [20]byte { return t.feeRecipient }
func (t testExecutionPayloadBid) IsNil() bool { return false }
func TestSetExecutionPayloadBid(t *testing.T) {
t.Run("previous fork returns expected error", func(t *testing.T) {
@@ -57,7 +60,7 @@ func TestSetExecutionPayloadBid(t *testing.T) {
parentBlockRoot = [32]byte(bytes.Repeat([]byte{0xCD}, 32))
blockHash = [32]byte(bytes.Repeat([]byte{0xEF}, 32))
prevRandao = [32]byte(bytes.Repeat([]byte{0x11}, 32))
blobRoot = [32]byte(bytes.Repeat([]byte{0x22}, 32))
blobCommitments = [][]byte{bytes.Repeat([]byte{0x22}, 48)}
feeRecipient [20]byte
)
copy(feeRecipient[:], bytes.Repeat([]byte{0x33}, len(feeRecipient)))
@@ -66,17 +69,17 @@ func TestSetExecutionPayloadBid(t *testing.T) {
dirtyFields: make(map[types.FieldIndex]bool),
}
bid := testExecutionPayloadBid{
parentBlockHash: parentBlockHash,
parentBlockRoot: parentBlockRoot,
blockHash: blockHash,
prevRandao: prevRandao,
blobKzgCommitmentsRoot: blobRoot,
feeRecipient: feeRecipient,
gasLimit: 123,
builderIndex: 7,
slot: 9,
value: 11,
executionPayment: 22,
parentBlockHash: parentBlockHash,
parentBlockRoot: parentBlockRoot,
blockHash: blockHash,
prevRandao: prevRandao,
blobKzgCommitments: blobCommitments,
feeRecipient: feeRecipient,
gasLimit: 123,
builderIndex: 7,
slot: 9,
value: 11,
executionPayment: 22,
}
require.NoError(t, st.SetExecutionPayloadBid(bid))
@@ -86,7 +89,7 @@ func TestSetExecutionPayloadBid(t *testing.T) {
require.DeepEqual(t, parentBlockRoot[:], st.latestExecutionPayloadBid.ParentBlockRoot)
require.DeepEqual(t, blockHash[:], st.latestExecutionPayloadBid.BlockHash)
require.DeepEqual(t, prevRandao[:], st.latestExecutionPayloadBid.PrevRandao)
require.DeepEqual(t, blobRoot[:], st.latestExecutionPayloadBid.BlobKzgCommitmentsRoot)
require.DeepEqual(t, blobCommitments, st.latestExecutionPayloadBid.BlobKzgCommitments)
require.DeepEqual(t, feeRecipient[:], st.latestExecutionPayloadBid.FeeRecipient)
require.Equal(t, uint64(123), st.latestExecutionPayloadBid.GasLimit)
require.Equal(t, primitives.BuilderIndex(7), st.latestExecutionPayloadBid.BuilderIndex)

View File

@@ -47,6 +47,7 @@ go_library(
"subscriber_bls_to_execution_change.go",
"subscriber_data_column_sidecar.go",
"subscriber_handlers.go",
"subscriber_payload_attestation.go",
"subscriber_sync_committee_message.go",
"subscriber_sync_contribution_proof.go",
"subscription_topic_handler.go",
@@ -58,6 +59,7 @@ go_library(
"validate_bls_to_execution_change.go",
"validate_data_column.go",
"validate_light_client.go",
"validate_payload_attestation.go",
"validate_proposer_slashing.go",
"validate_sync_committee_message.go",
"validate_sync_contribution_proof.go",
@@ -114,6 +116,7 @@ go_library(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attestation:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",
@@ -212,6 +215,7 @@ go_test(
"validate_bls_to_execution_change_test.go",
"validate_data_column_test.go",
"validate_light_client_test.go",
"validate_payload_attestation_test.go",
"validate_proposer_slashing_test.go",
"validate_sync_committee_message_test.go",
"validate_sync_contribution_proof_test.go",
@@ -264,6 +268,7 @@ go_test(
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attestation:go_default_library",
"//consensus-types/primitives:go_default_library",
"//consensus-types/wrapper:go_default_library",
"//container/leaky-bucket:go_default_library",

View File

@@ -207,6 +207,13 @@ func WithTrackedValidatorsCache(c *cache.TrackedValidatorsCache) Option {
}
}
func WithPayloadAttestationCache(c *cache.PayloadAttestationCache) Option {
return func(s *Service) error {
s.payloadAttestationCache = c
return nil
}
}
// WithSlasherEnabled configures the sync package to support slashing detection.
func WithSlasherEnabled(enabled bool) Option {
return func(s *Service) error {

View File

@@ -38,6 +38,7 @@ import (
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
"github.com/OffchainLabs/prysm/v7/consensus-types/interfaces"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
leakybucket "github.com/OffchainLabs/prysm/v7/container/leaky-bucket"
"github.com/OffchainLabs/prysm/v7/crypto/rand"
@@ -121,6 +122,7 @@ type blockchainService interface {
blockchain.FinalizationFetcher
blockchain.ForkFetcher
blockchain.AttestationReceiver
blockchain.PayloadAttestationReceiver
blockchain.TimeFetcher
blockchain.GenesisFetcher
blockchain.CanonicalFetcher
@@ -173,6 +175,7 @@ type Service struct {
verifierWaiter *verification.InitializerWaiter
newBlobVerifier verification.NewBlobVerifier
newColumnsVerifier verification.NewDataColumnsVerifier
newPayloadAttestationVerifier verification.NewPayloadAttestationMsgVerifier
columnSidecarsExecSingleFlight singleflight.Group
reconstructionSingleFlight singleflight.Group
availableBlocker coverage.AvailableBlocker
@@ -182,6 +185,7 @@ type Service struct {
slasherEnabled bool
lcStore *lightClient.Store
dataColumnLogCh chan dataColumnLogEntry
payloadAttestationCache *cache.PayloadAttestationCache
digestActions perDigestSet
subscriptionSpawner func(func()) // see Service.spawn for details
}
@@ -190,15 +194,16 @@ type Service struct {
func NewService(ctx context.Context, opts ...Option) *Service {
ctx, cancel := context.WithCancel(ctx)
r := &Service{
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]any),
dataColumnLogCh: make(chan dataColumnLogEntry, 1000),
reconstructionRandGen: rand.NewGenerator(),
ctx: ctx,
cancel: cancel,
chainStarted: abool.New(),
cfg: &config{clock: startup.NewClock(time.Unix(0, 0), [32]byte{})},
slotToPendingBlocks: gcache.New(pendingBlockExpTime /* exp time */, 0 /* disable janitor */),
seenPendingBlocks: make(map[[32]byte]bool),
blkRootToPendingAtts: make(map[[32]byte][]any),
dataColumnLogCh: make(chan dataColumnLogEntry, 1000),
reconstructionRandGen: rand.NewGenerator(),
payloadAttestationCache: &cache.PayloadAttestationCache{},
}
for _, opt := range opts {
@@ -250,6 +255,12 @@ func newDataColumnsVerifierFromInitializer(ini *verification.Initializer) verifi
}
}
func newPayloadAttestationMessageFromInitializer(ini *verification.Initializer) verification.NewPayloadAttestationMsgVerifier {
return func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return ini.NewPayloadAttestationMsgVerifier(pa, reqs)
}
}
// Start the regular sync service.
func (s *Service) Start() {
v, err := s.verifierWaiter.WaitForInitializer(s.ctx)
@@ -259,6 +270,7 @@ func (s *Service) Start() {
}
s.newBlobVerifier = newBlobVerifierFromInitializer(v)
s.newColumnsVerifier = newDataColumnsVerifierFromInitializer(v)
s.newPayloadAttestationVerifier = newPayloadAttestationMessageFromInitializer(v)
go s.verifierRoutine()
go s.startDiscoveryAndSubscriptions()

View File

@@ -330,6 +330,18 @@ func (s *Service) registerSubscribers(nse params.NetworkScheduleEntry) bool {
})
})
}
// New gossip topic in Gloas.
if params.BeaconConfig().GloasForkEpoch <= nse.Epoch {
s.spawn(func() {
s.subscribe(
p2p.PayloadAttestationMessageTopicFormat,
s.validatePayloadAttestation,
s.payloadAttestationSubscriber,
nse,
)
})
}
return true
}

View File

@@ -0,0 +1,21 @@
package sync
import (
"context"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"google.golang.org/protobuf/proto"
)
func (s *Service) payloadAttestationSubscriber(ctx context.Context, msg proto.Message) error {
a, ok := msg.(*eth.PayloadAttestationMessage)
if !ok {
return errWrongMessage
}
if err := s.payloadAttestationCache.Add(a.Data.Slot, a.ValidatorIndex); err != nil {
return err
}
return s.cfg.chain.ReceivePayloadAttestationMessage(ctx, a)
}

View File

@@ -0,0 +1,131 @@
package sync
import (
"bytes"
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/transition"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
"github.com/OffchainLabs/prysm/v7/encoding/bytesutil"
"github.com/OffchainLabs/prysm/v7/monitoring/tracing/trace"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/time/slots"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/peer"
"github.com/pkg/errors"
)
var (
errAlreadySeenPayloadAttestation = errors.New("payload attestation already seen for validator index")
)
func (s *Service) validatePayloadAttestation(ctx context.Context, pid peer.ID, msg *pubsub.Message) (pubsub.ValidationResult, error) {
if pid == s.cfg.p2p.PeerID() {
return pubsub.ValidationAccept, nil
}
if s.cfg.initialSync.Syncing() {
return pubsub.ValidationIgnore, nil
}
ctx, span := trace.StartSpan(ctx, "sync.validatePayloadAttestation")
defer span.End()
if msg.Topic == nil {
return pubsub.ValidationReject, p2p.ErrInvalidTopic
}
m, err := s.decodePubsubMessage(msg)
if err != nil {
return pubsub.ValidationReject, err
}
att, ok := m.(*eth.PayloadAttestationMessage)
if !ok {
return pubsub.ValidationReject, errWrongMessage
}
pa, err := payloadattestation.NewReadOnly(att)
if err != nil {
return pubsub.ValidationIgnore, err
}
v := s.newPayloadAttestationVerifier(pa, verification.GossipPayloadAttestationMessageRequirements)
// [IGNORE] The message's slot is for the current slot (with a MAXIMUM_GOSSIP_CLOCK_DISPARITY allowance),
// i.e. data.slot == current_slot.
if err := v.VerifyCurrentSlot(); err != nil {
return pubsub.ValidationIgnore, err
}
// [IGNORE] The payload_attestation_message is the first valid message received from the validator with
// index payload_attestation_message.validator_index.
if s.payloadAttestationCache.Seen(pa.Slot(), pa.ValidatorIndex()) {
return pubsub.ValidationIgnore, errAlreadySeenPayloadAttestation
}
// [IGNORE] The message's block data.beacon_block_root has been seen (via gossip or non-gossip sources)
// (a client MAY queue attestation for processing once the block is retrieved. Note a client might want to request payload after).
if err := v.VerifyBlockRootSeen(s.cfg.chain.InForkchoice); err != nil {
// TODO: queue attestation
return pubsub.ValidationIgnore, err
}
// [REJECT] The message's block data.beacon_block_root passes validation.
if err := v.VerifyBlockRootValid(s.hasBadBlock); err != nil {
return pubsub.ValidationReject, err
}
st, err := s.getPtcState(ctx, pa)
if err != nil {
return pubsub.ValidationIgnore, err
}
// [REJECT] The message's validator index is within the payload committee in get_ptc(state, data.slot).
// The state is the head state corresponding to processing the block up to the current slot.
if err := v.VerifyValidatorInPTC(ctx, st); err != nil {
return pubsub.ValidationReject, err
}
// [REJECT] payload_attestation_message.signature is valid with respect to the validator's public key.
if err := v.VerifySignature(st); err != nil {
return pubsub.ValidationReject, err
}
msg.ValidatorData = att
return pubsub.ValidationAccept, nil
}
func (s *Service) getPtcState(ctx context.Context, pa payloadattestation.ROMessage) (state.ReadOnlyBeaconState, error) {
blockRoot := pa.BeaconBlockRoot()
blockSlot := pa.Slot()
blockEpoch := slots.ToEpoch(blockSlot)
headSlot := s.cfg.chain.HeadSlot()
headEpoch := slots.ToEpoch(headSlot)
headRoot, err := s.cfg.chain.HeadRoot(ctx)
if err != nil {
return nil, err
}
if blockEpoch == headEpoch {
if bytes.Equal(blockRoot[:], headRoot) {
return s.cfg.chain.HeadStateReadOnly(ctx)
}
headDependent, err := s.cfg.chain.DependentRootForEpoch(bytesutil.ToBytes32(headRoot), blockEpoch)
if err != nil {
return nil, err
}
blockDependent, err := s.cfg.chain.DependentRootForEpoch(blockRoot, blockEpoch)
if err != nil {
return nil, err
}
if bytes.Equal(headDependent[:], blockDependent[:]) {
return s.cfg.chain.HeadStateReadOnly(ctx)
}
}
headState, err := s.cfg.chain.HeadState(ctx)
if err != nil {
return nil, err
}
return transition.ProcessSlotsUsingNextSlotCache(ctx, headState, headRoot, blockSlot)
}

View File

@@ -0,0 +1,165 @@
package sync
import (
"bytes"
"context"
"reflect"
"testing"
"time"
mock "github.com/OffchainLabs/prysm/v7/beacon-chain/blockchain/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/cache"
"github.com/OffchainLabs/prysm/v7/beacon-chain/p2p"
p2ptest "github.com/OffchainLabs/prysm/v7/beacon-chain/p2p/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
mockSync "github.com/OffchainLabs/prysm/v7/beacon-chain/sync/initial-sync/testing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/verification"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
"github.com/OffchainLabs/prysm/v7/testing/util"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pb "github.com/libp2p/go-libp2p-pubsub/pb"
"github.com/pkg/errors"
)
func TestValidatePayloadAttestationMessage_IncorrectTopic(t *testing.T) {
ctx := context.Background()
p := p2ptest.NewTestP2P(t)
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0)}
s := &Service{
payloadAttestationCache: &cache.PayloadAttestationCache{},
cfg: &config{chain: chainService, p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}}
msg := util.HydratePayloadAttestation(&ethpb.PayloadAttestation{}) // Using payload attestation for message should fail.
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, msg)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeFor[*ethpb.PayloadAttestation]()]
digest, err := s.currentForkDigest()
require.NoError(t, err)
topic = s.addDigestToTopic(topic, digest)
result, err := s.validatePayloadAttestation(ctx, "", &pubsub.Message{
Message: &pb.Message{
Data: buf.Bytes(),
Topic: &topic,
}})
require.ErrorContains(t, "extraction failed for topic", err)
require.Equal(t, result, pubsub.ValidationReject)
}
func TestValidatePayloadAttestationMessage_ErrorPathsWithMock(t *testing.T) {
tests := []struct {
error error
verifier verification.NewPayloadAttestationMsgVerifier
result pubsub.ValidationResult
}{
{
error: errors.New("incorrect slot"),
verifier: func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{ErrIncorrectPayloadAttSlot: errors.New("incorrect slot")}
},
result: pubsub.ValidationIgnore,
},
{
error: errors.New("block root seen"),
verifier: func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{ErrPayloadAttBlockRootNotSeen: errors.New("block root seen")}
},
result: pubsub.ValidationIgnore,
},
{
error: errors.New("block root invalid"),
verifier: func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{ErrPayloadAttBlockRootInvalid: errors.New("block root invalid")}
},
result: pubsub.ValidationReject,
},
{
error: errors.New("validator not in PTC"),
verifier: func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{ErrIncorrectPayloadAttValidator: errors.New("validator not in PTC")}
},
result: pubsub.ValidationReject,
},
{
error: errors.New("incorrect signature"),
verifier: func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{ErrInvalidMessageSignature: errors.New("incorrect signature")}
},
result: pubsub.ValidationReject,
},
}
for _, tt := range tests {
t.Run(tt.error.Error(), func(t *testing.T) {
ctx := context.Background()
p := p2ptest.NewTestP2P(t)
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0)}
s := &Service{
payloadAttestationCache: &cache.PayloadAttestationCache{},
cfg: &config{chain: chainService, p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}}
s.newPayloadAttestationVerifier = tt.verifier
msg := newPayloadAttestationMessage()
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, msg)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeFor[*ethpb.PayloadAttestationMessage]()]
digest, err := s.currentForkDigest()
require.NoError(t, err)
topic = s.addDigestToTopic(topic, digest)
result, err := s.validatePayloadAttestation(ctx, "", &pubsub.Message{
Message: &pb.Message{
Data: buf.Bytes(),
Topic: &topic,
}})
require.ErrorContains(t, tt.error.Error(), err)
require.Equal(t, result, tt.result)
})
}
}
func TestValidatePayloadAttestationMessage_Accept(t *testing.T) {
ctx := context.Background()
p := p2ptest.NewTestP2P(t)
chainService := &mock.ChainService{Genesis: time.Unix(time.Now().Unix()-int64(params.BeaconConfig().SecondsPerSlot), 0)}
s := &Service{
payloadAttestationCache: &cache.PayloadAttestationCache{},
cfg: &config{chain: chainService, p2p: p, initialSync: &mockSync.Sync{}, clock: startup.NewClock(chainService.Genesis, chainService.ValidatorsRoot)}}
s.newPayloadAttestationVerifier = func(pa payloadattestation.ROMessage, reqs []verification.Requirement) verification.PayloadAttestationMsgVerifier {
return &verification.MockPayloadAttestation{}
}
msg := newPayloadAttestationMessage()
buf := new(bytes.Buffer)
_, err := p.Encoding().EncodeGossip(buf, msg)
require.NoError(t, err)
topic := p2p.GossipTypeMapping[reflect.TypeFor[*ethpb.PayloadAttestationMessage]()]
digest, err := s.currentForkDigest()
require.NoError(t, err)
topic = s.addDigestToTopic(topic, digest)
result, err := s.validatePayloadAttestation(ctx, "", &pubsub.Message{
Message: &pb.Message{
Data: buf.Bytes(),
Topic: &topic,
}})
require.NoError(t, err)
require.Equal(t, result, pubsub.ValidationAccept)
}
func newPayloadAttestationMessage() *ethpb.PayloadAttestationMessage {
return &ethpb.PayloadAttestationMessage{
ValidatorIndex: 0,
Data: util.HydratePayloadAttestationData(&ethpb.PayloadAttestationData{Slot: 1}),
Signature: make([]byte, fieldparams.BLSSignatureLength),
}
}

View File

@@ -15,12 +15,16 @@ go_library(
"log.go",
"metrics.go",
"mock.go",
"payload_attestation.go",
"payload_attestation_mock.go",
"requirements.go",
"result.go",
],
importpath = "github.com/OffchainLabs/prysm/v7/beacon-chain/verification",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/gloas:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
@@ -32,6 +36,7 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/payload-attestation:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//encoding/bytesutil:go_default_library",
@@ -57,12 +62,14 @@ go_test(
"data_column_test.go",
"filesystem_test.go",
"initializer_test.go",
"payload_attestation_test.go",
"result_test.go",
"verification_test.go",
],
embed = [":go_default_library"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/core/peerdas:go_default_library",
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/db:go_default_library",
@@ -73,8 +80,10 @@ go_test(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/payload-attestation:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/bls:go_default_library",
"//crypto/bls/common:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/interop:go_default_library",

View File

@@ -14,24 +14,6 @@ import (
"github.com/pkg/errors"
)
const (
RequireBlobIndexInBounds Requirement = iota
RequireNotFromFutureSlot
RequireSlotAboveFinalized
RequireValidProposerSignature
RequireSidecarParentSeen
RequireSidecarParentValid
RequireSidecarParentSlotLower
RequireSidecarDescendsFromFinalized
RequireSidecarInclusionProven
RequireSidecarKzgProofVerified
RequireSidecarProposerExpected
// Data columns specific.
RequireValidFields
RequireCorrectSubnet
)
var allBlobSidecarRequirements = []Requirement{
RequireBlobIndexInBounds,
RequireNotFromFutureSlot,

View File

@@ -12,6 +12,7 @@ import (
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/config/params"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
ethpb "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"golang.org/x/sync/singleflight"
@@ -86,6 +87,16 @@ func (ini *Initializer) NewDataColumnsVerifier(roDataColumns []blocks.RODataColu
}
}
// NewPayloadAttestationMsgVerifier creates a PayloadAttestationMsgVerifier for a single payload attestation message,
// with the given set of requirements.
func (ini *Initializer) NewPayloadAttestationMsgVerifier(pa payloadattestation.ROMessage, reqs []Requirement) *PayloadAttMsgVerifier {
return &PayloadAttMsgVerifier{
sharedResources: ini.shared,
results: newResults(reqs...),
pa: pa,
}
}
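// A minimal usage sketch, assuming a decoded message pa, the head state st,
// and caller-supplied seen/bad-block callbacks. RequireMessageNotSeen has no
// Verify method, so the caller marks it satisfied after its own cache check:
//
//	v := ini.NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
//	if err := v.VerifyCurrentSlot(); err != nil { /* ignore message */ }
//	v.SatisfyRequirement(RequireMessageNotSeen) // dedup handled by the caller's cache
//	if err := v.VerifyBlockRootSeen(seen); err != nil { /* ignore */ }
//	if err := v.VerifyBlockRootValid(bad); err != nil { /* reject */ }
//	if err := v.VerifyValidatorInPTC(ctx, st); err != nil { /* reject */ }
//	if err := v.VerifySignature(st); err != nil { /* reject */ }
//	verified, err := v.VerifiedPayloadAttestation() // non-nil error unless all requirements recorded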
// InitializerWaiter provides an Initializer once all dependent resources are ready
// via the WaitForInitializer method.
type InitializerWaiter struct {

View File

@@ -3,8 +3,10 @@ package verification
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
fieldparams "github.com/OffchainLabs/prysm/v7/config/fieldparams"
"github.com/OffchainLabs/prysm/v7/consensus-types/blocks"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
)
// BlobVerifier defines the methods implemented by the ROBlobVerifier.
@@ -54,3 +56,18 @@ type DataColumnsVerifier interface {
// NewDataColumnsVerifier is a function signature that can be used to mock a setup where a
// column verifier can be easily initialized.
type NewDataColumnsVerifier func(dataColumns []blocks.RODataColumn, reqs []Requirement) DataColumnsVerifier
// PayloadAttestationMsgVerifier defines the methods implemented by the PayloadAttMsgVerifier.
type PayloadAttestationMsgVerifier interface {
VerifyCurrentSlot() error
VerifyBlockRootSeen(blockRootSeen func([32]byte) bool) error
VerifyBlockRootValid(func([32]byte) bool) error
VerifyValidatorInPTC(context.Context, state.ReadOnlyBeaconState) error
VerifySignature(state.ReadOnlyBeaconState) error
VerifiedPayloadAttestation() (payloadattestation.VerifiedROMessage, error)
SatisfyRequirement(Requirement)
}
// NewPayloadAttestationMsgVerifier is a function signature that can be used by code that needs to be
// able to mock Initializer.NewPayloadAttestationMsgVerifier without complex setup.
type NewPayloadAttestationMsgVerifier func(pa payloadattestation.ROMessage, reqs []Requirement) PayloadAttestationMsgVerifier
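// Note: Initializer.NewPayloadAttestationMsgVerifier returns the concrete
// *PayloadAttMsgVerifier, while this function type returns the interface, so
// Go's invariant function types require a small adapter closure when wiring
// the production constructor. A hedged sketch, with the field name taken from
// the sync service tests:
//
//	s.newPayloadAttestationVerifier = func(pa payloadattestation.ROMessage, reqs []Requirement) PayloadAttestationMsgVerifier {
//		return ini.NewPayloadAttestationMsgVerifier(pa, reqs)
//	}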

View File

@@ -0,0 +1,177 @@
package verification
import (
"context"
"fmt"
"slices"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/gloas"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/time/slots"
"github.com/pkg/errors"
)
// RequirementList defines a list of requirements.
type RequirementList []Requirement
// PayloadAttGossipRequirements defines the list of requirements for gossip payload attestation messages.
var PayloadAttGossipRequirements = []Requirement{
RequireCurrentSlot,
RequireMessageNotSeen,
RequireValidatorInPTC,
RequireBlockRootSeen,
RequireBlockRootValid,
RequireSignatureValid,
}
// GossipPayloadAttestationMessageRequirements is a requirement list for gossip payload attestation messages.
var GossipPayloadAttestationMessageRequirements = RequirementList(PayloadAttGossipRequirements)
var (
ErrIncorrectPayloadAttSlot = errors.New("payload att slot does not match the current slot")
ErrPayloadAttBlockRootNotSeen = errors.New("block root not seen")
ErrPayloadAttBlockRootInvalid = errors.New("block root invalid")
ErrIncorrectPayloadAttValidator = errors.New("validator not present in payload timeliness committee")
ErrInvalidPayloadAttMessage = errors.New("invalid payload attestation message")
)
var _ PayloadAttestationMsgVerifier = &PayloadAttMsgVerifier{}
// PayloadAttMsgVerifier is a read-only verifier for payload attestation messages.
type PayloadAttMsgVerifier struct {
*sharedResources
results *results
pa payloadattestation.ROMessage
}
// VerifyCurrentSlot verifies that the attestation's slot matches the current slot.
// Represents the following spec verification:
// [IGNORE] data.slot is the current slot.
func (v *PayloadAttMsgVerifier) VerifyCurrentSlot() (err error) {
defer v.record(RequireCurrentSlot, &err)
currentSlot := v.clock.CurrentSlot()
if v.pa.Slot() != currentSlot {
return fmt.Errorf("%w: got %d want %d", ErrIncorrectPayloadAttSlot, v.pa.Slot(), currentSlot)
}
return nil
}
// VerifyBlockRootSeen verifies that the attested beacon block root has been seen.
// Represents the following spec verification:
// [IGNORE] The attestation's data.beacon_block_root has been seen (via both gossip and non-gossip sources).
func (v *PayloadAttMsgVerifier) VerifyBlockRootSeen(blockRootSeen func([32]byte) bool) (err error) {
defer v.record(RequireBlockRootSeen, &err)
if blockRootSeen != nil && blockRootSeen(v.pa.BeaconBlockRoot()) {
return nil
}
return fmt.Errorf("%w: root=%#x", ErrPayloadAttBlockRootNotSeen, v.pa.BeaconBlockRoot())
}
// VerifyBlockRootValid verifies that the attested beacon block root has not been marked invalid.
// Represents the following spec verification:
// [REJECT] The beacon block with root data.beacon_block_root passes validation.
func (v *PayloadAttMsgVerifier) VerifyBlockRootValid(badBlock func([32]byte) bool) (err error) {
defer v.record(RequireBlockRootValid, &err)
if badBlock != nil && badBlock(v.pa.BeaconBlockRoot()) {
return fmt.Errorf("%w: root=%#x", ErrPayloadAttBlockRootInvalid, v.pa.BeaconBlockRoot())
}
return nil
}
// VerifyValidatorInPTC verifies that the validator index is in the payload timeliness committee (PTC).
// Represents the following spec verification:
// [REJECT] The validator index is within the payload committee in get_ptc(state, data.slot), for the current slot's head state.
func (v *PayloadAttMsgVerifier) VerifyValidatorInPTC(ctx context.Context, st state.ReadOnlyBeaconState) (err error) {
defer v.record(RequireValidatorInPTC, &err)
ptc, err := gloas.PayloadCommittee(ctx, st, v.pa.Slot())
if err != nil {
return err
}
if !slices.Contains(ptc, v.pa.ValidatorIndex()) {
return fmt.Errorf("%w: validatorIndex=%d", ErrIncorrectPayloadAttValidator, v.pa.ValidatorIndex())
}
return nil
}
// VerifySignature verifies the signature of the payload attestation message.
// Represents the following spec verification:
// [REJECT] The signature of payload_attestation_message.signature is valid with respect to the validator index.
func (v *PayloadAttMsgVerifier) VerifySignature(st state.ReadOnlyBeaconState) (err error) {
defer v.record(RequireSignatureValid, &err)
return validatePayloadAttestationMessageSignature(st, v.pa)
}
// VerifiedPayloadAttestation returns a verified payload attestation message once every requirement has been satisfied.
func (v *PayloadAttMsgVerifier) VerifiedPayloadAttestation() (payloadattestation.VerifiedROMessage, error) {
if v.results.allSatisfied() {
return payloadattestation.NewVerifiedROMessage(v.pa), nil
}
return payloadattestation.VerifiedROMessage{}, ErrInvalidPayloadAttMessage
}
// SatisfyRequirement allows the caller to manually mark a requirement as satisfied.
func (v *PayloadAttMsgVerifier) SatisfyRequirement(req Requirement) {
v.record(req, nil)
}
// validatePayloadAttestationMessageSignature verifies the signature of a payload attestation message.
func validatePayloadAttestationMessageSignature(st state.ReadOnlyBeaconState, payloadAtt payloadattestation.ROMessage) error {
val, err := st.ValidatorAtIndex(payloadAtt.ValidatorIndex())
if err != nil {
return fmt.Errorf("validator %d: %w", payloadAtt.ValidatorIndex(), err)
}
pub, err := bls.PublicKeyFromBytes(val.PublicKey)
if err != nil {
return fmt.Errorf("public key: %w", err)
}
s := payloadAtt.Signature()
sig, err := bls.SignatureFromBytes(s[:])
if err != nil {
return fmt.Errorf("signature bytes: %w", err)
}
currentEpoch := slots.ToEpoch(st.Slot())
domain, err := signing.Domain(st.Fork(), currentEpoch, params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
if err != nil {
return fmt.Errorf("domain: %w", err)
}
root, err := payloadAtt.SigningRoot(domain)
if err != nil {
return fmt.Errorf("signing root: %w", err)
}
if !sig.Verify(pub, root[:]) {
return fmt.Errorf("verify signature: %w", signing.ErrSigFailedToVerify)
}
return nil
}
// record records the result of a requirement verification.
func (v *PayloadAttMsgVerifier) record(req Requirement, err *error) {
if err == nil || *err == nil {
v.results.record(req, nil)
return
}
v.results.record(req, *err)
}
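// Note: the verify methods use named error returns so the deferred record
// call observes the final error: defer evaluates &err when the defer is set
// up, but the pointed-to value is only read when the method returns. A small
// self-contained illustration of the idiom (hypothetical names):
//
//	func verifyExample(fail bool) (err error) {
//		defer func() { fmt.Println("recorded:", err) }()
//		if fail {
//			return errors.New("boom") // prints "recorded: boom"
//		}
//		return nil // prints "recorded: <nil>"
//	}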

View File

@@ -0,0 +1,46 @@
package verification
import (
"context"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
)
type MockPayloadAttestation struct {
ErrIncorrectPayloadAttSlot error
ErrIncorrectPayloadAttValidator error
ErrPayloadAttBlockRootNotSeen error
ErrPayloadAttBlockRootInvalid error
ErrInvalidPayloadAttMessage error
ErrInvalidMessageSignature error
ErrUnsatisfiedRequirement error
}
var _ PayloadAttestationMsgVerifier = &MockPayloadAttestation{}
func (m *MockPayloadAttestation) VerifyCurrentSlot() error {
return m.ErrIncorrectPayloadAttSlot
}
func (m *MockPayloadAttestation) VerifyValidatorInPTC(ctx context.Context, st state.ReadOnlyBeaconState) error {
return m.ErrIncorrectPayloadAttValidator
}
func (m *MockPayloadAttestation) VerifyBlockRootSeen(_ func([32]byte) bool) error {
return m.ErrPayloadAttBlockRootNotSeen
}
func (m *MockPayloadAttestation) VerifyBlockRootValid(func([32]byte) bool) error {
return m.ErrPayloadAttBlockRootInvalid
}
func (m *MockPayloadAttestation) VerifySignature(st state.ReadOnlyBeaconState) error {
return m.ErrInvalidMessageSignature
}
func (m *MockPayloadAttestation) VerifiedPayloadAttestation() (payloadattestation.VerifiedROMessage, error) {
return payloadattestation.VerifiedROMessage{}, nil
}
func (m *MockPayloadAttestation) SatisfyRequirement(req Requirement) {}
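// Note: the zero value of MockPayloadAttestation returns nil from every
// Verify method, so accept-path tests can inject &MockPayloadAttestation{}
// without configuring any fields, as the sync gossip tests above do.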

View File

@@ -0,0 +1,167 @@
package verification
import (
"bytes"
"context"
"testing"
"time"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/helpers"
"github.com/OffchainLabs/prysm/v7/beacon-chain/core/signing"
"github.com/OffchainLabs/prysm/v7/beacon-chain/startup"
"github.com/OffchainLabs/prysm/v7/beacon-chain/state"
"github.com/OffchainLabs/prysm/v7/config/params"
payloadattestation "github.com/OffchainLabs/prysm/v7/consensus-types/payload-attestation"
"github.com/OffchainLabs/prysm/v7/consensus-types/primitives"
"github.com/OffchainLabs/prysm/v7/crypto/bls"
"github.com/OffchainLabs/prysm/v7/crypto/bls/common"
eth "github.com/OffchainLabs/prysm/v7/proto/prysm/v1alpha1"
"github.com/OffchainLabs/prysm/v7/testing/require"
testutil "github.com/OffchainLabs/prysm/v7/testing/util"
"github.com/OffchainLabs/prysm/v7/time/slots"
)
func TestPayloadAttestationVerifyCurrentSlot(t *testing.T) {
params.SetupTestConfigCleanup(t)
now := time.Unix(1000, 0)
genesis := now.Add(-time.Duration(params.BeaconConfig().SecondsPerSlot) * time.Second)
clock := startup.NewClock(genesis, [32]byte{}, startup.WithNower(func() time.Time { return now }))
ini := &Initializer{shared: &sharedResources{clock: clock}}
msg := newPayloadAttestationMessage(primitives.Slot(1), 0, bytes.Repeat([]byte{0x11}, 32))
pa, err := payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v := ini.NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.NoError(t, v.VerifyCurrentSlot())
msg = newPayloadAttestationMessage(primitives.Slot(2), 0, bytes.Repeat([]byte{0x11}, 32))
pa, err = payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v = ini.NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.ErrorIs(t, v.VerifyCurrentSlot(), ErrIncorrectPayloadAttSlot)
}
func TestPayloadAttestationVerifyBlockRootSeenAndValid(t *testing.T) {
params.SetupTestConfigCleanup(t)
ini := &Initializer{shared: &sharedResources{}}
root := bytes.Repeat([]byte{0x22}, 32)
var root32 [32]byte
copy(root32[:], root)
msg := newPayloadAttestationMessage(primitives.Slot(1), 0, root)
pa, err := payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v := ini.NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.NoError(t, v.VerifyBlockRootSeen(func(r [32]byte) bool { return r == root32 }))
require.ErrorIs(t, v.VerifyBlockRootSeen(func([32]byte) bool { return false }), ErrPayloadAttBlockRootNotSeen)
require.NoError(t, v.VerifyBlockRootValid(func([32]byte) bool { return false }))
require.ErrorIs(t, v.VerifyBlockRootValid(func([32]byte) bool { return true }), ErrPayloadAttBlockRootInvalid)
}
func TestPayloadAttestationVerifyValidatorInPTC(t *testing.T) {
setupPayloadAttTestConfig(t)
_, pk := newKey(t)
st := newTestState(t, []*eth.Validator{activeValidator(pk)}, 1)
msg := newPayloadAttestationMessage(primitives.Slot(1), 0, bytes.Repeat([]byte{0x33}, 32))
pa, err := payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v := (&Initializer{shared: &sharedResources{}}).NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.NoError(t, v.VerifyValidatorInPTC(context.Background(), st))
msg = newPayloadAttestationMessage(primitives.Slot(1), 1, bytes.Repeat([]byte{0x33}, 32))
pa, err = payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v = (&Initializer{shared: &sharedResources{}}).NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.ErrorIs(t, v.VerifyValidatorInPTC(context.Background(), st), ErrIncorrectPayloadAttValidator)
}
func TestPayloadAttestationVerifySignature(t *testing.T) {
setupPayloadAttTestConfig(t)
sk, pk := newKey(t)
st := newTestState(t, []*eth.Validator{activeValidator(pk)}, 1)
root := bytes.Repeat([]byte{0x44}, 32)
data := &eth.PayloadAttestationData{
BeaconBlockRoot: root,
Slot: 1,
PayloadPresent: true,
BlobDataAvailable: true,
}
msg := &eth.PayloadAttestationMessage{
ValidatorIndex: 0,
Data: data,
Signature: signPayloadAttestationMessage(t, st, data, sk),
}
pa, err := payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v := (&Initializer{shared: &sharedResources{}}).NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.NoError(t, v.VerifySignature(st))
sk2, _ := newKey(t)
msg.Signature = signPayloadAttestationMessage(t, st, data, sk2)
pa, err = payloadattestation.NewReadOnly(msg)
require.NoError(t, err)
v = (&Initializer{shared: &sharedResources{}}).NewPayloadAttestationMsgVerifier(pa, GossipPayloadAttestationMessageRequirements)
require.ErrorIs(t, v.VerifySignature(st), signing.ErrSigFailedToVerify)
}
func newPayloadAttestationMessage(slot primitives.Slot, idx primitives.ValidatorIndex, root []byte) *eth.PayloadAttestationMessage {
return &eth.PayloadAttestationMessage{
ValidatorIndex: idx,
Data: &eth.PayloadAttestationData{
BeaconBlockRoot: root,
Slot: slot,
PayloadPresent: true,
BlobDataAvailable: true,
},
Signature: []byte{0x01},
}
}
func newTestState(t *testing.T, vals []*eth.Validator, slot primitives.Slot) state.BeaconState {
st, err := testutil.NewBeaconStateGloas()
require.NoError(t, err)
for _, v := range vals {
require.NoError(t, st.AppendValidator(v))
require.NoError(t, st.AppendBalance(v.EffectiveBalance))
}
require.NoError(t, st.SetSlot(slot))
require.NoError(t, helpers.UpdateCommitteeCache(t.Context(), st, slots.ToEpoch(slot)))
return st
}
func setupPayloadAttTestConfig(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.SlotsPerEpoch = 1
cfg.MaxEffectiveBalanceElectra = cfg.MaxEffectiveBalance
params.OverrideBeaconConfig(cfg)
}
func activeValidator(pub []byte) *eth.Validator {
return &eth.Validator{
PublicKey: pub,
EffectiveBalance: params.BeaconConfig().MaxEffectiveBalance,
WithdrawalCredentials: make([]byte, 32),
ExitEpoch: params.BeaconConfig().FarFutureEpoch,
WithdrawableEpoch: params.BeaconConfig().FarFutureEpoch,
}
}
func newKey(t *testing.T) (common.SecretKey, []byte) {
sk, err := bls.RandKey()
require.NoError(t, err)
return sk, sk.PublicKey().Marshal()
}
func signPayloadAttestationMessage(t *testing.T, st state.ReadOnlyBeaconState, data *eth.PayloadAttestationData, sk common.SecretKey) []byte {
domain, err := signing.Domain(st.Fork(), slots.ToEpoch(st.Slot()), params.BeaconConfig().DomainPTCAttester, st.GenesisValidatorsRoot())
require.NoError(t, err)
root, err := signing.ComputeSigningRoot(data, domain)
require.NoError(t, err)
sig := sk.Sign(root[:])
return sig.Marshal()
}

Some files were not shown because too many files have changed in this diff.