Run mdformat on all markdown files (#4417)

This commit is contained in:
Justin Traglia
2025-06-30 16:27:01 -05:00
committed by GitHub
parent 8b9c2d8c0a
commit e660623d12
63 changed files with 1281 additions and 895 deletions

View File

@@ -177,12 +177,7 @@ PYLINT_CONFIG = $(CURDIR)/pylint.ini
PYLINT_SCOPE := $(foreach S,$(ALL_EXECUTABLE_SPEC_NAMES), $(PYSPEC_DIR)/eth2spec/$S)
MYPY_SCOPE := $(foreach S,$(ALL_EXECUTABLE_SPEC_NAMES), -p eth2spec.$S)
MARKDOWN_FILES := $(shell find $(CURDIR) -name '*.md')
# Check for mistakes.
lint: pyspec

View File

@@ -2,10 +2,16 @@
## Supported Versions
Please see [Releases](https://github.com/ethereum/consensus-specs/releases/). We
recommend using the
[most recently released version](https://github.com/ethereum/consensus-specs/releases/latest).
## Reporting a Vulnerability
**Please do not file a public ticket** mentioning the vulnerability.
To find out how to disclose a vulnerability in the Ethereum Consensus Layer
visit [https://bounty.ethereum.org](https://bounty.ethereum.org) or email
bounty@ethereum.org. Please read the
[disclosure page](https://bounty.ethereum.org) for more information about
publicly disclosed security vulnerabilities.

View File

@@ -1,39 +1,51 @@
# Configurations
This directory contains a set of configurations used for testing, testnets, and
mainnet. A client binary may be compiled for a specific `PRESET_BASE`, and then
load different configurations around that preset to participate in different
networks or tests.
Standard configs:
- [`mainnet.yaml`](./mainnet.yaml): Mainnet configuration
- [`minimal.yaml`](./minimal.yaml): Minimal configuration, used in spec-testing
along with the [`minimal`](../presets/minimal) preset.
Not all network configurations are in scope for the specification; see
[`github.com/eth-clients/eth2-networks`](https://github.com/eth-clients/eth2-networks)
for common networks and additional testnet assets.
## Forking
Variables are not replaced but extended with forks. This is to support syncing
from one state to another over a fork boundary, without hot-swapping a config.
Instead, for forks that introduce changes in a variable, the variable name is
suffixed with the fork name, e.g. `INACTIVITY_PENALTY_QUOTIENT_ALTAIR`.
Future-fork variables can be ignored; e.g. a client that currently only supports
Phase 0 can ignore the Sharding variables.
Over time, the need to sync an older state may be deprecated. In this case, the
suffix on the new variable may be removed, and the old variable will keep a
special name before completely being removed.
A previous iteration of forking made use of "timelines", but this collides with
the definitions used in the spec (variables for special forking slots, etc.),
and was not integrated sufficiently in any of the spec tools or implementations.
Instead, the config essentially doubles as fork definition now, e.g. changing
the value for `ALTAIR_FORK_EPOCH` changes the fork.
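As an illustration, a client might resolve a fork-dependent variable from a
loaded config like this (a minimal sketch; the helper and the fork list are
illustrative, not part of any client):
```python
FORK_ORDER = ["PHASE0", "ALTAIR", "BELLATRIX"]  # illustrative subset

def resolve_config_var(config: dict, name: str, fork: str) -> int:
    # Prefer the most recent fork-suffixed override, falling back to the
    # unsuffixed base value, since forks extend rather than replace variables.
    for past_fork in reversed(FORK_ORDER[: FORK_ORDER.index(fork) + 1]):
        if f"{name}_{past_fork}" in config:
            return config[f"{name}_{past_fork}"]
    return config[name]

config = {
    "INACTIVITY_PENALTY_QUOTIENT": 2**26,
    "INACTIVITY_PENALTY_QUOTIENT_ALTAIR": 3 * 2**24,
}
assert resolve_config_var(config, "INACTIVITY_PENALTY_QUOTIENT", "PHASE0") == 2**26
assert resolve_config_var(config, "INACTIVITY_PENALTY_QUOTIENT", "ALTAIR") == 3 * 2**24
```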
## Format
Each preset and configuration is a key-value mapping.
**Key**: an `UPPER_SNAKE_CASE` (a.k.a. "macro case") formatted string, name of
the variable.
**Value** can be either:
- an unsigned integer, up to 64 bits (inclusive)
- a hexadecimal string, prefixed with `0x`
This format is fully YAML compatible. The presets and configurations may contain
comments to describe the values.
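For instance, a parser for this format could be as small as the following sketch
(real clients parse the files as YAML; the function is illustrative):
```python
def parse_config(text: str) -> dict:
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if value.startswith("0x"):
            config[key] = bytes.fromhex(value[2:])  # hexadecimal string
        else:
            config[key] = int(value)  # unsigned integer, up to 64 bits
            assert 0 <= config[key] < 2**64
    return config

sample = """
# Altair
ALTAIR_FORK_VERSION: 0x01000000
ALTAIR_FORK_EPOCH: 74240
"""
assert parse_config(sample)["ALTAIR_FORK_EPOCH"] == 74240
```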

View File

@@ -25,22 +25,25 @@
### 1. Create a folder under `./specs/_features`
For example, if it's an `EIP-9999` CL spec, you can create a
`./specs/_features/eip9999` folder.
### 2. Choose the "previous fork" to extend: usually, use the scheduled or the latest mainnet fork version.
For example, if the latest fork is Capella, use `./specs/capella` content as
your "previous fork".
### 3. Write down your proposed `beacon-chain.md` change
- Make a copy of the latest fork content and then edit it.
- Tips:
  - The differences between "Constants", "Configurations", and "Presets":
    - Constants: The constant that should never be changed.
    - Configurations: The settings that we may change for different networks.
    - Presets: The settings that we may change for testing.
  - Readability and simplicity are more important than efficiency and
    optimization.
  - Use simple Python rather than the fancy Python dark magic.
### 4. Add `fork.md`
@@ -48,29 +51,50 @@ You can refer to the previous fork's `fork.md` file.
### 5. Make it executable
- Update Pyspec
  [`constants.py`](https://github.com/ethereum/consensus-specs/blob/dev/tests/core/pyspec/eth2spec/test/helpers/constants.py)
  with the new feature name.
- Update helpers for
  [`setup.py`](https://github.com/ethereum/consensus-specs/blob/dev/setup.py)
  for building the spec:
  - Update
    [`pysetup/constants.py`](https://github.com/ethereum/consensus-specs/blob/dev/pysetup/constants.py)
    with the new feature name as defined in the Pyspec `constants.py`.
  - Update
    [`pysetup/spec_builders/__init__.py`](https://github.com/ethereum/consensus-specs/blob/dev/pysetup/spec_builders/__init__.py).
    Implement a new `<FEATURE_NAME>SpecBuilder` in
    `pysetup/spec_builders/<FEATURE_NAME>.py` with the new feature name, e.g.,
    `EIP9999SpecBuilder`. Append it to the `spec_builders` list.
  - Update
    [`pysetup/md_doc_paths.py`](https://github.com/ethereum/consensus-specs/blob/dev/pysetup/md_doc_paths.py):
    add the path of the new markdown files in the `get_md_doc_paths` function if
    needed.
- Update the `PREVIOUS_FORK_OF` setting in both
  [`test/helpers/constants.py`](https://github.com/ethereum/consensus-specs/blob/dev/tests/core/pyspec/eth2spec/test/helpers/constants.py)
  and
  [`pysetup/md_doc_paths.py`](https://github.com/ethereum/consensus-specs/blob/dev/pysetup/md_doc_paths.py).
  - NOTE: since these two modules (the pyspec itself and the spec builder tool)
    must be separate, the fork sequence setting has to be defined again.
## B: Make it executable for pytest and test generator
### 1. [Optional] Add `light-client/*` docs if you updated the content of `BeaconBlock`
- You can refer to the previous fork's `light-client/*` file.
- Add the path of the new markdown files in
[`pysetup/md_doc_paths.py`](https://github.com/ethereum/consensus-specs/blob/dev/pysetup/md_doc_paths.py)'s
`get_md_doc_paths` function.
### 2. Add the mainnet and minimal presets and update the configs
- Add presets: `presets/mainnet/<new-feature-name>.yaml` and
`presets/minimal/<new-feature-name>.yaml`
- Update configs: `configs/mainnet.yaml` and `configs/minimal.yaml`
### 3. Update [`context.py`](https://github.com/ethereum/consensus-specs/blob/dev/tests/core/pyspec/eth2spec/test/context.py)
- [Optional] Add `with_<new-feature-name>_and_later` decorator for writing
pytest cases. e.g., `with_capella_and_later`.
### 4. Update [`constants.py`](https://github.com/ethereum/consensus-specs/blob/dev/tests/core/pyspec/eth2spec/test/helpers/constants.py)
@@ -80,23 +104,29 @@ You can refer to the previous fork's `fork.md` file.
We use `create_genesis_state` to create the default `state` in tests.
- If the given feature changes `BeaconState` fields, you have to set the initial
values by adding:
```python
def create_genesis_state(spec, validator_balances, activation_threshold):
    ...
    if is_post_eip9999(spec):
        state.new_field = value
    return state
```
- If the given feature changes `ExecutionPayload` fields, you have to set the
  initial values by updating the `get_sample_genesis_execution_payload_header`
  helper, e.g. as sketched below.
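A hypothetical sketch of that change, mirroring the pattern above (the feature
check and field name are illustrative, not actual spec fields):
```python
def get_sample_genesis_execution_payload_header(spec, eth1_block_hash=None):
    ...
    if is_post_eip9999(spec):
        payload_header.new_payload_field = value
    return payload_header
```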
### 6. Update CI configurations
- Update
[GitHub Actions config](https://github.com/ethereum/consensus-specs/blob/dev/.github/workflows/run-tests.yml)
- Update the `pyspec-tests.strategy.matrix.version` list by adding the new
  feature to it
## Others

View File

@@ -1,60 +0,0 @@
# `beacon-chain.md` Template
# <FORK_NAME> -- The Beacon Chain
<!-- mdformat-toc start --slug=github --no-anchors --maxlevel=6 --minlevel=2 -->
<!-- mdformat-toc end -->
## Introduction
## Notation
## Custom types
## Constants
### [CATEGORY OF CONSTANTS]
| Name | Value |
| - | - |
| `<CONSTANT_NAME>` | `<VALUE>` |
## Preset
### [CATEGORY OF PRESETS]
| Name | Value |
| - | - |
| `<PRESET_FIELD_NAME>` | `<VALUE>` |
## Configuration
### [CATEGORY OF CONFIGURATIONS]
| Name | Value |
| - | - |
| `<CONFIGURATION_FIELD_NAME>` | `<VALUE>` |
## Containers
### [CATEGORY OF CONTAINERS]
#### `CONTAINER_NAME`
```python
class CONTAINER_NAME(Container):
    FIELD_NAME: SSZ_TYPE
```
## Helper functions
### [CATEGORY OF HELPERS]
```python
<PYTHON HELPER FUNCTION>
```
### Epoch processing
### Block processing

View File

@@ -10,10 +10,9 @@
## Introduction
Under honest majority and certain network synchronicity assumptions there exists
a block that is safe from re-orgs. Normally this block is pretty close to the
head of the canonical chain, which makes it valuable to expose a safe block to
users.
This section describes an algorithm to find a safe block.
@@ -24,8 +23,9 @@ def get_safe_beacon_block_root(store: Store) -> Root:
    # Use most recent justified block as a stopgap
    return store.justified_checkpoint.root
```
*Note*: Currently the safe block algorithm simply returns
`store.justified_checkpoint.root` and is meant to be improved in the future.
## `get_safe_execution_block_hash`
@@ -41,4 +41,5 @@ def get_safe_execution_block_hash(store: Store) -> Hash32:
    return Hash32()
```
*Note*: This helper uses the beacon block container extended in
[Bellatrix](../specs/bellatrix/beacon-chain.md).

View File

@@ -1,25 +1,30 @@
# Presets
Presets are more extensive than runtime configurations, and generally only
applicable during compile-time. Each preset is defined as a directory, with YAML
files per fork.
Configurations can extend a preset by setting the `PRESET_BASE` variable. An
implementation may choose to only support 1 preset per build-target and should
validate the `PRESET_BASE` variable in the config matches the running build.
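For example, a client might enforce this at startup (a minimal sketch; the
constant and function names are ours, not a real client API):
```python
COMPILED_PRESET_BASE = "mainnet"  # fixed at build time

def validate_preset_base(config: dict) -> None:
    # Refuse to run if the loaded config targets a different preset than the
    # one this binary was compiled for.
    if config["PRESET_BASE"] != COMPILED_PRESET_BASE:
        raise ValueError(
            f"config expects preset {config['PRESET_BASE']!r}, "
            f"but this build supports {COMPILED_PRESET_BASE!r}"
        )

validate_preset_base({"PRESET_BASE": "mainnet"})  # passes
```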
Standard presets:
- [`mainnet/`](./mainnet): Used in mainnet, mainnet-like testnets (e.g. Hoodi),
and spec-testing
- [`minimal/`](./minimal): Used in low-resource local dev testnets, and
spec-testing
Client implementers may opt to support additional presets, e.g. for extra large
beacon states for benchmarking. See [`/configs/`](../configs) for run-time
configuration, e.g. to configure a new testnet.
## Forking
Like the [config forking](../configs/README.md#forking), the preset extends with
every fork, instead of overwriting previous values. An implementation can ignore
preset files as a whole for future forks, and can thus implement stricter
compile-time warnings on unrecognized or missing variables in current forks.
## Format

View File

@@ -2,14 +2,22 @@
## History
This is a rewrite of the
[Vyper Eth 2.0 deposit contract](https://github.com/ethereum/eth2.0-specs/blob/v0.12.2/deposit_contract/contracts/validator_registration.vy)
to Solidity.
The original motivation was to run the SMTChecker and the new Yul IR generator
option (`--ir`) in the compiler.
As of June 2020, version `r1` of the Solidity deposit contract has been verified
and is considered for adoption. See this
[blog post](https://blog.ethereum.org/2020/06/23/eth2-quick-update-no-12/) for
more information.
In August 2020, version `r2` was released with metadata modifications and
relicensed to CC0-1.0. Afterward, this contract was ported back from
[`axic/eth2-deposit-contract`](https://github.com/axic/eth2-deposit-contract) to
this repository and replaced the Vyper deposit contract.
## Compiling solidity deposit contract
@@ -19,12 +27,13 @@ In this directory run:
make compile_deposit_contract
```
The following parameters were used to generate the bytecode for the
`DepositContract` available in this repository:
- Contract Name: `DepositContract`
- Compiler Version: Solidity `v0.6.11+commit.5ef660b1`
- Optimization Enabled: `Yes` with `5000000` runs
- Metadata Options: `--metadata-literal` (to verify metadata hash)
```sh
solc --optimize --optimize-runs 5000000 --metadata-literal --bin deposit_contract.sol
@@ -32,12 +41,15 @@ solc --optimize --optimize-runs 5000000 --metadata-literal --bin deposit_contrac
## Running web3 tests
1. In this directory run `make install_deposit_contract_web3_tester` to install
the tools needed (make sure to have Python 3 and pip installed).
2. In this directory run `make test_deposit_contract_web3_tests` to execute the
tests.
## Running randomized `dapp` tests:
Install the latest version of `dapp` by following the instructions at
[dapp.tools](https://dapp.tools/). Then in the `eth2.0-specs` directory run:
```sh
make test_deposit_contract

View File

@@ -32,22 +32,22 @@
## Introduction
In order to provide a syncing execution engine with a partial view of the head
of the chain, it may be desirable for a consensus engine to import beacon blocks
without verifying the execution payloads. This partial sync is called an
*optimistic sync*.
Optimistic sync is designed to be opt-in and backwards compatible (i.e.,
non-optimistic nodes can tolerate optimistic nodes on the network and vice
versa). Optimistic sync is not a fundamental requirement for consensus nodes.
Rather, it's a stop-gap measure to allow execution nodes to sync via established
methods until future Ethereum roadmap items are implemented (e.g.,
statelessness).
## Constants
| Name | Value | Unit |
| ------------------------------------- | ----- | ----- |
| `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` | `128` | slots |
*Note: the `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` must be user-configurable. See
[Fork Choice Poisoning](#fork-choice-poisoning).*
@@ -58,23 +58,24 @@ For brevity, we define two aliases for values of the `status` field on
`PayloadStatusV1`:
- Alias `NOT_VALIDATED` to:
  - `SYNCING`
  - `ACCEPTED`
- Alias `INVALIDATED` to:
  - `INVALID`
  - `INVALID_BLOCK_HASH`
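Expressed as a lookup table (a sketch; the keys are `status` strings of
`PayloadStatusV1`):
```python
STATUS_ALIAS = {
    "SYNCING": "NOT_VALIDATED",
    "ACCEPTED": "NOT_VALIDATED",
    "INVALID": "INVALIDATED",
    "INVALID_BLOCK_HASH": "INVALIDATED",
}
assert STATUS_ALIAS["ACCEPTED"] == "NOT_VALIDATED"
```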
Let `head: BeaconBlock` be the result of calling the fork choice algorithm at
the time of block production. Let `head_block_root: Root` be the root of that
block.
Let `blocks: Dict[Root, BeaconBlock]` and
`block_states: Dict[Root, BeaconState]` be the blocks (and accompanying states)
that have been verified either completely or optimistically.
Let `optimistic_roots: Set[Root]` be the set of `hash_tree_root(block)` for all
optimistically imported blocks which have only received a `NOT_VALIDATED`
designation from an execution engine (i.e., they are not known to be
`INVALIDATED` or `VALID`).
Let `current_slot: Slot` be `(time - genesis_time) // SECONDS_PER_SLOT` where
`time` is the UNIX time according to the local system clock.
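A direct transcription of this definition (assuming the mainnet value
`SECONDS_PER_SLOT = 12`):
```python
import time as unix_time

SECONDS_PER_SLOT = 12  # mainnet configuration value

def get_current_slot(genesis_time: int) -> int:
    time = int(unix_time.time())  # UNIX time from the local system clock
    return (time - genesis_time) // SECONDS_PER_SLOT
```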
@@ -108,7 +109,9 @@ def is_execution_block(block: BeaconBlock) -> bool:
```
```python
def is_optimistic_candidate_block(
    opt_store: OptimisticStore, current_slot: Slot, block: BeaconBlock
) -> bool:
    if is_execution_block(opt_store.blocks[block.parent_root]):
        return True
@@ -118,16 +121,19 @@ def is_optimistic_candidate_block(opt_store: OptimisticStore, current_slot: Slot
return False
```
Let a node be an *optimistic node* if its fork choice is in one of the following
states:
1. `is_optimistic(opt_store, head) is True`
2. Blocks from every viable (with respect to FFG) branch have transitioned from
`NOT_VALIDATED` to `INVALIDATED` leaving the block tree without viable
branches
Let only a validator on an optimistic node be an *optimistic validator*.
When this specification only defines behaviour for an optimistic node/validator,
but *not* for the non-optimistic case, assume default behaviours without regard
for optimistic sync.
## Mechanisms
@@ -155,63 +161,76 @@ these conditions.*
To optimistically import a block:
- The
[`verify_and_notify_new_payload`](../specs/bellatrix/beacon-chain.md#verify_and_notify_new_payload)
function MUST return `True` if the execution engine returns `NOT_VALIDATED` or
`VALID`. An `INVALIDATED` response MUST return `False`.
- The
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block)
function MUST NOT raise an assertion if both the `pow_block` and `pow_parent`
are unknown to the execution engine.
- All other assertions in
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block)
(e.g., `TERMINAL_BLOCK_HASH`) MUST prevent an optimistic import.
- The parent of the block MUST NOT have an `INVALIDATED` execution payload.
In addition to this change in validation, the consensus engine MUST track which
blocks returned `NOT_VALIDATED` and which returned `VALID` for subsequent
processing.
Optimistically imported blocks MUST pass all verifications included in
`process_block` (notwithstanding the modifications to
`verify_and_notify_new_payload`).
A consensus engine MUST be able to retrospectively (i.e., after import) modify
the status of `NOT_VALIDATED` blocks to be either `VALID` or `INVALIDATED` based
upon responses from an execution engine. I.e., perform the following
transitions:
- `NOT_VALIDATED` -> `VALID`
- `NOT_VALIDATED` -> `INVALIDATED`
When a block transitions from `NOT_VALIDATED` -> `VALID`, all *ancestors* of the
block MUST also transition from `NOT_VALIDATED` -> `VALID`. Such a block and any
previously `NOT_VALIDATED` ancestors are no longer considered "optimistically
imported".
When a block transitions from `NOT_VALIDATED` -> `INVALIDATED`, all
*descendants* of the block MUST also transition from `NOT_VALIDATED` ->
`INVALIDATED`.
When a block transitions from the `NOT_VALIDATED` state, it is removed from the
set of `opt_store.optimistic_roots`.
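A toy sketch of these transitions (plain strings stand in for roots and a bare
parent map for the block tree; the real `OptimisticStore` uses SSZ types):
```python
from typing import Dict, Optional, Set

class ToyStore:
    def __init__(self) -> None:
        self.parent: Dict[str, str] = {}         # block root -> parent root
        self.optimistic_roots: Set[str] = set()  # NOT_VALIDATED blocks

def mark_valid(store: ToyStore, root: Optional[str]) -> None:
    # NOT_VALIDATED -> VALID propagates to all ancestors; every block removed
    # here is no longer considered "optimistically imported".
    while root in store.optimistic_roots:
        store.optimistic_roots.discard(root)
        root = store.parent.get(root)

def mark_invalidated(store: ToyStore, root: str) -> None:
    # NOT_VALIDATED -> INVALIDATED propagates to all descendants: any block
    # whose parent chain reaches `root` is invalidated and leaves the set too.
    def descends_from(r: Optional[str]) -> bool:
        while r is not None:
            if r == root:
                return True
            r = store.parent.get(r)
        return False

    store.optimistic_roots -= {r for r in store.optimistic_roots if descends_from(r)}
```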
When a "merge block" (i.e. the first block which enables execution in a chain) is declared to be
`VALID` by an execution engine (either directly or indirectly), the full
When a "merge block" (i.e. the first block which enables execution in a chain)
is declared to be `VALID` by an execution engine (either directly or
indirectly), the full
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block)
MUST be run against the merge block. If the block fails
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block),
the merge block MUST be treated the same as an `INVALIDATED` block (i.e., it and
all its descendants are invalidated and removed from the block tree).
### How to apply `latestValidHash` when payload status is `INVALID`
Processing an `INVALID` payload status depends on the `latestValidHash`
parameter. The general approach is as follows:
1. Consensus engine MUST identify `invalidBlock` as per definition in the table
below.
2. `invalidBlock` and all of its descendants MUST be transitioned from
`NOT_VALIDATED` to `INVALIDATED`.
| `latestValidHash` | `invalidBlock` |
| :---------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------- |
| Execution block hash | The *child* of a block with `body.execution_payload.block_hash == latestValidHash` in the chain containing the block with payload in question |
| `0x00..00` (all zeroes) | The first block with `body.execution_payload != ExecutionPayload()` in the chain containing a block with payload in question |
| `null` | Block with payload in question |
When `latestValidHash` is a meaningful execution block hash but consensus engine
cannot find a block satisfying
`body.execution_payload.block_hash == latestValidHash`, consensus engine SHOULD
behave the same as if `latestValidHash` was `null`.
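The table can be read as the following lookup (a sketch with toy stand-ins for
the pyspec containers; `chain` is the branch containing the payload in question,
ordered oldest first):
```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExecutionPayload:
    block_hash: bytes = b"\x00" * 32

@dataclass
class BeaconBlockBody:
    execution_payload: ExecutionPayload = field(default_factory=ExecutionPayload)

@dataclass
class BeaconBlock:
    body: BeaconBlockBody = field(default_factory=BeaconBlockBody)

def find_invalid_block(
    chain: List[BeaconBlock],
    block_in_question: BeaconBlock,
    latest_valid_hash: Optional[bytes],
) -> BeaconBlock:
    if latest_valid_hash is None:  # `null`
        return block_in_question
    if latest_valid_hash == b"\x00" * 32:  # all zeroes
        # First block that actually carries an execution payload.
        return next(b for b in chain if b.body.execution_payload != ExecutionPayload())
    for parent, child in zip(chain, chain[1:]):
        # The *child* of the block whose payload hash matches latestValidHash.
        if parent.body.execution_payload.block_hash == latest_valid_hash:
            return child
    # No block satisfies the hash: behave as if latestValidHash was `null`.
    return block_in_question
```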
### Execution Engine Errors
@@ -224,12 +243,12 @@ validity request for some block, a consensus engine:
### Assumptions about Execution Engine Behaviour
This specification assumes execution engines will only return `NOT_VALIDATED`
when there is insufficient information available to make a `VALID` or
`INVALIDATED` determination on the given `ExecutionPayload` (e.g., the parent
payload is unknown). Specifically, `NOT_VALIDATED` responses should be
fork-specific, in that the search for a block on one chain MUST NOT trigger a
`NOT_VALIDATED` response for another chain.
### Re-Orgs
@@ -237,34 +256,33 @@ The consensus engine MUST support any chain reorganisation which does *not*
affect the justified checkpoint.
If the justified checkpoint transitions from `NOT_VALIDATED` -> `INVALIDATED`, a
consensus engine MAY choose to alert the user and force the application to exit.
## Fork Choice
Consensus engines MUST support removing blocks from fork choice that transition
from `NOT_VALIDATED` to `INVALIDATED`. Specifically, a block deemed
`INVALIDATED` at any point MUST NOT be included in the canonical chain and the
weights from those `INVALIDATED` blocks MUST NOT be applied to any `VALID` or
`NOT_VALIDATED` ancestors.
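One way to realize this is to prune `INVALIDATED` subtrees before weights are
accumulated (a sketch; the `children` map and `invalidated` set are hypothetical
bookkeeping, not spec containers):
```python
from typing import Dict, List, Set

def viable_children(
    children: Dict[str, List[str]], invalidated: Set[str], root: str
) -> List[str]:
    # Dropping an INVALIDATED child here drops its whole subtree, so no weight
    # from INVALIDATED blocks can reach a VALID or NOT_VALIDATED ancestor.
    return [child for child in children.get(root, []) if child not in invalidated]
```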
### Fork Choice Poisoning
During the merge transition it is possible for an attacker to craft a
`BeaconBlock` with an execution payload that references an eternally-unavailable
`body.execution_payload.parent_hash` (i.e., the parent hash is random bytes). In
rare circumstances, it is possible that an attacker can build atop such a block
to trigger justification. If an optimistic node imports this malicious chain,
that node will have a "poisoned" fork choice store, such that the node is unable
to produce a block that descends from the head (due to the invalid chain of
payloads) and the node is unable to produce a block that forks around the head
(due to the justification of the malicious chain).
If an honest chain exists which justifies a higher epoch than the malicious
chain, that chain will take precedence and revive any poisoned store. Such a
chain, if imported before the malicious chain, will prevent the store from being
poisoned. Therefore, the poisoning attack is temporary if >= 2/3rds of the
network is honest and non-faulty.
The `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` parameter assumes that the network
will justify an honest chain within some number of slots. With this assumption,
@@ -279,13 +297,13 @@ clients MUST provide the following command line flag to assist with manual
disaster recovery:
- `--safe-slots-to-import-optimistically`: modifies the
  `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY`.
## Checkpoint Sync (Weak Subjectivity Sync)
A consensus engine MAY assume that the `ExecutionPayload` of a block used as an
anchor for checkpoint sync is `VALID` without necessarily providing that payload
to an execution engine.
## Validator assignments
@@ -301,46 +319,50 @@ An optimistic validator MUST NOT produce a block (i.e., sign across the
### Attesting
An optimistic validator MUST NOT participate in attestation (i.e., sign across
the `DOMAIN_BEACON_ATTESTER`, `DOMAIN_SELECTION_PROOF` or
`DOMAIN_AGGREGATE_AND_PROOF` domains).
### Participating in Sync Committees
An optimistic validator MUST NOT participate in sync committees (i.e., sign
across the `DOMAIN_SYNC_COMMITTEE`, `DOMAIN_SYNC_COMMITTEE_SELECTION_PROOF` or
`DOMAIN_CONTRIBUTION_AND_PROOF` domains).
## Ethereum Beacon APIs
Consensus engines which provide an implementation of the
[Ethereum Beacon APIs](https://github.com/ethereum/beacon-APIs) must take care
to ensure the `execution_optimistic` value is set to `True` whenever the request
references optimistic blocks (and vice-versa).
## Design Decision Rationale
### Why sync optimistically?
Most execution engines use state sync as a default sync mechanism on Ethereum
Mainnet because executing blocks from genesis takes several weeks on commodity
hardware.
State sync requires the knowledge of the current head of the chain to converge
eventually. If not constantly fed with the most recent head, state sync won't be
able to complete because the recent state soon becomes unavailable due to state
trie pruning.
Optimistic block import (i.e. import when the execution engine *cannot*
currently validate the payload) breaks a deadlock between the execution layer
sync process and importing beacon blocks while the execution engine is syncing.
Optimistic sync is also an optimal strategy for execution engines using block
execution as a default sync mechanism (e.g. Erigon). Alternatively, a consensus
engine may inform the execution engine with a payload obtained from a checkpoint
block, then wait until the execution layer catches up with it and proceed in
lock step after that. This alternative approach would keep the user in limbo for
several hours and would increase the time of the sync process, as batch sync has
more opportunities for optimisation than the lock step.
The aforementioned premises make optimistic sync a *generalized* solution for
interaction between consensus and execution engines during the sync process.
### Why `SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY`?
@@ -355,8 +377,8 @@ cases, produce a junk block that out-competes all locally produced blocks for
the head. This prevents a node from producing a chain of blocks, therefore
breaking liveness.
Thankfully, if 2/3rds of validators are not poisoned, they can justify an honest
chain which will un-poison all other nodes.
Notably, this attack only exists for optimistic nodes. Nodes which fully verify
the transition block will reject a block with a junk parent hash. Therefore,
@@ -381,17 +403,16 @@ something along the lines of: *"if the transition block is sufficiently old
enough, then we can just assume that block is honest or there exists an honest
justified chain to out-compete it."*
Note the use of "feasibly" in the previous paragraph. One can imagine mechanisms
to check that a block is justified before importing it. For example, just keep
processing blocks without adding them to fork choice. However, there are still
edge-cases here (e.g., when to halt and declare there was no justification?) and
how to mitigate implementation complexity. At this point, it's important to
reflect on the attack and how likely it is to happen. It requires some rather
contrived circumstances and it seems very unlikely to occur. Therefore, we need
to consider if adding complexity to avoid an unlikely attack increases or
decreases our total risk. Presently, it appears that
`SAFE_SLOTS_TO_IMPORT_OPTIMISTICALLY` sits in a sweet spot for this trade-off.
### Transitioning from VALID -> INVALIDATED or INVALIDATED -> VALID
@@ -414,12 +435,13 @@ optimistic sync altogether.
### What if `TERMINAL_BLOCK_HASH` is used?
If the terminal block hash override is used (i.e.,
`TERMINAL_BLOCK_HASH != Hash32()`), the
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block)
function will deterministically return `True` or `False`. Whilst it's not
*technically* required to retrospectively call
[`validate_merge_block`](../specs/bellatrix/fork-choice.md#validate_merge_block)
on a transition block that matches `TERMINAL_BLOCK_HASH` after an optimistic
sync, doing so will have no effect. For simplicity, the optimistic sync
specification does not define edge-case behaviour for when `TERMINAL_BLOCK_HASH`
is used.

View File

@@ -7,24 +7,31 @@
Use an OS that has Python 3.8 or above. For example, Debian 11 (bullseye).
1. Install the packages you need:
   ```sh
   sudo apt install -y make git wget python3-venv gcc python3-dev
   ```
2. Download the latest
   [consensus specs](https://github.com/ethereum/consensus-specs)
   ```sh
   git clone https://github.com/ethereum/consensus-specs.git
   cd consensus-specs
   ```
3. Create the specifications and tests:
   ```sh
   make
   ```
To read more about creating the environment, [see here](core/pyspec/README.md).
### Running your first test
Use `make` to run the `test_empty_block_transition` tests against the Altair
fork like so:
```
$ make test k=test_empty_block_transition fork=altair
@@ -43,75 +50,79 @@ s..
## The "Hello, World" of Consensus Spec Tests
One of the `test_empty_block_transition` tests is implemented by a function with
the same name located in
[`~/consensus-specs/tests/core/pyspec/eth2spec/test/phase0/sanity/test_blocks.py`](https://github.com/ethereum/consensus-specs/blob/dev/tests/core/pyspec/eth2spec/test/phase0/sanity/test_blocks.py).
To learn how consensus spec tests are written, let's go over the code:
```
@with_all_phases
```
This [decorator](https://book.pythontips.com/en/latest/decorators.html)
specifies that this test is applicable to all the phases of consensus layer
development. These phases are similar to forks (Istanbul, Berlin, London, etc.)
in the execution blockchain.
```
@spec_state_test
```
This decorator specifies that this test is a state transition test, and that it
does not include a transition between different forks.
```
def test_empty_block_transition(spec, state):
```
This type of test receives two parameters:
- `specs`: The protocol specifications
- `state`: The genesis state before the test
```python
pre_slot = state.slot
```
A slot is a unit of time (every 12 seconds in mainnet), for which a specific
validator (selected randomly but in a deterministic manner) is a proposer. The
proposer can propose a block during that slot.
```python
pre_eth1_votes = len(state.eth1_data_votes)
pre_mix = spec.get_randao_mix(state, spec.get_current_epoch(state))
```
Store some values to check later that certain updates happened.
```python
yield "pre", state
```
In Python `yield` is used by
[generators](https://wiki.python.org/moin/Generators). However, for our purposes
we can treat it as a partial return statement that doesn't stop the function's
processing, only adds to a list of return values. Here we add two values, the
string `'pre'` and the initial state, to the list of return values.
[You can read more about test generators and how they are used here](generators).
```python
block = build_empty_block_for_next_slot(spec, state)
```
The state contains the last block, which is necessary for building up the next
block (every block needs to have the root of the previous one in a blockchain).
```python
signed_block = state_transition_and_sign_block(spec, state, block)
```
Create a block signed by the appropriate proposer and advance the state.
```python
yield "blocks", [signed_block]
yield "post", state
```
More `yield` statements. The output of a consensus test is:
@@ -124,32 +135,35 @@ More `yield` statements. The output of a consensus test is:
6. The state after the test
```python
# One vote for the eth1
assert len(state.eth1_data_votes) == pre_eth1_votes + 1
# Check that the new parent root is correct
assert spec.get_block_root_at_slot(state, pre_slot) == signed_block.message.parent_root
# Random data changed
assert spec.get_randao_mix(state, spec.get_current_epoch(state)) != pre_mix
```
Finally we have assertions that test the transition was legitimate. In this
case we have three assertions:
1. One item was added to `eth1_data_votes`
2. The new block's `parent_root` is the same as the block in the previous
location
3. The random data that every block includes was changed.
## New Tests
The easiest way to write a new test is to copy and modify an existing one. For
example, let's write a test where the first slot of the beacon chain is empty
(because the assigned proposer is offline, for example), and then there's an
empty block in the second slot.
We already know how to accomplish most of what we need for this test, but the
only way we know to advance the state is `state_transition_and_sign_block`, a
function that also puts a block into the slot. So let's see if the function's
definition tells us how to advance the state without a block.
First, we need to find out where the function is located. Run:
@@ -158,7 +172,8 @@ find . -name '*.py' -exec grep 'def state_transition_and_sign_block' {} \; -prin
```
And you'll find that the function is defined in
`eth2spec/test/helpers/state.py`. Looking in that file, we see that the second
function is:
```python
def next_slot(spec, state):
@@ -168,30 +183,25 @@ def next_slot(spec, state):
    spec.process_slots(state, state.slot + 1)
```
This looks like exactly what we need. So we add this call before we create the
empty block:
```python
.
.
.
yield "pre", state
next_slot(spec, state)
block = build_empty_block_for_next_slot(spec, state)
```
That's it. Our new test works (copy `test_empty_block_transition`, rename it,
add the `next_slot` call, and then run it to verify this).
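Putting the pieces together, the new test might look like this (the test name
is ours; the decorators and helpers are the ones used by
`test_empty_block_transition`):
```python
@with_all_phases
@spec_state_test
def test_empty_block_transition_after_empty_slot(spec, state):
    yield "pre", state

    # Leave the first slot empty, then put an empty block in the next one.
    next_slot(spec, state)
    block = build_empty_block_for_next_slot(spec, state)
    signed_block = state_transition_and_sign_block(spec, state, block)

    yield "blocks", [signed_block]
    yield "post", state
```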
## Tests Designed to Fail
It is important to make sure that the system rejects invalid input, so our next
step is to deal with cases where the protocol is supposed to reject something.
To see such a test, look at `test_prev_slot_block_transition` (in the same file
we used previously,
`~/consensus-specs/tests/core/pyspec/eth2spec/test/phase0/sanity/test_blocks.py`).
```python
@with_all_phases
@@ -204,71 +214,80 @@ def test_prev_slot_block_transition(spec, state):
Build an empty block for the current slot.
```python
proposer_index = spec.get_beacon_proposer_index(state)
```
Get the identity of the current proposer, the one for *this* slot.
```python
spec.process_slots(state, state.slot + 1)
```
Transition to the new slot, which naturally has a different proposer.
```python
yield "pre", state
expect_assertion_error(lambda: transition_unsigned_block(spec, state, block))
```
Specify that the function `transition_unsigned_block` will cause an assertion
error. You can see this function in
`~/consensus-specs/tests/core/pyspec/eth2spec/test/helpers/block.py`, and one of
the tests is that the block must be for this slot:
> ```python
> assert state.slot == block.slot
> ```
Because we use
[lambda notation](https://www.w3schools.com/python/python_lambda.asp), the test
does not call `transition_unsigned_block` here. Instead, this is a function
parameter that can be called later.
```python
block.state_root = state.hash_tree_root()
```
Set the block's state root to the current state hash tree root, which identifies
this block as belonging to this slot (even though it was created for the
previous slot).
```python
signed_block = sign_block(spec, state, block, proposer_index=proposer_index)
```
Notice that `proposer_index` is the variable we set earlier, *before* we
advanced the slot with `spec.process_slots(state, state.slot + 1)`. It is not
the proposer for the current state.
```python
yield "blocks", [signed_block]
yield "post", None # No post state, signifying it errors out
```
This is the way we specify that a test is designed to fail: failed tests have
no post state, because the processing mechanism errors out before creating it.
## Attestation Tests
The consensus layer doesn't provide any direct functionality to end users. It
does not execute EVM programs or store user data. It exists to provide a secure
source of information about the latest verified block hash of the execution
layer.
For every slot a validator is randomly selected as the proposer. The proposer
proposes a block for the current head of the consensus layer chain (built on the
previous block). That block includes the block hash of the proposed new head of
the execution layer.
For every slot there is also a randomly selected committee of validators that
needs to vote whether the new consensus layer block is valid, which requires the
proposed head of the execution chain to also be a valid block. These votes are
called
[attestations](https://notes.ethereum.org/@hww/aggregation#112-Attestation), and
they are sent as independent messages. The proposer for a block is able to
include attestations from previous slots, which is how they get on chain to form
consensus, reward honest validators, etc.
[You can see a simple successful attestation test here](https://github.com/ethereum/consensus-specs/blob/926e5a3d722df973b9a12f12c015783de35cafa9/tests/core/pyspec/eth2spec/test/phase0/block_processing/test_process_attestation.py#L26-L30):
Let's go over it line by line.
```python
@with_all_phases
@spec_state_test
def test_success(spec, state):
    attestation = get_valid_attestation(spec, state, signed=True)
```
[This function](https://github.com/ethereum/consensus-specs/blob/30fe7ba1107d976100eb0c3252ca7637b791e43a/tests/core/pyspec/eth2spec/test/helpers/attestations.py#L88-L120)
creates a valid attestation (which can then be modified to make it invalid if
needed). To see an attestation "from the inside" we need to follow it.
```python
def get_valid_attestation(
spec, state, slot=None, index=None, filter_participant_set=None, signed=False
): ...
```
Only two parameters, `spec` and `state` are required. However, there are four
other parameters that can affect the attestation created by this function.
```python
# If filter_participant_set filters everything, the attestation has 0 participants, and cannot be signed.
# Thus strictly speaking invalid when no participant is added later.
if slot is None:
slot = state.slot
if index is None:
index = 0
```
Default values. Normally we want to choose the current slot, and out of the
proposers and committees that it can have, we want the first one.
```python
attestation_data = build_attestation_data(spec, state, slot=slot, index=index)
```
Build the actual attestation. You can see this function
[here](https://github.com/ethereum/consensus-specs/blob/30fe7ba1107d976100eb0c3252ca7637b791e43a/tests/core/pyspec/eth2spec/test/helpers/attestations.py#L53-L85)
to see the exact data in an attestation.
```python
beacon_committee = spec.get_beacon_committee(
state,
attestation_data.slot,
attestation_data.index,
)
```
This is the committee that is supposed to approve or reject the proposed block.
```python
committee_size = len(beacon_committee)
aggregation_bits = Bitlist[spec.MAX_VALIDATORS_PER_COMMITTEE](*([0] * committee_size))
```
There's a bit for every committee member to see if it approves or not.
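As an aside, here is how such a bitlist behaves, assuming the `remerkleable` SSZ classes used by the pyspec (a standalone illustration, not part of the helper):

```python
from remerkleable.bitfields import Bitlist

# A committee of 5, within a capacity of 2048 (mainnet MAX_VALIDATORS_PER_COMMITTEE).
bits = Bitlist[2048](*([0] * 5))
bits[2] = True  # mark the third committee member as a participant
assert sum(bits) == 1
```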
```python
attestation = spec.Attestation(
aggregation_bits=aggregation_bits,
data=attestation_data,
)
# fill the attestation with (optionally filtered) participants, and optionally sign it
fill_aggregate_attestation(
spec, state, attestation, signed=signed, filter_participant_set=filter_participant_set
)
return attestation
```
```python
next_slots(spec, state, spec.MIN_ATTESTATION_INCLUSION_DELAY)
```
Attestations have to appear after the block they attest for, so we advance
`spec.MIN_ATTESTATION_INCLUSION_DELAY` slots before creating the block that
includes the attestation. Currently a single block is sufficient, but that may
change in the future.
```python
yield from run_attestation_processing(spec, state, attestation)
```
[This function](https://github.com/ethereum/consensus-specs/blob/30fe7ba1107d976100eb0c3252ca7637b791e43a/tests/core/pyspec/eth2spec/test/helpers/attestations.py#L13-L50)
processes the attestation and returns the result.
### Adding an Attestation Test
Attestations can't happen in the same block as the one about which they are
attesting, or in a block that is after the block is finalized. This is specified
as part of the specs, in the `process_attestation` function (which is created
from the spec by the `make pyspec` command you ran earlier). Here is the
relevant code fragment:
```python
def process_attestation(state: BeaconState, attestation: Attestation) -> None:
    data = attestation.data
    ...
    assert data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot <= data.slot + SLOTS_PER_EPOCH
```
In the last line you can see two conditions being asserted:
1. `data.slot + MIN_ATTESTATION_INCLUSION_DELAY <= state.slot` which verifies
that the attestation doesn't arrive too early.
2. `state.slot <= data.slot + SLOTS_PER_EPOCH` which verifies that the
attestation doesn't arrive too late.
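With the mainnet preset (`MIN_ATTESTATION_INCLUSION_DELAY = 1`, `SLOTS_PER_EPOCH = 32`), an attestation for slot 100 is therefore includable in slots 101 through 132. A quick sanity check of that window:

```python
MIN_ATTESTATION_INCLUSION_DELAY = 1  # mainnet preset
SLOTS_PER_EPOCH = 32  # mainnet preset

data_slot = 100
window = [
    s
    for s in range(200)
    if data_slot + MIN_ATTESTATION_INCLUSION_DELAY <= s <= data_slot + SLOTS_PER_EPOCH
]
assert window[0] == 101 and window[-1] == 132
```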
This is how the consensus layer tests deal with edge cases, by asserting the
conditions required for the values to be legitimate. In the case of these
particular conditions, they are tested
[here](https://github.com/ethereum/consensus-specs/blob/926e5a3d722df973b9a12f12c015783de35cafa9/tests/core/pyspec/eth2spec/test/phase0/block_processing/test_process_attestation.py#L87-L104).
One test checks what happens if the attestation is too early, and another if it
is too late.
However, it is not enough to ensure we reject invalid blocks. It is also
necessary to ensure we accept all valid blocks. You saw earlier a test
(`test_success`) that tested that being `MIN_ATTESTATION_INCLUSION_DELAY` after
the data for which we attest is enough. Now we'll write a similar test that
verifies that being `SLOTS_PER_EPOCH` away is still valid. To do this, we modify
the `test_after_epoch_slots` function. We need two changes:
1. Call `transition_to_slot_via_block` with one less slot to advance
2. Don't tell `run_attestation_processing` to return an empty post state.
```python
@with_all_phases
@spec_state_test
def test_almost_after_epoch_slots(spec, state):
    attestation = get_valid_attestation(spec, state, signed=True)

    # transition to the latest slot at which the attestation is still includable
    transition_to_slot_via_block(spec, state, state.slot + spec.SLOTS_PER_EPOCH)

    yield from run_attestation_processing(spec, state, attestation)
```
Add this function to the file
`consensus-specs/tests/core/pyspec/eth2spec/test/phase0/block_processing/test_process_attestation.py`,
and run the test against the Altair fork:
```sh
make test k=almost_after fork=altair
```
You should see that it ran successfully (although you might get a warning, you
can ignore it).
## How are These Tests Used?
So far we've run tests against the formal specifications. This is a way to check
the specifications are what we expect, but it doesn't actually check the beacon
chain clients. The way these tests get applied by clients is that every few
weeks
[new test specifications are released](https://github.com/ethereum/consensus-spec-tests/releases),
in a format
[documented here](https://github.com/ethereum/consensus-specs/tree/dev/tests/formats).
All the consensus layer clients implement test-runners that consume the test
vectors in this standard format.
______________________________________________________________________
Original version by [Ori Pomerantz](mailto:qbzzt1@gmail.com)
# Executable Python Spec (PySpec)
The executable Python spec is built from the consensus specifications,
complemented with the necessary helper functions for hashing, BLS, and more.
With this executable spec, test-generators can easily create test-vectors for
client implementations, and the spec itself can be verified to be consistent and
coherent through sanity tests implemented with pytest.
## Py-tests
These tests are not intended for client-consumption. These tests are testing the
spec itself, to verify consistency and provide feedback on modifications of the
spec. However, most of the tests can be run in generator-mode, to output test
vectors for client-consumption.
### How to run tests
Run `make coverage` to run all tests and open the html code coverage report.
## Contributing
Contributions are welcome, but consider implementing your idea as part of the
spec itself first. The pyspec is not a replacement.
## License
Same as the spec itself; see [LICENSE](../../../LICENSE) file in the specs
repository root.
# Consensus specs config util
For run-time configuration, see
[Configs documentation](../../../../../configs/README.md).
For compile-time presets, see
[Presets documentation](../../../../../presets/README.md) and the
`build-targets` flag for the `pyspec` distutils command.
## Config usage:
```python
from eth2spec.config import config_util
from eth2spec.phase0 import mainnet as spec
from pathlib import Path
# To load the default configurations
config_util.load_defaults(
Path("consensus-specs/configs")
) # change path to point to equivalent of specs `configs` dir.
# After loading the defaults, a config can be chosen: 'mainnet', 'minimal', or custom network config (by file path)
spec.config = spec.Configuration(**config_util.load_config_file(Path("mytestnet.yaml")))
```
Note: previously the testnet config files included both preset and
runtime-configuration data. The new config loader is compatible with this: all
config vars are loaded from the file, but those that have become presets can be
ignored.
A util for quickly writing new test suite generators.
See [Generators documentation](../../../../generators/README.md) for integration
details.
Options:
This is a util to derive tests from a test source file.
This requires the tests to yield test-case-part outputs. These outputs are then
written to the test case directory. Yielding data is illegal in normal pytests,
so it is only done when in "generator mode". This functionality can be attached
to any function by using the `vector_test()` decorator found in
`ethspec/tests/utils.py`.
## Test-case parts
Test cases consist of parts, which are yielded to the base generator one by one.
The yielding pattern is:
2 value style: `yield <key name> <value>`. The kind of output will be inferred
from the value by the `vector_test()` decorator.
3 value style: `yield <key name> <kind name> <value>`.
Test part output kinds:
- `ssz`: value is expected to be a `bytes`, and the raw data is written to a
`<key name>.ssz_snappy` file.
- `data`: value is expected to be any Python object that can be dumped as YAML.
Output is written to `<key name>.yaml`
- `meta`: these key-value pairs are collected into a dict, and then collectively
  written to a metadata file named `meta.yaml`; if nothing is yielded with the
  `meta` kind, no `meta.yaml` is written.
The `vector_test()` decorator can detect pyspec SSZ types, and output them both
as `data` and `ssz`, for the test consumer to choose.
Note that the yielded outputs are processed before the test continues. It is
safe to yield information that later mutates, as the output will already be
encoded to yaml or ssz bytes. This avoids the need to deep-copy the whole
object.
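To make the pattern concrete, a hypothetical generator-mode test using both
yield styles might look like this (names and values are illustrative only):

```python
def test_example(spec, state):
    # 2-value style: the vector_test() decorator infers the kind; a pyspec SSZ
    # object such as `state` is emitted as both `data` and `ssz`.
    yield "pre", state

    # 3-value style: the kind is given explicitly.
    yield "description", "meta", "an illustrative test case"
    yield "some_setting", "meta", 1
```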
# General test format
This document defines the YAML format and structure used for consensus spec
testing.
<!-- mdformat-toc start --slug=github --no-anchors --maxlevel=6 --minlevel=2 -->
- [About](#about)
- [Test-case formats](#test-case-formats)
- [Glossary](#glossary)
- [Test format philosophy](#test-format-philosophy)
- [Config design](#config-design)
- [Test completeness](#test-completeness)
- [Test structure](#test-structure)
- [`<config name>/`](#config-name)
- [`<fork or phase name>/`](#fork-or-phase-name)
- [`<test runner name>/`](#test-runner-name)
- [`<test handler name>/`](#test-handler-name)
- [`<test suite name>/`](#test-suite-name)
- [`<test case>/`](#test-case)
- [`<output part>`](#output-part)
- [Common output formats](#common-output-formats)
- [Special output parts](#special-output-parts)
* [`meta.yaml`](#-metayaml-)
* [Config](#config)
* [Config sourcing](#config-sourcing)
* [Note for implementers](#note-for-implementers)
- [`meta.yaml`](#metayaml)
- [`config.yaml`](#configyaml)
- [Config sourcing](#config-sourcing)
- [Note for implementers](#note-for-implementers)
<!-- mdformat-toc end -->
## About
The consensus layer uses YAML as the format for all cross client tests. This
document describes at a high level the general format to which all test files
should conform.
### Test-case formats
The particular formats of specific types of tests (test suites) are defined in
separate documents.
Test formats:
- [`bls`](./bls/README.md)
- [`epoch_processing`](./epoch_processing/README.md)
- [`genesis`](./genesis/README.md)
## Glossary
- `generator`: a program that outputs one or more test-cases, each organized
into a `config > runner > handler > suite` hierarchy.
- `config`: tests are grouped by configuration used for spec presets. In
addition to the standard configurations, `general` may be used as a catch-all
for tests not restricted to one configuration. (E.g. BLS).
- `type`: the specialization of one single `generator`. E.g. epoch processing.
- `runner`: where a generator is a *"producer"*, this is the *"consumer"*.
- A `runner` focuses on *only one* `type`, and each type has *only one*
`runner`.
- `handler`: a `runner` may be too limited sometimes, you may have a set of
tests with a specific focus that requires a different format. To facilitate
this, you specify a `handler`: the runner can deal with the format by using
the specified handler.
- `suite`: a directory containing test cases that are coherent. Each `suite`
under the same `handler` shares the same format. This is an
organizational/cosmetic hierarchy layer.
- `case`: a test case, a directory in a `suite`. A case can be anything in
general, but its format should be well-defined in the documentation
corresponding to the `type` (and `handler`).
- `case part`: a test case consists of different files, possibly in different
formats, to facilitate the specific test case format better. Optionally, a
`meta.yaml` is included to declare meta-data for the test, e.g. BLS
requirements.
## Test format philosophy
### Config design
The configuration constant types are:
- Never changing: genesis data.
- Changing, but reliant on old value: e.g. an epoch time may change, but if you
want to do the conversion `(genesis data, timestamp) -> epoch number`, you end
up needing both constants.
- Changing, but kept around during fork transition: finalization may take a
while, e.g. an executable has to deal with new deposits and old deposits at
the same time. Another example may be economic constants.
- Additional, backwards compatible: new constants are introduced for later
phases.
- Changing: there is a very small chance some constant may really be *replaced*.
In this off-chance, it is likely better to include it as an additional
variable, and some clients may simply stop supporting the old one if they do
not want to sync from genesis. The change of functionality goes through a
phase of deprecation of the old constant, and eventually only the new constant
is kept around in the config (when old state is not supported anymore).
Based on these types of changes, we model the config as a list of key value
pairs, that only grows with every fork (they may change in development versions
of forks, however; git manages this). With this approach, configurations are
backwards compatible (older clients ignore unknown variables) and easy to
maintain.
### Test completeness
Tests should be independent of any sync-data. If one wants to run a test, the
input data should be available from the YAML. The aim is to provide clients with
a well-defined scope of work to run a particular set of test-suites.
- Clients that are complete are expected to contribute to testing, seeking for
better resources to get conformance with the spec, and other clients.
- Clients that are not complete in functionality can choose to ignore suites
that use certain test-runners, or specific handlers of these test-runners.
- Clients that are on older versions can test their work based on older releases
of the generated tests, and catch up with newer releases when possible.
## Test structure
Tests follow this file path structure:

```
tests/<config name>/<fork or phase name>/<test runner name>/<test handler name>/<test suite name>/<test case>/<output part>
```
### `<config name>/`
Configs are upper level. Some clients want to run minimal first, and it is
useful for sanity checks during development too. As a top level dir, it is not
duplicated, and the used config can be copied right into this directory as
reference.
### `<fork or phase name>/`
This would be: "phase0", "altair", etc. Each introduces new tests, and modifies
any tests that change: some tests of earlier forks repeat with updated state
data.
### `<test runner name>/`
The well known bls/shuffling/ssz_static/operations/epoch_processing/etc.
Handlers can change the format, but there is a general target to test.
### `<test handler name>/`
Specialization within category. All suites in here will have the same test case
format. Using a `handler` in a `runner` is optional. A `core` (or other generic)
handler may be used if the `runner` does not have different formats.
### `<test suite name>/`
Suites are split up. Suite size (i.e. the amount of tests) does not change the
maximum memory requirement, as test cases can be loaded one by one. This also
makes filtered sets of tests fast and easy to load.
### `<test case>/`
Cases are split up too. This enables diffing of parts of the test case, tracking
changes per part, while still using LFS. Also enables different formats for some
parts.
### `<output part>`
These files allow for custom formats for some parts of the test. E.g. something
encoded in SSZ. Or to avoid large files, the SSZ can be compressed with Snappy.
E.g. `pre.ssz_snappy`, `deposit.ssz_snappy`, `post.ssz_snappy`.
Diffing a `pre.ssz_snappy` and `post.ssz_snappy` provides all the information
for testing, when decompressed and decoded. Then the difference between pre and
post can be compared to anything that changes the pre state, e.g.
`deposit.ssz_snappy`.
Note that by default, the SSZ data is in the given test case's
<fork or phase name> version, e.g., if it's `altair` test case, use
`altair.BeaconState` container to deserialize the given state.
YAML is generally used for test metadata, and for tests that do not use SSZ:
e.g. shuffling and BLS tests. In this case, there is no point in adding special
SSZ types. And the size and efficiency of YAML is acceptable.
#### Common output formats
Between all types of tests, a few formats are common:
- **`.yaml`**: A YAML file containing structured data to describe settings or
test contents.
- **`.ssz`**: A file containing raw SSZ-encoded data. Previously widely used in
tests, but replaced with compressed variant.
- **`.ssz_snappy`**: Like `.ssz`, but compressed with Snappy block compression.
Snappy block compression is already applied to SSZ in consensus-layer gossip,
available in client implementations, and thus chosen as compression method.
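Decoding a test part is correspondingly simple. A minimal sketch, assuming the
`python-snappy` package and the pyspec's SSZ classes:

```python
import snappy  # python-snappy: Snappy block (de)compression
from eth2spec.phase0 import mainnet as spec

def load_state(path: str):
    with open(path, "rb") as f:
        raw = snappy.decompress(f.read())
    # Deserialize the raw SSZ bytes into a BeaconState container.
    return spec.BeaconState.decode_bytes(raw)
```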
#### Special output parts
##### `meta.yaml`
If present (it is optional), the test is enhanced with extra data to describe
usage. Specialized data is described in the documentation of the specific test
format.
Some test-case formats share some common key-value pair patterns, and these are
documented here:
```
bls_setting: int -- optional, can have 3 different values:
    0: (default, applies if key-value pair is absent) free to choose either BLS ON or OFF
    1: known as "BLS required" - if the test validity is strictly dependent on BLS being ON
    2: known as "BLS ignored" - if the test validity is strictly dependent on BLS being OFF
```
##### `config.yaml`
The runtime-configurables may be different for specific tests. When present,
this replaces the default runtime-config that comes with the otherwise
compile-time preset settings.
The format matches that of the `mainnet_config.yaml` and `minimal_config.yaml`,
see the [`/configs`](../../configs/README.md#format) documentation. Config
values that are introduced at a later fork may be omitted from tests of previous
forks.
## Config sourcing
And copied by CI for testing purposes to:

```
<tests repo root>/tests/<config name>/<config name>.yaml
```
The first `<config name>` is a directory, which contains exactly all tests that
make use of the given config.
## Note for implementers
The basic pattern for test-suite loading and running is:
1. For a specific config, load it first (and only need to do so once), then
continue with the tests defined in the config folder.
2. Select a fork. Repeat for each fork if running tests for multiple forks.
3. Select the category and specialization of interest (e.g.
`operations > deposits`). Again, repeat for each if running all.
4. Select a test suite. Or repeat for each.
5. Select a test case. Or repeat for each.
6. Load the parts of the case. And `meta.yaml` if present.
7. Run the test, as defined by the test format.
Step 1 may be a step with compile time selection of a configuration, if desired
for optimization. The base requirement is just to use the same set of constants,
independent of the loading process.
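A rough sketch of steps 2-5 as a directory walk (layout as in the test structure
section above; filtering and error handling omitted):

```python
from pathlib import Path

def iter_test_cases(config_dir: Path):
    # config_dir is tests/<config name>/, loaded once as per step 1.
    for fork in config_dir.iterdir():  # <fork or phase name>/
        for runner in fork.iterdir():  # <test runner name>/
            for handler in runner.iterdir():  # <test handler name>/
                for suite in handler.iterdir():  # <test suite name>/
                    for case in suite.iterdir():  # <test case>/
                        yield fork.name, runner.name, handler.name, suite.name, case
```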
# BLS tests
A test type for BLS. Primarily geared towards verifying the *integration* of any
BLS library. We do not recommend rolling your own crypto or using an untested
BLS library.
The BLS test suite runner has the following handlers:
- [`sign`](./sign.md)
- [`verify`](./verify.md)
*Note*: Signature-verification and aggregate-verify test cases are not yet
supported.
# Test format: BLS signature aggregation
A BLS signature aggregation combines a series of signatures into a single
signature.
## Test case format
```yaml
input: List[BLS Signature] -- list of input BLS signatures
output: BLS Signature -- expected output, single BLS signature or `null`.
```
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes
(192 nibbles), prefixed with `0x`.
- output value is `null` if the input is invalid.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `aggregate` handler should aggregate the signatures in the `input`, and the
result should match the expected `output`.
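As an illustration, a runner's `aggregate` check could look like this, assuming
the `py_ecc` BLS library (a sketch, not a normative implementation):

```python
from py_ecc.bls import G2ProofOfPossession as bls

def run_aggregate_case(case: dict) -> None:
    signatures = [bytes.fromhex(sig[2:]) for sig in case["input"]]
    expected = case["output"]
    if expected is None:
        # Invalid input (e.g. an empty list): aggregation must fail.
        failed = False
        try:
            bls.Aggregate(signatures)
        except Exception:
            failed = True
        assert failed, "expected aggregation to fail on invalid input"
    else:
        assert bls.Aggregate(signatures) == bytes.fromhex(expected[2:])
```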
```yaml
input:
  pubkeys: List[BLS Pubkey] -- the pubkeys
  messages: List[bytes32] -- the messages
  signature: BLS Signature -- the signature to verify against pubkeys and messages
output: bool -- true (VALID) or false (INVALID)
```
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96
nibbles), prefixed with `0x`.
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes
(192 nibbles), prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `aggregate_verify` handler should verify the signature with pubkeys and
messages in the `input`, and the result should match the expected `output`.
```yaml
input: List[BLS Pubkey] -- list of input BLS pubkeys
output: BLSPubkey -- expected output, single BLS pubkey or `null`.
```
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96
nibbles), prefixed with `0x`.
- output value is `null` if the input is invalid.
## Condition
The `eth_aggregate_pubkeys` handler should aggregate the pubkeys in the
`input`, and the result should match the expected `output`.
```yaml
input:
  pubkeys: List[BLS Pubkey] -- list of input BLS pubkeys
  message: bytes32 -- the message
  signature: BLS Signature -- the signature to verify against pubkeys and message
output: bool -- true (VALID) or false (INVALID)
```
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96
nibbles), prefixed with `0x`.
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes
(192 nibbles), prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `eth_fast_aggregate_verify` handler should verify the signature with pubkeys
and message in the `input`, and the result should match the expected `output`.
```yaml
input:
  pubkeys: List[BLS Pubkey] -- list of input BLS pubkeys
  message: bytes32 -- the message
  signature: BLS Signature -- the signature to verify against pubkeys and message
output: bool -- true (VALID) or false (INVALID)
```
- `BLS Pubkey` here is encoded as a string: hexadecimal encoding of 48 bytes (96
nibbles), prefixed with `0x`.
- `BLS Signature` here is encoded as a string: hexadecimal encoding of 96 bytes
(192 nibbles), prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `fast_aggregate_verify` handler should verify the signature with pubkeys and
message in the `input`, and the result should match the expected `output`.
```yaml
input:
  privkey: bytes32 -- the private key used for signing
  message: bytes32 -- input message to sign (a hash)
output: BLS Signature -- expected output, single BLS signature or `null`.
```
- All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
- output value is `null` if the input is invalid.
## Condition
The `sign` handler should sign `message` with `privkey`, and the resulting
signature should match the expected `output`.
```yaml
input:
  pubkey: BLS Pubkey -- the pubkey
  message: bytes32 -- the message
  signature: BLS Signature -- the signature to verify against pubkey and message
output: bool -- VALID or INVALID
```
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
# Epoch processing tests
The different epoch sub-transitions are tested individually with test handlers.
The format is similar to block-processing state-transition tests. There is no
"change" factor however, the transitions are pure functions with just the
pre-state as input. Hence, the format is shared between each test-handler. (See
test condition documentation on how to run the tests.)
## Test case format
### `meta.yaml`

```yaml
bls_setting: int -- see general test-format spec.
```
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the epoch
sub-transition.
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the epoch
sub-transition. No value if the sub-transition processing is aborted.
### `pre_epoch.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the epoch
transition.
### `post_epoch.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the epoch
transition. No value if the transition processing is aborted.
## Condition
A handler of the `epoch_processing` test-runner should process these cases,
calling the corresponding processing implementation (same name, prefixed with
`process_`). This excludes the other parts of the epoch-transition. The provided
pre-state is already transitioned to just before the specific sub-transition of
focus of the handler.
Sub-transitions:
- `eth1_data_reset` (>=Phase0)
- `historical_roots_update` (>=Phase0,\<=Bellatrix)
- `justification_and_finalization` (>=Phase0)
- `participation_record_updates` (==Phase0)
- `randao_mixes_reset` (>=Phase0)
The resulting state should match the expected `post` state.
## Condition (alternative)
Instead of having a different handler for each sub-transition, a single handler
for all cases should load `pre_full` state, call `process_epoch` and then assert
that the result state should match `post_full` state.
This has the advantages:
- Less code to maintain for the epoch processing handler.
- Works with single pass epoch processing.
- Can detect bugs related to data dependencies between different
sub-transitions.
As a disadvantage this condition takes more resources to compute, but just a
constant amount per test vector.
### `meta.yaml`

```yaml
blocks_count: int -- the number of blocks processed in this test.
```
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the block
transitions.
### `blocks_<index>.yaml`
A series of files, with `<index>` in range `[0, blocks_count)`. Blocks need to
be processed in order, following the main transition function (i.e. process slot
and epoch transitions in between blocks as normal).
Each file is a YAML-encoded `SignedBeaconBlock`.
Each block is also available as `blocks_<index>.ssz_snappy`.
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the block
transitions.
## Condition
The resulting state should match the expected `post` state, or if the `post`
state is left blank, the handler should reject the series of blocks as invalid.
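In outline, a client-side runner for this condition might do the following (a
sketch; `state_transition` mutates the state and raises on invalid blocks):

```python
def run_blocks_case(spec, pre_state, signed_blocks, post_state):
    state = pre_state.copy()
    try:
        for signed_block in signed_blocks:
            # Full transition: slot and epoch processing, then the block itself.
            spec.state_transition(state, signed_block)
    except Exception:
        assert post_state is None  # an invalid series must be rejected
        return
    assert post_state is not None
    assert state.hash_tree_root() == post_state.hash_tree_root()
```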
# Fork choice tests
The aim of the fork choice tests is to provide test coverage of the various
components of the fork choice.
<!-- mdformat-toc start --slug=github --no-anchors --maxlevel=6 --minlevel=2 -->
### `meta.yaml`

```yaml
bls_setting: int -- see general test-format spec.
```
### `anchor_state.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state to initialize store with
`get_forkchoice_store(anchor_state: BeaconState, anchor_block: BeaconBlock)`
helper.
### `anchor_block.ssz_snappy`
An SSZ-snappy encoded `BeaconBlock`, the block to initialize store with
`get_forkchoice_store(anchor_state: BeaconState, anchor_block: BeaconBlock)`
helper.
### `steps.yaml`
The steps to execute in sequence. There may be multiple items of the following
types:
#### `on_tick` execution step
After this step, the `store` object may have been updated.
#### `on_attestation` execution step
The parameter that is required for executing
`on_attestation(store, attestation)`.
```yaml
{
    attestation: string -- the name of the `attestation_<32-byte-root>.ssz_snappy` file.
        To be used in `on_attestation` execution.
    valid: bool -- optional, default to `true`.
        If it's `false`, this execution step is expected to be invalid.
}
```
The file is located in the same folder (see below).
After this step, the `store` object may have been updated.
#### `on_block` execution step

The parameter that is required for executing `on_block(store, block)`.
The file is located in the same folder (see below).
`blobs` and `proofs` are new fields from Deneb EIP-4844. These fields indicate
the expected values from the `retrieve_blobs_and_proofs()` helper inside the
`is_data_available()` helper. If these two fields are not provided,
`retrieve_blobs_and_proofs()` returns empty lists.
After this step, the `store` object may have been updated.
#### `on_merge_block` execution step
Adds `PowBlock` data which is required for executing `on_block(store, block)`.
```yaml
{
pow_block: string -- the name of the `pow_block_<32-byte-root>.ssz_snappy` file.
To be used in `get_pow_block` lookup
}
```
The file is located in the same folder (see below). PowBlocks should be used as
return values for the `get_pow_block(hash: Hash32) -> PowBlock` function if
hashes match.
#### `on_attester_slashing` execution step
The parameter that is required for executing
`on_attester_slashing(store, attester_slashing)`.
```yaml
{
    attester_slashing: string -- the name of the `attester_slashing_<32-byte-root>.ssz_snappy` file.
        To be used in `on_attester_slashing` execution.
    valid: bool -- optional, default to `true`.
        If it's `false`, this execution step is expected to be invalid.
}
```
The file is located in the same folder (see below).
After this step, the `store` object may have been updated.
#### `on_payload_info` execution step

Optional step for optimistic sync tests.
This step sets the
[`payloadStatus`](https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#payloadstatusv1)
value that Execution Layer client mock returns in responses to the following
Engine API calls:
- [`engine_newPayloadV1(payload)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#engine_newpayloadv1)
if `payload.blockHash == payload_info.block_hash`
- [`engine_forkchoiceUpdatedV1(forkchoiceState, ...)`](https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#engine_forkchoiceupdatedv1)
if `forkchoiceState.headBlockHash == payload_info.block_hash`
*Note*: Status of a payload must be *initialized* via `on_payload_info` before
the corresponding `on_block` execution step.
*Note*: Status of the same payload may be updated several times throughout the
test.
#### Checks step
The checks to verify the current status of `store`.

```yaml
checks: {<store_attribute>: value} -- the assertions.
```
`<store_attribute>` is the field member or property of the
[`Store`](../../../specs/phase0/fork-choice.md#store) object maintained by the
client implementation. The fields include:
```yaml
head: {
    slot: int,
    root: string, -- Encoded 32-byte value from get_head(store)
}
...
viable_for_head_roots_and_weights: [{
    root: string,
    weight: int
}]
```
Additionally, these fields if `get_proposer_head` and
`should_override_forkchoice_update` features are implemented:
```yaml
get_proposer_head: string -- Encoded 32-byte value from get_proposer_head(store)
```
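For example, a `checks` step in `steps.yaml` could look like this (values are
illustrative, not taken from a real test vector):

```yaml
[
    {
        checks: {
            time: 192,
            head: {slot: 32, root: "0x1234..."},
        }
    },
]
```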
*Note*: Each `checks` step may include one or multiple items. Each item has to
be checked against the current store.
### `attestation_<32-byte-root>.ssz_snappy`
Each file is an SSZ-snappy encoded `Attestation`.

### `block_<32-byte-root>.ssz_snappy`

Each file is an SSZ-snappy encoded `SignedBeaconBlock`.
## Condition
1. Deserialize `anchor_state.ssz_snappy` and `anchor_block.ssz_snappy` to
   initialize the local store object with the
   `get_forkchoice_store(anchor_state, anchor_block)` helper.
2. Iterate sequentially through `steps.yaml`.
- For each execution, look up the corresponding ssz_snappy file. Execute the
corresponding helper function on the current store.
- For the `on_block` execution step: if
`len(block.message.body.attestations) > 0`, execute each attestation with
`on_attestation(store, attestation)` after executing
`on_block(store, block)`.
- For each `checks` step, the assertions on the current store must be
satisfied.
# Forks
The aim of the fork tests is to ensure that a pre-fork state can be transformed
into a valid post-fork state, utilizing the `upgrade` function found in the
relevant `fork.md` spec.
There is only one handler: `fork`. Each fork (after genesis) is handled with the
same format, and the particular fork boundary being tested is noted in
`meta.yaml`.
## Test case format
@@ -20,24 +22,28 @@ fork: str -- Fork being transitioned to
Key of valid `fork` strings that might be found in `meta.yaml`
| String ID | Pre-fork | Post-fork | Function |
| - | - | - | - |
| `altair` | Phase 0 | Altair | `upgrade_to_altair` |
| `bellatrix` | Altair | Bellatrix | `upgrade_to_bellatrix` |
| String ID | Pre-fork | Post-fork | Function |
| ----------- | -------- | --------- | ---------------------- |
| `altair` | Phase 0 | Altair | `upgrade_to_altair` |
| `bellatrix` | Altair | Bellatrix | `upgrade_to_bellatrix` |
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the fork transition.
An SSZ-snappy encoded `BeaconState`, the state before running the fork
transition.
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the fork transition.
An SSZ-snappy encoded `BeaconState`, the state after applying the fork
transition.
*Note*: This type is the `BeaconState` after the fork and is *not* the same type as `pre`.
*Note*: This type is the `BeaconState` after the fork and is *not* the same type
as `pre`.
## Processing
To process this test, pass `pre` into the upgrade function defined by the `fork` in `meta.yaml`.
To process this test, pass `pre` into the upgrade function defined by the `fork`
in `meta.yaml`.
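For example, a test consumer might dispatch on the `fork` string roughly as
follows; the mapping mirrors the table above and grows with each fork:

```python
# Sketch: choose the upgrade function from the `fork` string in `meta.yaml`.
UPGRADE_FUNCTIONS = {
    "altair": upgrade_to_altair,
    "bellatrix": upgrade_to_bellatrix,
}

def run_fork_test(pre, meta):
    post = UPGRADE_FUNCTIONS[meta["fork"]](pre)
    return post  # compare against the expected `post` state
```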
## Condition

View File

@@ -1,8 +1,11 @@
# Genesis tests
The aim of the genesis tests is to provide a baseline to test genesis-state initialization and test
if the proposed genesis-validity conditions are working.
The aim of the genesis tests is to provide a baseline to test genesis-state
initialization and test if the proposed genesis-validity conditions are working.
There are two handlers, documented individually:
- [`validity`](./validity.md): Tests if a genesis state is valid, i.e. if it counts as a trigger to launch.
- [`initialization`](./initialization.md): Tests the initialization of a genesis state based on Eth1 data.
- [`validity`](./validity.md): Tests if a genesis state is valid, i.e. if it
counts as a trigger to launch.
- [`initialization`](./initialization.md): Tests the initialization of a genesis
state based on Eth1 data.

View File

@@ -23,24 +23,26 @@ execution_payload_header: bool -- `execution_payload_header` field is filled or
### `deposits_<index>.ssz_snappy`
A series of files, with `<index>` in range `[0, deposits_count)`. Deposits need to be processed in order.
Each file is an SSZ-snappy encoded `Deposit` object.
A series of files, with `<index>` in range `[0, deposits_count)`. Deposits need
to be processed in order. Each file is an SSZ-snappy encoded `Deposit` object.
### `execution_payload_header.ssz_snappy`
*Note*: Param added only for Bellatrix and subsequent forks.
The execution payload header that the state is initialized with. An SSZ-snappy encoded `ExecutionPayloadHeader` object.
The execution payload header that the state is initialized with. An SSZ-snappy
encoded `ExecutionPayloadHeader` object.
### `state.ssz_snappy`
The expected genesis state. An SSZ-snappy encoded `BeaconState` object.
## Processing
To process this test, build a genesis state with the provided `eth1_block_hash`, `eth1_timestamp` and `deposits`:
To process this test, build a genesis state with the provided `eth1_block_hash`,
`eth1_timestamp` and `deposits`:
`initialize_beacon_state_from_eth1(eth1_block_hash, eth1_timestamp, deposits)`,
as described in the Beacon Chain specification.
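As a sketch, assuming a hypothetical `load_ssz_snappy` helper, the processing
step amounts to:

```python
# Load deposits in index order, then build and compare the genesis state.
deposits = [
    load_ssz_snappy(f"deposits_{i}.ssz_snappy", Deposit)
    for i in range(deposits_count)
]
state = initialize_beacon_state_from_eth1(eth1_block_hash, eth1_timestamp, deposits)
# For Bellatrix and later, the provided `execution_payload_header` is also
# passed into initialization.
assert state == expected_state  # the contents of `state.ssz_snappy`
```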
## Condition

View File

@@ -18,7 +18,8 @@ An SSZ-snappy encoded `BeaconState`, the state to validate as genesis candidate.
### `is_valid.yaml`
A boolean, true if the genesis state is deemed valid to launch with, false otherwise.
A boolean, true if the genesis state is deemed valid to launch with, false
otherwise.
## Processing
@@ -26,4 +27,5 @@ To process the data, call `is_valid_genesis_state(genesis)`.
## Condition
The result of calling `is_valid_genesis_state(genesis)` should match the expected `is_valid` boolean.
The result of calling `is_valid_genesis_state(genesis)` should match the
expected `is_valid` boolean.
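In code, with the candidate state loaded as `genesis` and the expected boolean
read from `is_valid.yaml`, the whole condition is a single assertion:

```python
assert is_valid_genesis_state(genesis) == is_valid
```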

View File

@@ -1,6 +1,8 @@
# KZG tests
A test type for KZG libraries. Tests all the public interfaces that a KZG library needs to provide to implement EIP-4844, as defined in `polynomial-commitments.md`.
A test type for KZG libraries. Tests all the public interfaces that a KZG
library needs to provide to implement EIP-4844, as defined in
`polynomial-commitments.md`.
We do not recommend rolling your own crypto or using an untested KZG library.
@@ -12,4 +14,3 @@ The KZG test suite runner has the following handlers:
- [`compute_blob_kzg_proof`](./compute_blob_kzg_proof.md)
- [`verify_blob_kzg_proof`](./verify_blob_kzg_proof.md)
- [`verify_blob_kzg_proof_batch`](./verify_blob_kzg_proof_batch.md)

View File

@@ -12,10 +12,15 @@ input:
output: KZGCommitment -- The KZG commitment
```
- `blob` here is encoded as a string: hexadecimal encoding of `4096 * 32 = 131072` bytes, prefixed with `0x`.
- `blob` here is encoded as a string: hexadecimal encoding of
`4096 * 32 = 131072` bytes, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `blob_to_kzg_commitment` handler should compute the KZG commitment for `blob`, and the result should match the expected `output`. If the blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element) it should error, i.e. the output should be `null`.
The `blob_to_kzg_commitment` handler should compute the KZG commitment for
`blob`, and the result should match the expected `output`. If the blob is
invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a
BLS field element) it should error, i.e. the output should be `null`.
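A sketch of the error-to-`null` convention, assuming an implementation that
raises on invalid input and returns a bytes-like commitment; the same pattern
applies to the other KZG handlers in this suite:

```python
def run_blob_to_kzg_commitment_case(blob_hex, expected):
    try:
        commitment = blob_to_kzg_commitment(bytes.fromhex(blob_hex[2:]))
    except Exception:
        # An expected `output` of `null` means the input must be rejected.
        assert expected is None
        return
    assert "0x" + commitment.hex() == expected
```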

View File

@@ -1,6 +1,7 @@
# Test format: Compute blob KZG proof
Compute the blob KZG proof for a given `blob`, that helps with quickly verifying that the KZG commitment for the blob is correct.
Compute the blob KZG proof for a given `blob`, that helps with quickly verifying
that the KZG commitment for the blob is correct.
## Test case format
@@ -13,11 +14,17 @@ input:
output: KZGProof -- The blob KZG proof
```
- `blob` here is encoded as a string: hexadecimal encoding of `4096 * 32 = 131072` bytes, prefixed with `0x`.
- `commitment` here is encoded as a string: hexadecimal encoding of `48` bytes, prefixed with `0x`.
- `blob` here is encoded as a string: hexadecimal encoding of
`4096 * 32 = 131072` bytes, prefixed with `0x`.
- `commitment` here is encoded as a string: hexadecimal encoding of `48` bytes,
prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `compute_blob_kzg_proof` handler should compute the blob KZG proof for `blob`, and the result should match the expected `output`. If the blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element) it should error, i.e. the output should be `null`.
The `compute_blob_kzg_proof` handler should compute the blob KZG proof for
`blob`, and the result should match the expected `output`. If the blob is
invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a
BLS field element) it should error, i.e. the output should be `null`.

View File

@@ -13,12 +13,21 @@ input:
output: Tuple[KZGProof, Bytes32] -- The KZG proof and the value y = f(z)
```
- `blob` here is encoded as a string: hexadecimal encoding of `4096 * 32 = 131072` bytes, prefixed with `0x`.
- `z` here is encoded as a string: hexadecimal encoding of `32` bytes representing a big endian encoded field element, prefixed with `0x`.
- `y` here is encoded as a string: hexadecimal encoding of `32` bytes representing a big endian encoded field element, prefixed with `0x`.
- `blob` here is encoded as a string: hexadecimal encoding of
`4096 * 32 = 131072` bytes, prefixed with `0x`.
- `z` here is encoded as a string: hexadecimal encoding of `32` bytes
representing a big endian encoded field element, prefixed with `0x`.
- `y` here is encoded as a string: hexadecimal encoding of `32` bytes
representing a big endian encoded field element, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `compute_kzg_proof` handler should compute the KZG proof as well as the value `y` for evaluating the polynomial represented by `blob` at `z`, and the result should match the expected `output`. If the blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element) or `z` is not a valid BLS field element, it should error, i.e. the output should be `null`.
The `compute_kzg_proof` handler should compute the KZG proof as well as the
value `y` for evaluating the polynomial represented by `blob` at `z`, and the
result should match the expected `output`. If the blob is invalid (e.g.
incorrect length or one of the 32-byte blocks does not represent a BLS field
element) or `z` is not a valid BLS field element, it should error, i.e. the
output should be `null`.

View File

@@ -1,6 +1,7 @@
# Test format: Verify blob KZG proof
Use the blob KZG proof to verify that the KZG commitment for a given `blob` is correct.
Use the blob KZG proof to verify that the KZG commitment for a given `blob` is
correct.
## Test case format
@@ -14,10 +15,17 @@ input:
output: bool -- true (valid proof) or false (incorrect proof)
```
- `blob` here is encoded as a string: hexadecimal encoding of `4096 * 32 = 131072` bytes, prefixed with `0x`.
- `blob` here is encoded as a string: hexadecimal encoding of
`4096 * 32 = 131072` bytes, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `verify_blob_kzg_proof` handler should verify that `commitment` is a correct KZG commitment to `blob` by using the blob KZG proof `proof`, and the result should match the expected `output`. If the commitment or proof is invalid (e.g. not on the curve or not in the G1 subgroup of the BLS curve) or `blob` is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element), it should error, i.e. the output should be `null`.
The `verify_blob_kzg_proof` handler should verify that `commitment` is a correct
KZG commitment to `blob` by using the blob KZG proof `proof`, and the result
should match the expected `output`. If the commitment or proof is invalid (e.g.
not on the curve or not in the G1 subgroup of the BLS curve) or `blob` is
invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a
BLS field element), it should error, i.e. the output should be `null`.

View File

@@ -1,6 +1,7 @@
# Test format: Verify blob KZG proof batch
Use the blob KZG proofs to verify that the KZG commitments for given `blobs` are correct.
Use the blob KZG proofs to verify that the KZG commitments for given `blobs` are
correct.
## Test case format
@@ -14,10 +15,18 @@ input:
output: bool -- true (all proofs are valid) or false (some proofs incorrect)
```
- `blobs` here are encoded as a string: hexadecimal encoding of `4096 * 32 = 131072` bytes, prefixed with `0x`.
- `blobs` here are encoded as a string: hexadecimal encoding of
`4096 * 32 = 131072` bytes, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `verify_blob_kzg_proof_batch` handler should verify that `commitments` are correct KZG commitments to `blobs` by using the blob KZG proofs `proofs`, and the result should match the expected `output`. If any of the commitments or proofs are invalid (e.g. not on the curve or not in the G1 subgroup of the BLS curve) or any blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element), it should error, i.e. the output should be `null`.
The `verify_blob_kzg_proof_batch` handler should verify that `commitments` are
correct KZG commitments to `blobs` by using the blob KZG proofs `proofs`, and
the result should match the expected `output`. If any of the commitments or
proofs are invalid (e.g. not on the curve or not in the G1 subgroup of the BLS
curve) or any blob is invalid (e.g. incorrect length or one of the 32-byte
blocks does not represent a BLS field element), it should error, i.e. the output
should be `null`.

View File

@@ -1,6 +1,7 @@
# Test format: Verify KZG proof
Verify the KZG proof for a given `blob` and an evaluation point `z` that claims to result in a value of `y`.
Verify the KZG proof for a given `blob` and an evaluation point `z` that claims
to result in a value of `y`.
## Test case format
@@ -15,11 +16,19 @@ input:
output: bool -- true (valid proof) or false (incorrect proof)
```
- `z` here is encoded as a string: hexadecimal encoding of `32` bytes representing a big endian encoded field element, prefixed with `0x`.
- `y` here is encoded as a string: hexadecimal encoding of `32` bytes representing a big endian encoded field element, prefixed with `0x`.
- `z` here is encoded as a string: hexadecimal encoding of `32` bytes
representing a big endian encoded field element, prefixed with `0x`.
- `y` here is encoded as a string: hexadecimal encoding of `32` bytes
representing a big endian encoded field element, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `verify_kzg_proof` handler should verify the KZG proof for evaluating the polynomial represented by `blob` at `z` resulting in the value `y`, and the result should match the expected `output`. If the commitment or proof is invalid (e.g. not on the curve or not in the G1 subgroup of the BLS curve) or `z` or `y` is not a valid BLS field element, it should error, i.e. the output should be `null`.
The `verify_kzg_proof` handler should verify the KZG proof for evaluating the
polynomial represented by `blob` at `z` resulting in the value `y`, and the
result should match the expected `output`. If the commitment or proof is invalid
(e.g. not on the curve or not in the G1 subgroup of the BLS curve) or `z` or `y`
is not a valid BLS field element, it should error, i.e. the output should be
`null`.

View File

@@ -1,6 +1,8 @@
# KZG tests for EIP-7594
A test type for KZG libraries. Tests all the public interfaces that a KZG library is required to implement for EIP-7594, as defined in `polynomial-commitments-sampling.md`.
A test type for KZG libraries. Tests all the public interfaces that a KZG
library is required to implement for EIP-7594, as defined in
`polynomial-commitments-sampling.md`.
We do not recommend rolling your own crypto or using an untested KZG library.

View File

@@ -15,8 +15,13 @@ output: List[Cell] -- the cells
- `Blob` is a 131072-byte hexadecimal string, prefixed with `0x`.
- `Cell` is a 2048-byte hexadecimal string, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `compute_cells` handler should compute the cells (chunks of an extended blob) for `blob`, and the result should match the expected `output`. If the blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element) it should error, i.e. the output should be `null`.
The `compute_cells` handler should compute the cells (chunks of an extended
blob) for `blob`, and the result should match the expected `output`. If the blob
is invalid (e.g. incorrect length or one of the 32-byte blocks does not
represent a BLS field element) it should error, i.e. the output should be
`null`.

View File

@@ -16,8 +16,13 @@ output: Tuple[List[Cell], List[KZGProof]] -- the cells and proofs
- `Cell` is a 2048-byte hexadecimal string, prefixed with `0x`.
- `KZGProof` is a 48-byte hexadecimal string, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `compute_cells_and_kzg_proofs` handler should compute the cells (chunks of an extended blob) and cell KZG proofs for `blob`, and the result should match the expected `output`. If the blob is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element) it should error, i.e. the output should be `null`.
The `compute_cells_and_kzg_proofs` handler should compute the cells (chunks of
an extended blob) and cell KZG proofs for `blob`, and the result should match
the expected `output`. If the blob is invalid (e.g. incorrect length or one of
the 32-byte blocks does not represent a BLS field element) it should error, i.e.
the output should be `null`.

View File

@@ -17,8 +17,14 @@ output: Tuple[List[Cell], List[KZGProof]] -- all cells and proofs
- `Cell` is a 2048-byte hexadecimal string, prefixed with `0x`.
- `KZGProof` is a 48-byte hexadecimal string, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `recover_cells_and_kzg_proofs` handler should recover missing cells and proofs, and the result should match the expected `output`. If any cell is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element), or any `cell_index` is invalid (e.g. greater than the number of cells for an extended blob), it should error, i.e. the output should be `null`.
The `recover_cells_and_kzg_proofs` handler should recover missing cells and
proofs, and the result should match the expected `output`. If any cell is
invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a
BLS field element), or any `cell_index` is invalid (e.g. greater than the number
of cells for an extended blob), it should error, i.e. the output should be
`null`.

View File

@@ -1,6 +1,7 @@
# Test format: Verify cell KZG proof batch
Use the cell KZG `proofs` to verify that the KZG `commitments` for the given `cells` are correct.
Use the cell KZG `proofs` to verify that the KZG `commitments` for the given
`cells` are correct.
## Test case format
@@ -19,8 +20,16 @@ output: bool -- true (all proofs are correct) or false (some proofs incorrect)
- `CellIndex` is an unsigned 64-bit integer.
- `Cell` is a 2048-byte hexadecimal string, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with `0x`.
All byte(s) fields are encoded as strings, hexadecimal encoding, prefixed with
`0x`.
## Condition
The `verify_cell_kzg_proof_batch` handler should verify that `commitments` are correct KZG commitments to `cells` by using the cell KZG proofs `proofs`, and the result should match the expected `output`. If any of the commitments or proofs are invalid (e.g. not on the curve or not in the G1 subgroup of the BLS curve), any cell is invalid (e.g. incorrect length or one of the 32-byte blocks does not represent a BLS field element), or any `cell_index` is invalid (e.g. greater than the number of cells for an extended blob), it should error, i.e. the output should be `null`.
The `verify_cell_kzg_proof_batch` handler should verify that `commitments` are
correct KZG commitments to `cells` by using the cell KZG proofs `proofs`, and
the result should match the expected `output`. If any of the commitments or
proofs are invalid (e.g. not on the curve or not in the G1 subgroup of the BLS
curve), any cell is invalid (e.g. incorrect length or one of the 32-byte blocks
does not represent a BLS field element), or any `cell_index` is invalid (e.g.
greater than the number of cells for an extended blob), it should error, i.e.
the output should be `null`.

View File

@@ -1,9 +1,14 @@
# Light client sync protocol tests
This series of tests provides reference test vectors for the light client sync protocol spec.
This series of tests provides reference test vectors for the light client sync
protocol spec.
Handlers:
- `data_collection`: see [Light client data collection test format](./data_collection.md)
- `single_merkle_proof`: see [Single leaf merkle proof test format](./single_merkle_proof.md)
- `data_collection`: see
[Light client data collection test format](./data_collection.md)
- `single_merkle_proof`: see
[Single leaf merkle proof test format](./single_merkle_proof.md)
- `sync`: see [Light client sync test format](./sync.md)
- `update_ranking`: see [`LightClientUpdate` ranking test format](./update_ranking.md)
- `update_ranking`: see
[`LightClientUpdate` ranking test format](./update_ranking.md)

View File

@@ -1,12 +1,15 @@
# Light client data collection tests
This series of tests provides reference test vectors for validating that a full node collects canonical data for serving to light clients implementing the light client sync protocol to sync to the latest block header.
This series of tests provides reference test vectors for validating that a full
node collects canonical data for serving to light clients implementing the light
client sync protocol to sync to the latest block header.
## Test case format
### `initial_state.ssz_snappy`
An SSZ-snappy encoded object of type `BeaconState` to initialize the blockchain from. The state's `slot` is epoch aligned.
An SSZ-snappy encoded object of type `BeaconState` to initialize the blockchain
from. The state's `slot` is epoch aligned.
### `steps.yaml`
@@ -14,7 +17,8 @@ The steps to execute in sequence.
#### `new_block` execution step
The new block described by the test step should be imported, but does not become head yet.
The new block described by the test step should be imported, but does not become
head yet.
```yaml
{
@@ -26,12 +30,14 @@ The new block described by the test step should be imported, but does not become
#### `new_head` execution step
The given block (previously imported) should become head, leading to potential updates to:
The given block (previously imported) should become head, leading to potential
updates to:
- The best `LightClientUpdate` for non-finalized sync committee periods.
- The latest `LightClientFinalityUpdate` and `LightClientOptimisticUpdate`.
- The latest finalized `Checkpoint` (across all branches).
- The available `LightClientBootstrap` instances for newly finalized `Checkpoint`s.
- The available `LightClientBootstrap` instances for newly finalized
`Checkpoint`s.
```yaml
{
@@ -73,4 +79,8 @@ The given block (previously imported) should become head, leading to potential u
## Condition
A test-runner should initialize a simplified blockchain from `initial_state`. An external signal is used to control fork choice. The test-runner should then proceed to execute all the test steps in sequence, collecting light client data during execution. After each `new_head` step, it should verify that the collected light client data matches the provided `checks`.
A test-runner should initialize a simplified blockchain from `initial_state`. An
external signal is used to control fork choice. The test-runner should then
proceed to execute all the test steps in sequence, collecting light client data
during execution. After each `new_head` step, it should verify that the
collected light client data matches the provided `checks`.

View File

@@ -5,15 +5,18 @@ generation and verification of merkle proofs based on static data.
## Test case format
Tests for each individual SSZ type are grouped into a `suite` indicating the SSZ type name.
Tests for each individual SSZ type are grouped into a `suite` indicating the SSZ
type name.
### `object.ssz_snappy`
An SSZ-snappy encoded object from which other data is generated. The SSZ type can be determined from the test `suite` name.
An SSZ-snappy encoded object from which other data is generated. The SSZ type
can be determined from the test `suite` name.
### `proof.yaml`
A proof of the leaf value (a merkle root) at generalized-index `leaf_index` in the given `object`.
A proof of the leaf value (a merkle root) at generalized-index `leaf_index` in
the given `object`.
```yaml
leaf: Bytes32 # string, hex encoded, with 0x prefix
@@ -24,6 +27,7 @@ branch: list of Bytes32 # list, each element is a string, hex encoded, with 0x
## Condition
A test-runner can implement the following assertions:
- Check that `is_valid_merkle_branch` confirms `leaf` at `leaf_index` to verify
against `hash_tree_root(object)` and `branch`.
- If the implementation supports generating merkle proofs, check that the

View File

@@ -1,6 +1,7 @@
# Light client sync tests
This series of tests provides reference test vectors for validating that a light client implementing the sync protocol can sync to the latest block header.
This series of tests provides reference test vectors for validating that a light
client implementing the sync protocol can sync to the latest block header.
## Test case format
@@ -15,9 +16,14 @@ store_fork_digest: string -- encoded `ForkDigest`-context of `store` obj
### `bootstrap.ssz_snappy`
An SSZ-snappy encoded `bootstrap` object of type `LightClientBootstrap` to initialize a local `store` object of type `LightClientStore` with `store_fork_digest` using `initialize_light_client_store(trusted_block_root, bootstrap)`. The SSZ type can be determined from `bootstrap_fork_digest`.
An SSZ-snappy encoded `bootstrap` object of type `LightClientBootstrap` to
initialize a local `store` object of type `LightClientStore` with
`store_fork_digest` using
`initialize_light_client_store(trusted_block_root, bootstrap)`. The SSZ type
can be determined from `bootstrap_fork_digest`.
If `store_fork_digest` differs from `bootstrap_fork_digest`, the `bootstrap` object may need to be upgraded before initializing the store.
If `store_fork_digest` differs from `bootstrap_fork_digest`, the `bootstrap`
object may need to be upgraded before initializing the store.
### `steps.yaml`
@@ -56,7 +62,9 @@ After this step, the `store` object may have been updated.
#### `process_update` execution step
The function `process_light_client_update(store, update, current_slot, genesis_validators_root)` should be executed with the specified parameters:
The function
`process_light_client_update(store, update, current_slot, genesis_validators_root)`
should be executed with the specified parameters:
```yaml
{
@@ -68,7 +76,8 @@ The function `process_light_client_update(store, update, current_slot, genesis_v
}
```
If `store_fork_digest` differs from `update_fork_digest`, the `update` object may need to be upgraded before processing the update.
If `store_fork_digest` differs from `update_fork_digest`, the `update` object
may need to be upgraded before processing the update.
After this step, the `store` object may have been updated.
@@ -87,4 +96,7 @@ After this step, the `store` object may have been updated.
## Condition
A test-runner should initialize a local `LightClientStore` using the provided `bootstrap` object. It should then proceed to execute all the test steps in sequence. After each step, it should verify that the resulting `store` verifies against the provided `checks`.
A test-runner should initialize a local `LightClientStore` using the provided
`bootstrap` object. It should then proceed to execute all the test steps in
sequence. After each step, it should verify that the resulting `store` verifies
against the provided `checks`.
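A condensed sketch of that loop, with hypothetical helpers `load_ssz_snappy`,
`upgrade_lc_object` (which converts a light client object between fork
versions), and `apply_checks`; only the `process_update` step is shown:

```python
store = initialize_light_client_store(trusted_block_root, bootstrap)
for step in steps:
    if "process_update" in step:
        params = step["process_update"]
        update = load_ssz_snappy(params["update"], LightClientUpdate)
        if params["update_fork_digest"] != store_fork_digest:
            update = upgrade_lc_object(update, store_fork_digest)
        process_light_client_update(
            store, update, params["current_slot"], genesis_validators_root
        )
    apply_checks(store, step["checks"])  # verify the resulting store
```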

View File

@@ -1,6 +1,7 @@
# `LightClientUpdate` ranking tests
This series of tests provides reference test vectors for validating that `LightClientUpdate` instances are ranked in a canonical order.
This series of tests provides reference test vectors for validating that
`LightClientUpdate` instances are ranked in a canonical order.
## Test case format
@@ -12,10 +13,14 @@ updates_count: int -- integer, decimal
### `updates_<index>.ssz_snappy`
A series of files, with `<index>` in range `[0, updates_count)`, ordered by descending precedence according to `is_better_update` (best update at index 0).
A series of files, with `<index>` in range `[0, updates_count)`, ordered by
descending precedence according to `is_better_update` (best update at index 0).
Each file is an SSZ-snappy encoded `LightClientUpdate`.
## Condition
A test-runner should load the provided `update` objects and verify that the local implementation ranks them in the same order. Note that the `update` objects are not restricted to a single sync committee period for the scope of this test.
A test-runner should load the provided `update` objects and verify that the
local implementation ranks them in the same order. Note that the `update`
objects are not restricted to a single sync committee period for the scope of
this test.
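Since the files are ordered best-first, a runner can, for instance, verify the
ordering pairwise with `is_better_update` (`load_ssz_snappy` is again a
hypothetical helper):

```python
updates = [
    load_ssz_snappy(f"updates_{i}.ssz_snappy", LightClientUpdate)
    for i in range(updates_count)
]
# Every update should rank at least as well as the one after it.
for better, worse in zip(updates, updates[1:]):
    assert is_better_update(better, worse)
```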

View File

@@ -1,4 +1,6 @@
# Merkle proof tests
Handlers:
- `single_merkle_proof`: see [Single leaf merkle proof test format](../light_client/single_merkle_proof.md)
- `single_merkle_proof`: see
[Single leaf merkle proof test format](../light_client/single_merkle_proof.md)

View File

@@ -1,7 +1,11 @@
# Networking tests
The aim of the networking tests is to set a baseline on what really needs to pass, i.e. the essentials.
The aim of the networking tests is to set a baseline on what really needs to
pass, i.e. the essentials.
Handlers:
- [`compute_columns_for_custody_group`](./compute_columns_for_custody_group.md): `compute_columns_for_custody_group` helper tests
- [`get_custody_groups`](./get_custody_groups.md): `get_custody_groups` helper tests
- [`compute_columns_for_custody_group`](./compute_columns_for_custody_group.md):
`compute_columns_for_custody_group` helper tests
- [`get_custody_groups`](./get_custody_groups.md): `get_custody_groups` helper
tests

View File

@@ -1,6 +1,7 @@
# `compute_columns_for_custody_group` tests
`compute_columns_for_custody_group` tests provide sanity checks for the correctness of the `compute_columns_for_custody_group` helper function.
`compute_columns_for_custody_group` tests provide sanity checks for the
correctness of the `compute_columns_for_custody_group` helper function.
## Test case format

View File

@@ -1,6 +1,7 @@
# `get_custody_groups` tests
`get_custody_groups` tests provide sanity checks for the correctness of the `get_custody_groups` helper function.
`get_custody_groups` tests provide sanity checks for the correctness of the
`get_custody_groups` helper function.
## Test case format

View File

@@ -1,6 +1,7 @@
# Operations tests
The different kinds of operations ("transactions") are tested individually with test handlers.
The different kinds of operations ("transactions") are tested individually with
test handlers.
## Test case format
@@ -22,38 +23,41 @@ An SSZ-snappy encoded operation object, e.g. a `ProposerSlashing`, or `Deposit`.
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the operation. No value if operation processing is aborted.
An SSZ-snappy encoded `BeaconState`, the state after applying the operation. No
value if operation processing is aborted.
## Condition
A handler of the `operations` test-runner should process these cases,
calling the corresponding processing implementation.
This excludes the other parts of the block-transition.
A handler of the `operations` test-runner should process these cases, calling
the corresponding processing implementation. This excludes the other parts of
the block-transition.
Operations:
| *`operation-name`* | *`operation-object`* | *`input name`* | *`processing call`* |
|---------------------------|------------------------------|---------------------|----------------------------------------------------------------------------------|
| `attestation` | `Attestation` | `attestation` | `process_attestation(state, attestation)` |
| `attester_slashing` | `AttesterSlashing` | `attester_slashing` | `process_attester_slashing(state, attester_slashing)` |
| `block_header` | `BeaconBlock` | **`block`** | `process_block_header(state, block)` |
| `deposit` | `Deposit` | `deposit` | `process_deposit(state, deposit)` |
| `proposer_slashing` | `ProposerSlashing` | `proposer_slashing` | `process_proposer_slashing(state, proposer_slashing)` |
| `voluntary_exit` | `SignedVoluntaryExit` | `voluntary_exit` | `process_voluntary_exit(state, voluntary_exit)` |
| `sync_aggregate` | `SyncAggregate` | `sync_aggregate` | `process_sync_aggregate(state, sync_aggregate)` (new in Altair) |
| `execution_payload` | `BeaconBlockBody` | **`body`** | `process_execution_payload(state, body)` (new in Bellatrix) |
| `withdrawals` | `ExecutionPayload` | `execution_payload` | `process_withdrawals(state, execution_payload)` (new in Capella) |
| `bls_to_execution_change` | `SignedBLSToExecutionChange` | `address_change` | `process_bls_to_execution_change(state, address_change)` (new in Capella) |
| `deposit_request` | `DepositRequest` | `deposit_request` | `process_deposit_request(state, deposit_request)` (new in Electra) |
| `withdrawal_request` | `WithdrawalRequest` | `withdrawal_request` | `process_withdrawal_request(state, withdrawal_request)` (new in Electra) |
| `consolidation_request` | `ConsolidationRequest` | `consolidation_request` | `process_consolidation_request(state, consolidation_request)` (new in Electra) |
| *`operation-name`* | *`operation-object`* | *`input name`* | *`processing call`* |
| ------------------------- | ---------------------------- | ----------------------- | ------------------------------------------------------------------------------ |
| `attestation` | `Attestation` | `attestation` | `process_attestation(state, attestation)` |
| `attester_slashing` | `AttesterSlashing` | `attester_slashing` | `process_attester_slashing(state, attester_slashing)` |
| `block_header` | `BeaconBlock` | **`block`** | `process_block_header(state, block)` |
| `deposit` | `Deposit` | `deposit` | `process_deposit(state, deposit)` |
| `proposer_slashing` | `ProposerSlashing` | `proposer_slashing` | `process_proposer_slashing(state, proposer_slashing)` |
| `voluntary_exit` | `SignedVoluntaryExit` | `voluntary_exit` | `process_voluntary_exit(state, voluntary_exit)` |
| `sync_aggregate` | `SyncAggregate` | `sync_aggregate` | `process_sync_aggregate(state, sync_aggregate)` (new in Altair) |
| `execution_payload` | `BeaconBlockBody` | **`body`** | `process_execution_payload(state, body)` (new in Bellatrix) |
| `withdrawals` | `ExecutionPayload` | `execution_payload` | `process_withdrawals(state, execution_payload)` (new in Capella) |
| `bls_to_execution_change` | `SignedBLSToExecutionChange` | `address_change` | `process_bls_to_execution_change(state, address_change)` (new in Capella) |
| `deposit_request` | `DepositRequest` | `deposit_request` | `process_deposit_request(state, deposit_request)` (new in Electra) |
| `withdrawal_request` | `WithdrawalRequest` | `withdrawal_request` | `process_withdrawal_request(state, withdrawal_request)` (new in Electra) |
| `consolidation_request` | `ConsolidationRequest` | `consolidation_request` | `process_consolidation_request(state, consolidation_request)` (new in Electra) |
Note that `block_header` is not strictly an operation (and is a full `Block`), but is processed in the same manner, and hence included here.
Note that `block_header` is not strictly an operation (and is a full `Block`),
but is processed in the same manner, and hence included here.
The `execution_payload` processing normally requires a `verify_execution_state_transition(execution_payload)`,
a responsibility of an (external) execution engine.
During testing this execution is mocked, an `execution.yml` is provided instead:
a dict containing an `execution_valid` boolean field with the verification result.
The `execution_payload` processing normally requires a
`verify_execution_state_transition(execution_payload)`, a responsibility of an
(external) execution engine. During testing this execution is mocked, an
`execution.yml` is provided instead: a dict containing an `execution_valid`
boolean field with the verification result.
The resulting state should match the expected `post` state, or if the `post` state is left blank,
the handler should reject the input operation as invalid.
The resulting state should match the expected `post` state, or if the `post`
state is left blank, the handler should reject the input operation as invalid.
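As an illustration, a handler invocation for the `attestation` case might look
as follows; the other handlers follow the table above with their respective
processing calls:

```python
# Sketch: process the operation on a copy of `pre`; a missing `post` state
# means processing is expected to fail.
def run_attestation_case(pre, attestation, post):
    state = pre.copy()
    try:
        process_attestation(state, attestation)
    except Exception:
        assert post is None
        return
    assert post is not None and state == post
```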

View File

@@ -4,4 +4,5 @@ The random tests are generated with various randomized states and blocks.
## Test case format
- `random` handler: same as the [`blocks`](../sanity/blocks.md) handler test case format from sanity tests.
- `random` handler: same as the [`blocks`](../sanity/blocks.md) handler test
case format from sanity tests.

View File

@@ -1,8 +1,8 @@
# Rewards tests
All rewards deltas sub-functions are tested for each test case.
There is no "change" factor, the rewards/penalties outputs are pure functions with just the pre-state as input.
(See test condition documentation on how to run the tests.)
All rewards deltas sub-functions are tested for each test case. There is no
"change" factor, the rewards/penalties outputs are pure functions with just the
pre-state as input. (See test condition documentation on how to run the tests.)
`Deltas` is defined as:
@@ -22,43 +22,49 @@ description: string -- Optional description of test case, purely for debuggin
```
_Note_: No signature verification happens within rewards sub-functions. These
tests can safely be run with or without BLS enabled.
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the rewards sub-function.
An SSZ-snappy encoded `BeaconState`, the state before running the rewards
sub-function.
### `source_deltas.ssz_snappy`
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned by the `get_source_deltas` function.
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned
by the `get_source_deltas` function.
### `target_deltas.ssz_snappy`
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned by the `get_target_deltas` function.
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned
by the `get_target_deltas` function.
### `head_deltas.ssz_snappy`
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned by the `get_head_deltas` function.
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned
by the `get_head_deltas` function.
### `inclusion_delay_deltas.ssz_snappy`
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned by the `get_inclusion_delay_deltas` function.
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned
by the `get_inclusion_delay_deltas` function.
### `inactivity_penalty_deltas.ssz_snappy`
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned by the `get_inactivity_penalty_deltas` function.
An SSZ-snappy encoded `Deltas` representing the rewards and penalties returned
by the `get_inactivity_penalty_deltas` function.
## Condition
A handler of the `rewards` test-runner should process these cases,
calling the corresponding rewards deltas function for each set of deltas.
A handler of the `rewards` test-runner should process these cases, calling the
corresponding rewards deltas function for each set of deltas.
The provided pre-state is ready to be input into each rewards deltas function.
The provided `deltas` should match the return values of the
deltas function. Specifically the following must hold true for each set of deltas:
The provided `deltas` should match the return values of the deltas function.
Specifically the following must hold true for each set of deltas:
```python
deltas.rewards == deltas_function(state)[0]
deltas.penalties == deltas_function(state)[1]
```

View File

@@ -1,7 +1,10 @@
# Sanity tests
The aim of the sanity tests is to set a baseline on what really needs to pass, i.e. the essentials.
The aim of the sanity tests is to set a baseline on what really needs to pass,
i.e. the essentials.
There are two handlers, documented individually:
- [`slots`](./slots.md): transitions of one or more slots (and epoch transitions within)
- [`slots`](./slots.md): transitions of one or more slots (and epoch transitions
within)
- [`blocks`](./blocks.md): transitions triggered by one or more blocks

View File

@@ -1,6 +1,7 @@
# Sanity blocks testing
Sanity tests to cover a series of one or more blocks being processed, aiming to cover common changes.
Sanity tests to cover a series of one or more blocks being processed, aiming to
cover common changes.
## Test case format
@@ -15,20 +16,23 @@ blocks_count: int -- the number of blocks processed in this test.
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state before running the block transitions.
An SSZ-snappy encoded `BeaconState`, the state before running the block
transitions.
### `blocks_<index>.ssz_snappy`
A series of files, with `<index>` in range `[0, blocks_count)`. Blocks need to be processed in order,
following the main transition function (i.e. process slot and epoch transitions in between blocks as normal).
A series of files, with `<index>` in range `[0, blocks_count)`. Blocks need to
be processed in order, following the main transition function (i.e. process slot
and epoch transitions in between blocks as normal).
Each file is an SSZ-snappy encoded `SignedBeaconBlock`.
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState`, the state after applying the block transitions.
An SSZ-snappy encoded `BeaconState`, the state after applying the block
transitions.
## Condition
The resulting state should match the expected `post` state, or if the `post` state is left blank,
the handler should reject the series of blocks as invalid.
The resulting state should match the expected `post` state, or if the `post`
state is left blank, the handler should reject the series of blocks as invalid.
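A sketch of the blocks handler, assuming the spec's `state_transition` (which
performs the slot and epoch transitions between blocks) and a hypothetical
`load_ssz_snappy` helper:

```python
state = pre.copy()
try:
    for i in range(blocks_count):
        signed_block = load_ssz_snappy(f"blocks_{i}.ssz_snappy", SignedBeaconBlock)
        state_transition(state, signed_block)  # mutates `state` in place
except Exception:
    assert post is None  # an invalid block series must be rejected
else:
    assert state == post
```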

View File

@@ -1,6 +1,7 @@
# Sanity slots testing
Sanity tests to cover a series of one or more empty-slot transitions being processed, aiming to cover common changes.
Sanity tests to cover a series of one or more empty-slot transitions being
processed, aiming to cover common changes.
## Test case format
@@ -19,7 +20,8 @@ Also available as `pre.ssz_snappy`.
### `slots.yaml`
An integer. The number of slots to process (i.e. the difference in slots between pre and post), always a positive number.
An integer. The number of slots to process (i.e. the difference in slots between
pre and post), always a positive number.
### `post.ssz_snappy`
@@ -29,8 +31,9 @@ Also available as `post.ssz_snappy`.
### Processing
The transition with pure time, no blocks, is known as `process_slots(state, slot)` in the spec.
This runs state-caching (pure slot transition) and epoch processing (every E slots).
The transition with pure time, no blocks, is known as
`process_slots(state, slot)` in the spec. This runs state-caching (pure slot
transition) and epoch processing (every E slots).
To process the data, call `process_slots(pre, pre.slot + N)`.
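In other words, with `N` read from `slots.yaml`:

```python
state = pre.copy()
process_slots(state, state.slot + N)  # mutates `state` in place
assert state == post
```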

View File

@@ -2,15 +2,19 @@
The runner of the Shuffling test type has only one handler: `core`.
However, this does not mean that testing is limited.
Clients may take different approaches to shuffling, for optimizing,
and supporting advanced lookup behavior back in older history.
However, this does not mean that testing is limited. Clients may take different
approaches to shuffling, for optimizing, and supporting advanced lookup behavior
back in older history.
For implementers, possible test runner implementations include:
1) Just test permute-index, run it for each index `i` in `range(count)`, and check against expected `mapping[i]` (default spec implementation).
2) Test un-permute-index (the reverse lookup; implemented by running the shuffling rounds in reverse, from `round_count-1` to `0`).
3) Test the optimized complete shuffle, where all indices are shuffled at once; test output in one go.
4) Test complete shuffle in reverse (reverse rounds, same as #2).
1. Just test permute-index, run it for each index `i` in `range(count)`, and
check against expected `mapping[i]` (default spec implementation).
2. Test un-permute-index (the reverse lookup; implemented by running the
shuffling rounds in reverse, from `round_count-1` to `0`).
3. Test the optimized complete shuffle, where all indices are shuffled at once;
test output in one go.
4. Test complete shuffle in reverse (reverse rounds, same as #2).
## Test case format
@@ -22,17 +26,23 @@ count: int
mapping: List[int]
```
- The `bytes32` is encoded as a string, hexadecimal encoding, prefixed with `0x`.
- Integers are validator indices. These are `uint64`, but realistically they are not as big.
- The `bytes32` is encoded as a string, hexadecimal encoding, prefixed with
`0x`.
- Integers are validator indices. These are `uint64`, but realistically they are
not as big.
The `count` specifies the validator registry size. One should compute the shuffling for indices `0, 1, 2, 3, ..., count (exclusive)`.
The `count` specifies the validator registry size. One should compute the
shuffling for indices `0, 1, 2, 3, ..., count (exclusive)`.
The `seed` is the raw shuffling seed, passed to permute-index (or optimized shuffling approach).
The `seed` is the raw shuffling seed, passed to permute-index (or optimized
shuffling approach).
The `mapping` is a lookup array, constructed as `[spec.compute_shuffled_index(i, count, seed) for i in range(count)]`, i.e. `mapping[i]` is the shuffled location of `i`.
The `mapping` is a lookup array, constructed as
`[spec.compute_shuffled_index(i, count, seed) for i in range(count)]`, i.e.
`mapping[i]` is the shuffled location of `i`.
## Condition
The resulting list should match the expected output after shuffling the implied input, using the given `seed`.
The output is checked using the `mapping`, based on the shuffling test type (e.g. can be backwards shuffling).
The resulting list should match the expected output after shuffling the implied
input, using the given `seed`. The output is checked using the `mapping`, based
on the shuffling test type (e.g. can be backwards shuffling).
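A sketch of runner approaches 1 and 2 from the list above;
`compute_unshuffled_index` is a hypothetical client-side reverse lookup, not a
spec function:

```python
# Approach 1: forward permute-index against the expected mapping.
for i in range(count):
    assert compute_shuffled_index(i, count, seed) == mapping[i]

# Approach 2: the reverse lookup maps each shuffled location back.
for i in range(count):
    assert compute_unshuffled_index(mapping[i], count, seed) == i
```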

View File

@@ -1,34 +1,38 @@
# SSZ, generic tests
This set of test-suites provides general testing for SSZ:
to decode any container/list/vector/other type from binary data, encode it back, and compute the hash-tree-root.
This set of test-suites provides general testing for SSZ: to decode any
container/list/vector/other type from binary data, encode it back, and compute
the hash-tree-root.
This test collection for general-purpose SSZ is experimental.
The `ssz_static` suite is the required minimal support for SSZ, and should be prioritized.
This test collection for general-purpose SSZ is experimental. The `ssz_static`
suite is the required minimal support for SSZ, and should be prioritized.
The `ssz_generic` tests are split up into different handlers, each specialized in an SSZ type:
The `ssz_generic` tests are split up into different handlers, each specialized
in an SSZ type:
- Vectors
  - `basic_vector`
  - `complex_vector` *not supported yet*
- List
  - `basic_list` *not supported yet*
  - `complex_list` *not supported yet*
- Bitfields
  - `bitvector`
  - `bitlist`
- Basic types
  - `boolean`
  - `uints`
- Containers
  - `containers`
## Format
For each type, a `valid` and an `invalid` suite is implemented.
The cases have the same format, but those in the `invalid` suite only declare a subset of the data that a test in the `valid` suite declares.
For each type, a `valid` and an `invalid` suite is implemented. The cases have
the same format, but those in the `invalid` suite only declare a subset of the
data that a test in the `valid` suite declares.
Each of the handlers encodes the SSZ type declaration in the file-name. See [Type Declarations](#type-declarations).
Each of the handlers encodes the SSZ type declaration in the file-name. See
[Type Declarations](#type-declarations).
### `valid`
@@ -36,8 +40,8 @@ Valid has 3 parts: `meta.yaml`, `serialized.ssz_snappy`, `value.yaml`
### `meta.yaml`
Valid SSZ objects can have a hash-tree-root.
The expected roots are encoded into the metadata yaml:
Valid SSZ objects can have a hash-tree-root. The expected roots are encoded into
the metadata yaml:
```yaml
root: Bytes32 -- Hash-tree-root of the object
@@ -51,14 +55,17 @@ The serialized form of the object, as snappy-compressed SSZ bytes.
### `value.yaml`
The object, encoded as a YAML structure. Using the same familiar encoding as YAML data in the other test suites.
The object, encoded as a YAML structure. Using the same familiar encoding as
YAML data in the other test suites.
### Conditions
The conditions are the same for each type:
- Encoding: After encoding the given `value` object, the output should match `serialized`.
- Decoding: After decoding the given `serialized` bytes, it should match the `value` object.
- Encoding: After encoding the given `value` object, the output should match
`serialized`.
- Decoding: After decoding the given `serialized` bytes, it should match the
`value` object.
- Hash-tree-root: the root should match the root declared in the metadata.
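A sketch of these three conditions for one test case; `decode_yaml_value`,
`serialize` and `deserialize` stand in for the implementation's own entry
points:

```python
value = decode_yaml_value(value_yaml, ssz_type)    # parse `value.yaml`
assert serialize(value) == serialized              # encoding
assert deserialize(serialized, ssz_type) == value  # decoding
assert hash_tree_root(value) == root               # root from `meta.yaml`
```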
## `invalid`
@@ -67,19 +74,22 @@ Test cases in the `invalid` suite only include the `serialized.ssz_snappy`
#### Condition
Unlike the `valid` suite, invalid encodings do not have any `value` or hash tree root.
The `serialized` data should simply not be decoded without raising an error.
Unlike the `valid` suite, invalid encodings do not have any `value` or hash tree
root. The `serialized` data should simply not be decoded without raising an
error.
Note that for some type declarations in the invalid suite, the type itself may technically be invalid.
This is a valid way of detecting `invalid` data too. E.g. a 0-length basic vector.
Note that for some type declarations in the invalid suite, the type itself may
technically be invalid. This is a valid way of detecting `invalid` data too.
E.g. a 0-length basic vector.
## Type declarations
Most types are not as static, and can reasonably be constructed during test runtime from the test case name.
Formats are listed below.
Most types are not as static, and can reasonably be constructed during test
runtime from the test case name. Formats are listed below.
For each test case, an additional `_{extra...}` may be appended to the name,
where `{extra...}` contains a human readable indication of the test case contents for debugging purposes.
where `{extra...}` contains a human readable indication of the test case
contents for debugging purposes.
### `basic_vector`
@@ -121,7 +131,8 @@ Data:
### `boolean`
A boolean has no type variations. Instead, file names just plainly describe the contents for debugging.
A boolean has no type variations. Instead, file names just plainly describe the
contents for debugging.
### `uints`
@@ -137,7 +148,8 @@ Data:
### `containers`
Containers are more complicated than the other types. Instead, a set of pre-defined container structures is referenced:
Containers are more complicated than the other types. Instead, a set of
pre-defined container structures is referenced:
```
Template:
@@ -150,7 +162,6 @@ Data:
```
```python
class SingleFieldTestStruct(Container):
    A: byte

View File

@@ -1,8 +1,9 @@
# SSZ, static tests
This set of test-suites provides static testing for SSZ:
to instantiate just the known Ethereum SSZ types from binary data.
This set of test-suites provides static testing for SSZ: to instantiate just the
known Ethereum SSZ types from binary data.
This series of tests is based on the spec-maintained `eth2spec/utils/ssz/ssz_impl.py`, i.e. fully consistent with the SSZ spec.
This series of tests is based on the spec-maintained
`eth2spec/utils/ssz/ssz_impl.py`, i.e. fully consistent with the SSZ spec.
Test format documentation can be found here: [core test format](./core.md).

View File

@@ -1,22 +1,26 @@
# Test format: SSZ static types
The goal of this type is to provide clients with a solid reference for how the known SSZ objects should be encoded.
Each object described in the Phase 0 spec is covered.
This is important, as many of the clients aiming to serialize/deserialize objects directly into structs/classes
do not support (or have alternatives for) generic SSZ encoding/decoding.
The goal of this type is to provide clients with a solid reference for how the
known SSZ objects should be encoded. Each object described in the Phase 0 spec
is covered. This is important, as many of the clients aiming to
serialize/deserialize objects directly into structs/classes do not support (or
have alternatives for) generic SSZ encoding/decoding.
This test-format ensures these direct serializations are covered.
Note that this test suite does not cover the invalid-encoding case:
SSZ implementations should be hardened against invalid inputs with the other SSZ tests as a guide, along with fuzzing.
Note that this test suite does not cover the invalid-encoding case: SSZ
implementations should be hardened against invalid inputs with the other SSZ
tests as a guide, along with fuzzing.
## Test case format
Each SSZ type is a `handler`, since the format is semantically different: the type of the data is different.
Each SSZ type is a `handler`, since the format is semantically different: the
type of the data is different.
One can iterate over the handlers, and select the type based on the handler name.
Suites are then the same format, but each specialized in one randomization mode.
Some randomization modes may only produce a single test case (e.g. the all-zeroes case).
One can iterate over the handlers, and select the type based on the handler
name. Suites are then the same format, but each specialized in one randomization
mode. Some randomization modes may only produce a single test case (e.g. the
all-zeroes case).
The output parts are: `roots.yaml`, `serialized.ssz_snappy`, `value.yaml`
@@ -37,15 +41,20 @@ The same value as `serialized.ssz_snappy`, represented as YAML.
## Condition
A test-runner can implement the following assertions:
- If YAML decoding of SSZ objects is supported by the implementation:
- Serialization: After parsing the `value`, SSZ-serialize it: the output should match `serialized`
- Deserialization: SSZ-deserialize the `serialized` value, and see if it matches the parsed `value`
- Serialization: After parsing the `value`, SSZ-serialize it: the output
should match `serialized`
- Deserialization: SSZ-deserialize the `serialized` value, and see if it
matches the parsed `value`
- If YAML decoding of SSZ objects is not supported by the implementation:
- Serialization in 2 steps: deserialize `serialized`, then serialize the result,
and verify if the bytes match the original `serialized`.
- Hash-tree-root: After parsing the `value` (or deserializing `serialized`), Hash-tree-root it: the output should match `root`
- Serialization in 2 steps: deserialize `serialized`, then serialize the
result, and verify if the bytes match the original `serialized`.
- Hash-tree-root: After parsing the `value` (or deserializing `serialized`),
Hash-tree-root it: the output should match `root`
## References
**`serialized`**—[SSZ serialization](../../../ssz/simple-serialize.md#serialization)
**`root`**—[hash_tree_root](../../../ssz/simple-serialize.md#merkleization) function
**`root`**—[hash_tree_root](../../../ssz/simple-serialize.md#merkleization)
function

View File

@@ -1,3 +1,4 @@
# Sync tests
These tests reuse the [fork choice test format](../fork_choice/README.md) to apply the test script.
These tests reuse the [fork choice test format](../fork_choice/README.md) to
apply the test script.

View File

@@ -2,7 +2,8 @@
Transition tests to cover processing the chain across a fork boundary.
Each test case contains a `post_fork` key in the `meta.yaml` that indicates the target fork, which also fixes the fork the test begins in.
Each test case contains a `post_fork` key in the `meta.yaml` that indicates the
target fork, which also fixes the fork the test begins in.
Clients should assume forks happen sequentially in the following manner:
@@ -12,7 +13,11 @@ Clients should assume forks happen sequentially in the following manner:
3. `capella`
4. `deneb`
For example, if a test case has `post_fork` of `altair`, the test consumer
should assume the test begins in `phase0` and use that specification to process
the initial state and any blocks up until the fork epoch. After the fork
happens, the test consumer should use the specification according to the
`altair` fork to process the remaining data.
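
As a small illustration (not part of the test format), the initial fork can be
derived from `post_fork` by a lookup over that sequential order; the
`FORK_ORDER` list below is an assumption covering the forks named here.

```python
# Sketch: infer the fork a test begins in from its `post_fork` value,
# assuming the sequential fork order listed above.
FORK_ORDER = ["phase0", "altair", "bellatrix", "capella", "deneb"]

def initial_fork(post_fork: str) -> str:
    i = FORK_ORDER.index(post_fork)
    assert i > 0, "transition tests always target a fork after phase0"
    return FORK_ORDER[i - 1]

assert initial_fork("altair") == "phase0"
assert initial_fork("deneb") == "capella"
```
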
## Test case format
@@ -30,26 +35,26 @@ Refer to the specs for the relevant fork for further details.
### `pre.ssz_snappy`
An SSZ-snappy encoded `BeaconState` according to the specification of the
initial fork; this is the state before running the block transitions.
### `blocks_<index>.ssz_snappy`
A series of files, with `<index>` in range `[0, blocks_count)`. Blocks must be
processed in order, following the main transition function (i.e. process slot
and epoch transitions in between blocks as normal).
Blocks are encoded as `SignedBeaconBlock`s from the relevant spec version as
indicated by the `post_fork` and `fork_block` data in the `meta.yaml`.
As blocks span fork boundaries, a `fork_block` number is given in the
`meta.yaml` to help resolve which blocks belong to which fork.
The `fork_block` is the index in the test data of the **last** block of the
**initial** fork.
To demonstrate, the following diagram shows slots with `_` and blocks in those
slots as `x`. The fork happens at the epoch delineated by the `|`.
```
x x x x
@@ -62,13 +67,13 @@ testing the fork from Phase 0 to Altair, blocks with indices `0, 1` represent
`SignedBeaconBlock`s defined in the Phase 0 spec and blocks with indices `2, 3`
represent `SignedBeaconBlock`s defined in the Altair spec.
*Note*: If `fork_block` is missing, then all block data should be interpreted as
belonging to the post fork.
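
To illustrate how a consumer might use `fork_block`, here is a hedged Python
sketch; the `pre_spec`/`post_spec` modules and their `BeaconState`,
`SignedBeaconBlock`, and `state_transition` helpers are hypothetical stand-ins
for fork-specific client code, not a prescribed API.

```python
# Hypothetical sketch of a transition-test consumer. Spec modules and their
# decode/state_transition helpers are assumptions for illustration.
from pathlib import Path

import snappy
import yaml

def run_transition_case(case_dir: Path, pre_spec, post_spec) -> None:
    meta = yaml.safe_load((case_dir / "meta.yaml").read_text())
    fork_block = meta.get("fork_block")  # index of the last initial-fork block, may be absent

    state = pre_spec.BeaconState.decode(
        snappy.decompress((case_dir / "pre.ssz_snappy").read_bytes()))

    for i in range(meta["blocks_count"]):
        # Blocks with index <= fork_block belong to the initial fork; if
        # fork_block is absent, every block belongs to the post fork.
        spec = pre_spec if fork_block is not None and i <= fork_block else post_spec
        raw = snappy.decompress((case_dir / f"blocks_{i}.ssz_snappy").read_bytes())
        # The upgrade-at-fork step is assumed to happen inside state_transition.
        state = spec.state_transition(state, spec.SignedBeaconBlock.decode(raw))

    expected = post_spec.BeaconState.decode(
        snappy.decompress((case_dir / "post.ssz_snappy").read_bytes()))
    assert state == expected  # final state must match post.ssz_snappy
```
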
### `post.ssz_snappy`
An SSZ-snappy encoded `BeaconState` according to the specification of the post
fork; this is the state after running the block transitions.
## Condition
View File
@@ -1,18 +1,24 @@
# Fork choice compliance test generator
A Fork Choice test generator intended to produce tests that validate the
conformance of various Fork Choice implementations to the specs.
An implementation of the approach described in the
[Fork Choice compliance testing framework](https://hackmd.io/@ericsson49/fork-choice-implementation-vs-spec-testing).
Preliminary research was also performed in this
[repo](https://github.com/txrx-research/fork_choice_test_generation/tree/main).
To simplify adoption of the tests, we follow the test format described in the
[fork choice test formats documentation](../../../formats/fork_choice/README.md),
with a minor exception (new check added).
This work was supported by a grant from the Ethereum Foundation.
# Pre-requisites
Install pyspec using the top-level Makefile; this will install the necessary
prerequisites.
```
> make pyspec
@@ -25,7 +31,8 @@ From the root directory:
```
> python -m tests.generators.compliance_runners.fork_choice.test_gen -o ${test_dir} --fc-gen-config ${config}
```
where `config` can be either `tiny`, `small` or `standard`.
Or specify the path to the configuration file directly:
@@ -33,7 +40,8 @@ Or specify path to the configuration file directly:
> python -m tests.generators.compliance_runners.fork_choice.test_gen -o ${test_dir} --fc-gen-config-path ${config_path}
```
There are three configurations in the repo: [tiny](tiny/), [small](small/) and
[standard](standard/).
# Running tests
@@ -45,7 +53,9 @@ From the root directory:
# Generating configurations
Files in [tiny](tiny/), [small](small/) and [standard](standard/) are generated
with [generate_test_instances.py](generate_test_instances.py), e.g.
```
> python -m tests.generators.compliance_runners.fork_choice.generate_test_instances
```