This PR adds a basic block machine processor. Currently, we assume a
rectangular block shape and just iterate over all rows and identities of
the block until no more progress is made. This is sufficient to generate
code for Poseidon.
Formats a vector of "effects" coming from the witgen solver into Rust
code, then compiles and loads it.
Submachine calls and receiving arguments will be done in another PR.
This code assumes `known` to be a padded bit vector (#2230).
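As a rough illustration of the formatting step (the `Effect` and `format_effects` names below are hypothetical stand-ins, not the PR's actual types): each effect is rendered as one Rust statement over a flat cell buffer.

```rust
// Hypothetical sketch of turning solver "effects" into Rust source.
// An effect assigns an already-rendered value expression to a cell index;
// code generation emits one statement per effect over a flat `data` slice.

enum Effect {
    /// Assign `value_expr` (rendered Rust source) to the cell at `index`.
    Assignment { index: usize, value_expr: String },
}

fn format_effects(effects: &[Effect]) -> String {
    effects
        .iter()
        .map(|e| match e {
            Effect::Assignment { index, value_expr } => {
                format!("data[{index}] = {value_expr};")
            }
        })
        .collect::<Vec<_>>()
        .join("\n")
}
```

The generated string would then be written to a file, compiled with `rustc`, and loaded as a shared library.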
---------
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
A few preparations for #2226:
- Extracted a `test_utils` module
- Introduced a new `Variable` type, which can refer to either a cell or
a "parameter" (either input or output of a machine call). I think in the
future, we could have more variants (e.g. scalar publics). `Variable` is
now used instead of `Cell` in `WitgenInference`.
- `WitgenInference::process_identity` now also returns whether any
progress has been made.
- Renamed `lookup` -> `machine_call` when rendering `Effect`s
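The new `Variable` type could look roughly like this (a sketch with illustrative field names, not the exact definition from the PR):

```rust
// Sketch of the `Variable` type: a variable is either a cell in the trace
// or a parameter (input or output) of a machine call. More variants
// (e.g. scalar publics) could be added later.

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Cell {
    column_id: u64,
    /// Row offset relative to the current row (e.g. -1 for the previous row).
    row_offset: i32,
}

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Variable {
    /// A cell in the trace.
    Cell(Cell),
    /// The i-th input or output of a machine call.
    Param(usize),
}
```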
This PR adds a component that can derive assignments and other code on
identities and multiple rows. It keeps track of which cells in the trace
are already known and which are not. Access to fixed rows is abstracted,
because the component has no concept of an absolute row. While this
might work for block machines with cyclic fixed columns, it does not
work in the general case.
What it does not do:
- maintain a sequence of which identities to consider on which rows
- provide a mechanism that determines when it is finished
---------
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
This PR performs preliminary preparations in the block machine so that
it will be able to JIT-compile and evaluate lookups into this machine
given a certain combination of "known inputs".
---------
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
This module is an equivalent of the existing affine_expression.rs, but
for compile-time execution on symbolic values instead of run-time
execution on concrete values.
Using the operators defined on that type, you can build a
SymbolicAffineExpression from a polynomial identity and then use
`.solve()` to try to solve for one unknown variable. The result (instead
of a concrete assignment as in affine_expression.rs) is a
SymbolicExpression, i.e. a complex expression involving variables
(assumed to have a concrete value known at run time), constants and
certain operators on them.
The idea is to apply SymbolicAffineExpression to the polynomial
identities in turn, solving for one cell of the trace after the other.
The resulting SymbolicExpression can then be translated to Rust or PIL.
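The solving step itself is simple algebra: given `sum_i coeff_i * x_i + offset = 0` with exactly one unknown `x_k`, we get `x_k = -(offset + sum_{i != k} coeff_i * x_i) / coeff_k`. A toy sketch (with i64 coefficients and string-rendered symbolic terms standing in for SymbolicExpression):

```rust
// Toy model of solving an affine constraint for its single unknown.
// Known terms are "symbolic": we only render them as source text instead
// of evaluating them, mirroring the SymbolicExpression idea.

struct AffineTerm {
    coeff: i64,
    var: String,
    known: bool,
}

/// If exactly one term is unknown, solve for it and return
/// (variable name, right-hand side rendered as source text).
fn solve(terms: &[AffineTerm], offset: i64) -> Option<(String, String)> {
    let unknowns: Vec<&AffineTerm> = terms.iter().filter(|t| !t.known).collect();
    if unknowns.len() != 1 {
        return None;
    }
    let unknown = unknowns[0];
    // Move the offset and all known terms to the right-hand side.
    let mut rhs = format!("-({offset}");
    for t in terms.iter().filter(|t| t.known) {
        rhs.push_str(&format!(" + {} * {}", t.coeff, t.var));
    }
    rhs.push(')');
    if unknown.coeff != 1 {
        rhs = format!("{rhs} / {}", unknown.coeff);
    }
    Some((unknown.var.clone(), rhs))
}
```

For `x + 2 * y + 5 = 0` with `y` known, this yields `x = -(5 + 2 * y)`; in the real code, the right-hand side is a SymbolicExpression tree rather than a string, and division happens in the field.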
This PR refactors a few things:
- `process_lookup_direct` no longer has a default implementation.
Eventually, we want all machines to implement it, so I figured it would
be better to explicitly panic in each machine.
- Refactored the implementation of
`FixedLookupMachine::process_plookup`, pulling some stuff out into a new
`CallerData` struct. This is similar to what @chriseth has done on
[`call_jit_from_block`](https://github.com/powdr-labs/powdr/compare/main...call_jit_from_block),
see the comment below.
- As a first test, I implemented `process_lookup_direct` for the
"large"-field memory machine (and `process_plookup` by wrapping
`process_lookup_direct`)
This PR:
- Removes the `acc_next` columns, which were only needed because of a
limitation of prover functions. The prover function that existed is now
removed entirely, because we use the hand written witgen anyway, see
#2191.
- Materializes the folded tuple. This lowers the degree of
the constraints if the tuples being sent have a degree > 1. It also
enables next references in the tuple being sent.
As a result, we can now generate Plonky3 proofs with a bus!
```bash
cargo run -r --features plonky3 --bin powdr-rs compile riscv/tests/riscv_data/keccak -o output --max-degree-log 18 --field gl
cargo run -r --features plonky3 pil output/keccak.asm -o output -f --field gl --prove-with plonky3 --linker-mode bus
```
The proof generation takes 8.32s (of which 394ms are spent on generating
the second-stage witness). This compares to 2.07s proof time without a
bus.
Builds on #2194 and #2183.
This PR gives us (relatively) fast witness generation for the bus, by
writing custom code instead of relying on the generic solver + prover
functions:
```
$ cargo run -r --features plonky3 --bin powdr-rs compile riscv/tests/riscv_data/keccak -o output --max-degree-log 18 --field gl
$ cargo run -r --features plonky3 pil output/$TEST.asm -o output -f --field gl --prove-with mock --linker-mode bus
...
Running main machine for 262144 rows
[00:00:05 (ETA: 00:00:05)] █████████░░░░░░░░░░░ 48% - 24283 rows/s, 3169k identities/s, 92% progress
Found loop with period 1 starting at row 127900
[00:00:05 (ETA: 00:00:00)] ████████████████████ 100% - 151125 rows/s, 16170k identities/s, 100% progress
Witness generation took 5.748081s
Writing output/commits.bin.
Backend setup for mock...
Setup took 0.54769236s
Generating later-stage witnesses took 0.29s
Proof generation took 2.0383847s
```
On `main`, second-stage witgen for the main machine alone takes about 5
minutes.
This PR:
- Renames the current `executor::witgen::ExpressionEvaluator` to
`executor::witgen::evaluators::partial_expression_evaluator::PartialExpressionEvaluator`
- It is used when solving and evaluates to an
`AffineResult<AlgebraicVariable<'a>, T>`, which might still contain
unknown variables.
- Adds a new `ExpressionEvaluator` that simply evaluates to `T`
- Changes `MockBackend` to use the new `ExpressionEvaluator` (previously
wrapped what is now called the `PartialExpressionEvaluator`)
As a result, the code in `MockBackend` can be simplified. Also, I'm
building on this in #2191 for fast witness generation for the bus.
This PR adds a `Constr::PhantomBusInteraction` variant. For now, it is
ignored - if users want to use a bus, they need to express this in terms
of phantom lookups / permutations as before this PR.
I added a few `TODO(bus_interaction)` and opened #2184 to track support
for phantom bus interactions.
One use case it could already have, though, is to trigger a
"hand-written" witness generation for the bus, as discussed in the chat.
Builds on #2169
With this PR, second-stage witness generation works for the bus used in
the RISC-V machine 🎉
This is an end-to-end test:
```bash
cargo run -r --bin powdr-rs compile riscv/tests/riscv_data/sum -o output --max-degree-log 15 --field gl
cargo run -r pil output/sum.asm -o output -f --field gl --prove-with mock --linker-mode bus -i 1,1,1
```
What's needed are two small changes to `VmProcessor`:
- The degree is now passed by the caller (`DynamicMachine` or
`SecondStageMachine`). That way, `SecondStageMachine` can set it to the
actual final size, instead of the maximum allowed degree.
- I disabled loop detection for second-stage witness generation for now.
Cherry-picked from #2174
With this PR, we run all prover functions in parallel when solving for
the witness in `VmProcessor`. Interestingly, this didn't require any
changes to the order in which things are done: We already ran the
functions independently and applied the combined updates. So, this is a
classic map-reduce.
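The map-reduce shape can be sketched with scoped threads (the real code uses the project's own parallelism setup; `Updates` and `run_parallel` are illustrative names):

```rust
// Map-reduce over prover functions: run each function independently
// (map), then merge all update sets and apply them in one step (reduce).

use std::collections::BTreeMap;
use std::thread;

// Stand-in for the solver's update set: column name -> new value.
type Updates = BTreeMap<&'static str, u64>;

fn run_parallel(prover_functions: &[Box<dyn Fn() -> Updates + Send + Sync>]) -> Updates {
    thread::scope(|s| {
        // Map: run every prover function on its own thread.
        let handles: Vec<_> = prover_functions
            .iter()
            .map(|f| s.spawn(move || f()))
            .collect();
        // Reduce: merge the independent update sets.
        handles
            .into_iter()
            .map(|h| h.join().unwrap())
            .fold(Updates::new(), |mut acc, u| {
                acc.extend(u);
                acc
            })
    })
}
```

This works precisely because the functions were already run independently before, with the combined updates applied afterwards.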
I think this change always makes sense, but is especially useful for the
prover functions we have to set bus accumulator values. For example, in
our RISC-V machine, the main machine has ~30 bus interactions, with a
fairly expensive prover function for each.
When used on top of #2173 and #2175, this accelerates second-stage
witness generation for the main machine from ~10s to ~6s for the example
mentioned in #2173.
This introduces a new machine which is always used for the second-stage
witness generation. Currently it is a copy of DynamicMachine, but in the
end it will be optimized for second-stage witness generation.
With this PR, we get much better error messages for connection errors.
This is an example of an issue we're currently debugging:
```
$ cargo run -r pil test_data/std/keccakf16_memory_test.asm -o output -f --prove-with mock --export-witness-csv
...
Errors in 50 / 213 connections:
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::input_addr_h, main_keccakf16_memory::input_addr_l, main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[3], main_keccakf16_memory::preimage[2]] is main_memory::selectors[2] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 0, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[0], main_keccakf16_memory::addr_l[0], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[1], main_keccakf16_memory::preimage[0]] is main_memory::selectors[3] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 4, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[1], main_keccakf16_memory::addr_l[1], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[7], main_keccakf16_memory::preimage[6]] is main_memory::selectors[4] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 8, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[2], main_keccakf16_memory::addr_l[2], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[5], main_keccakf16_memory::preimage[4]] is main_memory::selectors[5] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 12, 32, 0, 1)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[3], main_keccakf16_memory::addr_l[3], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[11], main_keccakf16_memory::preimage[10]] is main_memory::selectors[6] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 16, 32, 0, 0)
... and 45 more errors
thread 'main' panicked at cli/src/main.rs:727:14:
called `Result::unwrap()` on an `Err` value: ["Constraint check failed"]
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
With this PR, we compute the later-stage witnesses per machine instead
of globally. This has two advantages:
- We're able to handle machines of different sizes
- We can parallelize later-stage witness generation
This affects the two backends that can deal with multiple machines in
the first place: `Plonky3Backend` and `CompositeBackend`.
Multiplicity columns of *global* range constraints used to end up in the
main machine, which is a problem if it has a length different from the
range check machine.
Prepares #2129
With this PR, later-stage witness columns & identities referencing them
(or later-stage challenges) are completely ignored. The columns are not
assigned to any machine. Previously, they would end up in the main
machine and never receive any updates. That doesn't work if machines
have different sizes, though.
---------
Co-authored-by: Thibaut Schaeffer <schaeffer.thibaut@gmail.com>
MutableState is the main way to get access to sub-machines during
witness generation. We used to create copies of MutableState for each
lookup, extracting the "current machine" where we need mutable access
and creating copies of references to the other machines. This causes an
allocation for each lookup (including fixed lookups, I think) which is
bad for performance.
This PR changes the approach to use RefCell instead: MutableState now
owns the machines and for each call to a machine, we mutably borrow that
machine. The RefCell mechanism ensures that there are no recursive calls
to machines and also avoids allocations.
I also changed the query callback to use non-mut references.
The first and third commits only move code around.
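The RefCell approach, in a simplified sketch (types are stand-ins for the real machine and state types):

```rust
// MutableState owns the machines; each machine call mutably borrows
// exactly one machine through a shared reference to the state. RefCell's
// runtime borrow check rejects recursive calls into a machine that is
// already borrowed, and no per-call allocation is needed.

use std::cell::RefCell;

struct Machine {
    calls: usize,
}

impl Machine {
    fn process_call(&mut self) {
        self.calls += 1;
    }
}

struct MutableState {
    machines: Vec<RefCell<Machine>>,
}

impl MutableState {
    /// Call into machine `i` through a non-mut reference to the state.
    fn call(&self, i: usize) {
        self.machines[i].borrow_mut().process_call();
    }
}
```

Since `call` takes `&self`, the same `MutableState` can be passed down to every machine, which is what makes the non-mut query callback possible.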
When writing #2119, I came across another issue in witgen. When running
this file on `main`:
```
namespace main(4);
// Two bit-constrained witness columns
// In practice, bit1 is always one (but this is not constrained)
col witness bit1(i) query Query::Hint(1);
col witness bit2;
bit1 * (bit1 - 1) = 0;
bit2 * (bit2 - 1) = 0;
// Constrain their sum to be binary as well.
// This ensures that at most one of the two bits is set.
// Therefore, bit2 is always zero.
let bit_sum;
bit_sum = bit1 + bit2;
bit_sum * (bit_sum - 1) = 0;
// Some witness that depends on bit2.
col witness foo;
foo = bit2 * 42 + (1 - bit2) * 43;
```
I'm getting
```
thread 'main' panicked at /Users/georg/coding/powdr/executor/src/witgen/global_constraints.rs:258:17:
assertion failed: known_constraints.insert(p, RangeConstraint::from_max_bit(0)).is_none()
```
This is because when it sees the binary range constraint `bit_sum *
(bit_sum - 1) = 0`, it already figured out that `bit_sum` can be at most
`1` because of `bit_sum = bit1 + bit2`.
This PR fixes it.
Adds a basic mock prover that simply asserts that all constraints are
satisfied.
This first version has the following features:
- Polynomial constraints are verified.
- Speed is reasonable. For example, the RISC-V Keccak example is
verified in 0.087s.
- Error messages are good: It prints the relevant constraint and row,
and all relevant assignments.
Missing features:
- Machine connections (i.e., any lookups or permutations) are not yet
validated.
- Later-stage witnesses are not yet validated.
Caveats:
- We rely on `powdr_backend_utils::split_pil`, so lookups / permutations
*within* a namespace are ignored.
Example:
```
$ cargo run pil test_data/pil/fibonacci.pil -o output -f --prove-with mock
```
Fixes a bug I encountered in #2109:
- For block machines, we have a cache which tracks a sequence of solving
steps that led to a success in the past.
- However, the needed sequence could be different from call to call. In
particular, it could depend on the operation ID.
- Because of that, in #1562, we added that the "default" sequence
iterator is always run after the cached sequence.
- But, we never called `report_progress()` on the default iterator,
which led to a bug in #2109.
Completes a task in #2009.
With this PR, we no longer rely on
`Analyzed::identities_with_inlined_intermediate_polynomials()`, which
can produce exponentially large expressions in some cases. Instead,
intermediate polynomials are evaluated on demand and cached. Thibaut's
example from #1995 speeds up massively with this PR.
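The reason inlining can explode is that intermediates may reference other intermediates, so substitution duplicates subtrees. On-demand evaluation with memoization avoids this; a toy sketch (the real code evaluates algebraic expressions over a field, not this simplified `Def` type):

```rust
// Memoized on-demand evaluation of intermediate polynomials.
// With defs a = 1, b = a + a, c = b + b, inlining c would duplicate
// a's definition four times; with a cache, each name is evaluated once.

use std::collections::HashMap;

enum Def {
    Const(u64),
    Add(&'static str, &'static str),
}

fn eval(
    name: &'static str,
    defs: &HashMap<&'static str, Def>,
    cache: &mut HashMap<&'static str, u64>,
) -> u64 {
    if let Some(&v) = cache.get(name) {
        return v;
    }
    let v = match defs[name] {
        Def::Const(c) => c,
        Def::Add(a, b) => eval(a, defs, cache) + eval(b, defs, cache),
    };
    cache.insert(name, v);
    v
}
```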
Pulled out of #2007, to keep the diff smaller (and more relevant).
This refactoring simply builds a `MachineExtractor` object that holds a
`&FixedData`.
Review with the "Hide whitespace" setting :)
Change FinalizableData so that finalized rows are stored as one
contiguous array instead of an array of arrays. Furthermore, if the
column IDs are a contiguous sequence, the row will only have as many
field elements as there are columns.
I don't expect this PR to improve performance, but we can directly
re-use this data structure for JIT-compiled executors (which need
contiguous cell data; this was one of their main performance boosts). This
means we can decide at runtime if we want to use a JIT-compiled executor
or the interpreted one.
And this in turn allows us to delay the actual JIT-compiling until we
get a request to the machine in the form of a bit field of known columns
in the lookup. And once we know which columns are known, we can try to
JIT-compile. If it fails, we record that this combination failed and
will only use the interpreted executor for it in the future. This is
especially useful for things like poseidon_gl, where we don't want to
try all `2^13` combinations...
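The flat layout boils down to addressing a cell as `row * width + column`. A minimal sketch (the real FinalizableData additionally handles non-contiguous column IDs and tracks which cells are known):

```rust
// Finalized rows stored as one contiguous Vec instead of a Vec of Vecs.
// When column IDs form a contiguous sequence, a row occupies exactly
// `width` field elements and a cell lives at `row * width + column`.

struct FinalizedData {
    width: usize,
    values: Vec<u64>,
}

impl FinalizedData {
    fn new(width: usize) -> Self {
        Self { width, values: Vec::new() }
    }

    fn push_row(&mut self, row: &[u64]) {
        assert_eq!(row.len(), self.width);
        self.values.extend_from_slice(row);
    }

    fn get(&self, row: usize, column: usize) -> u64 {
        self.values[row * self.width + column]
    }
}
```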
With this PR, if a block machine never receives a call *and* has dynamic
size, it is skipped entirely in the proof (for all backends that support
dynamically-sized machines, Plonky3 and the Composite backend). This is
sound if we treat a missing machine as not interacting at all with the
bus.
As a result, we can remove the "dummy calls" in the RISC-V machine,
which ensure that each block machine is called at least once.
In slightly more detail:
- If a block machine never receives a call, witgen returns columns of
length 0.
- When proving, we detect that and remove the machine entirely.
- The verifier is relaxed in that it no longer asserts that all machines
are being proven. As mentioned, this assumes that the bus argument
(which is not fully implemented) handles this accordingly, using a bus
accumulator of zero for the missing machine.
To test:
```
$ cargo run pil test_data/asm/block_to_block_empty_submachine.asm -o output -f --export-witness-csv --prove-with plonky3
...
Running main machine for 8 rows
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...
Machine Secondary machine 0: main_arith (BlockMachine) is never used at runtime, so we remove it.
Witness generation took 0.002950166s
Writing output/commits.bin.
Writing output/block_to_block_empty_submachine_witness.csv.
Backend setup for plonky3...
Setup took 0.023712292s
Proof generation took 0.083828546s
Proof size: 80517 bytes
Writing output/block_to_block_empty_submachine_proof.bin.
```
I noticed that constant evaluation takes a very long time. With this
logging, it is easier to see why, e.g. for the Rust Keccak example:
```
Generated values for main_poseidon_gl::CLK[0] (32..4194304) in 5.11s
Generated values for main_poseidon_gl::CLK[1] (32..4194304) in 5.77s
Generated values for main_poseidon_gl::CLK[2] (32..4194304) in 5.39s
Generated values for main_poseidon_gl::CLK[3] (32..4194304) in 5.12s
Generated values for main_poseidon_gl::CLK[4] (32..4194304) in 6.32s
Generated values for main_poseidon_gl::CLK[5] (32..4194304) in 6.25s
Generated values for main_poseidon_gl::CLK[6] (32..4194304) in 4.92s
Generated values for main_poseidon_gl::CLK[7] (32..4194304) in 5.48s
Generated values for main_poseidon_gl::CLK[8] (32..4194304) in 5.39s
Generated values for main_poseidon_gl::CLK[9] (32..4194304) in 5.85s
Generated values for main_poseidon_gl::CLK[10] (32..4194304) in 5.84s
Generated values for main_poseidon_gl::CLK[11] (32..4194304) in 5.23s
Generated values for main_poseidon_gl::CLK[12] (32..4194304) in 6.04s
Generated values for main_poseidon_gl::CLK[13] (32..4194304) in 6.08s
Generated values for main_poseidon_gl::CLK[14] (32..4194304) in 4.88s
Generated values for main_poseidon_gl::CLK[15] (32..4194304) in 5.93s
Generated values for main_poseidon_gl::LASTBLOCK (32..4194304) in 12.95s
Fixed column generation took 108.75529s
```
This PR introduces a more "direct" way to perform a lookup during
witness generation. It removes the concept of `EvalValue`, which is a
list of "updates", and instead requests the called machine to directly
fill in mutable pointers to field elements.
The goal is to use this (hopefully) faster interface if
- the lookup can be fully solved in a single call
- no cell-based range constraints are needed
If the LHS of the lookup consists of direct polynomial references (or
next references), the caller can pass a pointer to the final table and
does not need to move data around any further.
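Roughly, the interface distinguishes known inputs from outputs to be filled in, and the callee writes results through mutable references instead of building an update list. A toy sketch (the `LookupCell` shape is modeled on the description above; the "double" machine is purely illustrative):

```rust
// "Direct" lookup: the caller passes one cell per tuple entry, and the
// called machine writes unknown values in place instead of returning an
// EvalValue-style list of updates.

enum LookupCell<'a, T> {
    /// A value that is known by the caller.
    Input(&'a T),
    /// A value the called machine should fill in.
    Output(&'a mut T),
}

// Toy machine whose lookup table is {(x, 2 * x)}: given a known x,
// fill in y directly. Returns whether the lookup could be solved.
fn process_lookup_direct(values: &mut [LookupCell<'_, u64>]) -> bool {
    match values {
        [LookupCell::Input(x), LookupCell::Output(y)] => {
            **y = **x * 2;
            true
        }
        _ => false,
    }
}
```

When the caller's cells are direct polynomial references, the `Output` references can point straight into the final table, so no data needs to be copied afterwards.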
Some numbers:
For the "keccak test", and only looking at the PC lookup, we get:
Inside `process_plookup_internal`:
- 40 ns: preparing the `data` and `values` arrays
- 290 ns: call to `process_lookup_direct`
- 1300 ns: computing the result `EvalValue`