Builds on #2194 and #2183.
This PR gives us (relatively) fast witness generation for the bus, by
writing custom code instead of relying on the generic solver + prover
functions:
```
$ cargo run -r --features plonky3 --bin powdr-rs compile riscv/tests/riscv_data/keccak -o output --max-degree-log 18 --field gl
$ cargo run -r --features plonky3 pil output/$TEST.asm -o output -f --field gl --prove-with mock --linker-mode bus
...
Running main machine for 262144 rows
[00:00:05 (ETA: 00:00:05)] █████████░░░░░░░░░░░ 48% - 24283 rows/s, 3169k identities/s, 92% progress
Found loop with period 1 starting at row 127900
[00:00:05 (ETA: 00:00:00)] ████████████████████ 100% - 151125 rows/s, 16170k identities/s, 100% progress
Witness generation took 5.748081s
Writing output/commits.bin.
Backend setup for mock...
Setup took 0.54769236s
Generating later-stage witnesses took 0.29s
Proof generation took 2.0383847s
```
On `main`, second-stage witgen for the main machine alone takes about 5
minutes.
This PR:
- Renames the current `executor::witgen::ExpressionEvaluator` to
`executor::witgen::evaluators::partial_expression_evaluator::PartialExpressionEvaluator`
  - It is used when solving and evaluates to an
`AffineResult<AlgebraicVariable<'a>, T>`, which might still contain
unknown variables.
- Adds a new `ExpressionEvaluator` that simply evaluates to `T`
- Changes `MockBackend` to use the new `ExpressionEvaluator` (it
previously wrapped what is now called the `PartialExpressionEvaluator`)
As a result, the code in `MockBackend` can be simplified. Also, I'm
building on this in #2191 for fast witness generation for the bus.
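To illustrate the difference between the two evaluators, here is a minimal, self-contained sketch with toy types (the real evaluators operate on `AlgebraicExpression<T>`, and the partial one returns a symbolic `AffineResult<AlgebraicVariable<'a>, T>` rather than an `Option`):
```rust
use std::collections::HashMap;

// Toy expression type standing in for `AlgebraicExpression<T>`.
enum Expr {
    Var(&'static str),
    Num(u64),
    Add(Box<Expr>, Box<Expr>),
}

// Loosely analogous to `PartialExpressionEvaluator`: returns `None` when a
// variable is still unknown (the real code returns a symbolic affine
// expression over the unknown variables instead).
fn evaluate_partial(e: &Expr, known: &HashMap<&str, u64>) -> Option<u64> {
    match e {
        Expr::Var(v) => known.get(v).copied(),
        Expr::Num(n) => Some(*n),
        Expr::Add(a, b) => Some(evaluate_partial(a, known)? + evaluate_partial(b, known)?),
    }
}

// Analogous to the new `ExpressionEvaluator`: every variable must be known,
// so the result is a plain value (`T`).
fn evaluate(e: &Expr, values: &HashMap<&str, u64>) -> u64 {
    evaluate_partial(e, values).expect("all variables known")
}
```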
This PR adds a `Constr::PhantomBusInteraction` variant. For now, it is
ignored - if users want to use a bus, they need to express it in terms
of phantom lookups / permutations, as before this PR.
I added a few `TODO(bus_interaction)` and opened #2184 to track support
for phantom bus interactions.
One use case it could serve before then, though, is to trigger
"hand-written" witness generation for the bus, as discussed in the chat.
Builds on #2169
With this PR, second-stage witness generation works for the bus used in
the RISC-V machine 🎉
This is an end-to-end test:
```bash
cargo run -r --bin powdr-rs compile riscv/tests/riscv_data/sum -o output --max-degree-log 15 --field gl
cargo run -r pil output/sum.asm -o output -f --field gl --prove-with mock --linker-mode bus -i 1,1,1
```
What's needed are two small changes to `VmProcessor`:
- The degree is now passed by the caller (`DynamicMachine` or
`SecondStageMachine`). That way, `SecondStageMachine` can set it to the
actual final size, instead of the maximum allowed degree.
- I disabled loop detection for second-stage witness generation for now.
Cherry-picked from #2174
With this PR, we run all prover functions in parallel when solving for
the witness in `VmProcessor`. Interestingly, this didn't require any
changes to the order in which things are done: We already ran the
functions independently and applied the combined updates. So, this is a
classic map-reduce.
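Structurally, it boils down to something like this sketch (using rayon; `Row` and `Update` are stand-ins, not powdr's actual types):
```rust
use rayon::prelude::*;

// Stand-in types; powdr's actual prover functions and updates look different.
struct Row;
struct Update {
    column: usize,
    value: u64,
}

// Map: run all prover functions independently and in parallel.
// Reduce: combine their updates into one list and apply it afterwards.
fn run_prover_functions(
    functions: &[Box<dyn Fn(&Row) -> Vec<Update> + Sync>],
    row: &Row,
) -> Vec<Update> {
    functions
        .par_iter()
        .flat_map_iter(|f| f(row))
        .collect()
}
```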
I think this change always makes sense, but is especially useful for the
prover functions we have to set bus accumulator values. For example, in
our RISC-V machine, the main machine has ~30 bus interactions, with a
fairly expensive prover function for each.
When used on top of #2173 and #2175, this accelerates second-stage
witness generation for the main machine from ~10s to ~6s for the example
mentioned in #2173.
This introduces a new machine which is always used for second-stage
witness generation. Currently, it is a copy of `DynamicMachine`, but in
the end it will be optimized for second-stage witness generation.
With this PR, we get much better error messages for connection errors.
This is an example of an issue we're currently debugging:
```
$ cargo run -r pil test_data/std/keccakf16_memory_test.asm -o output -f --prove-with mock --export-witness-csv
...
Errors in 50 / 213 connections:
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::input_addr_h, main_keccakf16_memory::input_addr_l, main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[3], main_keccakf16_memory::preimage[2]] is main_memory::selectors[2] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 0, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[0], main_keccakf16_memory::addr_l[0], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[1], main_keccakf16_memory::preimage[0]] is main_memory::selectors[3] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 4, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[1], main_keccakf16_memory::addr_l[1], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[7], main_keccakf16_memory::preimage[6]] is main_memory::selectors[4] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 8, 32, 0, 0)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[2], main_keccakf16_memory::addr_l[2], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[5], main_keccakf16_memory::preimage[4]] is main_memory::selectors[5] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 12, 32, 0, 1)
Connection failed between main_keccakf16_memory and main_memory:
main_keccakf16_memory::sel[0] * main_keccakf16_memory::step_flags[0] $ [0, main_keccakf16_memory::addr_h[3], main_keccakf16_memory::addr_l[3], main_keccakf16_memory::time_step, main_keccakf16_memory::preimage[11], main_keccakf16_memory::preimage[10]] is main_memory::selectors[6] $ [main_memory::m_is_write, main_memory::m_addr_high, main_memory::m_addr_low, main_memory::m_step_high * 65536 + main_memory::m_step_low, main_memory::m_value1, main_memory::m_value2];
The following tuples appear in main_memory, but not in main_keccakf16_memory:
Row 24: (0, 0, 16, 32, 0, 0)
... and 45 more errors
thread 'main' panicked at cli/src/main.rs:727:14:
called `Result::unwrap()` on an `Err` value: ["Constraint check failed"]
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
With this PR, we compute the later-stage witnesses per machine instead
of globally. This has two advantages:
- We're able to handle machines of different sizes
- We can parallelize later-stage witness generation
This affects the two backends that can deal with multiple machines in
the first place: `Plonky3Backend` and `CompositeBackend`.
Multiplicity columns of *global* range constraints used to end up in the
main machine, which is a problem if it has a length different from the
range check machine.
Prepares #2129
With this PR, later-stage witness columns & identities referencing them
(or later-stage challenges) are completely ignored. The columns are not
assigned to any machine. Previously, they would end up in the main
machine and never receive any updates. That doesn't work if machines
have different sizes, though.
---------
Co-authored-by: Thibaut Schaeffer <schaeffer.thibaut@gmail.com>
MutableState is the main way to get access to sub-machines during
witness generation. We used to create copies of MutableState for each
lookup, extracting the "current machine" where we need mutable access
and creating copies of references to the other machines. This caused an
allocation for each lookup (including fixed lookups, I think), which is
bad for performance.
This PR changes the approach to use RefCell instead: MutableState now
owns the machines and for each call to a machine, we mutably borrow that
machine. The RefCell mechanism ensures that there are no recursive calls
to machines and also avoids allocations.
I also changed the query callback to use non-mut references.
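A simplified sketch of the new ownership model (names are illustrative, not powdr's exact API):
```rust
use std::cell::RefCell;

struct Machine;

impl Machine {
    fn process_lookup(&mut self, _mutable_state: &MutableState) {
        // ... may in turn call *other* machines via `_mutable_state` ...
    }
}

// `MutableState` now owns the machines instead of handing out copies.
struct MutableState {
    machines: Vec<RefCell<Machine>>,
}

impl MutableState {
    fn call_machine(&self, index: usize) {
        // Runtime-checked mutable borrow of just this machine, with no
        // allocation. A recursive call into the same machine would panic,
        // which is exactly the situation we want to rule out.
        let mut machine = self.machines[index].borrow_mut();
        machine.process_lookup(self);
    }
}
```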
The first and third commits only move code around.
When writing #2119, I came across another issue in witgen. When running
this file on `main`:
```rs
namespace main(4);
// Two bit-constrained witness columns
// In practice, bit1 is always one (but this is not constrained)
col witness bit1(i) query Query::Hint(1);
col witness bit2;
bit1 * (bit1 - 1) = 0;
bit2 * (bit2 - 1) = 0;
// Constrain their sum to be binary as well.
// This ensures that at most one of the two bits is set.
// Therefore, bit2 is always zero.
let bit_sum;
bit_sum = bit1 + bit2;
bit_sum * (bit_sum - 1) = 0;
// Some witness that depends on bit2.
col witness foo;
foo = bit2 * 42 + (1 - bit2) * 43;
```
I'm getting
```
thread 'main' panicked at /Users/georg/coding/powdr/executor/src/witgen/global_constraints.rs:258:17:
assertion failed: known_constraints.insert(p, RangeConstraint::from_max_bit(0)).is_none()
```
This is because when it sees the binary range constraint `bit_sum *
(bit_sum - 1) = 0`, it already figured out that `bit_sum` can be at most
`1` because of `bit_sum = bit1 + bit2`.
This PR fixes it.
Adds a basic mock prover that simply asserts that all constraints are
satisfied.
This first version has the following features:
- Polynomial constraints are verified.
- Speed is reasonable. For example, the RISC-V Keccak example is
verified in 0.087s.
- Error messages are good: It prints the relevant constraint and row,
and all relevant assignments.
Missing features:
- Machine connections (i.e., any lookups or permutations) are not yet
validated.
- Later-stage witnesses are not yet validated.
Caveats:
- We rely on `powdr_backend_utils::split_pil`, so lookups / permutations
*within* a namespace are ignored.
Example:
```
$ cargo run pil test_data/pil/fibonacci.pil -o output -f --prove-with mock
```
Fixes a bug I encountered in #2109:
- For block machines, we have a cache which tracks a sequence of solving
steps that led to a success in the past.
- However, the needed sequence could be different from call to call. In
particular, it could depend on the operation ID.
- Because of that, in #1562, we added that the "default" sequence
iterator is always run after the cached sequence.
- But, we never called `report_progress()` on the default iterator,
which led to a bug in #2109.
Completes a task in #2009.
With this PR, we no longer rely on
`Analyzed::identities_with_inlined_intermediate_polynomials()`, which
can produce exponentially large expressions in some cases. Instead,
intermediate polynomials are evaluated on demand and cached. Thibaut's
example from #1995 speeds up massively with this PR.
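A rough sketch of the idea with toy types (the real code evaluates `AlgebraicExpression`s and caches per row):
```rust
use std::collections::HashMap;

// Toy expression type; intermediates are referenced by index.
enum Expr {
    Intermediate(usize),
    Num(u64),
    Add(Box<Expr>, Box<Expr>),
}

struct Evaluator<'a> {
    // Definition of each intermediate polynomial.
    definitions: &'a [Expr],
    // Cache for the current row: each intermediate is evaluated at most once,
    // instead of being inlined (and re-evaluated) everywhere it is referenced.
    cache: HashMap<usize, u64>,
}

impl<'a> Evaluator<'a> {
    fn eval(&mut self, e: &Expr) -> u64 {
        match e {
            Expr::Num(n) => *n,
            Expr::Add(a, b) => self.eval(a).wrapping_add(self.eval(b)),
            Expr::Intermediate(id) => {
                if let Some(&v) = self.cache.get(id) {
                    return v;
                }
                let def = &self.definitions[*id];
                let v = self.eval(def);
                self.cache.insert(*id, v);
                v
            }
        }
    }
}
```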
Pulled out of #2007, to keep the diff smaller (and more relevant).
This refactoring simply builds a `MachineExtractor` object that holds a
`&FixedData`.
Review with the "Hide whitespace" setting :)
Change FinalizableData so that finalized rows are stored as one
contiguous array instead of an array of arrays. Furthermore, if the
column IDs are a contiguous sequence, the row will only have as many
field elements as there are columns.
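In other words, roughly this layout (field names illustrative):
```rust
// Before: Vec<Vec<T>>, one allocation per row.
// After: a single contiguous Vec<T>; row `r`, column `c` is at `r * width + c`.
struct FinalizedData<T> {
    // Number of columns; if the column IDs are a contiguous sequence, no
    // per-row ID table or padding is needed.
    width: usize,
    // All rows back-to-back.
    values: Vec<T>,
}

impl<T: Copy> FinalizedData<T> {
    fn get(&self, row: usize, column: usize) -> T {
        self.values[row * self.width + column]
    }

    fn push_row(&mut self, row: &[T]) {
        assert_eq!(row.len(), self.width);
        self.values.extend_from_slice(row);
    }
}
```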
I don't expect this PR to improve performance, but we can directly
re-use this data structure for JIT-compiled executors (which need
contiguous cell data; this was one of the main performance boosts). This
means we can decide at runtime whether to use a JIT-compiled executor
or the interpreted one.
This in turn allows us to delay the actual JIT compilation until we get
a request to the machine, in the form of a bit field of known columns
in the lookup. Once we know which columns are known, we can try to
JIT-compile. If that fails, we record that this combination failed and
only use the interpreted executor for it in the future. This is
especially useful for things like poseidon_gl, where we don't want to
try all `2^13` combinations...
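A hypothetical sketch of that decision logic (all names assumed):
```rust
use std::collections::HashMap;

// One bit per column of the lookup: set if the caller knows the value.
type KnownColumns = u64;
// Stand-in for a handle to JIT-compiled code.
type CompiledFn = fn(KnownColumns);

enum Variant {
    Compiled(CompiledFn),
    // JIT compilation failed once for this combination; don't retry.
    Failed,
}

struct MachineExecutor {
    variants: HashMap<KnownColumns, Variant>,
}

impl MachineExecutor {
    fn process(&mut self, known: KnownColumns) {
        // Compile lazily, only for combinations of known columns that
        // actually occur, instead of all 2^n possibilities up front.
        let variant = self
            .variants
            .entry(known)
            .or_insert_with(|| try_jit_compile(known));
        match variant {
            Variant::Compiled(f) => f(known),
            Variant::Failed => { /* fall back to the interpreter */ }
        }
    }
}

fn try_jit_compile(_known: KnownColumns) -> Variant {
    // Attempt compilation here; in this sketch, we always fail.
    Variant::Failed
}
```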
With this PR, if a block machine never receives a call *and* has dynamic
size, it is skipped entirely in the proof (for all backends that support
dynamically-sized machines, Plonky3 and the Composite backend). This is
sound if we treat a missing machine as not interacting at all with the
bus.
As a result, we can remove the "dummy calls" in the RISC-V machine,
which ensure that each block machine is called at least once.
In slightly more detail:
- If a block machine never receives a call, witgen returns columns of
length 0.
- When proving, we detect that and remove the machine entirely.
- The verifier is relaxed in that it no longer asserts that all machines
are being proven. As mentioned, this assumes that the bus argument
(which is not fully implemented) handles this accordingly, using a bus
accumulator of zero for the missing machine.
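The detection itself is simple; an illustrative sketch (types and log text assumed, cf. the run below):
```rust
struct MachineWitness {
    name: String,
    columns: Vec<(String, Vec<u64>)>,
}

// Drop machines whose witgen returned empty columns: they were never
// called, so (given a sound bus argument) they can be left out entirely.
fn remove_unused_machines(machines: Vec<MachineWitness>) -> Vec<MachineWitness> {
    machines
        .into_iter()
        .filter(|m| {
            let used = m.columns.iter().any(|(_, values)| !values.is_empty());
            if !used {
                log::info!("Machine {} is never used at runtime, so we remove it.", m.name);
            }
            used
        })
        .collect()
}
```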
To test:
```
$ cargo run pil test_data/asm/block_to_block_empty_submachine.asm -o output -f --export-witness-csv --prove-with plonky3
...
Running main machine for 8 rows
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...
Machine Secondary machine 0: main_arith (BlockMachine) is never used at runtime, so we remove it.
Witness generation took 0.002950166s
Writing output/commits.bin.
Writing output/block_to_block_empty_submachine_witness.csv.
Backend setup for plonky3...
Setup took 0.023712292s
Proof generation took 0.083828546s
Proof size: 80517 bytes
Writing output/block_to_block_empty_submachine_proof.bin.
```
I noticed that constant evaluation takes a very long time. With this
logging, it is easier to see why, e.g. for the Rust Keccak example:
```
Generated values for main_poseidon_gl::CLK[0] (32..4194304) in 5.11s
Generated values for main_poseidon_gl::CLK[1] (32..4194304) in 5.77s
Generated values for main_poseidon_gl::CLK[2] (32..4194304) in 5.39s
Generated values for main_poseidon_gl::CLK[3] (32..4194304) in 5.12s
Generated values for main_poseidon_gl::CLK[4] (32..4194304) in 6.32s
Generated values for main_poseidon_gl::CLK[5] (32..4194304) in 6.25s
Generated values for main_poseidon_gl::CLK[6] (32..4194304) in 4.92s
Generated values for main_poseidon_gl::CLK[7] (32..4194304) in 5.48s
Generated values for main_poseidon_gl::CLK[8] (32..4194304) in 5.39s
Generated values for main_poseidon_gl::CLK[9] (32..4194304) in 5.85s
Generated values for main_poseidon_gl::CLK[10] (32..4194304) in 5.84s
Generated values for main_poseidon_gl::CLK[11] (32..4194304) in 5.23s
Generated values for main_poseidon_gl::CLK[12] (32..4194304) in 6.04s
Generated values for main_poseidon_gl::CLK[13] (32..4194304) in 6.08s
Generated values for main_poseidon_gl::CLK[14] (32..4194304) in 4.88s
Generated values for main_poseidon_gl::CLK[15] (32..4194304) in 5.93s
Generated values for main_poseidon_gl::LASTBLOCK (32..4194304) in 12.95s
Fixed column generation took 108.75529s
```
This PR introduces a more "direct" way to perform a lookup during
witness generation. It removes the concept of `EvalValue`, which is a
list of "updates", and instead requests the called machine to directly
fill in mutable pointers to field elements.
The goal is to use this (hopefully) faster interface if
- the lookup can be fully solved in a single call
- no cell-based range constraints are needed
If the LHS of the lookup consists of direct polynomial references (or
next references), the caller can pass a pointer to the final table and
does not need to move data around any further.
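The interface looks roughly like this (a sketch; the actual signature and names may differ):
```rust
// One slot per column of the lookup: `Input`s are known to the caller,
// `Output`s are filled in place by the called machine, so no intermediate
// `EvalValue` update list is needed.
enum LookupCell<'a, T> {
    Input(&'a T),
    Output(&'a mut T),
}

trait Machine<T> {
    /// Fills all `Output` cells; returns false if no matching row exists.
    fn process_lookup_direct(&mut self, values: &mut [LookupCell<'_, T>]) -> bool;
}
```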
Some numbers:
For the "keccak test", and only looking at the PC lookup, we get:
Inside `process_plookup_internal`:
- 40 ns: preparing the `data` and `values` arrays
- 290 ns: call to `process_lookup_direct`
- 1300 ns: computing the result `EvalValue`
Generates multiplicity columns for global range constraints, by doing a
pass over the data at the very end of witness generation.
```
$ RUST_LOG=debug cargo run pil test_data/std/lookup_via_challenges_range_constraint.asm -o output -f --prove-with plonky3
...
Determined the following global range constraints:
main::x_low: [0, 7] & 0x7
main::x_high: [0, 7] & 0x7
Determined the following identities to be purely bit/range constraints:
Constr::PhantomLookup((Option::None, Option::None), [(main::x_low, main::BIT3)], main::multiplicities);
Constr::PhantomLookup((Option::None, Option::None), [(main::x_high, main::BIT3)], main::multiplicities_1);
Recorded the following range constraint multiplicity columns:
main::x_low -> main::multiplicities
main::x_high -> main::multiplicities_1
Running main machine for 8 rows
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting... Finalizing VM: Main Machine
Transposing 8 rows with 12 columns...
Finalizing remaining rows...
Needed to finalize 8 / 8 rows.
Done transposing.
== Witgen profile (6 events)
65.2% ( 2.0ms): witgen (outer code)
33.2% ( 1.0ms): Main Machine
1.6% ( 48.1µs): range constraint multiplicity witgen
---------------------------
==> Total: 3.011167ms
...
```
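For the common case where the fixed column is the identity on its range (row `i` holds value `i`, as for `main::BIT3` above), the final pass boils down to a simple count; a sketch:
```rust
// Count how often each value of the range-constrained witness column
// occurs; the result is the multiplicity column (here assuming the fixed
// column maps row i to value i).
fn range_constraint_multiplicities(
    constrained_values: &[u64], // e.g. all values of main::x_low
    range_size: usize,          // e.g. 8 for main::BIT3
) -> Vec<u64> {
    let mut multiplicities = vec![0u64; range_size];
    for &v in constrained_values {
        multiplicities[v as usize] += 1;
    }
    multiplicities
}
```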
---------
Co-authored-by: Thibaut Schaeffer <schaeffer.thibaut@gmail.com>
The change from storing just the row to storing the row values changes
the time reported for the fixed machine on the keccak example from 1.5s
to 1.2s.
The PC lookup plus copying the values from the columns used to take
3700 ns; now the lookup takes 500 ns.
Using a hash map takes it down to 1 second.
Current numbers:
```
460 ns: Evaluating LHS
621 ns: Splitting into known and unknown values
201 ns: Actual index lookup
1182 ns: creating the EvalValue result
```
Note that the "1.5 to 1.0 seconds" overall speed improvement includes
the time it takes to build the lookup table.
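The index is essentially a map from the tuple of known-column values to the corresponding row; a sketch (simplified: the real code also has to deal with keys that match multiple rows):
```rust
use std::collections::HashMap;

// Build a hash map from the values of the columns known at lookup time to
// the values of the unknown columns in the same row, turning each lookup
// into a single hash probe instead of a table scan.
fn build_index(
    known_columns: &[Vec<u64>],
    unknown_columns: &[Vec<u64>],
) -> HashMap<Vec<u64>, Vec<u64>> {
    let num_rows = known_columns[0].len();
    (0..num_rows)
        .map(|row| {
            let key: Vec<u64> = known_columns.iter().map(|c| c[row]).collect();
            let value: Vec<u64> = unknown_columns.iter().map(|c| c[row]).collect();
            (key, value)
        })
        .collect()
}
```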
---------
Co-authored-by: Leo <leo@powdrlabs.com>
(mostly cherry-picked from #1958)
This PR generalizes the multiplicity witness generation we had in
`FixedLookup` in the following ways:
- Instead of searching for a column of a specific name, takes the
witness column from the phantom lookups
- As a result, it is also able to handle arbitrarily many lookups
- The witness generation should work for all machine types which can be
connected via lookups: fixed lookups, block machines, and secondary VMs.
A common use-case that is not yet covered is global range constraints,
as they are not represented as machine calls in witness generation. This
should be fixed in a follow-up PR.
---------
Co-authored-by: chriseth <chris@ethereum.org>
Following #1934, adds these new variants to the Rust side. A follow-up
PR by @georgwiese will actually compute multiplicities in witgen.
---------
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
Step towards #1633
This PR adds witness generation for any public that is referenced from
an identity.
Note that publics and public references are now existing independently:
- A public is still defined as a pointer to a cell in the trace. The
prover extracts the values from the trace and returns them to the
verifier; witgen has nothing to do with them (except providing the
values in the trace).
- A public reference (i.e., a public that is referenced by a constraint)
was previously unimplemented. Now, witgen solves for this value.
This value might not be the same as the value of the public being
referenced! We don't check for consistency.
After #1633 is completed, publics will no longer be defined in terms of
trace cells, so the values returned by witgen will be the ones that are
returned to the verifier.
For now, the values are not returned yet (and different machines might
find conflicting values for the same public). But the solving works, and
I added a log message, e.g.:
```
$ cargo run pil test_data/pil/fibonacci_with_public.pil -o output -f
Writing output/fibonacci_with_public_analyzed.pil.
done.
Optimizing pil...
Removed 0 witness and 0 fixed columns. Total count now: 2 witness and 1 fixed columns.
Writing output/fibonacci_with_public_opt.pil.
Evaluating fixed columns...
Fixed column generation took 0.001645084s
Writing output/constants.bin.
Deducing witness columns...
Running main machine for 4 rows
[00:00:00 (ETA: 00:00:00)] ░░░░░░░░░░░░░░░░░░░░ 0% - Starting...
=> out (public) = 5
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...
Witness generation took 0.00259025s
Writing output/commits.bin.
```
With this change, we check the fraction of used rows in each machine. If
the fraction is above 50%, we don't log anything at INFO level;
otherwise, we suggest that the machine could be configured with a
smaller `min_degree`.
This is not a warning, because on some backends, it might not be
possible to use VADCOP.
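The check amounts to something like this (a sketch; names and the exact log text are assumptions, cf. the actual output below):
```rust
// Log a hint (not a warning, since some backends cannot use VADCOP) if
// less than half of a statically-sized machine is actually used.
fn check_row_usage(machine_name: &str, used_rows: usize, degree: usize) {
    let fraction = used_rows as f64 / degree as f64;
    if fraction < 0.5 {
        log::info!(
            "Only {used_rows} of {degree} rows ({:.2}%) are used in machine '{machine_name}', \
             which is configured to be of static size {degree}. \
             If the backend supports it, consider lowering the min_degree.",
            fraction * 100.0
        );
    }
}
```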
Example:
```
$ cargo run pil test_data/std/poseidon_gl_test.asm -o output -f
...
Only 101 of 256 rows (39.45%) are used in machine 'Main Machine', which is configured to be of static size 256. If the min_degree of this machine was lower, we could size it down such that the fraction of used rows is at least 50%. If the backend supports it, consider lowering the min_degree.
```
---------
Co-authored-by: chriseth <chris@ethereum.org>
We represent lookup and permutation selectors as
`Option<AlgebraicExpression<T>>`.
The issue with this is that there are two ways to represent 1: `Some(1)`
and `None`.
This leads to issues in witgen where we check `is_none` but not against
`Some(1)`.
This already led to an awkward optimizer step which reduces one to the
other so that witgen only sees one of the two options.
This PR makes the selector non-optional. Therefore, 1 is only 1. Display
is adjusted to not print the selector if it formats as "1". This
string-level comparison is there to avoid introducing invasive bounds on
`T` which would contaminate everything.
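A sketch of the display logic (illustrative, using the `$` notation from the logs above):
```rust
use std::fmt::Display;

// Omit the selector if it formats as "1". Comparing the *string* avoids
// bounds like `T: One + PartialEq` spreading through all display code.
fn format_connection<E: Display>(selector: &E, tuple: &str) -> String {
    let sel = selector.to_string();
    if sel == "1" {
        tuple.to_string()
    } else {
        format!("{sel} $ {tuple}")
    }
}
```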
`Identity` is currently a struct which acts as a union: all kinds of
identities are represented using the same struct fields, with runtime
checks that the right fields are used.
In the context of #1934 where we add new kinds of identities, it seems
advantageous to move to an enum on the Rust side as well.
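The shape of the change, roughly (variant and field names assumed):
```rust
// Before: one struct with an `IdentityKind` field plus runtime checks that
// the right fields are set. After: one variant per kind, each with exactly
// the fields it needs.
enum Identity<Expr> {
    Polynomial { id: u64, expression: Expr },
    Lookup { id: u64, left: Vec<Expr>, right: Vec<Expr> },
    Permutation { id: u64, left: Vec<Expr>, right: Vec<Expr> },
    Connect { id: u64, left: Vec<Expr>, right: Vec<Expr> },
}

impl<Expr> Identity<Expr> {
    // Shared accessors replace direct field access on the old struct.
    fn id(&self) -> u64 {
        match self {
            Identity::Polynomial { id, .. }
            | Identity::Lookup { id, .. }
            | Identity::Permutation { id, .. }
            | Identity::Connect { id, .. } => *id,
        }
    }
}
```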
Todo:
- [x] reimplement ids
- [x] minimize code duplication between lookups and permutations, if
possible
- [x] remove set_id
- [x] find better way to compare identities
- [x] fix connect
Mainly a test file to see which JIT features we need to support:
https://github.com/powdr-labs/powdr/pull/1623
The "Add functionality benchmark" commit is the only important one here,
the rest are merges from the implementing branches.
This is a major change to the plonky3 prover to support proving many
machines.
# Sharing costs across tables
- at setup phase, fixed columns for each machine are committed to for
each possible size. This happens in separate commitments, so that the
prover and verifier can pick the relevant ones for a given execution
- for each phase of the proving, the corresponding traces across all
machines are committed to jointly
- the quotient chunks are committed to jointly across all tables
# Multi-stage publics
The implementation supports public values for each stage of each table.
This is tested internally in the plonky3 crate but not end-to-end in
pipeline tests.
---------
Co-authored-by: Leo Alt <leo@powdrlabs.com>