Commit Graph

410 Commits

Author SHA1 Message Date
Georg Wiese
99d9dd0fc4 WriteOnceMemory: Generate multiplicity column (#2148)
Instances of `WriteOnceMemory` did not yet generate the multiplicity
column when connected via a phantom lookup.
2024-11-26 08:52:11 +00:00
Georg Wiese
34f765d9e8 Remove range constraint multiplicity columns from main machine (#2147)
Multiplicity columns of *global* range constraints used to end up in the
main machine, which is a problem if it has a length different from that
of the range check machine.
2024-11-26 08:51:54 +00:00
chriseth
2a3f989c12 Move vm processor (#2136)
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
2024-11-26 07:07:51 +00:00
Georg Wiese
cd07dfb660 Filter later-stage witnesses and identities (#2129)
Prepares #2129

With this PR, later-stage witness columns & identities referencing them
(or later-stage challenges) are completely ignored. The columns are not
assigned to any machine. Previously, they would end up in the main
machine and never receive any updates. That doesn't work if machines
have different sizes, though.

---------

Co-authored-by: Thibaut Schaeffer <schaeffer.thibaut@gmail.com>
2024-11-25 15:20:42 +00:00
chriseth
81517d0294 Remove 'b lifetime. (#2135) 2024-11-23 13:20:25 +00:00
chriseth
81196aea3e Improve MutableState / Machines (#2130)
MutableState is the main way to get access to sub-machines during
witness generation. We used to create copies of MutableState for each
lookup, extracting the "current machine" where we need mutable access
and creating copies of references to the other machines. This causes an
allocation for each lookup (including fixed lookups, I think) which is
bad for performance.

This PR changes the approach to use RefCell instead: MutableState now
owns the machines and for each call to a machine, we mutably borrow that
machine. The RefCell mechanism ensures that there are no recursive calls
to machines and also avoids allocations.

I also changed the query callback to use non-mut references.

The first and third commits only move code around.
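A minimal sketch of the RefCell approach (hypothetical types, not powdr's real API): MutableState owns the machines behind RefCells, so a call mutably borrows one machine at runtime instead of cloning references per lookup.

```rust
use std::cell::RefCell;

// Illustrative trait standing in for a witgen sub-machine.
trait Machine {
    fn process(&mut self, input: u64) -> u64;
}

struct Doubler;
impl Machine for Doubler {
    fn process(&mut self, input: u64) -> u64 {
        input * 2
    }
}

struct MutableState {
    machines: Vec<RefCell<Box<dyn Machine>>>,
}

impl MutableState {
    // Shared (&self) access suffices: RefCell defers the exclusive
    // borrow to runtime and panics on recursive calls into the same
    // machine, instead of allocating copies of the state per lookup.
    fn call(&self, machine_idx: usize, input: u64) -> u64 {
        self.machines[machine_idx].borrow_mut().process(input)
    }
}

fn main() {
    let state = MutableState {
        machines: vec![RefCell::new(Box::new(Doubler))],
    };
    assert_eq!(state.call(0, 21), 42);
    println!("ok");
}
```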
2024-11-22 17:28:03 +00:00
Leo
51cff4c758 remove powdr-executor dependency from backend-utils and plonky3 (#2124)
To make the plonky3 verifier module simpler for recursion
2024-11-21 13:57:13 +00:00
Georg Wiese
8b38138559 Fix failing assertion in witgen (#2120)
When writing #2119, I came across another issue in witgen. When running
this file on `main`:

```rs
namespace main(4);

    // Two bit-constrained witness columns
    // In practice, bit1 is always one (but this is not constrained)
    col witness bit1(i) query Query::Hint(1);
    col witness bit2;
    bit1 * (bit1 - 1) = 0;
    bit2 * (bit2 - 1) = 0;

    // Constrain their sum to be binary as well.
    // This ensures that at most one of the two bits is set.
    // Therefore, bit2 is always zero.
    let bit_sum;
    bit_sum = bit1 + bit2;
    bit_sum * (bit_sum - 1) = 0;

    // Some witness that depends on bit2.
    col witness foo;
    foo = bit2 * 42 + (1 - bit2) * 43;
```

I'm getting
```
thread 'main' panicked at /Users/georg/coding/powdr/executor/src/witgen/global_constraints.rs:258:17:
assertion failed: known_constraints.insert(p, RangeConstraint::from_max_bit(0)).is_none()
```

This is because, by the time it sees the binary range constraint `bit_sum *
(bit_sum - 1) = 0`, it has already figured out that `bit_sum` can be at most
`1` because of `bit_sum = bit1 + bit2`.

This PR fixes it.
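A sketch of the underlying idea with hypothetical types (not powdr's real `RangeConstraint` API): when a bit constraint is discovered for a column that already carries a range constraint, the two have to be combined rather than asserting a fresh insert.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, PartialEq)]
struct RangeConstraint {
    min: u64,
    max: u64,
}

impl RangeConstraint {
    // Values representable with bits 0..=bit, i.e. [0, 2^(bit+1) - 1].
    fn from_max_bit(bit: u32) -> Self {
        Self { min: 0, max: (1u64 << (bit + 1)) - 1 }
    }
    // Intersection of two constraints on the same column.
    fn conjunction(self, other: Self) -> Self {
        Self { min: self.min.max(other.min), max: self.max.min(other.max) }
    }
}

fn record_constraint(
    known: &mut HashMap<&'static str, RangeConstraint>,
    column: &'static str,
    constraint: RangeConstraint,
) {
    // The failing version was effectively:
    //   assert!(known.insert(column, constraint).is_none())
    // Merging instead of asserting avoids the panic.
    known
        .entry(column)
        .and_modify(|existing| *existing = existing.conjunction(constraint))
        .or_insert(constraint);
}

fn main() {
    let mut known = HashMap::new();
    // bit_sum = bit1 + bit2 already produced a constraint...
    known.insert("bit_sum", RangeConstraint { min: 0, max: 1 });
    // ...so seeing bit_sum * (bit_sum - 1) = 0 must merge, not insert.
    record_constraint(&mut known, "bit_sum", RangeConstraint::from_max_bit(0));
    assert_eq!(known["bit_sum"], RangeConstraint { min: 0, max: 1 });
    println!("ok");
}
```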
2024-11-20 06:44:01 +00:00
Georg Wiese
e009331905 Add MockBackend (#2114)
Adds a basic mock prover that simply asserts that all constraints are
satisfied.

This first version has the following features:
- Polynomial constraints are verified.
- Speed is reasonable. For example, the RISC-V Keccak example is
verified in 0.087s.
- Error messages are good: It prints the relevant constraint and row,
and all relevant assignments.

Missing features:
- Machine connections (i.e., any lookups or permutations) are not yet
validated.
- Later-stage witnesses are not yet validated.

Caveats:
- We rely on `powdr_backend_utils::split_pil`, so lookups / permutations
*within* a namespace are ignored.

Example:
```
$ cargo run pil test_data/pil/fibonacci.pil -o output -f --prove-with mock
```
2024-11-19 20:46:46 +00:00
Georg Wiese
d91ae4f90a Bug fix in ProcessingSequenceIterator (#2117)
Fixes a bug I encountered in #2109:
- For block machines, we have a cache which tracks a sequence of solving
steps that led to a success in the past.
- However, the needed sequence could be different from call to call. In
particular, it could depend on the operation ID.
- Because of that, in #1562, we added that the "default" sequence
iterator is always run after the cached sequence.
- But, we never called `report_progress()` on the default iterator,
which led to a bug in #2109.
2024-11-19 20:09:18 +00:00
Georg Wiese
efbcd1ffe2 Witgen: Handle intermediate polynomials (#2007)
Completes a task in #2009.

With this PR, we no longer rely on
`Analyzed::identities_with_inlined_intermediate_polynomials()`, which
can produce exponentially large expressions in some cases. Instead,
intermediate polynomials are evaluated on demand and cached. Thibaut's
example from #1995 speeds up massively with this PR.
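The caching idea can be sketched with toy types (illustrative, not powdr's `Analyzed` structures): inlining an intermediate that references other intermediates can double the expression at every level, while on-demand evaluation with a cache stays linear.

```rust
use std::collections::HashMap;

// Toy expression type; intermediates reference each other by index.
enum Expr {
    Const(i64),
    Intermediate(usize),
    Add(Box<Expr>, Box<Expr>),
}

struct Evaluator<'a> {
    definitions: &'a [Expr],
    cache: HashMap<usize, i64>,
}

impl<'a> Evaluator<'a> {
    fn eval(&mut self, e: &Expr) -> i64 {
        match e {
            Expr::Const(c) => *c,
            Expr::Add(a, b) => self.eval(a) + self.eval(b),
            Expr::Intermediate(id) => {
                if let Some(&v) = self.cache.get(id) {
                    return v;
                }
                let defs = self.definitions;
                let v = self.eval(&defs[*id]);
                self.cache.insert(*id, v);
                v
            }
        }
    }
}

fn main() {
    // i_k = i_{k-1} + i_{k-1}: inlining would produce an expression
    // with 2^30 leaves; cached evaluation does 31 evaluations.
    let mut defs = vec![Expr::Const(1)];
    for k in 1..=30usize {
        defs.push(Expr::Add(
            Box::new(Expr::Intermediate(k - 1)),
            Box::new(Expr::Intermediate(k - 1)),
        ));
    }
    let mut ev = Evaluator { definitions: &defs, cache: HashMap::new() };
    assert_eq!(ev.eval(&Expr::Intermediate(30)), 1 << 30);
    println!("ok");
}
```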
2024-11-19 10:19:31 +00:00
Georg Wiese
2c1acf8b09 Block Machine: Handle case where number of rows is just enough (#2070)
Before this PR, we used to panic if the number of rows in a block
machine was just enough to fit the needed number of blocks. This PR
fixes it.
2024-11-18 13:29:24 +00:00
Georg Wiese
016fb2a9fd Refactor MachineExtractor (#2099)
Pulled out of #2007, to keep the diff smaller (and more relevant).

This refactoring simply builds a `MachineExtractor` object that holds a
`&FixedData`.

Review with the "Hide whitespace" setting :)
2024-11-15 17:19:47 +00:00
chriseth
e64721c3f9 Use contiguous data for finalized rows. (#2088)
Change FinalizableData so that finalized rows are stored as one
contiguous array instead of an array of arrays. Furthermore, if the
column IDs are a contiguous sequence, the row will only have as many
field elements as there are columns.

I don't expect this PR to improve performance, but we can directly
re-use this data structure for JIT-compiled executors (which need
contiguous cell data, it was one of the main performance boosts). This
means we can decide at runtime if we want to use a JIT-compiled executor
or the interpreted one.

And this in turn allows us to delay the actual JIT-compiling until we
get a request to the machine in the form of a bit field of known columns
in the lookup. And once we know which columns are known, we can try to
JIT-compile. If it fails, we store that this combination failed and will
only use the interpreted one on that one in the future. This is
especially useful for things like poseidon_gl, where we don't want to
try all `2^13` combinations...
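A minimal sketch of the contiguous layout (illustrative names, not the real `FinalizableData`): one flat, row-major vector instead of a Vec of per-row Vecs, so a JIT-compiled executor can consume the cell data directly.

```rust
// Finalized rows stored as one contiguous array.
struct FinalizedRows {
    num_columns: usize,
    data: Vec<u64>, // row-major: data[row * num_columns + column]
}

impl FinalizedRows {
    fn new(num_columns: usize) -> Self {
        Self { num_columns, data: Vec::new() }
    }

    fn push_row(&mut self, row: &[u64]) {
        assert_eq!(row.len(), self.num_columns);
        self.data.extend_from_slice(row);
    }

    fn get(&self, row: usize, column: usize) -> u64 {
        self.data[row * self.num_columns + column]
    }

    // A JIT-compiled executor needs contiguous cell data; the
    // interpreter can use the same structure, so the choice between
    // the two can be made at runtime.
    fn as_flat_cells(&self) -> &[u64] {
        &self.data
    }
}

fn main() {
    let mut rows = FinalizedRows::new(3);
    rows.push_row(&[1, 2, 3]);
    rows.push_row(&[4, 5, 6]);
    assert_eq!(rows.get(1, 2), 6);
    assert_eq!(rows.as_flat_cells().len(), 6);
    println!("ok");
}
```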
2024-11-15 17:13:10 +00:00
Georg Wiese
ed6e4f7671 Remove empty machines (#2078)
With this PR, if a block machine never receives a call *and* has dynamic
size, it is skipped entirely in the proof (for all backends that support
dynamically-sized machines, Plonky3 and the Composite backend). This is
sound if we treat a missing machine as not interacting at all with the
bus.

As a result, we can remove the "dummy calls" in the RISC-V machine,
which ensure that each block machine is called at least once.

In slightly more detail:
- If a block machine never receives a call, witgen returns columns of
length 0.
- When proving, we detect that and remove the machine entirely.
- The verifier is relaxed in that it no longer asserts that all machines
are being proven. As mentioned, this assumes that the bus argument
(which is not fully implemented) handles this accordingly, using a bus
accumulator of zero for the missing machine.

To test:
```
$ cargo run pil test_data/asm/block_to_block_empty_submachine.asm -o output -f --export-witness-csv --prove-with plonky3
...
Running main machine for 8 rows
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...                                                     
Machine Secondary machine 0: main_arith (BlockMachine) is never used at runtime, so we remove it.
Witness generation took 0.002950166s
Writing output/commits.bin.
Writing output/block_to_block_empty_submachine_witness.csv.
Backend setup for plonky3...
Setup took 0.023712292s
Proof generation took 0.083828546s
Proof size: 80517 bytes
Writing output/block_to_block_empty_submachine_proof.bin.
```
2024-11-14 17:09:46 +00:00
chriseth
2c9924d2b4 Extract some functions from split_out_machines. (#2085)
A pure refactoring PR that tries to simplify split_out_machines with the
goal of being able to JIT-compile at least some machines.
2024-11-13 16:35:19 +00:00
Georg Wiese
0913624861 Improve constant evaluation logging (#2083)
I noticed that constant evaluation takes a very long time. With this
logging, it is easier to see why, e.g. for the Rust Keccak example:
```
  Generated values for main_poseidon_gl::CLK[0] (32..4194304) in 5.11s
  Generated values for main_poseidon_gl::CLK[1] (32..4194304) in 5.77s
  Generated values for main_poseidon_gl::CLK[2] (32..4194304) in 5.39s
  Generated values for main_poseidon_gl::CLK[3] (32..4194304) in 5.12s
  Generated values for main_poseidon_gl::CLK[4] (32..4194304) in 6.32s
  Generated values for main_poseidon_gl::CLK[5] (32..4194304) in 6.25s
  Generated values for main_poseidon_gl::CLK[6] (32..4194304) in 4.92s
  Generated values for main_poseidon_gl::CLK[7] (32..4194304) in 5.48s
  Generated values for main_poseidon_gl::CLK[8] (32..4194304) in 5.39s
  Generated values for main_poseidon_gl::CLK[9] (32..4194304) in 5.85s
  Generated values for main_poseidon_gl::CLK[10] (32..4194304) in 5.84s
  Generated values for main_poseidon_gl::CLK[11] (32..4194304) in 5.23s
  Generated values for main_poseidon_gl::CLK[12] (32..4194304) in 6.04s
  Generated values for main_poseidon_gl::CLK[13] (32..4194304) in 6.08s
  Generated values for main_poseidon_gl::CLK[14] (32..4194304) in 4.88s
  Generated values for main_poseidon_gl::CLK[15] (32..4194304) in 5.93s
  Generated values for main_poseidon_gl::LASTBLOCK (32..4194304) in 12.95s
Fixed column generation took 108.75529s
```
2024-11-13 15:42:25 +00:00
chriseth
ff054afd45 Remove redundant function. (#2068) 2024-11-11 20:51:52 +00:00
chriseth
83c5894282 Extract analysis utils. (#2067) 2024-11-11 14:56:39 +00:00
chriseth
ab1f43a1e2 Introduce direct lookup. (#2063)
This PR introduces a more "direct" way to perform a lookup during
witness generation. It removes the concept of `EvalValue`, which is a
list of "updates", and instead requests the called machine to directly
fill in mutable pointers to field elements.

The goal is to use this (hopefully) faster interface if
 - the lookup can be fully solved in a single call
 - no cell-based range constraints are needed

If the LHS of the lookup consists of direct polynomial references (or
next references), the caller can pass a pointer to the final table and
does not need to move data around any further.
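A toy model of the direct interface (hypothetical; the real `process_lookup_direct` works on field elements and machine state): the caller passes mutable cells, some already known, and the machine fills in the unknowns in place instead of returning a list of updates.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Cell {
    Known(u64),
    Unknown,
}

// Toy "machine": given a known input in slot 0, fill slot 1 with its
// square, writing directly through the caller's mutable cells.
// Returns false if the lookup cannot be solved in a single call.
fn process_lookup_direct(cells: &mut [Cell]) -> bool {
    match cells {
        [Cell::Known(x), out @ Cell::Unknown] => {
            *out = Cell::Known(*x * *x);
            true
        }
        _ => false,
    }
}

fn main() {
    let mut cells = [Cell::Known(7), Cell::Unknown];
    assert!(process_lookup_direct(&mut cells));
    assert_eq!(cells[1], Cell::Known(49));
    println!("ok");
}
```

When the LHS is a direct polynomial reference, `cells` can point straight into the final table, so no intermediate `EvalValue` needs to be built or applied.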

Some numbers:

For the "keccak test", and only looking at the PC lookup, we get:

Inside `process_plookup_internal`:

40 ns: preparing the `data` and `values` arrays
290 ns: call to process_lookup_direct
1300 ns: computing the result EvalValue.
2024-11-11 14:35:19 +00:00
Georg Wiese
43de1e02e5 Multiplicity witgen for range constraints (#2062)
Generates multiplicity columns for global range constraints, by doing a
pass over the data at the very end of witness generation.
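The final pass can be sketched like this (hypothetical function, plain integers instead of field elements): for each value column under a global range constraint, count how often each value occurs and use the counts as the multiplicity column.

```rust
// Count occurrences of each value in a range-constrained column.
// `range_size` is the size of the range being checked (e.g. 8 for a
// 3-bit constraint like main::BIT3).
fn range_check_multiplicities(values: &[u64], range_size: u64) -> Vec<u64> {
    let mut multiplicities = vec![0u64; range_size as usize];
    for &v in values {
        // Values are assumed to satisfy the range constraint already.
        multiplicities[v as usize] += 1;
    }
    multiplicities
}

fn main() {
    // e.g. x_low over 4 rows, range-checked against [0, 8):
    let m = range_check_multiplicities(&[1, 1, 7, 0], 8);
    assert_eq!(m, vec![1, 2, 0, 0, 0, 0, 0, 1]);
    println!("ok");
}
```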

```
$ RUST_LOG=debug cargo run pil test_data/std/lookup_via_challenges_range_constraint.asm -o output -f --prove-with plonky3
...
Determined the following global range constraints:
  main::x_low: [0, 7] & 0x7
  main::x_high: [0, 7] & 0x7
Determined the following identities to be purely bit/range constraints:
  Constr::PhantomLookup((Option::None, Option::None), [(main::x_low, main::BIT3)], main::multiplicities);
  Constr::PhantomLookup((Option::None, Option::None), [(main::x_high, main::BIT3)], main::multiplicities_1);
Recorded the following range constraint multiplicity columns:
  main::x_low -> main::multiplicities
  main::x_high -> main::multiplicities_1
Running main machine for 8 rows
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...                                                   Finalizing VM: Main Machine
Transposing 8 rows with 12 columns...
Finalizing remaining rows...
Needed to finalize 8 / 8 rows.
Done transposing.

 == Witgen profile (6 events)
   65.2% (   2.0ms): witgen (outer code)
   33.2% (   1.0ms): Main Machine
    1.6% (  48.1µs): range constraint multiplicity witgen
  ---------------------------
    ==> Total: 3.011167ms
...
```

---------

Co-authored-by: Thibaut Schaeffer <schaeffer.thibaut@gmail.com>
2024-11-11 08:51:13 +00:00
chriseth
6025a6a590 Improve fixed lookup (#2058)
The change from storing just the row to storing the row values changes
the time reported for the fixed machine on the keccak example from 1.5s
to 1.2 seconds.

The pc lookup plus copying the values from the columns took 3700 ns, now
the lookup takes 500 ns.

Using a hash map takes it down to 1 second.

Current numbers:

```
460 ns: Evaluating LHS
621 ns: Splitting into known and unknown values
201 ns: Actual index lookup
1182 ns: creating the EvalValue result
```

Note that the "1.5 to 1.0 seconds" overall speed improvement includes
the time it takes to build the lookup table.
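A sketch of the hash-map index (illustrative types, not the real `FixedLookup`): the fixed columns are indexed once by the tuple of key columns, storing the full row values, so a lookup is a single hash probe plus a copy rather than a scan.

```rust
use std::collections::HashMap;

struct FixedLookupIndex {
    // Key tuple -> full row values (first match wins).
    index: HashMap<Vec<u64>, Vec<u64>>,
}

impl FixedLookupIndex {
    /// `columns` is column-major fixed data; `key_cols` lists which
    /// columns form the lookup key.
    fn build(columns: &[Vec<u64>], key_cols: &[usize]) -> Self {
        let num_rows = columns[0].len();
        let mut index = HashMap::new();
        for row in 0..num_rows {
            let key: Vec<u64> = key_cols.iter().map(|&c| columns[c][row]).collect();
            let values: Vec<u64> = columns.iter().map(|col| col[row]).collect();
            index.entry(key).or_insert(values);
        }
        Self { index }
    }

    fn lookup(&self, key: &[u64]) -> Option<&[u64]> {
        self.index.get(key).map(|v| v.as_slice())
    }
}

fn main() {
    // Column 0 is the key (e.g. a PC), column 1 a looked-up value.
    let columns = vec![vec![0, 1, 2, 3], vec![0, 1, 4, 9]];
    let idx = FixedLookupIndex::build(&columns, &[0]);
    assert_eq!(idx.lookup(&[2]), Some(&[2u64, 4][..]));
    assert_eq!(idx.lookup(&[5]), None);
    println!("ok");
}
```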

---------

Co-authored-by: Leo <leo@powdrlabs.com>
2024-11-08 17:01:24 +00:00
Georg Wiese
2c016d5f63 Generalize multiplicity witgen (#2043)
(mostly cherry-picked from #1958)

This PR generalizes the multiplicity witness generation we had in
`FixedLookup` in the following ways:
- Instead of searching for a column of a specific name, takes the
witness column from the phantom lookups
- As a result, it is also able to handle arbitrarily many lookups
- The witness generation should work for all machine types which can be
connected via lookups: Fixed lookup, block machines, and secondary VMs.

A common use-case that is not yet covered is global range constraints,
as they are not represented as machine calls in witness generation. This
should be fixed in a follow-up PR.

---------

Co-authored-by: chriseth <chris@ethereum.org>
2024-11-08 11:23:49 +00:00
Georg Wiese
b8cfbca41a PIL LogUp implementation: Generate multiplicity column (#2042)
No need to have the user create the multiplicity column anymore.
2024-11-06 15:49:03 +00:00
Thibaut Schaeffer
073572aafa Rust phantom constr (#1945)
Following #1934, adds these new variants to the rust side. A follow up
PR by @georgwiese will actually compute multiplicities in witgen.

---------

Co-authored-by: Georg Wiese <georgwiese@gmail.com>
2024-11-04 16:08:50 +00:00
chriseth
8321a97ff8 Make some types explicit for columns. (#2014)
Add some explicit types or explicit conversions to prepare for the
`ToCol` trait.
2024-11-01 18:11:29 +00:00
Georg Wiese
84467ae4d8 Witgen for public references (#1756)
Step towards #1633

This PR adds witness generation for any public that is referenced from
an identity.

Note that publics and public references are now existing independently:
- A public is still defined as a pointer to a cell in the trace. The
prover extracts the values from the trace and returns them to the
verifier; witgen has nothing to do with them (except providing the
values in the trace).
- A public reference (i.e., a public that is referenced by a constraint)
was previously unimplemented. Now, witgen would solve for this value.
This value might not be the same as the value of the public being
referenced! We don't check for consistency.

After #1633 is completed, publics will no longer be defined in terms of
trace cells, so the values returned by witgen will be the ones that are
returned to the verifier.

For now, the values are not returned yet (and different machines might
find conflicting values for the same public). But the solving works, and
I added a log message, e.g.:
```
$ cargo run pil test_data/pil/fibonacci_with_public.pil -o output -f
Writing output/fibonacci_with_public_analyzed.pil.
done.
Optimizing pil...
Removed 0 witness and 0 fixed columns. Total count now: 2 witness and 1 fixed columns.
Writing output/fibonacci_with_public_opt.pil.
Evaluating fixed columns...
Fixed column generation took 0.001645084s
Writing output/constants.bin.
Deducing witness columns...
Running main machine for 4 rows
[00:00:00 (ETA: 00:00:00)] ░░░░░░░░░░░░░░░░░░░░ 0% - Starting...                                                   
        => out (public) = 5
[00:00:00 (ETA: 00:00:00)] ████████████████████ 100% - Starting...                                                   
Witness generation took 0.00259025s
Writing output/commits.bin.
```
2024-10-31 15:14:34 +00:00
Georg Wiese
5cc9d72cec Improve resizing log message (#1939)
With this change, we check the fraction of used rows in each machine. If
the fraction is above 50%, we don't log anything in INFO level,
otherwise, we suggest that the machines could be configured with a
smaller min_degree.

This is not a warning, because on some backends, it might not be
possible to use VADCOP.

Example:
```
$ cargo run pil test_data/std/poseidon_gl_test.asm -o output -f
...
Only 101 of 256 rows (39.45%) are used in machine 'Main Machine', which is configured to be of static size 256. If the min_degree of this machine was lower, we could size it down such that the fraction of used rows is at least 50%. If the backend supports it, consider lowering the min_degree.
```

---------

Co-authored-by: chriseth <chris@ethereum.org>
2024-10-30 09:33:44 +00:00
chriseth
9aa07d1db7 Abstract solved trait impls (#1970) 2024-10-29 18:13:15 +00:00
Thibaut Schaeffer
fbc2d51184 Make selector non-optional in analyzed (#1950)
We represent lookup and permutation selectors as
`Option<AlgebraicExpression<T>>`.
The issue with this is that there are two ways to represent 1: `Some(1)`
and `None`.
This leads to issues in witgen where we check `is_none` but not against
`Some(1)`.
This already led to an awkward optimizer step which reduces one to the
other so that witgen only sees one of the two options.

This PR makes the selector non-optional. Therefore, 1 is only 1. Display
is adjusted to not print the selector if it prints as "1". This
str-level comparison is there to avoid introducing invasive bounds on T
which would contaminate everything.
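The two-representations problem can be boiled down to this (hypothetical helper names, plain integers instead of expressions):

```rust
// Old representation: Option<expr>, where "always active" can be
// encoded as either None or Some(1).
fn is_unconditional_optional(selector: &Option<u64>) -> bool {
    selector.is_none() // buggy: misses the equivalent Some(1)
}

// New representation: a plain expression defaulting to 1, so there is
// exactly one encoding of "always active".
fn is_unconditional(selector: u64) -> bool {
    selector == 1
}

fn main() {
    assert!(is_unconditional_optional(&None));
    // The bug: Some(1) is also unconditional, but the check says no.
    assert!(!is_unconditional_optional(&Some(1)));
    // Non-optional: 1 is only 1.
    assert!(is_unconditional(1));
    assert!(!is_unconditional(0));
    println!("ok");
}
```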
2024-10-28 14:36:02 +00:00
Leo
28a8022019 change jit logs to debug for now (#1957) 2024-10-25 15:12:39 +00:00
Thibaut Schaeffer
3617c90c14 Rust constr enum (#1941)
`Identity` is currently a struct which acts as a union: all kinds of
identities are represented using the same struct fields, with runtime
checks that the right fields are used.

In the context of #1934 where we add new kinds of identities, it seems
advantageous to move to an enum on the Rust side as well.

Todo:
- [x] reimplement ids
- [x] minimize code duplication between lookups and permutations, if
possible
- [x] remove set_id
- [x] find better way to compare identities
- [x] fix connect
2024-10-25 14:00:37 +00:00
chriseth
bace12eeec Target test for JIT compiler (#1836)
Mainly a test file to see which JIT features we need to support:
https://github.com/powdr-labs/powdr/pull/1623

The "Add functionality benchmark" commit is the only important one here,
the rest are merges from the implementing branches.
2024-10-22 14:34:27 +00:00
Thibaut Schaeffer
0f66c6a755 Integrated plonky3 prover (#1857)
This is a major change to the plonky3 prover to support proving many
machines.

# Sharing costs across tables
- at setup phase, fixed columns for each machine are committed to for
each possible size. This happens in separate commitments, so that the
prover and verifier can pick the relevant ones for a given execution
- for each phase of the proving, the corresponding traces across all
machines are committed to jointly
- the quotient chunks are committed to jointly across all tables

# Multi-stage publics
The implementation supports public values for each stage of each table.
This is tested internally in the plonky3 crate but not end-to-end in
pipeline tests.

---------

Co-authored-by: Leo Alt <leo@powdrlabs.com>
2024-10-17 11:49:09 +00:00
onurinanc
4721b47a63 Witgen for multiplicities in LogUp in PIL (#1686)
Opened this PR to discuss on the change files related to the issue #1573

```
columns: {"main.m_logup_multiplicity": [Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([2, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([2, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }]}
Vec<T> parts: [[Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([2, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([1, 0, 0, 0]) }, Bn254Field { value: BigInt([2, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }]]
String parts: ["main.m_logup_multiplicity"]
columns.len(): 1
witness_cols: [("main.y", [Bn254Field { value: BigInt([2, 0, 0, 0]) }, Bn254Field { value: BigInt([6, 0, 0, 0]) }, Bn254Field { value: BigInt([3, 0, 0, 0]) }, Bn254Field { value: BigInt([7, 0, 0, 0]) }, Bn254Field { value: BigInt([5, 0, 0, 0]) }, Bn254Field { value: BigInt([3, 0, 0, 0]) }, Bn254Field { value: BigInt([7, 0, 0, 0]) }, Bn254Field { value: BigInt([4, 0, 0, 0]) }]), ("main.m_logup_multiplicity", [Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }, Bn254Field { value: BigInt([0, 0, 0, 0]) }])]
```

Using take_witness_col_values inside the FixedLookup machine, we have
the correct values of the `main.m_logup_multiplicity` column. However,
when it comes to witness_cols, the values are changed.
The above issue is fixed.

With this PR, we have a witness generation for multiplicities in LogUp /
bus argument.
2024-10-17 09:49:48 +00:00
ShuangWu121
d4ea38837b change toolchain to nightly (#1908)
Related to [this PR](https://github.com/powdr-labs/powdr/pull/1898):
we need to change to the nightly toolchain to integrate stwo.

I kept the toolchain related to riscv as "nightly-2024-08-01", since it
is handled separately in the workflow, so I made the least change needed
to make stwo integrate-able for now.

Also fixes some clippy issues about the comment format in some files.

---------

Co-authored-by: chriseth <chris@ethereum.org>
2024-10-16 20:01:45 +00:00
chriseth
3b14d3f56d JIT: Implement the degree builtin. (#1899) 2024-10-16 11:15:18 +00:00
chriseth
6f204be638 Some more logging. (#1901) 2024-10-15 20:21:15 +00:00
Georg Wiese
e8e9a9375e Simplify BabyBear memory machine (#1883)
This removes a complicated comparison mechanism introduced in #1874.

Previously, time steps were compared differently from addresses. Now,
both are compared the same way, which is:
- The prover provides a bit indicating whether the high limbs of the
values are equal.
- If this bit is 1, a constraint asserts that this is actually the case.
- The prover provides the 16-bit difference minus one on the limb that
differs. This asserts that the limb value is strictly increasing.
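A sketch of the comparison over plain integers (hypothetical; the real version is expressed as constraints over field elements, with the bit and the difference supplied as prover hints):

```rust
// Check that (hi, lo) strictly increases from `prev` to `next`, given
// the prover-supplied equality bit and 16-bit "difference minus one".
fn check_increasing(
    prev: (u16, u16),
    next: (u16, u16),
    high_eq: u16,
    diff_minus_one: u16,
) -> bool {
    // high_eq must be a bit.
    if high_eq > 1 {
        return false;
    }
    if high_eq == 1 {
        // High limbs really are equal, and the low limb increases by
        // diff_minus_one + 1 >= 1, i.e. strictly.
        prev.0 == next.0
            && next.1 as u32 == prev.1 as u32 + diff_minus_one as u32 + 1
    } else {
        // Otherwise the high limb strictly increases.
        next.0 as u32 == prev.0 as u32 + diff_minus_one as u32 + 1
    }
}

fn main() {
    assert!(check_increasing((3, 5), (3, 9), 1, 3)); // same high limb
    assert!(check_increasing((3, 9), (7, 0), 0, 3)); // high limb grows
    assert!(!check_increasing((3, 5), (3, 5), 1, 0)); // equal is rejected
    println!("ok");
}
```

The "minus one" is what makes the comparison strict: any valid 16-bit witness forces a difference of at least 1 on the distinguishing limb.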
2024-10-10 14:08:15 +00:00
Leo
3364d9186d Memory16 (#1871)
Extracted from https://github.com/powdr-labs/powdr/pull/1790

This PR adds std and witgen support for the new 16-bit limb memory
machine.

- The current version only supports 24-bit addresses. I think it's fine
for now, but we should fix it later.
- Github fails miserably at showing the proper diff, but:
  - the new file `double_sorted_32.rs` is the same as the old unique
`double_sorted.rs`, minus the common parts, which were left in the
outermost `double_sorted.rs` that dispatches calls depending on the field
  - the new file `double_sorted_16.rs` is very similar to the 32-bit case,
but adjusted for 2 value fields.
- There are probably more common things we can extract, but this here is
enough for a first version.

---------

Co-authored-by: Leo Alt <leo@ethereum.org>
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
2024-10-09 17:10:45 +00:00
chriseth
6962d9e521 Use jit compiler to generate fixed column values. (#1825)
Tries to call the JIT compiler on each fixed column when generating the
values of the fixed columns. Uses the interpreter we already have for
all the columns where JIT compilation failed.
2024-10-03 17:10:58 +00:00
ShuangWu121
b13d5e857e fix mersenne31 overflow (#1868)
When running this command:
`cargo run pil test_data/asm/book/hello_world.asm --inputs 0 -o output
--field m31 --prove-with plonky3 -f`
it errors with:
```
thread 'main' panicked at 
powdr/executor/src/witgen/global_constraints.rs:195:37:
attempt to subtract with overflow
```

The problem is in this function:
```
fn smallest_period_candidate<T: FieldElement>(fixed: &[T]) -> Option<u64> {
    if fixed.first() != Some(&0.into()) {
        return None;
    }

    (1..63).find(|bit| fixed.last() == Some(&((1u64 << bit) - 1).into()))
}
```
When using the Mersenne31 field (prime: 2^31 - 1) and the last element of
`fixed` is 0, it is equivalent to 2^31 - 1, so the function returns bit = 31.

bit = 31 then overflows at this line:
`let mask = T::Integer::from(((1 << bit) - 1) as u64);`
in the function `process_fixed_column`, as the literal 1 defaults to i32.

So I added the code below to handle the special case (and made the literal 1u64):

```
if fixed.last() == Some(&0.into()) {
    return None;
}
```
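Putting the quoted pieces together over plain `u64` values rather than field elements (a sketch, not the real generic function):

```rust
// Period-candidate search with the early-out fix: a last element of 0
// can only alias 2^bit - 1 through field reduction (as in Mersenne31),
// so it must not be reported as a period.
fn smallest_period_candidate_fixed(fixed: &[u64]) -> Option<u64> {
    if fixed.first() != Some(&0) {
        return None;
    }
    // The fix: bail out instead of letting Mersenne31's 0 == 2^31 - 1
    // produce bit = 31 and overflow the i32 mask computation later.
    if fixed.last() == Some(&0) {
        return None;
    }
    (1..63).find(|bit| fixed.last() == Some(&((1u64 << bit) - 1)))
}

fn main() {
    assert_eq!(smallest_period_candidate_fixed(&[0, 1, 2, 3]), Some(2));
    assert_eq!(smallest_period_candidate_fixed(&[0, 0, 0, 0]), None);
    assert_eq!(smallest_period_candidate_fixed(&[1, 2]), None);
    println!("ok");
}
```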

---------

Co-authored-by: Georg Wiese <georgwiese@gmail.com>
2024-10-03 14:51:46 +00:00
chriseth
990d3574d9 Return iterator instead. (#1862)
Instead of returning a vector and turning it into an iterator, return an
iterator right away.
2024-10-02 11:02:59 +00:00
Thibaut Schaeffer
0358dec845 Upgrade Rust to 1.81 (#1846)
This is a prerequisite for #1845 

TODO:
- [x] narrow down or remove the dead_code exception
2024-09-27 14:08:26 +00:00
Leandro Pacheco
718c333ff9 Query input/output rework (#1828)
This PR does a few things:
- unify `Input/DataIdentifier` into a single `Input` query that takes
(channel,idx) and returns a field element
- Output query takes (channel, fe)
- change the related prover functions to reflect this
This interface is used by, but is not the same as, the input/output for
the riscv machine.
How to use these to input/output bytes or serialized data is a job of
the runtime implementation.
2024-09-24 13:58:38 +00:00
Georg Wiese
fbc8c8c927 Introduce AlgebraicVariable (#1755)
(First 4 commits of #1650)

This PR prepares witness generation for scalar publics (#1756). Scalar
publics are similar to cells in the trace, but are global (i.e.,
independent of the row number).

With this PR, affine expressions use a new `AlgebraicVariable` enum,
that can be either a column reference (`&'a AlgebraicReference`, which
was used previously), or a reference to a public.
2024-09-23 14:44:04 +00:00
chriseth
06abb2a28a Simplify parsed identities and introduce the concept of proof item. (#1820)
At the on-site, we introduced operators for all lookup-like constraints.
This means that `[x, y] in s $ [a, b];` is not parsed as a lookup
identity any more, but just as a regular expression with binary
operators. The whole concept of a lookup or identity as a rust type is
now only present after the condenser has run. Because of that, we can
remove a lot of code that was concerned with parsed identities.

Since prover functions were introduced, we can have both constraints and
prover functions at statement level. Because of that I extended the
concept of "identity" (which we partly renamed to "constraint" already)
to "proof item". A proof item is either a constraint or a prover
function. Later on, we might also include fixed columns, challenges,
etc.

Co-authored-by: chriseth <chriseth.github@gmail.com>
2024-09-23 12:53:25 +00:00
chriseth
6789fe0271 Remove query functions from asm processing (#1746)
Co-authored-by: Lucas Clemente Vella <lvella@powdrlabs.com>
2024-09-13 13:56:06 +00:00
chriseth
48d7d90d9b Return errors from pil analyzer. (#1758) 2024-09-12 10:04:42 +00:00
chriseth
efd93482e3 Prover functions (#1717) 2024-09-11 10:51:56 +00:00