Analogously to our Plonky3 BabyBear support, I added support for M31 using the parameters from [their Keccak
example](7c5deb0eab/keccak-air/examples/prove_m31_poseidon2.rs).
To test:
```bash
cargo run pil test_data/pil/fibonacci.pil -o output -f --field m31 --prove-with plonky3
```
All tests passed. :)
Operations:
- `shl<0> A1, A2, B -> C1, C2`
- `shr<1> A1, A2, B -> C1, C2`
- `A1` and `A2` are the 16-bit limbs of the 32-bit `A`, in little-endian order; likewise for `C1` and `C2`.
Implementation:
- We adopted an implementation similar to our prior shift machine, which
decomposes `A` into 4 bytes and looks up each byte in a lookup table of
`[A_byte, B (shift amount), block row, operation id]`, so the size of
the lookup table is `256 * 32 * 4 * 2 = 65536`. Each row looks up the
shifted result of its byte, and the results are added together to
obtain `C`.
- In our design, instead of looking up a 32-bit `C` column, we look up
two 16-bit `C1` and `C2` columns. Overall, there are more witness
columns, due to the decomposition into 16-bit limbs in the main shift
machine, and one more fixed column in the lookup table, but the same
number of lookups is performed.
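As a sketch (not the actual machine code, just the arithmetic identity the lookup design relies on), a 32-bit left shift can be computed byte by byte, because shifting distributes over the byte decomposition modulo 2^32; the function name is hypothetical:

```rust
// Illustrative only: each loop iteration corresponds to one row's lookup
// of [A_byte, B, block row i, operation id] in the table described above.
fn shl_via_bytes(a: u32, shift: u32) -> (u16, u16) {
    assert!(shift < 32);
    let mut c: u32 = 0;
    for i in 0..4 {
        let byte = (a >> (8 * i)) & 0xff;
        // The shifted per-byte contribution is what the lookup provides.
        c = c.wrapping_add((byte << (8 * i)).wrapping_shl(shift));
    }
    // C is exposed as two 16-bit little-endian limbs C1, C2.
    (c as u16, (c >> 16) as u16)
}
```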
Future optimization:
- There's ample room for "reshaping" the main machine to have more
columns but fewer rows, doing more lookups in each row so that we are
not processing just one byte per row. In the most aggressive case, we
could even process everything in a single row.
- For example, processing two bytes per row (instead of one) should
halve the number of rows while less than doubling the number of
columns, and therefore reduce the total number of cells. This should
benefit provers whose time is linear in the number of cells.
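As a back-of-the-envelope check of this claim (the column counts below are made up purely for illustration, not taken from the real machine):

```rust
// Total cells in a machine trace is rows times columns.
fn total_cells(rows: u64, cols: u64) -> u64 {
    rows * cols
}

// Processing two bytes per row: half the rows, `extra_cols` additional
// columns. This wins whenever extra_cols < cols_one_byte, i.e. whenever
// the column count less than doubles.
fn fewer_cells_when_batching(rows: u64, cols_one_byte: u64, extra_cols: u64) -> bool {
    total_cells(rows / 2, cols_one_byte + extra_cols) < total_cells(rows, cols_one_byte)
}
```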
---------
Co-authored-by: onurinanc <e191322@metu.edu.tr>
Adds Plonky3's implementation of the Mersenne-31 field.
To test:
```
cargo run pil test_data/pil/fibonacci.pil -o output -f --field m31
```
The implementation is basically the same wrapper as for BabyBear, so I
moved the code to a macro and used it for both fields.
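A minimal sketch of the macro pattern described, assuming a naive modular-arithmetic wrapper; the macro name, struct layout, and methods are illustrative, not powdr's actual API:

```rust
// One declarative macro generates the wrapper type for each field;
// only the name and modulus differ between instantiations.
macro_rules! field_wrapper {
    ($name:ident, $modulus:expr) => {
        #[derive(Clone, Copy, Debug, PartialEq, Eq)]
        struct $name(u64);
        impl $name {
            const MODULUS: u64 = $modulus;
            fn new(v: u64) -> Self {
                Self(v % Self::MODULUS)
            }
            fn add(self, other: Self) -> Self {
                // Both operands are < 2^31, so the u64 sum cannot overflow.
                Self::new(self.0 + other.0)
            }
        }
    };
}

field_wrapper!(BabyBear, 0x7800_0001); // 2^31 - 2^27 + 1
field_wrapper!(Mersenne31, 0x7fff_ffff); // 2^31 - 1
```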
On the Keccak test (and with the `RUSTFLAGS='-Ctarget-cpu=native'`
flag), this brings the proof time down from 7.2 s to 4.4 s. The share
of time spent computing the quotient polynomial drops from more than
50% for some machines to below 10% for all machines.
We currently hardcode the range of degrees that variable-degree
machines are preprocessed for. Expose that in the machines instead.
This changes pil namespaces to accept a min and max degree:
```
namespace main(123..456);
namespace main(5); // allowed for backward compatibility, translates to `5..5`
```
It adds two new builtins:
```
std::prover::min_degree
std::prover::max_degree
```
And sets the behavior of the `std::prover::degree` builtin to only
succeed if `min_degree` and `max_degree` are equal.
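For example (a sketch of how the builtins would be used; the namespace body here is illustrative):

```
namespace main(16..1024);
    // Both builtins succeed even though the bounds differ:
    let min = std::prover::min_degree(); // 16
    let max = std::prover::max_degree(); // 1024
    // std::prover::degree() would fail here; it only succeeds when
    // the bounds coincide, e.g. under `namespace main(256);`.
```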
This PR fixes issue #1728.
1) Removes the blank space that was printed even when the body of a
block was empty.
2) Adds the `Precedence` trait to `LambdaExpression`s to correctly
handle the use of parentheses.
The input to the syscall is a memory pointer to the state array of 25
Goldilocks field elements. The output is calculated in place.
All functions outside of `keccakf` are executed in Rust. We might need
to delete everything except `keccakf` from `keccakf.asm` (including the
padding, updating, and byte <-> u64 conversions).
Not tested. Should I wait until all the needed infrastructure is merged?
Technically, we could also do the following for the machine, and I
think it can already be tested:
```
pol commit input_state[25];
pol commit output_state[25];
operation keccakf_permutation<0> input_state[0], input_state[1], ..., input_state[24] -> output_state[0], output_state[1], ..., output_state[24]
array::zip(input_state, output_state, |input_state, output_state| keccakf(input_state) - output_state = 0);
```
Small PR that adds the `Hash` trait to `Type` and some of its internal
types.
This is needed to be able to store `Vec<Type>` in hash-based
collections in order to resolve trait implementations based on their
`type_args`.
This PR parameterises `Expression`s so that we can choose when they can
be converted into a `StructExpression` and when they cannot. This
modification allows us to resolve ambiguities with structs and to adopt
the standard struct syntax:
```
struct Point { x: int, y: int }
let p: Point = Point{ x: 1, y: 2 };
```
This PR replaces #1591 so that these modifications can be included
before merging structs.
---------
Co-authored-by: chriseth <chris@ethereum.org>