Previously, to lower z = x * y, we always generated a fresh variable for
z. If x or y was constant, R1CS optimization later killed the fresh
variable.
Now, if x or y is constant, we skip the fresh variable and just scale
the LC.
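For illustration, a minimal sketch of the scaling rule with a toy `Lc` type (the real linear combinations hold field elements, not machine integers):

```rust
use std::collections::HashMap;

/// A linear combination: constant + sum of coeff * var.
#[derive(Clone, Default)]
struct Lc {
    constant: i64,
    terms: HashMap<String, i64>,
}

impl Lc {
    /// Scale the constant and every coefficient by c.
    fn scale(mut self, c: i64) -> Lc {
        self.constant *= c;
        for coeff in self.terms.values_mut() {
            *coeff *= c;
        }
        self
    }
}

// Lowering x * y: if x lowered to a constant c, return y_lc.scale(c)
// (no fresh variable); otherwise introduce a fresh z and constrain
// x_lc * y_lc = z_lc.
```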
* Configuration system. Kill DFL_T
* add circ::cfg::CircCfg that holds cfg info
* it's constructible from circ_opt::CircOpt
* implements clap::Args, so you can set it from your compiler's
CLI/envvars
* defined in external crate to keep clap out of our main build
* organized by circ module, but not feature gated
* no point: the build wouldn't meaningfully change
* includes a way to set the default field
* added circ::cfg::set and circ::cfg::cfg
* also circ::cfg::set_default and circ::cfg::set_cfg
* these access a static configuration held in a sync once_cell
* killed DFL_T
* workflows
* unit-tested components probably should not read circ::cfg::cfg.
* compilers need to call circ::cfg::set or circ::cfg::set_default (see the sketch after this list).
* rm dead features
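For reference, a sketch of the pattern these functions follow (the names come from this change; the exact signatures and once-cell flavor are assumptions):

```rust
use once_cell::sync::OnceCell;

pub struct CircCfg {
    // field choice, per-module options, ...
}

static CFG: OnceCell<CircCfg> = OnceCell::new();

/// Install the configuration; panics if one was already installed.
pub fn set(c: CircCfg) {
    if CFG.set(c).is_err() {
        panic!("circ configuration was already set");
    }
}

/// Read the configuration; panics if `set` was never called.
pub fn cfg() -> &'static CircCfg {
    CFG.get().expect("circ configuration is not set; call set() first")
}
```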
* WIP
* Alternate formatting machinery.
* fmt & lint
* rm Letified
* ilp
This reverts commit 05b793d565.
Right now Z# represents field comparisons and remainders
using 255-bit bit-vectors. Our field-blaster is not intended to be
correct or sound for bit-vectors that are this wide. Previously, a bug
in the field-blaster essentially allowed compilation and proving to
succeed anyway.
With the (reverted) bugfix, the field-blaster crashes.
Ultimately, we want Z# to handle these operations using
range-checks---not bit-vectors. So, I'm reverting the bugfix for now.
Fixes 4 bugs in R1CS lowering:
* division-by-zero in finite fields: previously, this always caused
incompleteness. Now, the behavior depends on the CIRC_RELAXATION
envvar:
* CIRC_RELAXATION=incomplete: divide-by-0 causes incompleteness
* CIRC_RELAXATION=nondet: divide-by-0 allows the output to take any value
* CIRC_RELAXATION=det: divide-by-0 forces the output to 0
* bit-vector overshift: previously, this caused incompleteness. Now,
we follow the semantics of SMT-LIB (the result saturates; see the
model after this list).
* bit-vector udiv-by-zero: previously, the output value was an
unconstrained bit-vector. Now, it is MAX (following SMT-LIB).
* bit-vector comparisons: previously, the prover could lie, claiming
that x < y when actually x >= y. All bit-vector comparisons were
affected, including those in udiv and urem. This was a soundness bug!
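For the overshift and udiv fixes, a small Rust model of the SMT-LIB semantics we now follow (8-bit vectors for concreteness):

```rust
// bvshl: the effective shift saturates at the width, so shifting by >=
// the width clears every bit. bvudiv: division by zero yields all ones
// (MAX), per SMT-LIB.
fn bvshl(x: u8, s: u8) -> u8 {
    if s >= 8 { 0 } else { x << s }
}

fn bvudiv(x: u8, y: u8) -> u8 {
    if y == 0 { u8::MAX } else { x / y }
}

fn main() {
    assert_eq!(bvshl(0xFF, 9), 0);     // overshift: all bits shifted out
    assert_eq!(bvudiv(5, 0), u8::MAX); // udiv-by-zero is MAX
}
```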
* Better serde impl for bellman keys
Bellman keys (pk and vk) expose non-trait read and write functions for
interacting with `io::{Read,Write}`. They don't expose serde impls.
We need to serde them.
Previously, we used the io interface to convert from/to a `Vec<u8>`,
whose serde impl we used.
This was a bad idea b/c the serde impl for `Vec<u8>` is bad: it gets
stuck with the generic serde impl for `Vec`, which serializes each byte
as a separate element, rather than using `(de)serialize_bytes`.
Now, we have serde-impl'd wrapper types for pk and vk that do the right
thing.
By my measurements, this decreases the cost of serializing a pk by about
84%.
* serde_bytes
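The fix, sketched (the `Pk` stand-in and the wrapper name here are hypothetical; the real wrappers wrap bellman's key types):

```rust
use serde::{Serialize, Serializer};
use std::io::{self, Write};

// Stand-in for a bellman key: io-based write, no serde impls.
struct Pk(Vec<u8>);

impl Pk {
    fn write<W: Write>(&self, mut w: W) -> io::Result<()> {
        w.write_all(&self.0)
    }
}

/// Wrapper whose serde impl emits the key as a single byte string.
struct SerPk<'a>(&'a Pk);

impl Serialize for SerPk<'_> {
    fn serialize<S: Serializer>(&self, s: S) -> Result<S::Ok, S::Error> {
        let mut buf = Vec::new();
        self.0.write(&mut buf).map_err(serde::ser::Error::custom)?;
        s.serialize_bytes(&buf) // one bytes value, not a seq of u8s
    }
}
```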
* Slightly better serde implementation for Term
Also, for data-structures thereof.
Crediting Collin, b/c this is based on his PR.
Co-Authored-By: Collin Zhang <collinzrj@gmail.com>
The previous implementation for all 3 structures would separately serialize each term in a precomputation. This was very bad, because precomputations can be quite large, and the different terms typically share large numbers of sub-terms. This can cause quadratic blowup.
Change list:
* add input/output type metadata to PreComp (this makes the type of variables in a PreComp clear)
* add textual serialization and parsing for PreComp (and include this in Computation)
* add serde implementations for Computation and PreComp based on the text formats
* modify the R1cs serde derive so that the Vec<Term> is tuplized before serde
This reduces our prover-key size for verifyEddsa.zok from wildly large (>450GB) to 171MB.
Collin and Anna first observed this bug. Collin's PR #118 works around it (among other improvements).
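To see why sharing matters, here is a toy sketch of DAG-aware flattening (not CirC's actual text format): each distinct node is emitted once, with children referenced by index, so a sub-term shared by many parents is written once rather than once per parent.

```rust
use std::collections::HashMap;
use std::rc::Rc;

enum Term {
    Var(String),
    Add(Rc<Term>, Rc<Term>),
}

enum FlatTerm {
    Var(String),
    Add(usize, usize), // indices of already-flattened children
}

fn flatten(root: &Rc<Term>) -> Vec<FlatTerm> {
    fn go(
        t: &Rc<Term>,
        out: &mut Vec<FlatTerm>,
        idx: &mut HashMap<*const Term, usize>,
    ) -> usize {
        if let Some(&i) = idx.get(&Rc::as_ptr(t)) {
            return i; // shared sub-term: reuse its index
        }
        let flat = match &**t {
            Term::Var(n) => FlatTerm::Var(n.clone()),
            Term::Add(a, b) => {
                let ia = go(a, out, idx);
                let ib = go(b, out, idx);
                FlatTerm::Add(ia, ib)
            }
        };
        out.push(flat);
        let i = out.len() - 1;
        idx.insert(Rc::as_ptr(t), i);
        i
    }
    let mut out = Vec::new();
    go(root, &mut out, &mut HashMap::new());
    out
}
```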
Updated MPC unit tests to use function abstractions. Most of these updates are made to the C frontend and MPC lowering pass. Additional optimization and ILP solving passes need to be updated.
The assignment and cost model passes now need to factor in Array and Tuple semantics. This requires changes to the CostModel assignments. It also requires the def-use-graph implementation for constant-index propagation.
Work-around:
- disabled HyCC benchmark tests
- disabled tests that use Op::Select / Op::Store
We received a benchmark from Lef which essentially checked the
following:
(cond ? A*B : A*C)[i] == v
where A, B, C are 4 by 4 matrices, cond is a bool, i is an index, and v
a value.
The expected number of constraints is order n^3 = 64, but compile time
and constraint count were in the thousands.
I investigated and found a number of things:
* the Z# front-end entered a conditional branch for the ternary, causing
the MM to happen in a conditional context
* while the MM has an oblivious access pattern, since `i` is variable,
the oblivious pass wasn't really helping
* there were some weird non-constant indices floating around like
(ite cond 0 0)
The following three optimizations fix these problems:
* If the Z# frontend isn't in assertion isolation mode, don't enter a
conditional branch for an ITE
* Constant folding: add the rewrite (ite cond a a) -> a (sketched after this list)
* Linear-scan array eliminator: for each operation, check if the index
is constant. If so, skip the scan (for that access only)
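A sketch of the folding rule on a toy term type (CirC's real terms are hash-consed, so equality there is cheap):

```rust
#[derive(Clone, PartialEq)]
enum Term {
    Const(i64),
    Ite(Box<Term>, Box<Term>, Box<Term>),
}

fn fold_ite(t: Term) -> Term {
    match t {
        // (ite cond a a) -> a: both branches identical, condition is moot
        Term::Ite(_, a, b) if a == b => *a,
        other => other,
    }
}
```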
to_r1cs used to return two copies of the r1cs, one inside ProverData and one directly.
This wastes memory with essentially no upside, as far as I can tell. I modified to_r1cs
to return just the ProverData and the VerifierData.
I also pulled the common code from R1cs::prover_data and R1cs::verifier_data into a
function that both of those functions now call.
Finally, I cleaned up zxc to actually write the R1CS out (either as bincode or JSON).
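A sketch of the output step (the helper and its flag are hypothetical; serde_json and bincode 1.x APIs):

```rust
use serde::Serialize;
use std::fs::File;
use std::io::{self, Write};
use std::path::Path;

// Write any serializable R1CS value as bincode or JSON.
fn write_r1cs<T: Serialize>(r1cs: &T, path: &Path, json: bool) -> io::Result<()> {
    let bytes = if json {
        serde_json::to_vec(r1cs).map_err(|e| io::Error::new(io::ErrorKind::Other, e))?
    } else {
        bincode::serialize(r1cs).map_err(|e| io::Error::new(io::ErrorKind::Other, e))?
    };
    File::create(path)?.write_all(&bytes)
}
```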
Initial commit to support function abstractions.
`Computations` maps function names (`str`) to computations (`Computation`).
Updates:
- Frontend `Gen` returns `Computations` with only a single main `Computation`.
- Optimizations take in `Computations` instead of a single `Computation`. Optimizations are applied to each `Computation`.
- Backends take in only the main `Computation`.
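The assumed shape, as a sketch (field and method names are illustrative, not CirC's exact API):

```rust
use std::collections::HashMap;

struct Computation; // IR for a single function

#[derive(Default)]
struct Computations {
    comps: HashMap<String, Computation>,
}

impl Computations {
    // Backends consume only the main computation.
    fn main(&self) -> &Computation {
        &self.comps["main"]
    }
    // Optimizations visit every computation.
    fn for_each_mut(&mut self, f: impl Fn(&mut Computation)) {
        self.comps.values_mut().for_each(|c| f(c));
    }
}
```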
It detects bitwise BV ITEs expressed using an XOR/OR of ANDs with a
NOT. Now it triggers more often.
This cuts a few percentage points (~10%) worth of constraints on ZoK's
shaRound circuit.
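Concretely, the pattern being detected (shown on u8 for clarity): with a mask m, (a & m) | (b & !m) is a bitwise mux, and XOR works in place of OR because the two ANDed operands have disjoint set bits.

```rust
fn main() {
    let (a, b, m): (u8, u8, u8) = (0b1010_1010, 0b0101_0101, 0b1111_0000);
    // OR and XOR agree: the ANDed operands share no set bits
    assert_eq!((a & m) | (b & !m), (a & m) ^ (b & !m));
    // result takes a's bits where m is 1 and b's bits where m is 0
    assert_eq!((a & m) | (b & !m), 0b1010_0101);
}
```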
Reduce Restrict::restrict_to_inputs from O(n^2) to O(n)
It needs to compute the set of terms which depend only on inputs in S.
It did this via *separate* containment checks for each term.
But, of course, there is a simple graph traversal that does O(1) work
per term.
I'm taking this patch now b/c I need it. All credit to Evan.
It passes our ZK tests, and I wrote a few unit tests.
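A sketch of the linear version on a toy term type: one memoized traversal decides, per distinct node, whether it depends only on inputs in S, doing O(1) work per node.

```rust
use std::collections::{HashMap, HashSet};
use std::rc::Rc;

enum Term {
    Var(String),
    Op(Vec<Rc<Term>>),
}

// True iff `t` depends only on variables in `s`; each node is computed once.
fn only_on(t: &Rc<Term>, s: &HashSet<String>, memo: &mut HashMap<*const Term, bool>) -> bool {
    if let Some(&b) = memo.get(&Rc::as_ptr(t)) {
        return b;
    }
    let b = match &**t {
        Term::Var(name) => s.contains(name),
        Term::Op(children) => children.iter().all(|c| only_on(c, s, memo)),
    };
    memo.insert(Rc::as_ptr(t), b);
    b
}
```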
Co-authored-by: Evan Laufer <evan.m.laufer@gmail.com>