* put smt2 files in temp dir
Now that we have a proper per-run temp directory, we can put
generated smt2 files there, so that they do not clutter the main
temp directory and will be cleaned up when Picus finishes execution.
* feat: add --noclean to prevent cleaning
Picus originally never cleaned up temporary files. However, the last
commit changed this so that temporary files are cleaned up. Sometimes,
though, we really do not want to clean up files (e.g., to keep generated
smt2 files for debugging purposes). This commit adds a flag to control
whether the cleanup runs.
This commit adds the capability to process circom files directly
via the `--circom` flag. This is done by creating a
temporary directory to store the compiled r1cs file.
The temporary directory is cleaned up when Picus exits normally.
However, when Picus crashes or is interrupted,
the temporary directory is left in place, giving the user an
opportunity to inspect the files.
Prior to this commit, we could not write tests that expect a timeout.
This commit adds the ability to do so.
Another major change is to support tests with an optimization flag.
When Circom is given -O2, Circom files without public inputs
(which is all of our benchmarks) will fail to compile correctly.
This commit adds a heuristic to detect this situation,
and then "patch" the Circom file to add public inputs.
This is done by running the Circom compilation twice.
The first compilation allows us to read information from the R1CS file,
which is then used for patching. The second compilation compiles
the patched file.
Prior to this commit, variables were repeatedly appended to the back of
a list, causing quadratic running time. Since ordering does not matter,
this commit appends to the front instead.
Note that the algorithm overall is still potentially quadratic.
A proper fix would need to change the interface significantly to
introduce an additional accumulator. This commit simply makes a minimal
fix for the immediate issue we are having,
which causes CompConstant to get stuck.
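As an illustration (a Python sketch, not the actual Picus code), tuples can emulate Racket cons cells to show why appending to the back of an immutable list is quadratic while consing onto the front is constant time per step:

```python
# Python tuples play the role of Racket cons cells here.

def append_back(n):
    xs = []
    for x in range(n):
        xs = xs + [x]        # copies all existing elements: O(n) per step
    return xs

def prepend_front(n):
    acc = None               # empty list, like Racket's '()
    for x in range(n):
        acc = (x, acc)       # shares the tail: O(1) per step
    return acc

assert append_back(3) == [0, 1, 2]
assert prepend_front(3) == (2, (1, (0, None)))  # same elements, reversed order
```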
This commit fixes the problem where an interrupt does not kill a
solver spawned by Picus, leaving many solvers running concurrently in
the background. Since the fix needs to be applied to three files
that communicate with cvc5, Z3, and cvc4, I also took the opportunity
to refactor them and unify them into one common file.
We should avoid building the resulting query string with `format`
repeatedly, because `format` creates an entirely new string on
each invocation. When the query is big, this cost is
prohibitively expensive.
This commit instead builds the query string by writing to a
string port directly (similar to the StringBuilder idiom in languages
like Java). As the fix needs to be applied to Z3, cvc4, and cvc5, I
also take the opportunity to refactor them to reduce code redundancy.
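A sketch of the idiom in Python, where io.StringIO plays the role of Racket's string port: writes append to one internal buffer rather than allocating a whole new string per piece.

```python
import io

def build_with_format(parts):           # quadratic: a new string each step
    query = ""
    for p in parts:
        query = "{}{}".format(query, p)
    return query

def build_with_port(parts):             # linear: write into one buffer
    port = io.StringIO()                # analogous to open-output-string
    for p in parts:
        port.write(p)
    return port.getvalue()

parts = ["(assert ", "(= x 1)", ")"]
assert build_with_format(parts) == build_with_port(parts) == "(assert (= x 1))"
```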
Instead of calling `rcmds-ref` in a loop, which takes quadratic time,
directly iterate over three consecutive elements simultaneously,
making the traversal linear.
This change keeps the large benchmark from getting stuck at the aboz lemma.
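A Python sketch of the linear-time traversal: with linked lists, each indexed lookup costs O(i), so a loop over indices is quadratic, while walking three aligned iterators visits each element once.

```python
# Walk three consecutive elements at a time in one linear pass,
# instead of indexing into the sequence inside a loop.

def triples(xs):
    return list(zip(xs, xs[1:], xs[2:]))

assert triples([1, 2, 3, 4]) == [(1, 2, 3), (2, 3, 4)]
```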
Previously, we used `(list->set (set->list ks))` to copy the set `ks`,
but this is in fact not needed at all: `ks` is an immutable set and
cannot be structurally mutated (although its binding can be rebound).
The copying is especially harmful when done in a hot loop
(the `update` function in particular), since the known and unknown sets
can be very large.
This change keeps the large benchmark from getting stuck at the basis2 lemma.
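A Python analogy using frozenset as a stand-in for a Racket immutable set: the value itself can never be structurally mutated, so handing it out directly is safe and the round-trip copy buys nothing.

```python
ks = frozenset({1, 2, 3})
copied = frozenset(list(ks))   # the analogue of (list->set (set->list ks))
assert copied == ks            # an O(n) round-trip with no observable effect
```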
Prior to this commit, the linear lemma depended on the CDMAP and RCDMAP
procedures.
Let's say that constraint i involves variables Xi and Yi, where the Xi
variables are the linear part and the Yi variables are the non-linear part.
That is to say, the input follows this format:
Constraint 1: X11 X12 ... Y11 Y12 ...
Constraint 2: X21 X22 ... Y21 Y22 ...
...
Constraint n: Xn1 Xn2 ... Yn1 Yn2 ...
which is of size O(sum |Xi| + sum |Yi|).
The CDMAP procedure produces a map from a variable to a list of lists of
variables, meaning: the key can be uniquely determined if all variables
in one of the lists are uniquely determined.
Thus, CDMAP produces data of size O(sum { |Xi| (|Xi| + |Yi|) }) as the
output (where values of the same key are grouped together):
X11: X12 X13 ... Y11 Y12 ...
X12: X11 X13 ... Y11 Y12 ...
...
X1k: X11 X12 ... Y11 Y12 ...
X21: X22 X23 ... Y21 Y22 ...
X22: X21 X23 ... Y21 Y22 ...
...
.....
Xn1: Xn2 Xn3 ... Yn1 Yn2 ...
Xn2: Xn1 Xn3 ... Yn1 Yn2 ...
...
Xnk: Xn1 Xn2 ... Yn1 Yn2 ...
Often, some clauses have O(n) variables, making the size
O(n^3) if the table is dense, or O(n^2) if the table is sparse.
The RCDMAP procedure essentially consumes the map that the CDMAP
procedure outputs and produces the inverse of that map, with duplicate
values of the input map grouped together. Its size is thus the same.
RCDMAP is also used in the counter selector. E.g., given:
1 2 3 | 4 6
1 3 5 | 4 6
Then CDMAP produces:
1: [2 3 4 6, 3 5 4 6]
2: [1 3 4 6]
3: [1 2 4 6, 1 5 4 6]
5: [1 3 4 6]
And RCDMAP produces:
1 3 4 6: [2, 5]
2 3 4 6: [1]
3 5 4 6: [1]
1 2 4 6: [3]
1 5 4 6: [3]
The counter selector counts that 5 occurs 2 times among the keys,
so it is weighted 2; 6, by contrast, occurs 5 times, so it is weighted 5.
-------------------------------------------------------------------
This commit aims to improve performance by keeping the data size
linear. It also aims to preserve the existing behavior as much as
possible.
This is done by removing the CDMAP and RCDMAP procedures,
and instead employing the straightforward encoding:
X11 X12 ... | Y11 Y12 ...
X21 X22 ... | Y21 Y22 ...
...
Xn1 Xn2 ... | Yn1 Yn2 ...
which has linear size.
When a variable is uniquely determined, we simply cross it off from the
above table. A variable becomes propagable when it is the only one left
in the linear part and the non-linear part is empty.
For the counter selector, we produce the counter without expanding the
encoding beyond linear size. This can be done by simple multiplication:
in (Xi, Yi), each variable in Xi contributes to the other variables in Xi
exactly |Xi|-1 times (not counting itself), and each variable in Yi
contributes to the variables in Xi exactly |Xi| times.
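The multiplication-based counter can be sketched as follows (a Python illustration; each row is a pair (X, Y) of linear and non-linear variables, and the names are mine, not from the Picus code):

```python
from collections import Counter

def weights(rows):
    c = Counter()
    for X, Y in rows:
        for v in X:
            c[v] += len(X) - 1   # pairs with every other linear variable
        for v in Y:
            c[v] += len(X)       # pairs with every linear variable
    return c

# The concrete example from the discussion: two constraints,
# 1 2 3 | 4 6  and  1 3 5 | 4 6.
rows = [({1, 2, 3}, {4, 6}), ({1, 3, 5}, {4, 6})]
w = weights(rows)
assert w[5] == 2   # agrees with RCDMAP
assert w[6] == 6   # differs from RCDMAP's 5, as explained below
```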
The results are not the same, however. Consider the earlier concrete example:
Our encoding:
1 2 3 | 4 6
1 3 5 | 4 6
RCDMAP:
1 3 4 6: [2, 5]
2 3 4 6: [1]
3 5 4 6: [1]
1 2 4 6: [3]
1 5 4 6: [3]
The weight of 5 in our encoding and in RCDMAP agree: 2.
However, the weights of 6 do not agree:
our encoding produces weight 6, while RCDMAP produces weight 5.
Arguably, our weight is more "correct", because it takes the value
part into account. I.e., in "1 3 4 6: [2, 5]", 6 contributes twice
(once for 2 and once for 5), so it should be counted twice.
Separately, I also believe that the counter selector is not ideal,
and plan to revamp the strategy soon, so the discrepancies will not
matter anyway.
In the basis2 lemma, set-remove (which removes a single element) was
used where set-subtract (which removes a set of elements) was intended.
Previously, Picus got away with this incorrect result because the linear
lemma would "clean up" afterward by syncing the known and unknown sets.
However, since PR #18, we no longer do this syncing,
which uncovered this issue. This commit fixes the problem.
PR #16 incorrectly recorded the expected results of some tests because I
ran them with a different optimization flag. This commit fixes the issue
and adds all other applicable tests in circomlib. These tests take less
than 5 minutes to run in total, so they are not too long to run as part of CI.
Instead of using list-ref in a loop (which takes quadratic time)
and list membership in a loop (which also takes quadratic time),
switch to direct element iteration and set membership queries.
This reduces the running time from more than 10 minutes (not fully
measured; I terminated the run early rather than wait) to 0.6s.
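A Python sketch of the membership half of the change: repeated list membership tests scan the list each time, while a set built once answers each query in constant time.

```python
def count_hits_slow(items, candidates):
    return sum(1 for c in candidates if c in items)   # list scan per query

def count_hits_fast(items, candidates):
    seen = set(items)                                 # build once: O(n)
    return sum(1 for c in candidates if c in seen)    # hash lookup per query

items = [3, 1, 4, 1, 5]
candidates = [1, 2, 3]
assert count_hits_slow(items, candidates) == count_hits_fast(items, candidates) == 2
```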
This commit renames the existing --verbose to --cex-verbose,
and makes --verbose control the non-counterexample output,
which could be overwhelming on a very large circuit.
Similar to --cex-verbose, --verbose has three levels:
- 0: hides "large" outputs entirely.
- 1: shows the output, but displays ... when the output is too large.
- 2: shows the full output.
* chore: remove dead code
* fix: make constraint generation more efficient
Prior to this commit, we used a list as the data structure to keep
inputs and then accessed them by index, which is inefficient.
This commit fixes the problem by converting the list to a vector
for vector indexing. On the performance benchmark, this reduces
the running time for constraint generation from 8 minutes to 1 second.
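A Python sketch of why the conversion helps, emulating Racket's linked lists with nested pairs: list_ref walks i links per lookup, while converting once to an array gives O(1) indexing afterwards.

```python
def list_ref(cell, i):          # O(i) walk, like Racket's list-ref
    while i > 0:
        cell = cell[1]
        i -= 1
    return cell[0]

def to_vector(cell):            # one O(n) pass, like list->vector
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

cells = (10, (20, (30, None)))
assert list_ref(cells, 2) == 30
assert to_vector(cells)[2] == 30   # constant-time indexing from here on
```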
Prior to this commit, `read-r1cs` copied bytes from the file over and
over again in a hot loop, making the time complexity quadratic.
This commit switches to index-based access in the hot loop instead,
resulting in a large performance improvement.
The benchmark file that accompanies this fix previously took 30 minutes
to parse successfully (according to @shankarapailoor).
This commit reduces the parsing time to 2 seconds.
It should be noted that not all byte copying is avoided:
byte copying outside the hot loop, although not ideal,
does not really impact the performance.
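The fix pattern, sketched in Python (this is not the actual r1cs layout, just the general shape): the quadratic version re-slices the remaining bytes every iteration, while the linear version only advances an index into the original buffer.

```python
def read_u32s_quadratic(buf):
    out = []
    while buf:
        out.append(int.from_bytes(buf[:4], "little"))
        buf = buf[4:]                 # copies everything that remains
    return out

def read_u32s_linear(buf):
    return [int.from_bytes(buf[i:i + 4], "little")
            for i in range(0, len(buf), 4)]   # fixed-size slices only

data = (1).to_bytes(4, "little") + (2).to_bytes(4, "little")
assert read_u32s_quadratic(data) == read_u32s_linear(data) == [1, 2]
```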
On benchmarks that are not quickly solved, I kept encountering the
error:
subprocess: process creation failed
system error: Too many open files; errno=24
This happens because ports are not properly closed when the
corresponding process is killed. This commit fixes the problem.
In the long run, it would be better to restructure these modules to
share common code, so that the fix can be made in a single place.
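A hypothetical wrapper sketching the fix in Python (the function name is mine): when killing a solver process, also close its ports so the pipe file descriptors are released and "Too many open files" (errno=24) cannot accumulate.

```python
import subprocess
import sys

def kill_and_close(proc):
    proc.kill()
    proc.wait()
    for port in (proc.stdin, proc.stdout, proc.stderr):
        if port is not None:
            port.close()          # without this, the fd leaks

proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)
kill_and_close(proc)
assert proc.stdout.closed and proc.stdin.closed
```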
- Prior to this commit, every job depended on `publish-docker`, which
only runs on push. Therefore, all jobs were skipped. This commit fixes
the issue by removing the dependency.
- `publish-docker` now runs as the last step, and only when all tests pass.
- All subsequent jobs now run on veridise/picus:git-latest,
so that we do not need to push first.
- The tests are slightly more sophisticated. Previously, they only
ensured that there was no error. Now, we also check against the
expected output (underconstrained or not).
- Switch to the Racket and Rosette that are already installed in the
image. It turns out that we need to set the environment variable
`PLTADDONDIR` for this to work, because $HOME is overridden in containers
in GHA (https://github.com/actions/runner/issues/863).
- Remove a job that only compiles circomlib;
there is no point in doing that.
This commit adds a flag `--verbose` for the verbosity level,
which supplants the flag `--raw-output` (or, previously, `--map`).
- When the verbosity level is 0 (not verbose),
the output is in the circom variable format.
- When the verbosity level is 1, the output is a mix of the circom
variable format and the r1cs signal format, preferring the circom
variable format whenever possible.
- When the verbosity level is 2, the output is always in
the r1cs signal format.
For `--verbose 0`, the output now has three sections: inputs, first
possible outputs, and second possible outputs.
For other verbosity levels, there can be four sections. The extra
section is "other bindings".
Entries that differ between the first and second possible outputs
are further highlighted with ANSI escape sequences.
Examples:
With `--verbose 0`:
```
# inputs:
# m1.main.inp: 0
# first possible outputs:
# m1.main.out[0]: 0
# m1.main.out[1]: 0
# m1.main.success: 0
# second possible outputs:
# m2.main.out[0]: 1
# m2.main.out[1]: 0
# m2.main.success: 1
```
With `--verbose 1`:
```
# inputs:
# m1.main.inp: 0
# first possible outputs:
# m1.main.out[0]: 0
# m1.main.out[1]: 0
# m1.main.success: 0
# second possible outputs:
# m2.main.out[0]: 1
# m2.main.out[1]: 0
# m2.main.success: 1
# other bindings:
# one: 1
# p: 21888242871839275222246405745257275088548364400416034343698204186575808495617
# ps1: 21888242871839275222246405745257275088548364400416034343698204186575808495616
# ps2: 21888242871839275222246405745257275088548364400416034343698204186575808495615
# ps3: 21888242871839275222246405745257275088548364400416034343698204186575808495614
# ps4: 21888242871839275222246405745257275088548364400416034343698204186575808495613
# ps5: 21888242871839275222246405745257275088548364400416034343698204186575808495612
# zero: 0
```
With `--verbose 2`:
```
# inputs:
# x4: 0
# first possible outputs:
# x1: 0
# x2: 0
# x3: 0
# second possible outputs:
# y1: 1
# y2: 0
# y3: 1
# other bindings:
# one: 1
# p: 21888242871839275222246405745257275088548364400416034343698204186575808495617
# ps1: 21888242871839275222246405745257275088548364400416034343698204186575808495616
# ps2: 21888242871839275222246405745257275088548364400416034343698204186575808495615
# ps3: 21888242871839275222246405745257275088548364400416034343698204186575808495614
# ps4: 21888242871839275222246405745257275088548364400416034343698204186575808495613
# ps5: 21888242871839275222246405745257275088548364400416034343698204186575808495612
# zero: 0
```
Since 368f3c3, cvc5 can produce a finite field literal, in the format:
#f <value> m <mod-value>
See 368f3c3ed6
This commit adds support for finite field literals.
It also refactors the existing model parsing to use the S-expression
reader that is already built into Racket.
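A hypothetical parser for the literal format described above, sketched in Python; the exact spacing around the tokens and the reduction modulo the modulus are my assumptions, not something confirmed by this commit.

```python
import re

# Matches #f <value> m <mod-value>, with or without spaces (an assumption).
FF_LITERAL = re.compile(r"#f\s*(\d+)\s*m\s*(\d+)")

def parse_ff(token):
    m = FF_LITERAL.fullmatch(token.strip())
    if m is None:
        return None
    value, modulus = int(m.group(1)), int(m.group(2))
    return value % modulus, modulus   # reducing mod the modulus is an assumption

assert parse_ff("#f 3 m 5") == (3, 5)
assert parse_ff("#f7m5") == (2, 5)
```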
Prior to this commit, the counterexample output format defaulted to r1cs
signals (although the help page incorrectly said otherwise).
This commit fixes the help page and at the same time switches
the default format to circom variables, since that format is more
readable for clients. The flag `--raw-output` can still be used to view
the r1cs signals, which can be useful for debugging.