- Remove all crates that are not linked to `autoprecompiles`,
`openvm`, or `constraint-solver`
- Remove tests and artifacts linked with removed crates
- Adjust CI
When installing `cargo-openvm` for the openvm-reth related tests, the
installation pulls fresh versions of the dependencies that are not
pinned. Some of these dependencies only work with Rust >= 1.88.
I made a [test PR in
powdr-labs/openvm](https://github.com/powdr-labs/openvm/pull/37), which
the changes here point to in order to get the `cargo-openvm` crate.
Merging the PR above would force us to change powdr as well, so for now
it may be enough to merge this PR to get the test passing, and then
update the openvm hash in powdr together with the Rust version in a
follow-up PR.
Based on commit 1dbe4db
- Split into two crates, lib and cli
- Upgrade stwo; one stwo test is marked `should_panic` @ShuangWu121
- Various clippy and fmt fixes linked to the Rust version update
- Bring all Rust versions to 2025-05-14. CI still installs other
versions for openvm, which uses them internally. The stable Rust version
we test on is bumped to 1.85.
- Remove `examples` and related tests, which test the powdr crate on the
previous version of powdr (since it uses another nightly). Happy to
discuss this if it's important @leonardoalt
Instead of setting `mtime` to the committed date, which @lvella noted is
error-prone, this PR does the following so that we can utilize the build
cache:
- check out the commit the build cache was built for
- set mtime to the time the cache was built
- check out the commit we are running the tests on
This hopefully has the effect that all modified files will have the
current time as their mtime and will trigger rebuilds correctly.
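The intended effect can be sketched as follows (Python, purely illustrative; the file names and timestamps are made up, and the real steps run `git checkout` rather than rewriting files directly): files unchanged since the cache was built keep the cache's build time as mtime, while files modified since then get a newer mtime and therefore trigger a rebuild.

```python
import os
import tempfile
import time

# Purely illustrative: simulate the three cache-restore steps on a
# scratch directory instead of a real git checkout.
cache_time = time.time() - 3600  # pretend the cache was built an hour ago

with tempfile.TemporaryDirectory() as repo:
    unchanged = os.path.join(repo, "unchanged.rs")
    modified = os.path.join(repo, "modified.rs")
    for path in (unchanged, modified):
        with open(path, "w") as f:
            f.write("// source file\n")
    # Steps 1+2: as if we checked out the commit the cache was built for
    # and set every file's mtime to the time the cache was built.
    for path in (unchanged, modified):
        os.utime(path, (cache_time, cache_time))
    # Step 3: checking out the commit under test rewrites only the files
    # that differ, giving them the current time as mtime.
    with open(modified, "w") as f:
        f.write("// edited since the cache was built\n")
    # A build tool comparing mtimes now rebuilds only the modified file.
    needs_rebuild = [os.path.basename(p) for p in (unchanged, modified)
                     if os.path.getmtime(p) > cache_time + 1]

print(needs_rebuild)  # only the file edited after the cache was built
```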
This PR increases the number of `test_slow` PR test bins from 8 to 17,
thereby strictly reducing the time spent on `test_slow`.
We have three types of runners:
- `warp-ubuntu-2404-x64-8x` for: build, run_examples, bench. I think
this is a paid service (https://www.warpbuild.com/), which @leonardoalt
last changed in #2187, so I have a question: why don't we use the free
GitHub Workflow runners? Besides, I think we can parallelize
`run_examples`, which is currently a 38-minute bottleneck run
single-threaded, so there could be an immediate time improvement if we
ran it on multiple runner instances. Is this service charged by the
number of runners or the number of minutes?
- `ubuntu-24.04` for: test_quick, test_estark_polygon, test_slow.
- `ubuntu-22.04` for: udeps.
**BEFORE Optimization**
<img width="346" alt="Screen Shot 2025-03-24 at 17 25 11"
src="https://github.com/user-attachments/assets/b197eca3-9994-4113-8f4d-1e7b106b064c"
/>
**AFTER Optimization**
See run time of this PR.
**Future Optimization**
In #2580, I'm working on creating 14 bins with 4 threads each because we
have 55 tests in total and each can run on a separate thread. This would
reduce the `test_slow` run time to that of the longest test, which is
~10 minutes.
This requires computing the bin-thread assignment because `nextest` only
supports the hash method for partitions, which isn't ideal in this case.
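For illustration, here is a minimal sketch of how such a bin-thread assignment could be computed (Python; the test names and durations are invented, and this is not what #2580 necessarily implements): greedy longest-processing-time scheduling places each test on the currently least-loaded bin, so the overall wall time approaches that of the longest single test.

```python
import heapq

def assign(tests, num_bins):
    """Greedy LPT scheduling: place each test, longest first, on the
    least-loaded bin. `tests` maps test name -> estimated duration;
    returns a list of (load, index, [test names]) per bin."""
    bins = [(0.0, i, []) for i in range(num_bins)]
    heapq.heapify(bins)
    for name, duration in sorted(tests.items(), key=lambda kv: -kv[1]):
        load, i, names = heapq.heappop(bins)  # least-loaded bin so far
        names.append(name)
        heapq.heappush(bins, (load + duration, i, names))
    return sorted(bins, key=lambda b: b[1])

# Invented durations in minutes, just to show the balancing behavior.
durations = [10, 9, 8, 7, 3, 3, 2, 2, 1, 1]
tests = {f"test_{i}": d for i, d in enumerate(durations)}
for load, _, names in assign(tests, 4):
    print(load, names)
```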
Extracts the common code in single step processor and block processor
into a unified "processor".
---------
Co-authored-by: Georg Wiese <georgwiese@gmail.com>
This PR allows for free runtime data besides free compile time data,
based on `initial_memory`.
- Replaces `cbor` with `bincode` in prover queries (bincode is more
efficient, and cbor crashed with `u128`)
- Allows the prover to pass a runtime initial memory, only possible with
continuations (from https://github.com/powdr-labs/powdr/pull/2251, was
already merged into here, see commits list)
- Changes the `powdr` lib, which already always uses continuations, to
always use this mechanism for prover data
- Provides a new stdin-stream-like function to read inputs in sequence,
like other zkVMs.
- The function above is called `read_stdin`, which I'm not super happy
with; ideally it'd just be called `read`, but the QueryCalldata function
is already called `read`. I think we could keep this as is and change it
later.
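A minimal sketch of what such a stdin-stream-like mechanism can look like (Python, purely illustrative; powdr's actual implementation uses `bincode` and `initial_memory`, and the API differs): the host serializes inputs in sequence, and the guest reads them back one at a time with a cursor.

```python
import io
import struct

class InputStream:
    """Toy stdin-like input stream (illustrative only): each item is
    length-prefixed, and items are read back in write order."""

    def __init__(self):
        self.buf = io.BytesIO()

    def write(self, data: bytes):
        # Host side: 4-byte little-endian length prefix, then the payload.
        self.buf.write(struct.pack("<I", len(data)))
        self.buf.write(data)

    def reader(self):
        # Guest side: rewind and consume items sequentially.
        self.buf.seek(0)
        return self

    def read_next(self) -> bytes:
        (n,) = struct.unpack("<I", self.buf.read(4))
        return self.buf.read(n)

# Host writes inputs in sequence; guest reads them back in the same order.
stream = InputStream()
stream.write(b"proof-input-1")
stream.write((42).to_bytes(8, "little"))
r = stream.reader()
print(r.read_next())                            # b'proof-input-1'
print(int.from_bytes(r.read_next(), "little"))  # 42
```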
---------
Co-authored-by: Lucas Clemente Vella <lvella@powdrlabs.com>
This should speed up the test. If I remember correctly, with this,
compilation takes only 2-3s for Poseidon (instead of ~10min). At
runtime, it is about as "fast" as runtime witgen.
### PR: Update Powdr's `stwo` Dependency and Align Toolchain
This PR updates Powdr's `stwo` dependency to the latest version, which
now uses the `nightly-2024-12-17` Rust toolchain. To ensure
compatibility, Powdr's toolchain has also been aligned with this new
nightly version.
As part of this update:
- Several modifications were made to address stricter rules and lints
introduced by the newer version of Clippy.
- System dependencies, including `uuid-dev` and `libgrpc++-dev`, were
added to resolve build and runtime issues brought about by updated
dependencies and toolchain requirements.
This PR puts together the pieces to run compile-time witgen for block
machines. There are still many cases where it doesn't work yet, in which
case it falls back to run-time solving. These cases should be fixed in
future PRs.
It also fixes two bugs:
- When multiplying two affine expressions, the case where one of them is
zero is now handled properly.
- `WitgenInference` now handles intermediate columns.
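For context on the first bug, here is a toy model of the zero case (Python; not powdr's actual types): the product of two affine expressions is affine only if at least one factor is constant, and a zero factor should short-circuit to zero rather than being rejected as non-affine.

```python
from dataclasses import dataclass, field

@dataclass
class Affine:
    """Toy affine expression: sum of coeff * var plus a constant offset."""
    coeffs: dict = field(default_factory=dict)  # var name -> coefficient
    offset: int = 0

    def is_constant(self):
        return all(c == 0 for c in self.coeffs.values())

    def mul(self, other):
        """Multiply two affine expressions; return None if the result
        would be quadratic (both factors contain variables)."""
        # The bug-prone corner: a zero factor makes the product zero
        # even if the other factor contains variables.
        for a in (self, other):
            if a.is_constant() and a.offset == 0:
                return Affine()
        if self.is_constant():
            k = self.offset
            return Affine({v: k * c for v, c in other.coeffs.items()},
                          k * other.offset)
        if other.is_constant():
            return other.mul(self)
        return None  # not representable as an affine expression

x = Affine({"x": 1})
print(x.mul(Affine()).coeffs)          # zero factor short-circuits: {}
print(x.mul(Affine(offset=3)).coeffs)  # scaling by a constant: {'x': 3}
```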
Note that this PR could slow down witgen by attempting to compile code
once per incoming connection and input/output combination in block
machines. I think this should be negligible, though, and it means that
much of the new pipeline is already being exercised in the tests and
elsewhere.
# Benchmark results
I tested the code with different opt levels on a benchmark that computes
ca. $2^{16}$ Poseidon hashes.
## Baseline
```
== Witgen profile (393220 events)
93.0% ( 30.8s): Secondary machine 0: main_poseidon (BlockMachine)
4.1% ( 1.4s): witgen (outer code)
2.3% ( 750.8ms): Main machine (Dynamic)
0.6% ( 204.4ms): FixedLookup
0.0% ( 3.2µs): range constraint multiplicity witgen
---------------------------
==> Total: 33.109672458s
```
## JIT (opt level 1)
```
== Witgen profile (393222 events)
52.3% ( 7.7s): JIT-compilation
32.0% ( 4.7s): Secondary machine 0: main_poseidon (BlockMachine)
9.2% ( 1.3s): witgen (outer code)
5.1% ( 748.3ms): Main machine (Dynamic)
1.4% ( 213.5ms): FixedLookup
0.0% ( 417.0ns): range constraint multiplicity witgen
---------------------------
==> Total: 14.729149333s
```
## JIT (opt level 3)
```
== Witgen profile (393222 events)
94.6% ( 107.9s): JIT-compilation
3.4% ( 3.9s): Secondary machine 0: main_poseidon (BlockMachine)
1.1% ( 1.3s): witgen (outer code)
0.7% ( 746.5ms): Main machine (Dynamic)
0.2% ( 204.1ms): FixedLookup
0.0% ( 542.0ns): range constraint multiplicity witgen
---------------------------
==> Total: 114.036571291s
```
This PR:
- Makes explicit the notion that 0=stdin, 1=stdout, 2=stderr in the
QueryCallback's "FS"
- Exposes the outputs in Session
- Removes printing to stdout and stderr in the callback itself. This is
now the responsibility of the host if needed.
- Adds the Fibonacci test with stdout to CI using the write mechanism
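A toy sketch of this split of responsibilities (Python, illustrative only; not the actual QueryCallback signature): the callback only records writes per channel, and the host decides whether to print them.

```python
STDIN, STDOUT, STDERR = 0, 1, 2  # conventional file-descriptor numbers

class Outputs:
    """Toy host-side output collector (illustrative only): the callback
    appends bytes per channel instead of printing; printing is the
    host's responsibility if needed."""

    def __init__(self):
        self.channels = {STDOUT: bytearray(), STDERR: bytearray()}

    def write(self, fd, data: bytes):
        if fd not in self.channels:
            raise ValueError(f"cannot write to fd {fd}")
        self.channels[fd] += data

out = Outputs()
out.write(STDOUT, b"fibonacci(10) = 55\n")
out.write(STDERR, b"warning: toy example\n")
# The host chooses to surface stdout; nothing was printed by the callback.
print(out.channels[STDOUT].decode(), end="")
```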
The idea is that after this we should also expose the proof's publics
and add a streaming mechanism for inputs and outputs.
Related to [this PR](https://github.com/powdr-labs/powdr/pull/1898): we
need to change to a nightly toolchain to integrate stwo.
I kept the riscv-related toolchain at "nightly-2024-08-01", as it is
handled separately in the workflow, so I made the smallest change needed
to make stwo integrable for now.
Also fixes some clippy issues about the comment format in some files.
---------
Co-authored-by: chriseth <chris@ethereum.org>
We are working on the assumption that there is a race condition between
the delete command and the push-new-cache command; that would explain
why on some days we have no cache.
Fixes a bug where circuits with publics can't be verified properly. This
was caused by a difference in the circuit between when the verification
key is generated (without witnesses) and when the user requests a proof
(with witnesses).
---------
Co-authored-by: Leo Alt <leo@ethereum.org>