Compare commits

...

42 Commits

Author SHA1 Message Date
Ekaterina Broslavskaya
2071346174 Release v1.0.0 (#361) 2025-12-18 13:55:37 +03:00
Vinh Trịnh
c0769395bd feat(rln-wasm): separate rln wasm parallel package (#360) 2025-12-18 16:48:10 +07:00
Vinh Trịnh
2fc079d633 fix(ci): nightly build failed due to import paths for zerokit_utils::merkle_tree in poseidon_tree.rs file (#359) 2025-12-18 12:38:27 +07:00
Vinh Trịnh
0ebeea50fd feat(rln): extend error handling for rln module (#358)
Changes:
- Unified error types (`PoseidonError`, `HashError`, etc.) across
hashing, keygen, witness calculation, and serialization for consistent
and descriptive error handling (see the sketch below).
- Refactored tests and examples to use `unwrap()` where safe, and
limited `expect()` in library code to cases that cannot fail in
practice, with clear messages.
- Improved witness and proof generation by removing panicking code paths
and enforcing proper error propagation.
- Cleaned up outdated imports, removed unused operations in `graph.rs`,
and updated public API documentation.
- Updated C, Nim, and WASM FFI bindings with more robust serialization
and clearer error log messages.
- Added keywords to package.json and updated dependencies in
Makefile.toml and the nightly CI.
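
As a rough illustration of the unified-error pattern this commit describes, the sketch below shows library code returning a typed error instead of panicking. The enum shape, variant names, and the stand-in function are assumptions for illustration only, not the crate's actual API.

```rust
// Hypothetical sketch of the unified-error pattern; variant names and the
// function signature are illustrative, not rln's real API.
use std::fmt;

#[derive(Debug)]
pub enum HashError {
    InvalidInputLen { expected: usize, got: usize },
}

impl fmt::Display for HashError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            HashError::InvalidInputLen { expected, got } => {
                write!(f, "invalid input length: expected {expected}, got {got}")
            }
        }
    }
}

impl std::error::Error for HashError {}

// Library code propagates a typed error instead of panicking.
fn hash_to_field_stub(bytes: &[u8]) -> Result<u64, HashError> {
    let arr: [u8; 8] = bytes.try_into().map_err(|_| HashError::InvalidInputLen {
        expected: 8,
        got: bytes.len(),
    })?;
    Ok(u64::from_le_bytes(arr))
}

fn main() {
    assert!(hash_to_field_stub(&[0u8; 4]).is_err());
    assert_eq!(hash_to_field_stub(&1u64.to_le_bytes()).unwrap(), 1);
}
```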
2025-12-17 19:27:07 +07:00
Vinh Trịnh
c890bc83ad fix(ci): nightly build failed due to incorrect config flag for pm_tree_adapter (#357)
Successful builds:
https://github.com/vacp2p/zerokit/actions/runs/20063084959
https://github.com/vacp2p/zerokit/actions/runs/20063354977
2025-12-09 19:40:14 +07:00
Vinh Trịnh
77a8d28965 feat: unify RLN types, refactor public APIs, add full (de)serialization, align FFI/WASM/APIs, simplify errors, update docs/examples, and clean up zerokit (#355)
# Changes

- Unified the `RLN` struct and core protocol types across public, FFI,
and WASM so everything works consistently.
- Fully refactored `protocol.rs` and `public.rs` to clean up the API
surface and make the flow easier to work with.
- Added (de)serialization for `RLN_Proof` and `RLN_ProofValues`, and
updated all C, Nim, WASM, and Node.js examples to match.
- Aligned FFI and WASM behavior, added missing APIs, and standardized
how witnesses are created and passed around.
- Reworked the error types, added clearer verification messages, and
simplified the overall error structure.
- Updated variable names, README, Rust docs, and examples across the
repo, and fixed the outdated RLN RFC link.
- Refactored `rln-cli` to use the new public API, removed
serialize-based cli example, and dropped the `eyre` crate.
- Bumped dependencies, fixed CI, fixed the `+atomics` flags for the
latest nightly Rust, and added `clippy.toml` for better lint configuration.
- Added a `prelude.rs` file for easier use, and cleaned up public access
for types and type imports across zerokit modules (usage sketched below).
- Separated keygen, proof handling, slashing logic, and witness into
protocol folder.
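
For orientation, a minimal consumer-side sketch of the reworked API through the new prelude, using only items that the updated rln-cli relay example (in the diff further down) actually imports; anything beyond that is left out rather than guessed.

```rust
// Minimal usage sketch, assuming the prelude exports shown in the
// rln-cli diff below (keygen, IdSecret, Fr). keygen() is fallible and
// returns the dedicated IdSecret type alongside the Fr commitment.
use rln::prelude::{keygen, Fr, IdSecret};

fn main() {
    let (identity_secret, id_commitment): (IdSecret, Fr) =
        keygen().expect("keygen failed");
    // The commitment is public; the secret stays private.
    println!("id_commitment: {id_commitment:?}");
    let _ = identity_secret;
}
```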
2025-12-09 19:03:04 +07:00
Vinh Trịnh
5c73af1130 feat(wasm): rework rln-wasm and rln-wasm-utils modules, remove buffer-based serialization, and update public.rs and protocol.rs accordingly (#352) 2025-12-01 17:33:46 +07:00
Vinh Trịnh
c74ab11c82 fix(rln): resolve memory leak in calc_witness and improve FFI memory deallocation pattern (#354) 2025-11-20 17:27:52 +07:00
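The commit above names an FFI memory-deallocation pattern; the sketch below is a generic illustration of that idea (allocate in Rust, hand the buffer to the caller, reclaim it via a matching Rust-side free), not zerokit's actual FFI surface, whose types and function names differ.

```rust
// Generic illustration of the pattern: memory that crosses the C boundary
// is freed by the same (Rust) allocator that created it. Names here are
// hypothetical, not zerokit's real FFI.
#[repr(C)]
pub struct Buffer {
    pub ptr: *mut u8,
    pub len: usize,
}

#[no_mangle]
pub extern "C" fn buffer_alloc(len: usize) -> Buffer {
    let mut v = vec![0u8; len].into_boxed_slice();
    let ptr = v.as_mut_ptr();
    std::mem::forget(v); // caller owns the memory until buffer_free
    Buffer { ptr, len }
}

#[no_mangle]
pub extern "C" fn buffer_free(buf: Buffer) {
    if buf.ptr.is_null() {
        return;
    }
    // Rebuild the Box so Rust's allocator reclaims the allocation.
    unsafe {
        drop(Box::from_raw(std::ptr::slice_from_raw_parts_mut(
            buf.ptr, buf.len,
        )));
    }
}
```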
Sydhds
a52cf84f46 feat(rln): rework FFI module with new functional APIs and remove buffer-based serialization (#337)
Co-authored-by: vinhtc27 <vinhtc27@gmail.com>
Co-authored-by: seemenkina <seemenkina@gmail.com>
Co-authored-by: sydhds <sydhds@gmail.com>
2025-11-05 23:51:49 +07:00
Jakub Sokołowski
3160d9504d nix: add our own binary cache to flake
Signed-off-by: Jakub Sokołowski <jakub@status.im>
2025-11-04 00:11:44 +01:00
Jakub Sokołowski
0b30ba112f nix: bump nixpkgs to same commit as status-go
Signed-off-by: Jakub Sokołowski <jakub@status.im>
2025-11-04 00:11:41 +01:00
Sydhds
a2f9aaeeee Set ruint dependency with fewer features (#349)
Use only the required features for the ruint dependency.
2025-10-31 17:12:52 +01:00
Vinh Trịnh
a198960cf3 chore: use rust nightly-2025-09-24 until patch version release (#351)
To avoid blocking CI for other PRs:
+ https://github.com/vacp2p/zerokit/pull/337
+ https://github.com/vacp2p/zerokit/pull/346
+ https://github.com/vacp2p/zerokit/pull/349
2025-10-31 22:07:10 +07:00
Sydhds
7f6f66bb13 Update zerokit-utils to version 0.7.0 in utils/README.md file (#348) 2025-10-21 10:24:44 +02:00
Ekaterina Broslavskaya
a4bb3feb50 Release v0.9.0 (#345) 2025-09-30 15:45:02 +03:00
Ekaterina Broslavskaya
2386e8732f fix(ci): update binary name generation in CI (#344)
Clean feature naming with env vars

- Use arrays for feature sets in matrix.
- Add job-level env (FEATURES_CARGO, FEATURES_TAG, TARGET).
- Use FEATURES_TAG for artifact/file names → no more dots/commas.

Example:
`x86_64-unknown-linux-gnu-fullmerkletree.parallel-rln.tar.gz` →
`x86_64-unknown-linux-gnu-fullmerkletree-parallel-rln.tar.gz`
2025-09-30 15:18:50 +03:00
Vinh Trịnh
44c6cf3cdd fix(rln): fixed failing nightly build and updated CONTRIBUTING.md and Cargo.lock (#342) 2025-09-29 17:14:36 +07:00
0xc1c4da
eb8eedfdb4 Allow flake to be consumed, and nix build .#rln (#340)
I had been trying to consume zerokit (specifically rln on x86_64) to
build libwaku (nwaku) and was having issues; this PR at least allows a
build to occur.

```bash
$ nix flake show github:vacp2p/zerokit
error: syntax error, unexpected '=', expecting ';'
       at «github:vacp2p/zerokit/0b00c639a059a2cfde74bcf68fdf75db3b6898a4»/flake.nix:36:25:
           35|
           36|         rln-linux-arm64 = buildRln {
             |                         ^
           37|           target-platform = "aarch64-multiplatform";
```

A `Cargo.lock` is required in the repo for this to be possible; otherwise:
```bash
$ nix build .#rln --show-trace
warning: Git tree '/home/j/experiments/zerokit' is dirty
error:
       … while calling the 'derivationStrict' builtin
         at <nix/derivation-internal.nix>:37:12:
           36|
           37|   strict = derivationStrict drvAttrs;
             |            ^
           38|

       … while evaluating derivation 'zerokit-nightly'
         whose name attribute is located at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/stdenv/generic/make-derivation.nix:336:7

       … while evaluating attribute 'cargoDeps' of derivation 'zerokit-nightly'
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/build-rust-package/default.nix:157:5:
          156|   // {
          157|     cargoDeps = cargoDeps';
             |     ^
          158|     inherit buildAndTestSubdir;

       … while calling the 'getAttr' builtin
         at <nix/derivation-internal.nix>:50:17:
           49|     value = commonAttrs // {
           50|       outPath = builtins.getAttr outputName strict;
             |                 ^
           51|       drvPath = strict.drvPath;

       … while calling the 'derivationStrict' builtin
         at <nix/derivation-internal.nix>:37:12:
           36|
           37|   strict = derivationStrict drvAttrs;
             |            ^
           38|

       … while evaluating derivation 'cargo-vendor-dir'
         whose name attribute is located at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/stdenv/generic/make-derivation.nix:336:7

       … while evaluating attribute 'buildCommand' of derivation 'cargo-vendor-dir'
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/trivial-builders/default.nix:59:17:
           58|         enableParallelBuilding = true;
           59|         inherit buildCommand name;
             |                 ^
           60|         passAsFile = [ "buildCommand" ]

       … while calling the 'toString' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:264:20:
          263|
          264|     for crate in ${toString depCrates}; do
             |                    ^
          265|       # Link the crate directory, removing the output path hash from the destination.

       … while calling the 'deepSeq' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:68:15:
           67|   # being evaluated otherwise, since there could be no git dependencies.
           68|   depCrates = builtins.deepSeq gitShaOutputHash (builtins.map mkCrate depPackages);
             |               ^
           69|

       … while calling the 'map' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:68:50:
           67|   # being evaluated otherwise, since there could be no git dependencies.
           68|   depCrates = builtins.deepSeq gitShaOutputHash (builtins.map mkCrate depPackages);
             |                                                  ^
           69|

       … while calling the 'filter' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:61:17:
           60|   # safely skip it.
           61|   depPackages = builtins.filter (p: p ? "source") packages;
             |                 ^
           62|

       … while calling the 'fromTOML' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:50:20:
           49|
           50|   parsedLockFile = builtins.fromTOML lockFileContents;
             |                    ^
           51|

       … while evaluating the argument passed to builtins.fromTOML

       … while calling the 'readFile' builtin
         at /nix/store/fy7zcm8ya6p215wvrlqrl8022da6asn0-source/pkgs/build-support/rust/import-cargo-lock.nix:47:10:
           46|     if lockFile != null
           47|     then builtins.readFile lockFile
             |          ^
           48|     else args.lockFileContents;

       error: opening file '/nix/store/qh8gf0sl8znhnjwc1ksif7pwik26dsyd-source/Cargo.lock': No such file or directory
```

The PR allows for a successful build:
```bash
$ ls -R result
result:
target

result/target:
release

result/target/release:
librln.a  librln.d  librln.rlib  librln.so
```

---------

Co-authored-by: Jarrad Hope <jarrad@logos.co>
Co-authored-by: Vinh Trịnh <108657096+vinhtc27@users.noreply.github.com>
2025-09-17 14:57:06 +02:00
Vinh Trịnh
57b694db5d chore(rln-wasm): remove wasm-bindgen-cli installation (#341)
Currently, the new wasm-bindgen-cli version [causes CI to
fail](https://github.com/vacp2p/zerokit/actions/runs/17699917161/job/50313998747)
and isn't needed for the parallel feature anymore, so it's better to
remove it from the codebase.
2025-09-16 14:55:18 +07:00
Vinh Trịnh
0b00c639a0 feat(rln): improve the PmTreeConfig initialization process with builder pattern (#334) 2025-09-03 18:54:08 +07:00
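A hypothetical sketch of the builder-style initialization this commit introduces. `PmtreeConfigBuilder` does appear in the prelude import in the rln-cli diff below, but the setter names here are guessed from the fields of the removed example.config.json (path, temporary, cache_capacity, flush_every_ms), not taken from the real API.

```rust
// Hypothetical builder usage; setter names are assumptions based on the
// old JSON config fields, not the crate's actual methods.
use rln::prelude::PmtreeConfigBuilder;

fn main() {
    let config = PmtreeConfigBuilder::new()
        .path("database") // on-disk location of the tree
        .temporary(false) // persist between runs
        .cache_capacity(150_000)
        .flush_every_ms(12_000)
        .build();
    let _ = config;
}
```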
Vinh Trịnh
7c801a804e chore: remove cmake due to CI error and skip tests and benchmarks on draft pull requests (#339) 2025-09-03 15:56:09 +07:00
Joe Wanga
9da80dd807 docs: add comprehensive CONTRIBUTING.md with contributing guidelines (#331)
## Description
Adds a comprehensive CONTRIBUTING.md document that addresses all the
requirements from issue #309.

---------

Co-authored-by: Ekaterina Broslavskaya <seemenkina@gmail.com>
2025-08-19 11:56:05 +03:00
Vinh Trịnh
bcbd6a97af chore: consistent naming and update docs for merkle trees (#333) 2025-08-18 21:37:28 +07:00
Ekaterina Broslavskaya
6965cf2852 feat(rln-wasm-utils): extracting identity generation and hash functions into a separate module (#332)
- extracted all identity generation functions as standalone functions
rather than RLN methods
- added big-endian (BE) support, so far only for these functions
- covered the functions with tests, including the conversion to big endian
- prepared the crate for publication, though it awaits the initial
publication of the RLN module

@vinhtc27, please check that everything is correct from the wasm point
of view. This module does not require parallel computing, so if there
are any unnecessary dependencies, builds, etc., please let me know.

---------

Co-authored-by: vinhtc27 <vinhtc27@gmail.com>
2025-07-31 16:05:46 +03:00
Vinh Trịnh
578e0507b3 feat: add wasm parallel testcase and simplify the witness_calculator.js (#328)
- Tested the parallel feature for rln-wasm on this branch:
https://github.com/vacp2p/zerokit/tree/benchmark-v0.9.0
- Simplified the test case by using the default generated
witness_calculator.js file for both Node and browser tests
- Added a WASM parallel test case using the latest wasm-bindgen-rayon
version 1.3.0
- [Successful CI
run](https://github.com/vacp2p/zerokit/actions/runs/16570298449) with
Cargo.lock included; the build fails if the file is excluded from the codebase.
- Requires publishing new pmtree version [on this
PR](https://github.com/vacp2p/pmtree/pull/4) before merging this branch.
2025-07-30 19:18:30 +07:00
Vinh Trịnh
bf1e184da9 feat: resolve overlap between stateless and merkletree feature flags (#329)
- Resolved overlap between stateless and merkletree feature flags.
- Updated every test case related to the stateless feature.
- Added a compile-time feature check to avoid feature overlap (see the
sketch below).
- Added --no-default-features for all builds in nightly-release.yml
[(tested)](https://github.com/vacp2p/zerokit/actions/runs/16525062203).
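
A minimal sketch of what such a compile-time guard typically looks like; the exact feature pair and message in rln may differ.

```rust
// Illustrative guard: building with two mutually exclusive tree features
// fails at compile time. Feature names follow the commit's description;
// the real check in rln may be worded differently.
#[cfg(all(feature = "stateless", feature = "fullmerkletree"))]
compile_error!("features `stateless` and `fullmerkletree` are mutually exclusive");

fn main() {}
```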

---------

Co-authored-by: Ekaterina Broslavskaya <seemenkina@gmail.com>
2025-07-28 16:52:45 +07:00
Vinh Trịnh
4473688efa feat: support feature-specific binary generation and make arkzkey the default (#326)
- Integrated missing options for generating feature-specific binaries
[(tested)](https://github.com/vacp2p/zerokit/actions/runs/16408191766).
- Made arkzkey the default feature for improved consistency.
- Created a script to convert arkzkey from zkey.
- Updated nightly-release.yaml file.
- Updated documentation.
2025-07-28 15:11:41 +07:00
Vinh Trịnh
c80569d518 feat: restore parallel flag, improve CI, resolve clippy warnings, bump deps (#325) 2025-07-14 15:00:24 +07:00
Sydhds
fd99b6af74 Add pmtree delete function docstring (#324) 2025-07-10 08:25:10 +02:00
Sydhds
65f53e3da3 Initial impl for IdSecret (#320) 2025-07-08 09:48:04 +02:00
Vinh Trịnh
042f8a9739 feat: use stateless as default feature for rln in wasm module (#322) 2025-07-04 13:50:26 +07:00
Sydhds
baf474e747 Use Vec::with_capacity for bytes_le_to_vec_fr (#321) 2025-06-23 10:13:39 +02:00
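A self-contained sketch of this optimization, assuming the usual shape of such a conversion; the real `bytes_le_to_vec_fr` in zerokit's utils works over `Fr` elements, with `u64` standing in here so the example runs without the crate.

```rust
// Pre-size the output vector when the element count is known up front,
// avoiding repeated reallocation as elements are pushed. u64 stands in
// for Fr so the sketch runs without the crate.
fn bytes_le_to_vec_u64(input: &[u8]) -> Vec<u64> {
    let count = input.len() / 8;
    let mut out = Vec::with_capacity(count); // single allocation
    for chunk in input.chunks_exact(8) {
        out.push(u64::from_le_bytes(chunk.try_into().unwrap()));
    }
    out
}

fn main() {
    let bytes: Vec<u8> = (1u64..=3).flat_map(|x| x.to_le_bytes()).collect();
    assert_eq!(bytes_le_to_vec_u64(&bytes), vec![1, 2, 3]);
}
```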
Ekaterina Broslavskaya
dc0b31752c release v0.8.0 (#315) 2025-06-05 12:23:06 +03:00
Sydhds
36013bf4ba Remove non-explicit use statement (#317) 2025-06-05 10:32:43 +02:00
Sydhds
211b2d4830 Add error for return type of compute_id_secret function (#316) 2025-06-04 09:00:27 +02:00
Sydhds
5f4bcb74ce Eyre removal 2 (#311)
Co-authored-by: Ekaterina Broslavskaya <seemenkina@gmail.com>
2025-06-02 10:32:13 +02:00
Jakub Sokołowski
de5fd36add nix: add RLN targets for different platforms
Wanted to be able to build `wakucanary` without having to build `zerokit` manually.
Also adds the `release` flag which can be set to `false` for a debug build.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2025-05-29 10:30:02 +02:00
Jakub Sokołowski
19c0f551c8 nix: use rust tooling from rust-overlay for builds
Noticed the builds in `nix/default.nix` were not using the tooling
from `rust-overlay` but instead an older one from `pkgs`.

This also removes the need to compile LLVM before building Zerokit.

Signed-off-by: Jakub Sokołowski <jakub@status.im>
2025-05-29 09:56:31 +02:00
vinhtc27
4133f1f8c3 fix: bump deps, downgrade hex-literal to avoid Rust edition 2024 issue
Signed-off-by: Jakub Sokołowski <jakub@status.im>
2025-05-29 09:56:30 +02:00
markoburcul
149096f7a6 flake: add rust overlay and shell dependencies 2025-05-15 11:51:55 +02:00
Vinh Trịnh
7023e85fce Enable parallel execution for Merkle Tree (#306) 2025-05-14 12:19:37 +07:00
Vinh Trịnh
a4cafa6adc Enable parallel execution for rln-wasm module (#296)
## Changes

- Enabled parallelism in the browser for `rln-wasm` with the
`multithread` feature flag.
- Added browser tests for both single-threaded and multi-threaded modes.
- Enabled browser tests in the CI workflow.
- Pending: resolving hanging issue with `wasm-bindgen-rayon`
([comment](https://github.com/RReverser/wasm-bindgen-rayon/issues/6#issuecomment-2814372940)).
- Forked [this
commit](42887c80e6)
into a separate
[branch](https://github.com/vacp2p/zerokit/tree/benchmark-v0.8.0), which
includes an HTML benchmark file and a test case for the multithreaded
feature in `rln-wasm`.
- The test case still has the known issue above, so it's temporarily
disabled in this PR and will be addressed in the future.
- Improved `make installdeps`, resolving the issue of NVM not enabling
Node.js in the current terminal session.
- Reduced the build size of the `.wasm` blob using the `wasm-opt` tool
from [Binaryen](https://github.com/WebAssembly/binaryen).
- Maybe we can close this draft
[PR](https://github.com/vacp2p/zerokit/pull/226), which is already very
outdated?
2025-05-13 13:15:05 +07:00
115 changed files with 15673 additions and 9691 deletions

View File

@@ -10,6 +10,7 @@ on:
- "!rln/resources/**"
- "!utils/src/**"
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths-ignore:
- "**.md"
- "!.github/workflows/*.yml"
@@ -18,58 +19,54 @@ on:
- "!rln/resources/**"
- "!utils/src/**"
name: Tests
name: CI
jobs:
utils-test:
# skip tests on draft PRs
if: github.event_name == 'push' || (github.event_name == 'pull_request' && !github.event.pull_request.draft)
strategy:
matrix:
platform: [ ubuntu-latest, macos-latest ]
crate: [ utils ]
platform: [ubuntu-latest, macos-latest]
crate: [utils]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: test - ${{ matrix.crate }} - ${{ matrix.platform }}
name: Test - ${{ matrix.crate }} - ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: cargo-make test
- name: Test utils
run: |
cargo make test --release
working-directory: ${{ matrix.crate }}
rln-test:
# skip tests on draft PRs
if: github.event_name == 'push' || (github.event_name == 'pull_request' && !github.event.pull_request.draft)
strategy:
matrix:
platform: [ ubuntu-latest, macos-latest ]
crate: [ rln ]
feature: [ "default", "arkzkey", "stateless" ]
platform: [ubuntu-latest, macos-latest]
crate: [rln]
feature: ["default", "stateless"]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: test - ${{ matrix.crate }} - ${{ matrix.platform }} - ${{ matrix.feature }}
name: Test - ${{ matrix.crate }} - ${{ matrix.platform }} - ${{ matrix.feature }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: cargo-make test
- name: Test rln
run: |
if [ ${{ matrix.feature }} == default ]; then
cargo make test --release
@@ -78,91 +75,123 @@ jobs:
fi
working-directory: ${{ matrix.crate }}
rln-wasm:
rln-wasm-test:
# skip tests on draft PRs
if: github.event_name == 'push' || (github.event_name == 'pull_request' && !github.event.pull_request.draft)
strategy:
matrix:
platform: [ ubuntu-latest, macos-latest ]
feature: [ "default", "arkzkey" ]
platform: [ubuntu-latest, macos-latest]
crate: [rln-wasm]
feature: ["default"]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: test - rln-wasm - ${{ matrix.platform }} - ${{ matrix.feature }}
name: Test - ${{ matrix.crate }} - ${{ matrix.platform }} - ${{ matrix.feature }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Install Dependencies
- name: Install dependencies
run: make installdeps
- name: cargo-make build
run: |
if [ ${{ matrix.feature }} == default ]; then
cargo make build
else
cargo make build_${{ matrix.feature }}
fi
working-directory: rln-wasm
- name: cargo-make test
run: |
if [ ${{ matrix.feature }} == default ]; then
cargo make test --release
else
cargo make test_${{ matrix.feature }} --release
fi
working-directory: rln-wasm
- name: Build rln-wasm
run: cargo make build
working-directory: ${{ matrix.crate }}
- name: Test rln-wasm on node
run: cargo make test --release
working-directory: ${{ matrix.crate }}
- name: Test rln-wasm on browser
run: cargo make test_browser --release
working-directory: ${{ matrix.crate }}
rln-wasm-parallel-test:
# skip tests on draft PRs
if: github.event_name == 'push' || (github.event_name == 'pull_request' && !github.event.pull_request.draft)
strategy:
matrix:
platform: [ubuntu-latest, macos-latest]
crate: [rln-wasm]
feature: ["parallel"]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: Test - ${{ matrix.crate }} - ${{ matrix.platform }} - ${{ matrix.feature }}
steps:
- uses: actions/checkout@v4
- name: Install nightly toolchain
uses: dtolnay/rust-toolchain@nightly
with:
components: rust-src
targets: wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: Build rln-wasm in parallel mode
run: cargo make build_parallel
working-directory: ${{ matrix.crate }}
- name: Test rln-wasm in parallel mode on browser
run: cargo make test_parallel --release
working-directory: ${{ matrix.crate }}
lint:
# run on both ready and draft PRs
if: github.event_name == 'push' || (github.event_name == 'pull_request' && !github.event.pull_request.draft)
strategy:
matrix:
# we run lint tests only on ubuntu
platform: [ ubuntu-latest ]
crate: [ rln, rln-wasm, utils ]
# run lint tests only on ubuntu
platform: [ubuntu-latest]
crate: [rln, rln-wasm, utils]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: lint - ${{ matrix.crate }} - ${{ matrix.platform }}
name: Lint - ${{ matrix.crate }} - ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
uses: dtolnay/rust-toolchain@stable
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy
- name: Install wasm32 target
if: matrix.crate == 'rln-wasm'
run: rustup target add wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v2
- name: Install Dependencies
- name: Install dependencies
run: make installdeps
- name: cargo fmt
- name: Check formatting
if: success() || failure()
run: cargo fmt -- --check
working-directory: ${{ matrix.crate }}
- name: cargo clippy
if: success() || failure()
- name: Check clippy wasm target
if: (success() || failure()) && (matrix.crate == 'rln-wasm')
run: |
cargo clippy --release
cargo clippy --target wasm32-unknown-unknown --tests --release -- -D warnings
working-directory: ${{ matrix.crate }}
- name: Check clippy default feature
if: (success() || failure()) && (matrix.crate != 'rln-wasm')
run: |
cargo clippy --all-targets --tests --release -- -D warnings
- name: Check clippy stateless feature
if: (success() || failure()) && (matrix.crate == 'rln')
run: |
cargo clippy --all-targets --tests --release --features=stateless --no-default-features -- -D warnings
working-directory: ${{ matrix.crate }}
benchmark-utils:
# run only in pull requests
if: github.event_name == 'pull_request'
# run only on ready PRs
if: github.event_name == 'pull_request' && !github.event.pull_request.draft
strategy:
matrix:
# we run benchmark tests only on ubuntu
platform: [ ubuntu-latest ]
crate: [ utils ]
# run benchmark tests only on ubuntu
platform: [ubuntu-latest]
crate: [utils]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: benchmark - ${{ matrix.platform }} - ${{ matrix.crate }}
name: Benchmark - ${{ matrix.crate }} - ${{ matrix.platform }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
- uses: boa-dev/criterion-compare-action@v3
with:
@@ -170,21 +199,21 @@ jobs:
cwd: ${{ matrix.crate }}
benchmark-rln:
# run only in pull requests
if: github.event_name == 'pull_request'
# run only on ready PRs
if: github.event_name == 'pull_request' && !github.event.pull_request.draft
strategy:
matrix:
# we run benchmark tests only on ubuntu
platform: [ ubuntu-latest ]
crate: [ rln ]
feature: [ "default", "arkzkey" ]
# run benchmark tests only on ubuntu
platform: [ubuntu-latest]
crate: [rln]
feature: ["default"]
runs-on: ${{ matrix.platform }}
timeout-minutes: 60
name: benchmark - ${{ matrix.platform }} - ${{ matrix.crate }} - ${{ matrix.feature }}
name: Benchmark - ${{ matrix.crate }} - ${{ matrix.platform }} - ${{ matrix.feature }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- uses: Swatinem/rust-cache@v2
- uses: boa-dev/criterion-compare-action@v3
with:

View File

@@ -6,43 +6,45 @@ on:
jobs:
linux:
strategy:
matrix:
feature: [ "default", "arkzkey", "stateless" ]
target:
- x86_64-unknown-linux-gnu
- aarch64-unknown-linux-gnu
# - i686-unknown-linux-gnu
include:
- feature: stateless
cargo_args: --exclude rln-cli
name: Linux build
runs-on: ubuntu-latest
strategy:
matrix:
features:
- ["stateless"]
- ["stateless", "parallel"]
- ["pmtree-ft"]
- ["pmtree-ft", "parallel"]
- ["fullmerkletree"]
- ["fullmerkletree", "parallel"]
- ["optimalmerkletree"]
- ["optimalmerkletree", "parallel"]
target: [x86_64-unknown-linux-gnu, aarch64-unknown-linux-gnu]
env:
FEATURES_CARGO: ${{ join(matrix.features, ',') }}
FEATURES_TAG: ${{ join(matrix.features, '-') }}
TARGET: ${{ matrix.target }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
uses: dtolnay/rust-toolchain@stable
with:
profile: minimal
toolchain: stable
override: true
target: ${{ matrix.target }}
target: ${{ env.TARGET }}
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: cross build
- name: Cross build
run: |
cross build --release --target ${{ matrix.target }} --features ${{ matrix.feature }} --workspace ${{ matrix.cargo_args }}
cross build --release --target $TARGET --no-default-features --features "$FEATURES_CARGO" --workspace
mkdir release
cp target/${{ matrix.target }}/release/librln* release/
tar -czvf ${{ matrix.target }}-${{ matrix.feature }}-rln.tar.gz release/
cp target/$TARGET/release/librln* release/
tar -czvf $TARGET-$FEATURES_TAG-rln.tar.gz release/
- name: Upload archive artifact
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}-${{ matrix.feature }}-archive
path: ${{ matrix.target }}-${{ matrix.feature }}-rln.tar.gz
name: ${{ env.TARGET }}-${{ env.FEATURES_TAG }}-archive
path: ${{ env.TARGET }}-${{ env.FEATURES_TAG }}-rln.tar.gz
retention-days: 2
macos:
@@ -50,82 +52,107 @@ jobs:
runs-on: macos-latest
strategy:
matrix:
feature: [ "default", "arkzkey", "stateless" ]
target:
- x86_64-apple-darwin
- aarch64-apple-darwin
include:
- feature: stateless
cargo_args: --exclude rln-cli
features:
- ["stateless"]
- ["stateless", "parallel"]
- ["pmtree-ft"]
- ["pmtree-ft", "parallel"]
- ["fullmerkletree"]
- ["fullmerkletree", "parallel"]
- ["optimalmerkletree"]
- ["optimalmerkletree", "parallel"]
target: [x86_64-apple-darwin, aarch64-apple-darwin]
env:
FEATURES_CARGO: ${{ join(matrix.features, ',') }}
FEATURES_TAG: ${{ join(matrix.features, '-') }}
TARGET: ${{ matrix.target }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
uses: dtolnay/rust-toolchain@stable
with:
profile: minimal
toolchain: stable
override: true
target: ${{ matrix.target }}
target: ${{ env.TARGET }}
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: cross build
- name: Cross build
run: |
cross build --release --target ${{ matrix.target }} --features ${{ matrix.feature }} --workspace ${{ matrix.cargo_args }}
cross build --release --target $TARGET --no-default-features --features "$FEATURES_CARGO" --workspace
mkdir release
cp target/${{ matrix.target }}/release/librln* release/
tar -czvf ${{ matrix.target }}-${{ matrix.feature }}-rln.tar.gz release/
cp target/$TARGET/release/librln* release/
tar -czvf $TARGET-$FEATURES_TAG-rln.tar.gz release/
- name: Upload archive artifact
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target }}-${{ matrix.feature }}-archive
path: ${{ matrix.target }}-${{ matrix.feature }}-rln.tar.gz
name: ${{ env.TARGET }}-${{ env.FEATURES_TAG }}-archive
path: ${{ env.TARGET }}-${{ env.FEATURES_TAG }}-rln.tar.gz
retention-days: 2
browser-rln-wasm:
name: Browser build (RLN WASM)
rln-wasm:
name: Build rln-wasm
runs-on: ubuntu-latest
strategy:
matrix:
feature:
- "default"
- "parallel"
- "utils"
steps:
- name: Checkout sources
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install stable toolchain
uses: actions-rs/toolchain@v1
uses: dtolnay/rust-toolchain@stable
with:
profile: minimal
toolchain: stable
override: true
targets: wasm32-unknown-unknown
- name: Install nightly toolchain
uses: dtolnay/rust-toolchain@nightly
with:
components: rust-src
targets: wasm32-unknown-unknown
- uses: Swatinem/rust-cache@v2
- name: Install dependencies
run: make installdeps
- name: cross make build
- name: Build rln-wasm package
run: |
cross make build
mkdir release
cp pkg/** release/
tar -czvf browser-rln-wasm.tar.gz release/
working-directory: rln-wasm
if [[ ${{ matrix.feature }} == "parallel" ]]; then
env CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS="-C target-feature=+atomics,+bulk-memory,+mutable-globals -C link-arg=--shared-memory -C link-arg=--max-memory=1073741824 -C link-arg=--import-memory -C link-arg=--export=__wasm_init_tls -C link-arg=--export=__tls_size -C link-arg=--export=__tls_align -C link-arg=--export=__tls_base" \
rustup run nightly wasm-pack build --release --target web --scope waku \
--features parallel -Z build-std=panic_abort,std
sed -i.bak 's/rln-wasm/zerokit-rln-wasm-parallel/g' pkg/package.json && rm pkg/package.json.bak
elif [[ ${{ matrix.feature }} == "utils" ]]; then
wasm-pack build --release --target web --scope waku --no-default-features --features utils
sed -i.bak 's/rln-wasm/zerokit-rln-wasm-utils/g' pkg/package.json && rm pkg/package.json.bak
else
wasm-pack build --release --target web --scope waku
sed -i.bak 's/rln-wasm/zerokit-rln-wasm/g' pkg/package.json && rm pkg/package.json.bak
fi
jq '. + {keywords: ["zerokit", "rln", "wasm"]}' pkg/package.json > pkg/package.json.tmp && \
mv pkg/package.json.tmp pkg/package.json
mkdir release
cp -r pkg/* release/
tar -czvf rln-wasm-${{ matrix.feature }}.tar.gz release/
working-directory: rln-wasm
- name: Upload archive artifact
uses: actions/upload-artifact@v4
with:
name: browser-rln-wasm-archive
path: rln-wasm/browser-rln-wasm.tar.gz
name: rln-wasm-${{ matrix.feature }}-archive
path: rln-wasm/rln-wasm-${{ matrix.feature }}.tar.gz
retention-days: 2
prepare-prerelease:
name: Prepare pre-release
needs: [ linux, macos, browser-rln-wasm ]
needs: [linux, macos, rln-wasm]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
ref: master
- name: Download artifacts
uses: actions/download-artifact@v4
- name: Delete tag
uses: dev-drprasad/delete-tag-and-release@v0.2.1
with:
@@ -133,7 +160,6 @@ jobs:
tag_name: nightly
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create prerelease
run: |
start_tag=$(gh release list -L 2 --exclude-drafts | grep -v nightly | cut -d$'\t' -f3 | sed -n '1p')
@@ -145,7 +171,6 @@ jobs:
*-archive/*.tar.gz \
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Delete artifacts
uses: geekyeggo/delete-artifact@v5
with:

View File

@@ -9,7 +9,7 @@ jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: micnncim/action-label-syncer@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore (vendored, 23 lines changed)
View File

@@ -1,20 +1,29 @@
# Common files to ignore in Rust projects
.DS_Store
.idea
*.log
tmp/
rln/pmtree_db
rln-cli/database
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Generated by Cargo will have compiled files and executables
/target
# Generated by Nix
result/
result
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
# FFI C examples
rln/ffi_c_examples/main
rln/ffi_c_examples/rln.h
rln/ffi_c_examples/database
# FFI Nim examples
rln/ffi_nim_examples/main
rln/ffi_nim_examples/database
# Vscode
.vscode

View File

@@ -1,29 +0,0 @@
# CHANGE LOG
## 2023-02-28 v0.2
This release contains:
- Improved code quality
- Allows consumers of zerokit RLN to set leaves to the Merkle Tree from an arbitrary index. Useful for batching updates to the Merkle Tree.
- Improved performance for proof generation and verification
- rln_wasm which allows for the consumption of RLN through a WebAssembly interface
- Refactored to generate Semaphore-compatible credentials
- Dual License under Apache 2.0 and MIT
- RLN compiles as a static library, which can be consumed through a C FFI
## 2022-09-19 v0.1
Initial beta release.
This release contains:
- RLN Module with API to manage, compute and verify [RLN](https://rfc.vac.dev/spec/32/) zkSNARK proofs and RLN primitives.
- This can be consumed either as a Rust API or as a C FFI. The latter means it can be easily consumed through other environments, such as [Go](https://github.com/status-im/go-zerokit-rln/blob/master/rln/librln.h) or [Nim](https://github.com/status-im/nwaku/blob/4745c7872c69b5fd5c6ddab36df9c5c3d55f57c3/waku/v2/protocol/waku_rln_relay/waku_rln_relay_types.nim).
It also contains the following examples and experiments:
- Basic [example wrapper](https://github.com/vacp2p/zerokit/tree/master/multiplier) around a simple Circom circuit to show Circom integration through ark-circom and FFI.
- Experimental [Semaphore wrapper](https://github.com/vacp2p/zerokit/tree/master/semaphore).
Feedback welcome! You can either [open an issue](https://github.com/vacp2p/zerokit/issues) or come talk to us in our [Vac Discord](https://discord.gg/PQFdubGt6d) #zerokit channel.

CONTRIBUTING.md (new file, 197 lines)
View File

@@ -0,0 +1,197 @@
# Contributing to Zerokit
Thank you for your interest in contributing to Zerokit!
This guide will discuss how the Zerokit team handles [Commits](#commits),
[Pull Requests](#pull-requests) and [Merging](#merging).
**Note:** We won't force external contributors to follow this verbatim.
Following these guidelines definitely helps us in accepting your contributions.
## Getting Started
1. Fork the repository
2. Create a feature branch: `git checkout -b fix/your-bug-fix` or `git checkout -b feat/your-feature-name`
3. Make your changes following our guidelines
4. Ensure relevant tests pass (see [testing guidelines](#building-and-testing))
5. Commit your changes (signed commits are highly encouraged - see [commit guidelines](#commits))
6. Push and create a Pull Request
## Development Setup
### Prerequisites
Install the required dependencies:
```bash
make installdeps
```
Or use Nix:
```bash
nix develop
```
### Building and Testing
```bash
# Build all crates
make build
# Run standard tests
make test
# Module-specific testing
cd rln && cargo make test_stateless # Test stateless features
cd rln-wasm && cargo make test_browser # Test in browser headless mode
cd rln-wasm && cargo make test_parallel # Test parallel features
```
### Tools
We recommend using the [markdownlint extension](https://marketplace.visualstudio.com/items?itemName=DavidAnson.vscode-markdownlint)
for VS Code to maintain consistent documentation formatting.
## Commits
We want to keep our commits small and focused.
This allows for easily reviewing individual commits and/or
splitting up pull requests when they grow too big.
Additionally, this allows us to merge smaller changes quicker and release more often.
**All commits must be GPG signed.**
This ensures the authenticity and integrity of contributions.
### Conventional Commits
When making the commit, write the commit message
following the [Conventional Commits (v1.0.0)](https://www.conventionalcommits.org/en/v1.0.0/) specification.
Following this convention allows us to provide an automated release process
that also generates a detailed Changelog.
As described by the specification, our commit messages should be written as:
```markdown
<type>[optional scope]: <description>
[optional body]
[optional footer(s)]
```
Some examples of this pattern include:
```markdown
feat(rln): add parallel witness calculation support
```
```markdown
fix(rln-wasm): resolve memory leak in browser threading
```
```markdown
docs: update RLN protocol flow documentation
```
#### Scopes
Use scopes to improve the Changelog:
- `rln` - Core RLN implementation
- `rln-cli` - Command-line interface
- `rln-wasm` - WebAssembly bindings
- `utils` - Cryptographic utilities (Merkle trees, Poseidon hash)
- `ci` - Continuous integration
#### Breaking Changes
Mark breaking changes by adding `!` after the type:
```markdown
feat(rln)!: change proof generation API
```
## Pull Requests
Before creating a pull request, search for related issues.
If none exist, create an issue describing the problem you're solving.
### CI Flow
Our continuous integration automatically runs when you create a Pull Request:
- **Build verification**: All crates compile successfully
- **Test execution**: Comprehensive testing across all modules and feature combinations
- **Code formatting**: `cargo fmt` compliance
- **Linting**: `cargo clippy` checks
- **Cross-platform builds**: Testing on multiple platforms
Ensure the following commands pass before submitting:
```bash
# Format code
cargo fmt --all
# Check for common mistakes
cargo clippy --all-targets
# Run all tests
make test
```
### Adding Tests
Include tests for new functionality:
- **Unit tests** for specific functions
- **Integration tests** for broader functionality
- **WASM tests** for browser compatibility
### Typos and Small Changes
For minor fixes like typos, please report them as issues instead of opening PRs.
This helps us manage resources effectively and ensures meaningful contributions.
## Merging
We use "squash merging" for all pull requests.
This combines all commits into one commit, so keep pull requests small and focused.
### Requirements
- CI checks must pass
- At least one maintainer review and approval
- All review feedback addressed
### Squash Guidelines
When squashing, update the commit title to be a proper Conventional Commit and
include any other relevant commits in the body:
```markdown
feat(rln): implement parallel witness calculation (#123)
fix(tests): resolve memory leak in test suite
chore(ci): update rust toolchain version
```
## Roadmap Alignment
Please refer to our [project roadmap](https://roadmap.vac.dev/) for current development priorities.
Consider how your changes align with these strategic goals when contributing.
## Getting Help
- **Issues**: Create a GitHub issue for bugs or feature requests
- **Discussions**: Use GitHub Discussions for questions
- **Documentation**: Check existing docs and unit tests for examples
## License
By contributing to Zerokit, you agree that your contributions will be licensed under both MIT and
Apache 2.0 licenses, consistent with the project's dual licensing.
## Additional Resources
- [Conventional Commits Guide](https://www.conventionalcommits.org/en/v1.0.0/)
- [Project GitHub Repository](https://github.com/vacp2p/zerokit)

Cargo.lock (generated, 1603 lines changed)

File diff suppressed because it is too large.

View File

@@ -1,10 +1,10 @@
[workspace]
members = ["rln", "rln-cli", "rln-wasm", "utils"]
default-members = ["rln", "rln-cli", "rln-wasm", "utils"]
members = ["rln", "utils"]
exclude = ["rln-cli", "rln-wasm"]
resolver = "2"
# Compilation profile for any non-workspace member.
# Dependencies are optimized, even in a dev build. This improves dev performance
# while having neglible impact on incremental build times.
# Dependencies are optimized, even in a dev build.
# This improves dev performance while having negligible impact on incremental build times.
[profile.dev.package."*"]
opt-level = 3

View File

@@ -1,6 +1,6 @@
.PHONY: all installdeps build test bench clean
all: .pre-build build
all: installdeps build
.fetch-submodules:
@git submodule update --init --recursive
@@ -13,29 +13,26 @@ endif
installdeps: .pre-build
ifeq ($(shell uname),Darwin)
@brew update
@brew install cmake ninja
@brew install ninja
else ifeq ($(shell uname),Linux)
@sudo apt-get update
@sudo apt-get install -y cmake ninja-build
endif
@if [ ! -d "$$HOME/.nvm" ]; then \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash; \
@if [ -f /etc/os-release ] && grep -q "ID=nixos" /etc/os-release; then \
echo "Detected NixOS, skipping apt installation."; \
else \
sudo apt update; \
sudo apt install -y cmake ninja-build; \
fi
@bash -c 'export NVM_DIR="$$HOME/.nvm" && \
[ -s "$$NVM_DIR/nvm.sh" ] && \. "$$NVM_DIR/nvm.sh" && \
nvm install 22.14.0 && \
nvm use 22.14.0'
@curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
@echo "\033[1;32m>>> Now run this command to activate Node.js 22.14.0: \033[1;33msource $$HOME/.nvm/nvm.sh && nvm use 22.14.0\033[0m"
endif
@which wasm-pack > /dev/null && wasm-pack --version | grep -q "0.13.1" || cargo install wasm-pack --version=0.13.1
@test -s "$$HOME/.nvm/nvm.sh" || curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash
@bash -c '. "$$HOME/.nvm/nvm.sh"; [ "$$(node -v 2>/dev/null)" = "v22.14.0" ] || nvm install 22.14.0; nvm use 22.14.0; nvm alias default 22.14.0'
build: .pre-build
build: installdeps
@cargo make build
test: .pre-build
test: build
@cargo make test
bench: .pre-build
bench: build
@cargo make bench
clean:

View File

@@ -12,31 +12,37 @@ A collection of Zero Knowledge modules written in Rust and designed to be used i
Zerokit provides zero-knowledge cryptographic primitives with a focus on performance, security, and usability.
The current focus is on Rate-Limiting Nullifier [RLN](https://github.com/Rate-Limiting-Nullifier) implementation.
Current implementation is based on the following [specification](https://github.com/vacp2p/rfc-index/blob/main/vac/raw/rln-v2.md)
Current implementation is based on the following
[specification](https://rfc.vac.dev/vac/raw/rln-v2)
and focused on RLNv2 which allows to set a rate limit for the number of messages that can be sent by a user.
## Features
- **RLN Implementation**: Efficient Rate-Limiting Nullifier using zkSNARKs
- **RLN Implementation**: Efficient Rate-Limiting Nullifier using zkSNARK
- **Circom Compatibility**: Uses Circom-based circuits for RLN
- **Cross-Platform**: Support for multiple architectures (see compatibility note below)
- **Cross-Platform**: Support for multiple architectures with cross-compilation
- **FFI-Friendly**: Easy to integrate with other languages
- **WASM Support**: Can be compiled to WebAssembly for web applications
## Architecture
Zerokit currently focuses on RLN (Rate-Limiting Nullifier) implementation using [Circom](https://iden3.io/circom) circuits through ark-circom, providing an alternative to existing native Rust implementations.
Zerokit currently focuses on RLN (Rate-Limiting Nullifier) implementation using [Circom](https://iden3.io/circom)
circuits through ark-circom, providing an alternative to existing native Rust implementations.
## Build and Test
> [!IMPORTANT]
> For WASM support or x32 architecture builds, use version `0.6.1`. The current version has dependency issues for these platforms. WASM support will return in a future release.
### Install Dependencies
```bash
make installdeps
```
#### Use Nix to install dependencies
```bash
nix develop
```
### Build and Test All Crates
```bash
@@ -69,8 +75,8 @@ The execution graph file used by this code has been generated by means of the sa
> [!IMPORTANT]
> The circom-witnesscalc code fragments have been borrowed instead of depending on this crate,
because its types of input and output data were incompatible with the corresponding zerokit code fragments,
and circom-witnesscalc has some dependencies, which are redundant for our purpose.
> because its types of input and output data were incompatible with the corresponding zerokit code fragments,
> and circom-witnesscalc has some dependencies, which are redundant for our purpose.
## Documentation

flake.lock (generated, 31 lines changed)
View File

@@ -2,23 +2,44 @@
"nodes": {
"nixpkgs": {
"locked": {
"lastModified": 1740603184,
"narHash": "sha256-t+VaahjQAWyA+Ctn2idyo1yxRIYpaDxMgHkgCNiMJa4=",
"lastModified": 1757590060,
"narHash": "sha256-EWwwdKLMZALkgHFyKW7rmyhxECO74+N+ZO5xTDnY/5c=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49",
"rev": "0ef228213045d2cdb5a169a95d63ded38670b293",
"type": "github"
},
"original": {
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "f44bd8ca21e026135061a0a57dcf3d0775b67a49",
"rev": "0ef228213045d2cdb5a169a95d63ded38670b293",
"type": "github"
}
},
"root": {
"inputs": {
"nixpkgs": "nixpkgs"
"nixpkgs": "nixpkgs",
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1748399823,
"narHash": "sha256-kahD8D5hOXOsGbNdoLLnqCL887cjHkx98Izc37nDjlA=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "d68a69dc71bc19beb3479800392112c2f6218159",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
}
},

View File

@@ -1,12 +1,21 @@
{
description = "A flake for building zerokit";
inputs = {
# Version 24.11
nixpkgs.url = "github:NixOS/nixpkgs?rev=f44bd8ca21e026135061a0a57dcf3d0775b67a49";
nixConfig = {
extra-substituters = [ "https://nix-cache.status.im/" ];
extra-trusted-public-keys = [ "nix-cache.status.im-1:x/93lOfLU+duPplwMSBR+OlY4+mo+dCN7n0mr4oPwgY=" ];
};
outputs = { self, nixpkgs }:
inputs = {
# Version 24.11
nixpkgs.url = "github:NixOS/nixpkgs?rev=0ef228213045d2cdb5a169a95d63ded38670b293";
rust-overlay = {
url = "github:oxalica/rust-overlay";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs = { self, nixpkgs, rust-overlay }:
let
stableSystems = [
"x86_64-linux" "aarch64-linux"
@@ -15,22 +24,58 @@
"i686-windows"
];
forAllSystems = nixpkgs.lib.genAttrs stableSystems;
pkgsFor = forAllSystems (system: import nixpkgs { inherit system; });
pkgsFor = forAllSystems (
system: import nixpkgs {
inherit system;
config = {
android_sdk.accept_license = true;
allowUnfree = true;
};
overlays = [
(import rust-overlay)
(f: p: { inherit rust-overlay; })
];
}
);
in rec
{
packages = forAllSystems (system: let
pkgs = pkgsFor.${system};
buildPackage = pkgs.callPackage ./nix/default.nix;
buildRln = (buildPackage { src = self; project = "rln"; }).override;
in rec {
zerokit-android-arm64 = pkgs.callPackage ./nix/default.nix { target-platform="aarch64-android-prebuilt"; rust-target= "aarch64-linux-android"; };
default = zerokit-android-arm64;
rln = buildRln { };
rln-linux-arm64 = buildRln {
target-platform = "aarch64-multiplatform";
rust-target = "aarch64-unknown-linux-gnu";
};
rln-android-arm64 = buildRln {
target-platform = "aarch64-android-prebuilt";
rust-target = "aarch64-linux-android";
};
rln-ios-arm64 = buildRln {
target-platform = "aarch64-darwin";
rust-target = "aarch64-apple-ios";
};
# TODO: Remove legacy name for RLN android library
zerokit-android-arm64 = rln-android-arm64;
default = rln;
});
devShells = forAllSystems (system: let
pkgs = pkgsFor.${system};
in {
default = pkgs.mkShell {
inputsFrom = [
packages.${system}.default
buildInputs = with pkgs; [
git cmake cargo-make rustup
binaryen ninja gnuplot
rust-bin.stable.latest.default
];
};
});

View File

@@ -1,33 +1,62 @@
{
pkgs,
target-platform ? "aarch64-android-prebuilt",
rust-target ? "aarch64-linux-android",
rust-overlay,
project,
src ? ../.,
release ? true,
target-platform ? null,
rust-target ? null,
features ? null,
}:
pkgs.pkgsCross.${target-platform}.rustPlatform.buildRustPackage {
pname = "zerokit";
version = "nightly";
let
# Use cross-compilation if target-platform is specified.
targetPlatformPkgs = if target-platform != null
then pkgs.pkgsCross.${target-platform}
else pkgs;
src = ../.;
rust-bin = rust-overlay.lib.mkRustBin { } targetPlatformPkgs.buildPackages;
# Use Rust and Cargo versions from rust-overlay.
rustPlatform = targetPlatformPkgs.makeRustPlatform {
cargo = rust-bin.stable.latest.minimal;
rustc = rust-bin.stable.latest.minimal;
};
in rustPlatform.buildRustPackage {
pname = "zerokit";
version = if src ? rev then src.rev else "nightly";
# Improve caching of sources
src = builtins.path { path = src; name = "zerokit"; };
cargoLock = {
lockFile = ../Cargo.lock;
lockFile = src + "/Cargo.lock";
allowBuiltinFetchGit = true;
};
nativeBuildInputs = [ pkgs.rust-cbindgen ];
doCheck = false;
CARGO_HOME = "/tmp";
buildPhase = ''
pushd rln
cargo rustc --crate-type=cdylib --release --lib --target=${rust-target}
popd
cargo build --lib \
${if release then "--release" else ""} \
${if rust-target != null then "--target=${rust-target}" else ""} \
${if features != null then "--features=${features}" else ""} \
--manifest-path ${project}/Cargo.toml
'';
installPhase = ''
mkdir -p $out/
cp ./target/${rust-target}/release/librln.so $out/
set -eu
mkdir -p $out/lib
find target -type f -name 'librln.*' -not -path '*/deps/*' -exec cp -v '{}' "$out/lib/" \;
mkdir -p $out/include
cbindgen ${src}/rln -l c > "$out/include/rln.h"
'';
meta = with pkgs.lib; {
description = "Zerokit";
license = licenses.mit;

rln-cli/.gitignore (new vendored file, 20 lines)
View File

@@ -0,0 +1,20 @@
# Common files to ignore in Rust projects
.DS_Store
.idea
*.log
tmp/
# Generated by Cargo will have compiled files and executables
/target
# Generated by rln-cli
/database
# Generated by Nix
result
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

rln-cli/Cargo.lock (new generated file, 1647 lines)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
[package]
name = "rln-cli"
version = "0.4.0"
version = "0.5.0"
edition = "2021"
[[example]]
@@ -13,15 +13,15 @@ path = "src/examples/stateless.rs"
required-features = ["stateless"]
[dependencies]
rln = { path = "../rln", default-features = false }
zerokit_utils = { path = "../utils" }
clap = { version = "4.5.35", features = ["cargo", "derive", "env"] }
clap_derive = { version = "4.5.32" }
color-eyre = "0.6.3"
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
rln = { path = "../rln", version = "1.0.0", default-features = false }
zerokit_utils = { path = "../utils", version = "1.0.0", default-features = false }
clap = { version = "4.5.53", features = ["cargo", "derive", "env"] }
serde_json = "1.0.145"
serde = { version = "1.0.228", features = ["derive"] }
[features]
default = []
arkzkey = ["rln/arkzkey"]
stateless = ["rln/stateless"]
default = ["rln/pmtree-ft", "rln/parallel"]
stateless = ["rln/stateless", "rln/parallel"]
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,45 +1,10 @@
# Zerokit RLN-CLI
The Zerokit RLN-CLI provides a command-line interface for interacting with the public API of the [Zerokit RLN Module](../rln/README.md).
It also contain:
+ [Relay Example](#relay-example) to demonstrate the use of the RLN module for spam prevention.
+ [Stateless Example](#stateless-example) to demonstrate the use of the RLN module for stateless features.
## Configuration
The CLI can be configured using a JSON configuration file (see the [example](example.config.json)).
You can specify the configuration file path using the `RLN_CONFIG_PATH` environment variable:
```bash
export RLN_CONFIG_PATH=example.config.json
```
Alternatively, you can provide the configuration file path as an argument for each command:
```bash
RLN_CONFIG_PATH=example.config.json cargo run -- <SUBCOMMAND> [OPTIONS]
```
If the configuration file is empty, default settings will be used, but the tree data folder will be temporary and not saved to the preconfigured path.
We recommend using the example config, as all commands (except `new` and `create-with-params`) require an initialized RLN instance.
## Feature Flags
The CLI supports optional features. To enable the **arkzkey** feature, run:
```bash
cargo run --features arkzkey -- <SUBCOMMAND> [OPTIONS]
```
For more details, refer to the [Zerokit RLN Module](../rln/README.md) documentation.
The Zerokit RLN-CLI provides a command-line interface examples on how to use public API of the [Zerokit RLN Module](../rln/README.md).
## Relay Example
The following [Example](src/examples/relay.rs) demonstrates how RLN enables spam prevention in anonymous environments for multple users.
The following [Relay Example](src/examples/relay.rs) demonstrates how RLN enables spam prevention in anonymous environments for multple users.
You can run the example using the following command:
@@ -47,124 +12,18 @@ You can run the example using the following command:
cargo run --example relay
```
or with the **arkzkey** feature flag:
You can also change **MESSAGE_LIMIT** and **TREE_DEPTH** in the [relay.rs](src/examples/relay.rs) file to see how the RLN instance behaves with different parameters.
```bash
cargo run --example relay --features arkzkey
```
You can also change **MESSAGE_LIMIT** and **TREEE_HEIGHT** in the [relay.rs](src/examples/relay.rs) file to see how the RLN instance behaves with different parameters.
The customize **TREEE_HEIGHT** constant differs from the default value of `20` should follow [Custom Circuit Compilation](../rln/README.md#advanced-custom-circuit-compilation) instructions.
The customize **TREE_DEPTH** constant differs from the default value of `20` should follow [Custom Circuit Compilation](../rln/README.md#advanced-custom-circuit-compilation) instructions.
## Stateless Example
The following [Example](src/examples/stateless.rs) demonstrates how RLN can be used for stateless features by creating the Merkle tree outside of RLN instance.
The following [Stateless Example](src/examples/stateless.rs) demonstrates how RLN can be used for stateless features by creating the Merkle tree outside of RLN instance.
This example function similarly to the [Relay Example](#relay-example) but uses a stateless RLN and seperate Merkle tree.
You can run the example using the following command:
```bash
cargo run --example stateless --features stateless
```
or with the **arkzkey** feature flag:
```bash
cargo run --example stateless --features stateless,arkzkey
```
## CLI Commands
### Instance Management
To initialize a new RLN instance:
```bash
cargo run new --tree-height <HEIGHT>
```
To initialize an RLN instance with custom parameters:
```bash
cargo run new-with-params --resources-path <PATH> --tree-height <HEIGHT>
```
To update the Merkle tree height:
```bash
cargo run set-tree --tree-height <HEIGHT>
```
### Leaf Operations
To set a single leaf:
```bash
cargo run set-leaf --index <INDEX> --input <INPUT_PATH>
```
To set multiple leaves:
```bash
cargo run set-multiple-leaves --index <START_INDEX> --input <INPUT_PATH>
```
To reset multiple leaves:
```bash
cargo run reset-multiple-leaves --input <INPUT_PATH>
```
To set the next available leaf:
```bash
cargo run set-next-leaf --input <INPUT_PATH>
```
To delete a specific leaf:
```bash
cargo run delete-leaf --index <INDEX>
```
### Proof Operations
To generate a proof:
```bash
cargo run prove --input <INPUT_PATH>
```
To generate an RLN proof:
```bash
cargo run generate-proof --input <INPUT_PATH>
```
To verify a proof:
```bash
cargo run verify --input <PROOF_PATH>
```
To verify a proof with multiple Merkle roots:
```bash
cargo run verify-with-roots --input <INPUT_PATH> --roots <ROOTS_PATH>
```
### Tree Information
To retrieve the current Merkle root:
```bash
cargo run get-root
```
To obtain a Merkle proof for a specific index:
```bash
cargo run get-proof --index <INDEX>
cargo run --example stateless --no-default-features --features stateless
```

View File

@@ -1,11 +0,0 @@
{
"tree_config": {
"path": "database",
"temporary": false,
"cache_capacity": 150000,
"flush_every_ms": 12000,
"mode": "HighThroughput",
"use_compression": false
},
"tree_height": 20
}

View File

@@ -1,69 +0,0 @@
use std::path::PathBuf;
use clap::Subcommand;
use rln::circuit::TEST_TREE_HEIGHT;
#[derive(Subcommand)]
pub(crate) enum Commands {
New {
#[arg(short, long, default_value_t = TEST_TREE_HEIGHT)]
tree_height: usize,
},
NewWithParams {
#[arg(short, long, default_value_t = TEST_TREE_HEIGHT)]
tree_height: usize,
#[arg(short, long, default_value = "../rln/resources/tree_height_20")]
resources_path: PathBuf,
},
SetTree {
#[arg(short, long, default_value_t = TEST_TREE_HEIGHT)]
tree_height: usize,
},
SetLeaf {
#[arg(short, long)]
index: usize,
#[arg(short, long)]
input: PathBuf,
},
SetMultipleLeaves {
#[arg(short, long)]
index: usize,
#[arg(short, long)]
input: PathBuf,
},
ResetMultipleLeaves {
#[arg(short, long)]
input: PathBuf,
},
SetNextLeaf {
#[arg(short, long)]
input: PathBuf,
},
DeleteLeaf {
#[arg(short, long)]
index: usize,
},
GetRoot,
GetProof {
#[arg(short, long)]
index: usize,
},
Prove {
#[arg(short, long)]
input: PathBuf,
},
Verify {
#[arg(short, long)]
input: PathBuf,
},
GenerateProof {
#[arg(short, long)]
input: PathBuf,
},
VerifyWithRoots {
#[arg(short, long)]
input: PathBuf,
#[arg(short, long)]
roots: PathBuf,
},
}

View File

@@ -1,38 +0,0 @@
use std::{fs::File, io::Read, path::PathBuf};
use color_eyre::Result;
use serde::{Deserialize, Serialize};
use serde_json::Value;
pub const RLN_CONFIG_PATH: &str = "RLN_CONFIG_PATH";
#[derive(Default, Serialize, Deserialize)]
pub(crate) struct Config {
pub inner: Option<InnerConfig>,
}
#[derive(Default, Serialize, Deserialize)]
pub(crate) struct InnerConfig {
pub tree_height: usize,
pub tree_config: Value,
}
impl Config {
pub(crate) fn load_config() -> Result<Config> {
match std::env::var(RLN_CONFIG_PATH) {
Ok(env) => {
let path = PathBuf::from(env);
let mut file = File::open(path)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let inner: InnerConfig = serde_json::from_str(&contents)?;
Ok(Config { inner: Some(inner) })
}
Err(_) => Ok(Config::default()),
}
}
pub(crate) fn as_bytes(&self) -> Vec<u8> {
serde_json::to_string(&self.inner).unwrap().into_bytes()
}
}

View File

@@ -1,23 +1,22 @@
use std::{
collections::HashMap,
fs::File,
io::{stdin, stdout, Cursor, Read, Write},
io::{stdin, stdout, Read, Write},
path::{Path, PathBuf},
};
use clap::{Parser, Subcommand};
use color_eyre::{eyre::eyre, Result};
use rln::{
circuit::Fr,
hashers::{hash_to_field, poseidon_hash},
protocol::{keygen, prepare_prove_input, prepare_verify_input},
public::RLN,
utils::{bytes_le_to_fr, fr_to_bytes_le, generate_input_buffer},
use rln::prelude::{
hash_to_field_le, keygen, poseidon_hash, recover_id_secret, Fr, IdSecret, PmtreeConfigBuilder,
RLNProofValues, RLNWitnessInput, RLN,
};
use zerokit_utils::pm_tree::Mode;
const MESSAGE_LIMIT: u32 = 1;
const TREEE_HEIGHT: usize = 20;
const TREE_DEPTH: usize = 20;
type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;
#[derive(Parser)]
#[command(author, version, about, long_about = None)]
@@ -44,15 +43,15 @@ enum Commands {
#[derive(Debug, Clone)]
struct Identity {
identity_secret_hash: Fr,
identity_secret: IdSecret,
id_commitment: Fr,
}
impl Identity {
fn new() -> Self {
let (identity_secret_hash, id_commitment) = keygen();
let (identity_secret, id_commitment) = keygen().unwrap();
Identity {
identity_secret_hash,
identity_secret,
id_commitment,
}
}
@@ -60,18 +59,15 @@ impl Identity {
struct RLNSystem {
rln: RLN,
used_nullifiers: HashMap<[u8; 32], Vec<u8>>,
used_nullifiers: HashMap<Fr, RLNProofValues>,
local_identities: HashMap<usize, Identity>,
}
impl RLNSystem {
fn new() -> Result<Self> {
let mut resources: Vec<Vec<u8>> = Vec::new();
let resources_path: PathBuf = format!("../rln/resources/tree_height_{TREEE_HEIGHT}").into();
#[cfg(feature = "arkzkey")]
let resources_path: PathBuf = format!("../rln/resources/tree_depth_{TREE_DEPTH}").into();
let filenames = ["rln_final.arkzkey", "graph.bin"];
#[cfg(not(feature = "arkzkey"))]
let filenames = ["rln_final.zkey", "graph.bin"];
for filename in filenames {
let fullpath = resources_path.join(Path::new(filename));
let mut file = File::open(&fullpath)?;
@@ -80,11 +76,19 @@ impl RLNSystem {
file.read_exact(&mut output_buffer)?;
resources.push(output_buffer);
}
let tree_config = PmtreeConfigBuilder::new()
.path("./database")
.temporary(false)
.cache_capacity(1073741824)
.flush_every_ms(500)
.mode(Mode::HighThroughput)
.use_compression(false)
.build()?;
let rln = RLN::new_with_params(
TREEE_HEIGHT,
TREE_DEPTH,
resources[0].clone(),
resources[1].clone(),
generate_input_buffer(),
tree_config,
)?;
println!("RLN instance initialized successfully");
Ok(RLNSystem {
@@ -103,8 +107,8 @@ impl RLNSystem {
println!("Registered users:");
for (index, identity) in &self.local_identities {
println!("User Index: {index}");
println!("+ Identity Secret Hash: {}", identity.identity_secret_hash);
println!("+ Identity Commitment: {}", identity.id_commitment);
println!("+ Identity secret: {}", *identity.identity_secret);
println!("+ Identity commitment: {}", identity.id_commitment);
println!();
}
}
@@ -113,131 +117,112 @@ impl RLNSystem {
let index = self.rln.leaves_set();
let identity = Identity::new();
let rate_commitment = poseidon_hash(&[identity.id_commitment, Fr::from(MESSAGE_LIMIT)]);
let mut buffer = Cursor::new(fr_to_bytes_le(&rate_commitment));
match self.rln.set_next_leaf(&mut buffer) {
let rate_commitment =
poseidon_hash(&[identity.id_commitment, Fr::from(MESSAGE_LIMIT)]).unwrap();
match self.rln.set_next_leaf(rate_commitment) {
Ok(_) => {
println!("Registered User Index: {index}");
println!("+ Identity secret hash: {}", identity.identity_secret_hash);
println!("+ Identity commitment: {},", identity.id_commitment);
println!("+ Identity secret: {}", *identity.identity_secret);
println!("+ Identity commitment: {}", identity.id_commitment);
self.local_identities.insert(index, identity);
}
Err(_) => {
println!("Maximum user limit reached: 2^{TREEE_HEIGHT}");
println!("Maximum user limit reached: 2^{TREE_DEPTH}");
}
};
Ok(index)
}
fn generate_proof(
fn generate_and_verify_proof(
&mut self,
user_index: usize,
message_id: u32,
signal: &str,
external_nullifier: Fr,
) -> Result<Vec<u8>> {
) -> Result<RLNProofValues> {
let identity = match self.local_identities.get(&user_index) {
Some(identity) => identity,
None => return Err(eyre!("user index {user_index} not found")),
None => return Err(format!("user index {user_index} not found").into()),
};
let serialized = prepare_prove_input(
identity.identity_secret_hash,
user_index,
let (path_elements, identity_path_index) = self.rln.get_merkle_proof(user_index)?;
let x = hash_to_field_le(signal.as_bytes())?;
let witness = RLNWitnessInput::new(
identity.identity_secret.clone(),
Fr::from(MESSAGE_LIMIT),
Fr::from(message_id),
path_elements,
identity_path_index,
x,
external_nullifier,
signal.as_bytes(),
);
let mut input_buffer = Cursor::new(serialized);
let mut output_buffer = Cursor::new(Vec::new());
self.rln
.generate_rln_proof(&mut input_buffer, &mut output_buffer)?;
)?;
let (proof, proof_values) = self.rln.generate_rln_proof(&witness)?;
println!("Proof generated successfully:");
println!("+ User Index: {user_index}");
println!("+ Message ID: {message_id}");
println!("+ Signal: {signal}");
Ok(output_buffer.into_inner())
let verified = self.rln.verify_rln_proof(&proof, &proof_values, &x)?;
if verified {
println!("Proof verified successfully");
}
fn verify_proof(&mut self, proof_data: Vec<u8>, signal: &str) -> Result<()> {
let proof_with_signal = prepare_verify_input(proof_data.clone(), signal.as_bytes());
let mut input_buffer = Cursor::new(proof_with_signal);
Ok(proof_values)
}
match self.rln.verify_rln_proof(&mut input_buffer) {
Ok(true) => {
let nullifier = &proof_data[256..288];
let nullifier_key: [u8; 32] = nullifier.try_into()?;
if let Some(previous_proof) = self.used_nullifiers.get(&nullifier_key) {
self.handle_duplicate_message_id(previous_proof.clone(), proof_data)?;
fn check_nullifier(&mut self, proof_values: RLNProofValues) -> Result<()> {
if let Some(&previous_proof_values) = self.used_nullifiers.get(&proof_values.nullifier) {
self.handle_duplicate_message_id(previous_proof_values, proof_values)?;
return Ok(());
}
self.used_nullifiers.insert(nullifier_key, proof_data);
self.used_nullifiers
.insert(proof_values.nullifier, proof_values);
println!("Message verified and accepted");
}
Ok(false) => {
println!("Verification failed: message_id must be unique within the epoch and satisfy 0 <= message_id < MESSAGE_LIMIT: {MESSAGE_LIMIT}");
}
Err(err) => return Err(err),
}
Ok(())
}
fn handle_duplicate_message_id(
&mut self,
previous_proof: Vec<u8>,
current_proof: Vec<u8>,
previous_proof_values: RLNProofValues,
current_proof_values: RLNProofValues,
) -> Result<()> {
let x = &current_proof[192..224];
let y = &current_proof[224..256];
let prev_x = &previous_proof[192..224];
let prev_y = &previous_proof[224..256];
if x == prev_x && y == prev_y {
return Err(eyre!("this exact message and signal has already been sent"));
if previous_proof_values.x == current_proof_values.x
&& previous_proof_values.y == current_proof_values.y
{
return Err("this exact message and signal has already been sent".into());
}
let mut proof1 = Cursor::new(previous_proof);
let mut proof2 = Cursor::new(current_proof);
let mut output = Cursor::new(Vec::new());
match self
.rln
.recover_id_secret(&mut proof1, &mut proof2, &mut output)
{
Ok(_) => {
let output_data = output.into_inner();
let (leaked_identity_secret_hash, _) = bytes_le_to_fr(&output_data);
match recover_id_secret(&previous_proof_values, &current_proof_values) {
Ok(leaked_identity_secret) => {
if let Some((user_index, identity)) = self
.local_identities
.iter()
.find(|(_, identity)| {
identity.identity_secret_hash == leaked_identity_secret_hash
})
.find(|(_, identity)| identity.identity_secret == leaked_identity_secret)
.map(|(index, identity)| (*index, identity))
{
let real_identity_secret_hash = identity.identity_secret_hash;
if leaked_identity_secret_hash != real_identity_secret_hash {
Err(eyre!("identity secret hash mismatch {leaked_identity_secret_hash} != {real_identity_secret_hash}"))
let real_identity_secret = identity.identity_secret.clone();
if leaked_identity_secret != real_identity_secret {
Err("Identity secret mismatch: leaked_identity_secret != real_identity_secret".into())
} else {
println!("DUPLICATE message ID detected! Reveal identity secret hash: {leaked_identity_secret_hash}");
println!(
"DUPLICATE message ID detected! Reveal identity secret: {}",
*leaked_identity_secret
);
self.local_identities.remove(&user_index);
self.rln.delete_leaf(user_index)?;
println!("User index {user_index} has been SLASHED");
Ok(())
}
} else {
Err(eyre!(
"user identity secret hash {leaked_identity_secret_hash} not found"
))
Err("user identity secret ******** not found".into())
}
}
Err(err) => Err(eyre!("Failed to recover identity secret: {err}")),
Err(err) => Err(format!("Failed to recover identity secret: {err}").into()),
}
}
}
@@ -246,9 +231,9 @@ fn main() -> Result<()> {
println!("Initializing RLN instance...");
print!("\x1B[2J\x1B[1;1H");
let mut rln_system = RLNSystem::new()?;
let rln_epoch = hash_to_field(b"epoch");
let rln_identifier = hash_to_field(b"rln-identifier");
let external_nullifier = poseidon_hash(&[rln_epoch, rln_identifier]);
let rln_epoch = hash_to_field_le(b"epoch")?;
let rln_identifier = hash_to_field_le(b"rln-identifier")?;
let external_nullifier = poseidon_hash(&[rln_epoch, rln_identifier]).unwrap();
println!("RLN Relay Example:");
println!("Message Limit: {MESSAGE_LIMIT}");
println!("----------------------------------");
@@ -275,15 +260,15 @@ fn main() -> Result<()> {
message_id,
signal,
} => {
match rln_system.generate_proof(
match rln_system.generate_and_verify_proof(
user_index,
message_id,
&signal,
external_nullifier,
) {
Ok(proof) => {
if let Err(err) = rln_system.verify_proof(proof, &signal) {
println!("Verification error: {err}");
Ok(proof_values) => {
if let Err(err) = rln_system.check_nullifier(proof_values) {
println!("Check nullifier error: {err}");
};
}
Err(err) => {

View File

@@ -1,23 +1,20 @@
#![cfg(feature = "stateless")]
use std::{
collections::HashMap,
io::{stdin, stdout, Cursor, Write},
io::{stdin, stdout, Write},
};
use clap::{Parser, Subcommand};
use color_eyre::{eyre::eyre, Result};
use rln::{
circuit::{Fr, TEST_TREE_HEIGHT},
hashers::{hash_to_field, poseidon_hash},
poseidon_tree::PoseidonTree,
protocol::{keygen, prepare_verify_input, rln_witness_from_values, serialize_witness},
public::RLN,
utils::{bytes_le_to_fr, fr_to_bytes_le},
use rln::prelude::{
hash_to_field_le, keygen, poseidon_hash, recover_id_secret, Fr, IdSecret, OptimalMerkleTree,
PoseidonHash, RLNProofValues, RLNWitnessInput, ZerokitMerkleProof, ZerokitMerkleTree,
DEFAULT_TREE_DEPTH, RLN,
};
use zerokit_utils::ZerokitMerkleTree;
const MESSAGE_LIMIT: u32 = 1;
type Result<T> = std::result::Result<T, Box<dyn std::error::Error>>;
type ConfigOf<T> = <T as ZerokitMerkleTree>::Config;
#[derive(Parser)]
@@ -45,15 +42,15 @@ enum Commands {
#[derive(Debug, Clone)]
struct Identity {
identity_secret_hash: Fr,
identity_secret: IdSecret,
id_commitment: Fr,
}
impl Identity {
fn new() -> Self {
let (identity_secret_hash, id_commitment) = keygen();
let (identity_secret, id_commitment) = keygen().unwrap();
Identity {
identity_secret_hash,
identity_secret,
id_commitment,
}
}
@@ -61,8 +58,8 @@ impl Identity {
struct RLNSystem {
rln: RLN,
tree: PoseidonTree,
used_nullifiers: HashMap<[u8; 32], Vec<u8>>,
tree: OptimalMerkleTree<PoseidonHash>,
used_nullifiers: HashMap<Fr, RLNProofValues>,
local_identities: HashMap<usize, Identity>,
}
@@ -70,11 +67,12 @@ impl RLNSystem {
fn new() -> Result<Self> {
let rln = RLN::new()?;
let default_leaf = Fr::from(0);
let tree = PoseidonTree::new(
TEST_TREE_HEIGHT,
let tree: OptimalMerkleTree<PoseidonHash> = OptimalMerkleTree::new(
DEFAULT_TREE_DEPTH,
default_leaf,
ConfigOf::<PoseidonTree>::default(),
)?;
ConfigOf::<OptimalMerkleTree<PoseidonHash>>::default(),
)
.unwrap();
Ok(RLNSystem {
rln,
@@ -93,8 +91,8 @@ impl RLNSystem {
println!("Registered users:");
for (index, identity) in &self.local_identities {
println!("User Index: {index}");
println!("+ Identity Secret Hash: {}", identity.identity_secret_hash);
println!("+ Identity Commitment: {}", identity.id_commitment);
println!("+ Identity secret: {}", *identity.identity_secret);
println!("+ Identity commitment: {}", identity.id_commitment);
println!();
}
}
@@ -103,137 +101,117 @@ impl RLNSystem {
let index = self.tree.leaves_set();
let identity = Identity::new();
let rate_commitment = poseidon_hash(&[identity.id_commitment, Fr::from(MESSAGE_LIMIT)]);
let rate_commitment =
poseidon_hash(&[identity.id_commitment, Fr::from(MESSAGE_LIMIT)]).unwrap();
self.tree.update_next(rate_commitment)?;
println!("Registered User Index: {index}");
println!("+ Identity secret hash: {}", identity.identity_secret_hash);
println!("+ Identity secret: {}", *identity.identity_secret);
println!("+ Identity commitment: {}", identity.id_commitment);
self.local_identities.insert(index, identity);
Ok(index)
}
fn generate_proof(
fn generate_and_verify_proof(
&mut self,
user_index: usize,
message_id: u32,
signal: &str,
external_nullifier: Fr,
) -> Result<Vec<u8>> {
) -> Result<RLNProofValues> {
let identity = match self.local_identities.get(&user_index) {
Some(identity) => identity,
None => return Err(eyre!("user index {user_index} not found")),
None => return Err(format!("user index {user_index} not found").into()),
};
let merkle_proof = self.tree.proof(user_index)?;
let x = hash_to_field(signal.as_bytes());
let x = hash_to_field_le(signal.as_bytes())?;
let rln_witness = rln_witness_from_values(
identity.identity_secret_hash,
&merkle_proof,
x,
external_nullifier,
let witness = RLNWitnessInput::new(
identity.identity_secret.clone(),
Fr::from(MESSAGE_LIMIT),
Fr::from(message_id),
merkle_proof.get_path_elements(),
merkle_proof.get_path_index(),
x,
external_nullifier,
)?;
let serialized = serialize_witness(&rln_witness)?;
let mut input_buffer = Cursor::new(serialized);
let mut output_buffer = Cursor::new(Vec::new());
self.rln
.generate_rln_proof_with_witness(&mut input_buffer, &mut output_buffer)?;
let (proof, proof_values) = self.rln.generate_rln_proof(&witness)?;
println!("Proof generated successfully:");
println!("+ User Index: {user_index}");
println!("+ Message ID: {message_id}");
println!("+ Signal: {signal}");
Ok(output_buffer.into_inner())
let tree_root = self.tree.root();
let verified = self
.rln
.verify_with_roots(&proof, &proof_values, &x, &[tree_root])?;
if verified {
println!("Proof verified successfully");
}
fn verify_proof(&mut self, proof_data: Vec<u8>, signal: &str) -> Result<()> {
let proof_with_signal = prepare_verify_input(proof_data.clone(), signal.as_bytes());
let mut input_buffer = Cursor::new(proof_with_signal);
Ok(proof_values)
}
let root = self.tree.root();
let roots_serialized = fr_to_bytes_le(&root);
let mut roots_buffer = Cursor::new(roots_serialized);
fn check_nullifier(&mut self, proof_values: RLNProofValues) -> Result<()> {
let tree_root = self.tree.root();
match self
.rln
.verify_with_roots(&mut input_buffer, &mut roots_buffer)
{
Ok(true) => {
let nullifier = &proof_data[256..288];
let nullifier_key: [u8; 32] = nullifier.try_into()?;
if let Some(previous_proof) = self.used_nullifiers.get(&nullifier_key) {
self.handle_duplicate_message_id(previous_proof.clone(), proof_data)?;
if proof_values.root != tree_root {
println!("Check nullifier failed: invalid root");
return Ok(());
}
self.used_nullifiers.insert(nullifier_key, proof_data);
if let Some(&previous_proof_values) = self.used_nullifiers.get(&proof_values.nullifier) {
self.handle_duplicate_message_id(previous_proof_values, proof_values)?;
return Ok(());
}
self.used_nullifiers
.insert(proof_values.nullifier, proof_values);
println!("Message verified and accepted");
}
Ok(false) => {
println!("Verification failed: message_id must be unique within the epoch and satisfy 0 <= message_id < MESSAGE_LIMIT: {MESSAGE_LIMIT}");
}
Err(err) => return Err(err.into()),
}
Ok(())
}
fn handle_duplicate_message_id(
&mut self,
previous_proof: Vec<u8>,
current_proof: Vec<u8>,
previous_proof_values: RLNProofValues,
current_proof_values: RLNProofValues,
) -> Result<()> {
let x = &current_proof[192..224];
let y = &current_proof[224..256];
let prev_x = &previous_proof[192..224];
let prev_y = &previous_proof[224..256];
if x == prev_x && y == prev_y {
return Err(eyre!("this exact message and signal has already been sent"));
if previous_proof_values.x == current_proof_values.x
&& previous_proof_values.y == current_proof_values.y
{
return Err("this exact message and signal has already been sent".into());
}
let mut proof1 = Cursor::new(previous_proof);
let mut proof2 = Cursor::new(current_proof);
let mut output = Cursor::new(Vec::new());
match self
.rln
.recover_id_secret(&mut proof1, &mut proof2, &mut output)
{
Ok(_) => {
let output_data = output.into_inner();
let (leaked_identity_secret_hash, _) = bytes_le_to_fr(&output_data);
match recover_id_secret(&previous_proof_values, &current_proof_values) {
Ok(leaked_identity_secret) => {
if let Some((user_index, identity)) = self
.local_identities
.iter()
.find(|(_, identity)| {
identity.identity_secret_hash == leaked_identity_secret_hash
})
.find(|(_, identity)| identity.identity_secret == leaked_identity_secret)
.map(|(index, identity)| (*index, identity))
{
let real_identity_secret_hash = identity.identity_secret_hash;
if leaked_identity_secret_hash != real_identity_secret_hash {
Err(eyre!("identity secret hash mismatch {leaked_identity_secret_hash} != {real_identity_secret_hash}"))
let real_identity_secret = identity.identity_secret.clone();
if leaked_identity_secret != real_identity_secret {
Err("Identity secret mismatch: leaked_identity_secret != real_identity_secret".into())
} else {
println!("DUPLICATE message ID detected! Reveal identity secret hash: {leaked_identity_secret_hash}");
println!(
"DUPLICATE message ID detected! Reveal identity secret: {}",
*leaked_identity_secret
);
self.local_identities.remove(&user_index);
println!("User index {user_index} has been SLASHED");
Ok(())
}
} else {
Err(eyre!(
"user identity secret hash {leaked_identity_secret_hash} not found"
))
Err("user identity secret ******** not found".into())
}
}
Err(err) => Err(eyre!("Failed to recover identity secret: {err}")),
Err(err) => Err(format!("Failed to recover identity secret: {err}").into()),
}
}
}
@@ -242,9 +220,9 @@ fn main() -> Result<()> {
println!("Initializing RLN instance...");
print!("\x1B[2J\x1B[1;1H");
let mut rln_system = RLNSystem::new()?;
let rln_epoch = hash_to_field(b"epoch");
let rln_identifier = hash_to_field(b"rln-identifier");
let external_nullifier = poseidon_hash(&[rln_epoch, rln_identifier]);
let rln_epoch = hash_to_field_le(b"epoch")?;
let rln_identifier = hash_to_field_le(b"rln-identifier")?;
let external_nullifier = poseidon_hash(&[rln_epoch, rln_identifier]).unwrap();
println!("RLN Stateless Relay Example:");
println!("Message Limit: {MESSAGE_LIMIT}");
println!("----------------------------------");
@@ -272,15 +250,15 @@ fn main() -> Result<()> {
message_id,
signal,
} => {
match rln_system.generate_proof(
match rln_system.generate_and_verify_proof(
user_index,
message_id,
&signal,
external_nullifier,
) {
Ok(proof) => {
if let Err(err) = rln_system.verify_proof(proof, &signal) {
println!("Verification error: {err}");
Ok(proof_values) => {
if let Err(err) = rln_system.check_nullifier(proof_values) {
println!("Check nullifier error: {err}");
};
}
Err(err) => {

View File

@@ -1,202 +0,0 @@
use std::{
fs::File,
io::{Cursor, Read},
path::Path,
};
use clap::Parser;
use color_eyre::{eyre::Report, Result};
use commands::Commands;
use config::{Config, InnerConfig};
use rln::{
public::RLN,
utils::{bytes_le_to_fr, bytes_le_to_vec_fr},
};
use serde_json::json;
use state::State;
mod commands;
mod config;
mod state;
#[derive(Parser)]
#[command(author, version, about, long_about = None)]
struct Cli {
#[command(subcommand)]
command: Option<Commands>,
}
fn main() -> Result<()> {
let cli = Cli::parse();
let mut state = match &cli.command {
Some(Commands::New { .. }) | Some(Commands::NewWithParams { .. }) => State::default(),
_ => State::load_state()?,
};
match cli.command {
Some(Commands::New { tree_height }) => {
let config = Config::load_config()?;
state.rln = if let Some(InnerConfig { tree_height, .. }) = config.inner {
println!("Initializing RLN with custom config");
Some(RLN::new(tree_height, Cursor::new(config.as_bytes()))?)
} else {
println!("Initializing RLN with default config");
Some(RLN::new(tree_height, Cursor::new(json!({}).to_string()))?)
};
Ok(())
}
Some(Commands::NewWithParams {
tree_height,
resources_path,
}) => {
let mut resources: Vec<Vec<u8>> = Vec::new();
#[cfg(feature = "arkzkey")]
let filenames = ["rln_final.arkzkey", "graph.bin"];
#[cfg(not(feature = "arkzkey"))]
let filenames = ["rln_final.zkey", "graph.bin"];
for filename in filenames {
let fullpath = resources_path.join(Path::new(filename));
let mut file = File::open(&fullpath)?;
let metadata = std::fs::metadata(&fullpath)?;
let mut output_buffer = vec![0; metadata.len() as usize];
file.read_exact(&mut output_buffer)?;
resources.push(output_buffer);
}
let config = Config::load_config()?;
if let Some(InnerConfig {
tree_height,
tree_config,
}) = config.inner
{
println!("Initializing RLN with custom config");
state.rln = Some(RLN::new_with_params(
tree_height,
resources[0].clone(),
resources[1].clone(),
Cursor::new(tree_config.to_string().as_bytes()),
)?)
} else {
println!("Initializing RLN with default config");
state.rln = Some(RLN::new_with_params(
tree_height,
resources[0].clone(),
resources[1].clone(),
Cursor::new(json!({}).to_string()),
)?)
};
Ok(())
}
Some(Commands::SetTree { tree_height }) => {
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.set_tree(tree_height)?;
Ok(())
}
Some(Commands::SetLeaf { index, input }) => {
let input_data = File::open(input)?;
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.set_leaf(index, input_data)?;
Ok(())
}
Some(Commands::SetMultipleLeaves { index, input }) => {
let input_data = File::open(input)?;
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.set_leaves_from(index, input_data)?;
Ok(())
}
Some(Commands::ResetMultipleLeaves { input }) => {
let input_data = File::open(input)?;
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.init_tree_with_leaves(input_data)?;
Ok(())
}
Some(Commands::SetNextLeaf { input }) => {
let input_data = File::open(input)?;
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.set_next_leaf(input_data)?;
Ok(())
}
Some(Commands::DeleteLeaf { index }) => {
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.delete_leaf(index)?;
Ok(())
}
Some(Commands::Prove { input }) => {
let input_data = File::open(input)?;
let mut output_buffer = Cursor::new(Vec::<u8>::new());
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.prove(input_data, &mut output_buffer)?;
let proof = output_buffer.into_inner();
println!("proof: {:?}", proof);
Ok(())
}
Some(Commands::Verify { input }) => {
let input_data = File::open(input)?;
let verified = state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.verify(input_data)?;
println!("verified: {:?}", verified);
Ok(())
}
Some(Commands::GenerateProof { input }) => {
let input_data = File::open(input)?;
let mut output_buffer = Cursor::new(Vec::<u8>::new());
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.generate_rln_proof(input_data, &mut output_buffer)?;
let proof = output_buffer.into_inner();
println!("proof: {:?}", proof);
Ok(())
}
Some(Commands::VerifyWithRoots { input, roots }) => {
let input_data = File::open(input)?;
let roots_data = File::open(roots)?;
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.verify_with_roots(input_data, roots_data)?;
Ok(())
}
Some(Commands::GetRoot) => {
let mut output_buffer = Cursor::new(Vec::<u8>::new());
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.get_root(&mut output_buffer)
.unwrap();
let (root, _) = bytes_le_to_fr(&output_buffer.into_inner());
println!("root: {root}");
Ok(())
}
Some(Commands::GetProof { index }) => {
let mut output_buffer = Cursor::new(Vec::<u8>::new());
state
.rln
.ok_or(Report::msg("no RLN instance initialized"))?
.get_proof(index, &mut output_buffer)?;
let output_buffer_inner = output_buffer.into_inner();
let (path_elements, _) = bytes_le_to_vec_fr(&output_buffer_inner)?;
for (index, element) in path_elements.iter().enumerate() {
println!("path element {}: {}", index, element);
}
Ok(())
}
None => Ok(()),
}
}

View File

@@ -1,23 +0,0 @@
use std::io::Cursor;
use color_eyre::Result;
use rln::public::RLN;
use crate::config::{Config, InnerConfig};
#[derive(Default)]
pub(crate) struct State {
pub rln: Option<RLN>,
}
impl State {
pub(crate) fn load_state() -> Result<State> {
let config = Config::load_config()?;
let rln = if let Some(InnerConfig { tree_height, .. }) = config.inner {
Some(RLN::new(tree_height, Cursor::new(config.as_bytes()))?)
} else {
None
};
Ok(State { rln })
}
}

rln-wasm/.gitignore
View File

@@ -1,6 +1,22 @@
# Common files to ignore in Rust projects
.DS_Store
.idea
*.log
tmp/
# Generated by Cargo will have compiled files and executables
/target
# Generated by rln-wasm
/pkg
/examples/node_modules
/examples/package-lock.json
# Generated by Nix
result
# These are backup files generated by rustfmt
**/*.rs.bk
Cargo.lock
bin/
pkg/
wasm-pack.log
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb

rln-wasm/Cargo.lock (generated)

File diff suppressed because it is too large.

View File

@@ -1,39 +1,53 @@
[package]
name = "rln-wasm"
version = "0.1.0"
version = "1.0.0"
edition = "2021"
license = "MIT or Apache2"
license = "MIT OR Apache-2.0"
[lib]
crate-type = ["cdylib", "rlib"]
required-features = ["stateless"]
[dependencies]
rln = { path = "../rln", default-features = false }
num-bigint = { version = "0.4.6", default-features = false, features = [
"rand",
"serde",
rln = { path = "../rln", version = "1.0.0", default-features = false, features = [
"stateless",
] }
wasm-bindgen = "0.2.100"
zerokit_utils = { path = "../utils", version = "1.0.0", default-features = false }
num-bigint = { version = "0.4.6", default-features = false }
js-sys = "0.3.83"
wasm-bindgen = "0.2.106"
serde-wasm-bindgen = "0.6.5"
js-sys = "0.3.77"
serde_json = "1.0"
serde = "1.0.228"
wasm-bindgen-rayon = { version = "1.3.0", features = [
"no-bundler",
], optional = true }
ark-relations = { version = "0.5.1", features = ["std"] }
ark-groth16 = { version = "0.5.0", default-features = false }
rand = "0.8.5"
# The `console_error_panic_xhook` crate provides better debugging of panics by
# The `console_error_panic_hook` crate provides better debugging of panics by
# logging them with `console.error`. This is great for development, but requires
# all the `std::fmt` and `std::panicking` infrastructure, so isn't great for
# code size when deploying.
console_error_panic_hook = { version = "0.1.7", optional = true }
zerokit_utils = { path = "../utils" }
[target.'cfg(target_arch = "wasm32")'.dependencies]
getrandom = { version = "0.2.15", features = ["js"] }
getrandom = { version = "0.2.16", features = ["js"] }
[dev-dependencies]
wasm-bindgen-test = "0.3.50"
wasm-bindgen-futures = "0.4.50"
serde_json = "1.0.145"
wasm-bindgen-test = "0.3.56"
wasm-bindgen-futures = "0.4.56"
ark-std = { version = "0.5.0", default-features = false }
[dev-dependencies.web-sys]
version = "0.3.83"
features = ["Window", "Navigator"]
[features]
default = ["console_error_panic_hook"]
stateless = ["rln/stateless"]
arkzkey = ["rln/arkzkey"]
default = []
utils = []
panic_hook = ["console_error_panic_hook"]
parallel = ["rln/parallel", "wasm-bindgen-rayon", "ark-groth16/parallel"]
[package.metadata.docs.rs]
all-features = true

View File

@@ -1,24 +1,72 @@
[tasks.build]
clear = true
dependencies = ["pack_build", "pack_rename"]
dependencies = ["pack_build", "pack_rename", "pack_add_keywords"]
[tasks.build_arkzkey]
[tasks.build_parallel]
clear = true
dependencies = ["pack_build_arkzkey", "pack_rename"]
dependencies = [
"pack_build_parallel",
"pack_rename_parallel",
"pack_add_keywords",
]
[tasks.build_utils]
clear = true
dependencies = ["pack_build_utils", "pack_rename_utils", "pack_add_keywords"]
[tasks.pack_build]
command = "wasm-pack"
args = ["build", "--release", "--target", "web", "--scope", "waku"]
env = { "RUSTFLAGS" = "--cfg feature=\"stateless\"" }
[tasks.pack_build_arkzkey]
command = "wasm-pack"
args = ["build", "--release", "--target", "web", "--scope", "waku"]
env = { "RUSTFLAGS" = "--cfg feature=\"stateless\" --cfg feature=\"arkzkey\"" }
[tasks.pack_rename]
script = "sed -i.bak 's/rln-wasm/zerokit-rln-wasm/g' pkg/package.json && rm pkg/package.json.bak"
[tasks.pack_build_parallel]
command = "env"
args = [
"CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS=-C target-feature=+atomics,+bulk-memory,+mutable-globals -C link-arg=--shared-memory -C link-arg=--max-memory=1073741824 -C link-arg=--import-memory -C link-arg=--export=__wasm_init_tls -C link-arg=--export=__tls_size -C link-arg=--export=__tls_align -C link-arg=--export=__tls_base",
"rustup",
"run",
"nightly",
"wasm-pack",
"build",
"--release",
"--target",
"web",
"--scope",
"waku",
"--features",
"parallel",
"-Z",
"build-std=panic_abort,std",
]
[tasks.pack_rename_parallel]
script = "sed -i.bak 's/rln-wasm/zerokit-rln-wasm-parallel/g' pkg/package.json && rm pkg/package.json.bak"
[tasks.pack_build_utils]
command = "wasm-pack"
args = [
"build",
"--release",
"--target",
"web",
"--scope",
"waku",
"--no-default-features",
"--features",
"utils",
]
[tasks.pack_rename_utils]
script = "sed -i.bak 's/rln-wasm/zerokit-rln-wasm-utils/g' pkg/package.json && rm pkg/package.json.bak"
[tasks.pack_add_keywords]
script = """
jq '. + {keywords: ["zerokit", "rln", "wasm"]}' pkg/package.json > pkg/package.json.tmp && \
mv pkg/package.json.tmp pkg/package.json
"""
[tasks.test]
command = "wasm-pack"
args = [
@@ -30,9 +78,46 @@ args = [
"--",
"--nocapture",
]
env = { "RUSTFLAGS" = "--cfg feature=\"stateless\"" }
dependencies = ["build"]
[tasks.test_arkzkey]
[tasks.test_browser]
command = "wasm-pack"
args = [
"test",
"--release",
"--chrome",
"--headless",
"--target",
"wasm32-unknown-unknown",
"--",
"--nocapture",
]
dependencies = ["build"]
[tasks.test_parallel]
command = "env"
args = [
"CARGO_TARGET_WASM32_UNKNOWN_UNKNOWN_RUSTFLAGS=-C target-feature=+atomics,+bulk-memory,+mutable-globals -C link-arg=--shared-memory -C link-arg=--max-memory=1073741824 -C link-arg=--import-memory -C link-arg=--export=__wasm_init_tls -C link-arg=--export=__tls_size -C link-arg=--export=__tls_align -C link-arg=--export=__tls_base",
"rustup",
"run",
"nightly",
"wasm-pack",
"test",
"--release",
"--chrome",
"--headless",
"--target",
"wasm32-unknown-unknown",
"--features",
"parallel",
"-Z",
"build-std=panic_abort,std",
"--",
"--nocapture",
]
dependencies = ["build_parallel"]
[tasks.test_utils]
command = "wasm-pack"
args = [
"test",
@@ -40,19 +125,13 @@ args = [
"--node",
"--target",
"wasm32-unknown-unknown",
"--no-default-features",
"--features",
"utils",
"--",
"--nocapture",
]
env = { "RUSTFLAGS" = "--cfg feature=\"stateless\" --cfg feature=\"arkzkey\"" }
dependencies = ["build_arkzkey"]
dependencies = ["build_utils"]
[tasks.bench]
disabled = true
[tasks.login]
command = "wasm-pack"
args = ["login"]
[tasks.publish]
command = "wasm-pack"
args = ["publish", "--access", "public", "--target", "web"]

View File

@@ -1,55 +1,119 @@
# RLN for WASM
[![npm version](https://badge.fury.io/js/@waku%2Fzerokit-rln-wasm.svg)](https://badge.fury.io/js/@waku%2Fzerokit-rln-wasm)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
The Zerokit RLN WASM Module provides WebAssembly bindings for working with
Rate-Limiting Nullifier [RLN](https://rfc.vac.dev/vac/raw/rln-v2) zkSNARK proofs and primitives.
This module is used by [waku-org/js-rln](https://github.com/waku-org/js-rln/) to enable
RLN functionality in JavaScript/TypeScript applications.
## Install Dependencies
> [!NOTE]
> This project requires the following tools:
>
> - `wasm-pack` (v0.13.1) - for compiling Rust to WebAssembly
> - `cargo-make` - for running build commands
> - `nvm` - to install and manage Node.js (v22.14.0+)
### Quick Install
Install everything needed for `zerokit` at the root of the repository:
```bash
make installdeps
```
### Manual Installation
```bash
# Install wasm-pack
cargo install wasm-pack --version=0.13.1
# Install cargo-make
cargo install cargo-make
# Install Node.js via nvm
nvm install 22.14.0
nvm use 22.14.0
nvm alias default 22.14.0
```
## Building the Library
Navigate to the rln-wasm directory:
```bash
cd rln-wasm
```
Build commands:
```bash
cargo make build          # Default → @waku/zerokit-rln-wasm
cargo make build_parallel # Parallel → @waku/zerokit-rln-wasm-parallel (requires nightly Rust)
cargo make build_utils    # Utils only → @waku/zerokit-rln-wasm-utils
```
All packages are output to the `pkg/` directory.
## Running Tests and Benchmarks
```bash
cargo make test          # Standard tests
cargo make test_browser  # Browser headless mode
cargo make test_utils    # Utils-only tests
cargo make test_parallel # Parallel tests
```
## Examples
See the [Node example](./examples/index.js) and its [README](./examples/Readme.md) for proof generation, verification, and slashing.
## Parallel Computation
Enables multi-threaded browser execution using `wasm-bindgen-rayon`.
> [!NOTE]
>
> - Parallel support is not enabled by default due to WebAssembly and browser limitations.
> - Requires `nightly` Rust: `rustup install nightly`
> - Browser-only (not compatible with Node.js)
> - Requires HTTP headers for `SharedArrayBuffer`:
> - `Cross-Origin-Opener-Policy: same-origin`
> - `Cross-Origin-Embedder-Policy: require-corp`
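For local testing, a minimal static file server that adds these two headers might look like the sketch below. This is a hypothetical example using Node's built-in `http` module (not part of this package); the port, file layout, and MIME table are placeholders:
```js
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { extname, join } from "node:path";

// Minimal MIME map; .wasm must be served as application/wasm.
const types = { ".html": "text/html", ".js": "text/javascript", ".wasm": "application/wasm" };

createServer(async (req, res) => {
  try {
    const file = join(process.cwd(), req.url === "/" ? "/index.html" : req.url);
    const body = await readFile(file);
    res.writeHead(200, {
      "Content-Type": types[extname(file)] ?? "application/octet-stream",
      // Required so the browser exposes SharedArrayBuffer to the page:
      "Cross-Origin-Opener-Policy": "same-origin",
      "Cross-Origin-Embedder-Policy": "require-corp",
    });
    res.end(body);
  } catch {
    res.writeHead(404).end();
  }
}).listen(8080);
```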
### Usage
Direct usage (modern browsers with WebAssembly threads support):
```js
import * as wasmPkg from '@waku/zerokit-rln-wasm-parallel';
await wasmPkg.default();
await wasmPkg.initThreadPool(navigator.hardwareConcurrency);
wasmPkg.nowCallAnyExportedFuncs();
```
### Feature Detection for Older Browsers
If you're targeting [older browser versions that didn't support WebAssembly threads yet](https://webassembly.org/roadmap/), you'll want to use both builds: the parallel version for modern browsers and the default version as a fallback. Use feature detection to choose the appropriate build on the JavaScript side.
You can use the [wasm-feature-detect](https://github.com/GoogleChromeLabs/wasm-feature-detect) library for this purpose:
```js
```js
import { threads } from 'wasm-feature-detect';
let wasmPkg;
if (await threads()) {
wasmPkg = await import('@waku/zerokit-rln-wasm-parallel');
await wasmPkg.default();
await wasmPkg.initThreadPool(navigator.hardwareConcurrency);
} else {
wasmPkg = await import('@waku/zerokit-rln-wasm');
await wasmPkg.default();
}
wasmPkg.nowCallAnyExportedFuncs();
```

View File

@@ -0,0 +1,22 @@
# RLN WASM Node Examples
This example demonstrates how to use the RLN WASM package in a Node.js environment.
## Build the @waku/zerokit-rln-wasm package at the root of the rln-wasm module
```bash
cargo make build
```
## Move into this directory and install dependencies
```bash
cd examples
npm install
```
## Run
```bash
npm start
```

rln-wasm/examples/index.js (new file)
View File

@@ -0,0 +1,484 @@
import { readFileSync } from "fs";
import { fileURLToPath } from "url";
import { dirname, join } from "path";
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
function debugUint8Array(uint8Array) {
return Array.from(uint8Array, (byte) =>
byte.toString(16).padStart(2, "0")
).join(", ");
}
async function calculateWitness(circomPath, inputs, witnessCalculatorFile) {
const wasmFile = readFileSync(circomPath);
const wasmFileBuffer = wasmFile.buffer.slice(
wasmFile.byteOffset,
wasmFile.byteOffset + wasmFile.byteLength
);
const witnessCalculator = await witnessCalculatorFile(wasmFileBuffer);
const calculatedWitness = await witnessCalculator.calculateWitness(
inputs,
false
);
return calculatedWitness;
}
async function main() {
const rlnWasm = await import("../pkg/rln_wasm.js");
const wasmPath = join(__dirname, "../pkg/rln_wasm_bg.wasm");
const wasmBytes = readFileSync(wasmPath);
rlnWasm.initSync({ module: wasmBytes });
const zkeyPath = join(
__dirname,
"../../rln/resources/tree_depth_20/rln_final.arkzkey"
);
const circomPath = join(
__dirname,
"../../rln/resources/tree_depth_20/rln.wasm"
);
const witnessCalculatorPath = join(
__dirname,
"../resources/witness_calculator.js"
);
const { builder: witnessCalculatorFile } = await import(
witnessCalculatorPath
);
console.log("Creating RLN instance");
const zkeyData = readFileSync(zkeyPath);
let rlnInstance;
try {
rlnInstance = new rlnWasm.WasmRLN(new Uint8Array(zkeyData));
} catch (error) {
console.error("Initial RLN instance creation error:", error);
return;
}
console.log("RLN instance created successfully");
console.log("\nGenerating identity keys");
let identity;
try {
identity = rlnWasm.Identity.generate();
} catch (error) {
console.error("Key generation error:", error);
return;
}
const identitySecret = identity.getSecretHash();
const idCommitment = identity.getCommitment();
console.log("Identity generated");
console.log(" - identity_secret = " + identitySecret.debug());
console.log(" - id_commitment = " + idCommitment.debug());
console.log("\nCreating message limit");
const userMessageLimit = rlnWasm.WasmFr.fromUint(1);
console.log(" - user_message_limit = " + userMessageLimit.debug());
console.log("\nComputing rate commitment");
let rateCommitment;
try {
rateCommitment = rlnWasm.Hasher.poseidonHashPair(
idCommitment,
userMessageLimit
);
} catch (error) {
console.error("Rate commitment hash error:", error);
return;
}
console.log(" - rate_commitment = " + rateCommitment.debug());
console.log("\nWasmFr serialization: WasmFr <-> bytes");
const serRateCommitment = rateCommitment.toBytesLE();
console.log(
" - serialized rate_commitment = [" +
debugUint8Array(serRateCommitment) +
"]"
);
let deserRateCommitment;
try {
deserRateCommitment = rlnWasm.WasmFr.fromBytesLE(serRateCommitment);
} catch (error) {
console.error("Rate commitment deserialization error:", error);
return;
}
console.log(
" - deserialized rate_commitment = " + deserRateCommitment.debug()
);
console.log("\nIdentity serialization: Identity <-> bytes");
const serIdentity = identity.toBytesLE();
console.log(
" - serialized identity = [" + debugUint8Array(serIdentity) + "]"
);
let deserIdentity;
try {
deserIdentity = rlnWasm.Identity.fromBytesLE(serIdentity);
} catch (error) {
console.error("Identity deserialization error:", error);
return;
}
const deserIdentitySecret = deserIdentity.getSecretHash();
const deserIdCommitment = deserIdentity.getCommitment();
console.log(
" - deserialized identity = [" +
deserIdentitySecret.debug() +
", " +
deserIdCommitment.debug() +
"]"
);
console.log("\nBuilding Merkle path for stateless mode");
const treeDepth = 20;
const defaultLeaf = rlnWasm.WasmFr.zero();
const defaultHashes = [];
try {
defaultHashes[0] = rlnWasm.Hasher.poseidonHashPair(
defaultLeaf,
defaultLeaf
);
for (let i = 1; i < treeDepth - 1; i++) {
defaultHashes[i] = rlnWasm.Hasher.poseidonHashPair(
defaultHashes[i - 1],
defaultHashes[i - 1]
);
}
} catch (error) {
console.error("Poseidon hash error:", error);
return;
}
const pathElements = new rlnWasm.VecWasmFr();
pathElements.push(defaultLeaf);
for (let i = 1; i < treeDepth; i++) {
pathElements.push(defaultHashes[i - 1]);
}
const identityPathIndex = new Uint8Array(treeDepth);
console.log("\nVecWasmFr serialization: VecWasmFr <-> bytes");
const serPathElements = pathElements.toBytesLE();
console.log(
" - serialized path_elements = [" + debugUint8Array(serPathElements) + "]"
);
let deserPathElements;
try {
deserPathElements = rlnWasm.VecWasmFr.fromBytesLE(serPathElements);
} catch (error) {
console.error("Path elements deserialization error:", error);
return;
}
console.log(" - deserialized path_elements = ", deserPathElements.debug());
console.log("\nUint8Array serialization: Uint8Array <-> bytes");
const serPathIndex = rlnWasm.Uint8ArrayUtils.toBytesLE(identityPathIndex);
console.log(
" - serialized path_index = [" + debugUint8Array(serPathIndex) + "]"
);
let deserPathIndex;
try {
deserPathIndex = rlnWasm.Uint8ArrayUtils.fromBytesLE(serPathIndex);
} catch (error) {
console.error("Path index deserialization error:", error);
return;
}
console.log(" - deserialized path_index =", deserPathIndex);
console.log("\nComputing Merkle root for stateless mode");
console.log(" - computing root for index 0 with rate_commitment");
let computedRoot;
try {
computedRoot = rlnWasm.Hasher.poseidonHashPair(rateCommitment, defaultLeaf);
for (let i = 1; i < treeDepth; i++) {
computedRoot = rlnWasm.Hasher.poseidonHashPair(
computedRoot,
defaultHashes[i - 1]
);
}
} catch (error) {
console.error("Poseidon hash error:", error);
return;
}
console.log(" - computed_root = " + computedRoot.debug());
console.log("\nHashing signal");
const signal = new Uint8Array([
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0,
]);
let x;
try {
x = rlnWasm.Hasher.hashToFieldLE(signal);
} catch (error) {
console.error("Hash signal error:", error);
return;
}
console.log(" - x = " + x.debug());
console.log("\nHashing epoch");
const epochStr = "test-epoch";
let epoch;
try {
epoch = rlnWasm.Hasher.hashToFieldLE(new TextEncoder().encode(epochStr));
} catch (error) {
console.error("Hash epoch error:", error);
return;
}
console.log(" - epoch = " + epoch.debug());
console.log("\nHashing RLN identifier");
const rlnIdStr = "test-rln-identifier";
let rlnIdentifier;
try {
rlnIdentifier = rlnWasm.Hasher.hashToFieldLE(
new TextEncoder().encode(rlnIdStr)
);
} catch (error) {
console.error("Hash RLN identifier error:", error);
return;
}
console.log(" - rln_identifier = " + rlnIdentifier.debug());
console.log("\nComputing Poseidon hash for external nullifier");
let externalNullifier;
try {
externalNullifier = rlnWasm.Hasher.poseidonHashPair(epoch, rlnIdentifier);
} catch (error) {
console.error("External nullifier hash error:", error);
return;
}
console.log(" - external_nullifier = " + externalNullifier.debug());
console.log("\nCreating message_id");
const messageId = rlnWasm.WasmFr.fromUint(0);
console.log(" - message_id = " + messageId.debug());
console.log("\nCreating RLN Witness");
const witness = new rlnWasm.WasmRLNWitnessInput(
identitySecret,
userMessageLimit,
messageId,
pathElements,
identityPathIndex,
x,
externalNullifier
);
console.log("RLN Witness created successfully");
console.log(
"\nWasmRLNWitnessInput serialization: WasmRLNWitnessInput <-> bytes"
);
let serWitness;
try {
serWitness = witness.toBytesLE();
} catch (error) {
console.error("Witness serialization error:", error);
return;
}
console.log(
" - serialized witness = [" + debugUint8Array(serWitness) + " ]"
);
let deserWitness;
try {
deserWitness = rlnWasm.WasmRLNWitnessInput.fromBytesLE(serWitness);
} catch (error) {
console.error("Witness deserialization error:", error);
return;
}
console.log(" - witness deserialized successfully");
console.log("\nCalculating witness");
let witnessJson;
try {
witnessJson = witness.toBigIntJson();
} catch (error) {
console.error("Witness to BigInt JSON error:", error);
return;
}
const calculatedWitness = await calculateWitness(
circomPath,
witnessJson,
witnessCalculatorFile
);
console.log("Witness calculated successfully");
console.log("\nGenerating RLN Proof");
let rln_proof;
try {
rln_proof = rlnInstance.generateRLNProofWithWitness(
calculatedWitness,
witness
);
} catch (error) {
console.error("Proof generation error:", error);
return;
}
console.log("Proof generated successfully");
console.log("\nGetting proof values");
const proofValues = rln_proof.getValues();
console.log(" - y = " + proofValues.y.debug());
console.log(" - nullifier = " + proofValues.nullifier.debug());
console.log(" - root = " + proofValues.root.debug());
console.log(" - x = " + proofValues.x.debug());
console.log(
" - external_nullifier = " + proofValues.externalNullifier.debug()
);
console.log("\nRLNProof serialization: RLNProof <-> bytes");
let serProof;
try {
serProof = rln_proof.toBytesLE();
} catch (error) {
console.error("Proof serialization error:", error);
return;
}
console.log(" - serialized proof = [" + debugUint8Array(serProof) + " ]");
let deserProof;
try {
deserProof = rlnWasm.WasmRLNProof.fromBytesLE(serProof);
} catch (error) {
console.error("Proof deserialization error:", error);
return;
}
console.log(" - proof deserialized successfully");
console.log("\nRLNProofValues serialization: RLNProofValues <-> bytes");
const serProofValues = proofValues.toBytesLE();
console.log(
" - serialized proof_values = [" + debugUint8Array(serProofValues) + " ]"
);
let deserProofValues2;
try {
deserProofValues2 = rlnWasm.WasmRLNProofValues.fromBytesLE(serProofValues);
} catch (error) {
console.error("Proof values deserialization error:", error);
return;
}
console.log(" - proof_values deserialized successfully");
console.log(
" - deserialized external_nullifier = " +
deserProofValues2.externalNullifier.debug()
);
console.log("\nVerifying Proof");
const roots = new rlnWasm.VecWasmFr();
roots.push(computedRoot);
let isValid;
try {
isValid = rlnInstance.verifyWithRoots(rln_proof, roots, x);
} catch (error) {
console.error("Proof verification error:", error);
return;
}
if (isValid) {
console.log("Proof verified successfully");
} else {
console.log("Proof verification failed");
return;
}
console.log(
"\nSimulating double-signaling attack (same epoch, different message)"
);
console.log("\nHashing second signal");
const signal2 = new Uint8Array([
11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
]);
let x2;
try {
x2 = rlnWasm.Hasher.hashToFieldLE(signal2);
} catch (error) {
console.error("Hash second signal error:", error);
return;
}
console.log(" - x2 = " + x2.debug());
console.log("\nCreating second message with the same id");
const messageId2 = rlnWasm.WasmFr.fromUint(0);
console.log(" - message_id2 = " + messageId2.debug());
console.log("\nCreating second RLN Witness");
const witness2 = new rlnWasm.WasmRLNWitnessInput(
identitySecret,
userMessageLimit,
messageId2,
pathElements,
identityPathIndex,
x2,
externalNullifier
);
console.log("Second RLN Witness created successfully");
console.log("\nCalculating second witness");
let witnessJson2;
try {
witnessJson2 = witness2.toBigIntJson();
} catch (error) {
console.error("Second witness to BigInt JSON error:", error);
return;
}
const calculatedWitness2 = await calculateWitness(
circomPath,
witnessJson2,
witnessCalculatorFile
);
console.log("Second witness calculated successfully");
console.log("\nGenerating second RLN Proof");
let rln_proof2;
try {
rln_proof2 = rlnInstance.generateRLNProofWithWitness(
calculatedWitness2,
witness2
);
} catch (error) {
console.error("Second proof generation error:", error);
return;
}
console.log("Second proof generated successfully");
console.log("\nVerifying second proof");
let isValid2;
try {
isValid2 = rlnInstance.verifyWithRoots(rln_proof2, roots, x2);
} catch (error) {
console.error("Proof verification error:", error);
return;
}
if (isValid2) {
console.log("Second proof verified successfully");
console.log("\nRecovering identity secret");
const proofValues1 = rln_proof.getValues();
const proofValues2 = rln_proof2.getValues();
let recoveredSecret;
try {
recoveredSecret = rlnWasm.WasmRLNProofValues.recoverIdSecret(
proofValues1,
proofValues2
);
} catch (error) {
console.error("Identity recovery error:", error);
return;
}
console.log(" - recovered_secret = " + recoveredSecret.debug());
console.log(" - original_secret = " + identitySecret.debug());
console.log("Slashing successful: Identity is recovered!");
} else {
console.log("Second proof verification failed");
}
}
main().catch(console.error);

View File

@@ -0,0 +1,13 @@
{
"name": "rln-wasm-node-example",
"version": "1.0.0",
"description": "Node.js example for RLN WASM",
"type": "module",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"dependencies": {
"@waku/zerokit-rln-wasm": "file:../../pkg"
}
}

View File

@@ -1,4 +1,8 @@
module.exports = async function builder(code, options) {
// File generated with https://github.com/iden3/circom
// following the instructions from:
// https://github.com/vacp2p/zerokit/tree/master/rln#advanced-custom-circuit-compilation
export async function builder(code, options) {
options = options || {};
let wasmModule;
@@ -101,7 +105,7 @@ module.exports = async function builder(code, options) {
// Then append the value to the message we are creating
msgStr += fromArray32(arr).toString();
}
};
}
class WitnessCalculator {
constructor(instance, sanityCheck) {

View File

@@ -1,294 +1,16 @@
#![cfg(target_arch = "wasm32")]
use js_sys::{BigInt as JsBigInt, Object, Uint8Array};
use num_bigint::BigInt;
use rln::public::{hash, poseidon_hash, RLN};
use std::vec::Vec;
use wasm_bindgen::prelude::*;
pub mod wasm_rln;
pub mod wasm_utils;
#[cfg(all(feature = "parallel", not(feature = "utils")))]
pub use wasm_bindgen_rayon::init_thread_pool;
#[cfg(not(feature = "utils"))]
pub use wasm_rln::{WasmRLN, WasmRLNProof, WasmRLNProofValues, WasmRLNWitnessInput};
pub use wasm_utils::{ExtendedIdentity, Hasher, Identity, VecWasmFr, WasmFr};
#[cfg(feature = "panic_hook")]
#[wasm_bindgen(js_name = initPanicHook)]
pub fn init_panic_hook() {
console_error_panic_hook::set_once();
}
#[wasm_bindgen(js_name = RLN)]
pub struct RLNWrapper {
// The purpose of this wrapper is to hold a RLN instance with the 'static lifetime
// because wasm_bindgen does not allow returning elements with lifetimes
instance: RLN,
}
// Macro to call methods with arbitrary amount of arguments,
// which have the last argument is output buffer pointer
// First argument to the macro is context,
// second is the actual method on `RLN`
// third is the aforementioned output buffer argument
// rest are all other arguments to the method
macro_rules! call_with_output_and_error_msg {
// this variant is needed for the case when
// there are zero other arguments
($instance:expr, $method:ident, $error_msg:expr) => {
{
let mut output_data: Vec<u8> = Vec::new();
let new_instance = $instance.process();
if let Err(err) = new_instance.instance.$method(&mut output_data) {
std::mem::forget(output_data);
Err(format!("Msg: {:#?}, Error: {:#?}", $error_msg, err))
} else {
let result = Uint8Array::from(&output_data[..]);
std::mem::forget(output_data);
Ok(result)
}
}
};
($instance:expr, $method:ident, $error_msg:expr, $( $arg:expr ),* ) => {
{
let mut output_data: Vec<u8> = Vec::new();
let new_instance = $instance.process();
if let Err(err) = new_instance.instance.$method($($arg.process()),*, &mut output_data) {
std::mem::forget(output_data);
Err(format!("Msg: {:#?}, Error: {:#?}", $error_msg, err))
} else {
let result = Uint8Array::from(&output_data[..]);
std::mem::forget(output_data);
Ok(result)
}
}
};
}
macro_rules! call {
($instance:expr, $method:ident $(, $arg:expr)*) => {
{
let new_instance: &mut RLNWrapper = $instance.process();
new_instance.instance.$method($($arg.process()),*)
}
}
}
macro_rules! call_bool_method_with_error_msg {
($instance:expr, $method:ident, $error_msg:expr $(, $arg:expr)*) => {
{
let new_instance: &RLNWrapper = $instance.process();
new_instance.instance.$method($($arg.process()),*).map_err(|err| format!("Msg: {:#?}, Error: {:#?}", $error_msg, err))
}
}
}
// Macro to execute a function with arbitrary amount of arguments,
// First argument is the function to execute
// Rest are all other arguments to the method
macro_rules! fn_call_with_output_and_error_msg {
// this variant is needed for the case when
// there are zero other arguments
($func:ident, $error_msg:expr) => {
{
let mut output_data: Vec<u8> = Vec::new();
if let Err(err) = $func(&mut output_data) {
std::mem::forget(output_data);
Err(format!("Msg: {:#?}, Error: {:#?}", $error_msg, err))
} else {
let result = Uint8Array::from(&output_data[..]);
std::mem::forget(output_data);
Ok(result)
}
}
};
($func:ident, $error_msg:expr, $( $arg:expr ),* ) => {
{
let mut output_data: Vec<u8> = Vec::new();
if let Err(err) = $func($($arg.process()),*, &mut output_data) {
std::mem::forget(output_data);
Err(format!("Msg: {:#?}, Error: {:#?}", $error_msg, err))
} else {
let result = Uint8Array::from(&output_data[..]);
std::mem::forget(output_data);
Ok(result)
}
}
};
}
trait ProcessArg {
type ReturnType;
fn process(self) -> Self::ReturnType;
}
impl ProcessArg for usize {
type ReturnType = usize;
fn process(self) -> Self::ReturnType {
self
}
}
impl<T> ProcessArg for Vec<T> {
type ReturnType = Vec<T>;
fn process(self) -> Self::ReturnType {
self
}
}
impl ProcessArg for *const RLN {
type ReturnType = &'static RLN;
fn process(self) -> Self::ReturnType {
unsafe { &*self }
}
}
impl ProcessArg for *const RLNWrapper {
type ReturnType = &'static RLNWrapper;
fn process(self) -> Self::ReturnType {
unsafe { &*self }
}
}
impl ProcessArg for *mut RLNWrapper {
type ReturnType = &'static mut RLNWrapper;
fn process(self) -> Self::ReturnType {
unsafe { &mut *self }
}
}
impl<'a> ProcessArg for &'a [u8] {
type ReturnType = &'a [u8];
fn process(self) -> Self::ReturnType {
self
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = newRLN)]
pub fn wasm_new(zkey: Uint8Array) -> Result<*mut RLNWrapper, String> {
let instance = RLN::new_with_params(zkey.to_vec()).map_err(|err| format!("{:#?}", err))?;
let wrapper = RLNWrapper { instance };
Ok(Box::into_raw(Box::new(wrapper)))
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = rlnWitnessToJson)]
pub fn wasm_rln_witness_to_json(
ctx: *mut RLNWrapper,
serialized_witness: Uint8Array,
) -> Result<Object, String> {
let inputs = call!(
ctx,
get_rln_witness_bigint_json,
&serialized_witness.to_vec()[..]
)
.map_err(|err| err.to_string())?;
let js_value = serde_wasm_bindgen::to_value(&inputs).map_err(|err| err.to_string())?;
Object::from_entries(&js_value).map_err(|err| format!("{:#?}", err))
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = generateRLNProofWithWitness)]
pub fn wasm_generate_rln_proof_with_witness(
ctx: *mut RLNWrapper,
calculated_witness: Vec<JsBigInt>,
serialized_witness: Uint8Array,
) -> Result<Uint8Array, String> {
let mut witness_vec: Vec<BigInt> = vec![];
for v in calculated_witness {
witness_vec.push(
v.to_string(10)
.map_err(|err| format!("{:#?}", err))?
.as_string()
.ok_or("not a string error")?
.parse::<BigInt>()
.map_err(|err| format!("{:#?}", err))?,
);
}
call_with_output_and_error_msg!(
ctx,
generate_rln_proof_with_witness,
"could not generate proof",
witness_vec,
serialized_witness.to_vec()
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = generateMembershipKey)]
pub fn wasm_key_gen(ctx: *const RLNWrapper) -> Result<Uint8Array, String> {
call_with_output_and_error_msg!(ctx, key_gen, "could not generate membership keys")
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = generateExtendedMembershipKey)]
pub fn wasm_extended_key_gen(ctx: *const RLNWrapper) -> Result<Uint8Array, String> {
call_with_output_and_error_msg!(ctx, extended_key_gen, "could not generate membership keys")
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = generateSeededMembershipKey)]
pub fn wasm_seeded_key_gen(ctx: *const RLNWrapper, seed: Uint8Array) -> Result<Uint8Array, String> {
call_with_output_and_error_msg!(
ctx,
seeded_key_gen,
"could not generate membership key",
&seed.to_vec()[..]
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = generateSeededExtendedMembershipKey)]
pub fn wasm_seeded_extended_key_gen(
ctx: *const RLNWrapper,
seed: Uint8Array,
) -> Result<Uint8Array, String> {
call_with_output_and_error_msg!(
ctx,
seeded_extended_key_gen,
"could not generate membership key",
&seed.to_vec()[..]
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = recoverIDSecret)]
pub fn wasm_recover_id_secret(
ctx: *const RLNWrapper,
input_proof_data_1: Uint8Array,
input_proof_data_2: Uint8Array,
) -> Result<Uint8Array, String> {
call_with_output_and_error_msg!(
ctx,
recover_id_secret,
"could not recover id secret",
&input_proof_data_1.to_vec()[..],
&input_proof_data_2.to_vec()[..]
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[wasm_bindgen(js_name = verifyWithRoots)]
pub fn wasm_verify_with_roots(
ctx: *const RLNWrapper,
proof: Uint8Array,
roots: Uint8Array,
) -> Result<bool, String> {
call_bool_method_with_error_msg!(
ctx,
verify_with_roots,
"error while verifying proof with roots".to_string(),
&proof.to_vec()[..],
&roots.to_vec()[..]
)
}
#[wasm_bindgen(js_name = hash)]
pub fn wasm_hash(input: Uint8Array) -> Result<Uint8Array, String> {
fn_call_with_output_and_error_msg!(hash, "could not generate hash", &input.to_vec()[..])
}
#[wasm_bindgen(js_name = poseidonHash)]
pub fn wasm_poseidon_hash(input: Uint8Array) -> Result<Uint8Array, String> {
fn_call_with_output_and_error_msg!(
poseidon_hash,
"could not generate poseidon hash",
&input.to_vec()[..]
)
}

rln-wasm/src/utils.js

@@ -1,26 +0,0 @@
const fs = require("fs");
// Utils functions for loading circom witness calculator and reading files from test
module.exports = {
read_file: function (path) {
return fs.readFileSync(path);
},
calculateWitness: async function (circom_path, inputs) {
const wc = require("../resources/witness_calculator.js");
const wasmFile = fs.readFileSync(circom_path);
const wasmFileBuffer = wasmFile.slice(
wasmFile.byteOffset,
wasmFile.byteOffset + wasmFile.byteLength
);
const witnessCalculator = await wc(wasmFileBuffer);
const calculatedWitness = await witnessCalculator.calculateWitness(
inputs,
false
);
return JSON.stringify(calculatedWitness, (key, value) =>
typeof value === "bigint" ? value.toString() : value
);
},
};

rln-wasm/src/wasm_rln.rs Normal file

@@ -0,0 +1,253 @@
#![cfg(target_arch = "wasm32")]
#![cfg(not(feature = "utils"))]
use js_sys::{BigInt as JsBigInt, Object, Uint8Array};
use num_bigint::BigInt;
use rln::prelude::*;
use serde::Serialize;
use wasm_bindgen::prelude::*;
use crate::wasm_utils::{VecWasmFr, WasmFr};
#[wasm_bindgen]
pub struct WasmRLN(RLN);
#[wasm_bindgen]
impl WasmRLN {
#[wasm_bindgen(constructor)]
pub fn new(zkey_data: &Uint8Array) -> Result<WasmRLN, String> {
let rln = RLN::new_with_params(zkey_data.to_vec()).map_err(|err| err.to_string())?;
Ok(WasmRLN(rln))
}
#[wasm_bindgen(js_name = generateRLNProofWithWitness)]
pub fn generate_rln_proof_with_witness(
&self,
calculated_witness: Vec<JsBigInt>,
witness: &WasmRLNWitnessInput,
) -> Result<WasmRLNProof, String> {
let calculated_witness_bigint: Vec<BigInt> = calculated_witness
.iter()
.map(|js_bigint| {
js_bigint
.to_string(10)
.ok()
.and_then(|js_str| js_str.as_string())
.ok_or_else(|| "Failed to convert JsBigInt to string".to_string())
.and_then(|str_val| {
str_val
.parse::<BigInt>()
.map_err(|err| format!("Failed to parse BigInt: {}", err))
})
})
.collect::<Result<Vec<_>, _>>()?;
let (proof, proof_values) = self
.0
.generate_rln_proof_with_witness(calculated_witness_bigint, &witness.0)
.map_err(|err| err.to_string())?;
let rln_proof = RLNProof {
proof_values,
proof,
};
Ok(WasmRLNProof(rln_proof))
}
#[wasm_bindgen(js_name = verifyWithRoots)]
pub fn verify_with_roots(
&self,
rln_proof: &WasmRLNProof,
roots: &VecWasmFr,
x: &WasmFr,
) -> Result<bool, String> {
let roots_fr: Vec<Fr> = (0..roots.length())
.filter_map(|i| roots.get(i))
.map(|root| *root)
.collect();
self.0
.verify_with_roots(&rln_proof.0.proof, &rln_proof.0.proof_values, x, &roots_fr)
.map_err(|err| err.to_string())
}
}
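Taken together, a verification call site might look like the sketch below; `tree_root` is a placeholder for a root obtained from the caller's Merkle tree, and `x` is the signal hashed to a field element:

```rust
// Sketch: verify a proof against a set of acceptable roots.
fn example_verify(
    rln: &WasmRLN,
    proof: &WasmRLNProof,
    tree_root: Fr,
    x: &WasmFr,
) -> Result<bool, String> {
    let mut roots = VecWasmFr::new();
    roots.push(&WasmFr::from(tree_root));
    rln.verify_with_roots(proof, &roots, x)
}
```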
#[wasm_bindgen]
pub struct WasmRLNProof(RLNProof);
#[wasm_bindgen]
impl WasmRLNProof {
#[wasm_bindgen(js_name = getValues)]
pub fn get_values(&self) -> WasmRLNProofValues {
WasmRLNProofValues(self.0.proof_values)
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Result<Uint8Array, String> {
let bytes = rln_proof_to_bytes_le(&self.0).map_err(|err| err.to_string())?;
Ok(Uint8Array::from(&bytes[..]))
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Result<Uint8Array, String> {
let bytes = rln_proof_to_bytes_be(&self.0).map_err(|err| err.to_string())?;
Ok(Uint8Array::from(&bytes[..]))
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<WasmRLNProof, String> {
let bytes_vec = bytes.to_vec();
let (proof, _) = bytes_le_to_rln_proof(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNProof(proof))
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<WasmRLNProof, String> {
let bytes_vec = bytes.to_vec();
let (proof, _) = bytes_be_to_rln_proof(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNProof(proof))
}
}
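Shipping a proof across a boundary is then a byte round-trip, assuming the `fromBytes*` constructors invert the corresponding `toBytes*` encoders as their names suggest. A sketch, with `proof` coming from `generateRLNProofWithWitness`:

```rust
// Sketch: encode a proof for transport and restore it on the other side.
fn example_proof_roundtrip(proof: &WasmRLNProof) -> Result<WasmRLNProofValues, String> {
    let bytes = proof.to_bytes_le()?;
    let restored = WasmRLNProof::from_bytes_le(&bytes)?;
    Ok(restored.get_values())
}
```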
#[wasm_bindgen]
pub struct WasmRLNProofValues(RLNProofValues);
#[wasm_bindgen]
impl WasmRLNProofValues {
#[wasm_bindgen(getter)]
pub fn y(&self) -> WasmFr {
WasmFr::from(self.0.y)
}
#[wasm_bindgen(getter)]
pub fn nullifier(&self) -> WasmFr {
WasmFr::from(self.0.nullifier)
}
#[wasm_bindgen(getter)]
pub fn root(&self) -> WasmFr {
WasmFr::from(self.0.root)
}
#[wasm_bindgen(getter)]
pub fn x(&self) -> WasmFr {
WasmFr::from(self.0.x)
}
#[wasm_bindgen(getter, js_name = externalNullifier)]
pub fn external_nullifier(&self) -> WasmFr {
WasmFr::from(self.0.external_nullifier)
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Uint8Array {
Uint8Array::from(&rln_proof_values_to_bytes_le(&self.0)[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Uint8Array {
Uint8Array::from(&rln_proof_values_to_bytes_be(&self.0)[..])
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<WasmRLNProofValues, String> {
let bytes_vec = bytes.to_vec();
let (proof_values, _) =
bytes_le_to_rln_proof_values(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNProofValues(proof_values))
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<WasmRLNProofValues, String> {
let bytes_vec = bytes.to_vec();
let (proof_values, _) =
bytes_be_to_rln_proof_values(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNProofValues(proof_values))
}
#[wasm_bindgen(js_name = recoverIdSecret)]
pub fn recover_id_secret(
proof_values_1: &WasmRLNProofValues,
proof_values_2: &WasmRLNProofValues,
) -> Result<WasmFr, String> {
let recovered_identity_secret = recover_id_secret(&proof_values_1.0, &proof_values_2.0)
.map_err(|err| err.to_string())?;
Ok(WasmFr::from(*recovered_identity_secret))
}
}
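`recoverIdSecret` is the slashing primitive: when a member publishes two proofs sharing an external nullifier (that is, exceeds its message limit within an epoch), the identity secret can be recovered from the two sets of proof values. A sketch, assuming `values_1` and `values_2` come from two such proofs:

```rust
// Sketch: recover the identity secret from two rate-limit-violating proofs.
fn example_slash(
    values_1: &WasmRLNProofValues,
    values_2: &WasmRLNProofValues,
) -> Result<WasmFr, String> {
    WasmRLNProofValues::recover_id_secret(values_1, values_2)
}
```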
#[wasm_bindgen]
pub struct WasmRLNWitnessInput(RLNWitnessInput);
#[wasm_bindgen]
impl WasmRLNWitnessInput {
#[wasm_bindgen(constructor)]
pub fn new(
identity_secret: &WasmFr,
user_message_limit: &WasmFr,
message_id: &WasmFr,
path_elements: &VecWasmFr,
identity_path_index: &Uint8Array,
x: &WasmFr,
external_nullifier: &WasmFr,
) -> Result<WasmRLNWitnessInput, String> {
let mut identity_secret_fr = identity_secret.inner();
let path_elements: Vec<Fr> = path_elements.inner();
let identity_path_index: Vec<u8> = identity_path_index.to_vec();
let witness = RLNWitnessInput::new(
IdSecret::from(&mut identity_secret_fr),
user_message_limit.inner(),
message_id.inner(),
path_elements,
identity_path_index,
x.inner(),
external_nullifier.inner(),
)
.map_err(|err| err.to_string())?;
Ok(WasmRLNWitnessInput(witness))
}
#[wasm_bindgen(js_name = toBigIntJson)]
pub fn to_bigint_json(&self) -> Result<Object, String> {
let bigint_json = rln_witness_to_bigint_json(&self.0).map_err(|err| err.to_string())?;
let serializer = serde_wasm_bindgen::Serializer::json_compatible();
let js_value = bigint_json
.serialize(&serializer)
.map_err(|err| err.to_string())?;
js_value
.dyn_into::<Object>()
.map_err(|err| format!("{:#?}", err))
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Result<Uint8Array, String> {
let bytes = rln_witness_to_bytes_le(&self.0).map_err(|err| err.to_string())?;
Ok(Uint8Array::from(&bytes[..]))
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Result<Uint8Array, String> {
let bytes = rln_witness_to_bytes_be(&self.0).map_err(|err| err.to_string())?;
Ok(Uint8Array::from(&bytes[..]))
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<WasmRLNWitnessInput, String> {
let bytes_vec = bytes.to_vec();
let (witness, _) = bytes_le_to_rln_witness(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNWitnessInput(witness))
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<WasmRLNWitnessInput, String> {
let bytes_vec = bytes.to_vec();
let (witness, _) = bytes_be_to_rln_witness(&bytes_vec).map_err(|err| err.to_string())?;
Ok(WasmRLNWitnessInput(witness))
}
}

rln-wasm/src/wasm_utils.rs Normal file

@@ -0,0 +1,420 @@
#![cfg(target_arch = "wasm32")]
use std::ops::Deref;
use js_sys::Uint8Array;
use rln::prelude::*;
use wasm_bindgen::prelude::*;
// WasmFr
#[wasm_bindgen]
#[derive(Debug, Clone, Copy, PartialEq, Default)]
pub struct WasmFr(Fr);
impl From<Fr> for WasmFr {
fn from(fr: Fr) -> Self {
Self(fr)
}
}
impl Deref for WasmFr {
type Target = Fr;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[wasm_bindgen]
impl WasmFr {
#[wasm_bindgen(js_name = zero)]
pub fn zero() -> Self {
Self(Fr::from(0u32))
}
#[wasm_bindgen(js_name = one)]
pub fn one() -> Self {
Self(Fr::from(1u32))
}
#[wasm_bindgen(js_name = fromUint)]
pub fn from_uint(value: u32) -> Self {
Self(Fr::from(value))
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<Self, String> {
let bytes_vec = bytes.to_vec();
let (fr, _) = bytes_le_to_fr(&bytes_vec).map_err(|err| err.to_string())?;
Ok(Self(fr))
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<Self, String> {
let bytes_vec = bytes.to_vec();
let (fr, _) = bytes_be_to_fr(&bytes_vec).map_err(|err| err.to_string())?;
Ok(Self(fr))
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Uint8Array {
let bytes = fr_to_bytes_le(&self.0);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Uint8Array {
let bytes = fr_to_bytes_be(&self.0);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = debug)]
pub fn debug(&self) -> String {
format!("{:?}", self.0)
}
}
impl WasmFr {
pub fn inner(&self) -> Fr {
self.0
}
}
// VecWasmFr
#[wasm_bindgen]
#[derive(Debug, Clone, PartialEq, Default)]
pub struct VecWasmFr(Vec<Fr>);
#[wasm_bindgen]
impl VecWasmFr {
#[wasm_bindgen(constructor)]
pub fn new() -> Self {
Self(Vec::new())
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<VecWasmFr, String> {
let bytes_vec = bytes.to_vec();
bytes_le_to_vec_fr(&bytes_vec)
.map(|(vec_fr, _)| VecWasmFr(vec_fr))
.map_err(|err| err.to_string())
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<VecWasmFr, String> {
let bytes_vec = bytes.to_vec();
bytes_be_to_vec_fr(&bytes_vec)
.map(|(vec_fr, _)| VecWasmFr(vec_fr))
.map_err(|err| err.to_string())
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Uint8Array {
let bytes = vec_fr_to_bytes_le(&self.0);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Uint8Array {
let bytes = vec_fr_to_bytes_be(&self.0);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = get)]
pub fn get(&self, index: usize) -> Option<WasmFr> {
self.0.get(index).map(|&fr| WasmFr(fr))
}
#[wasm_bindgen(js_name = length)]
pub fn length(&self) -> usize {
self.0.len()
}
#[wasm_bindgen(js_name = push)]
pub fn push(&mut self, element: &WasmFr) {
self.0.push(element.0);
}
#[wasm_bindgen(js_name = debug)]
pub fn debug(&self) -> String {
format!("{:?}", self.0)
}
}
impl VecWasmFr {
pub fn inner(&self) -> Vec<Fr> {
self.0.clone()
}
}
// Uint8Array
#[wasm_bindgen]
pub struct Uint8ArrayUtils;
#[wasm_bindgen]
impl Uint8ArrayUtils {
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(input: &Uint8Array) -> Uint8Array {
let input_vec = input.to_vec();
let bytes = vec_u8_to_bytes_le(&input_vec);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(input: &Uint8Array) -> Uint8Array {
let input_vec = input.to_vec();
let bytes = vec_u8_to_bytes_be(&input_vec);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<Uint8Array, String> {
let bytes_vec = bytes.to_vec();
bytes_le_to_vec_u8(&bytes_vec)
.map(|(vec_u8, _)| Uint8Array::from(&vec_u8[..]))
.map_err(|err| err.to_string())
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<Uint8Array, String> {
let bytes_vec = bytes.to_vec();
bytes_be_to_vec_u8(&bytes_vec)
.map(|(vec_u8, _)| Uint8Array::from(&vec_u8[..]))
.map_err(|err| err.to_string())
}
}
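A round-trip sketch, assuming the `fromBytes*` helpers invert the corresponding `toBytes*` encoders as their names suggest:

```rust
// Sketch: round-trip raw bytes through the little-endian encoding.
fn example_bytes_roundtrip() -> Result<(), String> {
    let raw = Uint8Array::from(&[1u8, 2, 3][..]);
    let encoded = Uint8ArrayUtils::to_bytes_le(&raw);
    let decoded = Uint8ArrayUtils::from_bytes_le(&encoded)?;
    assert_eq!(decoded.to_vec(), raw.to_vec());
    Ok(())
}
```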
// Utility APIs
#[wasm_bindgen]
pub struct Hasher;
#[wasm_bindgen]
impl Hasher {
#[wasm_bindgen(js_name = hashToFieldLE)]
pub fn hash_to_field_le(input: &Uint8Array) -> Result<WasmFr, String> {
hash_to_field_le(&input.to_vec())
.map(WasmFr)
.map_err(|err| err.to_string())
}
#[wasm_bindgen(js_name = hashToFieldBE)]
pub fn hash_to_field_be(input: &Uint8Array) -> Result<WasmFr, String> {
hash_to_field_be(&input.to_vec())
.map(WasmFr)
.map_err(|err| err.to_string())
}
#[wasm_bindgen(js_name = poseidonHashPair)]
pub fn poseidon_hash_pair(a: &WasmFr, b: &WasmFr) -> Result<WasmFr, String> {
poseidon_hash(&[a.0, b.0])
.map(WasmFr)
.map_err(|err| err.to_string())
}
}
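These hashers compose into the protocol's external nullifier derivation: the epoch and an application identifier are each hashed to a field element and then Poseidon-paired. A sketch with placeholder byte inputs:

```rust
// Sketch: derive an external nullifier from an epoch and an app identifier.
fn example_external_nullifier() -> Result<WasmFr, String> {
    let epoch = Hasher::hash_to_field_le(&Uint8Array::from(b"2025-12-18" as &[u8]))?;
    let app_id = Hasher::hash_to_field_le(&Uint8Array::from(b"my-app" as &[u8]))?;
    Hasher::poseidon_hash_pair(&epoch, &app_id)
}
```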
#[wasm_bindgen]
pub struct Identity {
identity_secret: Fr,
id_commitment: Fr,
}
#[wasm_bindgen]
impl Identity {
#[wasm_bindgen(js_name = generate)]
pub fn generate() -> Result<Identity, String> {
let (identity_secret, id_commitment) = keygen().map_err(|err| err.to_string())?;
Ok(Identity {
identity_secret: *identity_secret,
id_commitment,
})
}
#[wasm_bindgen(js_name = generateSeeded)]
pub fn generate_seeded(seed: &Uint8Array) -> Result<Identity, String> {
let seed_vec = seed.to_vec();
let (identity_secret, id_commitment) =
seeded_keygen(&seed_vec).map_err(|err| err.to_string())?;
Ok(Identity {
identity_secret,
id_commitment,
})
}
#[wasm_bindgen(js_name = getSecretHash)]
pub fn get_secret_hash(&self) -> WasmFr {
WasmFr(self.identity_secret)
}
#[wasm_bindgen(js_name = getCommitment)]
pub fn get_commitment(&self) -> WasmFr {
WasmFr(self.id_commitment)
}
#[wasm_bindgen(js_name = toArray)]
pub fn to_array(&self) -> VecWasmFr {
VecWasmFr(vec![self.identity_secret, self.id_commitment])
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Uint8Array {
let vec_fr = vec![self.identity_secret, self.id_commitment];
let bytes = vec_fr_to_bytes_le(&vec_fr);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Uint8Array {
let vec_fr = vec![self.identity_secret, self.id_commitment];
let bytes = vec_fr_to_bytes_be(&vec_fr);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<Identity, String> {
let bytes_vec = bytes.to_vec();
let (vec_fr, _) = bytes_le_to_vec_fr(&bytes_vec).map_err(|err| err.to_string())?;
if vec_fr.len() != 2 {
return Err(format!("Expected 2 elements, got {}", vec_fr.len()));
}
Ok(Identity {
identity_secret: vec_fr[0],
id_commitment: vec_fr[1],
})
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<Identity, String> {
let bytes_vec = bytes.to_vec();
let (vec_fr, _) = bytes_be_to_vec_fr(&bytes_vec).map_err(|err| err.to_string())?;
if vec_fr.len() != 2 {
return Err(format!("Expected 2 elements, got {}", vec_fr.len()));
}
Ok(Identity {
identity_secret: vec_fr[0],
id_commitment: vec_fr[1],
})
}
}
#[wasm_bindgen]
pub struct ExtendedIdentity {
identity_trapdoor: Fr,
identity_nullifier: Fr,
identity_secret: Fr,
id_commitment: Fr,
}
#[wasm_bindgen]
impl ExtendedIdentity {
#[wasm_bindgen(js_name = generate)]
pub fn generate() -> Result<ExtendedIdentity, String> {
let (identity_trapdoor, identity_nullifier, identity_secret, id_commitment) =
extended_keygen().map_err(|err| err.to_string())?;
Ok(ExtendedIdentity {
identity_trapdoor,
identity_nullifier,
identity_secret,
id_commitment,
})
}
#[wasm_bindgen(js_name = generateSeeded)]
pub fn generate_seeded(seed: &Uint8Array) -> Result<ExtendedIdentity, String> {
let seed_vec = seed.to_vec();
let (identity_trapdoor, identity_nullifier, identity_secret, id_commitment) =
extended_seeded_keygen(&seed_vec).map_err(|err| err.to_string())?;
Ok(ExtendedIdentity {
identity_trapdoor,
identity_nullifier,
identity_secret,
id_commitment,
})
}
#[wasm_bindgen(js_name = getTrapdoor)]
pub fn get_trapdoor(&self) -> WasmFr {
WasmFr(self.identity_trapdoor)
}
#[wasm_bindgen(js_name = getNullifier)]
pub fn get_nullifier(&self) -> WasmFr {
WasmFr(self.identity_nullifier)
}
#[wasm_bindgen(js_name = getSecretHash)]
pub fn get_secret_hash(&self) -> WasmFr {
WasmFr(self.identity_secret)
}
#[wasm_bindgen(js_name = getCommitment)]
pub fn get_commitment(&self) -> WasmFr {
WasmFr(self.id_commitment)
}
#[wasm_bindgen(js_name = toArray)]
pub fn to_array(&self) -> VecWasmFr {
VecWasmFr(vec![
self.identity_trapdoor,
self.identity_nullifier,
self.identity_secret,
self.id_commitment,
])
}
#[wasm_bindgen(js_name = toBytesLE)]
pub fn to_bytes_le(&self) -> Uint8Array {
let vec_fr = vec![
self.identity_trapdoor,
self.identity_nullifier,
self.identity_secret,
self.id_commitment,
];
let bytes = vec_fr_to_bytes_le(&vec_fr);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = toBytesBE)]
pub fn to_bytes_be(&self) -> Uint8Array {
let vec_fr = vec![
self.identity_trapdoor,
self.identity_nullifier,
self.identity_secret,
self.id_commitment,
];
let bytes = vec_fr_to_bytes_be(&vec_fr);
Uint8Array::from(&bytes[..])
}
#[wasm_bindgen(js_name = fromBytesLE)]
pub fn from_bytes_le(bytes: &Uint8Array) -> Result<ExtendedIdentity, String> {
let bytes_vec = bytes.to_vec();
let (vec_fr, _) = bytes_le_to_vec_fr(&bytes_vec).map_err(|err| err.to_string())?;
if vec_fr.len() != 4 {
return Err(format!("Expected 4 elements, got {}", vec_fr.len()));
}
Ok(ExtendedIdentity {
identity_trapdoor: vec_fr[0],
identity_nullifier: vec_fr[1],
identity_secret: vec_fr[2],
id_commitment: vec_fr[3],
})
}
#[wasm_bindgen(js_name = fromBytesBE)]
pub fn from_bytes_be(bytes: &Uint8Array) -> Result<ExtendedIdentity, String> {
let bytes_vec = bytes.to_vec();
let (vec_fr, _) = bytes_be_to_vec_fr(&bytes_vec).map_err(|err| err.to_string())?;
if vec_fr.len() != 4 {
return Err(format!("Expected 4 elements, got {}", vec_fr.len()));
}
Ok(ExtendedIdentity {
identity_trapdoor: vec_fr[0],
identity_nullifier: vec_fr[1],
identity_secret: vec_fr[2],
id_commitment: vec_fr[3],
})
}
}

rln-wasm/tests/browser.rs Normal file

@@ -0,0 +1,247 @@
#![cfg(target_arch = "wasm32")]
#![cfg(not(feature = "utils"))]
#[cfg(test)]
mod test {
use js_sys::{BigInt as JsBigInt, Date, Object, Uint8Array};
use rln::prelude::*;
use rln_wasm::{
Hasher, Identity, VecWasmFr, WasmFr, WasmRLN, WasmRLNProof, WasmRLNWitnessInput,
};
use wasm_bindgen::{prelude::wasm_bindgen, JsValue};
use wasm_bindgen_test::{console_log, wasm_bindgen_test, wasm_bindgen_test_configure};
use zerokit_utils::merkle_tree::{
OptimalMerkleProof, OptimalMerkleTree, ZerokitMerkleProof, ZerokitMerkleTree,
};
#[cfg(feature = "parallel")]
use {rln_wasm::init_thread_pool, wasm_bindgen_futures::JsFuture, web_sys::window};
#[wasm_bindgen(inline_js = r#"
export function isThreadpoolSupported() {
return typeof SharedArrayBuffer !== 'undefined' &&
typeof Atomics !== 'undefined' &&
typeof crossOriginIsolated !== 'undefined' &&
crossOriginIsolated;
}
export function initWitnessCalculator(jsCode) {
const processedCode = jsCode
.replace(/export\s+async\s+function\s+builder/, 'async function builder')
.replace(/export\s*\{\s*builder\s*\};?/g, '');
const moduleFunc = new Function(processedCode + '\nreturn { builder };');
const witnessCalculatorModule = moduleFunc();
window.witnessCalculatorBuilder = witnessCalculatorModule.builder;
if (typeof window.witnessCalculatorBuilder !== 'function') {
return false;
}
return true;
}
export function readFile(data) {
return new Uint8Array(data);
}
export async function calculateWitness(circom_data, inputs) {
const wasmBuffer = circom_data instanceof Uint8Array ? circom_data : new Uint8Array(circom_data);
const witnessCalculator = await window.witnessCalculatorBuilder(wasmBuffer);
const calculatedWitness = await witnessCalculator.calculateWitness(inputs, false);
return JSON.stringify(calculatedWitness, (key, value) =>
typeof value === "bigint" ? value.toString() : value
);
}
"#)]
extern "C" {
#[wasm_bindgen(catch)]
fn isThreadpoolSupported() -> Result<bool, JsValue>;
#[wasm_bindgen(catch)]
fn initWitnessCalculator(js: &str) -> Result<bool, JsValue>;
#[wasm_bindgen(catch)]
fn readFile(data: &[u8]) -> Result<Uint8Array, JsValue>;
#[wasm_bindgen(catch)]
async fn calculateWitness(circom_data: &[u8], inputs: Object) -> Result<JsValue, JsValue>;
}
const WITNESS_CALCULATOR_JS: &str = include_str!("../resources/witness_calculator.js");
const ARKZKEY_BYTES: &[u8] =
include_bytes!("../../rln/resources/tree_depth_20/rln_final.arkzkey");
const CIRCOM_BYTES: &[u8] = include_bytes!("../../rln/resources/tree_depth_20/rln.wasm");
wasm_bindgen_test_configure!(run_in_browser);
#[wasm_bindgen_test]
pub async fn rln_wasm_benchmark() {
// Check if thread pool is supported
#[cfg(feature = "parallel")]
if !isThreadpoolSupported().unwrap() {
panic!("Thread pool is NOT supported");
} else {
// Initialize thread pool
let cpu_count = window().unwrap().navigator().hardware_concurrency() as usize;
JsFuture::from(init_thread_pool(cpu_count)).await.unwrap();
}
// Initialize witness calculator
initWitnessCalculator(WITNESS_CALCULATOR_JS).unwrap();
let mut results = String::from("\nbenchmarks:\n");
let iterations = 10;
let zkey = readFile(ARKZKEY_BYTES).unwrap();
// Benchmark RLN instance creation
let start_rln_new = Date::now();
for _ in 0..iterations {
let _ = WasmRLN::new(&zkey).unwrap();
}
let rln_new_result = Date::now() - start_rln_new;
// Create RLN instance for other benchmarks
let rln_instance = WasmRLN::new(&zkey).unwrap();
let mut tree: OptimalMerkleTree<PoseidonHash> =
OptimalMerkleTree::default(DEFAULT_TREE_DEPTH).unwrap();
// Benchmark generate identity
let start_identity_gen = Date::now();
for _ in 0..iterations {
let _ = Identity::generate().unwrap();
}
let identity_gen_result = Date::now() - start_identity_gen;
// Generate identity for other benchmarks
let identity_pair = Identity::generate().unwrap();
let identity_secret = identity_pair.get_secret_hash();
let id_commitment = identity_pair.get_commitment();
let epoch = Hasher::hash_to_field_le(&Uint8Array::from(b"test-epoch" as &[u8])).unwrap();
let rln_identifier =
Hasher::hash_to_field_le(&Uint8Array::from(b"test-rln-identifier" as &[u8])).unwrap();
let external_nullifier = Hasher::poseidon_hash_pair(&epoch, &rln_identifier).unwrap();
let identity_index = tree.leaves_set();
let user_message_limit = WasmFr::from_uint(100);
let rate_commitment =
Hasher::poseidon_hash_pair(&id_commitment, &user_message_limit).unwrap();
tree.update_next(*rate_commitment).unwrap();
let message_id = WasmFr::from_uint(0);
let signal: [u8; 32] = [0; 32];
let x = Hasher::hash_to_field_le(&Uint8Array::from(&signal[..])).unwrap();
let merkle_proof: OptimalMerkleProof<PoseidonHash> = tree.proof(identity_index).unwrap();
let mut path_elements = VecWasmFr::new();
for path_element in merkle_proof.get_path_elements() {
path_elements.push(&WasmFr::from(path_element));
}
let path_index = Uint8Array::from(&merkle_proof.get_path_index()[..]);
let witness = WasmRLNWitnessInput::new(
&identity_secret,
&user_message_limit,
&message_id,
&path_elements,
&path_index,
&x,
&external_nullifier,
)
.unwrap();
let bigint_json = witness.to_bigint_json().unwrap();
// Benchmark witness calculation
let start_calculate_witness = Date::now();
for _ in 0..iterations {
let _ = calculateWitness(CIRCOM_BYTES, bigint_json.clone())
.await
.unwrap();
}
let calculate_witness_result = Date::now() - start_calculate_witness;
// Calculate witness for other benchmarks
let calculated_witness_str = calculateWitness(CIRCOM_BYTES, bigint_json.clone())
.await
.unwrap()
.as_string()
.unwrap();
let calculated_witness_vec_str: Vec<String> =
serde_json::from_str(&calculated_witness_str).unwrap();
let calculated_witness: Vec<JsBigInt> = calculated_witness_vec_str
.iter()
.map(|x| JsBigInt::new(&x.into()).unwrap())
.collect();
// Benchmark proof generation with witness
let start_generate_rln_proof_with_witness = Date::now();
for _ in 0..iterations {
let _ = rln_instance
.generate_rln_proof_with_witness(calculated_witness.clone(), &witness)
.unwrap();
}
let generate_rln_proof_with_witness_result =
Date::now() - start_generate_rln_proof_with_witness;
// Generate proof with witness for other benchmarks
let proof: WasmRLNProof = rln_instance
.generate_rln_proof_with_witness(calculated_witness, &witness)
.unwrap();
let root = WasmFr::from(tree.root());
let mut roots = VecWasmFr::new();
roots.push(&root);
// Benchmark proof verification with the root
let start_verify_with_roots = Date::now();
for _ in 0..iterations {
let _ = rln_instance.verify_with_roots(&proof, &roots, &x).unwrap();
}
let verify_with_roots_result = Date::now() - start_verify_with_roots;
// Verify proof with the root for other benchmarks
let is_proof_valid = rln_instance.verify_with_roots(&proof, &roots, &x).unwrap();
assert!(is_proof_valid, "verification failed");
// Format and display the benchmark results
let format_duration = |duration_ms: f64| -> String {
let avg_ms = duration_ms / (iterations as f64);
if avg_ms >= 1000.0 {
format!("{:.3} s", avg_ms / 1000.0)
} else {
format!("{:.3} ms", avg_ms)
}
};
results.push_str(&format!(
"RLN instance creation: {}\n",
format_duration(rln_new_result)
));
results.push_str(&format!(
"Identity generation: {}\n",
format_duration(identity_gen_result)
));
results.push_str(&format!(
"Witness calculation: {}\n",
format_duration(calculate_witness_result)
));
results.push_str(&format!(
"Proof generation with witness: {}\n",
format_duration(generate_rln_proof_with_witness_result)
));
results.push_str(&format!(
"Proof verification with roots: {}\n",
format_duration(verify_with_roots_result)
));
// Log the results
console_log!("{results}");
}
}

rln-wasm/tests/node.rs Normal file

@@ -0,0 +1,233 @@
#![cfg(target_arch = "wasm32")]
#![cfg(not(feature = "utils"))]
#[cfg(test)]
mod test {
use js_sys::{BigInt as JsBigInt, Date, Object, Uint8Array};
use rln::prelude::*;
use rln_wasm::{
Hasher, Identity, VecWasmFr, WasmFr, WasmRLN, WasmRLNProof, WasmRLNWitnessInput,
};
use wasm_bindgen::{prelude::wasm_bindgen, JsValue};
use wasm_bindgen_test::{console_log, wasm_bindgen_test};
use zerokit_utils::merkle_tree::{
OptimalMerkleProof, OptimalMerkleTree, ZerokitMerkleProof, ZerokitMerkleTree,
};
#[wasm_bindgen(inline_js = r#"
const fs = require("fs");
let witnessCalculatorModule = null;
module.exports = {
initWitnessCalculator: function(code) {
const processedCode = code
.replace(/export\s+async\s+function\s+builder/, 'async function builder')
.replace(/export\s*\{\s*builder\s*\};?/g, '');
const moduleFunc = new Function(processedCode + '\nreturn { builder };');
witnessCalculatorModule = moduleFunc();
if (typeof witnessCalculatorModule.builder !== 'function') {
return false;
}
return true;
},
readFile: function (path) {
return fs.readFileSync(path);
},
calculateWitness: async function (circom_path, inputs) {
const wasmFile = fs.readFileSync(circom_path);
const wasmFileBuffer = wasmFile.buffer.slice(
wasmFile.byteOffset,
wasmFile.byteOffset + wasmFile.byteLength
);
const witnessCalculator = await witnessCalculatorModule.builder(wasmFileBuffer);
const calculatedWitness = await witnessCalculator.calculateWitness(
inputs,
false
);
return JSON.stringify(calculatedWitness, (key, value) =>
typeof value === "bigint" ? value.toString() : value
);
},
};
"#)]
extern "C" {
#[wasm_bindgen(catch)]
fn initWitnessCalculator(code: &str) -> Result<bool, JsValue>;
#[wasm_bindgen(catch)]
fn readFile(path: &str) -> Result<Uint8Array, JsValue>;
#[wasm_bindgen(catch)]
async fn calculateWitness(circom_path: &str, input: Object) -> Result<JsValue, JsValue>;
}
const WITNESS_CALCULATOR_JS: &str = include_str!("../resources/witness_calculator.js");
const ARKZKEY_PATH: &str = "../rln/resources/tree_depth_20/rln_final.arkzkey";
const CIRCOM_PATH: &str = "../rln/resources/tree_depth_20/rln.wasm";
#[wasm_bindgen_test]
pub async fn rln_wasm_benchmark() {
// Initialize witness calculator
initWitnessCalculator(WITNESS_CALCULATOR_JS).unwrap();
let mut results = String::from("\nbenchmarks:\n");
let iterations = 10;
let zkey = readFile(ARKZKEY_PATH).unwrap();
// Benchmark RLN instance creation
let start_rln_new = Date::now();
for _ in 0..iterations {
let _ = WasmRLN::new(&zkey).unwrap();
}
let rln_new_result = Date::now() - start_rln_new;
// Create RLN instance for other benchmarks
let rln_instance = WasmRLN::new(&zkey).unwrap();
let mut tree: OptimalMerkleTree<PoseidonHash> =
OptimalMerkleTree::default(DEFAULT_TREE_DEPTH).unwrap();
// Benchmark generate identity
let start_identity_gen = Date::now();
for _ in 0..iterations {
let _ = Identity::generate().unwrap();
}
let identity_gen_result = Date::now() - start_identity_gen;
// Generate identity for other benchmarks
let identity_pair = Identity::generate().unwrap();
let identity_secret = identity_pair.get_secret_hash();
let id_commitment = identity_pair.get_commitment();
let epoch = Hasher::hash_to_field_le(&Uint8Array::from(b"test-epoch" as &[u8])).unwrap();
let rln_identifier =
Hasher::hash_to_field_le(&Uint8Array::from(b"test-rln-identifier" as &[u8])).unwrap();
let external_nullifier = Hasher::poseidon_hash_pair(&epoch, &rln_identifier).unwrap();
let identity_index = tree.leaves_set();
let user_message_limit = WasmFr::from_uint(100);
let rate_commitment =
Hasher::poseidon_hash_pair(&id_commitment, &user_message_limit).unwrap();
tree.update_next(*rate_commitment).unwrap();
let message_id = WasmFr::from_uint(0);
let signal: [u8; 32] = [0; 32];
let x = Hasher::hash_to_field_le(&Uint8Array::from(&signal[..])).unwrap();
let merkle_proof: OptimalMerkleProof<PoseidonHash> = tree.proof(identity_index).unwrap();
let mut path_elements = VecWasmFr::new();
for path_element in merkle_proof.get_path_elements() {
path_elements.push(&WasmFr::from(path_element));
}
let path_index = Uint8Array::from(&merkle_proof.get_path_index()[..]);
let witness = WasmRLNWitnessInput::new(
&identity_secret,
&user_message_limit,
&message_id,
&path_elements,
&path_index,
&x,
&external_nullifier,
)
.unwrap();
let bigint_json = witness.to_bigint_json().unwrap();
// Benchmark witness calculation
let start_calculate_witness = Date::now();
for _ in 0..iterations {
let _ = calculateWitness(CIRCOM_PATH, bigint_json.clone())
.await
.unwrap();
}
let calculate_witness_result = Date::now() - start_calculate_witness;
// Calculate witness for other benchmarks
let calculated_witness_str = calculateWitness(CIRCOM_PATH, bigint_json.clone())
.await
.unwrap()
.as_string()
.unwrap();
let calculated_witness_vec_str: Vec<String> =
serde_json::from_str(&calculated_witness_str).unwrap();
let calculated_witness: Vec<JsBigInt> = calculated_witness_vec_str
.iter()
.map(|x| JsBigInt::new(&x.into()).unwrap())
.collect();
// Benchmark proof generation with witness
let start_generate_rln_proof_with_witness = Date::now();
for _ in 0..iterations {
let _ = rln_instance
.generate_rln_proof_with_witness(calculated_witness.clone(), &witness)
.unwrap();
}
let generate_rln_proof_with_witness_result =
Date::now() - start_generate_rln_proof_with_witness;
// Generate proof with witness for other benchmarks
let proof: WasmRLNProof = rln_instance
.generate_rln_proof_with_witness(calculated_witness, &witness)
.unwrap();
let root = WasmFr::from(tree.root());
let mut roots = VecWasmFr::new();
roots.push(&root);
// Benchmark proof verification with the root
let start_verify_with_roots = Date::now();
for _ in 0..iterations {
let _ = rln_instance.verify_with_roots(&proof, &roots, &x).unwrap();
}
let verify_with_roots_result = Date::now() - start_verify_with_roots;
// Verify proof with the root for other benchmarks
let is_proof_valid = rln_instance.verify_with_roots(&proof, &roots, &x).unwrap();
assert!(is_proof_valid, "verification failed");
// Format and display the benchmark results
let format_duration = |duration_ms: f64| -> String {
let avg_ms = duration_ms / (iterations as f64);
if avg_ms >= 1000.0 {
format!("{:.3} s", avg_ms / 1000.0)
} else {
format!("{:.3} ms", avg_ms)
}
};
results.push_str(&format!(
"RLN instance creation: {}\n",
format_duration(rln_new_result)
));
results.push_str(&format!(
"Identity generation: {}\n",
format_duration(identity_gen_result)
));
results.push_str(&format!(
"Witness calculation: {}\n",
format_duration(calculate_witness_result)
));
results.push_str(&format!(
"Proof generation with witness: {}\n",
format_duration(generate_rln_proof_with_witness_result)
));
results.push_str(&format!(
"Proof verification with roots: {}\n",
format_duration(verify_with_roots_result)
));
// Log the results
console_log!("{results}");
}
}


@@ -1,289 +0,0 @@
#![cfg(target_arch = "wasm32")]
#[cfg(test)]
mod tests {
use js_sys::{BigInt as JsBigInt, Date, Object, Uint8Array};
use rln::circuit::{Fr, TEST_TREE_HEIGHT};
use rln::hashers::{hash_to_field, poseidon_hash};
use rln::poseidon_tree::PoseidonTree;
use rln::protocol::{prepare_verify_input, rln_witness_from_values, serialize_witness};
use rln::utils::{bytes_le_to_fr, fr_to_bytes_le};
use rln_wasm::*;
use wasm_bindgen::{prelude::*, JsValue};
use wasm_bindgen_test::wasm_bindgen_test;
use zerokit_utils::merkle_tree::merkle_tree::ZerokitMerkleTree;
#[wasm_bindgen(module = "src/utils.js")]
extern "C" {
#[wasm_bindgen(catch)]
fn read_file(path: &str) -> Result<Uint8Array, JsValue>;
#[wasm_bindgen(catch)]
async fn calculateWitness(circom_path: &str, input: Object) -> Result<JsValue, JsValue>;
}
#[cfg(feature = "arkzkey")]
const ZKEY_PATH: &str = "../rln/resources/tree_height_20/rln_final.arkzkey";
#[cfg(not(feature = "arkzkey"))]
const ZKEY_PATH: &str = "../rln/resources/tree_height_20/rln_final.zkey";
const CIRCOM_PATH: &str = "../rln/resources/tree_height_20/rln.wasm";
#[wasm_bindgen_test]
pub async fn rln_wasm_benchmark() {
let mut results = String::from("\nbenchmarks:\n");
let iterations = 10;
let zkey = read_file(&ZKEY_PATH).expect("Failed to read zkey file");
// Benchmark wasm_new
let start_wasm_new = Date::now();
for _ in 0..iterations {
let _ = wasm_new(zkey.clone()).expect("Failed to create RLN instance");
}
let wasm_new_result = Date::now() - start_wasm_new;
// Create RLN instance for other benchmarks
let rln_instance = wasm_new(zkey).expect("Failed to create RLN instance");
let mut tree = PoseidonTree::default(TEST_TREE_HEIGHT).expect("Failed to create tree");
// Benchmark wasm_key_gen
let start_wasm_key_gen = Date::now();
for _ in 0..iterations {
let _ = wasm_key_gen(rln_instance).expect("Failed to generate keys");
}
let wasm_key_gen_result = Date::now() - start_wasm_key_gen;
// Generate identity pair for other benchmarks
let mem_keys = wasm_key_gen(rln_instance).expect("Failed to generate keys");
let id_key = mem_keys.subarray(0, 32);
let (identity_secret_hash, _) = bytes_le_to_fr(&id_key.to_vec());
let (id_commitment, _) = bytes_le_to_fr(&mem_keys.subarray(32, 64).to_vec());
let epoch = hash_to_field(b"test-epoch");
let rln_identifier = hash_to_field(b"test-rln-identifier");
let external_nullifier = poseidon_hash(&[epoch, rln_identifier]);
let identity_index = tree.leaves_set();
let user_message_limit = Fr::from(100);
let rate_commitment = poseidon_hash(&[id_commitment, user_message_limit]);
tree.update_next(rate_commitment)
.expect("Failed to update tree");
let message_id = Fr::from(0);
let signal: [u8; 32] = [0; 32];
let x = hash_to_field(&signal);
let merkle_proof = tree
.proof(identity_index)
.expect("Failed to generate merkle proof");
let rln_witness = rln_witness_from_values(
identity_secret_hash,
&merkle_proof,
x,
external_nullifier,
user_message_limit,
message_id,
)
.expect("Failed to create RLN witness");
let serialized_witness =
serialize_witness(&rln_witness).expect("Failed to serialize witness");
let witness_buffer = Uint8Array::from(&serialized_witness[..]);
let json_inputs = wasm_rln_witness_to_json(rln_instance, witness_buffer.clone())
.expect("Failed to convert witness to JSON");
// Benchmark calculateWitness
let start_calculate_witness = Date::now();
for _ in 0..iterations {
let _ = calculateWitness(&CIRCOM_PATH, json_inputs.clone())
.await
.expect("Failed to calculate witness");
}
let calculate_witness_result = Date::now() - start_calculate_witness;
// Calculate witness for other benchmarks
let calculated_witness_json = calculateWitness(&CIRCOM_PATH, json_inputs)
.await
.expect("Failed to calculate witness")
.as_string()
.expect("Failed to convert calculated witness to string");
let calculated_witness_vec_str: Vec<String> =
serde_json::from_str(&calculated_witness_json).expect("Failed to parse JSON");
let calculated_witness: Vec<JsBigInt> = calculated_witness_vec_str
.iter()
.map(|x| JsBigInt::new(&x.into()).expect("Failed to create JsBigInt"))
.collect();
// Benchmark wasm_generate_rln_proof_with_witness
let start_wasm_generate_rln_proof_with_witness = Date::now();
for _ in 0..iterations {
let _ = wasm_generate_rln_proof_with_witness(
rln_instance,
calculated_witness.clone(),
witness_buffer.clone(),
)
.expect("Failed to generate proof");
}
let wasm_generate_rln_proof_with_witness_result =
Date::now() - start_wasm_generate_rln_proof_with_witness;
// Generate a proof for other benchmarks
let proof =
wasm_generate_rln_proof_with_witness(rln_instance, calculated_witness, witness_buffer)
.expect("Failed to generate proof");
let proof_data = proof.to_vec();
let verify_input = prepare_verify_input(proof_data, &signal);
let input_buffer = Uint8Array::from(&verify_input[..]);
let root = tree.root();
let roots_serialized = fr_to_bytes_le(&root);
let roots_buffer = Uint8Array::from(&roots_serialized[..]);
// Benchmark wasm_verify_with_roots
let start_wasm_verify_with_roots = Date::now();
for _ in 0..iterations {
let _ =
wasm_verify_with_roots(rln_instance, input_buffer.clone(), roots_buffer.clone())
.expect("Failed to verify proof");
}
let wasm_verify_with_roots_result = Date::now() - start_wasm_verify_with_roots;
// Verify the proof with the root
let is_proof_valid = wasm_verify_with_roots(rln_instance, input_buffer, roots_buffer)
.expect("Failed to verify proof");
assert!(is_proof_valid, "verification failed");
// Format and display results
let format_duration = |duration_ms: f64| -> String {
let avg_ms = duration_ms / (iterations as f64);
if avg_ms >= 1000.0 {
format!("{:.3} s", avg_ms / 1000.0)
} else {
format!("{:.3} ms", avg_ms)
}
};
results.push_str(&format!("wasm_new: {}\n", format_duration(wasm_new_result)));
results.push_str(&format!(
"wasm_key_gen: {}\n",
format_duration(wasm_key_gen_result)
));
results.push_str(&format!(
"calculateWitness: {}\n",
format_duration(calculate_witness_result)
));
results.push_str(&format!(
"wasm_generate_rln_proof_with_witness: {}\n",
format_duration(wasm_generate_rln_proof_with_witness_result)
));
results.push_str(&format!(
"wasm_verify_with_roots: {}\n",
format_duration(wasm_verify_with_roots_result)
));
// Log the results
wasm_bindgen_test::console_log!("{results}");
}
#[wasm_bindgen_test]
pub async fn rln_wasm_test() {
// Read the zkey file
let zkey = read_file(&ZKEY_PATH).expect("Failed to read zkey file");
// Create RLN instance and separated tree
let rln_instance = wasm_new(zkey).expect("Failed to create RLN instance");
let mut tree = PoseidonTree::default(TEST_TREE_HEIGHT).expect("Failed to create tree");
// Setting up the epoch and rln_identifier
let epoch = hash_to_field(b"test-epoch");
let rln_identifier = hash_to_field(b"test-rln-identifier");
let external_nullifier = poseidon_hash(&[epoch, rln_identifier]);
// Generate identity pair
let mem_keys = wasm_key_gen(rln_instance).expect("Failed to generate keys");
let (identity_secret_hash, _) = bytes_le_to_fr(&mem_keys.subarray(0, 32).to_vec());
let (id_commitment, _) = bytes_le_to_fr(&mem_keys.subarray(32, 64).to_vec());
// Get index of the identity
let identity_index = tree.leaves_set();
// Setting up the user message limit
let user_message_limit = Fr::from(100);
// Updating the tree with the rate commitment
let rate_commitment = poseidon_hash(&[id_commitment, user_message_limit]);
tree.update_next(rate_commitment)
.expect("Failed to update tree");
// Generate merkle proof
let merkle_proof = tree
.proof(identity_index)
.expect("Failed to generate merkle proof");
// Create message id and signal
let message_id = Fr::from(0);
let signal: [u8; 32] = [0; 32];
let x = hash_to_field(&signal);
// Prepare input for witness calculation
let rln_witness = rln_witness_from_values(
identity_secret_hash,
&merkle_proof,
x,
external_nullifier,
user_message_limit,
message_id,
)
.expect("Failed to create RLN witness");
// Serialize the rln witness
let serialized_witness =
serialize_witness(&rln_witness).expect("Failed to serialize witness");
// Convert the serialized witness to a Uint8Array
let witness_buffer = Uint8Array::from(&serialized_witness[..]);
// Obtaining inputs that should be sent to circom witness calculator
let json_inputs = wasm_rln_witness_to_json(rln_instance, witness_buffer.clone())
.expect("Failed to convert witness to JSON");
// Calculating witness with JS
// (Using a JSON since wasm_bindgen does not like Result<Vec<JsBigInt>,JsValue>)
let calculated_witness_json = calculateWitness(&CIRCOM_PATH, json_inputs)
.await
.expect("Failed to calculate witness")
.as_string()
.expect("Failed to convert calculated witness to string");
let calculated_witness_vec_str: Vec<String> =
serde_json::from_str(&calculated_witness_json).expect("Failed to parse JSON");
let calculated_witness: Vec<JsBigInt> = calculated_witness_vec_str
.iter()
.map(|x| JsBigInt::new(&x.into()).expect("Failed to create JsBigInt"))
.collect();
// Generate a proof from the calculated witness
let proof =
wasm_generate_rln_proof_with_witness(rln_instance, calculated_witness, witness_buffer)
.expect("Failed to generate proof");
// Prepare the root for verification
let root = tree.root();
let roots_serialized = fr_to_bytes_le(&root);
let roots_buffer = Uint8Array::from(&roots_serialized[..]);
// Prepare input for proof verification
let proof_data = proof.to_vec();
let verify_input = prepare_verify_input(proof_data, &signal);
let input_buffer = Uint8Array::from(&verify_input[..]);
// Verify the proof with the root
let is_proof_valid = wasm_verify_with_roots(rln_instance, input_buffer, roots_buffer)
.expect("Failed to verify proof");
assert!(is_proof_valid, "verification failed");
}
}

rln-wasm/tests/utils.rs Normal file

@@ -0,0 +1,222 @@
#![cfg(target_arch = "wasm32")]
#[cfg(test)]
mod test {
use std::assert_eq;
use ark_std::rand::thread_rng;
use js_sys::Uint8Array;
use rand::Rng;
use rln::prelude::*;
use rln_wasm::{ExtendedIdentity, Hasher, Identity, VecWasmFr, WasmFr};
use wasm_bindgen_test::wasm_bindgen_test;
#[wasm_bindgen_test]
fn test_keygen_wasm() {
let identity = Identity::generate().unwrap();
let identity_secret = *identity.get_secret_hash();
let id_commitment = *identity.get_commitment();
assert_ne!(identity_secret, Fr::from(0u8));
assert_ne!(id_commitment, Fr::from(0u8));
let arr = identity.to_array();
assert_eq!(arr.length(), 2);
assert_eq!(*arr.get(0).unwrap(), identity_secret);
assert_eq!(*arr.get(1).unwrap(), id_commitment);
}
#[wasm_bindgen_test]
fn test_extended_keygen_wasm() {
let identity = ExtendedIdentity::generate().unwrap();
let identity_trapdoor = *identity.get_trapdoor();
let identity_nullifier = *identity.get_nullifier();
let identity_secret = *identity.get_secret_hash();
let id_commitment = *identity.get_commitment();
assert_ne!(identity_trapdoor, Fr::from(0u8));
assert_ne!(identity_nullifier, Fr::from(0u8));
assert_ne!(identity_secret, Fr::from(0u8));
assert_ne!(id_commitment, Fr::from(0u8));
let arr = identity.to_array();
assert_eq!(arr.length(), 4);
assert_eq!(*arr.get(0).unwrap(), identity_trapdoor);
assert_eq!(*arr.get(1).unwrap(), identity_nullifier);
assert_eq!(*arr.get(2).unwrap(), identity_secret);
assert_eq!(*arr.get(3).unwrap(), id_commitment);
}
#[wasm_bindgen_test]
fn test_seeded_keygen_wasm() {
let seed_bytes: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let seed = Uint8Array::from(&seed_bytes[..]);
let identity = Identity::generate_seeded(&seed).unwrap();
let identity_secret = *identity.get_secret_hash();
let id_commitment = *identity.get_commitment();
let expected_identity_secret_seed_bytes = str_to_fr(
"0x766ce6c7e7a01bdf5b3f257616f603918c30946fa23480f2859c597817e6716",
16,
)
.unwrap();
let expected_id_commitment_seed_bytes = str_to_fr(
"0xbf16d2b5c0d6f9d9d561e05bfca16a81b4b873bb063508fae360d8c74cef51f",
16,
)
.unwrap();
assert_eq!(identity_secret, expected_identity_secret_seed_bytes);
assert_eq!(id_commitment, expected_id_commitment_seed_bytes);
}
#[wasm_bindgen_test]
fn test_seeded_extended_keygen_wasm() {
let seed_bytes: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let seed = Uint8Array::from(&seed_bytes[..]);
let identity = ExtendedIdentity::generate_seeded(&seed).unwrap();
let identity_trapdoor = *identity.get_trapdoor();
let identity_nullifier = *identity.get_nullifier();
let identity_secret = *identity.get_secret_hash();
let id_commitment = *identity.get_commitment();
let expected_identity_trapdoor_seed_bytes = str_to_fr(
"0x766ce6c7e7a01bdf5b3f257616f603918c30946fa23480f2859c597817e6716",
16,
)
.unwrap();
let expected_identity_nullifier_seed_bytes = str_to_fr(
"0x1f18714c7bc83b5bca9e89d404cf6f2f585bc4c0f7ed8b53742b7e2b298f50b4",
16,
)
.unwrap();
let expected_identity_secret_seed_bytes = str_to_fr(
"0x2aca62aaa7abaf3686fff2caf00f55ab9462dc12db5b5d4bcf3994e671f8e521",
16,
)
.unwrap();
let expected_id_commitment_seed_bytes = str_to_fr(
"0x68b66aa0a8320d2e56842581553285393188714c48f9b17acd198b4f1734c5c",
16,
)
.unwrap();
assert_eq!(identity_trapdoor, expected_identity_trapdoor_seed_bytes);
assert_eq!(identity_nullifier, expected_identity_nullifier_seed_bytes);
assert_eq!(identity_secret, expected_identity_secret_seed_bytes);
assert_eq!(id_commitment, expected_id_commitment_seed_bytes);
}
#[wasm_bindgen_test]
fn test_wasmfr() {
let wasmfr_zero = WasmFr::zero();
let fr_zero = Fr::from(0u8);
assert_eq!(*wasmfr_zero, fr_zero);
let wasmfr_one = WasmFr::one();
let fr_one = Fr::from(1u8);
assert_eq!(*wasmfr_one, fr_one);
let wasmfr_int = WasmFr::from_uint(42);
let fr_int = Fr::from(42u8);
assert_eq!(*wasmfr_int, fr_int);
let wasmfr_debug_str = wasmfr_int.debug();
assert_eq!(wasmfr_debug_str.to_string(), "42");
let identity = Identity::generate().unwrap();
let mut id_secret_fr = *identity.get_secret_hash();
let id_secret_hash = IdSecret::from(&mut id_secret_fr);
let id_commitment = *identity.get_commitment();
let wasmfr_id_secret_hash = *identity.get_secret_hash();
assert_eq!(wasmfr_id_secret_hash, *id_secret_hash);
let wasmfr_id_commitment = *identity.get_commitment();
assert_eq!(wasmfr_id_commitment, id_commitment);
}
#[wasm_bindgen_test]
fn test_vec_wasmfr() {
let vec_fr = vec![Fr::from(1u8), Fr::from(2u8), Fr::from(3u8), Fr::from(4u8)];
let mut vec_wasmfr = VecWasmFr::new();
for fr in &vec_fr {
vec_wasmfr.push(&WasmFr::from(*fr));
}
let bytes_le = vec_wasmfr.to_bytes_le();
let expected_le = rln::utils::vec_fr_to_bytes_le(&vec_fr);
assert_eq!(bytes_le.to_vec(), expected_le);
let bytes_be = vec_wasmfr.to_bytes_be();
let expected_be = rln::utils::vec_fr_to_bytes_be(&vec_fr);
assert_eq!(bytes_be.to_vec(), expected_be);
let vec_wasmfr_from_le = match VecWasmFr::from_bytes_le(&bytes_le) {
Ok(v) => v,
Err(err) => panic!("VecWasmFr::from_bytes_le call failed: {}", err),
};
assert_eq!(vec_wasmfr_from_le.length(), vec_wasmfr.length());
for i in 0..vec_wasmfr.length() {
assert_eq!(
*vec_wasmfr_from_le.get(i).unwrap(),
*vec_wasmfr.get(i).unwrap()
);
}
let vec_wasmfr_from_be = match VecWasmFr::from_bytes_be(&bytes_be) {
Ok(v) => v,
Err(err) => panic!("VecWasmFr::from_bytes_be call failed: {}", err),
};
for i in 0..vec_wasmfr.length() {
assert_eq!(
*vec_wasmfr_from_be.get(i).unwrap(),
*vec_wasmfr.get(i).unwrap()
);
}
}
#[wasm_bindgen_test]
fn test_hash_to_field_wasm() {
let mut rng = thread_rng();
let signal_gen: [u8; 32] = rng.gen();
let signal = Uint8Array::from(&signal_gen[..]);
let wasmfr_le_1 = Hasher::hash_to_field_le(&signal).unwrap();
let fr_le_2 = hash_to_field_le(&signal_gen).unwrap();
assert_eq!(*wasmfr_le_1, fr_le_2);
let wasmfr_be_1 = Hasher::hash_to_field_be(&signal).unwrap();
let fr_be_2 = hash_to_field_be(&signal_gen).unwrap();
assert_eq!(*wasmfr_be_1, fr_be_2);
assert_eq!(*wasmfr_le_1, *wasmfr_be_1);
assert_eq!(fr_le_2, fr_be_2);
let hash_wasmfr_le_1 = wasmfr_le_1.to_bytes_le();
let hash_fr_le_2 = fr_to_bytes_le(&fr_le_2);
assert_eq!(hash_wasmfr_le_1.to_vec(), hash_fr_le_2);
let hash_wasmfr_be_1 = wasmfr_be_1.to_bytes_be();
let hash_fr_be_2 = fr_to_bytes_be(&fr_be_2);
assert_eq!(hash_wasmfr_be_1.to_vec(), hash_fr_be_2);
assert_ne!(hash_wasmfr_le_1.to_vec(), hash_wasmfr_be_1.to_vec());
assert_ne!(hash_fr_le_2, hash_fr_be_2);
}
#[wasm_bindgen_test]
fn test_poseidon_hash_pair_wasm() {
let input_1 = Fr::from(42u8);
let input_2 = Fr::from(99u8);
let expected_hash = poseidon_hash(&[input_1, input_2]).unwrap();
let wasmfr_1 = WasmFr::from_uint(42);
let wasmfr_2 = WasmFr::from_uint(99);
let received_hash = Hasher::poseidon_hash_pair(&wasmfr_1, &wasmfr_2).unwrap();
assert_eq!(*received_hash, expected_hash);
}
}

rln/Cargo.toml

@@ -1,6 +1,6 @@
[package]
name = "rln"
version = "0.7.0"
version = "1.0.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "APIs to manage, compute and verify zkSNARK proofs and RLN primitives"
@@ -9,88 +9,81 @@ homepage = "https://vac.dev"
repository = "https://github.com/vacp2p/zerokit"
[lib]
crate-type = ["rlib", "staticlib"]
crate-type = ["rlib", "staticlib", "cdylib"]
bench = false
# This flag disable cargo doctests, i.e. testing example code-snippets in documentation
# This flag disables cargo doctests, i.e. testing example code-snippets in documentation
doctest = false
[dependencies]
# ZKP Generation
ark-bn254 = { version = "0.5.0", features = ["std"] }
ark-relations = { version = "0.5.1", features = ["std"] }
ark-ff = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
ark-ec = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
ark-ff = { version = "0.5.0", default-features = false }
ark-ec = { version = "0.5.0", default-features = false }
ark-std = { version = "0.5.0", default-features = false }
ark-poly = { version = "0.5.0", default-features = false }
ark-groth16 = { version = "0.5.0", default-features = false }
ark-serialize = { version = "0.5.0", default-features = false }
ark-std = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
ark-poly = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
ark-groth16 = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
ark-serialize = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
# Error Handling
thiserror = "2.0.17"
# error handling
color-eyre = "0.6.3"
thiserror = "2.0.12"
# utilities
# Utilities
rayon = { version = "1.11.0", optional = true }
byteorder = "1.5.0"
cfg-if = "1.0"
num-bigint = { version = "0.4.6", default-features = false, features = [
"rand",
"std",
] }
cfg-if = "1.0.4"
num-bigint = { version = "0.4.6", default-features = false, features = ["std"] }
num-traits = "0.2.19"
once_cell = "1.21.3"
lazy_static = "1.5.0"
rand = "0.8.5"
rand_chacha = "0.3.1"
ruint = { version = "1.14.0", features = ["rand", "serde", "ark-ff-04"] }
ruint = { version = "1.17.0", default-features = false, features = [
"rand",
"serde",
"ark-ff-05",
] }
tiny-keccak = { version = "2.0.2", features = ["keccak"] }
utils = { package = "zerokit_utils", version = "0.5.2", path = "../utils", default-features = false }
zeroize = "1.8.2"
tempfile = "3.23.0"
zerokit_utils = { version = "1.0.0", path = "../utils", default-features = false }
# serialization
prost = "0.13.5"
serde_json = "1.0"
serde = { version = "1.0", features = ["derive"] }
# FFI
safer-ffi.version = "0.1"
document-features = { version = "0.2.11", optional = true }
# Serialization
prost = "0.14.1"
serde_json = "1.0.145"
serde = { version = "1.0.228", features = ["derive"] }
# Documentation
document-features = { version = "0.2.12", optional = true }
[dev-dependencies]
sled = "0.34.7"
criterion = { version = "0.4.0", features = ["html_reports"] }
criterion = { version = "0.8.0", features = ["html_reports"] }
[features]
default = ["pmtree-ft"]
fullmerkletree = ["default"]
default = ["parallel", "pmtree-ft"]
stateless = []
arkzkey = []
# Note: pmtree feature is still experimental
pmtree-ft = ["utils/pmtree-ft"]
[[bench]]
name = "circuit_loading_arkzkey_benchmark"
harness = false
required-features = ["arkzkey"]
[[bench]]
name = "circuit_loading_benchmark"
harness = false
parallel = [
"rayon",
"ark-ff/parallel",
"ark-ec/parallel",
"ark-std/parallel",
"ark-poly/parallel",
"ark-groth16/parallel",
"ark-serialize/parallel",
"zerokit_utils/parallel",
]
fullmerkletree = [] # Pre-allocated tree, fastest access
optimalmerkletree = [] # Sparse storage, memory efficient
pmtree-ft = ["zerokit_utils/pmtree-ft"] # Persistent storage, disk-based
headers = ["safer-ffi/headers"] # Generate C header file with safer-ffi
[[bench]]
name = "pmtree_benchmark"
harness = false
required-features = ["pmtree-ft"]
[[bench]]
name = "poseidon_tree_benchmark"
@@ -98,3 +91,7 @@ harness = false
[package.metadata.docs.rs]
all-features = true
[[bin]]
name = "generate-headers"
required-features = ["headers"] # Do not build unless generating headers.

rln/Makefile.toml

@@ -8,11 +8,15 @@ args = ["test", "--release", "--", "--nocapture"]
[tasks.test_stateless]
command = "cargo"
args = ["test", "--release", "--features", "stateless"]
[tasks.test_arkzkey]
command = "cargo"
args = ["test", "--release", "--features", "arkzkey"]
args = [
"test",
"--release",
"--no-default-features",
"--features",
"stateless",
"--",
"--nocapture",
]
[tasks.bench]
command = "cargo"

rln/README.md

@@ -1,8 +1,12 @@
# Zerokit RLN Module
[![Crates.io](https://img.shields.io/crates/v/rln.svg)](https://crates.io/crates/rln)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
The Zerokit RLN Module provides a Rust implementation for working with Rate-Limiting Nullifier [RLN](https://rfc.vac.dev/spec/32/) zkSNARK proofs and primitives. This module allows you to:
The Zerokit RLN Module provides a Rust implementation for working with
Rate-Limiting Nullifier [RLN](https://rfc.vac.dev/vac/raw/rln-v2) zkSNARK proofs and primitives.
This module allows you to:
- Generate and verify RLN proofs
- Work with Merkle trees for commitment storage
@@ -11,7 +15,8 @@ The Zerokit RLN Module provides a Rust implementation for working with Rate-Limi
## Quick Start
> [!IMPORTANT]
> Version 0.6.1 is required for WASM support or x32 architecture. Current version doesn't support these platforms due to dependency issues. WASM support will return in a future release.
> Version 0.7.0 is the only version that does not support WASM and x32 architecture.
> WASM support is available in version 0.8.0 and above.
### Add RLN as dependency
@@ -19,110 +24,104 @@ We start by adding zerokit RLN to our `Cargo.toml`
```toml
[dependencies]
rln = { git = "https://github.com/vacp2p/zerokit" }
rln = "1.0.0"
```
## Basic Usage Example
Note that we need to pass to the RLN object constructor the path where the graph file (`graph.bin`, built for the input tree size), the corresponding proving key (`rln_final.zkey` or `rln_final_uncompr.arkzkey`), and the verification key (`verification_key.arkvkey`, optional) are found.
The RLN object constructor requires the following files:
In the following we will use [cursors](https://doc.rust-lang.org/std/io/struct.Cursor.html) as readers/writers for interfacing with RLN public APIs.
- `rln_final.arkzkey`: The proving key in arkzkey format.
- `graph.bin`: The graph file built for the input tree size.
Additionally, `rln.wasm` is used for testing in the rln-wasm module.
```rust
use std::io::Cursor;
use rln::{
circuit::Fr,
hashers::{hash_to_field, poseidon_hash},
protocol::{keygen, prepare_prove_input, prepare_verify_input},
public::RLN,
utils::fr_to_bytes_le,
};
use serde_json::json;
use rln::prelude::{keygen, poseidon_hash, hash_to_field_le, RLN, RLNWitnessInput, Fr, IdSecret};
fn main() {
// 1. Initialize RLN with parameters:
// - the tree height;
// - the tree depth;
// - the tree config; if it is not defined, the default value will be used
let tree_height = 20;
let input = Cursor::new(json!({}).to_string());
let mut rln = RLN::new(tree_height, input).unwrap();
let tree_depth = 20;
let mut rln = RLN::new(tree_depth, "").unwrap();
// 2. Generate an identity keypair
let (identity_secret_hash, id_commitment) = keygen();
let (identity_secret, id_commitment) = keygen();
// 3. Add a rate commitment to the Merkle tree
let id_index = 10;
let leaf_index = 10;
let user_message_limit = Fr::from(10);
let rate_commitment = poseidon_hash(&[id_commitment, user_message_limit]);
let mut buffer = Cursor::new(fr_to_bytes_le(&rate_commitment));
rln.set_leaf(id_index, &mut buffer).unwrap();
rln.set_leaf(leaf_index, rate_commitment).unwrap();
// 4. Set up external nullifier (epoch + app identifier)
// 4. Get the Merkle proof for the added commitment
let (path_elements, identity_path_index) = rln.get_merkle_proof(leaf_index).unwrap();
// 5. Set up external nullifier (epoch + app identifier)
// We generate epoch from a date seed and we ensure it is
// mapped to a field element by hashing-to-field its content
let epoch = hash_to_field(b"Today at noon, this year");
// We generate rln_identifier from a date seed and we ensure it is
// mapped to a field element by hashing-to-field its content
let rln_identifier = hash_to_field(b"test-rln-identifier");
let epoch = hash_to_field_le(b"Today at noon, this year");
// We generate rln_identifier from an application identifier and
// we ensure it is mapped to a field element by hashing-to-field its content
let rln_identifier = hash_to_field_le(b"test-rln-identifier");
// We generate an external nullifier
let external_nullifier = poseidon_hash(&[epoch, rln_identifier]);
// We choose a message_id satisfying 0 <= message_id < user_message_limit
let message_id = Fr::from(1);
// 5. Generate and verify a proof for a message
// 6. Define the message signal
let signal = b"RLN is awesome";
// 6. Prepare input for generate_rln_proof API
// input_data is [ identity_secret<32> | id_index<8> | external_nullifier<32> | user_message_limit<32> | message_id<32> | signal_len<8> | signal<var> ]
let prove_input = prepare_prove_input(
identity_secret_hash,
id_index,
// 7. Compute x from the signal
let x = hash_to_field_le(signal);
// 8. Create witness input for RLN proof generation
let witness = RLNWitnessInput::new(
identity_secret,
user_message_limit,
message_id,
path_elements,
identity_path_index,
x,
external_nullifier,
signal,
);
// 7. Generate a RLN proof
// We generate a RLN proof for proof_input
let mut input_buffer = Cursor::new(prove_input);
let mut output_buffer = Cursor::new(Vec::<u8>::new());
rln.generate_rln_proof(&mut input_buffer, &mut output_buffer)
)
.unwrap();
// We get the public outputs returned by the circuit evaluation
// The byte vector `proof_data` is serialized as `[ zk-proof | tree_root | external_nullifier | share_x | share_y | nullifier ]`.
let proof_data = output_buffer.into_inner();
// 9. Generate an RLN proof
// We generate proof and proof values from the witness
let (proof, proof_values) = rln.generate_rln_proof(&witness).unwrap();
// 8. Verify a RLN proof
// Input buffer is serialized as `[proof_data | signal_len | signal ]`, where `proof_data` is (computed as) the output obtained by `generate_rln_proof`.
let verify_data = prepare_verify_input(proof_data, signal);
// We verify the zk-proof against the provided proof values
let mut input_buffer = Cursor::new(verify_data);
let verified = rln.verify_rln_proof(&mut input_buffer).unwrap();
// We ensure the proof is valid
// 10. Verify the RLN proof
// We verify the zk-proof against the proof values and the hashed signal x
let verified = rln.verify_rln_proof(&proof, &proof_values, &x).unwrap();
assert!(verified);
}
```
### Comments for point 4 of the code above
### Comments for point 5 of the code above
The `external_nullifier` consists of two parameters.
The first one is `epoch` and it's used to identify messages received in a certain time frame.
It usually corresponds to the current UNIX time but can also be set to a random value or generated by a seed, provided that it corresponds to a field element.
It usually corresponds to the current UNIX time but can also be set to a random value or generated by a seed,
provided that it corresponds to a field element.
The second one is `rln_identifier` and it's used to prevent an RLN ZK proof generated for one application from being re-used in another one.
The second one is `rln_identifier` and it's used to prevent an RLN ZK proof generated
for one application from being re-used in another one.
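As a minimal sketch of this derivation, assuming the same `rln::prelude` API used in the example above (the 10-second epoch window and the identifier string below are illustrative choices, not prescribed values):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

use rln::prelude::{hash_to_field_le, poseidon_hash, Fr};

// Derive an external nullifier from the current UNIX time and an
// application identifier (both concrete choices here are hypothetical).
fn current_external_nullifier() -> Fr {
    let unix_secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before the UNIX epoch")
        .as_secs();
    // Quantize time into 10-second windows so that all messages sent
    // within the same window share the same epoch value
    let epoch = hash_to_field_le(&(unix_secs / 10).to_le_bytes());
    let rln_identifier = hash_to_field_le(b"my-app-identifier");
    poseidon_hash(&[epoch, rln_identifier])
}
```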
### Features
- **Multiple Backend Support**: Choose between different zkey formats with feature flags
- `arkzkey`: Use the optimized Arkworks-compatible zkey format (faster loading)
- `stateless`: For stateless proof verification
- **Pre-compiled Circuits**: Ready-to-use circuits with Merkle tree height of 20
- **Stateless Mode**: Allows the use of RLN without maintaining the state of the Merkle tree.
- **Pre-compiled Circuits**: Ready-to-use circuits with Merkle tree depth of 20
- **Wasm Support**: WebAssembly bindings via rln-wasm crate with features like:
- Browser and Node.js compatibility
- Optional parallel feature support using [wasm-bindgen-rayon](https://github.com/RReverser/wasm-bindgen-rayon)
- Headless browser testing capabilities
- **Merkle Tree Implementations**: Multiple tree variants optimized for different use cases (see the sketch after this list):
- **Full Merkle Tree**: Fastest access with complete pre-allocated tree in memory. Best for frequent random access (enable with `fullmerkletree` feature).
- **Optimal Merkle Tree**: Memory-efficient sparse storage using HashMap. Ideal for partially populated trees (enable with `optimalmerkletree` feature).
- **Persistent Merkle Tree**: Disk-based storage with [sled](https://github.com/spacejam/sled) for persistence across application restarts and large datasets (enable with `pmtree-ft` feature).
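A rough sketch of constructing each variant directly, mirroring this module's benchmark code (each constructor assumes its corresponding feature flag is enabled):

```rust
use rln::prelude::*;
use zerokit_utils::merkle_tree::{FullMerkleTree, OptimalMerkleTree, ZerokitMerkleTree};

fn main() {
    // Pre-allocated in-memory tree (requires the `fullmerkletree` feature)
    let _full = FullMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
    // Sparse, HashMap-backed tree (requires the `optimalmerkletree` feature)
    let _optimal = OptimalMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
    // Persistent, sled-backed tree (requires the `pmtree-ft` feature);
    // depth 2 is used here only to keep the example small
    let _persistent = PmTree::default(2).unwrap();
}
```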
## Building and Testing
@@ -143,20 +142,22 @@ cargo make build
# Test with default features
cargo make test
# Test with specific features
cargo make test_arkzkey # For arkzkey feature
cargo make test_stateless # For stateless feature
# Test with stateless features
cargo make test_stateless
```
## Advanced: Custom Circuit Compilation
The `rln` (<https://github.com/rate-limiting-nullifier/circom-rln>) repository, which contains the RLN circuit implementation, is used for the pre-compiled RLN circuit in zerokit RLN.
The `circom-rln` (<https://github.com/rate-limiting-nullifier/circom-rln>) repository
contains the RLN circuit implementation used for the pre-compiled RLN circuit in zerokit RLN.
If you want to compile your own RLN circuit, you can follow the instructions below.
### 1. Compile ZK Circuits to get the zkey and verification key files
### 1. Compile ZK Circuits to get the zkey file
This script actually generates not only the zkey and verification key files for the RLN circuit, but also the execution wasm file used for witness calculation.
However, the wasm file is not needed for the `rln` module, because the current implementation uses the iden3 graph file for witness calculation.
This script actually generates not only the zkey file for the RLN circuit,
but also the execution wasm file used for witness calculation.
However, the wasm file is not needed for the `rln` module,
because the current implementation uses the iden3 graph file for witness calculation.
This graph file is generated by the `circom-witnesscalc` tool in [step 2](#2-generate-witness-calculation-graph).
To customize the circuit parameters, modify `circom-rln/circuits/rln.circom`:
@@ -169,19 +170,27 @@ component main { public [x, externalNullifier] } = RLN(N, M);
Where:
- `N`: Merkle tree height, determining the maximum membership capacity (2^N members).
- `N`: Merkle tree depth, determining the maximum membership capacity (2^N members).
- `M`: Bit size for range checks, setting an upper bound for the number of messages per epoch (2^M messages).
> [!NOTE]
> However, if `N` is too big, this might require a larger Powers of Tau ceremony than the one hardcoded in `./scripts/build-circuits.sh`, which is `2^14`. \
> In such a case, we refer to the official [Circom documentation](https://docs.circom.io/getting-started/proving-circuits/#powers-of-tau) for instructions on how to run an appropriate Powers of Tau ceremony and Phase 2 in order to compile the desired circuit. \
> Additionally, while `M` sets an upper bound on the number of messages per epoch (`2^M`), you can configure a lower message limit for your use case, as long as it satisfies `user_message_limit ≤ 2^M`. \
> Currently, the `rln` module comes with a [pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources) RLN circuit with a Merkle tree of height `20` and a bit size of `16`, allowing up to `2^20` registered members and a `2^16` message limit per epoch.
> However, if `N` is too big, this might require a larger Powers of Tau ceremony
> than the one hardcoded in `./scripts/build-circuits.sh`, which is `2^14`.
> In such a case, we refer to the official
> [Circom documentation](https://docs.circom.io/getting-started/proving-circuits/#powers-of-tau)
> for instructions on how to run an appropriate Powers of Tau ceremony and Phase 2 in order to compile the desired circuit. \
> Additionally, while `M` sets an upper bound on the number of messages per epoch (`2^M`),
> you can configure a lower message limit for your use case, as long as it satisfies `user_message_limit ≤ 2^M`. \
> Currently, the `rln` module comes with a [pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources/tree_depth_20)
> RLN circuit with a Merkle tree of depth `20` and a bit size of `16`,
> allowing up to `2^20` registered members and a `2^16` message limit per epoch.
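To make the capacity arithmetic in the note concrete, here is a plain Rust sketch of what `N = 20` and `M = 16` imply:

```rust
fn main() {
    // Membership capacity grows as 2^N, the per-epoch message bound as 2^M
    let max_members = 1u64 << 20; // 2^20 = 1_048_576 registered members
    let max_message_limit = 1u64 << 16; // 2^16 = 65_536 messages per epoch
    assert_eq!(max_members, 1_048_576);
    assert_eq!(max_message_limit, 65_536);
    // Any user_message_limit up to 65_536 is a valid configuration here
}
```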
#### Install circom compiler
You can follow the instructions below or refer to the [installing Circom](https://docs.circom.io/getting-started/installation/#installing-circom) guide for more details, but make sure to use the specific version `v2.1.0`.
You can follow the instructions below or refer to the
[installing Circom](https://docs.circom.io/getting-started/installation/#installing-circom) guide for more details.
Make sure to use the specific version `v2.1.0`.
```sh
# Clone the circom repository
@@ -218,7 +227,8 @@ cp zkeyFiles/rln/final.zkey <path_to_rln_final.zkey>
### 2. Generate Witness Calculation Graph
The execution graph file used for witness calculation can be compiled following instructions in the [circom-witnesscalc](https://github.com/iden3/circom-witnesscalc) repository.
The execution graph file used for witness calculation can be compiled following instructions
in the [circom-witnesscalc](https://github.com/iden3/circom-witnesscalc) repository.
As mentioned in step 1, we should use the `rln.circom` file from the `circom-rln` repository.
```sh
@@ -235,11 +245,14 @@ cargo build
cargo run --package circom_witnesscalc --bin build-circuit ../circom-rln/circuits/rln.circom <path_to_graph.bin>
```
The `rln` module comes with [pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources) execution graph files for the RLN circuit.
The `rln` module comes with [pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources/tree_depth_20)
execution graph files for the RLN circuit.
### 3. Generate Arkzkey Representation for the zkey and verification key files
### 3. Generate Arkzkey Representation for the zkey file
For faster loading, compile the zkey file into the arkzkey format using [ark-zkey](https://github.com/seemenkina/ark-zkey). This is a fork of the [original](https://github.com/zkmopro/ark-zkey) repository with uncompressed zkey support.
For faster loading, compile the zkey file into the arkzkey format using
[ark-zkey](https://github.com/seemenkina/ark-zkey).
This is a fork of the [original](https://github.com/zkmopro/ark-zkey) repository with uncompressed arkzkey support.
```sh
# Clone the ark-zkey repository
@@ -252,19 +265,42 @@ cd ark-zkey && cargo build
cargo run --bin arkzkey-util <path_to_rln_final.zkey>
```
Currently, the `rln` module comes with [pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources) arkzkey keys for the RLN circuit.
This will generate the `rln_final.arkzkey` file, which is used by the `rln` module.
Currently, the `rln` module comes with
[pre-compiled](https://github.com/vacp2p/zerokit/tree/master/rln/resources/tree_depth_20) arkzkey keys for the RLN circuit.
> [!NOTE]
> You can use this [convert_zkey.sh](./convert_zkey.sh) script
> to automate the process of generating the arkzkey file from any zkey file.
Run the script as follows:
```sh
chmod +x ./convert_zkey.sh
./convert_zkey.sh <path_to_rln_final.zkey>
```
## Get involved
Zerokit RLN public and FFI APIs allow interaction with many more features than what is briefly showcased above.
We invite you to check our API documentation by running
```bash
cargo doc --no-deps
```
and look at unit tests to get a hint on how to interface with and use them.
## FFI Interface
RLN provides C-compatible bindings for integration with C, C++, Nim, and other languages through [safer_ffi](https://getditto.github.io/safer_ffi/).
The FFI layer is organized into several modules:
- [`ffi_rln.rs`](./src/ffi/ffi_rln.rs): Implements core RLN functionality, including initialization functions, proof generation, and proof verification.
- [`ffi_tree.rs`](./src/ffi/ffi_tree.rs): Provides all tree-related operations and helper functions for Merkle tree management.
- [`ffi_utils.rs`](./src/ffi/ffi_utils.rs): Contains all utility functions and structure definitions used across the FFI layer.
### Examples
Working examples demonstrating proof generation, proof verification and slashing in C and Nim:
- [C example](./ffi_c_examples/main.c) and [README](./ffi_c_examples/Readme.md)
- [Nim example](./ffi_nim_examples/main.nim) and [README](./ffi_nim_examples/Readme.md)
### Memory Management
- All **heap-allocated** objects returned from Rust FFI **must** be freed using their corresponding FFI `_free` functions; the sketch below shows the pattern behind them.
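As a loose sketch of the pattern behind these `_free` functions, using safer_ffi (the type and function names below are hypothetical, not the crate's actual exports):

```rust
use safer_ffi::prelude::*;

// A hypothetical opaque FFI type, standing in for handles such as CFr.
#[derive_ReprC]
#[repr(opaque)]
pub struct MyValue {
    inner: u64,
}

// Allocation: ownership of the boxed value crosses over to the caller.
#[ffi_export]
fn my_value_new(inner: u64) -> repr_c::Box<MyValue> {
    Box::new(MyValue { inner }).into()
}

// Deallocation: ownership returns to Rust, and dropping frees the memory.
// Every heap-allocated FFI object needs a matching `_free` like this one.
#[ffi_export]
fn my_value_free(value: repr_c::Box<MyValue>) {
    drop(value);
}
```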
## Detailed Protocol Flow
@@ -276,9 +312,20 @@ and look at unit tests to have an hint on how to interface and use them.
- Ensures rate-limiting constraints are satisfied
- Generates a nullifier to prevent double-usage
5. **Proof Verification**: Verify the proof without revealing the prover's identity
6. **Slashing Mechanism**: Detect and penalize double-usage attempts (the underlying arithmetic is sketched after this list)
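Step 6 works because each proof reveals one share `(x, y)` of a degree-1 Shamir polynomial whose constant term is the identity secret, so two shares produced under the same external nullifier suffice to interpolate it. A minimal sketch of that arithmetic (the field math only, not this crate's recovery API):

```rust
use rln::prelude::Fr;

// Two shares (x1, y1) and (x2, y2) on the same line y = k * x + secret,
// i.e. produced with the same external nullifier, reveal the secret.
// Assumes x1 != x2 (two distinct messages).
fn interpolate_secret(x1: Fr, y1: Fr, x2: Fr, y2: Fr) -> Fr {
    let slope = (y2 - y1) / (x2 - x1);
    y1 - slope * x1
}
```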
## Getting Involved
Zerokit RLN public and FFI APIs allow interaction with many more features than what is briefly showcased above.
We invite you to check our API documentation by running
```bash
cargo doc --no-deps
```
and look at unit tests to get a hint on how to interface with and use them.
- Check the [unit tests](https://github.com/vacp2p/zerokit/tree/master/rln/tests) for more usage examples
- [RFC specification](https://rfc.vac.dev/spec/32/) for the Rate-Limiting Nullifier protocol
- [RFC specification](https://rfc.vac.dev/vac/raw/rln-v2) for the Rate-Limiting Nullifier protocol
- [GitHub repository](https://github.com/vacp2p/zerokit) for the latest updates


@@ -1,25 +0,0 @@
use criterion::{criterion_group, criterion_main, Criterion};
use rln::circuit::{read_arkzkey_from_bytes_uncompressed, ARKZKEY_BYTES};
pub fn uncompressed_bench(c: &mut Criterion) {
let arkzkey = ARKZKEY_BYTES.to_vec();
let size = arkzkey.len() as f32;
println!(
"Size of uncompressed arkzkey: {:.2?} MB",
size / 1024.0 / 1024.0
);
c.bench_function("arkzkey::arkzkey_from_raw_uncompressed", |b| {
b.iter(|| {
let r = read_arkzkey_from_bytes_uncompressed(&arkzkey);
assert_eq!(r.is_ok(), true);
})
});
}
criterion_group! {
name = benches;
config = Criterion::default().sample_size(10);
targets = uncompressed_bench
}
criterion_main!(benches);


@@ -1,24 +0,0 @@
use criterion::{criterion_group, criterion_main, Criterion};
use rln::circuit::zkey::read_zkey;
use std::io::Cursor;
pub fn zkey_load_benchmark(c: &mut Criterion) {
let zkey = rln::circuit::ZKEY_BYTES.to_vec();
let size = zkey.len() as f32;
println!("Size of zkey: {:.2?} MB", size / 1024.0 / 1024.0);
c.bench_function("zkey::zkey_from_raw", |b| {
b.iter(|| {
let mut reader = Cursor::new(zkey.clone());
let r = read_zkey(&mut reader);
assert_eq!(r.is_ok(), true);
})
});
}
criterion_group! {
name = benches;
config = Criterion::default().sample_size(10);
targets = zkey_load_benchmark
}
criterion_main!(benches);


@@ -1,11 +1,11 @@
use criterion::{criterion_group, criterion_main, Criterion};
use rln::{circuit::Fr, pm_tree_adapter::PmTree};
use utils::ZerokitMerkleTree;
use rln::prelude::*;
use zerokit_utils::merkle_tree::ZerokitMerkleTree;
pub fn pmtree_benchmark(c: &mut Criterion) {
let mut tree = PmTree::default(2).unwrap();
let leaves: Vec<Fr> = (0..4).map(|s| Fr::from(s)).collect();
let leaves: Vec<Fr> = (0..4).map(Fr::from).collect();
c.bench_function("Pmtree::set", |b| {
b.iter(|| {
@@ -13,7 +13,7 @@ pub fn pmtree_benchmark(c: &mut Criterion) {
})
});
c.bench_function("Pmtree:delete", |b| {
c.bench_function("Pmtree::delete", |b| {
b.iter(|| {
tree.delete(0).unwrap();
})
@@ -26,12 +26,6 @@ pub fn pmtree_benchmark(c: &mut Criterion) {
})
});
c.bench_function("Pmtree::compute_root", |b| {
b.iter(|| {
tree.compute_root().unwrap();
})
});
c.bench_function("Pmtree::get", |b| {
b.iter(|| {
tree.get(0).unwrap();


@@ -1,18 +1,15 @@
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use rln::{
circuit::{Fr, TEST_TREE_HEIGHT},
hashers::PoseidonHash,
};
use utils::{FullMerkleTree, OptimalMerkleTree, ZerokitMerkleTree};
use rln::prelude::*;
use zerokit_utils::merkle_tree::{FullMerkleTree, OptimalMerkleTree, ZerokitMerkleTree};
pub fn get_leaves(n: u32) -> Vec<Fr> {
(0..n).map(|s| Fr::from(s)).collect()
(0..n).map(Fr::from).collect()
}
pub fn optimal_merkle_tree_poseidon_benchmark(c: &mut Criterion) {
c.bench_function("OptimalMerkleTree::<Poseidon>::full_height_gen", |b| {
c.bench_function("OptimalMerkleTree::<Poseidon>::full_depth_gen", |b| {
b.iter(|| {
OptimalMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
OptimalMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
})
});
@@ -20,7 +17,7 @@ pub fn optimal_merkle_tree_poseidon_benchmark(c: &mut Criterion) {
for &n in [1u32, 10, 100].iter() {
let leaves = get_leaves(n);
let mut tree = OptimalMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
let mut tree = OptimalMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
group.bench_function(
BenchmarkId::new("OptimalMerkleTree::<Poseidon>::set", n),
|b| {
@@ -41,9 +38,9 @@ pub fn optimal_merkle_tree_poseidon_benchmark(c: &mut Criterion) {
}
pub fn full_merkle_tree_poseidon_benchmark(c: &mut Criterion) {
c.bench_function("FullMerkleTree::<Poseidon>::full_height_gen", |b| {
c.bench_function("FullMerkleTree::<Poseidon>::full_depth_gen", |b| {
b.iter(|| {
FullMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
FullMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
})
});
@@ -51,7 +48,7 @@ pub fn full_merkle_tree_poseidon_benchmark(c: &mut Criterion) {
for &n in [1u32, 10, 100].iter() {
let leaves = get_leaves(n);
let mut tree = FullMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
let mut tree = FullMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
group.bench_function(
BenchmarkId::new("FullMerkleTree::<Poseidon>::set", n),
|b| {

rln/convert_zkey.sh Executable file

@@ -0,0 +1,56 @@
#!/bin/bash
# Convert zkey to arkzkey using /tmp directory
# Usage: ./convert_zkey.sh <path_to_zkey_file>
set -e
# Check input
if [ $# -eq 0 ]; then
echo "Usage: $0 <path_to_zkey_file>"
exit 1
fi
ZKEY_FILE="$1"
if [ ! -f "$ZKEY_FILE" ]; then
echo "Error: File '$ZKEY_FILE' does not exist"
exit 1
fi
# Get absolute path before changing directories
ZKEY_ABSOLUTE_PATH=$(realpath "$ZKEY_FILE")
# Create temp directory in /tmp
TEMP_DIR="/tmp/ark-zkey-$$"
echo "Using temp directory: $TEMP_DIR"
# Cleanup function
cleanup() {
echo "Cleaning up temp directory: $TEMP_DIR"
rm -rf "$TEMP_DIR"
}
# Setup cleanup trap
trap cleanup EXIT
# Create temp directory and clone ark-zkey
mkdir -p "$TEMP_DIR"
cd "$TEMP_DIR"
git clone https://github.com/seemenkina/ark-zkey.git
cd ark-zkey
cargo build
# Convert
cargo run --bin arkzkey-util "$ZKEY_ABSOLUTE_PATH"
# Check if arkzkey file was created (tool creates it in same directory as input)
ARKZKEY_FILE="${ZKEY_ABSOLUTE_PATH%.zkey}.arkzkey"
if [ ! -f "$ARKZKEY_FILE" ]; then
echo "Could not find generated .arkzkey file at $ARKZKEY_FILE"
exit 1
fi
echo "Conversion successful!"
echo "Output file: $ARKZKEY_FILE"


@@ -0,0 +1,47 @@
# RLN FFI C example
This example demonstrates how to use the RLN C FFI in both stateless and non-stateless modes.
## Non-stateless mode
### Compile lib non-stateless
```bash
cargo build -p rln
cargo run --features headers --bin generate-headers
mv -v rln.h rln/ffi_c_examples/
```
### Compile and run example non-stateless
```bash
cd rln/ffi_c_examples/
gcc -Wall main.c -o main -lrln -L../../target/debug
./main
```
## Stateless mode
### Compile lib stateless
```bash
cargo build -p rln --no-default-features --features stateless
cargo run --no-default-features --features stateless,headers --bin generate-headers
mv -v rln.h rln/ffi_c_examples/
```
### Compile and run example stateless
```bash
cd rln/ffi_c_examples/
gcc -Wall -DSTATELESS main.c -o main -lrln -L../../target/debug
./main
```
## Note
### Find C lib used by Rust
```bash
cargo +nightly rustc --release -p rln -- -Z unstable-options --print native-static-libs
```

rln/ffi_c_examples/main.c Normal file

@@ -0,0 +1,668 @@
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "rln.h"
int main(int argc, char const *const argv[])
{
printf("Creating RLN instance\n");
#ifdef STATELESS
CResult_FFI_RLN_ptr_Vec_uint8_t ffi_rln_new_result = ffi_rln_new();
#else
const char *config_path = "../resources/tree_depth_20/config.json";
CResult_FFI_RLN_ptr_Vec_uint8_t ffi_rln_new_result = ffi_rln_new(20, config_path);
#endif
if (!ffi_rln_new_result.ok)
{
fprintf(stderr, "Initial RLN instance creation error: %s\n", ffi_rln_new_result.err.ptr);
ffi_c_string_free(ffi_rln_new_result.err);
return EXIT_FAILURE;
}
FFI_RLN_t *rln = ffi_rln_new_result.ok;
printf("RLN instance created successfully\n");
printf("\nGenerating identity keys\n");
CResult_Vec_CFr_Vec_uint8_t keys_result = ffi_key_gen();
if (keys_result.err.ptr)
{
fprintf(stderr, "Key generation error: %s\n", keys_result.err.ptr);
ffi_c_string_free(keys_result.err);
return EXIT_FAILURE;
}
Vec_CFr_t keys = keys_result.ok;
const CFr_t *identity_secret = ffi_vec_cfr_get(&keys, 0);
const CFr_t *id_commitment = ffi_vec_cfr_get(&keys, 1);
printf("Identity generated\n");
Vec_uint8_t debug = ffi_cfr_debug(identity_secret);
printf(" - identity_secret = %s\n", debug.ptr);
ffi_c_string_free(debug);
debug = ffi_cfr_debug(id_commitment);
printf(" - id_commitment = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCreating message limit\n");
CFr_t *user_message_limit = ffi_uint_to_cfr(1);
debug = ffi_cfr_debug(user_message_limit);
printf(" - user_message_limit = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nComputing rate commitment\n");
CResult_CFr_ptr_Vec_uint8_t rate_commitment_result = ffi_poseidon_hash_pair(id_commitment, user_message_limit);
if (!rate_commitment_result.ok)
{
fprintf(stderr, "Rate commitment hash error: %s\n", rate_commitment_result.err.ptr);
ffi_c_string_free(rate_commitment_result.err);
return EXIT_FAILURE;
}
CFr_t *rate_commitment = rate_commitment_result.ok;
debug = ffi_cfr_debug(rate_commitment);
printf(" - rate_commitment = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCFr serialization: CFr <-> bytes\n");
Vec_uint8_t ser_rate_commitment = ffi_cfr_to_bytes_le(rate_commitment);
debug = ffi_vec_u8_debug(&ser_rate_commitment);
printf(" - serialized rate_commitment = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_CFr_ptr_Vec_uint8_t deser_rate_commitment_result = ffi_bytes_le_to_cfr(&ser_rate_commitment);
if (!deser_rate_commitment_result.ok)
{
fprintf(stderr, "Rate commitment deserialization error: %s\n", deser_rate_commitment_result.err.ptr);
ffi_c_string_free(deser_rate_commitment_result.err);
return EXIT_FAILURE;
}
CFr_t *deser_rate_commitment = deser_rate_commitment_result.ok;
debug = ffi_cfr_debug(deser_rate_commitment);
printf(" - deserialized rate_commitment = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_vec_u8_free(ser_rate_commitment);
ffi_cfr_free(deser_rate_commitment);
printf("\nVec<CFr> serialization: Vec<CFr> <-> bytes\n");
Vec_uint8_t ser_keys = ffi_vec_cfr_to_bytes_le(&keys);
debug = ffi_vec_u8_debug(&ser_keys);
printf(" - serialized keys = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_Vec_CFr_Vec_uint8_t deser_keys_result = ffi_bytes_le_to_vec_cfr(&ser_keys);
if (deser_keys_result.err.ptr)
{
fprintf(stderr, "Keys deserialization error: %s\n", deser_keys_result.err.ptr);
ffi_c_string_free(deser_keys_result.err);
return EXIT_FAILURE;
}
debug = ffi_vec_cfr_debug(&deser_keys_result.ok);
printf(" - deserialized keys = %s\n", debug.ptr);
ffi_c_string_free(debug);
Vec_CFr_t deser_keys = deser_keys_result.ok;
ffi_vec_cfr_free(deser_keys);
ffi_vec_u8_free(ser_keys);
#ifdef STATELESS
#define TREE_DEPTH 20
#define CFR_SIZE 32
printf("\nBuilding Merkle path for stateless mode\n");
CFr_t *default_leaf = ffi_cfr_zero();
CFr_t *default_hashes[TREE_DEPTH - 1];
CResult_CFr_ptr_Vec_uint8_t hash_result = ffi_poseidon_hash_pair(default_leaf, default_leaf);
if (!hash_result.ok)
{
fprintf(stderr, "Poseidon hash error: %s\n", hash_result.err.ptr);
ffi_c_string_free(hash_result.err);
return EXIT_FAILURE;
}
default_hashes[0] = hash_result.ok;
for (size_t i = 1; i < TREE_DEPTH - 1; i++)
{
hash_result = ffi_poseidon_hash_pair(default_hashes[i - 1], default_hashes[i - 1]);
if (!hash_result.ok)
{
fprintf(stderr, "Poseidon hash error: %s\n", hash_result.err.ptr);
ffi_c_string_free(hash_result.err);
return EXIT_FAILURE;
}
default_hashes[i] = hash_result.ok;
}
Vec_CFr_t path_elements = ffi_vec_cfr_new(TREE_DEPTH);
ffi_vec_cfr_push(&path_elements, default_leaf);
for (size_t i = 0; i < TREE_DEPTH - 1; i++)
{
ffi_vec_cfr_push(&path_elements, default_hashes[i]);
}
printf("\nVec<CFr> serialization: Vec<CFr> <-> bytes\n");
Vec_uint8_t ser_path_elements = ffi_vec_cfr_to_bytes_le(&path_elements);
debug = ffi_vec_u8_debug(&ser_path_elements);
printf(" - serialized path_elements = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_Vec_CFr_Vec_uint8_t deser_path_elements_result = ffi_bytes_le_to_vec_cfr(&ser_path_elements);
if (deser_path_elements_result.err.ptr)
{
fprintf(stderr, "Path elements deserialization error: %s\n", deser_path_elements_result.err.ptr);
ffi_c_string_free(deser_path_elements_result.err);
return EXIT_FAILURE;
}
debug = ffi_vec_cfr_debug(&deser_path_elements_result.ok);
printf(" - deserialized path_elements = %s\n", debug.ptr);
ffi_c_string_free(debug);
Vec_CFr_t deser_path_elements = deser_path_elements_result.ok;
ffi_vec_cfr_free(deser_path_elements);
ffi_vec_u8_free(ser_path_elements);
uint8_t path_index_arr[TREE_DEPTH] = {0};
Vec_uint8_t identity_path_index = {
.ptr = path_index_arr,
.len = TREE_DEPTH,
.cap = TREE_DEPTH};
printf("\nVec<uint8> serialization: Vec<uint8> <-> bytes\n");
Vec_uint8_t ser_path_index = ffi_vec_u8_to_bytes_le(&identity_path_index);
debug = ffi_vec_u8_debug(&ser_path_index);
printf(" - serialized path_index = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_Vec_uint8_Vec_uint8_t deser_path_index_result = ffi_bytes_le_to_vec_u8(&ser_path_index);
if (deser_path_index_result.err.ptr)
{
fprintf(stderr, "Path index deserialization error: %s\n", deser_path_index_result.err.ptr);
ffi_c_string_free(deser_path_index_result.err);
return EXIT_FAILURE;
}
debug = ffi_vec_u8_debug(&deser_path_index_result.ok);
printf(" - deserialized path_index = %s\n", debug.ptr);
ffi_c_string_free(debug);
Vec_uint8_t deser_path_index = deser_path_index_result.ok;
ffi_vec_u8_free(deser_path_index);
ffi_vec_u8_free(ser_path_index);
printf("\nComputing Merkle root for stateless mode\n");
printf(" - computing root for index 0 with rate_commitment\n");
CResult_CFr_ptr_Vec_uint8_t root_result = ffi_poseidon_hash_pair(rate_commitment, default_leaf);
if (!root_result.ok)
{
fprintf(stderr, "Poseidon hash error: %s\n", root_result.err.ptr);
ffi_c_string_free(root_result.err);
return EXIT_FAILURE;
}
CFr_t *computed_root = root_result.ok;
for (size_t i = 1; i < TREE_DEPTH; i++)
{
root_result = ffi_poseidon_hash_pair(computed_root, default_hashes[i - 1]);
if (!root_result.ok)
{
fprintf(stderr, "Poseidon hash error: %s\n", root_result.err.ptr);
ffi_c_string_free(root_result.err);
return EXIT_FAILURE;
}
CFr_t *next_root = root_result.ok;
ffi_cfr_free(computed_root);
computed_root = next_root;
}
debug = ffi_cfr_debug(computed_root);
printf(" - computed_root = %s\n", debug.ptr);
ffi_c_string_free(debug);
#else
printf("\nAdding rate_commitment to tree\n");
CBoolResult_t set_err = ffi_set_next_leaf(&rln, rate_commitment);
if (!set_err.ok)
{
fprintf(stderr, "Set next leaf error: %s\n", set_err.err.ptr);
ffi_c_string_free(set_err.err);
return EXIT_FAILURE;
}
size_t leaf_index = ffi_leaves_set(&rln) - 1;
printf(" - added to tree at index %zu\n", leaf_index);
printf("\nGetting Merkle proof\n");
CResult_FFI_MerkleProof_ptr_Vec_uint8_t proof_result = ffi_get_merkle_proof(&rln, leaf_index);
if (!proof_result.ok)
{
fprintf(stderr, "Get proof error: %s\n", proof_result.err.ptr);
ffi_c_string_free(proof_result.err);
return EXIT_FAILURE;
}
FFI_MerkleProof_t *merkle_proof = proof_result.ok;
printf(" - proof obtained (depth: %zu)\n", merkle_proof->path_elements.len);
#endif
printf("\nHashing signal\n");
uint8_t signal[32] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Vec_uint8_t signal_vec = {signal, 32, 32};
CResult_CFr_ptr_Vec_uint8_t x_result = ffi_hash_to_field_le(&signal_vec);
if (!x_result.ok)
{
fprintf(stderr, "Hash signal error: %s\n", x_result.err.ptr);
ffi_c_string_free(x_result.err);
return EXIT_FAILURE;
}
CFr_t *x = x_result.ok;
debug = ffi_cfr_debug(x);
printf(" - x = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nHashing epoch\n");
const char *epoch_str = "test-epoch";
Vec_uint8_t epoch_vec = {(uint8_t *)epoch_str, strlen(epoch_str), strlen(epoch_str)};
CResult_CFr_ptr_Vec_uint8_t epoch_result = ffi_hash_to_field_le(&epoch_vec);
if (!epoch_result.ok)
{
fprintf(stderr, "Hash epoch error: %s\n", epoch_result.err.ptr);
ffi_c_string_free(epoch_result.err);
return EXIT_FAILURE;
}
CFr_t *epoch = epoch_result.ok;
debug = ffi_cfr_debug(epoch);
printf(" - epoch = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nHashing RLN identifier\n");
const char *rln_id_str = "test-rln-identifier";
Vec_uint8_t rln_id_vec = {(uint8_t *)rln_id_str, strlen(rln_id_str), strlen(rln_id_str)};
CResult_CFr_ptr_Vec_uint8_t rln_identifier_result = ffi_hash_to_field_le(&rln_id_vec);
if (!rln_identifier_result.ok)
{
fprintf(stderr, "Hash RLN identifier error: %s\n", rln_identifier_result.err.ptr);
ffi_c_string_free(rln_identifier_result.err);
return EXIT_FAILURE;
}
CFr_t *rln_identifier = rln_identifier_result.ok;
debug = ffi_cfr_debug(rln_identifier);
printf(" - rln_identifier = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nComputing Poseidon hash for external nullifier\n");
CResult_CFr_ptr_Vec_uint8_t external_nullifier_result = ffi_poseidon_hash_pair(epoch, rln_identifier);
if (!external_nullifier_result.ok)
{
fprintf(stderr, "External nullifier hash error: %s\n", external_nullifier_result.err.ptr);
ffi_c_string_free(external_nullifier_result.err);
return EXIT_FAILURE;
}
CFr_t *external_nullifier = external_nullifier_result.ok;
debug = ffi_cfr_debug(external_nullifier);
printf(" - external_nullifier = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCreating message_id\n");
CFr_t *message_id = ffi_uint_to_cfr(0);
debug = ffi_cfr_debug(message_id);
printf(" - message_id = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCreating RLN Witness\n");
#ifdef STATELESS
CResult_FFI_RLNWitnessInput_ptr_Vec_uint8_t witness_result = ffi_rln_witness_input_new(
identity_secret,
user_message_limit,
message_id,
&path_elements,
&identity_path_index,
x,
external_nullifier);
if (!witness_result.ok)
{
fprintf(stderr, "RLN Witness creation error: %s\n", witness_result.err.ptr);
ffi_c_string_free(witness_result.err);
return EXIT_FAILURE;
}
FFI_RLNWitnessInput_t *witness = witness_result.ok;
printf("RLN Witness created successfully\n");
#else
CResult_FFI_RLNWitnessInput_ptr_Vec_uint8_t witness_result = ffi_rln_witness_input_new(
identity_secret,
user_message_limit,
message_id,
&merkle_proof->path_elements,
&merkle_proof->path_index,
x,
external_nullifier);
if (!witness_result.ok)
{
fprintf(stderr, "RLN Witness creation error: %s\n", witness_result.err.ptr);
ffi_c_string_free(witness_result.err);
return EXIT_FAILURE;
}
FFI_RLNWitnessInput_t *witness = witness_result.ok;
printf("RLN Witness created successfully\n");
#endif
printf("\nRLNWitnessInput serialization: RLNWitnessInput <-> bytes\n");
CResult_Vec_uint8_Vec_uint8_t ser_witness_result = ffi_rln_witness_to_bytes_le(&witness);
if (ser_witness_result.err.ptr)
{
fprintf(stderr, "Witness serialization error: %s\n", ser_witness_result.err.ptr);
ffi_c_string_free(ser_witness_result.err);
return EXIT_FAILURE;
}
Vec_uint8_t ser_witness = ser_witness_result.ok;
debug = ffi_vec_u8_debug(&ser_witness);
printf(" - serialized witness = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_FFI_RLNWitnessInput_ptr_Vec_uint8_t deser_witness_result = ffi_bytes_le_to_rln_witness(&ser_witness);
if (!deser_witness_result.ok)
{
fprintf(stderr, "Witness deserialization error: %s\n", deser_witness_result.err.ptr);
ffi_c_string_free(deser_witness_result.err);
return EXIT_FAILURE;
}
FFI_RLNWitnessInput_t *deser_witness = deser_witness_result.ok;
printf(" - witness deserialized successfully\n");
ffi_rln_witness_input_free(deser_witness);
ffi_vec_u8_free(ser_witness);
printf("\nGenerating RLN Proof\n");
CResult_FFI_RLNProof_ptr_Vec_uint8_t proof_gen_result = ffi_generate_rln_proof(
&rln,
&witness);
if (!proof_gen_result.ok)
{
fprintf(stderr, "Proof generation error: %s\n", proof_gen_result.err.ptr);
ffi_c_string_free(proof_gen_result.err);
return EXIT_FAILURE;
}
FFI_RLNProof_t *rln_proof = proof_gen_result.ok;
printf("Proof generated successfully\n");
printf("\nGetting proof values\n");
FFI_RLNProofValues_t *proof_values = ffi_rln_proof_get_values(&rln_proof);
CFr_t *y = ffi_rln_proof_values_get_y(&proof_values);
debug = ffi_cfr_debug(y);
printf(" - y = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(y);
CFr_t *nullifier = ffi_rln_proof_values_get_nullifier(&proof_values);
debug = ffi_cfr_debug(nullifier);
printf(" - nullifier = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(nullifier);
CFr_t *root = ffi_rln_proof_values_get_root(&proof_values);
debug = ffi_cfr_debug(root);
printf(" - root = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(root);
CFr_t *x_val = ffi_rln_proof_values_get_x(&proof_values);
debug = ffi_cfr_debug(x_val);
printf(" - x = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(x_val);
CFr_t *ext_nullifier = ffi_rln_proof_values_get_external_nullifier(&proof_values);
debug = ffi_cfr_debug(ext_nullifier);
printf(" - external_nullifier = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(ext_nullifier);
printf("\nRLNProof serialization: RLNProof <-> bytes\n");
CResult_Vec_uint8_Vec_uint8_t ser_proof_result = ffi_rln_proof_to_bytes_le(&rln_proof);
if (ser_proof_result.err.ptr)
{
fprintf(stderr, "Proof serialization error: %s\n", ser_proof_result.err.ptr);
ffi_c_string_free(ser_proof_result.err);
return EXIT_FAILURE;
}
Vec_uint8_t ser_proof = ser_proof_result.ok;
debug = ffi_vec_u8_debug(&ser_proof);
printf(" - serialized proof = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_FFI_RLNProof_ptr_Vec_uint8_t deser_proof_result = ffi_bytes_le_to_rln_proof(&ser_proof);
if (!deser_proof_result.ok)
{
fprintf(stderr, "Proof deserialization error: %s\n", deser_proof_result.err.ptr);
ffi_c_string_free(deser_proof_result.err);
return EXIT_FAILURE;
}
FFI_RLNProof_t *deser_proof = deser_proof_result.ok;
printf(" - proof deserialized successfully\n");
printf("\nRLNProofValues serialization: RLNProofValues <-> bytes\n");
Vec_uint8_t ser_proof_values = ffi_rln_proof_values_to_bytes_le(&proof_values);
debug = ffi_vec_u8_debug(&ser_proof_values);
printf(" - serialized proof_values = %s\n", debug.ptr);
ffi_c_string_free(debug);
CResult_FFI_RLNProofValues_ptr_Vec_uint8_t deser_proof_values_result = ffi_bytes_le_to_rln_proof_values(&ser_proof_values);
if (!deser_proof_values_result.ok)
{
fprintf(stderr, "Proof values deserialization error: %s\n", deser_proof_values_result.err.ptr);
ffi_c_string_free(deser_proof_values_result.err);
return EXIT_FAILURE;
}
FFI_RLNProofValues_t *deser_proof_values = deser_proof_values_result.ok;
printf(" - proof_values deserialized successfully\n");
CFr_t *deser_external_nullifier = ffi_rln_proof_values_get_external_nullifier(&deser_proof_values);
debug = ffi_cfr_debug(deser_external_nullifier);
printf(" - deserialized external_nullifier = %s\n", debug.ptr);
ffi_c_string_free(debug);
ffi_cfr_free(deser_external_nullifier);
ffi_rln_proof_values_free(deser_proof_values);
ffi_vec_u8_free(ser_proof_values);
ffi_rln_proof_free(deser_proof);
ffi_vec_u8_free(ser_proof);
printf("\nVerifying Proof\n");
#ifdef STATELESS
Vec_CFr_t roots = ffi_vec_cfr_from_cfr(computed_root);
CBoolResult_t verify_err = ffi_verify_with_roots(&rln, &rln_proof, &roots, x);
#else
CBoolResult_t verify_err = ffi_verify_rln_proof(&rln, &rln_proof, x);
#endif
if (!verify_err.ok)
{
fprintf(stderr, "Proof verification error: %s\n", verify_err.err.ptr);
ffi_c_string_free(verify_err.err);
return EXIT_FAILURE;
}
printf("Proof verified successfully\n");
ffi_rln_proof_free(rln_proof);
printf("\nSimulating double-signaling attack (same epoch, different message)\n");
printf("\nHashing second signal\n");
uint8_t signal2[32] = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20};
Vec_uint8_t signal2_vec = {signal2, 32, 32};
CResult_CFr_ptr_Vec_uint8_t x2_result = ffi_hash_to_field_le(&signal2_vec);
if (!x2_result.ok)
{
fprintf(stderr, "Hash second signal error: %s\n", x2_result.err.ptr);
ffi_c_string_free(x2_result.err);
return EXIT_FAILURE;
}
CFr_t *x2 = x2_result.ok;
debug = ffi_cfr_debug(x2);
printf(" - x2 = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCreating second message with the same id\n");
CFr_t *message_id2 = ffi_uint_to_cfr(0);
debug = ffi_cfr_debug(message_id2);
printf(" - message_id2 = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("\nCreating second RLN Witness\n");
#ifdef STATELESS
CResult_FFI_RLNWitnessInput_ptr_Vec_uint8_t witness_result2 = ffi_rln_witness_input_new(
identity_secret,
user_message_limit,
message_id2,
&path_elements,
&identity_path_index,
x2,
external_nullifier);
if (!witness_result2.ok)
{
fprintf(stderr, "Second RLN Witness creation error: %s\n", witness_result2.err.ptr);
ffi_c_string_free(witness_result2.err);
return EXIT_FAILURE;
}
FFI_RLNWitnessInput_t *witness2 = witness_result2.ok;
printf("Second RLN Witness created successfully\n");
#else
CResult_FFI_RLNWitnessInput_ptr_Vec_uint8_t witness_result2 = ffi_rln_witness_input_new(
identity_secret,
user_message_limit,
message_id2,
&merkle_proof->path_elements,
&merkle_proof->path_index,
x2,
external_nullifier);
if (!witness_result2.ok)
{
fprintf(stderr, "Second RLN Witness creation error: %s\n", witness_result2.err.ptr);
ffi_c_string_free(witness_result2.err);
return EXIT_FAILURE;
}
FFI_RLNWitnessInput_t *witness2 = witness_result2.ok;
printf("Second RLN Witness created successfully\n");
#endif
printf("\nGenerating second RLN Proof\n");
CResult_FFI_RLNProof_ptr_Vec_uint8_t proof_gen_result2 = ffi_generate_rln_proof(
&rln,
&witness2);
if (!proof_gen_result2.ok)
{
fprintf(stderr, "Second proof generation error: %s\n", proof_gen_result2.err.ptr);
ffi_c_string_free(proof_gen_result2.err);
return EXIT_FAILURE;
}
FFI_RLNProof_t *rln_proof2 = proof_gen_result2.ok;
printf("Second proof generated successfully\n");
FFI_RLNProofValues_t *proof_values2 = ffi_rln_proof_get_values(&rln_proof2);
printf("\nVerifying second proof\n");
#ifdef STATELESS
CBoolResult_t verify_err2 = ffi_verify_with_roots(&rln, &rln_proof2, &roots, x2);
#else
CBoolResult_t verify_err2 = ffi_verify_rln_proof(&rln, &rln_proof2, x2);
#endif
if (!verify_err2.ok)
{
fprintf(stderr, "Proof verification error: %s\n", verify_err2.err.ptr);
ffi_c_string_free(verify_err2.err);
return EXIT_FAILURE;
}
printf("Second proof verified successfully\n");
ffi_rln_proof_free(rln_proof2);
printf("\nRecovering identity secret\n");
CResult_CFr_ptr_Vec_uint8_t recover_result = ffi_recover_id_secret(&proof_values, &proof_values2);
if (!recover_result.ok)
{
fprintf(stderr, "Identity recovery error: %s\n", recover_result.err.ptr);
ffi_c_string_free(recover_result.err);
return EXIT_FAILURE;
}
CFr_t *recovered_secret = recover_result.ok;
debug = ffi_cfr_debug(recovered_secret);
printf(" - recovered_secret = %s\n", debug.ptr);
ffi_c_string_free(debug);
debug = ffi_cfr_debug(identity_secret);
printf(" - original_secret = %s\n", debug.ptr);
ffi_c_string_free(debug);
printf("Slashing successful: Identity is recovered!\n");
ffi_cfr_free(recovered_secret);
ffi_rln_proof_values_free(proof_values2);
ffi_rln_proof_values_free(proof_values);
ffi_cfr_free(x2);
ffi_cfr_free(message_id2);
#ifdef STATELESS
ffi_rln_witness_input_free(witness2);
ffi_rln_witness_input_free(witness);
ffi_vec_cfr_free(roots);
ffi_vec_cfr_free(path_elements);
for (size_t i = 0; i < TREE_DEPTH - 1; i++)
{
ffi_cfr_free(default_hashes[i]);
}
ffi_cfr_free(default_leaf);
ffi_cfr_free(computed_root);
#else
ffi_rln_witness_input_free(witness2);
ffi_rln_witness_input_free(witness);
ffi_merkle_proof_free(merkle_proof);
#endif
ffi_cfr_free(rate_commitment);
ffi_cfr_free(x);
ffi_cfr_free(epoch);
ffi_cfr_free(rln_identifier);
ffi_cfr_free(external_nullifier);
ffi_cfr_free(user_message_limit);
ffi_cfr_free(message_id);
ffi_vec_cfr_free(keys);
ffi_rln_free(rln);
return EXIT_SUCCESS;
}


@@ -0,0 +1,124 @@
# RLN FFI Nim example
This example demonstrates how to use the RLN C FFI from Nim in both stateless and non-stateless modes. It covers:
- Creating an RLN handle (stateless or with Merkle tree backend)
- Generating identity keys and commitments
- Building a witness (mock Merkle path in stateless mode, real Merkle proof in non-stateless mode)
- Generating and verifying a proof
- Serializing/deserializing FFI objects (CFr, Vec\<CFr>, RLNWitnessInput, RLNProof, RLNProofValues)
- Simulating a double-signaling attack and recovering the identity secret
## Build the RLN library
From the repository root:
```bash
# Stateless build (no tree APIs)
cargo build -p rln --release --no-default-features --features stateless
# Non-stateless build (with tree APIs)
cargo build -p rln --release
```
This produces the shared library in `target/release`:
- macOS: `librln.dylib`
- Linux: `librln.so`
- Windows: `rln.dll`
## Build the Nim example (two modes)
From this directory:
```bash
# Stateless mode (no tree APIs, uses mock Merkle path)
nim c -d:release -d:ffiStateless main.nim
# Non-stateless mode (uses exported tree APIs to insert leaf and fetch proof)
nim c -d:release main.nim
```
Notes:
- The example links dynamically. If your OS linker cannot find the library at runtime,
set an rpath or environment variable as shown below.
- The example auto-picks a platform-specific default library name.
You can override it with `-d:RLN_LIB:"/absolute/path/to/lib"` if needed.
## Run the example
Ensure the dynamic loader can find the RLN library, then run the binary.
macOS:
```bash
DYLD_LIBRARY_PATH=../../target/release ./main
```
Linux:
```bash
LD_LIBRARY_PATH=../../target/release ./main
```
Windows (PowerShell):
```powershell
$env:PATH = "$PWD\..\..\target\release;$env:PATH"
./main.exe
```
You should see detailed output showing each step, for example:
```text
Creating RLN instance
RLN instance created successfully
Generating identity keys
Identity generated
- identity_secret = ...
- id_commitment = ...
Creating message limit
- user_message_limit = ...
Computing rate commitment
- rate_commitment = ...
CFr serialization: CFr <-> bytes
- serialized rate_commitment = ...
- deserialized rate_commitment = ...
Vec<CFr> serialization: Vec<CFr> <-> bytes
- serialized keys = ...
- deserialized keys = ...
... (Merkle path, hashing, witness, proof, verification, and slashing steps) ...
Proof verified successfully
Slashing successful: Identity is recovered!
```
## What the example does
### Stateless mode
1. Creates an RLN handle via the stateless constructor.
2. Generates identity keys, sets a `user_message_limit` and `message_id`.
3. Hashes a signal, epoch, and RLN identifier to field elements.
4. Computes `rateCommitment = Poseidon(id_commitment, user_message_limit)`.
5. Builds a mock Merkle path for an empty depth-20 tree at index 0 (no exported tree APIs; see the sketch after this list):
- Path siblings: level 0 sibling is `0`, then each level uses precomputed default hashes `H(0,0)`, `H(H(0,0),H(0,0))`, ...
- Path indices: all zeros (left at every level)
- Root: folds the path upwards with `rateCommitment` at index 0
6. Builds the witness, generates the proof, and verifies it with `ffi_verify_with_roots`, passing a one-element roots vector containing the computed root.
7. Simulates a double-signaling attack and recovers the identity secret from two proofs.
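For reference, the mock-path construction from step 5, sketched in Rust with `poseidon_hash` from `rln::prelude` (the Nim example performs the same steps through the FFI):

```rust
use rln::prelude::{poseidon_hash, Fr};

// Mock Merkle path for an empty tree at index 0: the level-0 sibling is the
// default leaf 0, followed by H(0,0), H(H(0,0),H(0,0)), and so on.
fn mock_path_and_root(rate_commitment: Fr, depth: usize) -> (Vec<Fr>, Fr) {
    let mut siblings = Vec::with_capacity(depth);
    let mut default_hash = Fr::from(0); // default (empty) leaf
    for _ in 0..depth {
        siblings.push(default_hash);
        default_hash = poseidon_hash(&[default_hash, default_hash]);
    }
    // At index 0 the leaf is the left child at every level, so the root
    // folds upwards with the sibling always on the right
    let root = siblings
        .iter()
        .fold(rate_commitment, |acc, sibling| poseidon_hash(&[acc, *sibling]));
    (siblings, root)
}
```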
### Non-stateless mode
1. Creates an RLN handle with a Merkle tree backend and configuration.
2. Generates identity keys and computes `rateCommitment = Poseidon(id_commitment, user_message_limit)`.
3. Inserts the leaf with `ffi_set_next_leaf` and fetches a real Merkle path for index 0 via `ffi_get_merkle_proof`.
4. Builds the witness from the exported proof, generates the proof, and verifies with `ffi_verify_rln_proof` using the current tree root.
5. Simulates a double-signaling attack and recovers the identity secret from two proofs.


@@ -0,0 +1,940 @@
# Embed rpaths to find Cargo's built library relative to the executable
when defined(macosx):
{.passL: "-Wl,-rpath,@executable_path/../../target/release".}
when defined(linux):
{.passL: "-Wl,-rpath,'$ORIGIN/../../target/release'".}
# Portable dynlib name with override capability (-d:RLN_LIB:"...")
when defined(macosx):
const RLN_LIB* {.strdefine.} = "librln.dylib"
elif defined(linux):
const RLN_LIB* {.strdefine.} = "librln.so"
elif defined(windows):
const RLN_LIB* {.strdefine.} = "rln.dll"
else:
const RLN_LIB* {.strdefine.} = "rln"
# FFI objects
type
CSize* = csize_t
CFr* = object
FFI_RLN* = object
FFI_RLNProof* = object
FFI_RLNWitnessInput* = object
Vec_CFr* = object
dataPtr*: ptr CFr
len*: CSize
cap*: CSize
Vec_uint8* = object
dataPtr*: ptr uint8
len*: CSize
cap*: CSize
SliceRefU8* = object
dataPtr*: ptr uint8
len*: CSize
FFI_MerkleProof* = object
path_elements*: Vec_CFr
path_index*: Vec_uint8
CResultRLNPtrVecU8* = object
ok*: ptr FFI_RLN
err*: Vec_uint8
CResultProofPtrVecU8* = object
ok*: ptr FFI_RLNProof
err*: Vec_uint8
CResultWitnessInputPtrVecU8* = object
ok*: ptr FFI_RLNWitnessInput
err*: Vec_uint8
FFI_RLNProofValues* = object
CResultCFrPtrVecU8* = object
ok*: ptr CFr
err*: Vec_uint8
CResultRLNProofValuesPtrVecU8* = object
ok*: ptr FFI_RLNProofValues
err*: Vec_uint8
CResultMerkleProofPtrVecU8* = object
ok*: ptr FFI_MerkleProof
err*: Vec_uint8
CResultVecCFrVecU8* = object
ok*: Vec_CFr
err*: Vec_uint8
CResultVecU8VecU8* = object
ok*: Vec_uint8
err*: Vec_uint8
CResultBigIntJsonVecU8* = object
ok*: Vec_uint8
err*: Vec_uint8
CBoolResult* = object
ok*: bool
err*: Vec_uint8
# CFr functions
proc ffi_cfr_zero*(): ptr CFr {.importc: "ffi_cfr_zero", cdecl,
dynlib: RLN_LIB.}
proc ffi_cfr_one*(): ptr CFr {.importc: "ffi_cfr_one", cdecl, dynlib: RLN_LIB.}
proc ffi_cfr_free*(x: ptr CFr) {.importc: "ffi_cfr_free", cdecl,
dynlib: RLN_LIB.}
proc ffi_uint_to_cfr*(value: uint32): ptr CFr {.importc: "ffi_uint_to_cfr",
cdecl, dynlib: RLN_LIB.}
proc ffi_cfr_debug*(cfr: ptr CFr): Vec_uint8 {.importc: "ffi_cfr_debug", cdecl,
dynlib: RLN_LIB.}
proc ffi_cfr_to_bytes_le*(cfr: ptr CFr): Vec_uint8 {.importc: "ffi_cfr_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_cfr_to_bytes_be*(cfr: ptr CFr): Vec_uint8 {.importc: "ffi_cfr_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_cfr*(bytes: ptr Vec_uint8): CResultCFrPtrVecU8 {.importc: "ffi_bytes_le_to_cfr",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_cfr*(bytes: ptr Vec_uint8): CResultCFrPtrVecU8 {.importc: "ffi_bytes_be_to_cfr",
cdecl, dynlib: RLN_LIB.}
# Vec<CFr> functions
proc ffi_vec_cfr_new*(capacity: CSize): Vec_CFr {.importc: "ffi_vec_cfr_new",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_from_cfr*(cfr: ptr CFr): Vec_CFr {.importc: "ffi_vec_cfr_from_cfr",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_push*(v: ptr Vec_CFr, cfr: ptr CFr) {.importc: "ffi_vec_cfr_push",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_len*(v: ptr Vec_CFr): CSize {.importc: "ffi_vec_cfr_len",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_get*(v: ptr Vec_CFr, i: CSize): ptr CFr {.importc: "ffi_vec_cfr_get",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_to_bytes_le*(v: ptr Vec_CFr): Vec_uint8 {.importc: "ffi_vec_cfr_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_to_bytes_be*(v: ptr Vec_CFr): Vec_uint8 {.importc: "ffi_vec_cfr_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_vec_cfr*(bytes: ptr Vec_uint8): CResultVecCFrVecU8 {.importc: "ffi_bytes_le_to_vec_cfr",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_vec_cfr*(bytes: ptr Vec_uint8): CResultVecCFrVecU8 {.importc: "ffi_bytes_be_to_vec_cfr",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_debug*(v: ptr Vec_CFr): Vec_uint8 {.importc: "ffi_vec_cfr_debug",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_cfr_free*(v: Vec_CFr) {.importc: "ffi_vec_cfr_free", cdecl,
dynlib: RLN_LIB.}
# Vec<u8> functions
proc ffi_vec_u8_to_bytes_le*(v: ptr Vec_uint8): Vec_uint8 {.importc: "ffi_vec_u8_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_u8_to_bytes_be*(v: ptr Vec_uint8): Vec_uint8 {.importc: "ffi_vec_u8_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_vec_u8*(bytes: ptr Vec_uint8): CResultVecU8VecU8 {.importc: "ffi_bytes_le_to_vec_u8",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_vec_u8*(bytes: ptr Vec_uint8): CResultVecU8VecU8 {.importc: "ffi_bytes_be_to_vec_u8",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_u8_debug*(v: ptr Vec_uint8): Vec_uint8 {.importc: "ffi_vec_u8_debug",
cdecl, dynlib: RLN_LIB.}
proc ffi_vec_u8_free*(v: Vec_uint8) {.importc: "ffi_vec_u8_free", cdecl,
dynlib: RLN_LIB.}
# Hashing functions
proc ffi_hash_to_field_le*(input: ptr Vec_uint8): CResultCFrPtrVecU8 {.importc: "ffi_hash_to_field_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_hash_to_field_be*(input: ptr Vec_uint8): CResultCFrPtrVecU8 {.importc: "ffi_hash_to_field_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_poseidon_hash_pair*(a: ptr CFr,
b: ptr CFr): CResultCFrPtrVecU8 {.importc: "ffi_poseidon_hash_pair", cdecl,
dynlib: RLN_LIB.}
# Keygen function
proc ffi_key_gen*(): CResultVecCFrVecU8 {.importc: "ffi_key_gen", cdecl,
dynlib: RLN_LIB.}
proc ffi_seeded_key_gen*(seed: ptr Vec_uint8): CResultVecCFrVecU8 {.importc: "ffi_seeded_key_gen",
cdecl, dynlib: RLN_LIB.}
proc ffi_extended_key_gen*(): CResultVecCFrVecU8 {.importc: "ffi_extended_key_gen",
cdecl, dynlib: RLN_LIB.}
proc ffi_seeded_extended_key_gen*(seed: ptr Vec_uint8): CResultVecCFrVecU8 {.importc: "ffi_seeded_extended_key_gen",
cdecl, dynlib: RLN_LIB.}
# RLN instance functions
when defined(ffiStateless):
proc ffi_rln_new*(): CResultRLNPtrVecU8 {.importc: "ffi_rln_new", cdecl,
dynlib: RLN_LIB.}
proc ffi_rln_new_with_params*(zkey_data: ptr Vec_uint8,
graph_data: ptr Vec_uint8): CResultRLNPtrVecU8 {.importc: "ffi_rln_new_with_params",
cdecl, dynlib: RLN_LIB.}
else:
proc ffi_rln_new*(treeDepth: CSize, config: cstring): CResultRLNPtrVecU8 {.importc: "ffi_rln_new",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_new_with_params*(treeDepth: CSize, zkey_data: ptr Vec_uint8,
graph_data: ptr Vec_uint8, config: cstring): CResultRLNPtrVecU8 {.importc: "ffi_rln_new_with_params",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_free*(rln: ptr FFI_RLN) {.importc: "ffi_rln_free", cdecl,
dynlib: RLN_LIB.}
# Witness input functions
proc ffi_rln_witness_input_new*(
identity_secret: ptr CFr,
user_message_limit: ptr CFr,
message_id: ptr CFr,
path_elements: ptr Vec_CFr,
identity_path_index: ptr Vec_uint8,
x: ptr CFr,
external_nullifier: ptr CFr
): CResultWitnessInputPtrVecU8 {.importc: "ffi_rln_witness_input_new", cdecl,
dynlib: RLN_LIB.}
proc ffi_rln_witness_to_bytes_le*(witness: ptr ptr FFI_RLNWitnessInput): CResultVecU8VecU8 {.importc: "ffi_rln_witness_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_witness_to_bytes_be*(witness: ptr ptr FFI_RLNWitnessInput): CResultVecU8VecU8 {.importc: "ffi_rln_witness_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_rln_witness*(bytes: ptr Vec_uint8): CResultWitnessInputPtrVecU8 {.importc: "ffi_bytes_le_to_rln_witness",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_rln_witness*(bytes: ptr Vec_uint8): CResultWitnessInputPtrVecU8 {.importc: "ffi_bytes_be_to_rln_witness",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_witness_to_bigint_json*(witness: ptr ptr FFI_RLNWitnessInput): CResultBigIntJsonVecU8 {.importc: "ffi_rln_witness_to_bigint_json",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_witness_input_free*(witness: ptr FFI_RLNWitnessInput) {.importc: "ffi_rln_witness_input_free",
cdecl, dynlib: RLN_LIB.}
# Proof generation/verification functions
proc ffi_generate_rln_proof*(
rln: ptr ptr FFI_RLN,
witness: ptr ptr FFI_RLNWitnessInput
): CResultProofPtrVecU8 {.importc: "ffi_generate_rln_proof", cdecl,
dynlib: RLN_LIB.}
proc ffi_generate_rln_proof_with_witness*(
rln: ptr ptr FFI_RLN,
calculated_witness: ptr Vec_uint8,
witness: ptr ptr FFI_RLNWitnessInput
): CResultProofPtrVecU8 {.importc: "ffi_generate_rln_proof_with_witness",
cdecl, dynlib: RLN_LIB.}
when not defined(ffiStateless):
proc ffi_verify_rln_proof*(
rln: ptr ptr FFI_RLN,
proof: ptr ptr FFI_RLNProof,
x: ptr CFr
): CBoolResult {.importc: "ffi_verify_rln_proof", cdecl,
dynlib: RLN_LIB.}
proc ffi_verify_with_roots*(
rln: ptr ptr FFI_RLN,
proof: ptr ptr FFI_RLNProof,
roots: ptr Vec_CFr,
x: ptr CFr
): CBoolResult {.importc: "ffi_verify_with_roots", cdecl,
dynlib: RLN_LIB.}
proc ffi_rln_proof_free*(p: ptr FFI_RLNProof) {.importc: "ffi_rln_proof_free",
cdecl, dynlib: RLN_LIB.}
# Merkle tree operations (non-stateless mode)
when not defined(ffiStateless):
proc ffi_set_tree*(rln: ptr ptr FFI_RLN,
tree_depth: CSize): CBoolResult {.importc: "ffi_set_tree",
cdecl, dynlib: RLN_LIB.}
proc ffi_delete_leaf*(rln: ptr ptr FFI_RLN,
index: CSize): CBoolResult {.importc: "ffi_delete_leaf",
cdecl, dynlib: RLN_LIB.}
proc ffi_set_leaf*(rln: ptr ptr FFI_RLN, index: CSize,
leaf: ptr CFr): CBoolResult {.importc: "ffi_set_leaf",
cdecl, dynlib: RLN_LIB.}
proc ffi_get_leaf*(rln: ptr ptr FFI_RLN,
index: CSize): CResultCFrPtrVecU8 {.importc: "ffi_get_leaf",
cdecl, dynlib: RLN_LIB.}
proc ffi_set_next_leaf*(rln: ptr ptr FFI_RLN,
leaf: ptr CFr): CBoolResult {.importc: "ffi_set_next_leaf",
cdecl, dynlib: RLN_LIB.}
proc ffi_set_leaves_from*(rln: ptr ptr FFI_RLN, index: CSize,
leaves: ptr Vec_CFr): CBoolResult {.importc: "ffi_set_leaves_from",
cdecl, dynlib: RLN_LIB.}
proc ffi_init_tree_with_leaves*(rln: ptr ptr FFI_RLN,
leaves: ptr Vec_CFr): CBoolResult {.importc: "ffi_init_tree_with_leaves",
cdecl, dynlib: RLN_LIB.}
proc ffi_atomic_operation*(rln: ptr ptr FFI_RLN, index: CSize,
leaves: ptr Vec_CFr,
indices: ptr Vec_uint8): CBoolResult {.importc: "ffi_atomic_operation",
cdecl, dynlib: RLN_LIB.}
proc ffi_seq_atomic_operation*(rln: ptr ptr FFI_RLN, leaves: ptr Vec_CFr,
indices: ptr Vec_uint8): CBoolResult {.importc: "ffi_seq_atomic_operation",
cdecl, dynlib: RLN_LIB.}
proc ffi_get_root*(rln: ptr ptr FFI_RLN): ptr CFr {.importc: "ffi_get_root",
cdecl, dynlib: RLN_LIB.}
proc ffi_leaves_set*(rln: ptr ptr FFI_RLN): CSize {.importc: "ffi_leaves_set",
cdecl, dynlib: RLN_LIB.}
proc ffi_get_merkle_proof*(rln: ptr ptr FFI_RLN,
index: CSize): CResultMerkleProofPtrVecU8 {.importc: "ffi_get_merkle_proof",
cdecl, dynlib: RLN_LIB.}
proc ffi_set_metadata*(rln: ptr ptr FFI_RLN,
metadata: ptr Vec_uint8): CBoolResult {.importc: "ffi_set_metadata",
cdecl, dynlib: RLN_LIB.}
proc ffi_get_metadata*(rln: ptr ptr FFI_RLN): CResultVecU8VecU8 {.importc: "ffi_get_metadata",
cdecl, dynlib: RLN_LIB.}
proc ffi_flush*(rln: ptr ptr FFI_RLN): CBoolResult {.importc: "ffi_flush",
cdecl, dynlib: RLN_LIB.}
proc ffi_merkle_proof_free*(p: ptr FFI_MerkleProof) {.importc: "ffi_merkle_proof_free",
cdecl, dynlib: RLN_LIB.}
# Identity secret recovery
proc ffi_recover_id_secret*(proof_values_1: ptr ptr FFI_RLNProofValues,
proof_values_2: ptr ptr FFI_RLNProofValues): CResultCFrPtrVecU8 {.importc: "ffi_recover_id_secret",
cdecl, dynlib: RLN_LIB.}
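# Slashing: two proofs sharing the same external nullifier (and message id)
# expose two points on the same line y = a1*x + a0, so the identity secret a0
# can be interpolated from them.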
# RLNProof serialization
proc ffi_rln_proof_to_bytes_le*(proof: ptr ptr FFI_RLNProof): CResultVecU8VecU8 {.importc: "ffi_rln_proof_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_to_bytes_be*(proof: ptr ptr FFI_RLNProof): CResultVecU8VecU8 {.importc: "ffi_rln_proof_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_rln_proof*(bytes: ptr Vec_uint8): CResultProofPtrVecU8 {.importc: "ffi_bytes_le_to_rln_proof",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_rln_proof*(bytes: ptr Vec_uint8): CResultProofPtrVecU8 {.importc: "ffi_bytes_be_to_rln_proof",
cdecl, dynlib: RLN_LIB.}
# RLNProofValues functions
proc ffi_rln_proof_get_values*(proof: ptr ptr FFI_RLNProof): ptr FFI_RLNProofValues {.importc: "ffi_rln_proof_get_values",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_get_y*(pv: ptr ptr FFI_RLNProofValues): ptr CFr {.importc: "ffi_rln_proof_values_get_y",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_get_nullifier*(pv: ptr ptr FFI_RLNProofValues): ptr CFr {.importc: "ffi_rln_proof_values_get_nullifier",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_get_root*(pv: ptr ptr FFI_RLNProofValues): ptr CFr {.importc: "ffi_rln_proof_values_get_root",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_get_x*(pv: ptr ptr FFI_RLNProofValues): ptr CFr {.importc: "ffi_rln_proof_values_get_x",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_get_external_nullifier*(pv: ptr ptr FFI_RLNProofValues): ptr CFr {.importc: "ffi_rln_proof_values_get_external_nullifier",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_to_bytes_le*(pv: ptr ptr FFI_RLNProofValues): Vec_uint8 {.importc: "ffi_rln_proof_values_to_bytes_le",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_to_bytes_be*(pv: ptr ptr FFI_RLNProofValues): Vec_uint8 {.importc: "ffi_rln_proof_values_to_bytes_be",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_le_to_rln_proof_values*(bytes: ptr Vec_uint8): CResultRLNProofValuesPtrVecU8 {.importc: "ffi_bytes_le_to_rln_proof_values",
cdecl, dynlib: RLN_LIB.}
proc ffi_bytes_be_to_rln_proof_values*(bytes: ptr Vec_uint8): CResultRLNProofValuesPtrVecU8 {.importc: "ffi_bytes_be_to_rln_proof_values",
cdecl, dynlib: RLN_LIB.}
proc ffi_rln_proof_values_free*(pv: ptr FFI_RLNProofValues) {.importc: "ffi_rln_proof_values_free",
cdecl, dynlib: RLN_LIB.}
# Helper functions
proc asVecU8*(buf: var seq[uint8]): Vec_uint8 =
## Zero-copy view of a Nim seq as a Vec_uint8; the seq must outlive the view
## and must not be resized while the view is in use.
result.dataPtr = if buf.len == 0: nil else: addr buf[0]
result.len = CSize(buf.len)
result.cap = CSize(buf.len)
proc asString*(v: Vec_uint8): string =
## Copies the bytes of a Vec_uint8 into a Nim string (handy for error messages).
if v.dataPtr.isNil or v.len == 0: return ""
result = newString(v.len.int)
copyMem(addr result[0], v.dataPtr, v.len.int)
proc ffi_c_string_free*(s: Vec_uint8) {.importc: "ffi_c_string_free", cdecl,
dynlib: RLN_LIB.}
when isMainModule:
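# End-to-end demo: create an RLN instance, generate an identity, register it in
# the tree (or build a default Merkle path in stateless mode), create and verify
# a proof, then simulate double-signaling and recover the identity secret.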
echo "Creating RLN instance"
var rlnRes: CResultRLNPtrVecU8
when defined(ffiStateless):
rlnRes = ffi_rln_new()
else:
let config_path = """../resources/tree_depth_20/config.json""".cstring
rlnRes = ffi_rln_new(CSize(20), config_path)
if rlnRes.ok.isNil:
stderr.writeLine "Initial RLN instance creation error: ", asString(rlnRes.err)
ffi_c_string_free(rlnRes.err)
quit 1
var rln = rlnRes.ok
echo "RLN instance created successfully"
echo "\nGenerating identity keys"
var keysResult = ffi_key_gen()
if keysResult.err.dataPtr != nil:
let errMsg = asString(keysResult.err)
ffi_c_string_free(keysResult.err)
echo "Key generation error: ", errMsg
quit 1
var keys = keysResult.ok
let identitySecret = ffi_vec_cfr_get(addr keys, CSize(0))
let idCommitment = ffi_vec_cfr_get(addr keys, CSize(1))
echo "Identity generated"
block:
let debug = ffi_cfr_debug(identitySecret)
echo " - identity_secret = ", asString(debug)
ffi_c_string_free(debug)
block:
let debug = ffi_cfr_debug(idCommitment)
echo " - id_commitment = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCreating message limit"
let userMessageLimit = ffi_uint_to_cfr(1'u32)
block:
let debug = ffi_cfr_debug(userMessageLimit)
echo " - user_message_limit = ", asString(debug)
ffi_c_string_free(debug)
echo "\nComputing rate commitment"
let rateCommitmentResult = ffi_poseidon_hash_pair(idCommitment, userMessageLimit)
if rateCommitmentResult.ok.isNil:
let errMsg = asString(rateCommitmentResult.err)
ffi_c_string_free(rateCommitmentResult.err)
echo "Rate commitment hash error: ", errMsg
quit 1
let rateCommitment = rateCommitmentResult.ok
block:
let debug = ffi_cfr_debug(rateCommitment)
echo " - rate_commitment = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCFr serialization: CFr <-> bytes"
var serRateCommitment = ffi_cfr_to_bytes_be(rateCommitment)
block:
let debug = ffi_vec_u8_debug(addr serRateCommitment)
echo " - serialized rate_commitment = ", asString(debug)
ffi_c_string_free(debug)
let deserRateCommitmentResult = ffi_bytes_be_to_cfr(addr serRateCommitment)
if deserRateCommitmentResult.ok.isNil:
stderr.writeLine "Rate commitment deserialization error: ", asString(
deserRateCommitmentResult.err)
ffi_c_string_free(deserRateCommitmentResult.err)
quit 1
let deserRateCommitment = deserRateCommitmentResult.ok
block:
let debug = ffi_cfr_debug(deserRateCommitment)
echo " - deserialized rate_commitment = ", asString(debug)
ffi_c_string_free(debug)
ffi_vec_u8_free(serRateCommitment)
ffi_cfr_free(deserRateCommitment)
echo "\nVec<CFr> serialization: Vec<CFr> <-> bytes"
var serKeys = ffi_vec_cfr_to_bytes_be(addr keys)
block:
let debug = ffi_vec_u8_debug(addr serKeys)
echo " - serialized keys = ", asString(debug)
ffi_c_string_free(debug)
let deserKeysResult = ffi_bytes_be_to_vec_cfr(addr serKeys)
if deserKeysResult.err.dataPtr != nil:
stderr.writeLine "Keys deserialization error: ", asString(
deserKeysResult.err)
ffi_c_string_free(deserKeysResult.err)
quit 1
block:
var okKeys = deserKeysResult.ok
let debug = ffi_vec_cfr_debug(addr okKeys)
echo " - deserialized keys = ", asString(debug)
ffi_c_string_free(debug)
ffi_vec_cfr_free(deserKeysResult.ok)
ffi_vec_u8_free(serKeys)
when defined(ffiStateless):
const treeDepth = 20
echo "\nBuilding Merkle path for stateless mode"
let defaultLeaf = ffi_cfr_zero()
var defaultHashes: array[treeDepth-1, ptr CFr]
block:
let hashResult = ffi_poseidon_hash_pair(defaultLeaf, defaultLeaf)
if hashResult.ok.isNil:
let errMsg = asString(hashResult.err)
ffi_c_string_free(hashResult.err)
echo "Poseidon hash error: ", errMsg
quit 1
defaultHashes[0] = hashResult.ok
for i in 1..treeDepth-2:
let hashResult = ffi_poseidon_hash_pair(defaultHashes[i-1], defaultHashes[i-1])
if hashResult.ok.isNil:
let errMsg = asString(hashResult.err)
ffi_c_string_free(hashResult.err)
echo "Poseidon hash error: ", errMsg
quit 1
defaultHashes[i] = hashResult.ok
var pathElements = ffi_vec_cfr_new(CSize(treeDepth))
ffi_vec_cfr_push(addr pathElements, defaultLeaf)
for i in 0..treeDepth-2:
ffi_vec_cfr_push(addr pathElements, defaultHashes[i])
echo "\nVec<CFr> serialization: Vec<CFr> <-> bytes"
var serPathElements = ffi_vec_cfr_to_bytes_be(addr pathElements)
block:
let debug = ffi_vec_u8_debug(addr serPathElements)
echo " - serialized path_elements = ", asString(debug)
ffi_c_string_free(debug)
let deserPathElements = ffi_bytes_be_to_vec_cfr(addr serPathElements)
if deserPathElements.err.dataPtr != nil:
stderr.writeLine "Path elements deserialization error: ", asString(
deserPathElements.err)
ffi_c_string_free(deserPathElements.err)
quit 1
block:
var okPathElems = deserPathElements.ok
let debug = ffi_vec_cfr_debug(addr okPathElems)
echo " - deserialized path_elements = ", asString(debug)
ffi_c_string_free(debug)
ffi_vec_cfr_free(deserPathElements.ok)
ffi_vec_u8_free(serPathElements)
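# An all-zero identity_path_index encodes leaf index 0: at every level the
# current node is the left child and the sibling is on the right.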
var pathIndexSeq = newSeq[uint8](treeDepth)
var identityPathIndex = asVecU8(pathIndexSeq)
echo "\nVec<uint8> serialization: Vec<uint8> <-> bytes"
var serPathIndex = ffi_vec_u8_to_bytes_be(addr identityPathIndex)
block:
let debug = ffi_vec_u8_debug(addr serPathIndex)
echo " - serialized path_index = ", asString(debug)
ffi_c_string_free(debug)
let deserPathIndex = ffi_bytes_be_to_vec_u8(addr serPathIndex)
if deserPathIndex.err.dataPtr != nil:
stderr.writeLine "Path index deserialization error: ", asString(
deserPathIndex.err)
ffi_c_string_free(deserPathIndex.err)
quit 1
block:
var okPathIdx = deserPathIndex.ok
let debug = ffi_vec_u8_debug(addr okPathIdx)
echo " - deserialized path_index = ", asString(debug)
ffi_c_string_free(debug)
ffi_vec_u8_free(deserPathIndex.ok)
ffi_vec_u8_free(serPathIndex)
echo "\nComputing Merkle root for stateless mode"
echo " - computing root for index 0 with rate_commitment"
let rootResult = ffi_poseidon_hash_pair(rateCommitment, defaultLeaf)
if rootResult.ok.isNil:
let errMsg = asString(rootResult.err)
ffi_c_string_free(rootResult.err)
echo "Poseidon hash error: ", errMsg
quit 1
var computedRoot = rootResult.ok
for i in 1..treeDepth-1:
let nextResult = ffi_poseidon_hash_pair(computedRoot, defaultHashes[i-1])
if nextResult.ok.isNil:
let errMsg = asString(nextResult.err)
ffi_c_string_free(nextResult.err)
echo "Poseidon hash error: ", errMsg
quit 1
let next = nextResult.ok
ffi_cfr_free(computedRoot)
computedRoot = next
block:
let debug = ffi_cfr_debug(computedRoot)
echo " - computed_root = ", asString(debug)
ffi_c_string_free(debug)
else:
echo "\nAdding rate_commitment to tree"
let setErr = ffi_set_next_leaf(addr rln, rateCommitment)
if not setErr.ok:
stderr.writeLine "Set next leaf error: ", asString(setErr.err)
ffi_c_string_free(setErr.err)
quit 1
let leafIndex = ffi_leaves_set(addr rln) - 1
echo " - added to tree at index ", leafIndex
echo "\nGetting Merkle proof"
let proofResult = ffi_get_merkle_proof(addr rln, leafIndex)
if proofResult.ok.isNil:
stderr.writeLine "Get proof error: ", asString(proofResult.err)
ffi_c_string_free(proofResult.err)
quit 1
let merkleProof = proofResult.ok
echo " - proof obtained (depth: ", merkleProof.path_elements.len, ")"
echo "\nHashing signal"
var signal: array[32, uint8] = [1'u8, 2, 3, 4, 5, 6, 7, 8, 9, 10, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
var signalVec = Vec_uint8(dataPtr: cast[ptr uint8](addr signal[0]),
len: CSize(signal.len), cap: CSize(signal.len))
let xResult = ffi_hash_to_field_be(addr signalVec)
if xResult.ok.isNil:
stderr.writeLine "Hash signal error: ", asString(xResult.err)
ffi_c_string_free(xResult.err)
quit 1
let x = xResult.ok
block:
let debug = ffi_cfr_debug(x)
echo " - x = ", asString(debug)
ffi_c_string_free(debug)
echo "\nHashing epoch"
let epochStr = "test-epoch"
var epochBytes = newSeq[uint8](epochStr.len)
for i in 0..<epochStr.len: epochBytes[i] = uint8(epochStr[i])
var epochVec = asVecU8(epochBytes)
let epochResult = ffi_hash_to_field_be(addr epochVec)
if epochResult.ok.isNil:
stderr.writeLine "Hash epoch error: ", asString(epochResult.err)
ffi_c_string_free(epochResult.err)
quit 1
let epoch = epochResult.ok
block:
let debug = ffi_cfr_debug(epoch)
echo " - epoch = ", asString(debug)
ffi_c_string_free(debug)
echo "\nHashing RLN identifier"
let rlnIdStr = "test-rln-identifier"
var rlnIdBytes = newSeq[uint8](rlnIdStr.len)
for i in 0..<rlnIdStr.len: rlnIdBytes[i] = uint8(rlnIdStr[i])
var rlnIdVec = asVecU8(rlnIdBytes)
let rlnIdentifierResult = ffi_hash_to_field_be(addr rlnIdVec)
if rlnIdentifierResult.ok.isNil:
stderr.writeLine "Hash RLN identifier error: ", asString(
rlnIdentifierResult.err)
ffi_c_string_free(rlnIdentifierResult.err)
quit 1
let rlnIdentifier = rlnIdentifierResult.ok
block:
let debug = ffi_cfr_debug(rlnIdentifier)
echo " - rln_identifier = ", asString(debug)
ffi_c_string_free(debug)
echo "\nComputing Poseidon hash for external nullifier"
let externalNullifierResult = ffi_poseidon_hash_pair(epoch, rlnIdentifier)
if externalNullifierResult.ok.isNil:
let errMsg = asString(externalNullifierResult.err)
ffi_c_string_free(externalNullifierResult.err)
echo "External nullifier hash error: ", errMsg
quit 1
let externalNullifier = externalNullifierResult.ok
block:
let debug = ffi_cfr_debug(externalNullifier)
echo " - external_nullifier = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCreating message_id"
let messageId = ffi_uint_to_cfr(0'u32)
block:
let debug = ffi_cfr_debug(messageId)
echo " - message_id = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCreating RLN Witness"
when defined(ffiStateless):
var witnessRes = ffi_rln_witness_input_new(identitySecret,
userMessageLimit, messageId, addr pathElements, addr identityPathIndex,
x, externalNullifier)
if witnessRes.ok.isNil:
stderr.writeLine "RLN Witness creation error: ", asString(witnessRes.err)
ffi_c_string_free(witnessRes.err)
quit 1
var witness = witnessRes.ok
echo "RLN Witness created successfully"
else:
var witnessRes = ffi_rln_witness_input_new(identitySecret,
userMessageLimit, messageId, addr merkleProof.path_elements,
addr merkleProof.path_index, x, externalNullifier)
if witnessRes.ok.isNil:
stderr.writeLine "RLN Witness creation error: ", asString(witnessRes.err)
ffi_c_string_free(witnessRes.err)
quit 1
var witness = witnessRes.ok
echo "RLN Witness created successfully"
echo "\nRLNWitnessInput serialization: RLNWitnessInput <-> bytes"
let serWitnessResult = ffi_rln_witness_to_bytes_be(addr witness)
if serWitnessResult.err.dataPtr != nil:
stderr.writeLine "Witness serialization error: ", asString(
serWitnessResult.err)
ffi_c_string_free(serWitnessResult.err)
quit 1
var serWitness = serWitnessResult.ok
block:
let debug = ffi_vec_u8_debug(addr serWitness)
echo " - serialized witness = ", asString(debug)
ffi_c_string_free(debug)
let deserWitnessResult = ffi_bytes_be_to_rln_witness(addr serWitness)
if deserWitnessResult.ok.isNil:
stderr.writeLine "Witness deserialization error: ", asString(
deserWitnessResult.err)
ffi_c_string_free(deserWitnessResult.err)
quit 1
echo " - witness deserialized successfully"
ffi_rln_witness_input_free(deserWitnessResult.ok)
ffi_vec_u8_free(serWitness)
echo "\nGenerating RLN Proof"
var proofRes = ffi_generate_rln_proof(addr rln, addr witness)
if proofRes.ok.isNil:
stderr.writeLine "Proof generation error: ", asString(proofRes.err)
ffi_c_string_free(proofRes.err)
quit 1
var proof = proofRes.ok
echo "Proof generated successfully"
echo "\nGetting proof values"
var proofValues = ffi_rln_proof_get_values(addr proof)
block:
let y = ffi_rln_proof_values_get_y(addr proofValues)
let debug = ffi_cfr_debug(y)
echo " - y = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(y)
block:
let nullifier = ffi_rln_proof_values_get_nullifier(addr proofValues)
let debug = ffi_cfr_debug(nullifier)
echo " - nullifier = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(nullifier)
block:
let root = ffi_rln_proof_values_get_root(addr proofValues)
let debug = ffi_cfr_debug(root)
echo " - root = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(root)
block:
let xVal = ffi_rln_proof_values_get_x(addr proofValues)
let debug = ffi_cfr_debug(xVal)
echo " - x = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(xVal)
block:
let extNullifier = ffi_rln_proof_values_get_external_nullifier(
addr proofValues)
let debug = ffi_cfr_debug(extNullifier)
echo " - external_nullifier = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(extNullifier)
echo "\nRLNProof serialization: RLNProof <-> bytes"
let serProofResult = ffi_rln_proof_to_bytes_be(addr proof)
if serProofResult.err.dataPtr != nil:
stderr.writeLine "Proof serialization error: ", asString(serProofResult.err)
ffi_c_string_free(serProofResult.err)
quit 1
var serProof = serProofResult.ok
block:
let debug = ffi_vec_u8_debug(addr serProof)
echo " - serialized proof = ", asString(debug)
ffi_c_string_free(debug)
let deserProofResult = ffi_bytes_be_to_rln_proof(addr serProof)
if deserProofResult.ok.isNil:
stderr.writeLine "Proof deserialization error: ", asString(
deserProofResult.err)
ffi_c_string_free(deserProofResult.err)
quit 1
var deserProof = deserProofResult.ok
echo " - proof deserialized successfully"
echo "\nRLNProofValues serialization: RLNProofValues <-> bytes"
var serProofValues = ffi_rln_proof_values_to_bytes_be(addr proofValues)
block:
let debug = ffi_vec_u8_debug(addr serProofValues)
echo " - serialized proof_values = ", asString(debug)
ffi_c_string_free(debug)
let deserProofValuesResult = ffi_bytes_be_to_rln_proof_values(
addr serProofValues)
if deserProofValuesResult.ok.isNil:
stderr.writeLine "Proof values deserialization error: ", asString(
deserProofValuesResult.err)
ffi_c_string_free(deserProofValuesResult.err)
quit 1
var deserProofValues = deserProofValuesResult.ok
echo " - proof_values deserialized successfully"
block:
let deserExternalNullifier = ffi_rln_proof_values_get_external_nullifier(
addr deserProofValues)
let debug = ffi_cfr_debug(deserExternalNullifier)
echo " - deserialized external_nullifier = ", asString(debug)
ffi_c_string_free(debug)
ffi_cfr_free(deserExternalNullifier)
ffi_rln_proof_values_free(deserProofValues)
ffi_vec_u8_free(serProofValues)
ffi_rln_proof_free(deserProof)
ffi_vec_u8_free(serProof)
echo "\nVerifying Proof"
when defined(ffiStateless):
var roots = ffi_vec_cfr_from_cfr(computedRoot)
let verifyErr = ffi_verify_with_roots(addr rln, addr proof, addr roots, x)
else:
let verifyErr = ffi_verify_rln_proof(addr rln, addr proof, x)
if not verifyErr.ok:
stderr.writeLine "Proof verification error: ", asString(verifyErr.err)
ffi_c_string_free(verifyErr.err)
quit 1
echo "Proof verified successfully"
ffi_rln_proof_free(proof)
echo "\nSimulating double-signaling attack (same epoch, different message)"
echo "\nHashing second signal"
var signal2: array[32, uint8] = [11'u8, 12, 13, 14, 15, 16, 17, 18, 19, 20, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
var signal2Vec = Vec_uint8(dataPtr: cast[ptr uint8](addr signal2[0]),
len: CSize(signal2.len), cap: CSize(signal2.len))
let x2Result = ffi_hash_to_field_be(addr signal2Vec)
if x2Result.ok.isNil:
stderr.writeLine "Hash second signal error: ", asString(x2Result.err)
ffi_c_string_free(x2Result.err)
quit 1
let x2 = x2Result.ok
block:
let debug = ffi_cfr_debug(x2)
echo " - x2 = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCreating second message with the same id"
let messageId2 = ffi_uint_to_cfr(0'u32)
block:
let debug = ffi_cfr_debug(messageId2)
echo " - message_id2 = ", asString(debug)
ffi_c_string_free(debug)
echo "\nCreating second RLN Witness"
when defined(ffiStateless):
var witnessRes2 = ffi_rln_witness_input_new(identitySecret,
userMessageLimit, messageId2, addr pathElements, addr identityPathIndex,
x2, externalNullifier)
if witnessRes2.ok.isNil:
stderr.writeLine "Second RLN Witness creation error: ", asString(
witnessRes2.err)
ffi_c_string_free(witnessRes2.err)
quit 1
var witness2 = witnessRes2.ok
echo "Second RLN Witness created successfully"
else:
var witnessRes2 = ffi_rln_witness_input_new(identitySecret,
userMessageLimit, messageId2, addr merkleProof.path_elements,
addr merkleProof.path_index, x2, externalNullifier)
if witnessRes2.ok.isNil:
stderr.writeLine "Second RLN Witness creation error: ", asString(
witnessRes2.err)
ffi_c_string_free(witnessRes2.err)
quit 1
var witness2 = witnessRes2.ok
echo "Second RLN Witness created successfully"
echo "\nGenerating second RLN Proof"
var proofRes2 = ffi_generate_rln_proof(addr rln, addr witness2)
if proofRes2.ok.isNil:
stderr.writeLine "Second proof generation error: ", asString(proofRes2.err)
ffi_c_string_free(proofRes2.err)
quit 1
var proof2 = proofRes2.ok
echo "Second proof generated successfully"
var proofValues2 = ffi_rln_proof_get_values(addr proof2)
echo "\nVerifying second proof"
when defined(ffiStateless):
let verifyErr2 = ffi_verify_with_roots(addr rln, addr proof2, addr roots, x2)
else:
let verifyErr2 = ffi_verify_rln_proof(addr rln, addr proof2, x2)
if not verifyErr2.ok:
stderr.writeLine "Proof verification error: ", asString(
verifyErr2.err)
ffi_c_string_free(verifyErr2.err)
quit 1
echo "Second proof verified successfully"
echo "\nRecovering identity secret"
let recoverRes = ffi_recover_id_secret(addr proofValues, addr proofValues2)
if recoverRes.ok.isNil:
stderr.writeLine "Identity recovery error: ", asString(recoverRes.err)
ffi_c_string_free(recoverRes.err)
quit 1
let recoveredSecret = recoverRes.ok
block:
let debug = ffi_cfr_debug(recoveredSecret)
echo " - recovered_secret = ", asString(debug)
ffi_c_string_free(debug)
block:
let debug = ffi_cfr_debug(identitySecret)
echo " - original_secret = ", asString(debug)
ffi_c_string_free(debug)
echo "Slashing successful: Identity is recovered!"
ffi_cfr_free(recoveredSecret)
ffi_rln_proof_values_free(proofValues2)
ffi_rln_proof_values_free(proofValues)
ffi_rln_proof_free(proof2)
ffi_cfr_free(x2)
ffi_cfr_free(messageId2)
when defined(ffiStateless):
ffi_rln_witness_input_free(witness2)
ffi_rln_witness_input_free(witness)
ffi_vec_cfr_free(roots)
ffi_vec_cfr_free(pathElements)
for i in 0..treeDepth-2:
ffi_cfr_free(defaultHashes[i])
ffi_cfr_free(defaultLeaf)
ffi_cfr_free(computedRoot)
else:
ffi_rln_witness_input_free(witness2)
ffi_rln_witness_input_free(witness)
ffi_merkle_proof_free(merkleProof)
ffi_cfr_free(rateCommitment)
ffi_cfr_free(x)
ffi_cfr_free(epoch)
ffi_cfr_free(rlnIdentifier)
ffi_cfr_free(externalNullifier)
ffi_cfr_free(userMessageLimit)
ffi_cfr_free(messageId)
ffi_vec_cfr_free(keys)
ffi_rln_free(rln)


@@ -0,0 +1,9 @@
{
"path": "./database",
"temporary": false,
"cache_capacity": 1073741824,
"flush_every_ms": 500,
"mode": "HighThroughput",
"use_compression": false,
"tree_depth": 20
}
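These are sled-style settings consumed by the tree adapter; `tree_depth` is read by zerokit itself rather than by sled. A minimal sketch, assuming the adapter forwards the remaining knobs to `sled::Config`:

// Sketch only: assumes the adapter maps these JSON knobs onto sled::Config;
// `tree_depth` is interpreted by zerokit's tree adapter, not by sled.
fn open_db() -> sled::Result<sled::Db> {
    sled::Config::new()
        .path("./database")
        .temporary(false)
        .cache_capacity(1_073_741_824) // bytes, i.e. 1 GiB
        .flush_every_ms(Some(500))
        .mode(sled::Mode::HighThroughput)
        .use_compression(false)
        .open()
}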


@@ -0,0 +1,5 @@
use rln::ffi;
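// Small helper binary: presumably emits the C header(s) for the rln FFI surface.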
fn main() -> std::io::Result<()> {
ffi::generate_headers()
}

rln/src/circuit/error.rs

@@ -0,0 +1,25 @@
/// Errors that can occur during zkey reading operations
#[derive(Debug, thiserror::Error)]
pub enum ZKeyReadError {
#[error("Empty zkey bytes provided")]
EmptyBytes,
#[error("{0}")]
SerializationError(#[from] ark_serialize::SerializationError),
}
/// Errors that can occur during witness calculation
#[derive(Debug, thiserror::Error)]
pub enum WitnessCalcError {
#[error("Failed to deserialize witness calculation graph: {0}")]
GraphDeserialization(#[from] std::io::Error),
#[error("Failed to evaluate witness calculation graph: {0}")]
GraphEvaluation(String),
#[error("Invalid input length for '{name}': expected {expected}, got {actual}")]
InvalidInputLength {
name: String,
expected: usize,
actual: usize,
},
#[error("Missing required input: {0}")]
MissingInput(String),
}
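For reference, a minimal standalone sketch (hypothetical `DemoError`, not code from the crate) of how these derives behave: `#[error(...)]` supplies the Display text, and `#[from]` generates the From impl that lets `?` convert the inner error automatically.

use std::io;

#[derive(Debug, thiserror::Error)]
enum DemoError {
    #[error("Missing required input: {0}")]
    MissingInput(String),
    #[error("Failed to deserialize witness calculation graph: {0}")]
    Graph(#[from] io::Error),
}

fn read_graph() -> Result<(), DemoError> {
    fn inner() -> io::Result<()> {
        Err(io::Error::new(io::ErrorKind::UnexpectedEof, "truncated graph"))
    }
    inner()?; // io::Error -> DemoError::Graph via #[from]
    Ok(())
}

fn main() {
    println!("{}", DemoError::MissingInput("identitySecret".into()));
    println!("{}", read_graph().unwrap_err());
}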

rln/src/circuit/iden3calc/mod.rs

@@ -1,35 +1,66 @@
// This file is based on the code by iden3. Its preimage can be found here:
// This crate is based on the code by iden3. Its preimage can be found here:
// https://github.com/iden3/circom-witnesscalc/blob/5cb365b6e4d9052ecc69d4567fcf5bc061c20e94/src/lib.rs
pub mod graph;
pub mod proto;
pub mod storage;
mod graph;
mod proto;
mod storage;
use ruint::aliases::U256;
use std::collections::HashMap;
use graph::Node;
use ruint::aliases::U256;
use storage::deserialize_witnesscalc_graph;
use zeroize::zeroize_flat_type;
use crate::circuit::Fr;
use graph::{fr_to_u256, Node};
use self::graph::fr_to_u256;
use super::{error::WitnessCalcError, Fr};
use crate::utils::FrOrSecret;
pub type InputSignalsInfo = HashMap<String, (usize, usize)>;
pub(crate) type InputSignalsInfo = HashMap<String, (usize, usize)>;
pub fn calc_witness<I: IntoIterator<Item = (String, Vec<Fr>)>>(
pub(crate) fn calc_witness<I: IntoIterator<Item = (String, Vec<FrOrSecret>)>>(
inputs: I,
graph_data: &[u8],
) -> Vec<Fr> {
let inputs: HashMap<String, Vec<U256>> = inputs
) -> Result<Vec<Fr>, WitnessCalcError> {
let mut inputs: HashMap<String, Vec<U256>> = inputs
.into_iter()
.map(|(key, value)| (key, value.iter().map(fr_to_u256).collect()))
.map(|(key, value)| {
(
key,
value
.iter()
.map(|f_| match f_ {
FrOrSecret::IdSecret(s) => s.to_u256(),
FrOrSecret::Fr(f) => fr_to_u256(f),
})
.collect(),
)
})
.collect();
let (nodes, signals, input_mapping): (Vec<Node>, Vec<usize>, InputSignalsInfo) =
deserialize_witnesscalc_graph(std::io::Cursor::new(graph_data)).unwrap();
deserialize_witnesscalc_graph(std::io::Cursor::new(graph_data))?;
let mut inputs_buffer = get_inputs_buffer(get_inputs_size(&nodes));
populate_inputs(&inputs, &input_mapping, &mut inputs_buffer);
graph::evaluate(&nodes, inputs_buffer.as_slice(), &signals)
populate_inputs(&inputs, &input_mapping, &mut inputs_buffer)?;
if let Some(v) = inputs.get_mut("identitySecret") {
// DO NOT USE: unsafe { zeroize_flat_type(v) } only clears the Vec pointer, not the data—can cause memory leaks
for val in v.iter_mut() {
unsafe { zeroize_flat_type(val) };
}
}
let res = graph::evaluate(&nodes, inputs_buffer.as_slice(), &signals)
.map_err(WitnessCalcError::GraphEvaluation)?;
for val in inputs_buffer.iter_mut() {
unsafe { zeroize_flat_type(val) };
}
Ok(res)
}
fn get_inputs_size(nodes: &[Node]) -> usize {
@@ -52,17 +83,26 @@ fn populate_inputs(
input_list: &HashMap<String, Vec<U256>>,
inputs_info: &InputSignalsInfo,
input_buffer: &mut [U256],
) {
) -> Result<(), WitnessCalcError> {
for (key, value) in input_list {
let (offset, len) = inputs_info[key];
if len != value.len() {
panic!("Invalid input length for {}", key);
let (offset, len) = inputs_info
.get(key)
.ok_or_else(|| WitnessCalcError::MissingInput(key.clone()))?;
if *len != value.len() {
return Err(WitnessCalcError::InvalidInputLength {
name: key.clone(),
expected: *len,
actual: value.len(),
});
}
for (i, v) in value.iter().enumerate() {
input_buffer[offset + i] = *v;
}
}
Ok(())
}
/// Allocates inputs vec with position 0 set to 1

rln/src/circuit/iden3calc/graph.rs

@@ -1,22 +1,17 @@
// This file is based on the code by iden3. Its preimage can be found here:
// This crate is based on the code by iden3. Its preimage can be found here:
// https://github.com/iden3/circom-witnesscalc/blob/5cb365b6e4d9052ecc69d4567fcf5bc061c20e94/src/graph.rs
use std::cmp::Ordering;
use ark_ff::{BigInt, BigInteger, One, PrimeField, Zero};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize, Compress, Validate};
use rand::Rng;
use ruint::{aliases::U256, uint};
use serde::{Deserialize, Serialize};
use std::{
cmp::Ordering,
collections::HashMap,
error::Error,
ops::{BitAnd, BitOr, BitXor, Deref, Shl, Shr},
};
use crate::circuit::iden3calc::proto;
use super::proto;
use crate::circuit::Fr;
pub const M: U256 =
const M: U256 =
uint!(21888242871839275222246405745257275088548364400416034343698204186575808495617_U256);
fn ark_se<S, A: CanonicalSerialize>(a: &A, s: S) -> Result<S::Ok, S::Error>
@@ -39,17 +34,18 @@ where
}
#[inline(always)]
pub fn fr_to_u256(x: &Fr) -> U256 {
pub(crate) fn fr_to_u256(x: &Fr) -> U256 {
U256::from_limbs(x.into_bigint().0)
}
#[inline(always)]
pub fn u256_to_fr(x: &U256) -> Fr {
Fr::from_bigint(BigInt::new(x.into_limbs())).expect("Failed to convert U256 to Fr")
pub(crate) fn u256_to_fr(x: &U256) -> Result<Fr, String> {
Fr::from_bigint(BigInt::new(x.into_limbs()))
.ok_or_else(|| "Failed to convert U256 to Fr".to_string())
}
#[derive(Hash, PartialEq, Eq, Debug, Clone, Copy, Serialize, Deserialize)]
pub enum Operation {
pub(crate) enum Operation {
Mul,
Div,
Add,
@@ -73,113 +69,76 @@ pub enum Operation {
}
impl Operation {
// TODO: rewrite to &U256 type
pub fn eval(&self, a: U256, b: U256) -> U256 {
fn eval_fr(&self, a: Fr, b: Fr) -> Result<Fr, String> {
use Operation::*;
match self {
Mul => a.mul_mod(b, M),
Div => {
if b == U256::ZERO {
// as we are simulating a circuit execution with signals
// values all equal to 0, just return 0 here in case of
// division by zero
U256::ZERO
} else {
a.mul_mod(b.inv_mod(M).unwrap(), M)
}
}
Add => a.add_mod(b, M),
Sub => a.add_mod(M - b, M),
Pow => a.pow_mod(b, M),
Mod => a.div_rem(b).1,
Eq => U256::from(a == b),
Neq => U256::from(a != b),
Lt => u_lt(&a, &b),
Gt => u_gt(&a, &b),
Leq => u_lte(&a, &b),
Geq => u_gte(&a, &b),
Land => U256::from(a != U256::ZERO && b != U256::ZERO),
Lor => U256::from(a != U256::ZERO || b != U256::ZERO),
Shl => compute_shl_uint(a, b),
Shr => compute_shr_uint(a, b),
// TODO test with corner case when it is possible to get the number
// bigger than the modulus
Bor => a.bitor(b),
Band => a.bitand(b),
// TODO test with corner case when it is possible to get the number
// bigger than the modulus
Bxor => a.bitxor(b),
Idiv => a / b,
}
}
pub fn eval_fr(&self, a: Fr, b: Fr) -> Fr {
use Operation::*;
match self {
Mul => a * b,
Mul => Ok(a * b),
// Circuit execution must always produce a value, so in case of
// division by zero we return 0; the resulting proof will simply be invalid.
Div => {
if b.is_zero() {
Fr::zero()
Ok(Fr::zero())
} else {
a / b
Ok(a / b)
}
}
Add => a + b,
Sub => a - b,
Add => Ok(a + b),
Sub => Ok(a - b),
// Modular exponentiation to prevent overflow and keep result in field
Pow => {
let a_u256 = fr_to_u256(&a);
let b_u256 = fr_to_u256(&b);
let result = a_u256.pow_mod(b_u256, M);
u256_to_fr(&result)
}
// Integer division (not field division)
Idiv => {
if b.is_zero() {
Fr::zero()
Ok(Fr::zero())
} else {
let a_u256 = fr_to_u256(&a);
let b_u256 = fr_to_u256(&b);
u256_to_fr(&(a_u256 / b_u256))
}
}
// Integer modulo (not field arithmetic)
Mod => {
if b.is_zero() {
Fr::zero()
Ok(Fr::zero())
} else {
let a_u256 = fr_to_u256(&a);
let b_u256 = fr_to_u256(&b);
u256_to_fr(&(a_u256 % b_u256))
}
}
Eq => match a.cmp(&b) {
Eq => Ok(match a.cmp(&b) {
Ordering::Equal => Fr::one(),
_ => Fr::zero(),
},
Neq => match a.cmp(&b) {
}),
Neq => Ok(match a.cmp(&b) {
Ordering::Equal => Fr::zero(),
_ => Fr::one(),
},
}),
Lt => u256_to_fr(&u_lt(&fr_to_u256(&a), &fr_to_u256(&b))),
Gt => u256_to_fr(&u_gt(&fr_to_u256(&a), &fr_to_u256(&b))),
Leq => u256_to_fr(&u_lte(&fr_to_u256(&a), &fr_to_u256(&b))),
Geq => u256_to_fr(&u_gte(&fr_to_u256(&a), &fr_to_u256(&b))),
Land => {
if a.is_zero() || b.is_zero() {
Land => Ok(if a.is_zero() || b.is_zero() {
Fr::zero()
} else {
Fr::one()
}
}
Lor => {
if a.is_zero() && b.is_zero() {
}),
Lor => Ok(if a.is_zero() && b.is_zero() {
Fr::zero()
} else {
Fr::one()
}
}
}),
Shl => shl(a, b),
Shr => shr(a, b),
Bor => bit_or(a, b),
Band => bit_and(a, b),
Bxor => bit_xor(a, b),
// TODO implement other operators
_ => unimplemented!("operator {:?} not implemented for Montgomery", self),
}
}
}
@@ -212,37 +171,27 @@ impl From<&Operation> for proto::DuoOp {
}
#[derive(Hash, PartialEq, Eq, Debug, Clone, Copy, Serialize, Deserialize)]
pub enum UnoOperation {
pub(crate) enum UnoOperation {
Neg,
Id, // identity - just return self
}
impl UnoOperation {
pub fn eval(&self, a: U256) -> U256 {
match self {
UnoOperation::Neg => {
if a == U256::ZERO {
U256::ZERO
} else {
M - a
}
}
UnoOperation::Id => a,
}
}
pub fn eval_fr(&self, a: Fr) -> Fr {
fn eval_fr(&self, a: Fr) -> Result<Fr, String> {
match self {
UnoOperation::Neg => {
if a.is_zero() {
Fr::zero()
Ok(Fr::zero())
} else {
let mut x = Fr::MODULUS;
x.sub_with_borrow(&a.into_bigint());
Fr::from_bigint(x).unwrap()
Fr::from_bigint(x).ok_or_else(|| "Failed to compute negation".to_string())
}
}
_ => unimplemented!("uno operator {:?} not implemented for Montgomery", self),
_ => Err(format!(
"uno operator {:?} not implemented for Montgomery",
self
)),
}
}
}
@@ -257,30 +206,18 @@ impl From<&UnoOperation> for proto::UnoOp {
}
#[derive(Hash, PartialEq, Eq, Debug, Clone, Copy, Serialize, Deserialize)]
pub enum TresOperation {
pub(crate) enum TresOperation {
TernCond,
}
impl TresOperation {
pub fn eval(&self, a: U256, b: U256, c: U256) -> U256 {
match self {
TresOperation::TernCond => {
if a == U256::ZERO {
c
} else {
b
}
}
}
}
pub fn eval_fr(&self, a: Fr, b: Fr, c: Fr) -> Fr {
fn eval_fr(&self, a: Fr, b: Fr, c: Fr) -> Result<Fr, String> {
match self {
TresOperation::TernCond => {
if a.is_zero() {
c
Ok(c)
} else {
b
Ok(b)
}
}
}
@@ -296,7 +233,7 @@ impl From<&TresOperation> for proto::TresOp {
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum Node {
pub(crate) enum Node {
Input(usize),
Constant(U256),
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
@@ -306,133 +243,21 @@ pub enum Node {
TresOp(TresOperation, usize, usize, usize),
}
// TODO remove pub from Vec<Node>
#[derive(Default)]
pub struct Nodes(pub Vec<Node>);
impl Nodes {
pub fn new() -> Self {
Nodes(Vec::new())
}
pub fn to_const(&self, idx: NodeIdx) -> Result<U256, NodeConstErr> {
let me = self.0.get(idx.0).ok_or(NodeConstErr::EmptyNode(idx))?;
match me {
Node::Constant(v) => Ok(*v),
Node::UnoOp(op, a) => Ok(op.eval(self.to_const(NodeIdx(*a))?)),
Node::Op(op, a, b) => {
Ok(op.eval(self.to_const(NodeIdx(*a))?, self.to_const(NodeIdx(*b))?))
}
Node::TresOp(op, a, b, c) => Ok(op.eval(
self.to_const(NodeIdx(*a))?,
self.to_const(NodeIdx(*b))?,
self.to_const(NodeIdx(*c))?,
)),
Node::Input(_) => Err(NodeConstErr::InputSignal),
Node::MontConstant(_) => {
panic!("MontConstant should not be used here")
}
}
}
pub fn push(&mut self, n: Node) -> NodeIdx {
self.0.push(n);
NodeIdx(self.0.len() - 1)
}
pub fn get(&self, idx: NodeIdx) -> Option<&Node> {
self.0.get(idx.0)
}
}
impl Deref for Nodes {
type Target = Vec<Node>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[derive(Debug, Copy, Clone)]
pub struct NodeIdx(pub usize);
impl From<usize> for NodeIdx {
fn from(v: usize) -> Self {
NodeIdx(v)
}
}
#[derive(Debug)]
pub enum NodeConstErr {
EmptyNode(NodeIdx),
InputSignal,
}
impl std::fmt::Display for NodeConstErr {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
NodeConstErr::EmptyNode(idx) => {
write!(f, "empty node at index {}", idx.0)
}
NodeConstErr::InputSignal => {
write!(f, "input signal is not a constant")
}
}
}
}
impl Error for NodeConstErr {}
fn compute_shl_uint(a: U256, b: U256) -> U256 {
debug_assert!(b.lt(&U256::from(256)));
let ls_limb = b.as_limbs()[0];
a.shl(ls_limb as usize)
}
fn compute_shr_uint(a: U256, b: U256) -> U256 {
debug_assert!(b.lt(&U256::from(256)));
let ls_limb = b.as_limbs()[0];
a.shr(ls_limb as usize)
}
/// All references must be backwards.
fn assert_valid(nodes: &[Node]) {
for (i, &node) in nodes.iter().enumerate() {
if let Node::Op(_, a, b) = node {
assert!(a < i);
assert!(b < i);
} else if let Node::UnoOp(_, a) = node {
assert!(a < i);
} else if let Node::TresOp(_, a, b, c) = node {
assert!(a < i);
assert!(b < i);
assert!(c < i);
}
}
}
pub fn optimize(nodes: &mut Vec<Node>, outputs: &mut [usize]) {
tree_shake(nodes, outputs);
propagate(nodes);
value_numbering(nodes, outputs);
constants(nodes);
tree_shake(nodes, outputs);
montgomery_form(nodes);
}
pub fn evaluate(nodes: &[Node], inputs: &[U256], outputs: &[usize]) -> Vec<Fr> {
// assert_valid(nodes);
pub(crate) fn evaluate(
nodes: &[Node],
inputs: &[U256],
outputs: &[usize],
) -> Result<Vec<Fr>, String> {
// Evaluate the graph.
let mut values = Vec::with_capacity(nodes.len());
for &node in nodes.iter() {
let value = match node {
Node::Constant(c) => u256_to_fr(&c),
Node::Constant(c) => u256_to_fr(&c)?,
Node::MontConstant(c) => c,
Node::Input(i) => u256_to_fr(&inputs[i]),
Node::Op(op, a, b) => op.eval_fr(values[a], values[b]),
Node::UnoOp(op, a) => op.eval_fr(values[a]),
Node::TresOp(op, a, b, c) => op.eval_fr(values[a], values[b], values[c]),
Node::Input(i) => u256_to_fr(&inputs[i])?,
Node::Op(op, a, b) => op.eval_fr(values[a], values[b])?,
Node::UnoOp(op, a) => op.eval_fr(values[a])?,
Node::TresOp(op, a, b, c) => op.eval_fr(values[a], values[b], values[c])?,
};
values.push(value);
}
@@ -443,246 +268,31 @@ pub fn evaluate(nodes: &[Node], inputs: &[U256], outputs: &[usize]) -> Vec<Fr> {
out[i] = values[outputs[i]];
}
out
Ok(out)
}
/// Constant propagation
pub fn propagate(nodes: &mut [Node]) {
assert_valid(nodes);
for i in 0..nodes.len() {
if let Node::Op(op, a, b) = nodes[i] {
if let (Node::Constant(va), Node::Constant(vb)) = (nodes[a], nodes[b]) {
nodes[i] = Node::Constant(op.eval(va, vb));
} else if a == b {
// Not constant but equal
use Operation::*;
if let Some(c) = match op {
Eq | Leq | Geq => Some(true),
Neq | Lt | Gt => Some(false),
_ => None,
} {
nodes[i] = Node::Constant(U256::from(c));
}
}
} else if let Node::UnoOp(op, a) = nodes[i] {
if let Node::Constant(va) = nodes[a] {
nodes[i] = Node::Constant(op.eval(va));
}
} else if let Node::TresOp(op, a, b, c) = nodes[i] {
if let (Node::Constant(va), Node::Constant(vb), Node::Constant(vc)) =
(nodes[a], nodes[b], nodes[c])
{
nodes[i] = Node::Constant(op.eval(va, vb, vc));
}
}
}
}
/// Remove unused nodes
pub fn tree_shake(nodes: &mut Vec<Node>, outputs: &mut [usize]) {
assert_valid(nodes);
// Mark all nodes that are used.
let mut used = vec![false; nodes.len()];
for &i in outputs.iter() {
used[i] = true;
}
// Work backwards from end as all references are backwards.
for i in (0..nodes.len()).rev() {
if used[i] {
if let Node::Op(_, a, b) = nodes[i] {
used[a] = true;
used[b] = true;
}
if let Node::UnoOp(_, a) = nodes[i] {
used[a] = true;
}
if let Node::TresOp(_, a, b, c) = nodes[i] {
used[a] = true;
used[b] = true;
used[c] = true;
}
}
}
// Remove unused nodes
let n = nodes.len();
let mut retain = used.iter();
nodes.retain(|_| *retain.next().unwrap());
// Renumber references.
let mut renumber = vec![None; n];
let mut index = 0;
for (i, &used) in used.iter().enumerate() {
if used {
renumber[i] = Some(index);
index += 1;
}
}
assert_eq!(index, nodes.len());
for (&used, renumber) in used.iter().zip(renumber.iter()) {
assert_eq!(used, renumber.is_some());
}
// Renumber references.
for node in nodes.iter_mut() {
if let Node::Op(_, a, b) = node {
*a = renumber[*a].unwrap();
*b = renumber[*b].unwrap();
}
if let Node::UnoOp(_, a) = node {
*a = renumber[*a].unwrap();
}
if let Node::TresOp(_, a, b, c) = node {
*a = renumber[*a].unwrap();
*b = renumber[*b].unwrap();
*c = renumber[*c].unwrap();
}
}
for output in outputs.iter_mut() {
*output = renumber[*output].unwrap();
}
}
/// Randomly evaluate the graph
fn random_eval(nodes: &mut [Node]) -> Vec<U256> {
let mut rng = rand::thread_rng();
let mut values = Vec::with_capacity(nodes.len());
let mut inputs = HashMap::new();
let mut prfs = HashMap::new();
let mut prfs_uno = HashMap::new();
let mut prfs_tres = HashMap::new();
for node in nodes.iter() {
use Operation::*;
let value = match node {
// Constants evaluate to themselves
Node::Constant(c) => *c,
Node::MontConstant(_) => unimplemented!("should not be used"),
// Algebraic Ops are evaluated directly
// Since the field is large, by Schwartz-Zippel, if
// two values are the same then they are likely algebraically equal.
Node::Op(op @ (Add | Sub | Mul), a, b) => op.eval(values[*a], values[*b]),
// Input and non-algebraic ops are random functions
// TODO: https://github.com/recmo/uint/issues/95 and use .gen_range(..M)
Node::Input(i) => *inputs.entry(*i).or_insert_with(|| rng.gen::<U256>() % M),
Node::Op(op, a, b) => *prfs
.entry((*op, values[*a], values[*b]))
.or_insert_with(|| rng.gen::<U256>() % M),
Node::UnoOp(op, a) => *prfs_uno
.entry((*op, values[*a]))
.or_insert_with(|| rng.gen::<U256>() % M),
Node::TresOp(op, a, b, c) => *prfs_tres
.entry((*op, values[*a], values[*b], values[*c]))
.or_insert_with(|| rng.gen::<U256>() % M),
};
values.push(value);
}
values
}
/// Value numbering
pub fn value_numbering(nodes: &mut [Node], outputs: &mut [usize]) {
assert_valid(nodes);
// Evaluate the graph in random field elements.
let values = random_eval(nodes);
// Find all nodes with the same value.
let mut value_map = HashMap::new();
for (i, &value) in values.iter().enumerate() {
value_map.entry(value).or_insert_with(Vec::new).push(i);
}
// For nodes that are the same, pick the first index.
let renumber: Vec<_> = values.into_iter().map(|v| value_map[&v][0]).collect();
// Renumber references.
for node in nodes.iter_mut() {
if let Node::Op(_, a, b) = node {
*a = renumber[*a];
*b = renumber[*b];
}
if let Node::UnoOp(_, a) = node {
*a = renumber[*a];
}
if let Node::TresOp(_, a, b, c) = node {
*a = renumber[*a];
*b = renumber[*b];
*c = renumber[*c];
}
}
for output in outputs.iter_mut() {
*output = renumber[*output];
}
}
/// Probabilistic constant determination
pub fn constants(nodes: &mut [Node]) {
assert_valid(nodes);
// Evaluate the graph in random field elements.
let values_a = random_eval(nodes);
let values_b = random_eval(nodes);
// Find all nodes with the same value.
for i in 0..nodes.len() {
if let Node::Constant(_) = nodes[i] {
continue;
}
if values_a[i] == values_b[i] {
nodes[i] = Node::Constant(values_a[i]);
}
}
}
/// Convert to Montgomery form
pub fn montgomery_form(nodes: &mut [Node]) {
for node in nodes.iter_mut() {
use Node::*;
use Operation::*;
match node {
Constant(c) => *node = MontConstant(u256_to_fr(c)),
MontConstant(..) => (),
Input(..) => (),
Op(
Mul | Div | Add | Sub | Idiv | Mod | Eq | Neq | Lt | Gt | Leq | Geq | Land | Lor
| Shl | Shr | Bor | Band | Bxor,
..,
) => (),
Op(op @ Pow, ..) => unimplemented!("Operators Montgomery form: {:?}", op),
UnoOp(UnoOperation::Neg, ..) => (),
UnoOp(op, ..) => unimplemented!("Uno Operators Montgomery form: {:?}", op),
TresOp(TresOperation::TernCond, ..) => (),
}
}
}
fn shl(a: Fr, b: Fr) -> Fr {
fn shl(a: Fr, b: Fr) -> Result<Fr, String> {
if b.is_zero() {
return a;
return Ok(a);
}
if b.cmp(&Fr::from(Fr::MODULUS_BIT_SIZE)).is_ge() {
return Fr::zero();
return Ok(Fr::zero());
}
let n = b.into_bigint().0[0] as u32;
let a = a.into_bigint();
Fr::from_bigint(a << n).unwrap()
Fr::from_bigint(a << n).ok_or_else(|| "Failed to compute left shift".to_string())
}
fn shr(a: Fr, b: Fr) -> Fr {
fn shr(a: Fr, b: Fr) -> Result<Fr, String> {
if b.is_zero() {
return a;
return Ok(a);
}
match b.cmp(&Fr::from(254u64)) {
Ordering::Equal => return Fr::zero(),
Ordering::Greater => return Fr::zero(),
Ordering::Equal => return Ok(Fr::zero()),
Ordering::Greater => return Ok(Fr::zero()),
_ => (),
};
@@ -698,7 +308,7 @@ fn shr(a: Fr, b: Fr) -> Fr {
}
if n == 0 {
return Fr::from_bigint(result).unwrap();
return Fr::from_bigint(result).ok_or_else(|| "Failed to compute right shift".to_string());
}
let mask: u64 = (1 << n) - 1;
@@ -709,10 +319,10 @@ fn shr(a: Fr, b: Fr) -> Fr {
c[i] = (c[i] >> n) | (carrier << (64 - n));
carrier = new_carrier;
}
Fr::from_bigint(result).unwrap()
Fr::from_bigint(result).ok_or_else(|| "Failed to compute right shift".to_string())
}
fn bit_and(a: Fr, b: Fr) -> Fr {
fn bit_and(a: Fr, b: Fr) -> Result<Fr, String> {
let a = a.into_bigint();
let b = b.into_bigint();
let c: [u64; 4] = [
@@ -726,10 +336,10 @@ fn bit_and(a: Fr, b: Fr) -> Fr {
d.sub_with_borrow(&Fr::MODULUS);
}
Fr::from_bigint(d).unwrap()
Fr::from_bigint(d).ok_or_else(|| "Failed to compute bitwise AND".to_string())
}
fn bit_or(a: Fr, b: Fr) -> Fr {
fn bit_or(a: Fr, b: Fr) -> Result<Fr, String> {
let a = a.into_bigint();
let b = b.into_bigint();
let c: [u64; 4] = [
@@ -743,10 +353,10 @@ fn bit_or(a: Fr, b: Fr) -> Fr {
d.sub_with_borrow(&Fr::MODULUS);
}
Fr::from_bigint(d).unwrap()
Fr::from_bigint(d).ok_or_else(|| "Failed to compute bitwise OR".to_string())
}
fn bit_xor(a: Fr, b: Fr) -> Fr {
fn bit_xor(a: Fr, b: Fr) -> Result<Fr, String> {
let a = a.into_bigint();
let b = b.into_bigint();
let c: [u64; 4] = [
@@ -760,7 +370,7 @@ fn bit_xor(a: Fr, b: Fr) -> Fr {
d.sub_with_borrow(&Fr::MODULUS);
}
Fr::from_bigint(d).unwrap()
Fr::from_bigint(d).ok_or_else(|| "Failed to compute bitwise XOR".to_string())
}
// M / 2
@@ -816,24 +426,27 @@ fn u_lt(a: &U256, b: &U256) -> U256 {
}
#[cfg(test)]
mod tests {
use super::*;
mod test {
use std::{ops::Div, str::FromStr};
use ruint::uint;
use std::ops::Div;
use std::str::FromStr;
use super::*;
#[test]
fn test_ok() {
let a = Fr::from(4u64);
let b = Fr::from(2u64);
let c = shl(a, b);
let c = shl(a, b).unwrap();
assert_eq!(c.cmp(&Fr::from(16u64)), Ordering::Equal)
}
#[test]
fn test_div() {
assert_eq!(
Operation::Div.eval_fr(Fr::from(2u64), Fr::from(3u64)),
Operation::Div
.eval_fr(Fr::from(2u64), Fr::from(3u64))
.unwrap(),
Fr::from_str(
"7296080957279758407415468581752425029516121466805344781232734728858602831873"
)
@@ -841,12 +454,16 @@ mod tests {
);
assert_eq!(
Operation::Div.eval_fr(Fr::from(6u64), Fr::from(2u64)),
Operation::Div
.eval_fr(Fr::from(6u64), Fr::from(2u64))
.unwrap(),
Fr::from_str("3").unwrap()
);
assert_eq!(
Operation::Div.eval_fr(Fr::from(7u64), Fr::from(2u64)),
Operation::Div
.eval_fr(Fr::from(7u64), Fr::from(2u64))
.unwrap(),
Fr::from_str(
"10944121435919637611123202872628637544274182200208017171849102093287904247812"
)
@@ -857,17 +474,23 @@ mod tests {
#[test]
fn test_idiv() {
assert_eq!(
Operation::Idiv.eval_fr(Fr::from(2u64), Fr::from(3u64)),
Operation::Idiv
.eval_fr(Fr::from(2u64), Fr::from(3u64))
.unwrap(),
Fr::from_str("0").unwrap()
);
assert_eq!(
Operation::Idiv.eval_fr(Fr::from(6u64), Fr::from(2u64)),
Operation::Idiv
.eval_fr(Fr::from(6u64), Fr::from(2u64))
.unwrap(),
Fr::from_str("3").unwrap()
);
assert_eq!(
Operation::Idiv.eval_fr(Fr::from(7u64), Fr::from(2u64)),
Operation::Idiv
.eval_fr(Fr::from(7u64), Fr::from(2u64))
.unwrap(),
Fr::from_str("3").unwrap()
);
}
@@ -875,12 +498,16 @@ mod tests {
#[test]
fn test_fr_mod() {
assert_eq!(
Operation::Mod.eval_fr(Fr::from(7u64), Fr::from(2u64)),
Operation::Mod
.eval_fr(Fr::from(7u64), Fr::from(2u64))
.unwrap(),
Fr::from_str("1").unwrap()
);
assert_eq!(
Operation::Mod.eval_fr(Fr::from(7u64), Fr::from(9u64)),
Operation::Mod
.eval_fr(Fr::from(7u64), Fr::from(9u64))
.unwrap(),
Fr::from_str("7").unwrap()
);
}
@@ -944,14 +571,14 @@ mod tests {
let x = M.div(uint!(2_U256));
println!("x: {:?}", x.as_limbs());
println!("x: {}", M);
println!("x: {M}");
}
#[test]
fn test_2() {
let nodes: Vec<Node> = vec![];
// let node = nodes[0];
let node = nodes.get(0);
println!("{:?}", node);
let node = nodes.first();
println!("{node:?}");
}
}

rln/src/circuit/iden3calc/proto.rs

@@ -1,33 +1,33 @@
// This file has been generated by prost-build during compilation of the code by iden3
// This crate has been generated by prost-build during compilation of the code by iden3
// and modified manually. The *.proto file used to generate this one can be found here:
// https://github.com/iden3/circom-witnesscalc/blob/5cb365b6e4d9052ecc69d4567fcf5bc061c20e94/protos/messages.proto
use std::collections::HashMap;
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct BigUInt {
#[derive(Clone, PartialEq, prost::Message)]
pub(crate) struct BigUInt {
#[prost(bytes = "vec", tag = "1")]
pub value_le: Vec<u8>,
}
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct InputNode {
#[derive(Clone, Copy, PartialEq, prost::Message)]
pub(crate) struct InputNode {
#[prost(uint32, tag = "1")]
pub idx: u32,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ConstantNode {
#[derive(Clone, PartialEq, prost::Message)]
pub(crate) struct ConstantNode {
#[prost(message, optional, tag = "1")]
pub value: Option<BigUInt>,
}
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct UnoOpNode {
#[derive(Clone, Copy, PartialEq, prost::Message)]
pub(crate) struct UnoOpNode {
#[prost(enumeration = "UnoOp", tag = "1")]
pub op: i32,
#[prost(uint32, tag = "2")]
pub a_idx: u32,
}
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct DuoOpNode {
#[derive(Clone, Copy, PartialEq, prost::Message)]
pub(crate) struct DuoOpNode {
#[prost(enumeration = "DuoOp", tag = "1")]
pub op: i32,
#[prost(uint32, tag = "2")]
@@ -35,8 +35,8 @@ pub struct DuoOpNode {
#[prost(uint32, tag = "3")]
pub b_idx: u32,
}
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct TresOpNode {
#[derive(Clone, Copy, PartialEq, prost::Message)]
pub(crate) struct TresOpNode {
#[prost(enumeration = "TresOp", tag = "1")]
pub op: i32,
#[prost(uint32, tag = "2")]
@@ -46,15 +46,15 @@ pub struct TresOpNode {
#[prost(uint32, tag = "4")]
pub c_idx: u32,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Node {
#[derive(Clone, PartialEq, prost::Message)]
pub(crate) struct Node {
#[prost(oneof = "node::Node", tags = "1, 2, 3, 4, 5")]
pub node: Option<node::Node>,
}
/// Nested message and enum types in `Node`.
pub mod node {
#[derive(Clone, PartialEq, ::prost::Oneof)]
pub enum Node {
pub(crate) mod node {
#[derive(Clone, PartialEq, prost::Oneof)]
pub(crate) enum Node {
#[prost(message, tag = "1")]
Input(super::InputNode),
#[prost(message, tag = "2")]
@@ -67,22 +67,22 @@ pub mod node {
TresOp(super::TresOpNode),
}
}
#[derive(Clone, Copy, PartialEq, ::prost::Message)]
pub struct SignalDescription {
#[derive(Clone, Copy, PartialEq, prost::Message)]
pub(crate) struct SignalDescription {
#[prost(uint32, tag = "1")]
pub offset: u32,
#[prost(uint32, tag = "2")]
pub len: u32,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct GraphMetadata {
#[derive(Clone, PartialEq, prost::Message)]
pub(crate) struct GraphMetadata {
#[prost(uint32, repeated, tag = "1")]
pub witness_signals: Vec<u32>,
#[prost(map = "string, message", tag = "2")]
pub inputs: HashMap<String, SignalDescription>,
}
#[derive(Clone, Copy, Debug, PartialEq, ::prost::Enumeration)]
pub enum DuoOp {
#[derive(Clone, Copy, Debug, PartialEq, prost::Enumeration)]
pub(crate) enum DuoOp {
Mul = 0,
Div = 1,
Add = 2,
@@ -105,13 +105,13 @@ pub enum DuoOp {
Bxor = 19,
}
#[derive(Clone, Copy, Debug, PartialEq, ::prost::Enumeration)]
pub enum UnoOp {
#[derive(Clone, Copy, Debug, PartialEq, prost::Enumeration)]
pub(crate) enum UnoOp {
Neg = 0,
Id = 1,
}
#[derive(Clone, Copy, Debug, PartialEq, ::prost::Enumeration)]
pub enum TresOp {
#[derive(Clone, Copy, Debug, PartialEq, prost::Enumeration)]
pub(crate) enum TresOp {
TernCond = 0,
}

rln/src/circuit/iden3calc/storage.rs

@@ -1,57 +1,86 @@
// This file is based on the code by iden3. Its preimage can be found here:
// This crate is based on the code by iden3. Its preimage can be found here:
// https://github.com/iden3/circom-witnesscalc/blob/5cb365b6e4d9052ecc69d4567fcf5bc061c20e94/src/storage.rs
use ark_bn254::Fr;
use std::io::{Read, Write};
use ark_ff::PrimeField;
use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};
use prost::Message;
use std::io::{Read, Write};
use crate::circuit::iden3calc::{
graph,
graph::{Operation, TresOperation, UnoOperation},
use super::{
graph::{self, Operation, TresOperation, UnoOperation},
proto, InputSignalsInfo,
};
use crate::circuit::Fr;
// format of the wtns.graph file:
// + magic line: wtns.graph.001
// + 4 bytes unsigned LE 32-bit integer: number of nodes
// + series of protobuf serialized nodes. Each node prefixed by varint length
// + protobuf serialized GraphMetadata
// + 8 bytes unsigned LE 64-bit integer: offset of GraphMetadata message
/// Format of the wtns.graph file:
/// + magic line: wtns.graph.001
/// + 4 bytes unsigned LE 32-bit integer: number of nodes
/// + series of protobuf serialized nodes. Each node prefixed by varint length
/// + protobuf serialized GraphMetadata
/// + 8 bytes unsigned LE 64-bit integer: offset of GraphMetadata message
const WITNESSCALC_GRAPH_MAGIC: &[u8] = b"wtns.graph.001";
const MAX_VARINT_LENGTH: usize = 10;
impl From<proto::Node> for graph::Node {
fn from(value: proto::Node) -> Self {
match value.node.unwrap() {
proto::node::Node::Input(input_node) => graph::Node::Input(input_node.idx as usize),
impl TryFrom<proto::Node> for graph::Node {
type Error = std::io::Error;
fn try_from(value: proto::Node) -> Result<Self, Self::Error> {
let node = value.node.ok_or_else(|| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"Proto::Node must have a node field",
)
})?;
match node {
proto::node::Node::Input(input_node) => Ok(graph::Node::Input(input_node.idx as usize)),
proto::node::Node::Constant(constant_node) => {
let i = constant_node.value.unwrap();
graph::Node::MontConstant(Fr::from_le_bytes_mod_order(i.value_le.as_slice()))
let i = constant_node.value.ok_or_else(|| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"Constant node must have a value",
)
})?;
Ok(graph::Node::MontConstant(Fr::from_le_bytes_mod_order(
i.value_le.as_slice(),
)))
}
proto::node::Node::UnoOp(uno_op_node) => {
let op = proto::UnoOp::try_from(uno_op_node.op).unwrap();
graph::Node::UnoOp(op.into(), uno_op_node.a_idx as usize)
let op = proto::UnoOp::try_from(uno_op_node.op).map_err(|_| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"UnoOp must be valid enum value",
)
})?;
Ok(graph::Node::UnoOp(op.into(), uno_op_node.a_idx as usize))
}
proto::node::Node::DuoOp(duo_op_node) => {
let op = proto::DuoOp::try_from(duo_op_node.op).unwrap();
graph::Node::Op(
let op = proto::DuoOp::try_from(duo_op_node.op).map_err(|_| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"DuoOp must be valid enum value",
)
})?;
Ok(graph::Node::Op(
op.into(),
duo_op_node.a_idx as usize,
duo_op_node.b_idx as usize,
)
))
}
proto::node::Node::TresOp(tres_op_node) => {
let op = proto::TresOp::try_from(tres_op_node.op).unwrap();
graph::Node::TresOp(
let op = proto::TresOp::try_from(tres_op_node.op).map_err(|_| {
std::io::Error::new(
std::io::ErrorKind::InvalidData,
"TresOp must be valid enum value",
)
})?;
Ok(graph::Node::TresOp(
op.into(),
tres_op_node.a_idx as usize,
tres_op_node.b_idx as usize,
tres_op_node.c_idx as usize,
)
))
}
}
}
@@ -137,14 +166,15 @@ impl From<proto::TresOp> for graph::TresOperation {
}
}
pub fn serialize_witnesscalc_graph<T: Write>(
#[allow(dead_code)]
pub(crate) fn serialize_witnesscalc_graph<T: Write>(
mut w: T,
nodes: &Vec<graph::Node>,
witness_signals: &[usize],
input_signals: &InputSignalsInfo,
) -> std::io::Result<()> {
let mut ptr = 0usize;
w.write_all(WITNESSCALC_GRAPH_MAGIC).unwrap();
w.write_all(WITNESSCALC_GRAPH_MAGIC)?;
ptr += WITNESSCALC_GRAPH_MAGIC.len();
w.write_u64::<LittleEndian>(nodes.len() as u64)?;
@@ -232,7 +262,7 @@ fn read_message<R: Read, M: Message + std::default::Default>(
Ok(msg)
}
pub fn deserialize_witnesscalc_graph(
pub(crate) fn deserialize_witnesscalc_graph(
r: impl Read,
) -> std::io::Result<(Vec<graph::Node>, Vec<usize>, InputSignalsInfo)> {
let mut br = WriteBackReader::new(r);
@@ -251,8 +281,7 @@ pub fn deserialize_witnesscalc_graph(
let mut nodes = Vec::with_capacity(nodes_num as usize);
for _ in 0..nodes_num {
let n: proto::Node = read_message(&mut br)?;
let n2: graph::Node = n.into();
nodes.push(n2);
nodes.push(n.try_into()?);
}
let md: proto::GraphMetadata = read_message(&mut br)?;
@@ -331,13 +360,15 @@ impl<R: Read> Write for WriteBackReader<R> {
}
#[cfg(test)]
mod tests {
use super::*;
use byteorder::ByteOrder;
mod test {
use core::str::FromStr;
use graph::{Operation, TresOperation, UnoOperation};
use std::collections::HashMap;
use byteorder::ByteOrder;
use graph::{Operation, TresOperation, UnoOperation};
use super::*;
#[test]
fn test_read_message() {
let mut buf = Vec::new();
@@ -419,13 +450,13 @@ mod tests {
let mut r = WriteBackReader::new(std::io::Cursor::new(&data));
let buf = &mut [0u8; 5];
r.read(buf).unwrap();
r.read_exact(buf).unwrap();
assert_eq!(buf, &[1, 2, 3, 4, 5]);
// return [4, 5] to reader
r.write(&buf[3..]).unwrap();
r.write_all(&buf[3..]).unwrap();
// return [2, 3] to reader
r.write(&buf[1..3]).unwrap();
r.write_all(&buf[1..3]).unwrap();
buf.fill(0);

rln/src/circuit/mod.rs

@@ -1,151 +1,140 @@
// This crate provides interfaces for the zero-knowledge circuit and keys
pub mod iden3calc;
pub mod qap;
pub mod zkey;
pub(crate) mod error;
pub(crate) mod iden3calc;
pub(crate) mod qap;
#[cfg(not(target_arch = "wasm32"))]
use std::sync::LazyLock;
use ::lazy_static::lazy_static;
use ark_bn254::{
Bn254, Fq as ArkFq, Fq2 as ArkFq2, Fr as ArkFr, G1Affine as ArkG1Affine,
G1Projective as ArkG1Projective, G2Affine as ArkG2Affine, G2Projective as ArkG2Projective,
};
use ark_groth16::ProvingKey;
use ark_relations::r1cs::ConstraintMatrices;
use cfg_if::cfg_if;
use color_eyre::{Report, Result};
use crate::circuit::iden3calc::calc_witness;
#[cfg(feature = "arkzkey")]
use {
ark_ff::Field, ark_serialize::CanonicalDeserialize, ark_serialize::CanonicalSerialize,
color_eyre::eyre::WrapErr,
use ark_ff::Field;
use ark_groth16::{
Proof as ArkProof, ProvingKey as ArkProvingKey, VerifyingKey as ArkVerifyingKey,
};
use ark_relations::r1cs::ConstraintMatrices;
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};
#[cfg(not(feature = "arkzkey"))]
use {crate::circuit::zkey::read_zkey, std::io::Cursor};
#[cfg(feature = "arkzkey")]
pub const ARKZKEY_BYTES: &[u8] = include_bytes!("../../resources/tree_height_20/rln_final.arkzkey");
pub const ZKEY_BYTES: &[u8] = include_bytes!("../../resources/tree_height_20/rln_final.zkey");
use self::error::ZKeyReadError;
#[cfg(not(target_arch = "wasm32"))]
const GRAPH_BYTES: &[u8] = include_bytes!("../../resources/tree_height_20/graph.bin");
const GRAPH_BYTES: &[u8] = include_bytes!("../../resources/tree_depth_20/graph.bin");
lazy_static! {
static ref ZKEY: (ProvingKey<Curve>, ConstraintMatrices<Fr>) = {
cfg_if! {
if #[cfg(feature = "arkzkey")] {
read_arkzkey_from_bytes_uncompressed(ARKZKEY_BYTES).expect("Failed to read arkzkey")
} else {
let mut reader = Cursor::new(ZKEY_BYTES);
read_zkey(&mut reader).expect("Failed to read zkey")
}
}
};
}
#[cfg(not(target_arch = "wasm32"))]
const ARKZKEY_BYTES: &[u8] = include_bytes!("../../resources/tree_depth_20/rln_final.arkzkey");
pub const TEST_TREE_HEIGHT: usize = 20;
#[cfg(not(target_arch = "wasm32"))]
static ARKZKEY: LazyLock<Zkey> = LazyLock::new(|| {
read_arkzkey_from_bytes_uncompressed(ARKZKEY_BYTES).expect("Default zkey must be valid")
});
pub const DEFAULT_TREE_DEPTH: usize = 20;
pub const COMPRESS_PROOF_SIZE: usize = 128;
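The diff also swaps the lazy_static! block for std::sync::LazyLock (stable since Rust 1.80), which gives the same initialize-on-first-access semantics without a macro dependency. A minimal sketch:

use std::sync::LazyLock;

static SQUARES: LazyLock<Vec<u64>> = LazyLock::new(|| (0..4).map(|i| i * i).collect());

fn main() {
    // The closure runs exactly once, on first dereference.
    assert_eq!(SQUARES[3], 9);
}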
// The following types define the pairing-friendly elliptic curve and the underlying finite fields and groups used as defaults in this module
// Note that proofs are serialized assuming Fr to be 4x8 = 32 bytes in size. Hence, changing to a curve with a different encoding will make proof verification fail
/// BN254 pairing-friendly elliptic curve.
pub type Curve = Bn254;
/// Scalar field Fr of the BN254 curve.
pub type Fr = ArkFr;
/// Base field Fq of the BN254 curve.
pub type Fq = ArkFq;
/// Quadratic extension field element for the BN254 curve.
pub type Fq2 = ArkFq2;
/// Affine representation of a G1 group element on the BN254 curve.
pub type G1Affine = ArkG1Affine;
/// Projective representation of a G1 group element on the BN254 curve.
pub type G1Projective = ArkG1Projective;
/// Affine representation of a G2 group element on the BN254 curve.
pub type G2Affine = ArkG2Affine;
/// Projective representation of a G2 group element on the BN254 curve.
pub type G2Projective = ArkG2Projective;
// Loads the proving key using a bytes vector
pub fn zkey_from_raw(zkey_data: &[u8]) -> Result<(ProvingKey<Curve>, ConstraintMatrices<Fr>)> {
/// Groth16 proof for the BN254 curve.
pub type Proof = ArkProof<Curve>;
/// Proving key for the Groth16 proof system.
pub type ProvingKey = ArkProvingKey<Curve>;
/// Pair combining the proving key and constraint matrices.
pub type Zkey = (ArkProvingKey<Curve>, ConstraintMatrices<Fr>);
/// Verifying key for the Groth16 proof system.
pub type VerifyingKey = ArkVerifyingKey<Curve>;
/// Loads the zkey from raw bytes
pub fn zkey_from_raw(zkey_data: &[u8]) -> Result<Zkey, ZKeyReadError> {
if zkey_data.is_empty() {
return Err(Report::msg("No proving key found!"));
return Err(ZKeyReadError::EmptyBytes);
}
let proving_key_and_matrices = match () {
#[cfg(feature = "arkzkey")]
() => read_arkzkey_from_bytes_uncompressed(zkey_data)?,
#[cfg(not(feature = "arkzkey"))]
() => {
let mut reader = Cursor::new(zkey_data);
read_zkey(&mut reader)?
}
};
let proving_key_and_matrices = read_arkzkey_from_bytes_uncompressed(zkey_data)?;
Ok(proving_key_and_matrices)
}
// Loads the proving key
// Loads default zkey from folder
#[cfg(not(target_arch = "wasm32"))]
pub fn zkey_from_folder() -> &'static (ProvingKey<Curve>, ConstraintMatrices<Fr>) {
&ZKEY
}
pub fn calculate_rln_witness<I: IntoIterator<Item = (String, Vec<Fr>)>>(
inputs: I,
graph_data: &[u8],
) -> Vec<Fr> {
calc_witness(inputs, graph_data)
pub fn zkey_from_folder() -> &'static Zkey {
&ARKZKEY
}
// Loads default graph from folder
#[cfg(not(target_arch = "wasm32"))]
pub fn graph_from_folder() -> &'static [u8] {
GRAPH_BYTES
}
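Hedged usage sketch of the two accessors above, as a caller inside this crate might use them (native targets only, since both are gated on not(target_arch = "wasm32")):

fn inspect_defaults() {
    let zkey = zkey_from_folder();   // &'static Zkey, embedded at compile time
    let graph = graph_from_folder(); // &'static [u8]
    println!(
        "constraints: {}, graph bytes: {}",
        zkey.1.num_constraints,
        graph.len()
    );
}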
////////////////////////////////////////////////////////
// Functions and structs from [ark-zkey](https://github.com/zkmopro/ark-zkey/blob/main/src/lib.rs#L106)
// without printing, and allowing a choice between compressed and uncompressed arkzkey
////////////////////////////////////////////////////////
// The following functions and structs are based on code from ark-zkey:
// https://github.com/zkmopro/ark-zkey/blob/main/src/lib.rs#L106
#[cfg(feature = "arkzkey")]
#[derive(CanonicalSerialize, CanonicalDeserialize, Clone, Debug, PartialEq)]
pub struct SerializableProvingKey(pub ProvingKey<Bn254>);
struct SerializableProvingKey(ArkProvingKey<Curve>);
#[cfg(feature = "arkzkey")]
#[derive(CanonicalSerialize, CanonicalDeserialize, Clone, Debug, PartialEq)]
pub struct SerializableConstraintMatrices<F: Field> {
pub num_instance_variables: usize,
pub num_witness_variables: usize,
pub num_constraints: usize,
pub a_num_non_zero: usize,
pub b_num_non_zero: usize,
pub c_num_non_zero: usize,
pub a: SerializableMatrix<F>,
pub b: SerializableMatrix<F>,
pub c: SerializableMatrix<F>,
struct SerializableConstraintMatrices<F: Field> {
num_instance_variables: usize,
num_witness_variables: usize,
num_constraints: usize,
a_num_non_zero: usize,
b_num_non_zero: usize,
c_num_non_zero: usize,
a: SerializableMatrix<F>,
b: SerializableMatrix<F>,
c: SerializableMatrix<F>,
}
#[cfg(feature = "arkzkey")]
#[derive(CanonicalSerialize, CanonicalDeserialize, Clone, Debug, PartialEq)]
pub struct SerializableMatrix<F: Field> {
struct SerializableMatrix<F: Field> {
pub data: Vec<Vec<(F, usize)>>,
}
#[cfg(feature = "arkzkey")]
pub fn read_arkzkey_from_bytes_uncompressed(
arkzkey_data: &[u8],
) -> Result<(ProvingKey<Curve>, ConstraintMatrices<Fr>)> {
fn read_arkzkey_from_bytes_uncompressed(arkzkey_data: &[u8]) -> Result<Zkey, ZKeyReadError> {
if arkzkey_data.is_empty() {
return Err(Report::msg("No proving key found!"));
return Err(ZKeyReadError::EmptyBytes);
}
let mut cursor = std::io::Cursor::new(arkzkey_data);
let serialized_proving_key =
SerializableProvingKey::deserialize_uncompressed_unchecked(&mut cursor)
.wrap_err("Failed to deserialize proving key")?;
SerializableProvingKey::deserialize_uncompressed_unchecked(&mut cursor)?;
let serialized_constraint_matrices =
SerializableConstraintMatrices::deserialize_uncompressed_unchecked(&mut cursor)
.wrap_err("Failed to deserialize constraint matrices")?;
SerializableConstraintMatrices::deserialize_uncompressed_unchecked(&mut cursor)?;
// Get into the right form for the API
let proving_key: ProvingKey<Bn254> = serialized_proving_key.0;
let constraint_matrices: ConstraintMatrices<ark_bn254::Fr> = ConstraintMatrices {
let proving_key: ProvingKey = serialized_proving_key.0;
let constraint_matrices: ConstraintMatrices<Fr> = ConstraintMatrices {
num_instance_variables: serialized_constraint_matrices.num_instance_variables,
num_witness_variables: serialized_constraint_matrices.num_witness_variables,
num_constraints: serialized_constraint_matrices.num_constraints,
@@ -156,6 +145,7 @@ pub fn read_arkzkey_from_bytes_uncompressed(
b: serialized_constraint_matrices.b.data,
c: serialized_constraint_matrices.c.data,
};
let zkey = (proving_key, constraint_matrices);
Ok((proving_key, constraint_matrices))
Ok(zkey)
}
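The _unchecked deserializers above skip validation, which is why they are reserved for trusted, embedded bytes. A small round-trip sketch of the uncompressed ark-serialize contract on a single field element:

use ark_bn254::Fr;
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};

fn main() {
    let x = Fr::from(42u64);
    let mut bytes = Vec::new();
    x.serialize_uncompressed(&mut bytes).unwrap();
    // _unchecked skips validation; acceptable here because we just wrote the bytes
    let y = Fr::deserialize_uncompressed_unchecked(bytes.as_slice()).unwrap();
    assert_eq!(x, y);
}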


@@ -1,4 +1,4 @@
// This file is based on the code by arkworks. Its preimage can be found here:
// This crate is based on the code by arkworks. Its preimage can be found here:
// https://github.com/arkworks-rs/circom-compat/blob/3c95ed98e23a408b4d99a53e483a9bba39685a4e/src/circom/qap.rs
use ark_ff::PrimeField;
@@ -6,13 +6,18 @@ use ark_groth16::r1cs_to_qap::{evaluate_constraint, LibsnarkReduction, R1CSToQAP
use ark_poly::EvaluationDomain;
use ark_relations::r1cs::{ConstraintMatrices, ConstraintSystemRef, SynthesisError};
use ark_std::{cfg_into_iter, cfg_iter, cfg_iter_mut, vec};
#[cfg(feature = "parallel")]
use rayon::iter::{
IndexedParallelIterator, IntoParallelIterator, IntoParallelRefIterator,
IntoParallelRefMutIterator, ParallelIterator,
};
/// Implements the witness map used by snarkjs. The arkworks witness map calculates the
/// coefficients of H through computing (AB-C)/Z in the evaluation domain and going back to the
/// coefficients domain. snarkjs instead precomputes the Lagrange form of the powers of tau bases
/// in a domain twice as large and the witness map is computed as the odd coefficients of (AB-C)
/// in that domain. This serves as HZ when computing the C proof element.
pub struct CircomReduction;
pub(crate) struct CircomReduction;
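Restating the doc comment in symbols, as a reading aid rather than a spec (n the evaluation-domain size, ω a primitive 2n-th root of unity):

\[
  h_{\mathrm{ark}}(X) = \frac{A(X)B(X) - C(X)}{Z(X)},
  \qquad
  (HZ)_i^{\mathrm{snarkjs}} = (AB - C)\!\left(\omega^{2i+1}\right), \quad i = 0, \dots, n-1.
\]

Both feed the C proof element; only the domain bookkeeping differs.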
impl R1CSToQAP for CircomReduction {
#[allow(clippy::type_complexity)]


@@ -1,371 +0,0 @@
// This file is based on the code by arkworks. Its preimage can be found here:
// https://github.com/arkworks-rs/circom-compat/blob/3c95ed98e23a408b4d99a53e483a9bba39685a4e/src/zkey.rs
//! ZKey Parsing
//!
//! Each ZKey file is broken into sections:
//! Header(1)
//! Prover Type 1 Groth
//! HeaderGroth(2)
//! n8q
//! q
//! n8r
//! r
//! NVars
//! NPub
//! DomainSize (multiple of 2)
//! alpha1
//! beta1
//! delta1
//! beta2
//! gamma2
//! delta2
//! IC(3)
//! Coefs(4)
//! PointsA(5)
//! PointsB1(6)
//! PointsB2(7)
//! PointsC(8)
//! PointsH(9)
//! Contributions(10)
use ark_ff::{BigInteger256, PrimeField};
use ark_relations::r1cs::ConstraintMatrices;
use ark_serialize::{CanonicalDeserialize, SerializationError};
use ark_std::log2;
use byteorder::{LittleEndian, ReadBytesExt};
use std::{
collections::HashMap,
io::{Read, Seek, SeekFrom},
};
use ark_bn254::{Bn254, Fq, Fq2, Fr, G1Affine, G2Affine};
use ark_groth16::{ProvingKey, VerifyingKey};
use num_traits::Zero;
type IoResult<T> = Result<T, SerializationError>;
#[derive(Clone, Debug)]
struct Section {
position: u64,
#[allow(dead_code)]
size: usize,
}
/// Reads a SnarkJS ZKey file into an Arkworks ProvingKey.
pub fn read_zkey<R: Read + Seek>(
reader: &mut R,
) -> IoResult<(ProvingKey<Bn254>, ConstraintMatrices<Fr>)> {
let mut binfile = BinFile::new(reader)?;
let proving_key = binfile.proving_key()?;
let matrices = binfile.matrices()?;
Ok((proving_key, matrices))
}
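Hedged usage sketch (hypothetical caller): since read_zkey only needs Read + Seek, an in-memory cursor works as well as a file:

fn load_from_bytes(bytes: &[u8]) -> IoResult<(ProvingKey<Bn254>, ConstraintMatrices<Fr>)> {
    let mut cursor = std::io::Cursor::new(bytes);
    read_zkey(&mut cursor)
}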
#[derive(Debug)]
struct BinFile<'a, R> {
#[allow(dead_code)]
ftype: String,
#[allow(dead_code)]
version: u32,
sections: HashMap<u32, Vec<Section>>,
reader: &'a mut R,
}
impl<'a, R: Read + Seek> BinFile<'a, R> {
fn new(reader: &'a mut R) -> IoResult<Self> {
let mut magic = [0u8; 4];
reader.read_exact(&mut magic)?;
let version = reader.read_u32::<LittleEndian>()?;
let num_sections = reader.read_u32::<LittleEndian>()?;
let mut sections = HashMap::new();
for _ in 0..num_sections {
let section_id = reader.read_u32::<LittleEndian>()?;
let section_length = reader.read_u64::<LittleEndian>()?;
let section = sections.entry(section_id).or_insert_with(Vec::new);
section.push(Section {
position: reader.stream_position()?,
size: section_length as usize,
});
reader.seek(SeekFrom::Current(section_length as i64))?;
}
Ok(Self {
ftype: std::str::from_utf8(&magic[..]).unwrap().to_string(),
version,
sections,
reader,
})
}
fn proving_key(&mut self) -> IoResult<ProvingKey<Bn254>> {
let header = self.groth_header()?;
let ic = self.ic(header.n_public)?;
let a_query = self.a_query(header.n_vars)?;
let b_g1_query = self.b_g1_query(header.n_vars)?;
let b_g2_query = self.b_g2_query(header.n_vars)?;
let l_query = self.l_query(header.n_vars - header.n_public - 1)?;
let h_query = self.h_query(header.domain_size as usize)?;
let vk = VerifyingKey::<Bn254> {
alpha_g1: header.verifying_key.alpha_g1,
beta_g2: header.verifying_key.beta_g2,
gamma_g2: header.verifying_key.gamma_g2,
delta_g2: header.verifying_key.delta_g2,
gamma_abc_g1: ic,
};
let pk = ProvingKey::<Bn254> {
vk,
beta_g1: header.verifying_key.beta_g1,
delta_g1: header.verifying_key.delta_g1,
a_query,
b_g1_query,
b_g2_query,
h_query,
l_query,
};
Ok(pk)
}
fn get_section(&self, id: u32) -> Section {
self.sections.get(&id).unwrap()[0].clone()
}
fn groth_header(&mut self) -> IoResult<HeaderGroth> {
let section = self.get_section(2);
let header = HeaderGroth::new(&mut self.reader, &section)?;
Ok(header)
}
fn ic(&mut self, n_public: usize) -> IoResult<Vec<G1Affine>> {
// the range is non-inclusive so we do +1 to get all inputs
self.g1_section(n_public + 1, 3)
}
/// Returns the [`ConstraintMatrices`] corresponding to the zkey
pub fn matrices(&mut self) -> IoResult<ConstraintMatrices<Fr>> {
let header = self.groth_header()?;
let section = self.get_section(4);
self.reader.seek(SeekFrom::Start(section.position))?;
let num_coeffs: u32 = self.reader.read_u32::<LittleEndian>()?;
// instantiate AB
let mut matrices = vec![vec![vec![]; header.domain_size as usize]; 2];
let mut max_constraint_index = 0;
for _ in 0..num_coeffs {
let matrix: u32 = self.reader.read_u32::<LittleEndian>()?;
let constraint: u32 = self.reader.read_u32::<LittleEndian>()?;
let signal: u32 = self.reader.read_u32::<LittleEndian>()?;
let value: Fr = deserialize_field_fr(&mut self.reader)?;
max_constraint_index = std::cmp::max(max_constraint_index, constraint);
matrices[matrix as usize][constraint as usize].push((value, signal as usize));
}
let num_constraints = max_constraint_index as usize - header.n_public;
// Remove the public input constraints, Arkworks adds them later
matrices.iter_mut().for_each(|m| {
m.truncate(num_constraints);
});
// This is taken from Arkworks' to_matrices() function
let a = matrices[0].clone();
let b = matrices[1].clone();
let a_num_non_zero: usize = a.iter().map(|lc| lc.len()).sum();
let b_num_non_zero: usize = b.iter().map(|lc| lc.len()).sum();
let matrices = ConstraintMatrices {
num_instance_variables: header.n_public + 1,
num_witness_variables: header.n_vars - header.n_public,
num_constraints,
a_num_non_zero,
b_num_non_zero,
c_num_non_zero: 0,
a,
b,
c: vec![],
};
Ok(matrices)
}
fn a_query(&mut self, n_vars: usize) -> IoResult<Vec<G1Affine>> {
self.g1_section(n_vars, 5)
}
fn b_g1_query(&mut self, n_vars: usize) -> IoResult<Vec<G1Affine>> {
self.g1_section(n_vars, 6)
}
fn b_g2_query(&mut self, n_vars: usize) -> IoResult<Vec<G2Affine>> {
self.g2_section(n_vars, 7)
}
fn l_query(&mut self, n_vars: usize) -> IoResult<Vec<G1Affine>> {
self.g1_section(n_vars, 8)
}
fn h_query(&mut self, n_vars: usize) -> IoResult<Vec<G1Affine>> {
self.g1_section(n_vars, 9)
}
fn g1_section(&mut self, num: usize, section_id: usize) -> IoResult<Vec<G1Affine>> {
let section = self.get_section(section_id as u32);
self.reader.seek(SeekFrom::Start(section.position))?;
deserialize_g1_vec(self.reader, num as u32)
}
fn g2_section(&mut self, num: usize, section_id: usize) -> IoResult<Vec<G2Affine>> {
let section = self.get_section(section_id as u32);
self.reader.seek(SeekFrom::Start(section.position))?;
deserialize_g2_vec(self.reader, num as u32)
}
}
#[derive(Default, Clone, Debug, CanonicalDeserialize)]
pub struct ZVerifyingKey {
alpha_g1: G1Affine,
beta_g1: G1Affine,
beta_g2: G2Affine,
gamma_g2: G2Affine,
delta_g1: G1Affine,
delta_g2: G2Affine,
}
impl ZVerifyingKey {
fn new<R: Read>(reader: &mut R) -> IoResult<Self> {
let alpha_g1 = deserialize_g1(reader)?;
let beta_g1 = deserialize_g1(reader)?;
let beta_g2 = deserialize_g2(reader)?;
let gamma_g2 = deserialize_g2(reader)?;
let delta_g1 = deserialize_g1(reader)?;
let delta_g2 = deserialize_g2(reader)?;
Ok(Self {
alpha_g1,
beta_g1,
beta_g2,
gamma_g2,
delta_g1,
delta_g2,
})
}
}
#[derive(Clone, Debug)]
struct HeaderGroth {
#[allow(dead_code)]
n8q: u32,
#[allow(dead_code)]
q: BigInteger256,
#[allow(dead_code)]
n8r: u32,
#[allow(dead_code)]
r: BigInteger256,
n_vars: usize,
n_public: usize,
domain_size: u32,
#[allow(dead_code)]
power: u32,
verifying_key: ZVerifyingKey,
}
impl HeaderGroth {
fn new<R: Read + Seek>(reader: &mut R, section: &Section) -> IoResult<Self> {
reader.seek(SeekFrom::Start(section.position))?;
Self::read(reader)
}
fn read<R: Read>(mut reader: &mut R) -> IoResult<Self> {
// TODO: Impl From<u32> in Arkworks
let n8q: u32 = u32::deserialize_uncompressed(&mut reader)?;
// base field modulus q of BN254
let q = BigInteger256::deserialize_uncompressed(&mut reader)?;
let n8r: u32 = u32::deserialize_uncompressed(&mut reader)?;
// scalar field modulus r of BN254 (the group order)
let r = BigInteger256::deserialize_uncompressed(&mut reader)?;
let n_vars = u32::deserialize_uncompressed(&mut reader)? as usize;
let n_public = u32::deserialize_uncompressed(&mut reader)? as usize;
let domain_size: u32 = u32::deserialize_uncompressed(&mut reader)?;
let power = log2(domain_size as usize);
let verifying_key = ZVerifyingKey::new(&mut reader)?;
Ok(Self {
n8q,
q,
n8r,
r,
n_vars,
n_public,
domain_size,
power,
verifying_key,
})
}
}
// need to divide by R, since snarkjs outputs the zkey with coefficients
// multiplied by R^2
fn deserialize_field_fr<R: Read>(reader: &mut R) -> IoResult<Fr> {
let bigint = BigInteger256::deserialize_uncompressed(reader)?;
Ok(Fr::new_unchecked(Fr::new_unchecked(bigint).into_bigint()))
}
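As a reading aid for the double new_unchecked (assuming snarkjs stores s = a·R² mod r, with R the Montgomery constant): Fr::new_unchecked(s) builds the element whose Montgomery representation is s, i.e. with value s/R = aR; into_bigint() yields that value as a canonical integer; wrapping it once more divides by R again, so the net effect is division by R²:

\[
  s = aR^2 \;\xrightarrow{\texttt{new\_unchecked}}\; aR \;\xrightarrow{\texttt{into\_bigint}}\; aR \;\xrightarrow{\texttt{new\_unchecked}}\; a .
\]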
// skips the multiplication by R because Circom points are already in Montgomery form
fn deserialize_field<R: Read>(reader: &mut R) -> IoResult<Fq> {
let bigint = BigInteger256::deserialize_uncompressed(reader)?;
// if you use Fq::new it multiplies by R
Ok(Fq::new_unchecked(bigint))
}
pub fn deserialize_field2<R: Read>(reader: &mut R) -> IoResult<Fq2> {
let c0 = deserialize_field(reader)?;
let c1 = deserialize_field(reader)?;
Ok(Fq2::new(c0, c1))
}
fn deserialize_g1<R: Read>(reader: &mut R) -> IoResult<G1Affine> {
let x = deserialize_field(reader)?;
let y = deserialize_field(reader)?;
let infinity = x.is_zero() && y.is_zero();
if infinity {
Ok(G1Affine::identity())
} else {
Ok(G1Affine::new(x, y))
}
}
fn deserialize_g2<R: Read>(reader: &mut R) -> IoResult<G2Affine> {
let f1 = deserialize_field2(reader)?;
let f2 = deserialize_field2(reader)?;
let infinity = f1.is_zero() && f2.is_zero();
if infinity {
Ok(G2Affine::identity())
} else {
Ok(G2Affine::new(f1, f2))
}
}
fn deserialize_g1_vec<R: Read>(reader: &mut R, n_vars: u32) -> IoResult<Vec<G1Affine>> {
(0..n_vars).map(|_| deserialize_g1(reader)).collect()
}
fn deserialize_g2_vec<R: Read>(reader: &mut R, n_vars: u32) -> IoResult<Vec<G2Affine>> {
(0..n_vars).map(|_| deserialize_g2(reader)).collect()
}

rln/src/error.rs (new file, 83 lines)

@@ -0,0 +1,83 @@
use std::{array::TryFromSliceError, num::TryFromIntError};
use ark_relations::r1cs::SynthesisError;
use num_bigint::{BigInt, ParseBigIntError};
use thiserror::Error;
use zerokit_utils::error::{FromConfigError, HashError, ZerokitMerkleTreeError};
use crate::circuit::{
error::{WitnessCalcError, ZKeyReadError},
Fr,
};
/// Errors that can occur during RLN utility operations (conversions, parsing, etc.)
#[derive(Debug, thiserror::Error)]
pub enum UtilsError {
#[error("Expected radix 10 or 16")]
WrongRadix,
#[error("Failed to parse big integer: {0}")]
ParseBigInt(#[from] ParseBigIntError),
#[error("Failed to convert to usize: {0}")]
ToUsize(#[from] TryFromIntError),
#[error("Failed to convert from slice: {0}")]
FromSlice(#[from] TryFromSliceError),
#[error("Input data too short: expected at least {expected} bytes, got {actual} bytes")]
InsufficientData { expected: usize, actual: usize },
}
/// Errors that can occur during RLN protocol operations (proof generation, verification, etc.)
#[derive(Debug, thiserror::Error)]
pub enum ProtocolError {
#[error("Error producing proof: {0}")]
Synthesis(#[from] SynthesisError),
#[error("RLN utility error: {0}")]
Utils(#[from] UtilsError),
#[error("Error calculating witness: {0}")]
WitnessCalc(#[from] WitnessCalcError),
#[error("Expected to read {0} bytes but read only {1} bytes")]
InvalidReadLen(usize, usize),
#[error("Cannot convert bigint {0:?} to biguint")]
BigUintConversion(BigInt),
#[error("Message id ({0}) is not within user_message_limit ({1})")]
InvalidMessageId(Fr, Fr),
#[error("Merkle proof length mismatch: expected {0}, got {1}")]
InvalidMerkleProofLength(usize, usize),
#[error("External nullifiers mismatch: {0} != {1}")]
ExternalNullifierMismatch(Fr, Fr),
#[error("Cannot recover secret: division by zero")]
DivisionByZero,
#[error("Merkle tree operation error: {0}")]
MerkleTree(#[from] ZerokitMerkleTreeError),
#[error("Hash computation error: {0}")]
Hash(#[from] HashError),
#[error("Proof serialization error: {0}")]
SerializationError(#[from] ark_serialize::SerializationError),
}
/// Errors that can occur during proof verification
#[derive(Error, Debug)]
pub enum VerifyError {
#[error("Invalid proof provided")]
InvalidProof,
#[error("Expected one of the provided roots")]
InvalidRoot,
#[error("Signal value does not match")]
InvalidSignal,
}
/// Top-level RLN error type encompassing all RLN operations
#[derive(Debug, thiserror::Error)]
pub enum RLNError {
#[error("Configuration error: {0}")]
Config(#[from] FromConfigError),
#[error("Merkle tree error: {0}")]
MerkleTree(#[from] ZerokitMerkleTreeError),
#[error("Hash error: {0}")]
Hash(#[from] HashError),
#[error("ZKey error: {0}")]
ZKey(#[from] ZKeyReadError),
#[error("Protocol error: {0}")]
Protocol(#[from] ProtocolError),
#[error("Verification error: {0}")]
Verify(#[from] VerifyError),
}
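Hedged sketch of how these #[from] conversions compose with `?` (the helper below is hypothetical): a ProtocolError returned anywhere inside an RLN operation bubbles up as RLNError::Protocol with no manual mapping.

fn protocol_step() -> Result<(), ProtocolError> {
    Err(ProtocolError::DivisionByZero)
}

fn rln_operation() -> Result<(), RLNError> {
    protocol_step()?; // uses the derived From<ProtocolError> for RLNError
    Ok(())
}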


@@ -1,547 +0,0 @@
// This crate implements the public Foreign Function Interface (FFI) for the RLN module
use std::slice;
use crate::public::{hash as public_hash, poseidon_hash as public_poseidon_hash, RLN};
// Macro to call methods with an arbitrary number of arguments.
// First argument to the macro is the context,
// second is the actual method on `RLN`,
// and the rest are all other arguments to the method
#[cfg(not(feature = "stateless"))]
macro_rules! call {
($instance:expr, $method:ident $(, $arg:expr)*) => {
{
let new_instance: &mut RLN = $instance.process();
match new_instance.$method($($arg.process()),*) {
Ok(()) => {
true
}
Err(err) => {
eprintln!("execution error: {err}");
false
}
}
}
}
}
// Macro to call methods with an arbitrary number of arguments,
// whose last argument is an output buffer pointer.
// First argument to the macro is the context,
// second is the actual method on `RLN`,
// third is the aforementioned output buffer argument,
// and the rest are all other arguments to the method
macro_rules! call_with_output_arg {
// this variant is needed for the case when
// there are zero other arguments
($instance:expr, $method:ident, $output_arg:expr) => {
{
let mut output_data: Vec<u8> = Vec::new();
let new_instance = $instance.process();
match new_instance.$method(&mut output_data) {
Ok(()) => {
unsafe { *$output_arg = Buffer::from(&output_data[..]) };
std::mem::forget(output_data);
true
}
Err(err) => {
std::mem::forget(output_data);
eprintln!("execution error: {err}");
false
}
}
}
};
($instance:expr, $method:ident, $output_arg:expr, $( $arg:expr ),* ) => {
{
let mut output_data: Vec<u8> = Vec::new();
let new_instance = $instance.process();
match new_instance.$method($($arg.process()),*, &mut output_data) {
Ok(()) => {
unsafe { *$output_arg = Buffer::from(&output_data[..]) };
std::mem::forget(output_data);
true
}
Err(err) => {
std::mem::forget(output_data);
eprintln!("execution error: {err}");
false
}
}
}
};
}
// Macro to call free functions with an arbitrary number of arguments,
// i.e. functions not implemented on a ctx RLN object.
// First argument is the function to call,
// second argument is the output buffer argument,
// and the remaining arguments are all other inputs to the function
macro_rules! no_ctx_call_with_output_arg {
($method:ident, $output_arg:expr, $( $arg:expr ),* ) => {
{
let mut output_data: Vec<u8> = Vec::new();
match $method($($arg.process()),*, &mut output_data) {
Ok(()) => {
unsafe { *$output_arg = Buffer::from(&output_data[..]) };
std::mem::forget(output_data);
true
}
Err(err) => {
std::mem::forget(output_data);
eprintln!("execution error: {err}");
false
}
}
}
}
}
// Macro to call methods with an arbitrary number of arguments,
// whose last argument is a pointer to bool.
// First argument to the macro is the context,
// second is the actual method on `RLN`,
// third is the aforementioned bool argument,
// and the rest are all other arguments to the method
macro_rules! call_with_bool_arg {
($instance:expr, $method:ident, $bool_arg:expr, $( $arg:expr ),* ) => {
{
let new_instance = $instance.process();
if match new_instance.$method($($arg.process()),*,) {
Ok(result) => result,
Err(err) => {
eprintln!("execution error: {err}");
return false
},
} {
unsafe { *$bool_arg = true };
} else {
unsafe { *$bool_arg = false };
};
true
}
}
}
trait ProcessArg {
type ReturnType;
fn process(self) -> Self::ReturnType;
}
impl ProcessArg for usize {
type ReturnType = usize;
fn process(self) -> Self::ReturnType {
self
}
}
impl ProcessArg for *const Buffer {
type ReturnType = &'static [u8];
fn process(self) -> Self::ReturnType {
<&[u8]>::from(unsafe { &*self })
}
}
impl ProcessArg for *const RLN {
type ReturnType = &'static RLN;
fn process(self) -> Self::ReturnType {
unsafe { &*self }
}
}
impl ProcessArg for *mut RLN {
type ReturnType = &'static mut RLN;
fn process(self) -> Self::ReturnType {
unsafe { &mut *self }
}
}
///// Buffer struct is taken from
///// <https://github.com/celo-org/celo-threshold-bls-rs/blob/master/crates/threshold-bls-ffi/src/ffi.rs>
/////
///// Also heavily inspired by <https://github.com/kilic/rln/blob/master/src/ffi.rs>
#[repr(C)]
#[derive(Clone, Debug, PartialEq)]
pub struct Buffer {
pub ptr: *const u8,
pub len: usize,
}
impl From<&[u8]> for Buffer {
fn from(src: &[u8]) -> Self {
Self {
ptr: src.as_ptr(),
len: src.len(),
}
}
}
impl<'a> From<&Buffer> for &'a [u8] {
fn from(src: &Buffer) -> &'a [u8] {
unsafe { slice::from_raw_parts(src.ptr, src.len) }
}
}
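A hedged round-trip sketch of the Buffer semantics: it only borrows memory (ptr + len), so the backing bytes must outlive it, which is why the macros above mem::forget the output Vec and leave deallocation to the foreign caller.

fn buffer_roundtrip() {
    let data = vec![1u8, 2, 3];
    let buf = Buffer::from(&data[..]);
    let view: &[u8] = (&buf).into();
    assert_eq!(view, &[1, 2, 3]); // valid only while `data` is alive
}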
// TODO: check if there are security implications of using this clippy allow
// #[allow(clippy::not_unsafe_ptr_arg_deref)]
////////////////////////////////////////////////////////
// RLN APIs
////////////////////////////////////////////////////////
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[cfg(not(feature = "stateless"))]
#[no_mangle]
pub extern "C" fn new(tree_height: usize, input_buffer: *const Buffer, ctx: *mut *mut RLN) -> bool {
match RLN::new(tree_height, input_buffer.process()) {
Ok(rln) => {
unsafe { *ctx = Box::into_raw(Box::new(rln)) };
true
}
Err(err) => {
eprintln!("could not instantiate rln: {err}");
false
}
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[cfg(feature = "stateless")]
#[no_mangle]
pub extern "C" fn new(ctx: *mut *mut RLN) -> bool {
match RLN::new() {
Ok(rln) => {
unsafe { *ctx = Box::into_raw(Box::new(rln)) };
true
}
Err(err) => {
eprintln!("could not instantiate rln: {err}");
false
}
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[cfg(not(feature = "stateless"))]
#[no_mangle]
pub extern "C" fn new_with_params(
tree_height: usize,
zkey_buffer: *const Buffer,
graph_data: *const Buffer,
tree_config: *const Buffer,
ctx: *mut *mut RLN,
) -> bool {
match RLN::new_with_params(
tree_height,
zkey_buffer.process().to_vec(),
graph_data.process().to_vec(),
tree_config.process(),
) {
Ok(rln) => {
unsafe { *ctx = Box::into_raw(Box::new(rln)) };
true
}
Err(err) => {
eprintln!("could not instantiate rln: {err}");
false
}
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[cfg(feature = "stateless")]
#[no_mangle]
pub extern "C" fn new_with_params(
zkey_buffer: *const Buffer,
graph_buffer: *const Buffer,
ctx: *mut *mut RLN,
) -> bool {
match RLN::new_with_params(
zkey_buffer.process().to_vec(),
graph_buffer.process().to_vec(),
) {
Ok(rln) => {
unsafe { *ctx = Box::into_raw(Box::new(rln)) };
true
}
Err(err) => {
eprintln!("could not instantiate rln: {err}");
false
}
}
}
////////////////////////////////////////////////////////
// Merkle tree APIs
////////////////////////////////////////////////////////
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn set_tree(ctx: *mut RLN, tree_height: usize) -> bool {
call!(ctx, set_tree, tree_height)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn delete_leaf(ctx: *mut RLN, index: usize) -> bool {
call!(ctx, delete_leaf, index)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn set_leaf(ctx: *mut RLN, index: usize, input_buffer: *const Buffer) -> bool {
call!(ctx, set_leaf, index, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn get_leaf(ctx: *mut RLN, index: usize, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, get_leaf, output_buffer, index)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn leaves_set(ctx: *mut RLN) -> usize {
ctx.process().leaves_set()
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn set_next_leaf(ctx: *mut RLN, input_buffer: *const Buffer) -> bool {
call!(ctx, set_next_leaf, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn set_leaves_from(
ctx: *mut RLN,
index: usize,
input_buffer: *const Buffer,
) -> bool {
call!(ctx, set_leaves_from, index, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn init_tree_with_leaves(ctx: *mut RLN, input_buffer: *const Buffer) -> bool {
call!(ctx, init_tree_with_leaves, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn atomic_operation(
ctx: *mut RLN,
index: usize,
leaves_buffer: *const Buffer,
indices_buffer: *const Buffer,
) -> bool {
call!(ctx, atomic_operation, index, leaves_buffer, indices_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn seq_atomic_operation(
ctx: *mut RLN,
leaves_buffer: *const Buffer,
indices_buffer: *const Buffer,
) -> bool {
call!(
ctx,
atomic_operation,
ctx.process().leaves_set(),
leaves_buffer,
indices_buffer
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn get_root(ctx: *const RLN, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, get_root, output_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn get_proof(ctx: *const RLN, index: usize, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, get_proof, output_buffer, index)
}
////////////////////////////////////////////////////////
// zkSNARKs APIs
////////////////////////////////////////////////////////
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn prove(
ctx: *mut RLN,
input_buffer: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(ctx, prove, output_buffer, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn verify(
ctx: *const RLN,
proof_buffer: *const Buffer,
proof_is_valid_ptr: *mut bool,
) -> bool {
call_with_bool_arg!(ctx, verify, proof_is_valid_ptr, proof_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn generate_rln_proof(
ctx: *mut RLN,
input_buffer: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(ctx, generate_rln_proof, output_buffer, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn generate_rln_proof_with_witness(
ctx: *mut RLN,
input_buffer: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(
ctx,
generate_rln_proof_with_witness,
output_buffer,
input_buffer
)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn verify_rln_proof(
ctx: *const RLN,
proof_buffer: *const Buffer,
proof_is_valid_ptr: *mut bool,
) -> bool {
call_with_bool_arg!(ctx, verify_rln_proof, proof_is_valid_ptr, proof_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn verify_with_roots(
ctx: *const RLN,
proof_buffer: *const Buffer,
roots_buffer: *const Buffer,
proof_is_valid_ptr: *mut bool,
) -> bool {
call_with_bool_arg!(
ctx,
verify_with_roots,
proof_is_valid_ptr,
proof_buffer,
roots_buffer
)
}
////////////////////////////////////////////////////////
// Utils
////////////////////////////////////////////////////////
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn key_gen(ctx: *const RLN, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, key_gen, output_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn seeded_key_gen(
ctx: *const RLN,
input_buffer: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(ctx, seeded_key_gen, output_buffer, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn extended_key_gen(ctx: *const RLN, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, extended_key_gen, output_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn seeded_extended_key_gen(
ctx: *const RLN,
input_buffer: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(ctx, seeded_extended_key_gen, output_buffer, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn recover_id_secret(
ctx: *const RLN,
input_proof_buffer_1: *const Buffer,
input_proof_buffer_2: *const Buffer,
output_buffer: *mut Buffer,
) -> bool {
call_with_output_arg!(
ctx,
recover_id_secret,
output_buffer,
input_proof_buffer_1,
input_proof_buffer_2
)
}
////////////////////////////////////////////////////////
// Persistent metadata APIs
////////////////////////////////////////////////////////
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn set_metadata(ctx: *mut RLN, input_buffer: *const Buffer) -> bool {
call!(ctx, set_metadata, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn get_metadata(ctx: *const RLN, output_buffer: *mut Buffer) -> bool {
call_with_output_arg!(ctx, get_metadata, output_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
#[cfg(not(feature = "stateless"))]
pub extern "C" fn flush(ctx: *mut RLN) -> bool {
call!(ctx, flush)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn hash(input_buffer: *const Buffer, output_buffer: *mut Buffer) -> bool {
no_ctx_call_with_output_arg!(public_hash, output_buffer, input_buffer)
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
#[no_mangle]
pub extern "C" fn poseidon_hash(input_buffer: *const Buffer, output_buffer: *mut Buffer) -> bool {
no_ctx_call_with_output_arg!(public_poseidon_hash, output_buffer, input_buffer)
}

rln/src/ffi/ffi_rln.rs (new file, 566 lines)

@@ -0,0 +1,566 @@
#![allow(non_camel_case_types)]
use num_bigint::BigInt;
use safer_ffi::{boxed::Box_, derive_ReprC, ffi_export, prelude::repr_c};
#[cfg(not(feature = "stateless"))]
use {safer_ffi::prelude::char_p, std::fs::File, std::io::Read};
use super::ffi_utils::{CBoolResult, CFr, CResult};
use crate::prelude::*;
#[cfg(not(feature = "stateless"))]
const MAX_CONFIG_SIZE: u64 = 1024 * 1024; // 1MB
// FFI_RLN
#[derive_ReprC]
#[repr(opaque)]
pub struct FFI_RLN(pub(crate) RLN);
// RLN initialization APIs
#[cfg(not(feature = "stateless"))]
#[ffi_export]
pub fn ffi_rln_new(
tree_depth: usize,
config_path: char_p::Ref<'_>,
) -> CResult<repr_c::Box<FFI_RLN>, repr_c::String> {
let config_str = File::open(config_path.to_str())
.and_then(|mut file| {
let metadata = file.metadata()?;
if metadata.len() > MAX_CONFIG_SIZE {
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidData,
format!(
"Config file too large: {} bytes (max {} bytes)",
metadata.len(),
MAX_CONFIG_SIZE
),
));
}
let mut s = String::new();
file.read_to_string(&mut s)?;
Ok(s)
})
.unwrap_or_default();
match RLN::new(tree_depth, config_str.as_str()) {
Ok(rln) => CResult {
ok: Some(Box_::new(FFI_RLN(rln))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[cfg(feature = "stateless")]
#[ffi_export]
pub fn ffi_rln_new() -> CResult<repr_c::Box<FFI_RLN>, repr_c::String> {
match RLN::new() {
Ok(rln) => CResult {
ok: Some(Box_::new(FFI_RLN(rln))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[cfg(not(feature = "stateless"))]
#[ffi_export]
pub fn ffi_rln_new_with_params(
tree_depth: usize,
zkey_data: &repr_c::Vec<u8>,
graph_data: &repr_c::Vec<u8>,
config_path: char_p::Ref<'_>,
) -> CResult<repr_c::Box<FFI_RLN>, repr_c::String> {
let config_str = File::open(config_path.to_str())
.and_then(|mut file| {
let metadata = file.metadata()?;
if metadata.len() > MAX_CONFIG_SIZE {
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidData,
format!(
"Config file too large: {} bytes (max {} bytes)",
metadata.len(),
MAX_CONFIG_SIZE
),
));
}
let mut s = String::new();
file.read_to_string(&mut s)?;
Ok(s)
})
.unwrap_or_default();
match RLN::new_with_params(
tree_depth,
zkey_data.to_vec(),
graph_data.to_vec(),
config_str.as_str(),
) {
Ok(rln) => CResult {
ok: Some(Box_::new(FFI_RLN(rln))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[cfg(feature = "stateless")]
#[ffi_export]
pub fn ffi_rln_new_with_params(
zkey_data: &repr_c::Vec<u8>,
graph_data: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLN>, repr_c::String> {
match RLN::new_with_params(zkey_data.to_vec(), graph_data.to_vec()) {
Ok(rln) => CResult {
ok: Some(Box_::new(FFI_RLN(rln))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_free(rln: repr_c::Box<FFI_RLN>) {
drop(rln);
}
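Hedged sketch of the intended ownership flow (stateless variant): the CResult hands back a repr_c::Box<FFI_RLN> that the caller must eventually return to ffi_rln_free.

#[cfg(feature = "stateless")]
fn create_and_release() {
    let res = ffi_rln_new();
    if let Some(rln) = res.ok {
        ffi_rln_free(rln); // releases the opaque handle
    }
}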
// RLNProof
#[derive_ReprC]
#[repr(opaque)]
pub struct FFI_RLNProof(pub(crate) RLNProof);
#[ffi_export]
pub fn ffi_rln_proof_get_values(
rln_proof: &repr_c::Box<FFI_RLNProof>,
) -> repr_c::Box<FFI_RLNProofValues> {
Box_::new(FFI_RLNProofValues(rln_proof.0.proof_values))
}
#[ffi_export]
pub fn ffi_rln_proof_to_bytes_le(
rln_proof: &repr_c::Box<FFI_RLNProof>,
) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match rln_proof_to_bytes_le(&rln_proof.0) {
Ok(bytes) => CResult {
ok: Some(bytes.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_proof_to_bytes_be(
rln_proof: &repr_c::Box<FFI_RLNProof>,
) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match rln_proof_to_bytes_be(&rln_proof.0) {
Ok(bytes) => CResult {
ok: Some(bytes.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_le_to_rln_proof(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNProof>, repr_c::String> {
match bytes_le_to_rln_proof(bytes) {
Ok((rln_proof, _)) => CResult {
ok: Some(Box_::new(FFI_RLNProof(rln_proof))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_rln_proof(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNProof>, repr_c::String> {
match bytes_be_to_rln_proof(bytes) {
Ok((rln_proof, _)) => CResult {
ok: Some(Box_::new(FFI_RLNProof(rln_proof))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_proof_free(rln_proof: repr_c::Box<FFI_RLNProof>) {
drop(rln_proof);
}
// RLNWitnessInput
#[derive_ReprC]
#[repr(opaque)]
pub struct FFI_RLNWitnessInput(pub(crate) RLNWitnessInput);
#[ffi_export]
pub fn ffi_rln_witness_input_new(
identity_secret: &CFr,
user_message_limit: &CFr,
message_id: &CFr,
path_elements: &repr_c::Vec<CFr>,
identity_path_index: &repr_c::Vec<u8>,
x: &CFr,
external_nullifier: &CFr,
) -> CResult<repr_c::Box<FFI_RLNWitnessInput>, repr_c::String> {
let mut identity_secret_fr = identity_secret.0;
let path_elements: Vec<Fr> = path_elements.iter().map(|cfr| cfr.0).collect();
let identity_path_index: Vec<u8> = identity_path_index.iter().copied().collect();
match RLNWitnessInput::new(
IdSecret::from(&mut identity_secret_fr),
user_message_limit.0,
message_id.0,
path_elements,
identity_path_index,
x.0,
external_nullifier.0,
) {
Ok(witness) => CResult {
ok: Some(Box_::new(FFI_RLNWitnessInput(witness))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_witness_to_bytes_le(
witness: &repr_c::Box<FFI_RLNWitnessInput>,
) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match rln_witness_to_bytes_le(&witness.0) {
Ok(bytes) => CResult {
ok: Some(bytes.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_witness_to_bytes_be(
witness: &repr_c::Box<FFI_RLNWitnessInput>,
) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match rln_witness_to_bytes_be(&witness.0) {
Ok(bytes) => CResult {
ok: Some(bytes.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_le_to_rln_witness(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNWitnessInput>, repr_c::String> {
match bytes_le_to_rln_witness(bytes) {
Ok((witness, _)) => CResult {
ok: Some(Box_::new(FFI_RLNWitnessInput(witness))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_rln_witness(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNWitnessInput>, repr_c::String> {
match bytes_be_to_rln_witness(bytes) {
Ok((witness, _)) => CResult {
ok: Some(Box_::new(FFI_RLNWitnessInput(witness))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_witness_to_bigint_json(
witness: &repr_c::Box<FFI_RLNWitnessInput>,
) -> CResult<repr_c::String, repr_c::String> {
match rln_witness_to_bigint_json(&witness.0) {
Ok(json) => CResult {
ok: Some(json.to_string().into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_witness_input_free(witness: repr_c::Box<FFI_RLNWitnessInput>) {
drop(witness);
}
// RLNProofValues
#[derive_ReprC]
#[repr(opaque)]
pub struct FFI_RLNProofValues(pub(crate) RLNProofValues);
#[ffi_export]
pub fn ffi_rln_proof_values_get_y(pv: &repr_c::Box<FFI_RLNProofValues>) -> repr_c::Box<CFr> {
CFr::from(pv.0.y).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_get_nullifier(
pv: &repr_c::Box<FFI_RLNProofValues>,
) -> repr_c::Box<CFr> {
CFr::from(pv.0.nullifier).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_get_root(pv: &repr_c::Box<FFI_RLNProofValues>) -> repr_c::Box<CFr> {
CFr::from(pv.0.root).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_get_x(pv: &repr_c::Box<FFI_RLNProofValues>) -> repr_c::Box<CFr> {
CFr::from(pv.0.x).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_get_external_nullifier(
pv: &repr_c::Box<FFI_RLNProofValues>,
) -> repr_c::Box<CFr> {
CFr::from(pv.0.external_nullifier).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_to_bytes_le(pv: &repr_c::Box<FFI_RLNProofValues>) -> repr_c::Vec<u8> {
rln_proof_values_to_bytes_le(&pv.0).into()
}
#[ffi_export]
pub fn ffi_rln_proof_values_to_bytes_be(pv: &repr_c::Box<FFI_RLNProofValues>) -> repr_c::Vec<u8> {
rln_proof_values_to_bytes_be(&pv.0).into()
}
#[ffi_export]
pub fn ffi_bytes_le_to_rln_proof_values(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNProofValues>, repr_c::String> {
match bytes_le_to_rln_proof_values(bytes) {
Ok((pv, _)) => CResult {
ok: Some(Box_::new(FFI_RLNProofValues(pv))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_rln_proof_values(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Box<FFI_RLNProofValues>, repr_c::String> {
match bytes_be_to_rln_proof_values(bytes) {
Ok((pv, _)) => CResult {
ok: Some(Box_::new(FFI_RLNProofValues(pv))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_rln_proof_values_free(proof_values: repr_c::Box<FFI_RLNProofValues>) {
drop(proof_values);
}
// Proof generation APIs
#[ffi_export]
pub fn ffi_generate_rln_proof(
rln: &repr_c::Box<FFI_RLN>,
witness: &repr_c::Box<FFI_RLNWitnessInput>,
) -> CResult<repr_c::Box<FFI_RLNProof>, repr_c::String> {
match rln.0.generate_rln_proof(&witness.0) {
Ok((proof, proof_values)) => {
let rln_proof = RLNProof {
proof_values,
proof,
};
CResult {
ok: Some(Box_::new(FFI_RLNProof(rln_proof))),
err: None,
}
}
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_generate_rln_proof_with_witness(
rln: &repr_c::Box<FFI_RLN>,
calculated_witness: &repr_c::Vec<repr_c::String>,
witness: &repr_c::Box<FFI_RLNWitnessInput>,
) -> CResult<repr_c::Box<FFI_RLNProof>, repr_c::String> {
let calculated_witness_bigint: Result<Vec<BigInt>, _> = calculated_witness
.iter()
.map(|s| {
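            // Assumption: safer_ffi's repr_c::String invariant (valid UTF-8)
            // holds here, which is what makes skipping the UTF-8 check sound.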
let s_str = unsafe { std::str::from_utf8_unchecked(s.as_bytes()) };
s_str.parse::<BigInt>()
})
.collect();
let calculated_witness_bigint = match calculated_witness_bigint {
Ok(w) => w,
Err(err) => {
return CResult {
ok: None,
err: Some(format!("Failed to parse witness: {}", err).into()),
}
}
};
match rln
.0
.generate_rln_proof_with_witness(calculated_witness_bigint, &witness.0)
{
Ok((proof, proof_values)) => {
let rln_proof = RLNProof {
proof_values,
proof,
};
CResult {
ok: Some(Box_::new(FFI_RLNProof(rln_proof))),
err: None,
}
}
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
// Proof verification APIs
#[cfg(not(feature = "stateless"))]
#[ffi_export]
pub fn ffi_verify_rln_proof(
rln: &repr_c::Box<FFI_RLN>,
rln_proof: &repr_c::Box<FFI_RLNProof>,
x: &CFr,
) -> CBoolResult {
match rln
.0
.verify_rln_proof(&rln_proof.0.proof, &rln_proof.0.proof_values, &x.0)
{
Ok(verified) => CBoolResult {
ok: verified,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_verify_with_roots(
rln: &repr_c::Box<FFI_RLN>,
rln_proof: &repr_c::Box<FFI_RLNProof>,
roots: &repr_c::Vec<CFr>,
x: &CFr,
) -> CBoolResult {
let roots_fr: Vec<Fr> = roots.iter().map(|cfr| cfr.0).collect();
match rln.0.verify_with_roots(
&rln_proof.0.proof,
&rln_proof.0.proof_values,
&x.0,
&roots_fr,
) {
Ok(verified) => CBoolResult {
ok: verified,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
// Identity secret recovery API
#[ffi_export]
pub fn ffi_recover_id_secret(
proof_values_1: &repr_c::Box<FFI_RLNProofValues>,
proof_values_2: &repr_c::Box<FFI_RLNProofValues>,
) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match recover_id_secret(&proof_values_1.0, &proof_values_2.0) {
Ok(secret) => CResult {
ok: Some(Box_::new(CFr::from(*secret))),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}

rln/src/ffi/ffi_tree.rs (new file, 269 lines)

@@ -0,0 +1,269 @@
#![allow(non_camel_case_types)]
#![cfg(not(feature = "stateless"))]
use safer_ffi::{boxed::Box_, derive_ReprC, ffi_export, prelude::repr_c};
use super::{
ffi_rln::FFI_RLN,
ffi_utils::{CBoolResult, CFr, CResult},
};
// MerkleProof
#[derive_ReprC]
#[repr(C)]
pub struct FFI_MerkleProof {
pub path_elements: repr_c::Vec<CFr>,
pub path_index: repr_c::Vec<u8>,
}
#[ffi_export]
pub fn ffi_merkle_proof_free(merkle_proof: repr_c::Box<FFI_MerkleProof>) {
drop(merkle_proof);
}
// Merkle tree management APIs
#[ffi_export]
pub fn ffi_set_tree(rln: &mut repr_c::Box<FFI_RLN>, tree_depth: usize) -> CBoolResult {
match rln.0.set_tree(tree_depth) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
// Merkle tree leaf operations
#[ffi_export]
pub fn ffi_delete_leaf(rln: &mut repr_c::Box<FFI_RLN>, index: usize) -> CBoolResult {
match rln.0.delete_leaf(index) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_set_leaf(rln: &mut repr_c::Box<FFI_RLN>, index: usize, leaf: &CFr) -> CBoolResult {
match rln.0.set_leaf(index, leaf.0) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_get_leaf(
rln: &repr_c::Box<FFI_RLN>,
index: usize,
) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match rln.0.get_leaf(index) {
Ok(leaf) => CResult {
ok: Some(CFr::from(leaf).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_leaves_set(rln: &repr_c::Box<FFI_RLN>) -> usize {
rln.0.leaves_set()
}
#[ffi_export]
pub fn ffi_set_next_leaf(rln: &mut repr_c::Box<FFI_RLN>, leaf: &CFr) -> CBoolResult {
match rln.0.set_next_leaf(leaf.0) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_set_leaves_from(
rln: &mut repr_c::Box<FFI_RLN>,
index: usize,
leaves: &repr_c::Vec<CFr>,
) -> CBoolResult {
let leaves_vec: Vec<_> = leaves.iter().map(|cfr| cfr.0).collect();
match rln.0.set_leaves_from(index, leaves_vec) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_init_tree_with_leaves(
rln: &mut repr_c::Box<FFI_RLN>,
leaves: &repr_c::Vec<CFr>,
) -> CBoolResult {
let leaves_vec: Vec<_> = leaves.iter().map(|cfr| cfr.0).collect();
match rln.0.init_tree_with_leaves(leaves_vec) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
// Atomic operations
#[ffi_export]
pub fn ffi_atomic_operation(
rln: &mut repr_c::Box<FFI_RLN>,
index: usize,
leaves: &repr_c::Vec<CFr>,
indices: &repr_c::Vec<usize>,
) -> CBoolResult {
let leaves_vec: Vec<_> = leaves.iter().map(|cfr| cfr.0).collect();
let indices_vec: Vec<_> = indices.iter().copied().collect();
match rln.0.atomic_operation(index, leaves_vec, indices_vec) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_seq_atomic_operation(
rln: &mut repr_c::Box<FFI_RLN>,
leaves: &repr_c::Vec<CFr>,
indices: &repr_c::Vec<u8>,
) -> CBoolResult {
let index = rln.0.leaves_set();
let leaves_vec: Vec<_> = leaves.iter().map(|cfr| cfr.0).collect();
let indices_vec: Vec<_> = indices.iter().map(|x| *x as usize).collect();
match rln.0.atomic_operation(index, leaves_vec, indices_vec) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
// Root and proof operations
#[ffi_export]
pub fn ffi_get_root(rln: &repr_c::Box<FFI_RLN>) -> repr_c::Box<CFr> {
CFr::from(rln.0.get_root()).into()
}
#[ffi_export]
pub fn ffi_get_merkle_proof(
rln: &repr_c::Box<FFI_RLN>,
index: usize,
) -> CResult<repr_c::Box<FFI_MerkleProof>, repr_c::String> {
match rln.0.get_merkle_proof(index) {
Ok((path_elements, path_index)) => {
let path_elements: repr_c::Vec<CFr> = path_elements
.iter()
.map(|fr| CFr::from(*fr))
.collect::<Vec<_>>()
.into();
let path_index: repr_c::Vec<u8> = path_index.into();
let merkle_proof = FFI_MerkleProof {
path_elements,
path_index,
};
CResult {
ok: Some(Box_::new(merkle_proof)),
err: None,
}
}
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
// Persistent metadata APIs
#[ffi_export]
pub fn ffi_set_metadata(rln: &mut repr_c::Box<FFI_RLN>, metadata: &repr_c::Vec<u8>) -> CBoolResult {
match rln.0.set_metadata(metadata) {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_get_metadata(rln: &repr_c::Box<FFI_RLN>) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match rln.0.get_metadata() {
Ok(metadata) => CResult {
ok: Some(metadata.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_flush(rln: &mut repr_c::Box<FFI_RLN>) -> CBoolResult {
match rln.0.flush() {
Ok(_) => CBoolResult {
ok: true,
err: None,
},
Err(err) => CBoolResult {
ok: false,
err: Some(err.to_string().into()),
},
}
}
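Hedged end-to-end sketch of the tree APIs in this file, assuming a handle already obtained from ffi_rln_new (non-stateless build) and ffi_cfr_one/ffi_set_next_leaf in scope from the sibling modules:

fn insert_and_prove(rln: &mut repr_c::Box<FFI_RLN>) {
    let leaf = ffi_cfr_one();                    // Fr(1) as an owned CFr
    assert!(ffi_set_next_leaf(rln, &leaf).ok);   // append at the first free index
    let _root = ffi_get_root(&*rln);             // current Merkle root
    let proof = ffi_get_merkle_proof(&*rln, 0);  // path elements + path indices
    assert!(proof.ok.is_some());
}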

rln/src/ffi/ffi_utils.rs (new file, 407 lines)

@@ -0,0 +1,407 @@
#![allow(non_camel_case_types)]
use std::ops::Deref;
use safer_ffi::{
boxed::Box_,
derive_ReprC, ffi_export,
prelude::{repr_c, ReprC},
};
use crate::prelude::*;
// CResult
#[derive_ReprC]
#[repr(C)]
pub struct CResult<T: ReprC, Err: ReprC> {
pub ok: Option<T>,
pub err: Option<Err>,
}
// CBoolResult
#[derive_ReprC]
#[repr(C)]
pub struct CBoolResult {
pub ok: bool,
pub err: Option<repr_c::String>,
}
// CFr
#[derive_ReprC]
#[repr(opaque)]
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct CFr(pub(crate) Fr);
impl Deref for CFr {
type Target = Fr;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<Fr> for CFr {
fn from(fr: Fr) -> Self {
Self(fr)
}
}
impl From<CFr> for repr_c::Box<CFr> {
fn from(cfr: CFr) -> Self {
Box_::new(cfr)
}
}
impl From<&CFr> for repr_c::Box<CFr> {
fn from(cfr: &CFr) -> Self {
CFr(cfr.0).into()
}
}
impl PartialEq<Fr> for CFr {
fn eq(&self, other: &Fr) -> bool {
self.0 == *other
}
}
#[ffi_export]
pub fn ffi_cfr_zero() -> repr_c::Box<CFr> {
CFr::from(Fr::from(0)).into()
}
#[ffi_export]
pub fn ffi_cfr_one() -> repr_c::Box<CFr> {
CFr::from(Fr::from(1)).into()
}
#[ffi_export]
pub fn ffi_cfr_to_bytes_le(cfr: &CFr) -> repr_c::Vec<u8> {
fr_to_bytes_le(&cfr.0).into()
}
#[ffi_export]
pub fn ffi_cfr_to_bytes_be(cfr: &CFr) -> repr_c::Vec<u8> {
fr_to_bytes_be(&cfr.0).into()
}
#[ffi_export]
pub fn ffi_bytes_le_to_cfr(bytes: &repr_c::Vec<u8>) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match bytes_le_to_fr(bytes) {
Ok((cfr, _)) => CResult {
ok: Some(CFr(cfr).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_cfr(bytes: &repr_c::Vec<u8>) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match bytes_be_to_fr(bytes) {
Ok((cfr, _)) => CResult {
ok: Some(CFr(cfr).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_uint_to_cfr(value: u32) -> repr_c::Box<CFr> {
CFr::from(Fr::from(value)).into()
}
#[ffi_export]
pub fn ffi_cfr_debug(cfr: Option<&CFr>) -> repr_c::String {
match cfr {
Some(cfr) => format!("{:?}", cfr.0).into(),
None => "None".into(),
}
}
#[ffi_export]
pub fn ffi_cfr_free(cfr: repr_c::Box<CFr>) {
drop(cfr);
}
// Vec<CFr>
#[ffi_export]
pub fn ffi_vec_cfr_new(capacity: usize) -> repr_c::Vec<CFr> {
Vec::with_capacity(capacity).into()
}
#[ffi_export]
pub fn ffi_vec_cfr_from_cfr(cfr: &CFr) -> repr_c::Vec<CFr> {
vec![*cfr].into()
}
#[ffi_export]
pub fn ffi_vec_cfr_push(v: &mut safer_ffi::Vec<CFr>, cfr: &CFr) {
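    // Move the Vec out of the FFI wrapper (leaving an empty one behind) so we can
    // use std Vec's capacity management, then move the grown Vec back into place.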
let mut new: Vec<CFr> = std::mem::replace(v, Vec::new().into()).into();
if new.len() == new.capacity() {
new.reserve_exact(1);
}
new.push(*cfr);
*v = new.into();
}
#[ffi_export]
pub fn ffi_vec_cfr_len(v: &repr_c::Vec<CFr>) -> usize {
v.len()
}
#[ffi_export]
pub fn ffi_vec_cfr_get(v: &repr_c::Vec<CFr>, i: usize) -> Option<&CFr> {
v.get(i)
}
#[ffi_export]
pub fn ffi_vec_cfr_to_bytes_le(vec: &repr_c::Vec<CFr>) -> repr_c::Vec<u8> {
let vec_fr: Vec<Fr> = vec.iter().map(|cfr| cfr.0).collect();
vec_fr_to_bytes_le(&vec_fr).into()
}
#[ffi_export]
pub fn ffi_vec_cfr_to_bytes_be(vec: &repr_c::Vec<CFr>) -> repr_c::Vec<u8> {
let vec_fr: Vec<Fr> = vec.iter().map(|cfr| cfr.0).collect();
vec_fr_to_bytes_be(&vec_fr).into()
}
#[ffi_export]
pub fn ffi_bytes_le_to_vec_cfr(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match bytes_le_to_vec_fr(bytes) {
Ok((vec_fr, _)) => {
let vec_cfr: Vec<CFr> = vec_fr.into_iter().map(CFr).collect();
CResult {
ok: Some(vec_cfr.into()),
err: None,
}
}
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_vec_cfr(
bytes: &repr_c::Vec<u8>,
) -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match bytes_be_to_vec_fr(bytes) {
Ok((vec_fr, _)) => {
let vec_cfr: Vec<CFr> = vec_fr.into_iter().map(CFr).collect();
CResult {
ok: Some(vec_cfr.into()),
err: None,
}
}
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_vec_cfr_debug(v: Option<&repr_c::Vec<CFr>>) -> repr_c::String {
match v {
Some(v) => {
let vec_fr: Vec<Fr> = v.iter().map(|cfr| cfr.0).collect();
format!("{:?}", vec_fr).into()
}
None => "None".into(),
}
}
#[ffi_export]
pub fn ffi_vec_cfr_free(v: repr_c::Vec<CFr>) {
drop(v);
}
// Vec<u8>
#[ffi_export]
pub fn ffi_vec_u8_to_bytes_le(vec: &repr_c::Vec<u8>) -> repr_c::Vec<u8> {
vec_u8_to_bytes_le(vec).into()
}
#[ffi_export]
pub fn ffi_vec_u8_to_bytes_be(vec: &repr_c::Vec<u8>) -> repr_c::Vec<u8> {
vec_u8_to_bytes_be(vec).into()
}
#[ffi_export]
pub fn ffi_bytes_le_to_vec_u8(bytes: &repr_c::Vec<u8>) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match bytes_le_to_vec_u8(bytes) {
Ok((vec, _)) => CResult {
ok: Some(vec.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_bytes_be_to_vec_u8(bytes: &repr_c::Vec<u8>) -> CResult<repr_c::Vec<u8>, repr_c::String> {
match bytes_be_to_vec_u8(bytes) {
Ok((vec, _)) => CResult {
ok: Some(vec.into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(err.to_string().into()),
},
}
}
#[ffi_export]
pub fn ffi_vec_u8_debug(v: Option<&repr_c::Vec<u8>>) -> repr_c::String {
match v {
Some(v) => format!("{:x?}", v.deref()).into(),
None => "None".into(),
}
}
#[ffi_export]
pub fn ffi_vec_u8_free(v: repr_c::Vec<u8>) {
drop(v);
}
// Utility APIs
#[ffi_export]
pub fn ffi_hash_to_field_le(input: &repr_c::Vec<u8>) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match hash_to_field_le(input) {
Ok(hash_result) => CResult {
ok: Some(CFr::from(hash_result).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_hash_to_field_be(input: &repr_c::Vec<u8>) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match hash_to_field_be(input) {
Ok(hash_result) => CResult {
ok: Some(CFr::from(hash_result).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_poseidon_hash_pair(a: &CFr, b: &CFr) -> CResult<repr_c::Box<CFr>, repr_c::String> {
match poseidon_hash(&[a.0, b.0]) {
Ok(hash_result) => CResult {
ok: Some(CFr::from(hash_result).into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_key_gen() -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match keygen() {
Ok((identity_secret, id_commitment)) => CResult {
ok: Some(vec![CFr(*identity_secret), CFr(id_commitment)].into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_seeded_key_gen(seed: &repr_c::Vec<u8>) -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match seeded_keygen(seed) {
Ok((identity_secret, id_commitment)) => CResult {
ok: Some(vec![CFr(identity_secret), CFr(id_commitment)].into()),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_extended_key_gen() -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match extended_keygen() {
Ok((identity_trapdoor, identity_nullifier, identity_secret, id_commitment)) => CResult {
ok: Some(
vec![
CFr(identity_trapdoor),
CFr(identity_nullifier),
CFr(identity_secret),
CFr(id_commitment),
]
.into(),
),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_seeded_extended_key_gen(
seed: &repr_c::Vec<u8>,
) -> CResult<repr_c::Vec<CFr>, repr_c::String> {
match extended_seeded_keygen(seed) {
Ok((identity_trapdoor, identity_nullifier, identity_secret, id_commitment)) => CResult {
ok: Some(
vec![
CFr(identity_trapdoor),
CFr(identity_nullifier),
CFr(identity_secret),
CFr(id_commitment),
]
.into(),
),
err: None,
},
Err(err) => CResult {
ok: None,
err: Some(format!("{:?}", err).into()),
},
}
}
#[ffi_export]
pub fn ffi_c_string_free(s: repr_c::String) {
drop(s);
}
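The keygen family returns its tuple flattened into a Vec<CFr>. A sketch of consuming it from the Rust side (a C caller sees the same ownership rules through rln.h):
fn ffi_keygen_example() {
    // On success, ffi_key_gen returns [identity_secret, id_commitment].
    let result = ffi_key_gen();
    if let Some(keys) = result.ok {
        assert_eq!(ffi_vec_cfr_len(&keys), 2);
        // Index 0: identity secret; index 1: its Poseidon commitment.
        assert!(ffi_vec_cfr_get(&keys, 1).is_some());
        ffi_vec_cfr_free(keys);
    } else if let Some(err) = result.err {
        ffi_c_string_free(err);
    }
}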

rln/src/ffi/mod.rs (new file, 10 lines)

@@ -0,0 +1,10 @@
#![cfg(not(target_arch = "wasm32"))]
pub mod ffi_rln;
pub mod ffi_tree;
pub mod ffi_utils;
#[cfg(feature = "headers")]
pub fn generate_headers() -> std::io::Result<()> {
safer_ffi::headers::builder().to_file("rln.h")?.generate()
}
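With the headers feature enabled, generate_headers() writes the C header for all #[ffi_export] items. A sketch of a generator entry point (the crate name rln and the binary location are assumptions, not part of this diff):
// e.g. a small bin target compiled with `--features headers`
fn main() -> std::io::Result<()> {
    // Writes rln.h into the current working directory.
    rln::ffi::generate_headers()
}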

(modified file; path not shown)

@@ -1,13 +1,19 @@
/// This crate instantiates the Poseidon hash algorithm.
use crate::{circuit::Fr, utils::bytes_le_to_fr};
// This crate instantiates the Poseidon hash algorithm.
use once_cell::sync::Lazy;
use tiny_keccak::{Hasher, Keccak};
use utils::poseidon::Poseidon;
use zerokit_utils::{error::HashError, poseidon::Poseidon};
use crate::{
circuit::Fr,
error::UtilsError,
utils::{bytes_be_to_fr, bytes_le_to_fr},
};
/// These indexed constants hardcode the supported round parameters tuples (t, RF, RN, SKIP_MATRICES) for the Bn254 scalar field.
/// SKIP_MATRICES is the index of the randomly generated secure MDS matrix.
/// TODO: generate these parameters
pub const ROUND_PARAMS: [(usize, usize, usize, usize); 8] = [
const ROUND_PARAMS: [(usize, usize, usize, usize); 8] = [
(2, 8, 56, 0),
(3, 8, 57, 0),
(4, 8, 56, 0),
@@ -21,10 +27,9 @@ pub const ROUND_PARAMS: [(usize, usize, usize, usize); 8] = [
/// Poseidon Hash wrapper over above implementation.
static POSEIDON: Lazy<Poseidon<Fr>> = Lazy::new(|| Poseidon::<Fr>::from(&ROUND_PARAMS));
pub fn poseidon_hash(input: &[Fr]) -> Fr {
POSEIDON
.hash(input)
.expect("hash with fixed input size can't fail")
pub fn poseidon_hash(input: &[Fr]) -> Result<Fr, HashError> {
let hash = POSEIDON.hash(input)?;
Ok(hash)
}
/// The zerokit RLN Merkle tree Hasher.
@@ -32,20 +37,21 @@ pub fn poseidon_hash(input: &[Fr]) -> Fr {
pub struct PoseidonHash;
/// The default Hasher trait used by Merkle tree implementation in utils.
impl utils::merkle_tree::Hasher for PoseidonHash {
impl zerokit_utils::merkle_tree::Hasher for PoseidonHash {
type Fr = Fr;
type Error = HashError;
fn default_leaf() -> Self::Fr {
Self::Fr::from(0)
}
fn hash(inputs: &[Self::Fr]) -> Self::Fr {
fn hash(inputs: &[Self::Fr]) -> Result<Self::Fr, Self::Error> {
poseidon_hash(inputs)
}
}
/// Hashes arbitrary signal to the underlying prime field.
pub fn hash_to_field(signal: &[u8]) -> Fr {
pub fn hash_to_field_le(signal: &[u8]) -> Result<Fr, UtilsError> {
// We hash the input signal using Keccak256
let mut hash = [0; 32];
let mut hasher = Keccak::v256();
@@ -53,6 +59,24 @@ pub fn hash_to_field(signal: &[u8]) -> Fr {
hasher.finalize(&mut hash);
// We export the hash as a field element
let (el, _) = bytes_le_to_fr(hash.as_ref());
el
let (el, _) = bytes_le_to_fr(hash.as_ref())?;
Ok(el)
}
/// Hashes arbitrary signal to the underlying prime field.
pub fn hash_to_field_be(signal: &[u8]) -> Result<Fr, UtilsError> {
// We hash the input signal using Keccak256
let mut hash = [0; 32];
let mut hasher = Keccak::v256();
hasher.update(signal);
hasher.finalize(&mut hash);
// Reverse the bytes to get big endian representation
hash.reverse();
// We export the hash as a field element
let (el, _) = bytes_be_to_fr(hash.as_ref())?;
Ok(el)
}
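Taken together, the hashers module is now fallible end to end. A minimal sketch of composing the new APIs (assuming UtilsError and HashError implement std::error::Error, as their thiserror-style definitions in the error module suggest):
use rln::hashers::{hash_to_field_be, hash_to_field_le, poseidon_hash};

fn hash_example() -> Result<(), Box<dyn std::error::Error>> {
    // Same signal, two interpretations of the Keccak digest:
    let x_le = hash_to_field_le(b"signal")?;
    let x_be = hash_to_field_be(b"signal")?;
    // poseidon_hash now surfaces parameter errors instead of panicking.
    let _h = poseidon_hash(&[x_le, x_be])?;
    Ok(())
}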

(modified file; path not shown)

@@ -1,12 +1,31 @@
pub mod circuit;
#[cfg(not(target_arch = "wasm32"))]
pub mod error;
pub mod ffi;
pub mod hashers;
#[cfg(feature = "pmtree-ft")]
pub mod pm_tree_adapter;
pub mod poseidon_tree;
pub mod prelude;
pub mod protocol;
pub mod public;
#[cfg(test)]
pub mod public_api_tests;
pub mod utils;
// Ensure that only one Merkle tree feature is enabled at a time
#[cfg(any(
all(feature = "fullmerkletree", feature = "optimalmerkletree"),
all(feature = "fullmerkletree", feature = "pmtree-ft"),
all(feature = "optimalmerkletree", feature = "pmtree-ft"),
))]
compile_error!(
"Only one of `fullmerkletree`, `optimalmerkletree`, or `pmtree-ft` can be enabled at a time."
);
// Ensure that the `stateless` feature is not enabled with any Merkle tree features
#[cfg(all(
feature = "stateless",
any(
feature = "fullmerkletree",
feature = "optimalmerkletree",
feature = "pmtree-ft"
)
))]
compile_error!("Cannot enable any Merkle tree features with stateless");

(modified file; path not shown)

@@ -1,17 +1,24 @@
use std::fmt::Debug;
use std::path::PathBuf;
use std::str::FromStr;
#![cfg(feature = "pmtree-ft")]
use std::{fmt::Debug, path::PathBuf, str::FromStr};
use color_eyre::{Report, Result};
use serde_json::Value;
use tempfile::Builder;
use zerokit_utils::{
error::{FromConfigError, ZerokitMerkleTreeError},
merkle_tree::{ZerokitMerkleProof, ZerokitMerkleTree},
pm_tree::{
pmtree,
pmtree::{tree::Key, Database, Hasher, PmtreeErrorKind},
Config, Mode, SledDB,
},
};
use utils::pmtree::tree::Key;
use utils::pmtree::{Database, Hasher};
use utils::*;
use crate::circuit::Fr;
use crate::hashers::{poseidon_hash, PoseidonHash};
use crate::utils::{bytes_le_to_fr, fr_to_bytes_le};
use crate::{
circuit::Fr,
hashers::{poseidon_hash, PoseidonHash},
utils::{bytes_le_to_fr, fr_to_bytes_le},
};
const METADATA_KEY: [u8; 8] = *b"metadata";
@@ -39,7 +46,8 @@ impl Hasher for PoseidonHash {
}
fn deserialize(value: pmtree::Value) -> Self::Fr {
let (fr, _) = bytes_le_to_fr(&value);
// TODO: allow errors to be handled properly in the pmtree Hasher trait
let (fr, _) = bytes_le_to_fr(&value).expect("Fr deserialization must be valid");
fr
}
@@ -48,24 +56,114 @@ impl Hasher for PoseidonHash {
}
fn hash(inputs: &[Self::Fr]) -> Self::Fr {
poseidon_hash(inputs)
// TODO: allow errors to be handled properly in the pmtree Hasher trait
poseidon_hash(inputs).expect("Poseidon hash must be valid")
}
}
fn get_tmp_path() -> PathBuf {
std::env::temp_dir().join(format!("pmtree-{}", rand::random::<u64>()))
fn default_tmp_path() -> Result<PathBuf, std::io::Error> {
Ok(Builder::new()
.prefix("pmtree-")
.tempfile()?
.into_temp_path()
.to_path_buf())
}
fn get_tmp() -> bool {
true
const DEFAULT_TEMPORARY: bool = true;
const DEFAULT_CACHE_CAPACITY: u64 = 1073741824; // 1 Gigabyte
const DEFAULT_FLUSH_EVERY_MS: u64 = 500; // 500 Milliseconds
const DEFAULT_MODE: Mode = Mode::HighThroughput;
const DEFAULT_USE_COMPRESSION: bool = false;
pub struct PmtreeConfigBuilder {
path: Option<PathBuf>,
temporary: bool,
cache_capacity: u64,
flush_every_ms: u64,
mode: Mode,
use_compression: bool,
}
impl Default for PmtreeConfigBuilder {
fn default() -> Self {
Self::new()
}
}
impl PmtreeConfigBuilder {
pub fn new() -> Self {
PmtreeConfigBuilder {
path: None,
temporary: DEFAULT_TEMPORARY,
cache_capacity: DEFAULT_CACHE_CAPACITY,
flush_every_ms: DEFAULT_FLUSH_EVERY_MS,
mode: DEFAULT_MODE,
use_compression: DEFAULT_USE_COMPRESSION,
}
}
pub fn path<P: Into<PathBuf>>(mut self, path: P) -> Self {
self.path = Some(path.into());
self
}
pub fn temporary(mut self, temporary: bool) -> Self {
self.temporary = temporary;
self
}
pub fn cache_capacity(mut self, capacity: u64) -> Self {
self.cache_capacity = capacity;
self
}
pub fn flush_every_ms(mut self, ms: u64) -> Self {
self.flush_every_ms = ms;
self
}
pub fn mode(mut self, mode: Mode) -> Self {
self.mode = mode;
self
}
pub fn use_compression(mut self, compression: bool) -> Self {
self.use_compression = compression;
self
}
pub fn build(self) -> Result<PmtreeConfig, FromConfigError> {
let path = match (self.temporary, self.path) {
(true, None) => default_tmp_path()?,
(false, None) => return Err(FromConfigError::MissingPath),
(true, Some(path)) if path.exists() => return Err(FromConfigError::PathExists),
(_, Some(path)) => path,
};
let config = Config::new()
.temporary(self.temporary)
.path(path)
.cache_capacity(self.cache_capacity)
.flush_every_ms(Some(self.flush_every_ms))
.mode(self.mode)
.use_compression(self.use_compression);
Ok(PmtreeConfig(config))
}
}
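The match in build() encodes three rules: temporary with no path gets a fresh temp location, persistent with no path is an error, and temporary over an existing path is rejected. A sketch of the resulting behavior:
fn config_examples() {
    // Temporary tree with an auto-generated path:
    let _tmp = PmtreeConfig::builder().build().unwrap();
    // Persistent tree requires an explicit path:
    let _db = PmtreeConfig::builder()
        .temporary(false)
        .path("my-tree-db")
        .build()
        .unwrap();
    // Persistent with no path fails with FromConfigError::MissingPath:
    assert!(PmtreeConfig::builder().temporary(false).build().is_err());
}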
pub struct PmtreeConfig(Config);
impl FromStr for PmtreeConfig {
type Err = Report;
impl PmtreeConfig {
pub fn builder() -> PmtreeConfigBuilder {
PmtreeConfigBuilder::new()
}
}
fn from_str(s: &str) -> Result<Self> {
impl FromStr for PmtreeConfig {
type Err = FromConfigError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let config: Value = serde_json::from_str(s)?;
let path = config["path"].as_str();
@@ -80,21 +178,17 @@ impl FromStr for PmtreeConfig {
};
let use_compression = config["use_compression"].as_bool();
if temporary.is_some()
&& path.is_some()
&& temporary.unwrap()
&& path.as_ref().unwrap().exists()
{
return Err(Report::msg(format!(
"Path {:?} already exists, cannot use temporary",
path.unwrap()
)));
if let (Some(true), Some(path)) = (temporary, path.as_ref()) {
if path.exists() {
return Err(FromConfigError::PathExists);
}
}
let default_tmp_path = default_tmp_path()?;
let config = Config::new()
.temporary(temporary.unwrap_or(get_tmp()))
.path(path.unwrap_or(get_tmp_path()))
.cache_capacity(cache_capacity.unwrap_or(1024 * 1024 * 1024))
.temporary(temporary.unwrap_or(DEFAULT_TEMPORARY))
.path(path.unwrap_or(default_tmp_path))
.cache_capacity(cache_capacity.unwrap_or(DEFAULT_CACHE_CAPACITY))
.flush_every_ms(flush_every_ms)
.mode(mode)
.use_compression(use_compression.unwrap_or(false));
@@ -104,18 +198,12 @@ impl FromStr for PmtreeConfig {
impl Default for PmtreeConfig {
fn default() -> Self {
let tmp_path = get_tmp_path();
PmtreeConfig(
Config::new()
.temporary(true)
.path(tmp_path)
.cache_capacity(150_000)
.mode(Mode::HighThroughput)
.use_compression(false)
.flush_every_ms(Some(12_000)),
)
Self::builder()
.build()
.expect("Default PmtreeConfig must be valid")
}
}
impl Debug for PmtreeConfig {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.0.fmt(f)
@@ -133,12 +221,16 @@ impl ZerokitMerkleTree for PmTree {
type Hasher = PoseidonHash;
type Config = PmtreeConfig;
fn default(depth: usize) -> Result<Self> {
fn default(depth: usize) -> Result<Self, ZerokitMerkleTreeError> {
let default_config = PmtreeConfig::default();
PmTree::new(depth, Self::Hasher::default_leaf(), default_config)
}
fn new(depth: usize, _default_leaf: FrOf<Self::Hasher>, config: Self::Config) -> Result<Self> {
fn new(
depth: usize,
_default_leaf: FrOf<Self::Hasher>,
config: Self::Config,
) -> Result<Self, ZerokitMerkleTreeError> {
let tree_loaded = pmtree::MerkleTree::load(config.clone().0);
let tree = match tree_loaded {
Ok(tree) => tree,
@@ -168,14 +260,12 @@ impl ZerokitMerkleTree for PmTree {
self.tree.root()
}
fn compute_root(&mut self) -> Result<FrOf<Self::Hasher>> {
Ok(self.tree.root())
}
fn set(&mut self, index: usize, leaf: FrOf<Self::Hasher>) -> Result<()> {
self.tree
.set(index, leaf)
.map_err(|e| Report::msg(e.to_string()))?;
fn set(
&mut self,
index: usize,
leaf: FrOf<Self::Hasher>,
) -> Result<(), ZerokitMerkleTreeError> {
self.tree.set(index, leaf)?;
self.cached_leaves_indices[index] = 1;
Ok(())
}
@@ -184,38 +274,41 @@ impl ZerokitMerkleTree for PmTree {
&mut self,
start: usize,
values: I,
) -> Result<()> {
) -> Result<(), ZerokitMerkleTreeError> {
let v = values.into_iter().collect::<Vec<_>>();
self.tree
.set_range(start, v.clone().into_iter())
.map_err(|e| Report::msg(e.to_string()))?;
self.tree.set_range(start, v.clone().into_iter())?;
for i in start..start + v.len() {
self.cached_leaves_indices[i] = 1
}
Ok(())
}
fn get(&self, index: usize) -> Result<FrOf<Self::Hasher>> {
self.tree.get(index).map_err(|e| Report::msg(e.to_string()))
fn get(&self, index: usize) -> Result<FrOf<Self::Hasher>, ZerokitMerkleTreeError> {
self.tree
.get(index)
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind)
}
fn get_subtree_root(&self, n: usize, index: usize) -> Result<FrOf<Self::Hasher>> {
fn get_subtree_root(
&self,
n: usize,
index: usize,
) -> Result<FrOf<Self::Hasher>, ZerokitMerkleTreeError> {
if n > self.depth() {
return Err(Report::msg("level exceeds depth size"));
return Err(ZerokitMerkleTreeError::InvalidLevel);
}
if index >= self.capacity() {
return Err(Report::msg("index exceeds set size"));
return Err(ZerokitMerkleTreeError::InvalidLeaf);
}
if n == 0 {
Ok(self.root())
} else if n == self.depth() {
self.get(index)
} else {
let node = self
.tree
.get_elem(Key::new(n, index >> (self.depth() - n)))
.unwrap();
Ok(node)
match self.tree.get_elem(Key::new(n, index >> (self.depth() - n))) {
Ok(value) => Ok(value),
Err(_) => Err(ZerokitMerkleTreeError::InvalidSubTreeIndex),
}
}
}
@@ -235,70 +328,86 @@ impl ZerokitMerkleTree for PmTree {
start: usize,
leaves: I,
indices: J,
) -> Result<()> {
) -> Result<(), ZerokitMerkleTreeError> {
let leaves = leaves.into_iter().collect::<Vec<_>>();
let mut indices = indices.into_iter().collect::<Vec<_>>();
indices.sort();
match (leaves.len(), indices.len()) {
(0, 0) => Err(Report::msg("no leaves or indices to be removed")),
(0, 0) => Err(ZerokitMerkleTreeError::InvalidLeaf),
(1, 0) => self.set(start, leaves[0]),
(0, 1) => self.delete(indices[0]),
(_, 0) => self.set_range(start, leaves.into_iter()),
(0, _) => self.remove_indices(&indices),
(_, _) => self.remove_indices_and_set_leaves(start, leaves, &indices),
(0, _) => self
.remove_indices(&indices)
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind),
(_, _) => self
.remove_indices_and_set_leaves(start, leaves, &indices)
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind),
}
}
fn update_next(&mut self, leaf: FrOf<Self::Hasher>) -> Result<()> {
fn update_next(&mut self, leaf: FrOf<Self::Hasher>) -> Result<(), ZerokitMerkleTreeError> {
self.tree
.update_next(leaf)
.map_err(|e| Report::msg(e.to_string()))
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind)
}
fn delete(&mut self, index: usize) -> Result<()> {
/// Deletes a leaf in the Merkle tree given its index.
///
/// Deleting a leaf resets it to its default value. Note that the next_index field
/// is not changed (a previously used index cannot be reused); this avoids replay
/// attacks and other unexpected, hard-to-diagnose issues.
fn delete(&mut self, index: usize) -> Result<(), ZerokitMerkleTreeError> {
self.tree
.delete(index)
.map_err(|e| Report::msg(e.to_string()))?;
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind)?;
self.cached_leaves_indices[index] = 0;
Ok(())
}
fn proof(&self, index: usize) -> Result<Self::Proof> {
fn proof(&self, index: usize) -> Result<Self::Proof, ZerokitMerkleTreeError> {
let proof = self.tree.proof(index)?;
Ok(PmTreeProof { proof })
}
fn verify(&self, leaf: &FrOf<Self::Hasher>, witness: &Self::Proof) -> Result<bool> {
if self.tree.verify(leaf, &witness.proof) {
fn verify(
&self,
leaf: &FrOf<Self::Hasher>,
merkle_proof: &Self::Proof,
) -> Result<bool, ZerokitMerkleTreeError> {
if self.tree.verify(leaf, &merkle_proof.proof) {
Ok(true)
} else {
Err(Report::msg("verify failed"))
Err(ZerokitMerkleTreeError::InvalidMerkleProof)
}
}
fn set_metadata(&mut self, metadata: &[u8]) -> Result<()> {
self.tree.db.put(METADATA_KEY, metadata.to_vec())?;
fn set_metadata(&mut self, metadata: &[u8]) -> Result<(), ZerokitMerkleTreeError> {
self.tree
.db
.put(METADATA_KEY, metadata.to_vec())
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind)?;
self.metadata = metadata.to_vec();
Ok(())
}
fn metadata(&self) -> Result<Vec<u8>> {
fn metadata(&self) -> Result<Vec<u8>, ZerokitMerkleTreeError> {
if !self.metadata.is_empty() {
return Ok(self.metadata.clone());
}
// if empty, try searching the db
let data = self.tree.db.get(METADATA_KEY)?;
if data.is_none() {
// send empty Metadata
return Ok(Vec::new());
}
Ok(data.unwrap())
// Return empty metadata if not found, otherwise return the data
Ok(data.unwrap_or_default())
}
fn close_db_connection(&mut self) -> Result<()> {
self.tree.db.close().map_err(|e| Report::msg(e.to_string()))
fn close_db_connection(&mut self) -> Result<(), ZerokitMerkleTreeError> {
self.tree
.db
.close()
.map_err(ZerokitMerkleTreeError::PmtreeErrorKind)
}
}
@@ -306,15 +415,18 @@ type PmTreeHasher = <PmTree as ZerokitMerkleTree>::Hasher;
type FrOfPmTreeHasher = FrOf<PmTreeHasher>;
impl PmTree {
fn remove_indices(&mut self, indices: &[usize]) -> Result<()> {
fn remove_indices(&mut self, indices: &[usize]) -> Result<(), PmtreeErrorKind> {
if indices.is_empty() {
return Err(PmtreeErrorKind::TreeError(
pmtree::TreeErrorKind::InvalidKey,
));
}
let start = indices[0];
let end = indices.last().unwrap() + 1;
let end = indices[indices.len() - 1] + 1;
let new_leaves = (start..end).map(|_| PmTreeHasher::default_leaf());
self.tree
.set_range(start, new_leaves)
.map_err(|e| Report::msg(e.to_string()))?;
self.tree.set_range(start, new_leaves)?;
for i in start..end {
self.cached_leaves_indices[i] = 0
@@ -327,8 +439,13 @@ impl PmTree {
start: usize,
leaves: Vec<FrOfPmTreeHasher>,
indices: &[usize],
) -> Result<()> {
let min_index = *indices.first().unwrap();
) -> Result<(), PmtreeErrorKind> {
if indices.is_empty() {
return Err(PmtreeErrorKind::TreeError(
pmtree::TreeErrorKind::InvalidKey,
));
}
let min_index = indices[0];
let max_index = start + leaves.len();
let mut set_values = vec![PmTreeHasher::default_leaf(); max_index - min_index];
@@ -344,9 +461,7 @@ impl PmTree {
set_values[start - min_index + i] = leaf;
}
self.tree
.set_range(start, set_values)
.map_err(|e| Report::msg(e.to_string()))?;
self.tree.set_range(start, set_values)?;
for i in indices {
self.cached_leaves_indices[*i] = 0;
@@ -378,7 +493,40 @@ impl ZerokitMerkleProof for PmTreeProof {
fn get_path_index(&self) -> Vec<Self::Index> {
self.proof.get_path_index()
}
fn compute_root_from(&self, leaf: &FrOf<Self::Hasher>) -> FrOf<Self::Hasher> {
self.proof.compute_root_from(leaf)
fn compute_root_from(
&self,
leaf: &FrOf<Self::Hasher>,
) -> Result<FrOf<Self::Hasher>, ZerokitMerkleTreeError> {
Ok(self.proof.compute_root_from(leaf))
}
}
#[cfg(test)]
mod test {
use super::*;
#[test]
fn test_pmtree_json_config() {
let json = r#"
{
"path": "pmtree-123456",
"temporary": false,
"cache_capacity": 1073741824,
"flush_every_ms": 500,
"mode": "HighThroughput",
"use_compression": false
}"#;
let _: PmtreeConfig = json.parse().unwrap();
let _ = PmtreeConfig::builder()
.path(default_tmp_path().unwrap())
.temporary(DEFAULT_TEMPORARY)
.cache_capacity(DEFAULT_CACHE_CAPACITY)
.mode(DEFAULT_MODE)
.use_compression(DEFAULT_USE_COMPRESSION)
.build()
.unwrap();
}
}

(modified file; path not shown)

@@ -1,30 +1,32 @@
// This crate defines the RLN module default Merkle tree implementation and its Hasher
// Implementation inspired by https://github.com/worldcoin/semaphore-rs/blob/d462a4372f1fd9c27610f2acfe4841fab1d396aa/src/poseidon_tree.rs
// Implementation inspired by https://github.com/worldcoin/semaphore-rs/blob/d462a4372f1fd9c27610f2acfe4841fab1d396aa/src/poseidon_tree.rs (no differences)
#![cfg(not(feature = "stateless"))]
use cfg_if::cfg_if;
cfg_if! {
if #[cfg(feature = "pmtree-ft")] {
use crate::pm_tree_adapter::*;
} else {
use crate::hashers::{PoseidonHash};
use utils::merkle_tree::*;
}
}
// The zerokit RLN default Merkle tree implementation is the OptimalMerkleTree.
// To switch to FullMerkleTree implementation, it is enough to enable the fullmerkletree feature
// The zerokit RLN default Merkle tree implementation is the PMTree from the vacp2p_pmtree crate
// To switch to FullMerkleTree or OptimalMerkleTree, enable the corresponding feature in the Cargo.toml file
cfg_if! {
if #[cfg(feature = "fullmerkletree")] {
use zerokit_utils::merkle_tree::{FullMerkleTree, FullMerkleProof};
use crate::hashers::PoseidonHash;
pub type PoseidonTree = FullMerkleTree<PoseidonHash>;
pub type MerkleProof = FullMerkleProof<PoseidonHash>;
} else if #[cfg(feature = "optimalmerkletree")] {
use zerokit_utils::merkle_tree::{OptimalMerkleTree, OptimalMerkleProof};
use crate::hashers::PoseidonHash;
pub type PoseidonTree = OptimalMerkleTree<PoseidonHash>;
pub type MerkleProof = OptimalMerkleProof<PoseidonHash>;
} else if #[cfg(feature = "pmtree-ft")] {
use crate::pm_tree_adapter::{PmTree, PmTreeProof};
pub type PoseidonTree = PmTree;
pub type MerkleProof = PmTreeProof;
} else {
pub type PoseidonTree = OptimalMerkleTree<PoseidonHash>;
pub type MerkleProof = OptimalMerkleProof<PoseidonHash>;
compile_error!("One of the features `fullmerkletree`, `optimalmerkletree`, or `pmtree-ft` must be enabled.");
}
}
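Whichever backend the features select, the aliases expose the same ZerokitMerkleTree surface. A sketch of the common flow for a tree-enabled (non-stateless) build, with trait and error paths as used in the adapter above:
use rln::{circuit::Fr, poseidon_tree::PoseidonTree};
use zerokit_utils::{error::ZerokitMerkleTreeError, merkle_tree::ZerokitMerkleTree};

fn tree_example() -> Result<(), ZerokitMerkleTreeError> {
    // Depth-20 tree populated with default leaves.
    let mut tree = PoseidonTree::default(20)?;
    let leaf = Fr::from(42u64);
    tree.set(0, leaf)?;
    let proof = tree.proof(0)?;
    // With the pmtree backend shown above, verify() errors on an
    // invalid proof rather than returning Ok(false).
    assert!(tree.verify(&leaf, &proof)?);
    Ok(())
}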

rln/src/prelude.rs (new file, 37 lines)

@@ -0,0 +1,37 @@
// This module re-exports the most commonly used types and functions from the RLN library
#[cfg(not(target_arch = "wasm32"))]
pub use crate::circuit::{graph_from_folder, zkey_from_folder};
#[cfg(feature = "pmtree-ft")]
pub use crate::pm_tree_adapter::{FrOf, PmTree, PmTreeProof, PmtreeConfig, PmtreeConfigBuilder};
#[cfg(not(feature = "stateless"))]
pub use crate::poseidon_tree::{MerkleProof, PoseidonTree};
#[cfg(not(feature = "stateless"))]
pub use crate::protocol::compute_tree_root;
#[cfg(not(target_arch = "wasm32"))]
pub use crate::protocol::{generate_zk_proof, verify_zk_proof};
pub use crate::{
circuit::{
zkey_from_raw, Curve, Fq, Fq2, Fr, G1Affine, G1Projective, G2Affine, G2Projective, Proof,
VerifyingKey, Zkey, COMPRESS_PROOF_SIZE, DEFAULT_TREE_DEPTH,
},
error::{ProtocolError, RLNError, UtilsError, VerifyError},
hashers::{hash_to_field_be, hash_to_field_le, poseidon_hash, PoseidonHash},
protocol::{
bytes_be_to_rln_proof, bytes_be_to_rln_proof_values, bytes_be_to_rln_witness,
bytes_le_to_rln_proof, bytes_le_to_rln_proof_values, bytes_le_to_rln_witness,
extended_keygen, extended_seeded_keygen, generate_zk_proof_with_witness, keygen,
proof_values_from_witness, recover_id_secret, rln_proof_to_bytes_be, rln_proof_to_bytes_le,
rln_proof_values_to_bytes_be, rln_proof_values_to_bytes_le, rln_witness_to_bigint_json,
rln_witness_to_bytes_be, rln_witness_to_bytes_le, seeded_keygen, RLNProof, RLNProofValues,
RLNWitnessInput,
},
public::RLN,
utils::{
bytes_be_to_fr, bytes_be_to_vec_fr, bytes_be_to_vec_u8, bytes_be_to_vec_usize,
bytes_le_to_fr, bytes_le_to_vec_fr, bytes_le_to_vec_u8, bytes_le_to_vec_usize,
fr_to_bytes_be, fr_to_bytes_le, normalize_usize_be, normalize_usize_le, str_to_fr,
to_bigint, vec_fr_to_bytes_be, vec_fr_to_bytes_le, vec_u8_to_bytes_be, vec_u8_to_bytes_le,
IdSecret, FR_BYTE_SIZE,
},
};
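The prelude collapses these module paths into a single import; a typical consumer sketch:
use rln::prelude::*;

fn prelude_example() {
    // One import covers identities, hashing, trees, and the serialization helpers.
    let (_identity_secret, id_commitment) = keygen().expect("keygen");
    let _ = poseidon_hash(&[id_commitment]).expect("poseidon");
}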

(deleted file; path not shown)

@@ -1,795 +0,0 @@
// This crate collects all the underlying primitives used to implement RLN
use ark_bn254::Fr;
use ark_groth16::{prepare_verifying_key, Groth16, Proof as ArkProof, ProvingKey, VerifyingKey};
use ark_relations::r1cs::{ConstraintMatrices, SynthesisError};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};
use ark_std::{rand::thread_rng, UniformRand};
use color_eyre::{Report, Result};
use num_bigint::BigInt;
use rand::{Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use serde::{Deserialize, Serialize};
#[cfg(test)]
use std::time::Instant;
use thiserror::Error;
use tiny_keccak::{Hasher as _, Keccak};
use crate::circuit::{calculate_rln_witness, qap::CircomReduction, Curve};
use crate::hashers::{hash_to_field, poseidon_hash};
use crate::poseidon_tree::*;
use crate::public::RLN_IDENTIFIER;
use crate::utils::*;
use utils::{ZerokitMerkleProof, ZerokitMerkleTree};
///////////////////////////////////////////////////////
// RLN Witness data structure and utility functions
///////////////////////////////////////////////////////
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct RLNWitnessInput {
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
identity_secret: Fr,
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
user_message_limit: Fr,
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
message_id: Fr,
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
path_elements: Vec<Fr>,
identity_path_index: Vec<u8>,
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
x: Fr,
#[serde(serialize_with = "ark_se", deserialize_with = "ark_de")]
external_nullifier: Fr,
}
#[derive(Debug, PartialEq)]
pub struct RLNProofValues {
// Public outputs:
pub y: Fr,
pub nullifier: Fr,
pub root: Fr,
// Public Inputs:
pub x: Fr,
pub external_nullifier: Fr,
}
pub fn serialize_field_element(element: Fr) -> Vec<u8> {
fr_to_bytes_le(&element)
}
pub fn deserialize_field_element(serialized: Vec<u8>) -> Fr {
let (element, _) = bytes_le_to_fr(&serialized);
element
}
pub fn deserialize_identity_pair(serialized: Vec<u8>) -> (Fr, Fr) {
let (identity_secret_hash, read) = bytes_le_to_fr(&serialized);
let (id_commitment, _) = bytes_le_to_fr(&serialized[read..]);
(identity_secret_hash, id_commitment)
}
pub fn deserialize_identity_tuple(serialized: Vec<u8>) -> (Fr, Fr, Fr, Fr) {
let mut all_read = 0;
let (identity_trapdoor, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (identity_nullifier, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (identity_secret_hash, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (identity_commitment, _) = bytes_le_to_fr(&serialized[all_read..]);
(
identity_trapdoor,
identity_nullifier,
identity_secret_hash,
identity_commitment,
)
}
/// Serializes witness
///
/// # Errors
///
/// Returns an error if `rln_witness.message_id` is not within `rln_witness.user_message_limit`.
/// input data is [ identity_secret<32> | user_message_limit<32> | message_id<32> | path_elements[<32>] | identity_path_index<8> | x<32> | external_nullifier<32> ]
pub fn serialize_witness(rln_witness: &RLNWitnessInput) -> Result<Vec<u8>> {
// Check if message_id is within user_message_limit
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
// Calculate capacity for Vec:
// - 5 fixed field elements: identity_secret, user_message_limit, message_id, x, external_nullifier
// - variable number of path elements
// - identity_path_index (variable size)
let mut serialized: Vec<u8> = Vec::with_capacity(
fr_byte_size() * (5 + rln_witness.path_elements.len())
+ rln_witness.identity_path_index.len(),
);
serialized.extend_from_slice(&fr_to_bytes_le(&rln_witness.identity_secret));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_witness.user_message_limit));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_witness.message_id));
serialized.extend_from_slice(&vec_fr_to_bytes_le(&rln_witness.path_elements)?);
serialized.extend_from_slice(&vec_u8_to_bytes_le(&rln_witness.identity_path_index)?);
serialized.extend_from_slice(&fr_to_bytes_le(&rln_witness.x));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_witness.external_nullifier));
Ok(serialized)
}
/// Deserializes witness
///
/// # Errors
///
/// Returns an error if `message_id` is not within `user_message_limit`.
pub fn deserialize_witness(serialized: &[u8]) -> Result<(RLNWitnessInput, usize)> {
let mut all_read: usize = 0;
let (identity_secret, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (user_message_limit, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (message_id, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
message_id_range_check(&message_id, &user_message_limit)?;
let (path_elements, read) = bytes_le_to_vec_fr(&serialized[all_read..])?;
all_read += read;
let (identity_path_index, read) = bytes_le_to_vec_u8(&serialized[all_read..])?;
all_read += read;
let (x, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (external_nullifier, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
if serialized.len() != all_read {
return Err(Report::msg("serialized length is not equal to all_read"));
}
Ok((
RLNWitnessInput {
identity_secret,
path_elements,
identity_path_index,
x,
external_nullifier,
user_message_limit,
message_id,
},
all_read,
))
}
// This function deserializes input for kilic's rln generate_proof public API
// https://github.com/kilic/rln/blob/7ac74183f8b69b399e3bc96c1ae8ab61c026dc43/src/public.rs#L148
// input_data is [ identity_secret<32> | id_index<8> | user_message_limit<32> | message_id<32> | external_nullifier<32> | signal_len<8> | signal<var> ]
// return value is a rln witness populated according to this information
pub fn proof_inputs_to_rln_witness(
tree: &mut PoseidonTree,
serialized: &[u8],
) -> Result<(RLNWitnessInput, usize)> {
let mut all_read: usize = 0;
let (identity_secret, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let id_index = usize::try_from(u64::from_le_bytes(
serialized[all_read..all_read + 8].try_into()?,
))?;
all_read += 8;
let (user_message_limit, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (message_id, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (external_nullifier, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let signal_len = usize::try_from(u64::from_le_bytes(
serialized[all_read..all_read + 8].try_into()?,
))?;
all_read += 8;
let signal: Vec<u8> = serialized[all_read..all_read + signal_len].to_vec();
let merkle_proof = tree.proof(id_index).expect("proof should exist");
let path_elements = merkle_proof.get_path_elements();
let identity_path_index = merkle_proof.get_path_index();
let x = hash_to_field(&signal);
Ok((
RLNWitnessInput {
identity_secret,
path_elements,
identity_path_index,
user_message_limit,
message_id,
x,
external_nullifier,
},
all_read,
))
}
/// Creates [`RLNWitnessInput`] from its fields.
///
/// # Errors
///
/// Returns an error if `message_id` is not within `user_message_limit`.
pub fn rln_witness_from_values(
identity_secret: Fr,
merkle_proof: &MerkleProof,
x: Fr,
external_nullifier: Fr,
user_message_limit: Fr,
message_id: Fr,
) -> Result<RLNWitnessInput> {
message_id_range_check(&message_id, &user_message_limit)?;
let path_elements = merkle_proof.get_path_elements();
let identity_path_index = merkle_proof.get_path_index();
Ok(RLNWitnessInput {
identity_secret,
path_elements,
identity_path_index,
x,
external_nullifier,
user_message_limit,
message_id,
})
}
pub fn random_rln_witness(tree_height: usize) -> RLNWitnessInput {
let mut rng = thread_rng();
let identity_secret = hash_to_field(&rng.gen::<[u8; 32]>());
let x = hash_to_field(&rng.gen::<[u8; 32]>());
let epoch = hash_to_field(&rng.gen::<[u8; 32]>());
let rln_identifier = hash_to_field(RLN_IDENTIFIER); //hash_to_field(&rng.gen::<[u8; 32]>());
let mut path_elements: Vec<Fr> = Vec::new();
let mut identity_path_index: Vec<u8> = Vec::new();
for _ in 0..tree_height {
path_elements.push(hash_to_field(&rng.gen::<[u8; 32]>()));
identity_path_index.push(rng.gen_range(0..2) as u8);
}
let user_message_limit = Fr::from(100);
let message_id = Fr::from(1);
RLNWitnessInput {
identity_secret,
path_elements,
identity_path_index,
x,
external_nullifier: poseidon_hash(&[epoch, rln_identifier]),
user_message_limit,
message_id,
}
}
pub fn proof_values_from_witness(rln_witness: &RLNWitnessInput) -> Result<RLNProofValues> {
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
// y share
let a_0 = rln_witness.identity_secret;
let a_1 = poseidon_hash(&[a_0, rln_witness.external_nullifier, rln_witness.message_id]);
let y = a_0 + rln_witness.x * a_1;
// Nullifier
let nullifier = poseidon_hash(&[a_1]);
// Merkle tree root computations
let root = compute_tree_root(
&rln_witness.identity_secret,
&rln_witness.user_message_limit,
&rln_witness.path_elements,
&rln_witness.identity_path_index,
);
Ok(RLNProofValues {
y,
nullifier,
root,
x: rln_witness.x,
external_nullifier: rln_witness.external_nullifier,
})
}
/// input_data is [ root<32> | external_nullifier<32> | x<32> | y<32> | nullifier<32> ]
pub fn serialize_proof_values(rln_proof_values: &RLNProofValues) -> Vec<u8> {
// Calculate capacity for Vec:
// 5 field elements: root, external_nullifier, x, y, nullifier
let mut serialized = Vec::with_capacity(fr_byte_size() * 5);
serialized.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.root));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.external_nullifier));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.x));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.y));
serialized.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.nullifier));
serialized
}
// Note: don't forget to skip the 128-byte ZK proof if `serialized` contains it.
// This function deserializes only the proof _values_, i.e. circuit outputs, not the ZK proof.
pub fn deserialize_proof_values(serialized: &[u8]) -> (RLNProofValues, usize) {
let mut all_read: usize = 0;
let (root, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (external_nullifier, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (x, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (y, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
let (nullifier, read) = bytes_le_to_fr(&serialized[all_read..]);
all_read += read;
(
RLNProofValues {
y,
nullifier,
root,
x,
external_nullifier,
},
all_read,
)
}
// input_data is [ identity_secret<32> | id_index<8> | user_message_limit<32> | message_id<32> | external_nullifier<32> | signal_len<8> | signal<var> ]
pub fn prepare_prove_input(
identity_secret: Fr,
id_index: usize,
user_message_limit: Fr,
message_id: Fr,
external_nullifier: Fr,
signal: &[u8],
) -> Vec<u8> {
// Calculate capacity for Vec:
// - 4 field elements: identity_secret, user_message_limit, message_id, external_nullifier
// - 16 bytes for two normalized usize values (id_index<8> + signal_len<8>)
// - variable length signal data
let mut serialized = Vec::with_capacity(fr_byte_size() * 4 + 16 + signal.len()); // length of 4 fr elements + 16 bytes (id_index + len) + signal length
serialized.extend_from_slice(&fr_to_bytes_le(&identity_secret));
serialized.extend_from_slice(&normalize_usize(id_index));
serialized.extend_from_slice(&fr_to_bytes_le(&user_message_limit));
serialized.extend_from_slice(&fr_to_bytes_le(&message_id));
serialized.extend_from_slice(&fr_to_bytes_le(&external_nullifier));
serialized.extend_from_slice(&normalize_usize(signal.len()));
serialized.extend_from_slice(signal);
serialized
}
// input_data is [ proof<128> | root<32> | external_nullifier<32> | x<32> | y<32> | nullifier<32> | signal_len<8> | signal<var> ]
pub fn prepare_verify_input(proof_data: Vec<u8>, signal: &[u8]) -> Vec<u8> {
// Calculate capacity for Vec:
// - proof_data contains the proof and proof values (proof<128> + root<32> + external_nullifier<32> + x<32> + y<32> + nullifier<32>)
// - 8 bytes for normalized signal length value (signal_len<8>)
// - variable length signal data
let mut serialized = Vec::with_capacity(proof_data.len() + 8 + signal.len());
serialized.extend(proof_data);
serialized.extend_from_slice(&normalize_usize(signal.len()));
serialized.extend_from_slice(signal);
serialized
}
///////////////////////////////////////////////////////
// Merkle tree utility functions
///////////////////////////////////////////////////////
pub fn compute_tree_root(
identity_secret: &Fr,
user_message_limit: &Fr,
path_elements: &[Fr],
identity_path_index: &[u8],
) -> Fr {
let id_commitment = poseidon_hash(&[*identity_secret]);
let mut root = poseidon_hash(&[id_commitment, *user_message_limit]);
for i in 0..identity_path_index.len() {
if identity_path_index[i] == 0 {
root = poseidon_hash(&[root, path_elements[i]]);
} else {
root = poseidon_hash(&[path_elements[i], root]);
}
}
root
}
///////////////////////////////////////////////////////
// Protocol utility functions
///////////////////////////////////////////////////////
// Generates a tuple (identity_secret_hash, id_commitment) where
// identity_secret_hash is random and id_commitment = PoseidonHash(identity_secret_hash)
// RNG is instantiated using thread_rng()
pub fn keygen() -> (Fr, Fr) {
let mut rng = thread_rng();
let identity_secret_hash = Fr::rand(&mut rng);
let id_commitment = poseidon_hash(&[identity_secret_hash]);
(identity_secret_hash, id_commitment)
}
// Generates a tuple (identity_trapdoor, identity_nullifier, identity_secret_hash, id_commitment) where
// identity_trapdoor and identity_nullifier are random,
// identity_secret_hash = PoseidonHash(identity_trapdoor, identity_nullifier),
// id_commitment = PoseidonHash(identity_secret_hash),
// RNG is instantiated using thread_rng()
// Generated credentials are compatible with Semaphore credentials
pub fn extended_keygen() -> (Fr, Fr, Fr, Fr) {
let mut rng = thread_rng();
let identity_trapdoor = Fr::rand(&mut rng);
let identity_nullifier = Fr::rand(&mut rng);
let identity_secret_hash = poseidon_hash(&[identity_trapdoor, identity_nullifier]);
let id_commitment = poseidon_hash(&[identity_secret_hash]);
(
identity_trapdoor,
identity_nullifier,
identity_secret_hash,
id_commitment,
)
}
// Generates a tuple (identity_secret_hash, id_commitment) where
// identity_secret_hash is random and id_commitment = PoseidonHash(identity_secret_hash)
// RNG is instantiated using 20 rounds of ChaCha seeded with the hash of the input
pub fn seeded_keygen(signal: &[u8]) -> (Fr, Fr) {
// ChaCha20 requires a seed of exactly 32 bytes.
// We first hash the input seed signal to a 32 bytes array and pass this as seed to ChaCha20
let mut seed = [0; 32];
let mut hasher = Keccak::v256();
hasher.update(signal);
hasher.finalize(&mut seed);
let mut rng = ChaCha20Rng::from_seed(seed);
let identity_secret_hash = Fr::rand(&mut rng);
let id_commitment = poseidon_hash(&[identity_secret_hash]);
(identity_secret_hash, id_commitment)
}
// Generates a tuple (identity_trapdoor, identity_nullifier, identity_secret_hash, id_commitment) where
// identity_trapdoor and identity_nullifier are random,
// identity_secret_hash = PoseidonHash(identity_trapdoor, identity_nullifier),
// id_commitment = PoseidonHash(identity_secret_hash),
// RNG is instantiated using 20 rounds of ChaCha seeded with the hash of the input
// Generated credentials are compatible with Semaphore credentials
pub fn extended_seeded_keygen(signal: &[u8]) -> (Fr, Fr, Fr, Fr) {
// ChaCha20 requires a seed of exactly 32 bytes.
// We first hash the input seed signal to a 32 bytes array and pass this as seed to ChaCha20
let mut seed = [0; 32];
let mut hasher = Keccak::v256();
hasher.update(signal);
hasher.finalize(&mut seed);
let mut rng = ChaCha20Rng::from_seed(seed);
let identity_trapdoor = Fr::rand(&mut rng);
let identity_nullifier = Fr::rand(&mut rng);
let identity_secret_hash = poseidon_hash(&[identity_trapdoor, identity_nullifier]);
let id_commitment = poseidon_hash(&[identity_secret_hash]);
(
identity_trapdoor,
identity_nullifier,
identity_secret_hash,
id_commitment,
)
}
pub fn compute_id_secret(share1: (Fr, Fr), share2: (Fr, Fr)) -> Result<Fr, String> {
// Assuming a0 is the identity secret and a1 = poseidonHash([a0, external_nullifier]),
// a (x,y) share satisfies the following relation
// y = a_0 + x * a_1
let (x1, y1) = share1;
let (x2, y2) = share2;
// If the two input shares were computed for the same external_nullifier and identity secret, we can recover the latter
// y1 = a_0 + x1 * a_1
// y2 = a_0 + x2 * a_1
let a_1 = (y1 - y2) / (x1 - x2);
let a_0 = y1 - x1 * a_1;
// If shares come from the same polynomial, a0 is correctly recovered and a1 = poseidonHash([a0, external_nullifier])
Ok(a_0)
}
///////////////////////////////////////////////////////
// zkSNARK utility functions
///////////////////////////////////////////////////////
#[derive(Error, Debug)]
pub enum ProofError {
#[error("Error reading circuit key: {0}")]
CircuitKeyError(#[from] Report),
#[error("Error producing witness: {0}")]
WitnessError(Report),
#[error("Error producing proof: {0}")]
SynthesisError(#[from] SynthesisError),
}
fn calculate_witness_element<E: ark_ec::pairing::Pairing>(
witness: Vec<BigInt>,
) -> Result<Vec<E::ScalarField>> {
use ark_ff::PrimeField;
let modulus = <E::ScalarField as PrimeField>::MODULUS;
// convert it to field elements
use num_traits::Signed;
let mut witness_vec = vec![];
for w in witness.into_iter() {
let w = if w.sign() == num_bigint::Sign::Minus {
// Need to negate the witness element if negative
modulus.into()
- w.abs()
.to_biguint()
.ok_or(Report::msg("not a biguint value"))?
} else {
w.to_biguint().ok_or(Report::msg("not a biguint value"))?
};
witness_vec.push(E::ScalarField::from(w))
}
Ok(witness_vec)
}
pub fn generate_proof_with_witness(
witness: Vec<BigInt>,
proving_key: &(ProvingKey<Curve>, ConstraintMatrices<Fr>),
) -> Result<ArkProof<Curve>, ProofError> {
// If in debug mode, we measure and later print the time taken to compute the witness
#[cfg(test)]
let now = Instant::now();
let full_assignment =
calculate_witness_element::<Curve>(witness).map_err(ProofError::WitnessError)?;
#[cfg(test)]
println!("witness generation took: {:.2?}", now.elapsed());
// Random Values
let mut rng = thread_rng();
let r = Fr::rand(&mut rng);
let s = Fr::rand(&mut rng);
// If in debug mode, we measure and later print the time taken to compute the proof
#[cfg(test)]
let now = Instant::now();
let proof = Groth16::<_, CircomReduction>::create_proof_with_reduction_and_matrices(
&proving_key.0,
r,
s,
&proving_key.1,
proving_key.1.num_instance_variables,
proving_key.1.num_constraints,
full_assignment.as_slice(),
)?;
#[cfg(test)]
println!("proof generation took: {:.2?}", now.elapsed());
Ok(proof)
}
/// Formats inputs for witness calculation
///
/// # Errors
///
/// Returns an error if `rln_witness.message_id` is not within `rln_witness.user_message_limit`.
pub fn inputs_for_witness_calculation(
rln_witness: &RLNWitnessInput,
) -> Result<[(&str, Vec<Fr>); 7]> {
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
let mut identity_path_index = Vec::with_capacity(rln_witness.identity_path_index.len());
rln_witness
.identity_path_index
.iter()
.for_each(|v| identity_path_index.push(Fr::from(*v)));
Ok([
("identitySecret", vec![rln_witness.identity_secret]),
("userMessageLimit", vec![rln_witness.user_message_limit]),
("messageId", vec![rln_witness.message_id]),
("pathElements", rln_witness.path_elements.clone()),
("identityPathIndex", identity_path_index),
("x", vec![rln_witness.x]),
("externalNullifier", vec![rln_witness.external_nullifier]),
])
}
/// Generates a RLN proof
///
/// # Errors
///
/// Returns a [`ProofError`] if proving fails.
pub fn generate_proof(
proving_key: &(ProvingKey<Curve>, ConstraintMatrices<Fr>),
rln_witness: &RLNWitnessInput,
graph_data: &[u8],
) -> Result<ArkProof<Curve>, ProofError> {
let inputs = inputs_for_witness_calculation(rln_witness)?
.into_iter()
.map(|(name, values)| (name.to_string(), values));
// If in debug mode, we measure and later print the time taken to compute the witness
#[cfg(test)]
let now = Instant::now();
let full_assignment = calculate_rln_witness(inputs, graph_data);
#[cfg(test)]
println!("witness generation took: {:.2?}", now.elapsed());
// Random Values
let mut rng = thread_rng();
let r = Fr::rand(&mut rng);
let s = Fr::rand(&mut rng);
// If in debug mode, we measure and later print the time taken to compute the proof
#[cfg(test)]
let now = Instant::now();
let proof = Groth16::<_, CircomReduction>::create_proof_with_reduction_and_matrices(
&proving_key.0,
r,
s,
&proving_key.1,
proving_key.1.num_instance_variables,
proving_key.1.num_constraints,
full_assignment.as_slice(),
)?;
#[cfg(test)]
println!("proof generation took: {:.2?}", now.elapsed());
Ok(proof)
}
/// Verifies a given RLN proof
///
/// # Errors
///
/// Returns a [`ProofError`] if verifying fails. Verification failure does not
/// necessarily mean the proof is incorrect.
pub fn verify_proof(
verifying_key: &VerifyingKey<Curve>,
proof: &ArkProof<Curve>,
proof_values: &RLNProofValues,
) -> Result<bool, ProofError> {
// We re-arrange proof-values according to the circuit specification
let inputs = vec![
proof_values.y,
proof_values.root,
proof_values.nullifier,
proof_values.x,
proof_values.external_nullifier,
];
// Check that the proof is valid
let pvk = prepare_verifying_key(verifying_key);
//let pr: ArkProof<Curve> = (*proof).into();
// If in debug mode, we measure and later print the time taken to verify the proof
#[cfg(test)]
let now = Instant::now();
let verified = Groth16::<_, CircomReduction>::verify_proof(&pvk, proof, &inputs)?;
#[cfg(test)]
println!("verify took: {:.2?}", now.elapsed());
Ok(verified)
}
// auxiliary function for serializing Fr to JSON using ark-serialize
fn ark_se<S, A: CanonicalSerialize>(a: &A, s: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let mut bytes = vec![];
a.serialize_compressed(&mut bytes)
.map_err(serde::ser::Error::custom)?;
s.serialize_bytes(&bytes)
}
// auxiliary function for deserializing Fr from JSON using ark-serialize
fn ark_de<'de, D, A: CanonicalDeserialize>(data: D) -> Result<A, D::Error>
where
D: serde::de::Deserializer<'de>,
{
let s: Vec<u8> = serde::de::Deserialize::deserialize(data)?;
let a = A::deserialize_compressed_unchecked(s.as_slice());
a.map_err(serde::de::Error::custom)
}
/// Converts a JSON value into [`RLNWitnessInput`] object.
///
/// # Errors
///
/// Returns an error if `rln_witness.message_id` is not within `rln_witness.user_message_limit`.
pub fn rln_witness_from_json(input_json: serde_json::Value) -> Result<RLNWitnessInput> {
let rln_witness: RLNWitnessInput = serde_json::from_value(input_json).unwrap();
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
Ok(rln_witness)
}
/// Converts a [`RLNWitnessInput`] object to the corresponding JSON serialization.
///
/// # Errors
///
/// Returns an error if `message_id` is not within `user_message_limit`.
pub fn rln_witness_to_json(rln_witness: &RLNWitnessInput) -> Result<serde_json::Value> {
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
let rln_witness_json = serde_json::to_value(rln_witness)?;
Ok(rln_witness_json)
}
/// Converts a [`RLNWitnessInput`] object to the corresponding JSON serialization.
/// Before serialization, the data is translated into big integers for further calculation in the witness calculator.
///
/// # Errors
///
/// Returns an error if `message_id` is not within `user_message_limit`.
pub fn rln_witness_to_bigint_json(rln_witness: &RLNWitnessInput) -> Result<serde_json::Value> {
message_id_range_check(&rln_witness.message_id, &rln_witness.user_message_limit)?;
let mut path_elements = Vec::new();
for v in rln_witness.path_elements.iter() {
path_elements.push(to_bigint(v)?.to_str_radix(10));
}
let mut identity_path_index = Vec::new();
rln_witness
.identity_path_index
.iter()
.for_each(|v| identity_path_index.push(BigInt::from(*v).to_str_radix(10)));
let inputs = serde_json::json!({
"identitySecret": to_bigint(&rln_witness.identity_secret)?.to_str_radix(10),
"userMessageLimit": to_bigint(&rln_witness.user_message_limit)?.to_str_radix(10),
"messageId": to_bigint(&rln_witness.message_id)?.to_str_radix(10),
"pathElements": path_elements,
"identityPathIndex": identity_path_index,
"x": to_bigint(&rln_witness.x)?.to_str_radix(10),
"externalNullifier": to_bigint(&rln_witness.external_nullifier)?.to_str_radix(10),
});
Ok(inputs)
}
pub fn message_id_range_check(message_id: &Fr, user_message_limit: &Fr) -> Result<()> {
if message_id > user_message_limit {
return Err(color_eyre::Report::msg(
"message_id is not within user_message_limit",
));
}
Ok(())
}

(new file; path not shown)

@@ -0,0 +1,80 @@
use ark_std::{rand::thread_rng, UniformRand};
use rand::SeedableRng;
use rand_chacha::ChaCha20Rng;
use tiny_keccak::{Hasher as _, Keccak};
use zerokit_utils::error::ZerokitMerkleTreeError;
use crate::{circuit::Fr, hashers::poseidon_hash, utils::IdSecret};
/// Generates a random RLN identity using a cryptographically secure RNG.
///
/// Returns `(identity_secret, id_commitment)` where the commitment is `PoseidonHash(identity_secret)`.
pub fn keygen() -> Result<(IdSecret, Fr), ZerokitMerkleTreeError> {
let mut rng = thread_rng();
let identity_secret = IdSecret::rand(&mut rng);
let id_commitment = poseidon_hash(&[*identity_secret.clone()])?;
Ok((identity_secret, id_commitment))
}
/// Generates an extended RLN identity compatible with Semaphore.
///
/// Returns `(identity_trapdoor, identity_nullifier, identity_secret, id_commitment)` where:
/// - `identity_secret = PoseidonHash(identity_trapdoor, identity_nullifier)`
/// - `id_commitment = PoseidonHash(identity_secret)`
pub fn extended_keygen() -> Result<(Fr, Fr, Fr, Fr), ZerokitMerkleTreeError> {
let mut rng = thread_rng();
let identity_trapdoor = Fr::rand(&mut rng);
let identity_nullifier = Fr::rand(&mut rng);
let identity_secret = poseidon_hash(&[identity_trapdoor, identity_nullifier])?;
let id_commitment = poseidon_hash(&[identity_secret])?;
Ok((
identity_trapdoor,
identity_nullifier,
identity_secret,
id_commitment,
))
}
/// Generates a deterministic RLN identity from a seed.
///
/// Uses ChaCha20 RNG seeded with Keccak-256 hash of the input.
/// Returns `(identity_secret, id_commitment)`. Same input always produces the same identity.
pub fn seeded_keygen(signal: &[u8]) -> Result<(Fr, Fr), ZerokitMerkleTreeError> {
// ChaCha20 requires a seed of exactly 32 bytes.
// We first hash the input seed signal to a 32 bytes array and pass this as seed to ChaCha20
let mut seed = [0; 32];
let mut hasher = Keccak::v256();
hasher.update(signal);
hasher.finalize(&mut seed);
let mut rng = ChaCha20Rng::from_seed(seed);
let identity_secret = Fr::rand(&mut rng);
let id_commitment = poseidon_hash(&[identity_secret])?;
Ok((identity_secret, id_commitment))
}
/// Generates a deterministic extended RLN identity from a seed, compatible with Semaphore.
///
/// Uses ChaCha20 RNG seeded with Keccak-256 hash of the input.
/// Returns `(identity_trapdoor, identity_nullifier, identity_secret, id_commitment)`.
/// Same input always produces the same identity.
pub fn extended_seeded_keygen(signal: &[u8]) -> Result<(Fr, Fr, Fr, Fr), ZerokitMerkleTreeError> {
// ChaCha20 requires a seed of exactly 32 bytes.
// We first hash the input seed signal to a 32 bytes array and pass this as seed to ChaCha20
let mut seed = [0; 32];
let mut hasher = Keccak::v256();
hasher.update(signal);
hasher.finalize(&mut seed);
let mut rng = ChaCha20Rng::from_seed(seed);
let identity_trapdoor = Fr::rand(&mut rng);
let identity_nullifier = Fr::rand(&mut rng);
let identity_secret = poseidon_hash(&[identity_trapdoor, identity_nullifier])?;
let id_commitment = poseidon_hash(&[identity_secret])?;
Ok((
identity_trapdoor,
identity_nullifier,
identity_secret,
id_commitment,
))
}
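The seeded variants are deterministic by construction; the following sketch doubles as a unit test:
fn seeded_keygen_is_deterministic() {
    let (sk1, pk1) = seeded_keygen(b"my seed").unwrap();
    let (sk2, pk2) = seeded_keygen(b"my seed").unwrap();
    // Same seed, same identity; a different seed yields an unrelated one.
    assert_eq!((sk1, pk1), (sk2, pk2));
    let (sk3, _) = seeded_keygen(b"other seed").unwrap();
    assert_ne!(sk1, sk3);
}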

rln/src/protocol/mod.rs (new file, 19 lines)

@@ -0,0 +1,19 @@
// This crate collects all the underlying primitives used to implement RLN
mod keygen;
mod proof;
mod slashing;
mod witness;
pub use keygen::{extended_keygen, extended_seeded_keygen, keygen, seeded_keygen};
pub use proof::{
bytes_be_to_rln_proof, bytes_be_to_rln_proof_values, bytes_le_to_rln_proof,
bytes_le_to_rln_proof_values, generate_zk_proof, generate_zk_proof_with_witness,
rln_proof_to_bytes_be, rln_proof_to_bytes_le, rln_proof_values_to_bytes_be,
rln_proof_values_to_bytes_le, verify_zk_proof, RLNProof, RLNProofValues,
};
pub use slashing::recover_id_secret;
pub use witness::{
bytes_be_to_rln_witness, bytes_le_to_rln_witness, compute_tree_root, proof_values_from_witness,
rln_witness_to_bigint_json, rln_witness_to_bytes_be, rln_witness_to_bytes_le, RLNWitnessInput,
};

rln/src/protocol/proof.rs (new file, 345 lines)

@@ -0,0 +1,345 @@
use ark_ff::PrimeField;
use ark_groth16::{prepare_verifying_key, Groth16};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};
use ark_std::{rand::thread_rng, UniformRand};
use num_bigint::BigInt;
use num_traits::Signed;
use super::witness::{inputs_for_witness_calculation, RLNWitnessInput};
use crate::{
circuit::{
iden3calc::calc_witness, qap::CircomReduction, Curve, Fr, Proof, VerifyingKey, Zkey,
COMPRESS_PROOF_SIZE,
},
error::ProtocolError,
utils::{bytes_be_to_fr, bytes_le_to_fr, fr_to_bytes_be, fr_to_bytes_le, FR_BYTE_SIZE},
};
/// Complete RLN proof.
///
/// Combines the Groth16 proof with its public values.
#[derive(Debug, PartialEq, Clone)]
pub struct RLNProof {
pub proof: Proof,
pub proof_values: RLNProofValues,
}
/// Public values for RLN proof verification.
///
/// Contains the circuit's public inputs and outputs. Used in proof verification
/// and identity secret recovery when rate limit violations are detected.
#[derive(Debug, PartialEq, Clone, Copy)]
pub struct RLNProofValues {
// Public outputs:
pub y: Fr,
pub nullifier: Fr,
pub root: Fr,
// Public Inputs:
pub x: Fr,
pub external_nullifier: Fr,
}
/// Serializes RLN proof values to little-endian bytes.
pub fn rln_proof_values_to_bytes_le(rln_proof_values: &RLNProofValues) -> Vec<u8> {
// Calculate capacity for Vec:
// 5 field elements: root, external_nullifier, x, y, nullifier
let mut bytes = Vec::with_capacity(FR_BYTE_SIZE * 5);
bytes.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.root));
bytes.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.external_nullifier));
bytes.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.x));
bytes.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.y));
bytes.extend_from_slice(&fr_to_bytes_le(&rln_proof_values.nullifier));
bytes
}
/// Serializes RLN proof values to big-endian bytes.
pub fn rln_proof_values_to_bytes_be(rln_proof_values: &RLNProofValues) -> Vec<u8> {
// Calculate capacity for Vec:
// 5 field elements: root, external_nullifier, x, y, nullifier
let mut bytes = Vec::with_capacity(FR_BYTE_SIZE * 5);
bytes.extend_from_slice(&fr_to_bytes_be(&rln_proof_values.root));
bytes.extend_from_slice(&fr_to_bytes_be(&rln_proof_values.external_nullifier));
bytes.extend_from_slice(&fr_to_bytes_be(&rln_proof_values.x));
bytes.extend_from_slice(&fr_to_bytes_be(&rln_proof_values.y));
bytes.extend_from_slice(&fr_to_bytes_be(&rln_proof_values.nullifier));
bytes
}
/// Deserializes RLN proof values from little-endian bytes.
///
/// Format: `[ root<32> | external_nullifier<32> | x<32> | y<32> | nullifier<32> ]`
///
/// Returns the deserialized proof values and the number of bytes read.
pub fn bytes_le_to_rln_proof_values(
bytes: &[u8],
) -> Result<(RLNProofValues, usize), ProtocolError> {
let mut read: usize = 0;
let (root, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (external_nullifier, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (x, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (y, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (nullifier, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
Ok((
RLNProofValues {
y,
nullifier,
root,
x,
external_nullifier,
},
read,
))
}
/// Deserializes RLN proof values from big-endian bytes.
///
/// Format: `[ root<32> | external_nullifier<32> | x<32> | y<32> | nullifier<32> ]`
///
/// Returns the deserialized proof values and the number of bytes read.
pub fn bytes_be_to_rln_proof_values(
bytes: &[u8],
) -> Result<(RLNProofValues, usize), ProtocolError> {
let mut read: usize = 0;
let (root, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (external_nullifier, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (x, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (y, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (nullifier, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
Ok((
RLNProofValues {
y,
nullifier,
root,
x,
external_nullifier,
},
read,
))
}
/// Serializes RLN proof to little-endian bytes.
///
/// Note: The Groth16 proof is always serialized in LE (arkworks behavior), and the
/// proof values are serialized in LE as well, so this encoding is little-endian throughout.
pub fn rln_proof_to_bytes_le(rln_proof: &RLNProof) -> Result<Vec<u8>, ProtocolError> {
// Calculate capacity for Vec:
// - 128 bytes for compressed Groth16 proof
// - 5 field elements for proof values (root, external_nullifier, x, y, nullifier)
let mut bytes = Vec::with_capacity(COMPRESS_PROOF_SIZE + FR_BYTE_SIZE * 5);
// Serialize proof (always LE format from arkworks)
rln_proof.proof.serialize_compressed(&mut bytes)?;
// Serialize proof values in LE
let proof_values_bytes = rln_proof_values_to_bytes_le(&rln_proof.proof_values);
bytes.extend_from_slice(&proof_values_bytes);
Ok(bytes)
}
/// Serializes RLN proof to big-endian bytes.
///
/// Note: The Groth16 proof is always serialized in LE format (arkworks behavior),
/// while proof_values are serialized in BE format. This creates a mixed-endian format.
pub fn rln_proof_to_bytes_be(rln_proof: &RLNProof) -> Result<Vec<u8>, ProtocolError> {
// Calculate capacity for Vec:
// - 128 bytes for compressed Groth16 proof
// - 5 field elements for proof values (root, external_nullifier, x, y, nullifier)
let mut bytes = Vec::with_capacity(COMPRESS_PROOF_SIZE + FR_BYTE_SIZE * 5);
// Serialize proof (always LE format from arkworks)
rln_proof.proof.serialize_compressed(&mut bytes)?;
// Serialize proof values in BE
let proof_values_bytes = rln_proof_values_to_bytes_be(&rln_proof.proof_values);
bytes.extend_from_slice(&proof_values_bytes);
Ok(bytes)
}
/// Deserializes RLN proof from little-endian bytes.
///
/// Format: `[ proof<128,LE> | root<32,LE> | external_nullifier<32,LE> | x<32,LE> | y<32,LE> | nullifier<32,LE> ]`
///
/// Returns the deserialized proof and the number of bytes read.
pub fn bytes_le_to_rln_proof(bytes: &[u8]) -> Result<(RLNProof, usize), ProtocolError> {
let mut read: usize = 0;
// Deserialize proof (always LE from arkworks)
let proof = Proof::deserialize_compressed(&bytes[read..read + COMPRESS_PROOF_SIZE])?;
read += COMPRESS_PROOF_SIZE;
// Deserialize proof values
let (values, el_size) = bytes_le_to_rln_proof_values(&bytes[read..])?;
read += el_size;
Ok((
RLNProof {
proof,
proof_values: values,
},
read,
))
}
/// Deserializes RLN proof from big-endian bytes.
///
/// Format: `[ proof<128,LE> | root<32,BE> | external_nullifier<32,BE> | x<32,BE> | y<32,BE> | nullifier<32,BE> ]`
///
/// Note: Mixed-endian format - proof is LE (arkworks), proof_values are BE.
///
/// Returns the deserialized proof and the number of bytes read.
pub fn bytes_be_to_rln_proof(bytes: &[u8]) -> Result<(RLNProof, usize), ProtocolError> {
let mut read: usize = 0;
// Deserialize proof (always LE from arkworks)
let proof = Proof::deserialize_compressed(&bytes[read..read + COMPRESS_PROOF_SIZE])?;
read += COMPRESS_PROOF_SIZE;
// Deserialize proof values
let (values, el_size) = bytes_be_to_rln_proof_values(&bytes[read..])?;
read += el_size;
Ok((
RLNProof {
proof,
proof_values: values,
},
read,
))
}
// zkSNARK proof generation and verification
/// Converts calculated witness (BigInt) to field elements.
fn calculated_witness_to_field_elements<E: ark_ec::pairing::Pairing>(
calculated_witness: Vec<BigInt>,
) -> Result<Vec<E::ScalarField>, ProtocolError> {
let modulus = <E::ScalarField as PrimeField>::MODULUS;
// convert it to field elements
let mut field_elements = vec![];
for w in calculated_witness.into_iter() {
let w = if w.sign() == num_bigint::Sign::Minus {
// Map negative values into the field: w mod p = p - |w|
modulus.into()
- w.abs()
.to_biguint()
.ok_or(ProtocolError::BigUintConversion(w))?
} else {
w.to_biguint().ok_or(ProtocolError::BigUintConversion(w))?
};
field_elements.push(E::ScalarField::from(w))
}
Ok(field_elements)
}
/// Generates a zkSNARK proof from pre-calculated witness values.
///
/// Use this when witness calculation is performed externally.
pub fn generate_zk_proof_with_witness(
calculated_witness: Vec<BigInt>,
zkey: &Zkey,
) -> Result<Proof, ProtocolError> {
let full_assignment = calculated_witness_to_field_elements::<Curve>(calculated_witness)?;
// Random Values
let mut rng = thread_rng();
let r = Fr::rand(&mut rng);
let s = Fr::rand(&mut rng);
let proof = Groth16::<_, CircomReduction>::create_proof_with_reduction_and_matrices(
&zkey.0,
r,
s,
&zkey.1,
zkey.1.num_instance_variables,
zkey.1.num_constraints,
full_assignment.as_slice(),
)?;
Ok(proof)
}
/// Generates a zkSNARK proof from witness input using the provided circuit data.
pub fn generate_zk_proof(
zkey: &Zkey,
witness: &RLNWitnessInput,
graph_data: &[u8],
) -> Result<Proof, ProtocolError> {
let inputs = inputs_for_witness_calculation(witness)?
.into_iter()
.map(|(name, values)| (name.to_string(), values));
let full_assignment = calc_witness(inputs, graph_data)?;
// Random Values
let mut rng = thread_rng();
let r = Fr::rand(&mut rng);
let s = Fr::rand(&mut rng);
let proof = Groth16::<_, CircomReduction>::create_proof_with_reduction_and_matrices(
&zkey.0,
r,
s,
&zkey.1,
zkey.1.num_instance_variables,
zkey.1.num_constraints,
full_assignment.as_slice(),
)?;
Ok(proof)
}
/// Verifies a zkSNARK proof against the verifying key and public values.
///
/// Returns `true` if the proof is cryptographically valid, `false` if verification fails.
/// Note: a verification failure does not necessarily indicate a malicious proof; it can also result from errors in proof computation.
pub fn verify_zk_proof(
verifying_key: &VerifyingKey,
proof: &Proof,
proof_values: &RLNProofValues,
) -> Result<bool, ProtocolError> {
// We re-arrange proof-values according to the circuit specification
let inputs = vec![
proof_values.y,
proof_values.root,
proof_values.nullifier,
proof_values.x,
proof_values.external_nullifier,
];
// Check that the proof is valid
let pvk = prepare_verifying_key(verifying_key);
let verified = Groth16::<_, CircomReduction>::verify_proof(&pvk, proof, &inputs)?;
Ok(verified)
}
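Taken together, the helpers above define a symmetric wire format: a 128-byte compressed Groth16 proof followed by five 32-byte field elements, 288 bytes in total for the LE encoding. A minimal round-trip sketch, under the same assumptions as the tests further below (prelude re-exports, test circuit data loaded via zkey_from_folder() and graph_from_folder()):

use rln::prelude::*;

fn prove_serialize_verify(witness: &RLNWitnessInput) -> bool {
    let zkey = zkey_from_folder();
    let graph_data = graph_from_folder();
    // Prove and compute the public values
    let proof = generate_zk_proof(zkey, witness, graph_data).unwrap();
    let proof_values = proof_values_from_witness(witness).unwrap();
    // [ proof<128,LE> | root<32> | external_nullifier<32> | x<32> | y<32> | nullifier<32> ]
    let bytes = rln_proof_to_bytes_le(&RLNProof { proof, proof_values }).unwrap();
    // 128-byte compressed proof + 5 * 32-byte field elements = 288 bytes
    assert_eq!(bytes.len(), 288);
    let (decoded, read) = bytes_le_to_rln_proof(&bytes).unwrap();
    assert_eq!(read, bytes.len());
    verify_zk_proof(&zkey.0.vk, &decoded.proof, &decoded.proof_values).unwrap()
}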


@@ -0,0 +1,55 @@
use ark_ff::AdditiveGroup;
use super::proof::RLNProofValues;
use crate::{circuit::Fr, error::ProtocolError, utils::IdSecret};
/// Computes identity secret from two (x, y) shares.
fn compute_id_secret(share1: (Fr, Fr), share2: (Fr, Fr)) -> Result<IdSecret, ProtocolError> {
// Assuming a0 is the identity secret and a1 = poseidonHash([a0, external_nullifier]),
// a (x,y) share satisfies the following relation
// y = a_0 + x * a_1
let (x1, y1) = share1;
let (x2, y2) = share2;
// If the two input shares were computed for the same external_nullifier and identity secret, we can recover the latter
// y1 = a_0 + x1 * a_1
// y2 = a_0 + x2 * a_1
if (x1 - x2) != Fr::ZERO {
let a_1 = (y1 - y2) / (x1 - x2);
let mut a_0 = y1 - x1 * a_1;
// If shares come from the same polynomial, a0 is correctly recovered and a1 = poseidonHash([a0, external_nullifier])
let id_secret = IdSecret::from(&mut a_0);
Ok(id_secret)
} else {
Err(ProtocolError::DivisionByZero)
}
}
/// Recovers identity secret from two proof shares with the same external nullifier.
///
/// When a user violates rate limits by generating multiple proofs in the same epoch,
/// their shares can be used to recover their identity secret through polynomial interpolation.
pub fn recover_id_secret(
rln_proof_values_1: &RLNProofValues,
rln_proof_values_2: &RLNProofValues,
) -> Result<IdSecret, ProtocolError> {
let external_nullifier_1 = rln_proof_values_1.external_nullifier;
let external_nullifier_2 = rln_proof_values_2.external_nullifier;
// We continue only if the proof values are for the same external nullifier
if external_nullifier_1 != external_nullifier_2 {
return Err(ProtocolError::ExternalNullifierMismatch(
external_nullifier_1,
external_nullifier_2,
));
}
// We extract the two shares
let share1 = (rln_proof_values_1.x, rln_proof_values_1.y);
let share2 = (rln_proof_values_2.x, rln_proof_values_2.y);
// We recover the secret
compute_id_secret(share1, share2)
}
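The recovery above is plain two-point interpolation of the degree-one polynomial y = a_0 + x * a_1. A self-contained sketch with arbitrary toy values (only x, y, and external_nullifier matter for recovery; root and nullifier are placeholders here):

use rln::prelude::*;

let a_0 = Fr::from(1234u64); // the "secret"
let a_1 = Fr::from(5678u64);
let pv = |x: Fr| RLNProofValues {
    x,
    y: a_0 + x * a_1, // a share of the polynomial
    root: Fr::from(0u64),
    nullifier: Fr::from(0u64),
    external_nullifier: Fr::from(7u64), // must match between the two shares
};
let secret = recover_id_secret(&pv(Fr::from(10u64)), &pv(Fr::from(99u64))).unwrap();
assert_eq!(*secret, a_0); // IdSecret derefs to Fr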

354
rln/src/protocol/witness.rs Normal file

@@ -0,0 +1,354 @@
use zeroize::Zeroize;
use super::proof::RLNProofValues;
use crate::{
circuit::Fr,
error::ProtocolError,
hashers::poseidon_hash,
utils::{
bytes_be_to_fr, bytes_be_to_vec_fr, bytes_be_to_vec_u8, bytes_le_to_fr, bytes_le_to_vec_fr,
bytes_le_to_vec_u8, fr_to_bytes_be, fr_to_bytes_le, to_bigint, vec_fr_to_bytes_be,
vec_fr_to_bytes_le, vec_u8_to_bytes_be, vec_u8_to_bytes_le, FrOrSecret, IdSecret,
FR_BYTE_SIZE,
},
};
/// Witness input for RLN proof generation.
///
/// Contains the identity credentials, merkle proof, rate-limiting parameters,
/// and signal binding data required to generate a Groth16 proof for the RLN protocol.
#[derive(Debug, PartialEq, Clone)]
pub struct RLNWitnessInput {
identity_secret: IdSecret,
user_message_limit: Fr,
message_id: Fr,
path_elements: Vec<Fr>,
identity_path_index: Vec<u8>,
x: Fr,
external_nullifier: Fr,
}
impl RLNWitnessInput {
pub fn new(
identity_secret: IdSecret,
user_message_limit: Fr,
message_id: Fr,
path_elements: Vec<Fr>,
identity_path_index: Vec<u8>,
x: Fr,
external_nullifier: Fr,
) -> Result<Self, ProtocolError> {
// Message ID range check
if message_id > user_message_limit {
return Err(ProtocolError::InvalidMessageId(
message_id,
user_message_limit,
));
}
// Merkle proof length check
let path_elements_len = path_elements.len();
let identity_path_index_len = identity_path_index.len();
if path_elements_len != identity_path_index_len {
return Err(ProtocolError::InvalidMerkleProofLength(
path_elements_len,
identity_path_index_len,
));
}
Ok(Self {
identity_secret,
user_message_limit,
message_id,
path_elements,
identity_path_index,
x,
external_nullifier,
})
}
pub fn identity_secret(&self) -> &IdSecret {
&self.identity_secret
}
pub fn user_message_limit(&self) -> &Fr {
&self.user_message_limit
}
pub fn message_id(&self) -> &Fr {
&self.message_id
}
pub fn path_elements(&self) -> &[Fr] {
&self.path_elements
}
pub fn identity_path_index(&self) -> &[u8] {
&self.identity_path_index
}
pub fn x(&self) -> &Fr {
&self.x
}
pub fn external_nullifier(&self) -> &Fr {
&self.external_nullifier
}
}
/// Serializes an RLN witness to little-endian bytes.
pub fn rln_witness_to_bytes_le(witness: &RLNWitnessInput) -> Result<Vec<u8>, ProtocolError> {
// Calculate capacity for Vec:
// - 5 fixed field elements: identity_secret, user_message_limit, message_id, x, external_nullifier
// - variable number of path elements
// - identity_path_index (variable size)
let mut bytes: Vec<u8> = Vec::with_capacity(
FR_BYTE_SIZE * (5 + witness.path_elements.len()) + witness.identity_path_index.len(),
);
bytes.extend_from_slice(&witness.identity_secret.to_bytes_le());
bytes.extend_from_slice(&fr_to_bytes_le(&witness.user_message_limit));
bytes.extend_from_slice(&fr_to_bytes_le(&witness.message_id));
bytes.extend_from_slice(&vec_fr_to_bytes_le(&witness.path_elements));
bytes.extend_from_slice(&vec_u8_to_bytes_le(&witness.identity_path_index));
bytes.extend_from_slice(&fr_to_bytes_le(&witness.x));
bytes.extend_from_slice(&fr_to_bytes_le(&witness.external_nullifier));
Ok(bytes)
}
/// Serializes an RLN witness to big-endian bytes.
pub fn rln_witness_to_bytes_be(witness: &RLNWitnessInput) -> Result<Vec<u8>, ProtocolError> {
// Calculate capacity for Vec:
// - 5 fixed field elements: identity_secret, user_message_limit, message_id, x, external_nullifier
// - variable number of path elements
// - identity_path_index (variable size)
let mut bytes: Vec<u8> = Vec::with_capacity(
FR_BYTE_SIZE * (5 + witness.path_elements.len()) + witness.identity_path_index.len(),
);
bytes.extend_from_slice(&witness.identity_secret.to_bytes_be());
bytes.extend_from_slice(&fr_to_bytes_be(&witness.user_message_limit));
bytes.extend_from_slice(&fr_to_bytes_be(&witness.message_id));
bytes.extend_from_slice(&vec_fr_to_bytes_be(&witness.path_elements));
bytes.extend_from_slice(&vec_u8_to_bytes_be(&witness.identity_path_index));
bytes.extend_from_slice(&fr_to_bytes_be(&witness.x));
bytes.extend_from_slice(&fr_to_bytes_be(&witness.external_nullifier));
Ok(bytes)
}
/// Deserializes an RLN witness from little-endian bytes.
///
/// Format: `[ identity_secret<32> | user_message_limit<32> | message_id<32> | path_elements<var> | identity_path_index<var> | x<32> | external_nullifier<32> ]`
///
/// Returns the deserialized witness and the number of bytes read.
pub fn bytes_le_to_rln_witness(bytes: &[u8]) -> Result<(RLNWitnessInput, usize), ProtocolError> {
let mut read: usize = 0;
let (identity_secret, el_size) = IdSecret::from_bytes_le(&bytes[read..])?;
read += el_size;
let (user_message_limit, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (message_id, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (path_elements, el_size) = bytes_le_to_vec_fr(&bytes[read..])?;
read += el_size;
let (identity_path_index, el_size) = bytes_le_to_vec_u8(&bytes[read..])?;
read += el_size;
let (x, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
let (external_nullifier, el_size) = bytes_le_to_fr(&bytes[read..])?;
read += el_size;
if bytes.len() != read {
return Err(ProtocolError::InvalidReadLen(bytes.len(), read));
}
Ok((
RLNWitnessInput::new(
identity_secret,
user_message_limit,
message_id,
path_elements,
identity_path_index,
x,
external_nullifier,
)?,
read,
))
}
/// Deserializes an RLN witness from big-endian bytes.
///
/// Format: `[ identity_secret<32> | user_message_limit<32> | message_id<32> | path_elements<var> | identity_path_index<var> | x<32> | external_nullifier<32> ]`
///
/// Returns the deserialized witness and the number of bytes read.
pub fn bytes_be_to_rln_witness(bytes: &[u8]) -> Result<(RLNWitnessInput, usize), ProtocolError> {
let mut read: usize = 0;
let (identity_secret, el_size) = IdSecret::from_bytes_be(&bytes[read..])?;
read += el_size;
let (user_message_limit, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (message_id, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (path_elements, el_size) = bytes_be_to_vec_fr(&bytes[read..])?;
read += el_size;
let (identity_path_index, el_size) = bytes_be_to_vec_u8(&bytes[read..])?;
read += el_size;
let (x, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
let (external_nullifier, el_size) = bytes_be_to_fr(&bytes[read..])?;
read += el_size;
if bytes.len() != read {
return Err(ProtocolError::InvalidReadLen(bytes.len(), read));
}
Ok((
RLNWitnessInput::new(
identity_secret,
user_message_limit,
message_id,
path_elements,
identity_path_index,
x,
external_nullifier,
)?,
read,
))
}
/// Converts RLN witness to JSON with BigInt string representation for witness calculator.
pub fn rln_witness_to_bigint_json(
witness: &RLNWitnessInput,
) -> Result<serde_json::Value, ProtocolError> {
use num_bigint::BigInt;
let mut path_elements = Vec::new();
for v in witness.path_elements.iter() {
path_elements.push(to_bigint(v).to_str_radix(10));
}
let mut identity_path_index = Vec::new();
witness
.identity_path_index
.iter()
.for_each(|v| identity_path_index.push(BigInt::from(*v).to_str_radix(10)));
let inputs = serde_json::json!({
"identitySecret": to_bigint(&witness.identity_secret).to_str_radix(10),
"userMessageLimit": to_bigint(&witness.user_message_limit).to_str_radix(10),
"messageId": to_bigint(&witness.message_id).to_str_radix(10),
"pathElements": path_elements,
"identityPathIndex": identity_path_index,
"x": to_bigint(&witness.x).to_str_radix(10),
"externalNullifier": to_bigint(&witness.external_nullifier).to_str_radix(10),
});
Ok(inputs)
}
/// Computes RLN proof values from witness input.
///
/// Calculates the public outputs (y, nullifier, root) that will be part of the proof.
pub fn proof_values_from_witness(
witness: &RLNWitnessInput,
) -> Result<RLNProofValues, ProtocolError> {
// y share
let a_0 = &witness.identity_secret;
let mut to_hash = [**a_0, witness.external_nullifier, witness.message_id];
let a_1 = poseidon_hash(&to_hash)?;
let y = *(a_0.clone()) + witness.x * a_1;
// Nullifier
let nullifier = poseidon_hash(&[a_1])?;
to_hash[0].zeroize();
// Merkle tree root computations
let root = compute_tree_root(
&witness.identity_secret,
&witness.user_message_limit,
&witness.path_elements,
&witness.identity_path_index,
)?;
Ok(RLNProofValues {
y,
nullifier,
root,
x: witness.x,
external_nullifier: witness.external_nullifier,
})
}
/// Computes the Merkle tree root from identity credentials and Merkle membership proof.
pub fn compute_tree_root(
identity_secret: &IdSecret,
user_message_limit: &Fr,
path_elements: &[Fr],
identity_path_index: &[u8],
) -> Result<Fr, ProtocolError> {
let mut to_hash = [*identity_secret.clone()];
let id_commitment = poseidon_hash(&to_hash)?;
to_hash[0].zeroize();
let mut root = poseidon_hash(&[id_commitment, *user_message_limit])?;
for i in 0..identity_path_index.len() {
if identity_path_index[i] == 0 {
root = poseidon_hash(&[root, path_elements[i]])?;
} else {
root = poseidon_hash(&[path_elements[i], root])?;
}
}
Ok(root)
}
/// Prepares inputs for witness calculation from RLN witness input.
pub(super) fn inputs_for_witness_calculation(
witness: &RLNWitnessInput,
) -> Result<[(&str, Vec<FrOrSecret>); 7], ProtocolError> {
let mut identity_path_index = Vec::with_capacity(witness.identity_path_index.len());
witness
.identity_path_index
.iter()
.for_each(|v| identity_path_index.push(Fr::from(*v)));
Ok([
(
"identitySecret",
vec![witness.identity_secret.clone().into()],
),
("userMessageLimit", vec![witness.user_message_limit.into()]),
("messageId", vec![witness.message_id.into()]),
(
"pathElements",
witness
.path_elements
.iter()
.cloned()
.map(Into::into)
.collect(),
),
(
"identityPathIndex",
identity_path_index.into_iter().map(Into::into).collect(),
),
("x", vec![witness.x.into()]),
("externalNullifier", vec![witness.external_nullifier.into()]),
])
}
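A construction-plus-round-trip sketch; the Merkle path arguments are assumed to come from a tree proof as in the protocol tests below, and new() enforces both the message-id range check and the equal-length check on the path vectors:

use rln::prelude::*;

fn witness_roundtrip(path_elements: Vec<Fr>, identity_path_index: Vec<u8>) {
    let (identity_secret, _id_commitment) = keygen().unwrap();
    let epoch = hash_to_field_le(b"example-epoch").unwrap();
    let rln_identifier = hash_to_field_le(b"example-rln-identifier").unwrap();
    let witness = RLNWitnessInput::new(
        identity_secret,
        Fr::from(100u64),                     // user_message_limit
        Fr::from(1u64),                       // message_id (must not exceed the limit)
        path_elements,
        identity_path_index,
        hash_to_field_le(b"signal").unwrap(), // x, the signal hash
        poseidon_hash(&[epoch, rln_identifier]).unwrap(),
    )
    .unwrap();
    let bytes = rln_witness_to_bytes_le(&witness).unwrap();
    let (decoded, read) = bytes_le_to_rln_witness(&bytes).unwrap();
    assert_eq!(read, bytes.len());
    assert_eq!(decoded, witness);
}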

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,29 +1,37 @@
// This crate provides useful cross-module utilities (mainly type conversions) that are not necessarily specific to RLN
use std::ops::Deref;
use ark_ff::PrimeField;
use color_eyre::{Report, Result};
use ark_serialize::{CanonicalDeserialize, CanonicalSerialize};
use ark_std::UniformRand;
use num_bigint::{BigInt, BigUint};
use num_traits::Num;
use serde_json::json;
use std::io::Cursor;
use rand::Rng;
use ruint::aliases::U256;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use crate::circuit::Fr;
use crate::{circuit::Fr, error::UtilsError};
/// Byte size of a field element aligned to 64-bit boundary, computed once at compile time.
pub const FR_BYTE_SIZE: usize = {
// Get the modulus bit size of the field
let modulus_bits: u32 = Fr::MODULUS_BIT_SIZE;
// Alignment boundary in bits for field element serialization
let alignment_bits: u32 = 64;
// Align to the next multiple of alignment_bits and convert to bytes
((modulus_bits + alignment_bits - (modulus_bits % alignment_bits)) / 8) as usize
};
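// Worked example for BN254's Fr: MODULUS_BIT_SIZE = 254 and 254 % 64 = 62,
// so FR_BYTE_SIZE = (254 + 64 - 62) / 8 = 256 / 8 = 32 bytes.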
#[inline(always)]
pub fn to_bigint(el: &Fr) -> Result<BigInt> {
Ok(BigUint::from(*el).into())
pub fn to_bigint(el: &Fr) -> BigInt {
BigUint::from(*el).into()
}
#[inline(always)]
pub fn fr_byte_size() -> usize {
let mbs = <Fr as PrimeField>::MODULUS_BIT_SIZE;
((mbs + 64 - (mbs % 64)) / 8) as usize
}
#[inline(always)]
pub fn str_to_fr(input: &str, radix: u32) -> Result<Fr> {
pub fn str_to_fr(input: &str, radix: u32) -> Result<Fr, UtilsError> {
if !(radix == 10 || radix == 16) {
return Err(Report::msg("wrong radix"));
return Err(UtilsError::WrongRadix);
}
// We remove any quote present and we trim
@@ -40,12 +48,33 @@ pub fn str_to_fr(input: &str, radix: u32) -> Result<Fr> {
}
#[inline(always)]
pub fn bytes_le_to_fr(input: &[u8]) -> (Fr, usize) {
let el_size = fr_byte_size();
(
pub fn bytes_le_to_fr(input: &[u8]) -> Result<(Fr, usize), UtilsError> {
let el_size = FR_BYTE_SIZE;
if input.len() < el_size {
return Err(UtilsError::InsufficientData {
expected: el_size,
actual: input.len(),
});
}
Ok((
Fr::from(BigUint::from_bytes_le(&input[0..el_size])),
el_size,
)
))
}
#[inline(always)]
pub fn bytes_be_to_fr(input: &[u8]) -> Result<(Fr, usize), UtilsError> {
let el_size = FR_BYTE_SIZE;
if input.len() < el_size {
return Err(UtilsError::InsufficientData {
expected: el_size,
actual: input.len(),
});
}
Ok((
Fr::from(BigUint::from_bytes_be(&input[0..el_size])),
el_size,
))
}
#[inline(always)]
@@ -53,86 +82,246 @@ pub fn fr_to_bytes_le(input: &Fr) -> Vec<u8> {
let input_biguint: BigUint = (*input).into();
let mut res = input_biguint.to_bytes_le();
// BigUint conversion drops the most significant zero bytes; we restore them, otherwise serialization will fail (length % 8 != 0)
res.resize(fr_byte_size(), 0);
res.resize(FR_BYTE_SIZE, 0);
res
}
#[inline(always)]
pub fn vec_fr_to_bytes_le(input: &[Fr]) -> Result<Vec<u8>> {
pub fn fr_to_bytes_be(input: &Fr) -> Vec<u8> {
let input_biguint: BigUint = (*input).into();
let mut res = input_biguint.to_bytes_be();
// For BE, insert 0 at the start of the Vec (see also fr_to_bytes_le comments)
let to_insert_count = FR_BYTE_SIZE.saturating_sub(res.len());
if to_insert_count > 0 {
// Insert the missing leading zero bytes at index 0
res.splice(0..0, std::iter::repeat_n(0, to_insert_count));
}
res
}
#[inline(always)]
pub fn vec_fr_to_bytes_le(input: &[Fr]) -> Vec<u8> {
// Calculate capacity for Vec:
// - 8 bytes for normalized vector length (usize)
// - each Fr element requires fr_byte_size() bytes (typically 32 bytes)
let mut bytes = Vec::with_capacity(8 + input.len() * fr_byte_size());
// - each Fr element requires FR_BYTE_SIZE bytes (typically 32 bytes)
let mut bytes = Vec::with_capacity(8 + input.len() * FR_BYTE_SIZE);
// We store the vector length
bytes.extend_from_slice(&normalize_usize(input.len()));
bytes.extend_from_slice(&normalize_usize_le(input.len()));
// We store each element
for el in input {
bytes.extend_from_slice(&fr_to_bytes_le(el));
}
Ok(bytes)
bytes
}
#[inline(always)]
pub fn vec_u8_to_bytes_le(input: &[u8]) -> Result<Vec<u8>> {
pub fn vec_fr_to_bytes_be(input: &[Fr]) -> Vec<u8> {
// Calculate capacity for Vec:
// - 8 bytes for normalized vector length (usize)
// - each Fr element requires FR_BYTE_SIZE bytes (typically 32 bytes)
let mut bytes = Vec::with_capacity(8 + input.len() * FR_BYTE_SIZE);
// We store the vector length
bytes.extend_from_slice(&normalize_usize_be(input.len()));
// We store each element
for el in input {
bytes.extend_from_slice(&fr_to_bytes_be(el));
}
bytes
}
#[inline(always)]
pub fn vec_u8_to_bytes_le(input: &[u8]) -> Vec<u8> {
// Calculate capacity for Vec:
// - 8 bytes for normalized vector length (usize)
// - variable length input data
let mut bytes = Vec::with_capacity(8 + input.len());
// We store the vector length
bytes.extend_from_slice(&normalize_usize(input.len()));
bytes.extend_from_slice(&normalize_usize_le(input.len()));
// We store the input
bytes.extend_from_slice(input);
Ok(bytes)
bytes
}
#[inline(always)]
pub fn bytes_le_to_vec_u8(input: &[u8]) -> Result<(Vec<u8>, usize)> {
let mut read: usize = 0;
pub fn vec_u8_to_bytes_be(input: &[u8]) -> Vec<u8> {
// Calculate capacity for Vec:
// - 8 bytes for normalized vector length (usize)
// - variable length input data
let mut bytes = Vec::with_capacity(8 + input.len());
// We store the vector length
bytes.extend_from_slice(&normalize_usize_be(input.len()));
// We store the input
bytes.extend_from_slice(input);
bytes
}
#[inline(always)]
pub fn bytes_le_to_vec_u8(input: &[u8]) -> Result<(Vec<u8>, usize), UtilsError> {
let mut read: usize = 0;
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let len = usize::try_from(u64::from_le_bytes(input[0..8].try_into()?))?;
read += 8;
if input.len() < 8 + len {
return Err(UtilsError::InsufficientData {
expected: 8 + len,
actual: input.len(),
});
}
let res = input[8..8 + len].to_vec();
read += res.len();
Ok((res, read))
}
#[inline(always)]
pub fn bytes_le_to_vec_fr(input: &[u8]) -> Result<(Vec<Fr>, usize)> {
pub fn bytes_be_to_vec_u8(input: &[u8]) -> Result<(Vec<u8>, usize), UtilsError> {
let mut read: usize = 0;
let mut res: Vec<Fr> = Vec::new();
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let len = usize::try_from(u64::from_be_bytes(input[0..8].try_into()?))?;
read += 8;
if input.len() < 8 + len {
return Err(UtilsError::InsufficientData {
expected: 8 + len,
actual: input.len(),
});
}
let res = input[8..8 + len].to_vec();
read += res.len();
Ok((res, read))
}
#[inline(always)]
pub fn bytes_le_to_vec_fr(input: &[u8]) -> Result<(Vec<Fr>, usize), UtilsError> {
let mut read: usize = 0;
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let len = usize::try_from(u64::from_le_bytes(input[0..8].try_into()?))?;
read += 8;
let el_size = fr_byte_size();
let el_size = FR_BYTE_SIZE;
if input.len() < 8 + len * el_size {
return Err(UtilsError::InsufficientData {
expected: 8 + len * el_size,
actual: input.len(),
});
}
let mut res: Vec<Fr> = Vec::with_capacity(len);
for i in 0..len {
let (curr_el, _) = bytes_le_to_fr(&input[8 + el_size * i..8 + el_size * (i + 1)]);
let (curr_el, _) = bytes_le_to_fr(&input[8 + el_size * i..8 + el_size * (i + 1)])?;
res.push(curr_el);
read += el_size;
}
Ok((res, read))
}
#[inline(always)]
pub fn bytes_le_to_vec_usize(input: &[u8]) -> Result<Vec<usize>> {
pub fn bytes_be_to_vec_fr(input: &[u8]) -> Result<(Vec<Fr>, usize), UtilsError> {
let mut read: usize = 0;
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let len = usize::try_from(u64::from_be_bytes(input[0..8].try_into()?))?;
read += 8;
let el_size = FR_BYTE_SIZE;
if input.len() < 8 + len * el_size {
return Err(UtilsError::InsufficientData {
expected: 8 + len * el_size,
actual: input.len(),
});
}
let mut res: Vec<Fr> = Vec::with_capacity(len);
for i in 0..len {
let (curr_el, _) = bytes_be_to_fr(&input[8 + el_size * i..8 + el_size * (i + 1)])?;
res.push(curr_el);
read += el_size;
}
Ok((res, read))
}
#[inline(always)]
pub fn bytes_le_to_vec_usize(input: &[u8]) -> Result<Vec<usize>, UtilsError> {
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let nof_elem = usize::try_from(u64::from_le_bytes(input[0..8].try_into()?))?;
if nof_elem == 0 {
Ok(vec![])
} else {
let elements: Vec<usize> = input[8..]
.chunks(8)
.map(|ch| usize::from_le_bytes(ch[0..8].try_into().unwrap()))
.collect();
Ok(elements)
if input.len() < 8 + nof_elem * 8 {
return Err(UtilsError::InsufficientData {
expected: 8 + nof_elem * 8,
actual: input.len(),
});
}
input[8..]
.chunks_exact(8)
.take(nof_elem)
.map(|ch| {
ch.try_into()
.map(usize::from_le_bytes)
.map_err(UtilsError::FromSlice)
})
.collect()
}
}
#[inline(always)]
pub fn bytes_be_to_vec_usize(input: &[u8]) -> Result<Vec<usize>, UtilsError> {
if input.len() < 8 {
return Err(UtilsError::InsufficientData {
expected: 8,
actual: input.len(),
});
}
let nof_elem = usize::try_from(u64::from_be_bytes(input[0..8].try_into()?))?;
if nof_elem == 0 {
Ok(vec![])
} else {
if input.len() < 8 + nof_elem * 8 {
return Err(UtilsError::InsufficientData {
expected: 8 + nof_elem * 8,
actual: input.len(),
});
}
input[8..]
.chunks_exact(8)
.take(nof_elem)
.map(|ch| {
ch.try_into()
.map(usize::from_be_bytes)
.map_err(UtilsError::FromSlice)
})
.collect()
}
}
@@ -140,14 +329,131 @@ pub fn bytes_le_to_vec_usize(input: &[u8]) -> Result<Vec<usize>> {
/// On 32-bit systems, the result is zero-padded to 8 bytes.
/// On 64-bit systems, it directly represents the `usize` value.
#[inline(always)]
pub fn normalize_usize(input: usize) -> [u8; 8] {
pub fn normalize_usize_le(input: usize) -> [u8; 8] {
let mut bytes = [0u8; 8];
let input_bytes = input.to_le_bytes();
bytes[..input_bytes.len()].copy_from_slice(&input_bytes);
bytes
}
#[inline(always)] // using for test
pub fn generate_input_buffer() -> Cursor<String> {
Cursor::new(json!({}).to_string())
/// Normalizes a `usize` into an 8-byte array, ensuring consistency across architectures.
/// On 32-bit systems, the result is zero-padded to 8 bytes.
/// On 64-bit systems, it directly represents the `usize` value.
#[inline(always)]
pub fn normalize_usize_be(input: usize) -> [u8; 8] {
let mut bytes = [0u8; 8];
let input_bytes = input.to_be_bytes();
let offset = 8 - input_bytes.len();
bytes[offset..].copy_from_slice(&input_bytes);
bytes
}
#[derive(
Debug, Zeroize, ZeroizeOnDrop, Clone, PartialEq, CanonicalSerialize, CanonicalDeserialize,
)]
pub struct IdSecret(Fr);
impl IdSecret {
pub fn rand<R: Rng + ?Sized>(rng: &mut R) -> Self {
let mut fr = Fr::rand(rng);
let res = Self::from(&mut fr);
// No need to zeroize fr (already zeroized in the From implementation)
#[allow(clippy::let_and_return)]
res
}
pub fn from_bytes_le(input: &[u8]) -> Result<(Self, usize), UtilsError> {
let el_size = FR_BYTE_SIZE;
if input.len() < el_size {
return Err(UtilsError::InsufficientData {
expected: el_size,
actual: input.len(),
});
}
let b_uint = BigUint::from_bytes_le(&input[0..el_size]);
let mut fr = Fr::from(b_uint);
let res = IdSecret::from(&mut fr);
// Note: no zeroize on b_uint as it has been moved
Ok((res, el_size))
}
pub fn from_bytes_be(input: &[u8]) -> Result<(Self, usize), UtilsError> {
let el_size = FR_BYTE_SIZE;
if input.len() < el_size {
return Err(UtilsError::InsufficientData {
expected: el_size,
actual: input.len(),
});
}
let b_uint = BigUint::from_bytes_be(&input[0..el_size]);
let mut fr = Fr::from(b_uint);
let res = IdSecret::from(&mut fr);
// Note: no zeroize on b_uint as it has been moved
Ok((res, el_size))
}
pub(crate) fn to_bytes_le(&self) -> Zeroizing<Vec<u8>> {
let input_biguint: BigUint = self.0.into();
let mut res = input_biguint.to_bytes_le();
res.resize(FR_BYTE_SIZE, 0);
Zeroizing::new(res)
}
pub(crate) fn to_bytes_be(&self) -> Zeroizing<Vec<u8>> {
let input_biguint: BigUint = self.0.into();
let mut res = input_biguint.to_bytes_be();
let to_insert_count = FR_BYTE_SIZE.saturating_sub(res.len());
if to_insert_count > 0 {
// Insert the missing leading zero bytes at index 0
res.splice(0..0, std::iter::repeat_n(0, to_insert_count));
}
Zeroizing::new(res)
}
/// Warning: this can leak the secret value
/// Warning: the leaked value is of type `U256`, which implements `Copy` (copies will not be zeroized)
pub(crate) fn to_u256(&self) -> U256 {
let mut big_int = self.0.into_bigint();
let res = U256::from_limbs(big_int.0);
big_int.zeroize();
res
}
}
impl From<&mut Fr> for IdSecret {
fn from(value: &mut Fr) -> Self {
let id_secret = Self(*value);
value.zeroize();
id_secret
}
}
impl Deref for IdSecret {
type Target = Fr;
/// Deref to &Fr
///
/// Warning: this can leak the secret value
/// Warning: the leaked value is of type `Fr`, which implements `Copy` (copies will not be zeroized)
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[derive(Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) enum FrOrSecret {
IdSecret(IdSecret),
Fr(Fr),
}
impl From<Fr> for FrOrSecret {
fn from(value: Fr) -> Self {
FrOrSecret::Fr(value)
}
}
impl From<IdSecret> for FrOrSecret {
fn from(value: IdSecret) -> Self {
FrOrSecret::IdSecret(value)
}
}
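The vector encodings above are length-prefixed: an 8-byte normalized usize length followed by the payload, with BE field elements zero-padded on the left and LE ones on the right. A short sketch of the resulting layout:

use rln::prelude::*;

let v = vec![Fr::from(1u64), Fr::from(2u64)];
let le = vec_fr_to_bytes_le(&v);
// 8-byte length prefix + 2 * FR_BYTE_SIZE bytes of elements
assert_eq!(le.len(), 8 + 2 * FR_BYTE_SIZE);
assert_eq!(le[0..8], normalize_usize_le(2));
let (decoded, read) = bytes_le_to_vec_fr(&le).unwrap();
assert_eq!((decoded, read), (v, le.len()));

// Zero padding sits at opposite ends in the two encodings
let one_le = fr_to_bytes_le(&Fr::from(1u64));
let one_be = fr_to_bytes_be(&Fr::from(1u64));
assert_eq!(one_le[0], 1); // value byte first, zeros after
assert_eq!(one_be[FR_BYTE_SIZE - 1], 1); // zeros first, value byte last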

File diff suppressed because it is too large

288
rln/tests/ffi_utils.rs Normal file

@@ -0,0 +1,288 @@
#[cfg(test)]
mod test {
use rand::Rng;
use rln::{ffi::ffi_utils::*, prelude::*};
#[test]
// Tests seeded keygen using FFI APIs
fn test_seeded_keygen_ffi() {
// We generate a new identity pair from an input seed
let seed_bytes: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let res = match ffi_seeded_key_gen(&seed_bytes.into()) {
CResult {
ok: Some(vec_cfr),
err: None,
} => vec_cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_seeded_key_gen call failed: {}", err),
_ => unreachable!(),
};
let identity_secret = res.first().unwrap();
let id_commitment = res.get(1).unwrap();
// We check against expected values
let expected_identity_secret_seed_bytes = str_to_fr(
"0x766ce6c7e7a01bdf5b3f257616f603918c30946fa23480f2859c597817e6716",
16,
)
.unwrap();
let expected_id_commitment_seed_bytes = str_to_fr(
"0xbf16d2b5c0d6f9d9d561e05bfca16a81b4b873bb063508fae360d8c74cef51f",
16,
)
.unwrap();
assert_eq!(*identity_secret, expected_identity_secret_seed_bytes);
assert_eq!(*id_commitment, expected_id_commitment_seed_bytes);
}
#[test]
// Tests seeded extended keygen using FFI APIs
fn test_seeded_extended_keygen_ffi() {
// We generate a new identity tuple from an input seed
let seed_bytes: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let key_gen = match ffi_seeded_extended_key_gen(&seed_bytes.into()) {
CResult {
ok: Some(vec_cfr),
err: None,
} => vec_cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_seeded_extended_key_gen call failed: {}", err),
_ => unreachable!(),
};
let identity_trapdoor = *key_gen[0];
let identity_nullifier = *key_gen[1];
let identity_secret = *key_gen[2];
let id_commitment = *key_gen[3];
// We check against expected values
let expected_identity_trapdoor_seed_bytes = str_to_fr(
"0x766ce6c7e7a01bdf5b3f257616f603918c30946fa23480f2859c597817e6716",
16,
)
.unwrap();
let expected_identity_nullifier_seed_bytes = str_to_fr(
"0x1f18714c7bc83b5bca9e89d404cf6f2f585bc4c0f7ed8b53742b7e2b298f50b4",
16,
)
.unwrap();
let expected_identity_secret_seed_bytes = str_to_fr(
"0x2aca62aaa7abaf3686fff2caf00f55ab9462dc12db5b5d4bcf3994e671f8e521",
16,
)
.unwrap();
let expected_id_commitment_seed_bytes = str_to_fr(
"0x68b66aa0a8320d2e56842581553285393188714c48f9b17acd198b4f1734c5c",
16,
)
.unwrap();
assert_eq!(identity_trapdoor, expected_identity_trapdoor_seed_bytes);
assert_eq!(identity_nullifier, expected_identity_nullifier_seed_bytes);
assert_eq!(identity_secret, expected_identity_secret_seed_bytes);
assert_eq!(id_commitment, expected_id_commitment_seed_bytes);
}
#[test]
// Test CFr FFI functions
fn test_cfr_ffi() {
let cfr_zero = ffi_cfr_zero();
let fr_zero = rln::circuit::Fr::from(0u8);
assert_eq!(*cfr_zero, fr_zero);
let cfr_one = ffi_cfr_one();
let fr_one = rln::circuit::Fr::from(1u8);
assert_eq!(*cfr_one, fr_one);
let cfr_int = ffi_uint_to_cfr(42);
let fr_int = rln::circuit::Fr::from(42u8);
assert_eq!(*cfr_int, fr_int);
let cfr_debug_str = ffi_cfr_debug(Some(&cfr_int));
assert_eq!(cfr_debug_str.to_string(), "42");
let key_gen = match ffi_key_gen() {
CResult {
ok: Some(vec_cfr),
err: None,
} => vec_cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_key_gen call failed: {}", err),
_ => unreachable!(),
};
let mut id_secret_fr = *key_gen[0];
let id_secret_hash = IdSecret::from(&mut id_secret_fr);
let id_commitment = *key_gen[1];
let cfr_id_secret_hash = ffi_vec_cfr_get(&key_gen, 0).unwrap();
assert_eq!(*cfr_id_secret_hash, *id_secret_hash);
let cfr_id_commitment = ffi_vec_cfr_get(&key_gen, 1).unwrap();
assert_eq!(*cfr_id_commitment, id_commitment);
}
#[test]
// Test Vec<u8> FFI functions
fn test_vec_u8_ffi() {
let mut rng = rand::thread_rng();
let signal_gen: [u8; 32] = rng.gen();
let signal: Vec<u8> = signal_gen.to_vec();
let bytes_le = ffi_vec_u8_to_bytes_le(&signal.clone().into());
let expected_le = vec_u8_to_bytes_le(&signal);
assert_eq!(bytes_le.iter().copied().collect::<Vec<_>>(), expected_le);
let bytes_be = ffi_vec_u8_to_bytes_be(&signal.clone().into());
let expected_be = vec_u8_to_bytes_be(&signal);
assert_eq!(bytes_be.iter().copied().collect::<Vec<_>>(), expected_be);
let signal_from_le = match ffi_bytes_le_to_vec_u8(&bytes_le) {
CResult {
ok: Some(vec_u8),
err: None,
} => vec_u8,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_bytes_le_to_vec_u8 call failed: {}", err),
_ => unreachable!(),
};
assert_eq!(signal_from_le.iter().copied().collect::<Vec<_>>(), signal);
let signal_from_be = match ffi_bytes_be_to_vec_u8(&bytes_be) {
CResult {
ok: Some(vec_u8),
err: None,
} => vec_u8,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_bytes_be_to_vec_u8 call failed: {}", err),
_ => unreachable!(),
};
assert_eq!(signal_from_be.iter().copied().collect::<Vec<_>>(), signal);
}
#[test]
// Test Vec<CFr> FFI functions
fn test_vec_cfr_ffi() {
let vec_fr = [Fr::from(1u8), Fr::from(2u8), Fr::from(3u8), Fr::from(4u8)];
let vec_cfr: Vec<CFr> = vec_fr.iter().map(|fr| CFr::from(*fr)).collect();
let bytes_le = ffi_vec_cfr_to_bytes_le(&vec_cfr.clone().into());
let expected_le = vec_fr_to_bytes_le(&vec_fr);
assert_eq!(bytes_le.iter().copied().collect::<Vec<_>>(), expected_le);
let bytes_be = ffi_vec_cfr_to_bytes_be(&vec_cfr.clone().into());
let expected_be = vec_fr_to_bytes_be(&vec_fr);
assert_eq!(bytes_be.iter().copied().collect::<Vec<_>>(), expected_be);
let vec_cfr_from_le = match ffi_bytes_le_to_vec_cfr(&bytes_le) {
CResult {
ok: Some(vec_cfr),
err: None,
} => vec_cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_bytes_le_to_vec_cfr call failed: {}", err),
_ => unreachable!(),
};
assert_eq!(vec_cfr_from_le.iter().copied().collect::<Vec<_>>(), vec_cfr);
let vec_cfr_from_be = match ffi_bytes_be_to_vec_cfr(&bytes_be) {
CResult {
ok: Some(vec_cfr),
err: None,
} => vec_cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_bytes_be_to_vec_cfr call failed: {}", err),
_ => unreachable!(),
};
assert_eq!(vec_cfr_from_be.iter().copied().collect::<Vec<_>>(), vec_cfr);
}
#[test]
// Tests hash to field using FFI APIs
fn test_hash_to_field_ffi() {
let mut rng = rand::thread_rng();
let signal_gen: [u8; 32] = rng.gen();
let signal: Vec<u8> = signal_gen.to_vec();
let cfr_le_1 = match ffi_hash_to_field_le(&signal.clone().into()) {
CResult {
ok: Some(cfr),
err: None,
} => cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_hash_to_field_le call failed: {}", err),
_ => unreachable!(),
};
let fr_le_2 = hash_to_field_le(&signal).unwrap();
assert_eq!(*cfr_le_1, fr_le_2);
let cfr_be_1 = match ffi_hash_to_field_be(&signal.clone().into()) {
CResult {
ok: Some(cfr),
err: None,
} => cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_hash_to_field_be call failed: {}", err),
_ => unreachable!(),
};
let fr_be_2 = hash_to_field_be(&signal).unwrap();
assert_eq!(*cfr_le_1, fr_be_2);
assert_eq!(*cfr_le_1, *cfr_be_1);
assert_eq!(fr_le_2, fr_be_2);
let hash_cfr_le_1 = ffi_cfr_to_bytes_le(&cfr_le_1)
.iter()
.copied()
.collect::<Vec<_>>();
let hash_fr_le_2 = fr_to_bytes_le(&fr_le_2);
assert_eq!(hash_cfr_le_1, hash_fr_le_2);
let hash_cfr_be_1 = ffi_cfr_to_bytes_be(&cfr_be_1)
.iter()
.copied()
.collect::<Vec<_>>();
let hash_fr_be_2 = fr_to_bytes_be(&fr_be_2);
assert_eq!(hash_cfr_be_1, hash_fr_be_2);
assert_ne!(hash_cfr_le_1, hash_cfr_be_1);
assert_ne!(hash_fr_le_2, hash_fr_be_2);
}
#[test]
// Test Poseidon hash FFI
fn test_poseidon_hash_pair_ffi() {
let input_1 = Fr::from(42u8);
let input_2 = Fr::from(99u8);
let expected_hash = poseidon_hash(&[input_1, input_2]).unwrap();
let received_hash_cfr =
match ffi_poseidon_hash_pair(&CFr::from(input_1), &CFr::from(input_2)) {
CResult {
ok: Some(cfr),
err: None,
} => cfr,
CResult {
ok: None,
err: Some(err),
} => panic!("ffi_poseidon_hash_pair call failed: {}", err),
_ => unreachable!(),
};
assert_eq!(*received_hash_cfr, expected_hash);
}
}
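The match on CResult repeated throughout these tests could be factored into one helper; a sketch, assuming a generic CResult<T, E> with the public ok/err fields destructured above (the concrete FFI type may differ):

fn unwrap_cresult<T, E: std::fmt::Display>(res: CResult<T, E>, what: &str) -> T {
    match res {
        CResult { ok: Some(val), err: None } => val,
        CResult { ok: None, err: Some(err) } => panic!("{what} call failed: {err}"),
        _ => unreachable!("CResult carries exactly one of ok/err"),
    }
}

// e.g. let key_gen = unwrap_cresult(ffi_key_gen(), "ffi_key_gen");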


@@ -1,30 +1,35 @@
////////////////////////////////////////////////////////////
/// Tests
////////////////////////////////////////////////////////////
// Tests
#![cfg(not(feature = "stateless"))]
#[cfg(test)]
mod test {
use rln::hashers::{poseidon_hash, PoseidonHash};
use rln::{circuit::*, poseidon_tree::PoseidonTree};
use utils::{FullMerkleTree, OptimalMerkleTree, ZerokitMerkleProof, ZerokitMerkleTree};
use rln::prelude::*;
use zerokit_utils::merkle_tree::{
FullMerkleTree, OptimalMerkleTree, ZerokitMerkleProof, ZerokitMerkleTree,
};
#[test]
// The test is checked correctness for `FullMerkleTree` and `OptimalMerkleTree` with Poseidon hash
// The test checks correctness of `FullMerkleTree` and `OptimalMerkleTree` with Poseidon hash
fn test_zerokit_merkle_implementations() {
let sample_size = 100;
let leaves: Vec<Fr> = (0..sample_size).map(|s| Fr::from(s)).collect();
let leaves: Vec<Fr> = (0..sample_size).map(Fr::from).collect();
let mut tree_full = FullMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
let mut tree_opt = OptimalMerkleTree::<PoseidonHash>::default(TEST_TREE_HEIGHT).unwrap();
let mut tree_full = FullMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
let mut tree_opt = OptimalMerkleTree::<PoseidonHash>::default(DEFAULT_TREE_DEPTH).unwrap();
for i in 0..sample_size.try_into().unwrap() {
tree_full.set(i, leaves[i]).unwrap();
let proof = tree_full.proof(i).expect("index should be set");
for (i, leave) in leaves
.into_iter()
.enumerate()
.take(sample_size.try_into().unwrap())
{
tree_full.set(i, leave).unwrap();
let proof = tree_full.proof(i).unwrap();
assert_eq!(proof.leaf_index(), i);
tree_opt.set(i, leaves[i]).unwrap();
tree_opt.set(i, leave).unwrap();
assert_eq!(tree_opt.root(), tree_full.root());
let proof = tree_opt.proof(i).expect("index should be set");
let proof = tree_opt.proof(i).unwrap();
assert_eq!(proof.leaf_index(), i);
}
@@ -65,7 +70,7 @@ mod test {
let prev_r = tree.get_subtree_root(n, idx_r).unwrap();
let subroot = tree.get_subtree_root(n - 1, idx_sr).unwrap();
assert_eq!(poseidon_hash(&[prev_l, prev_r]), subroot);
assert_eq!(poseidon_hash(&[prev_l, prev_r]).unwrap(), subroot);
}
}
}
@@ -98,7 +103,7 @@ mod test {
// check remove_indices_and_set_leaves inside override_range function
assert!(tree.get_empty_leaves_indices().is_empty());
let leaves_2: Vec<Fr> = (0..2).map(|s| Fr::from(s as i32)).collect();
let leaves_2: Vec<Fr> = (0..2).map(Fr::from).collect();
tree.override_range(0, leaves_2.clone().into_iter(), [0, 1, 2, 3].into_iter())
.unwrap();
assert_eq!(tree.get_empty_leaves_indices(), vec![2, 3]);
@@ -113,7 +118,7 @@ mod test {
.unwrap();
assert_eq!(tree.get_empty_leaves_indices(), vec![2, 3]);
let leaves_4: Vec<Fr> = (0..4).map(|s| Fr::from(s as i32)).collect();
let leaves_4: Vec<Fr> = (0..4).map(Fr::from).collect();
// check if the indexes for write and delete are the same
tree.override_range(0, leaves_4.clone().into_iter(), [0, 1, 2, 3].into_iter())
.unwrap();


@@ -1,13 +1,10 @@
#![cfg(not(feature = "stateless"))]
#[cfg(test)]
mod test {
use ark_ff::BigInt;
use rln::circuit::{graph_from_folder, zkey_from_folder};
use rln::circuit::{Fr, TEST_TREE_HEIGHT};
use rln::hashers::{hash_to_field, poseidon_hash};
use rln::poseidon_tree::PoseidonTree;
use rln::protocol::*;
use rln::utils::str_to_fr;
use utils::{ZerokitMerkleProof, ZerokitMerkleTree};
use rln::prelude::*;
use zerokit_utils::merkle_tree::{ZerokitMerkleProof, ZerokitMerkleTree};
type ConfigOf<T> = <T as ZerokitMerkleTree>::Config;
@@ -17,19 +14,19 @@ mod test {
let leaf_index = 3;
// generate identity
let identity_secret_hash = hash_to_field(b"test-merkle-proof");
let id_commitment = poseidon_hash(&[identity_secret_hash]);
let rate_commitment = poseidon_hash(&[id_commitment, 100.into()]);
let identity_secret = hash_to_field_le(b"test-merkle-proof").unwrap();
let id_commitment = poseidon_hash(&[identity_secret]).unwrap();
let rate_commitment = poseidon_hash(&[id_commitment, 100.into()]).unwrap();
// generate merkle tree
let default_leaf = Fr::from(0);
let mut tree = PoseidonTree::new(
TEST_TREE_HEIGHT,
DEFAULT_TREE_DEPTH,
default_leaf,
ConfigOf::<PoseidonTree>::default(),
)
.unwrap();
tree.set(leaf_index, rate_commitment.into()).unwrap();
tree.set(leaf_index, rate_commitment).unwrap();
// We check correct computation of the root
let root = tree.root();
@@ -45,7 +42,7 @@ mod test {
.into()
);
let merkle_proof = tree.proof(leaf_index).expect("proof should exist");
let merkle_proof = tree.proof(leaf_index).unwrap();
let path_elements = merkle_proof.get_path_elements();
let identity_path_index = merkle_proof.get_path_index();
@@ -72,7 +69,7 @@ mod test {
"0x0f57c5571e9a4eab49e2c8cf050dae948aef6ead647392273546249d1c1ff10f",
"0x1830ee67b5fb554ad5f63d4388800e1cfe78e310697d46e43c9ce36134f72cca",
]
.map(|e| str_to_fr(e, 16).unwrap())
.map(|str| str_to_fr(str, 16).unwrap())
.to_vec();
let expected_identity_path_index: Vec<u8> =
@@ -88,105 +85,77 @@ mod test {
fn get_test_witness() -> RLNWitnessInput {
let leaf_index = 3;
// Generate identity pair
let (identity_secret_hash, id_commitment) = keygen();
let (identity_secret, id_commitment) = keygen().unwrap();
let user_message_limit = Fr::from(100);
let rate_commitment = poseidon_hash(&[id_commitment, user_message_limit]);
let rate_commitment = poseidon_hash(&[id_commitment, user_message_limit]).unwrap();
//// generate merkle tree
let default_leaf = Fr::from(0);
let mut tree = PoseidonTree::new(
TEST_TREE_HEIGHT,
DEFAULT_TREE_DEPTH,
default_leaf,
ConfigOf::<PoseidonTree>::default(),
)
.unwrap();
tree.set(leaf_index, rate_commitment.into()).unwrap();
tree.set(leaf_index, rate_commitment).unwrap();
let merkle_proof = tree.proof(leaf_index).expect("proof should exist");
let merkle_proof = tree.proof(leaf_index).unwrap();
let signal = b"hey hey";
let x = hash_to_field(signal);
let x = hash_to_field_le(signal).unwrap();
// We set the remaining values to random ones
let epoch = hash_to_field(b"test-epoch");
let rln_identifier = hash_to_field(b"test-rln-identifier");
let external_nullifier = poseidon_hash(&[epoch, rln_identifier]);
let epoch = hash_to_field_le(b"test-epoch").unwrap();
let rln_identifier = hash_to_field_le(b"test-rln-identifier").unwrap();
let external_nullifier = poseidon_hash(&[epoch, rln_identifier]).unwrap();
rln_witness_from_values(
identity_secret_hash,
&merkle_proof,
let message_id = Fr::from(1);
RLNWitnessInput::new(
identity_secret,
user_message_limit,
message_id,
merkle_proof.get_path_elements(),
merkle_proof.get_path_index(),
x,
external_nullifier,
user_message_limit,
Fr::from(1),
)
.unwrap()
}
#[test]
// We test a RLN proof generation and verification
fn test_witness_from_json() {
// We generate all relevant keys
let proving_key = zkey_from_folder();
let verification_key = &proving_key.0.vk;
let graph_data = graph_from_folder();
// We compute witness from the json input
let rln_witness = get_test_witness();
let rln_witness_json = rln_witness_to_json(&rln_witness).unwrap();
let rln_witness_deser = rln_witness_from_json(rln_witness_json).unwrap();
assert_eq!(rln_witness_deser, rln_witness);
// Let's generate a zkSNARK proof
let proof = generate_proof(&proving_key, &rln_witness_deser, &graph_data).unwrap();
let proof_values = proof_values_from_witness(&rln_witness_deser).unwrap();
// Let's verify the proof
let verified = verify_proof(&verification_key, &proof, &proof_values);
assert!(verified.unwrap());
}
#[test]
// We test a RLN proof generation and verification
fn test_end_to_end() {
let rln_witness = get_test_witness();
let rln_witness_json = rln_witness_to_json(&rln_witness).unwrap();
let rln_witness_deser = rln_witness_from_json(rln_witness_json).unwrap();
assert_eq!(rln_witness_deser, rln_witness);
let witness = get_test_witness();
// We generate all relevant keys
let proving_key = zkey_from_folder();
let verification_key = &proving_key.0.vk;
let graph_data = graph_from_folder();
// Let's generate a zkSNARK proof
let proof = generate_proof(&proving_key, &rln_witness_deser, &graph_data).unwrap();
let proof = generate_zk_proof(proving_key, &witness, graph_data).unwrap();
let proof_values = proof_values_from_witness(&rln_witness_deser).unwrap();
let proof_values = proof_values_from_witness(&witness).unwrap();
// Let's verify the proof
let success = verify_proof(&verification_key, &proof, &proof_values).unwrap();
let success = verify_zk_proof(&proving_key.0.vk, &proof, &proof_values).unwrap();
assert!(success);
}
#[test]
fn test_witness_serialization() {
// We test witness JSON serialization
let rln_witness = get_test_witness();
let rln_witness_json = rln_witness_to_json(&rln_witness).unwrap();
let rln_witness_deser = rln_witness_from_json(rln_witness_json).unwrap();
assert_eq!(rln_witness_deser, rln_witness);
let witness = get_test_witness();
// We test witness serialization
let ser = serialize_witness(&rln_witness).unwrap();
let (deser, _) = deserialize_witness(&ser).unwrap();
assert_eq!(rln_witness, deser);
let ser = rln_witness_to_bytes_le(&witness).unwrap();
let (deser, _) = bytes_le_to_rln_witness(&ser).unwrap();
assert_eq!(witness, deser);
// We test Proof values serialization
let proof_values = proof_values_from_witness(&rln_witness).unwrap();
let ser = serialize_proof_values(&proof_values);
let (deser, _) = deserialize_proof_values(&ser);
let proof_values = proof_values_from_witness(&witness).unwrap();
let ser = rln_proof_values_to_bytes_le(&proof_values);
let (deser, _) = bytes_le_to_rln_proof_values(&ser).unwrap();
assert_eq!(proof_values, deser);
}
@@ -196,10 +165,10 @@ mod test {
fn test_seeded_keygen() {
// Generate identity pair using a seed phrase
let seed_phrase: &str = "A seed phrase example";
let (identity_secret_hash, id_commitment) = seeded_keygen(seed_phrase.as_bytes());
let (identity_secret, id_commitment) = seeded_keygen(seed_phrase.as_bytes()).unwrap();
// We check against expected values
let expected_identity_secret_hash_seed_phrase = str_to_fr(
let expected_identity_secret_seed_phrase = str_to_fr(
"0x20df38f3f00496f19fe7c6535492543b21798ed7cb91aebe4af8012db884eda3",
16,
)
@@ -210,18 +179,15 @@ mod test {
)
.unwrap();
assert_eq!(
identity_secret_hash,
expected_identity_secret_hash_seed_phrase
);
assert_eq!(identity_secret, expected_identity_secret_seed_phrase);
assert_eq!(id_commitment, expected_id_commitment_seed_phrase);
// Generate identity pair using a byte array
let seed_bytes: &[u8] = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
let (identity_secret_hash, id_commitment) = seeded_keygen(seed_bytes);
let (identity_secret, id_commitment) = seeded_keygen(seed_bytes).unwrap();
// We check against expected values
let expected_identity_secret_hash_seed_bytes = str_to_fr(
let expected_identity_secret_seed_bytes = str_to_fr(
"0x766ce6c7e7a01bdf5b3f257616f603918c30946fa23480f2859c597817e6716",
16,
)
@@ -232,19 +198,13 @@ mod test {
)
.unwrap();
assert_eq!(
identity_secret_hash,
expected_identity_secret_hash_seed_bytes
);
assert_eq!(identity_secret, expected_identity_secret_seed_bytes);
assert_eq!(id_commitment, expected_id_commitment_seed_bytes);
// We check again if the identity pair generated with the same seed phrase corresponds to the previously generated one
let (identity_secret_hash, id_commitment) = seeded_keygen(seed_phrase.as_bytes());
let (identity_secret, id_commitment) = seeded_keygen(seed_phrase.as_bytes()).unwrap();
assert_eq!(
identity_secret_hash,
expected_identity_secret_hash_seed_phrase
);
assert_eq!(identity_secret, expected_identity_secret_seed_phrase);
assert_eq!(id_commitment, expected_id_commitment_seed_phrase);
}
}
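compute_tree_root from witness.rs reproduces the root embedded in the proof values directly from the witness; a sketch reusing get_test_witness from this module:

let witness = get_test_witness();
let root = compute_tree_root(
    witness.identity_secret(),
    witness.user_message_limit(),
    witness.path_elements(),
    witness.identity_path_index(),
)
.unwrap();
assert_eq!(root, proof_values_from_witness(&witness).unwrap().root);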

File diff suppressed because it is too large

424
rln/tests/utils.rs Normal file

@@ -0,0 +1,424 @@
#[cfg(test)]
mod test {
use ark_std::{rand::thread_rng, UniformRand};
use rln::prelude::*;
#[test]
fn test_normalize_usize_le() {
// Test basic cases
assert_eq!(normalize_usize_le(0), [0, 0, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_le(1), [1, 0, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_le(255), [255, 0, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_le(256), [0, 1, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_le(65535), [255, 255, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_le(65536), [0, 0, 1, 0, 0, 0, 0, 0]);
// Test 32-bit boundary
assert_eq!(
normalize_usize_le(4294967295),
[255, 255, 255, 255, 0, 0, 0, 0]
);
assert_eq!(normalize_usize_le(4294967296), [0, 0, 0, 0, 1, 0, 0, 0]);
// Test maximum value
assert_eq!(
normalize_usize_le(usize::MAX),
[255, 255, 255, 255, 255, 255, 255, 255]
);
// Test that result is always 8 bytes
assert_eq!(normalize_usize_le(0).len(), 8);
assert_eq!(normalize_usize_le(usize::MAX).len(), 8);
}
#[test]
fn test_normalize_usize_be() {
// Test basic cases
assert_eq!(normalize_usize_be(0), [0, 0, 0, 0, 0, 0, 0, 0]);
assert_eq!(normalize_usize_be(1), [0, 0, 0, 0, 0, 0, 0, 1]);
assert_eq!(normalize_usize_be(255), [0, 0, 0, 0, 0, 0, 0, 255]);
assert_eq!(normalize_usize_be(256), [0, 0, 0, 0, 0, 0, 1, 0]);
assert_eq!(normalize_usize_be(65535), [0, 0, 0, 0, 0, 0, 255, 255]);
assert_eq!(normalize_usize_be(65536), [0, 0, 0, 0, 0, 1, 0, 0]);
// Test 32-bit boundary
assert_eq!(
normalize_usize_be(4294967295),
[0, 0, 0, 0, 255, 255, 255, 255]
);
assert_eq!(normalize_usize_be(4294967296), [0, 0, 0, 1, 0, 0, 0, 0]);
// Test maximum value
assert_eq!(
normalize_usize_be(usize::MAX),
[255, 255, 255, 255, 255, 255, 255, 255]
);
// Test that result is always 8 bytes
assert_eq!(normalize_usize_be(0).len(), 8);
assert_eq!(normalize_usize_be(usize::MAX).len(), 8);
}
#[test]
fn test_normalize_usize_endianness() {
// Test that little-endian and big-endian produce different results for non-zero values
let test_values = vec![1, 255, 256, 65535, 65536, 4294967295, 4294967296];
for &value in &test_values {
let le_result = normalize_usize_le(value);
let be_result = normalize_usize_be(value);
// For non-zero values, LE and BE should be different
assert_ne!(
le_result, be_result,
"LE and BE should differ for value {value}"
);
// Both should be 8 bytes
assert_eq!(le_result.len(), 8);
assert_eq!(be_result.len(), 8);
}
// Zero should be the same in both endianness
assert_eq!(normalize_usize_le(0), normalize_usize_be(0));
}
#[test]
fn test_normalize_usize_roundtrip() {
// Test that we can reconstruct the original value from the normalized bytes
let test_values = vec![
0,
1,
255,
256,
65535,
65536,
4294967295,
4294967296,
usize::MAX,
];
for &value in &test_values {
let le_bytes = normalize_usize_le(value);
let be_bytes = normalize_usize_be(value);
// Reconstruct from little-endian bytes
let reconstructed_le = usize::from_le_bytes(le_bytes);
assert_eq!(
reconstructed_le, value,
"LE roundtrip failed for value {value}"
);
// Reconstruct from big-endian bytes
let reconstructed_be = usize::from_be_bytes(be_bytes);
assert_eq!(
reconstructed_be, value,
"BE roundtrip failed for value {value}"
);
}
}
#[test]
fn test_normalize_usize_edge_cases() {
// Test edge cases and boundary values
let edge_cases = vec![
0,
1,
255,
256,
65535,
65536,
16777215, // 2^24 - 1
16777216, // 2^24
4294967295, // 2^32 - 1
4294967296, // 2^32
1099511627775, // 2^40 - 1
1099511627776, // 2^40
281474976710655, // 2^48 - 1
281474976710656, // 2^48
72057594037927935, // 2^56 - 1
72057594037927936, // 2^56
usize::MAX,
];
for &value in &edge_cases {
let le_result = normalize_usize_le(value);
let be_result = normalize_usize_be(value);
// Both should be 8 bytes
assert_eq!(le_result.len(), 8);
assert_eq!(be_result.len(), 8);
// Roundtrip should work
assert_eq!(usize::from_le_bytes(le_result), value);
assert_eq!(usize::from_be_bytes(be_result), value);
}
}
#[test]
fn test_normalize_usize_architecture_independence() {
// Test that the functions work consistently regardless of the underlying architecture
// This test ensures that the functions provide consistent 8-byte output
// even on 32-bit systems where usize might be 4 bytes
let test_values = vec![0, 1, 255, 256, 65535, 65536, 4294967295, 4294967296];
for &value in &test_values {
let le_result = normalize_usize_le(value);
let be_result = normalize_usize_be(value);
// Always 8 bytes regardless of architecture
assert_eq!(le_result.len(), 8);
assert_eq!(be_result.len(), 8);
// The result should be consistent with the original value
assert_eq!(usize::from_le_bytes(le_result), value);
assert_eq!(usize::from_be_bytes(be_result), value);
}
}
#[test]
fn test_fr_serialization_roundtrip() {
let mut rng = thread_rng();
// Test multiple random Fr values
for _ in 0..10 {
let fr = Fr::rand(&mut rng);
// Test little-endian roundtrip
let le_bytes = fr_to_bytes_le(&fr);
let (reconstructed_le, _) = bytes_le_to_fr(&le_bytes).unwrap();
assert_eq!(fr, reconstructed_le);
// Test big-endian roundtrip
let be_bytes = fr_to_bytes_be(&fr);
let (reconstructed_be, _) = bytes_be_to_fr(&be_bytes).unwrap();
assert_eq!(fr, reconstructed_be);
}
}
#[test]
fn test_vec_fr_serialization_roundtrip() {
let mut rng = thread_rng();
// Test with different vector sizes
for size in [0, 1, 5, 10] {
let fr_vec: Vec<Fr> = (0..size).map(|_| Fr::rand(&mut rng)).collect();
// Test little-endian roundtrip
let le_bytes = vec_fr_to_bytes_le(&fr_vec);
let (reconstructed_le, _) = bytes_le_to_vec_fr(&le_bytes).unwrap();
assert_eq!(fr_vec, reconstructed_le);
// Test big-endian roundtrip
let be_bytes = vec_fr_to_bytes_be(&fr_vec);
let (reconstructed_be, _) = bytes_be_to_vec_fr(&be_bytes).unwrap();
assert_eq!(fr_vec, reconstructed_be);
}
}
#[test]
fn test_vec_u8_serialization_roundtrip() {
// Test with different vector sizes and content
let test_cases = vec![
vec![],
vec![0],
vec![255],
vec![1, 2, 3, 4, 5],
vec![0, 255, 128, 64, 32, 16, 8, 4, 2, 1],
(0..100).collect::<Vec<u8>>(),
];
for test_case in test_cases {
// Test little-endian roundtrip
let le_bytes = vec_u8_to_bytes_le(&test_case);
let (reconstructed_le, _) = bytes_le_to_vec_u8(&le_bytes).unwrap();
assert_eq!(test_case, reconstructed_le);
// Test big-endian roundtrip
let be_bytes = vec_u8_to_bytes_be(&test_case);
let (reconstructed_be, _) = bytes_be_to_vec_u8(&be_bytes).unwrap();
assert_eq!(test_case, reconstructed_be);
}
}
#[test]
fn test_vec_usize_serialization_roundtrip() {
// Test with different vector sizes and content
let test_cases = vec![
vec![],
vec![0],
vec![usize::MAX],
vec![1, 2, 3, 4, 5],
vec![0, 255, 65535, 4294967295, usize::MAX],
(0..10).collect::<Vec<usize>>(),
];
for test_case in test_cases {
// Test little-endian roundtrip
let le_bytes = {
let mut bytes = Vec::new();
bytes.extend_from_slice(&normalize_usize_le(test_case.len()));
for &value in &test_case {
bytes.extend_from_slice(&normalize_usize_le(value));
}
bytes
};
let reconstructed_le = bytes_le_to_vec_usize(&le_bytes).unwrap();
assert_eq!(test_case, reconstructed_le);
// Test big-endian roundtrip
let be_bytes = {
let mut bytes = Vec::new();
bytes.extend_from_slice(&normalize_usize_be(test_case.len()));
for &value in &test_case {
bytes.extend_from_slice(&normalize_usize_be(value));
}
bytes
};
let reconstructed_be = bytes_be_to_vec_usize(&be_bytes).unwrap();
assert_eq!(test_case, reconstructed_be);
}
}
#[test]
fn test_str_to_fr() {
// Test valid hex strings
let test_cases = vec![
("0x0", 16, Fr::from(0u64)),
("0x1", 16, Fr::from(1u64)),
("0xff", 16, Fr::from(255u64)),
("0x100", 16, Fr::from(256u64)),
];
for (input, radix, expected) in test_cases {
let result = str_to_fr(input, radix).unwrap();
assert_eq!(result, expected);
}
// Test invalid inputs
assert!(str_to_fr("invalid", 16).is_err());
assert!(str_to_fr("0x", 16).is_err());
}
#[test]
fn test_endianness_differences() {
let mut rng = thread_rng();
let fr = Fr::rand(&mut rng);
// Test that LE and BE produce different byte representations
let le_bytes = fr_to_bytes_le(&fr);
let be_bytes = fr_to_bytes_be(&fr);
// They should be different (unless the value is symmetric)
if le_bytes != be_bytes {
// Verify they can both be reconstructed correctly
let (reconstructed_le, _) = bytes_le_to_fr(&le_bytes).unwrap();
let (reconstructed_be, _) = bytes_be_to_fr(&be_bytes).unwrap();
assert_eq!(fr, reconstructed_le);
assert_eq!(fr, reconstructed_be);
}
}
#[test]
fn test_error_handling() {
// Test bytes_le_to_fr and bytes_be_to_fr with insufficient data
let short_bytes = vec![0u8; 10]; // Less than FR_BYTE_SIZE (32 bytes)
assert!(bytes_le_to_fr(&short_bytes).is_err());
assert!(bytes_be_to_fr(&short_bytes).is_err());
// Test with empty bytes
let empty_bytes = vec![];
assert!(bytes_le_to_fr(&empty_bytes).is_err());
assert!(bytes_be_to_fr(&empty_bytes).is_err());
// Test with exact size - should succeed
let exact_bytes = vec![0u8; FR_BYTE_SIZE];
assert!(bytes_le_to_fr(&exact_bytes).is_ok());
assert!(bytes_be_to_fr(&exact_bytes).is_ok());
// Test with more than enough data - should succeed
let extra_bytes = vec![0u8; FR_BYTE_SIZE + 10];
assert!(bytes_le_to_fr(&extra_bytes).is_ok());
assert!(bytes_be_to_fr(&extra_bytes).is_ok());
// Test with a zero length prefix and no element data - should succeed with empty vectors
let valid_length_invalid_data = vec![0u8; 8]; // Length prefix 0, no element data
assert!(bytes_le_to_vec_u8(&valid_length_invalid_data).is_ok());
assert!(bytes_be_to_vec_u8(&valid_length_invalid_data).is_ok());
assert!(bytes_le_to_vec_fr(&valid_length_invalid_data).is_ok());
assert!(bytes_be_to_vec_fr(&valid_length_invalid_data).is_ok());
assert!(bytes_le_to_vec_usize(&valid_length_invalid_data).is_ok());
assert!(bytes_be_to_vec_usize(&valid_length_invalid_data).is_ok());
// Test with a nonzero length prefix but insufficient data for the vector elements
let reasonable_length = {
let mut bytes = vec![0u8; 8];
bytes[0] = 1; // Length 1 in LE (an enormous length in BE); no element data follows
bytes
};
// These should fail because the buffer lacks the declared element data
assert!(bytes_le_to_vec_u8(&reasonable_length).is_err());
assert!(bytes_be_to_vec_u8(&reasonable_length).is_err());
assert!(bytes_le_to_vec_fr(&reasonable_length).is_err());
assert!(bytes_be_to_vec_fr(&reasonable_length).is_err());
assert!(bytes_le_to_vec_usize(&reasonable_length).is_err());
assert!(bytes_be_to_vec_usize(&reasonable_length).is_err());
// Test with valid data for u8 vector
let valid_u8_data_le = {
let mut bytes = vec![0u8; 9];
bytes[..8].copy_from_slice(&(1u64.to_le_bytes())); // Length 1, little-endian
bytes[8] = 42; // One byte of data
bytes
};
let valid_u8_data_be = {
let mut bytes = vec![0u8; 9];
bytes[..8].copy_from_slice(&(1u64.to_be_bytes())); // Length 1, big-endian
bytes[8] = 42; // One byte of data
bytes
};
assert!(bytes_le_to_vec_u8(&valid_u8_data_le).is_ok());
assert!(bytes_be_to_vec_u8(&valid_u8_data_be).is_ok());
}
#[test]
fn test_empty_vectors() {
// Test empty vector serialization/deserialization
let empty_fr: Vec<Fr> = vec![];
let empty_u8: Vec<u8> = vec![];
let empty_usize: Vec<usize> = vec![];
// Test Fr vectors
let le_fr_bytes = vec_fr_to_bytes_le(&empty_fr);
let be_fr_bytes = vec_fr_to_bytes_be(&empty_fr);
let (reconstructed_le_fr, _) = bytes_le_to_vec_fr(&le_fr_bytes).unwrap();
let (reconstructed_be_fr, _) = bytes_be_to_vec_fr(&be_fr_bytes).unwrap();
assert_eq!(empty_fr, reconstructed_le_fr);
assert_eq!(empty_fr, reconstructed_be_fr);
// Test u8 vectors
let le_u8_bytes = vec_u8_to_bytes_le(&empty_u8);
let be_u8_bytes = vec_u8_to_bytes_be(&empty_u8);
let (reconstructed_le_u8, _) = bytes_le_to_vec_u8(&le_u8_bytes).unwrap();
let (reconstructed_be_u8, _) = bytes_be_to_vec_u8(&be_u8_bytes).unwrap();
assert_eq!(empty_u8, reconstructed_le_u8);
assert_eq!(empty_u8, reconstructed_be_u8);
// Test usize vectors
let le_usize_bytes = {
let mut bytes = Vec::new();
bytes.extend_from_slice(&normalize_usize_le(0));
bytes
};
let be_usize_bytes = {
let mut bytes = Vec::new();
bytes.extend_from_slice(&normalize_usize_be(0));
bytes
};
let reconstructed_le_usize = bytes_le_to_vec_usize(&le_usize_bytes).unwrap();
let reconstructed_be_usize = bytes_be_to_vec_usize(&be_usize_bytes).unwrap();
assert_eq!(empty_usize, reconstructed_le_usize);
assert_eq!(empty_usize, reconstructed_be_usize);
}
}
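The length-prefixed layout exercised above implies a decoder of roughly this shape - a minimal sketch assuming an 8-byte LE length prefix followed by one 8-byte word per element (illustrative only, not the crate's actual `bytes_le_to_vec_usize`):

```rust
// Sketch of a decoder for the wire format used in the tests above:
// [8-byte LE length][8 bytes per usize element]...
fn decode_vec_usize_le(bytes: &[u8]) -> Option<Vec<usize>> {
    let (len_bytes, mut rest) = bytes.split_at_checked(8)?;
    let len = u64::from_le_bytes(len_bytes.try_into().ok()?) as usize;
    let mut out = Vec::with_capacity(len.min(1024)); // cap pre-allocation defensively
    for _ in 0..len {
        let (word, tail) = rest.split_at_checked(8)?;
        out.push(u64::from_le_bytes(word.try_into().ok()?) as usize);
        rest = tail;
    }
    Some(out)
}
```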

rustfmt.toml Normal file

@@ -0,0 +1,6 @@
# Run cargo +nightly fmt to format with this configuration
edition = "2021" # use Rust 2021 edition
unstable_features = true # needed for group_imports
reorder_imports = true # sort imports alphabetically
imports_granularity = "Crate" # keep items from the same crate grouped together
group_imports = "StdExternalCrate" # group std, external, and local imports separately
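For illustration, with `group_imports = "StdExternalCrate"` and `imports_granularity = "Crate"`, rustfmt arranges imports into three blocks - standard library, external crates, then local items - and merges items from the same crate into a single `use`:

```rust
use std::{fmt::Display, str::FromStr};

use ark_bn254::Fr;
use criterion::{criterion_group, criterion_main, Criterion};

use crate::merkle_tree::{FullMerkleTree, Hasher};
```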

utils/Cargo.toml

@@ -1,6 +1,6 @@
[package]
name = "zerokit_utils"
version = "0.5.2"
version = "1.0.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Various utilities for Zerokit"
@@ -12,28 +12,25 @@ repository = "https://github.com/vacp2p/zerokit"
bench = false
[dependencies]
ark-ff = { version = "0.5.0", default-features = false, features = [
"parallel",
] }
num-bigint = { version = "0.4.6", default-features = false, features = [
"rand",
] }
color-eyre = "0.6.3"
pmtree = { package = "vacp2p_pmtree", version = "2.0.2", optional = true }
ark-ff = { version = "0.5.0", default-features = false }
num-bigint = { version = "0.4.6", default-features = false }
pmtree = { package = "vacp2p_pmtree", version = "2.0.3", optional = true }
sled = "0.34.7"
serde = "1.0"
lazy_static = "1.5.0"
hex = "0.4"
serde_json = "1.0.145"
rayon = "1.11.0"
thiserror = "2.0"
[dev-dependencies]
hex = "0.4.3"
hex-literal = "1.1.0"
ark-bn254 = { version = "0.5.0", features = ["std"] }
num-traits = "0.2.19"
hex-literal = "1.0.0"
tiny-keccak = { version = "2.0.2", features = ["keccak"] }
criterion = { version = "0.4.0", features = ["html_reports"] }
criterion = { version = "0.8.0", features = ["html_reports"] }
[features]
default = []
parallel = ["ark-ff/parallel"]
pmtree-ft = ["pmtree"]
[[bench]]
@@ -43,3 +40,6 @@ harness = false
[[bench]]
name = "poseidon_benchmark"
harness = false
[package.metadata.docs.rs]
all-features = true
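With the new `[features]` layout, parallelism is opt-in (`default = []`, `parallel = ["ark-ff/parallel"]`), so a downstream crate that wants the parallel `ark-ff` backend would enable the feature explicitly, e.g.:

```toml
[dependencies]
zerokit_utils = { version = "1.0.0", features = ["parallel"] }
```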

utils/Makefile.toml

@@ -4,7 +4,7 @@ args = ["build", "--release"]
[tasks.test]
command = "cargo"
args = ["test", "--release"]
args = ["test", "--release", "--", "--nocapture"]
[tasks.bench]
command = "cargo"

utils/README.md

@@ -1,39 +1,45 @@
# Zerokit Utils Crate
[![Crates.io](https://img.shields.io/crates/v/zerokit_utils.svg)](https://crates.io/crates/zerokit_utils)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
Cryptographic primitives for zero-knowledge applications, featuring efficient Merkle tree implementations and a Poseidon hash function.
**Zerokit Utils** provides essential cryptographic primitives optimized for zero-knowledge applications.
This crate features efficient Merkle tree implementations and a Poseidon hash function,
designed to be robust and performant.
## Overview
This crate provides core cryptographic components optimized for zero-knowledge proof systems:
1. Multiple Merkle tree implementations with different space/time tradeoffs
2. A Poseidon hash implementation
- **Multiple Merkle Trees**: Various implementations optimized for different space/time trade-offs.
- **Poseidon Hash Function**: An efficient hashing algorithm suitable for ZK contexts, with customizable parameters.
- **Parallel Performance**: Leverages Rayon for significant speed-ups in Merkle tree computations.
- **Arkworks Compatibility**: Poseidon hash implementation is designed to work seamlessly
with Arkworks field traits and data structures.
## Merkle Tree Implementations
The crate supports two interchangeable Merkle tree implementations:
Merkle trees are fundamental data structures for verifying data integrity and set membership.
Zerokit Utils offers two interchangeable implementations:
- **FullMerkleTree**
- Stores each tree node in memory
- **OptimalMerkleTree**
- Only stores nodes used to prove accumulation of set leaves
### Understanding Merkle Tree Terminology
### Implementation notes
To better understand the structure and parameters of our Merkle trees, here's a quick glossary:
Glossary:
- **Depth (`depth`)**: the level of the leaves, counting from the root.
If the root is at level 0, the leaves are at level `depth`.
- **Number of Levels**: `depth + 1`.
- **Capacity (Number of Leaves)**: $2^{\text{depth}}$. This is the maximum number of leaves the tree can hold.
- **Total Number of Nodes**: $2^{(\text{depth} + 1)} - 1$ for a full binary tree.
* depth: the level of the leaves if we count levels from 0
* number of levels: depth + 1
* capacity (== number of leaves): 1 << depth
* total number of nodes: (1 << (depth + 1)) - 1
**Example for a tree with `depth: 3`**:
So for instance:
* depth: 3
* number of levels: 4
* capacity (number of leaves): 8
* total number of nodes: 15
- Number of Levels: 4 (levels 0, 1, 2, 3)
- Capacity (Number of Leaves): $2^3 = 8$
- Total Number of Nodes: $2^{(3+1)} - 1 = 15$
Visual representation of a Merkle tree with `depth: 3`:
```mermaid
flowchart TD
@@ -53,39 +59,55 @@ flowchart TD
N6 -->|Leaf| L8
```
### Available Implementations
- **FullMerkleTree**
- Stores all tree nodes in memory.
- Use Case: Use when memory is abundant and operation speed is critical.
- **OptimalMerkleTree**
- Stores only the nodes required to prove the accumulation of set leaves (i.e., authentication paths).
- Use Case: Suited for environments where memory efficiency is a higher priority than raw speed.
#### Parallel Processing with Rayon
Both `OptimalMerkleTree` and `FullMerkleTree` internally utilize the Rayon crate
to accelerate computations through data parallelism.
This can lead to significant performance improvements, particularly during updates to large Merkle trees.
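A minimal construction sketch tying the glossary together (it reuses `Keccak256` and `TestFr` exactly as defined in the merkle tree benchmark later in this diff; the `new`/`set`/`get` calls mirror the usage visible there, and any further trait bounds on the field type are assumed to match the benchmark's):

```rust
use zerokit_utils::merkle_tree::{FullMerkleConfig, FullMerkleTree, ZerokitMerkleTree};

// `Keccak256` and `TestFr` as in the benchmark file shown below.
fn demo() {
    // depth 3 => 4 levels, capacity 2^3 = 8 leaves, 15 nodes in total
    let mut tree =
        FullMerkleTree::<Keccak256>::new(3, TestFr([0; 32]), FullMerkleConfig::default())
            .unwrap();
    tree.set(0, TestFr([1; 32])).unwrap();
    assert_eq!(tree.get(0).unwrap(), TestFr([1; 32]));
}
```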
## Poseidon Hash Implementation
This crate provides an implementation to compute the Poseidon hash round constants and MDS matrices:
This crate provides an implementation for computing Poseidon hash round constants and MDS matrices.
Key characteristics include:
- **Customizable parameters**: Supports different security levels and input sizes
- **Arkworks-friendly**: Adapted to work over arkworks field traits and custom data structures
- **Customizable parameters**: Supports various security levels and input sizes,
allowing you to tailor the hash function to your specific needs.
- **Arkworks-friendly**: Adapted to integrate smoothly with Arkworks field traits and custom data structures.
### Security Note
### ⚠️ Security Note
The MDS matrices are generated iteratively using the Grain LFSR until certain criteria are met.
According to the Poseidon paper, such matrices must satisfy specific conditions, which are checked by three different algorithms in the reference implementation.
The MDS matrices used in the Poseidon hash function are generated iteratively
using the Grain LFSR (Linear Feedback Shift Register) algorithm until specific cryptographic criteria are met.
These validation algorithms are not currently implemented in this crate.
For the hardcoded parameters, the first random matrix generated satisfies these conditions.
If you use different parameters, check against the reference implementation how many matrices are generated before the correct one is output,
and pass this number as the `skip_matrices` parameter of the `find_poseidon_ark_and_mds` function.
- The reference Poseidon implementation includes validation algorithms to ensure these criteria are satisfied.
These validation algorithms are not currently implemented in this crate.
- For the hardcoded parameters provided within this crate,
the initially generated random matrix has been verified to meet these conditions.
- If you intend to use custom parameters, it is crucial to verify your generated MDS matrix.
You should consult the Poseidon reference implementation to determine
how many matrices are typically skipped before a valid one is found.
This count should then be passed as the `skip_matrices` parameter to the `find_poseidon_ark_and_mds`
function in this crate.
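As a sketch of how these parameters flow in: the crate's benchmarks import `zerokit_utils::poseidon::Poseidon` and define round parameters as `(t, full_rounds, partial_rounds, skip_matrices)` tuples; the constructor and method names below follow that benchmark usage and are otherwise assumptions:

```rust
use ark_bn254::Fr;
use zerokit_utils::poseidon::Poseidon;

// The last tuple element is the skip_matrices count discussed above;
// 0 means the first generated MDS matrix is the valid one.
const ROUND_PARAMS: [(usize, usize, usize, usize); 1] = [(2, 8, 56, 0)];

fn demo() {
    // Constructor and hash method names assumed from the benchmark code.
    let hasher = Poseidon::<Fr>::from(&ROUND_PARAMS);
    // t = 2 means the hasher absorbs one field element per call here.
    let _digest = hasher.hash(&[Fr::from(42u64)]).unwrap();
}
```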
## Installation
Add Zerokit Utils to your Rust project:
Add `zerokit_utils` as a dependency in your Cargo.toml file:
```toml
[dependencies]
zerokit_utils = "0.5.1"
zerokit_utils = "1.0.0"
```
## Performance Considerations
- **FullMerkleTree**: Use when memory is abundant and operation speed is critical
- **OptimalMerkleTree**: Use when memory efficiency is more important than raw speed
- **Poseidon**: Offers a good balance between security and performance for ZK applications
## Building and Testing
```bash

utils/benches/merkle_tree_benchmark.rs

@@ -1,11 +1,13 @@
use std::{fmt::Display, str::FromStr, sync::LazyLock};
use criterion::{criterion_group, criterion_main, Criterion};
use hex_literal::hex;
use lazy_static::lazy_static;
use std::{fmt::Display, str::FromStr};
use tiny_keccak::{Hasher as _, Keccak};
use zerokit_utils::{
error::HashError,
merkle_tree::{
FullMerkleConfig, FullMerkleTree, Hasher, OptimalMerkleConfig, OptimalMerkleTree,
ZerokitMerkleTree,
},
};
#[derive(Clone, Copy, Eq, PartialEq)]
@@ -16,19 +18,20 @@ struct TestFr([u8; 32]);
impl Hasher for Keccak256 {
type Fr = TestFr;
type Error = HashError;
fn default_leaf() -> Self::Fr {
TestFr([0; 32])
}
fn hash(inputs: &[Self::Fr]) -> Self::Fr {
fn hash(inputs: &[Self::Fr]) -> Result<Self::Fr, HashError> {
let mut output = [0; 32];
let mut hasher = Keccak::v256();
for element in inputs {
hasher.update(element.0.as_slice());
}
hasher.finalize(&mut output);
TestFr(output)
Ok(TestFr(output))
}
}
@@ -46,56 +49,78 @@ impl FromStr for TestFr {
}
}
lazy_static! {
static ref LEAVES: [TestFr; 4] = [
hex!("0000000000000000000000000000000000000000000000000000000000000001"),
hex!("0000000000000000000000000000000000000000000000000000000000000002"),
hex!("0000000000000000000000000000000000000000000000000000000000000003"),
hex!("0000000000000000000000000000000000000000000000000000000000000004"),
]
.map(TestFr);
}
static LEAVES: LazyLock<Vec<TestFr>> = LazyLock::new(|| {
let mut leaves = Vec::with_capacity(1 << 20);
for i in 0..(1 << 20) {
let mut bytes = [0u8; 32];
bytes[28..].copy_from_slice(&(i as u32).to_be_bytes());
leaves.push(TestFr(bytes));
}
leaves
});
static INDICES: LazyLock<Vec<usize>> = LazyLock::new(|| (0..(1 << 20)).collect());
const NOF_LEAVES: usize = 8192;
pub fn optimal_merkle_tree_benchmark(c: &mut Criterion) {
let mut tree =
OptimalMerkleTree::<Keccak256>::new(2, TestFr([0; 32]), OptimalMerkleConfig::default())
OptimalMerkleTree::<Keccak256>::new(20, TestFr([0; 32]), OptimalMerkleConfig::default())
.unwrap();
for i in 0..NOF_LEAVES {
tree.set(i, LEAVES[i % LEAVES.len()]).unwrap();
}
c.bench_function("OptimalMerkleTree::set", |b| {
let mut index = NOF_LEAVES;
b.iter(|| {
tree.set(0, LEAVES[0]).unwrap();
tree.set(index % (1 << 20), LEAVES[index % LEAVES.len()])
.unwrap();
index = (index + 1) % (1 << 20);
})
});
c.bench_function("OptimalMerkleTree::delete", |b| {
let mut index = 0;
b.iter(|| {
tree.delete(0).unwrap();
tree.delete(index % NOF_LEAVES).unwrap();
tree.set(index % NOF_LEAVES, LEAVES[index % LEAVES.len()])
.unwrap();
index = (index + 1) % NOF_LEAVES;
})
});
c.bench_function("OptimalMerkleTree::override_range", |b| {
let mut offset = 0;
b.iter(|| {
tree.override_range(0, LEAVES.into_iter(), [0, 1, 2, 3].into_iter())
let range = offset..offset + NOF_LEAVES;
tree.override_range(
offset,
LEAVES[range.clone()].iter().cloned(),
INDICES[range.clone()].iter().cloned(),
)
.unwrap();
})
});
c.bench_function("OptimalMerkleTree::compute_root", |b| {
b.iter(|| {
tree.compute_root().unwrap();
offset = (offset + NOF_LEAVES) % (1 << 20);
})
});
c.bench_function("OptimalMerkleTree::get", |b| {
let mut index = 0;
b.iter(|| {
tree.get(0).unwrap();
tree.get(index % NOF_LEAVES).unwrap();
index = (index + 1) % NOF_LEAVES;
})
});
// check the intermediate node getter, which requires extra computation of the subtree root index
c.bench_function("OptimalMerkleTree::get_subtree_root", |b| {
let mut level = 1;
let mut index = 0;
b.iter(|| {
tree.get_subtree_root(1, 0).unwrap();
tree.get_subtree_root(level % 20, index % (1 << (20 - (level % 20))))
.unwrap();
index = (index + 1) % (1 << (20 - (level % 20)));
level = 1 + (level % 20);
})
});
@@ -108,43 +133,61 @@ pub fn optimal_merkle_tree_benchmark(c: &mut Criterion) {
pub fn full_merkle_tree_benchmark(c: &mut Criterion) {
let mut tree =
FullMerkleTree::<Keccak256>::new(2, TestFr([0; 32]), FullMerkleConfig::default()).unwrap();
FullMerkleTree::<Keccak256>::new(20, TestFr([0; 32]), FullMerkleConfig::default()).unwrap();
for i in 0..NOF_LEAVES {
tree.set(i, LEAVES[i % LEAVES.len()]).unwrap();
}
c.bench_function("FullMerkleTree::set", |b| {
let mut index = NOF_LEAVES;
b.iter(|| {
tree.set(0, LEAVES[0]).unwrap();
tree.set(index % (1 << 20), LEAVES[index % LEAVES.len()])
.unwrap();
index = (index + 1) % (1 << 20);
})
});
c.bench_function("FullMerkleTree::delete", |b| {
let mut index = 0;
b.iter(|| {
tree.delete(0).unwrap();
tree.delete(index % NOF_LEAVES).unwrap();
tree.set(index % NOF_LEAVES, LEAVES[index % LEAVES.len()])
.unwrap();
index = (index + 1) % NOF_LEAVES;
})
});
c.bench_function("FullMerkleTree::override_range", |b| {
let mut offset = 0;
b.iter(|| {
tree.override_range(0, LEAVES.into_iter(), [0, 1, 2, 3].into_iter())
let range = offset..offset + NOF_LEAVES;
tree.override_range(
offset,
LEAVES[range.clone()].iter().cloned(),
INDICES[range.clone()].iter().cloned(),
)
.unwrap();
})
});
c.bench_function("FullMerkleTree::compute_root", |b| {
b.iter(|| {
tree.compute_root().unwrap();
offset = (offset + NOF_LEAVES) % (1 << 20);
})
});
c.bench_function("FullMerkleTree::get", |b| {
let mut index = 0;
b.iter(|| {
tree.get(0).unwrap();
tree.get(index % NOF_LEAVES).unwrap();
index = (index + 1) % NOF_LEAVES;
})
});
// check the intermediate node getter, which requires extra computation of the subtree root index
c.bench_function("FullMerkleTree::get_subtree_root", |b| {
let mut level = 1;
let mut index = 0;
b.iter(|| {
tree.get_subtree_root(1, 0).unwrap();
tree.get_subtree_root(level % 20, index % (1 << (20 - (level % 20))))
.unwrap();
index = (index + 1) % (1 << (20 - (level % 20)));
level = 1 + (level % 20);
})
});

utils/benches/poseidon_benchmark.rs

@@ -1,8 +1,8 @@
use std::hint::black_box;
use ark_bn254::Fr;
use criterion::{
black_box, criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion, Throughput,
};
use zerokit_utils::Poseidon;
use criterion::{criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion, Throughput};
use zerokit_utils::poseidon::Poseidon;
const ROUND_PARAMS: [(usize, usize, usize, usize); 8] = [
(2, 8, 56, 0),

utils/src/error.rs Normal file

@@ -0,0 +1,11 @@
use super::poseidon::error::PoseidonError;
pub use crate::merkle_tree::{FromConfigError, ZerokitMerkleTreeError};
/// Errors that can occur during hashing operations.
#[derive(Debug, thiserror::Error)]
pub enum HashError {
#[error("Poseidon hash error: {0}")]
Poseidon(#[from] PoseidonError),
#[error("Generic hash error: {0}")]
Generic(String),
}
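Because of the `#[from]` attribute, `thiserror` derives `From<PoseidonError> for HashError`, so the `?` operator converts Poseidon failures automatically. A minimal sketch (the `poseidon_digest` helper is hypothetical):

```rust
use zerokit_utils::error::HashError;

fn hash_input(input: &[u8]) -> Result<Vec<u8>, HashError> {
    if input.is_empty() {
        // The Generic variant carries a free-form message.
        return Err(HashError::Generic("empty input".to_string()));
    }
    // A failing Poseidon call would propagate via the derived From impl:
    // let digest = poseidon_digest(input)?; // PoseidonError -> HashError
    Ok(input.to_vec())
}
```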

Some files were not shown because too many files have changed in this diff.