Migrate docs website + improved docs (#389) (#403)

migrate docs website + improved docs (#389)

* Update README.md (#385)

* refactor

* refactor

* refactor

* rename task

* update codespell

* multi gpu docs (#391)

* Refactor

* refactor

* fix typo

* Apply suggestions from code review



* refactor

* refactor

---------

Co-authored-by: ImmanuelSegol <3ditds@gmail.com>
Co-authored-by: DmytroTym <dmytrotym1@gmail.com>
Co-authored-by: ChickenLover <Romangg81@gmail.com>
This commit is contained in:
Jeremy Felder
2024-02-28 14:40:04 +02:00
committed by GitHub
parent e6035698b5
commit 40309329fb
49 changed files with 15988 additions and 8 deletions

View File

@@ -15,6 +15,6 @@ jobs:
- uses: codespell-project/actions-codespell@v2
with:
# https://github.com/codespell-project/actions-codespell?tab=readme-ov-file#parameter-skip
skip: ./**/target,./**/build
skip: ./**/target,./**/build,./docs/*.js,./docs/*.json
# https://github.com/codespell-project/actions-codespell?tab=readme-ov-file#parameter-ignore_words_file
ignore_words_file: .codespellignore

46
.github/workflows/deploy-docs.yml vendored Normal file
View File

@@ -0,0 +1,46 @@
name: Deploy to GitHub Pages
on:
push:
branches:
- main
paths:
- 'docs/*'
permissions:
contents: write
jobs:
deploy:
name: Deploy to GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
path: 'repo'
- uses: actions/setup-node@v3
with:
node-version: 18
cache: npm
cache-dependency-path: ./repo/docs/package-lock.json
- name: Install dependencies
run: npm install --frozen-lockfile
working-directory: ./repo/docs
- name: Build website
run: npm run build
working-directory: ./repo/docs
- name: Copy CNAME to build directory
run: echo "dev.ingonyama.com" > ./build/CNAME
working-directory: ./repo/docs
- name: Deploy to GitHub Pages
uses: peaceiris/actions-gh-pages@v3
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./build
user_name: github-actions[bot]
user_email: 41898282+github-actions[bot]@users.noreply.github.com
working-directory: ./repo/docs

29
.github/workflows/test-deploy-docs.yml vendored Normal file
View File

@@ -0,0 +1,29 @@
name: Test Deploy to GitHub Pages
on:
pull_request:
branches:
- main
paths:
- 'docs/*'
jobs:
test-deploy:
name: Test deployment of docs website
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
path: 'repo'
- uses: actions/setup-node@v3
with:
node-version: 18
cache: npm
cache-dependency-path: ./repo/docs/package-lock.json
- name: Install dependencies
run: npm install --frozen-lockfile
working-directory: ./repo/docs
- name: Test build website
run: npm run build
working-directory: ./repo/docs

View File

@@ -48,7 +48,7 @@ ICICLE is a CUDA implementation of general functions widely used in ZKP.
### Accessing Hardware
If you don't have access to a Nvidia GPU we have some options for you.
If you don't have access to an Nvidia GPU we have some options for you.
Checkout [Google Colab](https://colab.google/). Google Colab offers a free [T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/) instance and ICICLE can be used with it, reference this guide for setting up your [Google Colab workplace][GOOGLE-COLAB-ICICLE].
@@ -71,7 +71,7 @@ Running ICICLE via Rust bindings is highly recommended and simple:
- Clone this repo
- go to our [Rust bindings][ICICLE-RUST]
- Enter a [curve](./wrappers/rust/icicle-curves) implementation
- run `cargo build --release` to build or `cargo test -- --test-threads=1` to build and execute tests
- run `cargo build --release` to build or `cargo test` to build and execute tests
In case you want to compile and run the core icicle C++ tests, just follow these steps:
- Clone this repo

1
docs/.codespellignore Normal file
View File

@@ -0,0 +1 @@
ICICLE

17
docs/.gitignore vendored Normal file
View File

@@ -0,0 +1,17 @@
.docusaurus/
node_modules/
yarn.lock
.DS_Store
# tex build artifacts
.aux
.bbl
.bcf
.blg
.fdb_latexmk
.fls
.log
.out
.xml
.gz
.toc

17
docs/.prettierignore Normal file
View File

@@ -0,0 +1,17 @@
.docusaurus/
node_modules/
yarn.lock
.DS_Store
# tex build artifacts
.aux
.bbl
.bcf
.blg
.fdb_latexmk
.fls
.log
.out
.xml
.gz
.toc

10
docs/.prettierrc Normal file
View File

@@ -0,0 +1,10 @@
{
"semi": false,
"singleQuote": true,
"trailingComma": "es5",
"printWidth": 80,
"tabWidth": 2,
"useTabs": false,
"proseWrap": "preserve",
"endOfLine": "lf"
}

1
docs/CNAME Normal file
View File

@@ -0,0 +1 @@
dev.ingonyama.com

39
docs/README.md Normal file
View File

@@ -0,0 +1,39 @@
# Website
This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ npm i
```
### Local Development
```
$ npm start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ npm run build
```
This command generates static content into the `build` directory and can be served using any static content hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true npm run deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> npm run deploy
```

3
docs/babel.config.js Normal file
View File

@@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

12
docs/docs/ZKContainers.md Normal file
View File

@@ -0,0 +1,12 @@
# ZKContainer
We found that developing ZK provers with ICICLE gives developers the ability to scale ZK provers across many machines and many GPUs. To make this possible we developed the ZKContainer.
## What is a ZKContainer?
A ZKContainer is a standardized, optimized and secure docker container that we configured with ICICLE applications in mind. A developer using our ZKContainer can deploy an ICICLE application on a single machine or on a thousand GPU machines in a data center with minimal concerns regarding compatibility.
ZKContainer has been used by Ingonyama clients to achieve scalability across large data centers.
We suggest you read our [article](https://medium.com/@ingonyama/product-announcement-zk-containers-0e2a1f2d0a2b) regarding ZKContainer to understand the benefits of using them.
![ZKContainer inside a ZK data center](../static/img/architecture-zkcontainer.png)

View File

@@ -0,0 +1,23 @@
# Contributor's Guide
We welcome all contributions with open arms. At Ingonyama we take a village approach, believing it takes many hands and minds to build an ecosystem.
## Contributing to ICICLE
- Make suggestions or report bugs via [GitHub issues](https://github.com/ingonyama-zk/icicle/issues)
- Contribute to ICICLE by opening a [pull request](https://github.com/ingonyama-zk/icicle/pulls).
- Contribute to our [documentation](https://github.com/ingonyama-zk/icicle/tree/main/docs) and [examples](https://github.com/ingonyama-zk/icicle/tree/main/examples).
- Ask questions on Discord
### Opening a pull request
When opening a [pull request](https://github.com/ingonyama-zk/icicle/pulls) please keep the following in mind.
- `Clear Purpose` - The pull request should solve a single issue and be clean of any unrelated changes.
- `Clear description` - If the pull request is for a new feature, describe what you built, why you added it, and how best we can test it. For bug fixes, please describe the issue and the solution.
- `Consistent style` - Rust and Golang code should be linted by the official linters (golang fmt and rust fmt) and maintain a proper style. For CUDA and C++ code we use [`clang-format`](https://github.com/ingonyama-zk/icicle/blob/main/.clang-format), [here](https://github.com/ingonyama-zk/icicle/blob/605c25f9d22135c54ac49683b710fe2ce06e2300/.github/workflows/main-format.yml#L46) you can see how we run it.
- `Minimal Tests` - Please add tests which cover the basic usage of your changes.
## Questions?
Find us on [Discord](https://discord.gg/6vYrE7waPj).

23
docs/docs/grants.md Normal file
View File

@@ -0,0 +1,23 @@
# Ingonyama Grant programs
Ingonyama understands the importance of supporting and fostering a vibrant community of researchers and builders to advance ZK. To encourage progress, we are not only developing in the open but also sharing resources with researchers and builders through various programs.
## ICICLE ZK-GPU Ecosystem Grant
Ingonyama invites researchers and practitioners to collaborate in advancing ZK acceleration. We are allocating $100,000 for grants to support this initiative.
### Bounties & Grants
Eligibility for grants includes:
1. **Students**: Utilize ICICLE in your research.
2. **Performance Improvement**: Enhance the performance of accelerated primitives in ICICLE.
3. **Protocol Porting**: Migrate existing ZK protocols to ICICLE.
4. **New Primitives**: Contribute new primitives to ICICLE.
5. **Benchmarking**: Compare ZK benchmarks against ICICLE.
## Contact
For questions or submissions: [grants@ingonyama.com](mailto:grants@ingonyama.com)
**Read the full article [here](https://www.ingonyama.com/blog/icicle-for-researchers-grants-challenges)**

View File

@@ -0,0 +1,138 @@
# Run ICICLE on Google Colab
Google Colab lets you use a GPU free of charge: an Nvidia T4 GPU with 16 GB of memory, capable of running the latest CUDA (tested on CUDA 12.2).
Since Colab can run shell commands, you can also install frameworks and load git repositories into the Colab space.
## Prepare Colab environment
The first thing to do in a notebook is to set the runtime type to a T4 GPU.
- In the upper corner, click on the dropdown menu and select "Change runtime type"
![Change runtime](../../static/img/colab_change_runtime.png)
- In the window select "T4 GPU" and press Save
![T4 GPU](../../static/img/t4_gpu.png)
Installing Rust is rather simple; just execute the following command:
```sh
!apt install rustc cargo
```
To test the installation of Rust:
```sh
!rustc --version
!cargo --version
```
A successful installation will print the rustc and cargo versions; a faulty installation will look like this:
```sh
/bin/bash: line 1: rustc: command not found
/bin/bash: line 1: cargo: command not found
```
Now we will check the environment:
```sh
!nvcc --version
!gcc --version
!cmake --version
!nvidia-smi
```
A correct environment should print the results with no bash errors for the `nvidia-smi` command and report a **Tesla T4 GPU**:
```sh
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Copyright (C) 2021 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
cmake version 3.27.9
CMake suite maintained and supported by Kitware (kitware.com/cmake).
Wed Jan 17 13:10:18 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 39C P8 9W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
## Cloning ICICLE and running test
Now we are ready to clone the ICICLE repository:
```sh
!git clone https://github.com/ingonyama-zk/icicle.git
```
We can now browse the repository and run tests to check the runtime environment:
```sh
!ls -la
%cd icicle
```
Let's run a test!
Navigate to `icicle/wrappers/rust/icicle-curves/icicle-bn254` and run `cargo test`:
```sh
%cd wrappers/rust/icicle-curves/icicle-bn254/
!cargo test --release
```
:::note
Compiling the first time may take a while
:::
Test run should end like this:
```sh
running 15 tests
test curve::tests::test_ark_point_convert ... ok
test curve::tests::test_ark_scalar_convert ... ok
test curve::tests::test_affine_projective_convert ... ok
test curve::tests::test_point_equality ... ok
test curve::tests::test_field_convert_montgomery ... ok
test curve::tests::test_scalar_equality ... ok
test curve::tests::test_points_convert_montgomery ... ok
test msm::tests::test_msm ... ok
test msm::tests::test_msm_skewed_distributions ... ok
test ntt::tests::test_ntt ... ok
test ntt::tests::test_ntt_arbitrary_coset ... ok
test msm::tests::test_msm_batch has been running for over 60 seconds
test msm::tests::test_msm_batch ... ok
test ntt::tests::test_ntt_coset_from_subgroup ... ok
test ntt::tests::test_ntt_device_async ... ok
test ntt::tests::test_ntt_batch ... ok
test result: ok. 15 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 99.39s
```
Voilà, ICICLE in Colab!

View File

@@ -0,0 +1,3 @@
# Golang bindings
Golang bindings are a WIP in v1 and coming soon. Please check out the previous [release v0.1.0](https://github.com/ingonyama-zk/icicle/releases/tag/v0.1.0) for Golang bindings.

BIN
docs/docs/icicle/image.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 35 KiB

View File

@@ -0,0 +1,97 @@
# ICICLE integrated provers
ICICLE has been used by companies and projects such as [Celer Network](https://github.com/celer-network), [Consensys Gnark](https://github.com/Consensys/gnark), [EZKL](https://blog.ezkl.xyz/post/acceleration/) and others to accelerate their ZK proving pipeline.
Many of these integrations have been a collaboration between Ingonyama and the integrating company. We have learned a lot about designing GPU based ZK provers.
If you're interested in understanding these integrations better or learning how you can use ICICLE to accelerate your existing ZK proving pipeline this is the place for you.
## A primer to building your own integrations
Let's illustrate an ICICLE integration so you can understand the core API and design overview of ICICLE.
![ICICLE architecture](../../static/img/architecture-high-level.png)
Engineers usually use a cryptographic library to implement their ZK protocols. These libraries implement efficient primitives which are used as building blocks for the protocol; ICICLE is such a library. The difference is that ICICLE is designed from the start to run on GPUs; the Rust and Golang APIs abstract away all low level CUDA details. Our goal was to allow developers with no GPU experience to quickly get started with ICICLE.
A developer may use ICICLE with two main approaches in mind.
1. Drop-in replacement approach.
2. End-to-End GPU replacement approach.
The first approach for GPU-accelerating your prover with ICICLE is quick to implement, but it has limitations, such as reduced memory optimization and limited protocol tuning for GPUs. It's a solid starting point, but those committed to fully leveraging GPU acceleration should consider a more comprehensive approach.
An End-to-End GPU replacement means performing the entire ZK proof on the GPU. This approach will reduce latency to a minimum and requires you to change the way you implement the protocol to be more GPU friendly, taking full advantage of GPU acceleration. Redesigning your prover this way may take more engineering effort, but we promise you it's worth it!
## Using ICICLE integrated provers
Here we cover how a developer can run existing circuits on ICICLE integrated provers.
### Gnark
[Gnark](https://github.com/Consensys/gnark) officially supports GPU proving with ICICLE. Currently only Groth16 on curve `BN254` is supported. This means that if you are currently using Gnark to write your circuits you can enjoy GPU acceleration without making many changes.
:::info
Currently ICICLE has been merged to Gnark [master branch](https://github.com/Consensys/gnark), however the [latest release](https://github.com/Consensys/gnark/releases/tag/v0.9.1) is from October 2023.
:::
Make sure your golang circuit project has `gnark` as a dependency and that you are using the master branch for now.
```
go get github.com/consensys/gnark@master
```
You should see two indirect dependencies added.
```
...
github.com/ingonyama-zk/icicle v0.1.0 // indirect
github.com/ingonyama-zk/iciclegnark v0.1.1 // indirect
...
```
:::info
As you may notice, we are using ICICLE v0.1 here, since Golang bindings are only supported in ICICLE v0.1 for the time being.
:::
To switch over to ICICLE proving, make sure to change the backend you are using, below is an example of how this should be done.
```
// toggle on
proofIci, err := groth16.Prove(ccs, pk, secretWitness, backend.WithIcicleAcceleration())
// toggle off
proof, err := groth16.Prove(ccs, pk, secretWitness)
```
Now that you have enabled the `WithIcicleAcceleration` backend, simply change the way you run your circuits:
```
go run -tags=icicle main.go
```
Your logs should look something like this if everything went as expected.
```
13:12:05 INF compiling circuit
13:12:05 INF parsed circuit inputs nbPublic=1 nbSecret=1
13:12:05 INF building constraint builder nbConstraints=3
13:12:05 DBG precomputing proving key in GPU acceleration=icicle backend=groth16 curve=bn254 nbConstraints=3
13:12:05 DBG constraint system solver done nbConstraints=3 took=0.070259
13:12:05 DBG prover done acceleration=icicle backend=groth16 curve=bn254 nbConstraints=3 took=80.356684
13:12:05 DBG verifier done backend=groth16 curve=bn254 took=1.843888
```
`acceleration=icicle` indicates that the prover is running in acceleration mode with ICICLE.
You can reference the [Gnark docs](https://github.com/Consensys/gnark?tab=readme-ov-file#gpu-support) for further information.
### Halo2
This [Halo2](https://github.com/zkonduit/halo2) fork is integrated with ICICLE for GPU acceleration. This means that you can run your existing Halo2 circuits with GPU acceleration just by activating a feature flag.
To enable GPU acceleration just enable `icicle_gpu` [feature flag](https://github.com/zkonduit/halo2/blob/3d7b5e61b3052680ccb279e05bdcc21dd8a8fedf/halo2_proofs/Cargo.toml#L102).
This feature flag will seamlessly toggle on GPU acceleration for you.

View File

@@ -0,0 +1,260 @@
# Getting started with ICICLE
This guide is oriented towards developers who want to start writing code with the ICICLE libraries. If you just want to run your existing ZK circuits on a GPU, please refer to [this guide](./integrations.md#using-icicle-integrations).
## ICICLE repository overview
![ICICLE API overview](../../static/img/apilevels.png)
The diagram above displays the general architecture of ICICLE and the API layers that exist. The CUDA API, which we also call ICICLE Core, is the lowest level and is comprised of CUDA kernels which implement all primitives such as MSM as well as C++ wrappers which expose these methods for different curves.
ICICLE Core compiles into a static library. This library can be used with our official Golang and Rust wrappers or you can implement a wrapper for it in any language.
Based on this dependency architecture, the ICICLE repository has three main sections, each of which is independent from the other.
- ICICLE core
- ICICLE Rust bindings
- ICICLE Golang bindings
### ICICLE Core
[ICICLE core](https://github.com/ingonyama-zk/icicle/tree/main/icicle) contains all the low level CUDA code implementing primitives such as [points](https://github.com/ingonyama-zk/icicle/tree/main/icicle/primitives) and [MSM](https://github.com/ingonyama-zk/icicle/tree/main/icicle/appUtils/msm). There also exists higher level C++ wrappers to expose the low level CUDA primitives ([example](https://github.com/ingonyama-zk/icicle/blob/c1a32a9879a7612916e05aa3098f76144de4109e/icicle/appUtils/msm/msm.cu#L1)).
ICICLE Core would typically be compiled into a static library and used in a third party language such as Rust or Golang.
### ICICLE Rust and Golang bindings
- [ICICLE Rust bindings](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/rust)
- [ICICLE Golang bindings](https://github.com/ingonyama-zk/icicle/tree/main/goicicle)
These bindings allow you to easily use ICICLE in a Rust or Golang project. Setting up the Golang bindings requires a few extra steps compared to the Rust bindings, which utilize the `cargo build` tool.
## Running ICICLE
This guide assumes that you have a Linux or Windows machine with an Nvidia GPU installed. If you don't have access to an Nvidia GPU you can access one for free on [Google Colab](https://colab.google/).
### Prerequisites
- NVCC (version 12.0 or newer)
- cmake 3.18 and above
- GCC - version 9 or newer is recommended.
- Any Nvidia GPU
- Linux or Windows operating system.
#### Optional Prerequisites
- Docker, latest version.
- [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html)
If you don't wish to install these prerequisites you can follow this tutorial using a [ZK-Container](https://github.com/ingonyama-zk/icicle/blob/main/Dockerfile) (docker container). To learn more about using ZK-Containers [read this](../ZKContainers.md).
### Setting up ICICLE and running tests
The objective of this guide is to make sure you can run the ICICLE Core, Rust and Golang tests. Achieving this will ensure you know how to set up ICICLE and run an ICICLE program. For simplicity, we will be using the ICICLE docker container as our environment; however, you may install the prerequisites on your machine and follow the same commands in your terminal.
#### Setting up our environment
Let's begin by cloning the ICICLE repository:
```sh
git clone https://github.com/ingonyama-zk/icicle
```
We will proceed to build the docker image [found here](https://github.com/ingonyama-zk/icicle/blob/main/Dockerfile):
```sh
docker build -t icicle-demo .
docker run -it --runtime=nvidia --gpus all --name icicle_container icicle-demo
```
- `-it` runs the container in interactive mode with a terminal.
- `--gpus all` allocates all available GPUs to the container. You can also specify which GPUs to use if you don't want to allocate all of them.
- `--runtime=nvidia` uses the NVIDIA runtime, necessary for GPU support.
To read more about these settings reference this [article](https://developer.nvidia.com/nvidia-container-runtime).
If you accidentally close your terminal and want to reconnect just call:
```sh
docker exec -it icicle_container bash
```
Let's make sure that we have the correct CUDA version before proceeding:
```sh
nvcc --version
```
You should see something like this:
```sh
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
```
Make sure the release version is at least 12.0.
#### ICICLE Core
ICICLE Core is found under [`<project_root>/icicle`](https://github.com/ingonyama-zk/icicle/tree/main/icicle). To build and run the tests first:
```sh
cd icicle
```
We are going to compile ICICLE for a specific curve:
```sh
mkdir -p build
cmake -S . -B build -DCURVE=bn254 -DBUILD_TESTS=ON
cmake --build build
```
`-DBUILD_TESTS=ON` compiles the tests, without this flag `ctest` won't work.
`-DCURVE=bn254` tells the compiler which curve to build. You can find a list of supported curves [here](https://github.com/ingonyama-zk/icicle/tree/main/icicle/curves).
The output in `build` folder should include the static libraries for the compiled curve.
:::info
Make sure to only use `-DBUILD_TESTS=ON` for running tests as the archive output will only be available when `-DBUILD_TESTS=ON` is not supplied.
:::
To run the tests:
```sh
cd build
ctest
```
#### ICICLE Rust
The Rust bindings work by first compiling the CUDA static libraries, as seen [here](https://github.com/ingonyama-zk/icicle/blob/main/wrappers/rust/icicle-curves/icicle-bn254/build.rs). The compilation of both the CUDA and Rust code is handled by the Rust build toolchain.
Similar to ICICLE Core, here we also have to compile per curve.
Let's compile the curve `bn254`:
```sh
cd wrappers/rust/icicle-curves/icicle-bn254
```
Now let's build our library:
```sh
cargo build --release
```
This may take a couple of minutes since we are compiling both the CUDA and Rust code.
To run the tests:
```sh
cargo test
```
We also include some benchmarks:
```sh
cargo bench
```
#### ICICLE Golang
Golang bindings are a WIP in v1 and coming soon. Please check out the previous [release v0.1.0](https://github.com/ingonyama-zk/icicle/releases/tag/v0.1.0) for Golang bindings.
### Running ICICLE examples
ICICLE examples can be found [here](https://github.com/ingonyama-zk/icicle-examples); these examples cover some simple use cases using C++, Rust and Golang.
In each example directory, ZK-container files are located in a subdirectory `.devcontainer`.
```sh
msm/
├── .devcontainer
├── devcontainer.json
└── Dockerfile
```
Let's run one of our C++ examples, in this case the [MSM example](https://github.com/ingonyama-zk/icicle-examples/blob/main/c%2B%2B/msm/example.cu).
Clone the repository:
```sh
git clone https://github.com/ingonyama-zk/icicle-examples.git
cd icicle-examples
```
Enter the example directory:
```sh
cd c++/msm
```
Now let's build our Docker image and run the test inside it. Make sure you have installed the [optional prerequisites](#optional-prerequisites).
```sh
docker build -t icicle-example-msm -f .devcontainer/Dockerfile .
```
Let's start and enter the container:
```sh
docker run -it --rm --gpus all -v .:/icicle-example icicle-example-msm
```
To run the example:
```sh
rm -rf build
mkdir -p build
cmake -S . -B build
cmake --build build
./build/example
```
You can now experiment with our other examples; perhaps try running a Rust or Golang example next.
## Writing new bindings for ICICLE
Since ICICLE Core is written in CUDA / C++, it's really simple to generate static libraries. These static libraries can be installed on any system and called by higher-level languages such as Golang.
Static libraries can be loaded into memory once and used by multiple programs, reducing memory usage and potentially improving performance. They also allow you to separate functionality into distinct modules, so your static library may need to compile only the specific features you want to use.
Let's review the Golang bindings, since they are a pretty verbose example of using static libraries (compared to Rust, which hides it pretty well). Golang has a mechanism named `CGO` which can be used to link static libraries. Here's a basic example of how you can use cgo to link these libraries:
```go
/*
#cgo LDFLAGS: -L/path/to/shared/libs -lbn254 -lbls12_381 -lbls12_377 -lbw6_761
#include "icicle.h" // make sure you use the correct header file(s)
*/
import "C"
func main() {
// Now you can call the C functions from the ICICLE libraries.
// Note that C function calls are prefixed with 'C.' in Go code.
out := (*C.BN254_projective_t)(unsafe.Pointer(p))
in := (*C.BN254_affine_t)(unsafe.Pointer(affine))
C.projective_from_affine_bn254(out, in)
}
```
The comments on the first line tell `CGO` which libraries to import, as well as which header files to include. You can then call methods which are part of the static library and are defined in the header file; `C.projective_from_affine_bn254` is an example.
If you wish to create your own bindings for a language of your choice we suggest you start by investigating how you can call static libraries.
### ICICLE Adapters
One of the core ideas behind ICICLE is that developers can gradually accelerate their provers. Many protocols are written using other cryptographic libraries, and completely replacing them may be complex and time consuming.
Therefore we offer adapters for various popular libraries; these adapters allow us to convert points and scalars between the different formats defined by those libraries. Here is a list:
Golang adapters:
- [Gnark crypto adapter](https://github.com/ingonyama-zk/iciclegnark)

View File

@@ -0,0 +1,64 @@
# Multi GPU with ICICLE
:::info
If you are looking for the Multi GPU API documentation, refer to the [Rust](./rust-bindings/multi-gpu.md) docs.
:::
One common challenge with Zero-Knowledge computation is managing the large input sizes. It's not uncommon to encounter circuits surpassing 2^25 constraints, pushing the capabilities of even advanced GPUs to their limits. To effectively scale and process such large circuits, leveraging multiple GPUs in tandem becomes a necessity.
Multi-GPU programming involves developing software to operate across multiple GPU devices. Let's first explore different approaches to multi-GPU programming, then we will cover how ICICLE allows you to easily develop your ZK computations to run across many GPUs.
## Approaches to Multi GPU programming
There are many [different strategies](https://github.com/NVIDIA/multi-gpu-programming-models) available for implementing multi-GPU programming; however, they can be split into two categories.
### GPU Server approach
This approach usually involves one or multiple CPUs opening threads to read from / write to multiple GPUs. You can think of it as a scaled-up Host-Device model.
![GPU server approach](image.png)
This approach won't let us tackle larger computation sizes, but it will allow us to run multiple computations that we wouldn't be able to load onto a single GPU at once.
For example, say you had to compute two MSMs of size 2^26 on a GPU with 16GB of VRAM: you would normally have to perform them one after the other. However, if you double the number of GPUs in your system, you can now run them in parallel.
### Inter GPU approach
This is a more sophisticated approach to multi-GPU computation. Using technologies such as [GPUDirect, NCCL, NVSHMEM](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-cwes1084/) and NVLink, it's possible to combine multiple GPUs and split a computation among the different devices.
This approach requires redesigning the algorithm at the software level to be compatible with splitting amongst devices. In some cases, to lower latency to a minimum, special inter GPU connections would be installed on a server to allow direct communication between multiple GPUs.
# Writing ICICLE Code for Multi GPUs
The approach we have taken for the moment is a GPU Server approach; we assume you have a machine with multiple GPUs and you wish to run some computation on each GPU.
To dive deeper and learn about the APIs, check out the docs for our different ICICLE APIs:
- [Rust Multi GPU APIs](./rust-bindings/multi-gpu.md)
- C++ Multi GPU APIs
## Best practices
- Never hardcode device IDs. If you want your software to take advantage of all GPUs on a machine, use methods such as `get_device_count` to support an arbitrary number of GPUs.
- Launch one CPU thread per GPU. To avoid [nasty errors](https://developer.nvidia.com/blog/cuda-pro-tip-always-set-current-device-avoid-multithreading-bugs/) and hard-to-read code, we suggest that you create a dedicated thread for every GPU. Within a CPU thread you should be able to launch as many tasks as you wish for a GPU, as long as they all run on the same GPU id. This will make your code much more manageable, easier to read, and performant; a minimal sketch of this pattern follows below.
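As a rough illustration of the thread-per-GPU pattern, here is a minimal Rust sketch. The helpers `get_device_count`, `set_device` and `run_computation` are stubbed placeholders standing in for the actual device-management calls and workload (see the Rust Multi GPU API docs for the real functions):
```rust
use std::thread;
// Stub placeholders; the ICICLE bindings expose the real device-management calls.
fn get_device_count() -> usize { 2 }
fn set_device(device_id: usize) { let _ = device_id; /* bind this thread to the GPU */ }
fn run_computation(device_id: usize) { println!("running on GPU {device_id}"); }
fn main() {
    // One dedicated CPU thread per GPU, as recommended above.
    let handles: Vec<_> = (0..get_device_count())
        .map(|device_id| {
            thread::spawn(move || {
                set_device(device_id); // all tasks in this thread target one GPU
                run_computation(device_id);
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
}
```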
## ZKContainer support for multi GPUs
Multi GPU support should work with ZK-Containers by simply defining which devices the docker container should interact with:
```sh
docker run -it --gpus '"device=0,2"' zk-container-image
```
If you wish to expose all GPUs:
```sh
docker run --gpus all zk-container-image
```

View File

@@ -0,0 +1,64 @@
# What is ICICLE?
[![Static Badge](https://img.shields.io/badge/Latest-v1.4.0-8a2be2)](https://github.com/ingonyama-zk/icicle/releases)
![Static Badge](https://img.shields.io/badge/Machines%20running%20ICICLE-544-lightblue)
[ICICLE](https://github.com/ingonyama-zk/icicle) is a cryptography library for ZK using GPUs. ICICLE implements blazing fast cryptographic primitives such as EC operations, MSM, NTT, Poseidon hash and more on GPU.
ICICLE allows developers with minimal GPU experience to effortlessly accelerate their ZK applications; from our experiments, even the most naive implementation may yield a 10X improvement in proving times.
ICICLE has been used by many leading ZK companies such as [Celer Network](https://github.com/celer-network), [Gnark](https://github.com/Consensys/gnark) and others to accelerate their ZK proving pipeline.
## Don't have access to a GPU?
We understand that not all developers have access to a GPU and we don't want this to limit anyone from developing with ICICLE.
Here are some ways we can help you gain access to GPUs:
### Grants
At Ingonyama we are interested in accelerating the progress of ZK and cryptography. If you are an engineer, developer or academic researcher, we invite you to check out [our grant program](https://www.ingonyama.com/blog/icicle-for-researchers-grants-challenges). We will give you access to GPUs and even pay you to do your dream research!
### Google Colab
This is a great way to get started with ICICLE instantly. Google Colab offers free access to an NVIDIA T4 GPU instance, equipped with 16 GB of memory, which should be enough for experimenting and even prototyping with ICICLE.
For an extensive guide on how to set up Google Colab with ICICLE, refer to [this article](./colab-instructions.md).
### Vast.ai
[Vast.ai](https://vast.ai/) is a global GPU marketplace where you can rent many different types of GPUs by the hour for [competitive pricing](https://vast.ai/pricing). They provide on-demand and interruptible rentals depending on your need or use case; you can learn more about their rental types [here](https://vast.ai/faq#rental-types).
:::note
If none of these options suit your needs, contact us on [telegram](https://t.me/RealElan) for assistance. We're committed to ensuring that a lack of a GPU doesn't become a bottleneck for you. If you need help with setup or any other issues, we're here to do our best to help you.
:::
## What can you do with ICICLE?
[ICICLE](https://github.com/ingonyama-zk/icicle) can be used in the same way you would use any other cryptography library. While developing and integrating ICICLE into many proof systems, we found some use case categories:
### Circuit developers
If you are a circuit developer and are experiencing bottlenecks while running your circuits, an ICICLE integrated prover may be the solution.
ICICLE has been integrated into a number of popular ZK provers including [Gnark prover](https://github.com/Consensys/gnark) and [Halo2](https://github.com/zkonduit/halo2). This means that you can enjoy GPU acceleration for your existing circuits immediately without writing a single line of code by simply switching on the GPU prover flag!
### Integrating into existing ZK provers
From our collaborations we have learned that it's possible to accelerate a specific part of your prover to solve a specific bottleneck.
ICICLE can be used to accelerate specific parts of your prover without completely rewriting your ZK prover.
### Developing your own ZK provers
If your goal is to build a ZK prover from the ground up, ICICLE is an ideal tool for creating a highly optimized and scalable ZK prover. A key benefit of using GPUs with ICICLE is the ability to scale your ZK prover efficiently across multiple machines within a data center.
### Developing proof of concepts
ICICLE is also ideal for developing small prototypes. ICICLE has Golang and Rust bindings, so you can easily develop a library implementing a specific primitive using ICICLE. An example would be developing a KZG commitment library using ICICLE.

Binary file not shown.

After

Width:  |  Height:  |  Size: 220 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 215 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 322 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 113 KiB

View File

@@ -0,0 +1,162 @@
# MSM - Multi scalar multiplication
MSM stands for multi-scalar multiplication, and it's defined as:
$$
MSM(a, G) = \sum_{j=0}^{n-1} a_jG_j
$$
where:
- $G_j \in G$ - points from an elliptic curve group,
- $a_0, \ldots, a_{n-1}$ - scalars,
- $MSM(a, G) \in G$ - a single EC (elliptic curve) point.
In words, MSM is the sum of EC points multiplied by scalars. We can see from this definition that the core operations are modular multiplication and elliptic curve point addition. Since the multiplications can be computed in parallel and their products then summed, MSM is inherently parallelizable.
Accelerating MSM is crucial to a ZK protocol's performance due to the [large percentage of runtime](https://hackmd.io/@0xMonia/SkQ6-oRz3#Hardware-acceleration-in-action) it takes when generating proofs.
You can learn more about how MSMs work from this [video](https://www.youtube.com/watch?v=Bl5mQA7UL2I) and from our resource list on [Ingopedia](https://www.ingonyama.com/ingopedia/msm).
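To make the definition concrete, here is a toy Rust sketch that evaluates the sum naively. It uses the additive group of integers modulo a small prime as a stand-in for an elliptic curve group, purely for illustration; real MSM operates on EC points:
```rust
/// Naive MSM over a toy additive group: Z_p stands in for the EC group,
/// so "scalar times point" is just modular multiplication.
fn naive_msm(scalars: &[u64], points: &[u64], p: u64) -> u64 {
    scalars
        .iter()
        .zip(points)
        .map(|(a, g)| a * g % p) // scalar multiplication of each "point"
        .fold(0, |acc, term| (acc + term) % p) // group addition of the products
}
fn main() {
    // 3*10 + 5*20 + 7*30 = 340 = 3*97 + 49, so the result is 49 mod 97.
    assert_eq!(naive_msm(&[3, 5, 7], &[10, 20, 30], 97), 49);
}
```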
# Using MSM
## Supported curves
MSM supports the following curves:
`bls12-377`, `bls12-381`, `bn-254`, `bw6-761`, `grumpkin`
## Supported algorithms
Our MSM implementation supports two algorithms: `Bucket accumulation` and `Large triangle accumulation`.
### Bucket accumulation
The Bucket Accumulation algorithm is a method of dividing the overall MSM task into smaller, more manageable sub-tasks. It involves partitioning scalars and their corresponding points into different "buckets" based on the scalar values.
Bucket Accumulation can be more parallel-friendly because it involves dividing the computation into smaller, independent tasks, distributing scalar-point pairs into buckets and summing points within each bucket. This division makes it well suited for parallel processing on GPUs.
#### When should I use Bucket accumulation?
In scenarios involving large MSM computations with many scalar-point pairs, the ability to parallelize operations makes Bucket Accumulation more efficient. The larger the MSM task, the more significant the potential gains from parallelization.
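As a rough sketch of the idea (not ICICLE's actual CUDA implementation), here is a single-window bucket accumulation in Rust over the same toy additive group used above: pairs are first routed into buckets by scalar value, and the bucket sums are then weighted via a running-sum trick that needs only group additions:
```rust
/// Single-window bucket accumulation over a toy additive group (Z_p),
/// assuming every scalar fits in one c-bit window. Real MSM splits wide
/// scalars into several windows and runs this phase per window.
fn bucket_msm(scalars: &[u64], points: &[u64], c: u32, p: u64) -> u64 {
    let num_buckets = 1usize << c;
    let mut buckets = vec![0u64; num_buckets];
    // Phase 1: add each point into the bucket indexed by its scalar.
    for (&s, &g) in scalars.iter().zip(points) {
        let b = s as usize % num_buckets;
        buckets[b] = (buckets[b] + g) % p;
    }
    // Phase 2: running sum computes sum_s s * bucket[s] using additions only.
    let (mut running, mut total) = (0u64, 0u64);
    for b in (1..num_buckets).rev() {
        running = (running + buckets[b]) % p;
        total = (total + running) % p;
    }
    total
}
fn main() {
    // Same toy inputs as the naive sketch; both methods agree: 49 mod 97.
    assert_eq!(bucket_msm(&[3, 5, 7], &[10, 20, 30], 3, 97), 49);
}
```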
### Large triangle accumulation
Large Triangle Accumulation is a method for optimizing MSM which focuses on reducing the number of point doublings in the computation. This algorithm is based on the observation that the number of point doublings can be minimized by structuring the computation in a specific manner.
#### When should I use Large triangle accumulation?
The Large Triangle Accumulation algorithm is more sequential in nature, as it builds upon each step sequentially (accumulating sums and then performing doubling). This structure can make it less suitable for parallelization but potentially more efficient for a <b>large batch of smaller MSM computations</b>.
### How do I toggle between the supported algorithms?
When creating your MSM config, you may state which algorithm you wish to use: `is_big_triangle=true` will activate Large triangle accumulation and `is_big_triangle=false` will activate Bucket accumulation.
```rust
...
let mut cfg_bls12377 = msm::get_default_msm_config::<BLS12377CurveCfg>();
// is_big_triangle will determine which algorithm to use
cfg_bls12377.is_big_triangle = true;
msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap();
...
```
You may reference the rust code [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L54).
## MSM Modes
ICICLE MSM also supports two different modes: `Batch MSM` and `Single MSM`.
Batch MSM allows you to run many MSMs with a single API call; Single MSM will launch a single MSM computation.
### Which mode should I use?
This decision is highly dependent on your use case and design. However, if your design allows for it, using batch mode can significantly improve efficiency. Batch processing allows you to perform multiple MSMs leveraging the parallel processing capabilities of GPUs.
Single MSM mode should be used when batching isn't possible or when you have to run a single MSM.
### How do I toggle between MSM modes?
Toggling between MSM modes occurs automatically based on the number of results you are expecting from the `msm::msm` function. If you are expecting an array of `msm_results`, ICICLE will automatically split `scalars` and `points` into equal parts and run them as multiple MSMs in parallel.
```rust
...
let mut msm_result: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap();
msm::msm(&scalars, &points, &cfg, &mut msm_result).unwrap();
...
```
In the example above we allocate a single expected result which the MSM method will interpret as `batch_size=1` and run a single MSM.
In the next example, we are expecting 10 results which sets `batch_size=10` and runs 10 MSMs in batch mode.
```rust
...
let mut msm_results: HostOrDeviceSlice<'_, G1Projective> = HostOrDeviceSlice::cuda_malloc(10).unwrap();
msm::msm(&scalars, &points, &cfg, &mut msm_results).unwrap();
...
```
Here is a [reference](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/wrappers/rust/icicle-core/src/msm/mod.rs#L108) to the code which automatically sets the batch size. For more MSM examples have a look [here](https://github.com/ingonyama-zk/icicle/blob/77a7613aa21961030e4e12bf1c9a78a2dadb2518/examples/rust/msm/src/main.rs#L1).
## Support for G2 group
MSM also supports the G2 group.
Using MSM in G2 requires a G2 config, and of course your points should also be G2 points.
```rust
...
let scalars = HostOrDeviceSlice::Host(upper_scalars[..size].to_vec());
let g2_points = HostOrDeviceSlice::Host(g2_upper_points[..size].to_vec());
let mut g2_msm_results: HostOrDeviceSlice<'_, G2Projective> = HostOrDeviceSlice::cuda_malloc(1).unwrap();
let mut g2_cfg = msm::get_default_msm_config::<G2CurveCfg>();
msm::msm(&scalars, &g2_points, &g2_cfg, &mut g2_msm_results).unwrap();
...
```
Here you can [find an example](https://github.com/ingonyama-zk/icicle/blob/5a96f9937d0a7176d88c766bd3ef2062b0c26c37/examples/rust/msm/src/main.rs#L114) of MSM on G2 Points.

View File

@@ -0,0 +1,243 @@
# NTT - Number Theoretic Transform
The Number Theoretic Transform (NTT) is a variant of the Fourier Transform used over finite fields, particularly those of integers modulo a prime number. NTT operates in a discrete domain and is used primarily in applications requiring modular arithmetic, such as cryptography and polynomial multiplication.
NTT is defined similarly to the Discrete Fourier Transform (DFT), but instead of using complex roots of unity, it uses roots of unity within a finite field. The definition hinges on the properties of the finite field, specifically the existence of a primitive root of unity of order $N$ (where $N$ is typically a power of 2), and the modulo operation is performed with respect to a specific prime number that supports these roots.
Formally, given a sequence of integers $a_0, a_1, ..., a_{N-1}$, the NTT of this sequence is another sequence of integers $A_0, A_1, ..., A_{N-1}$, computed as follows:
$$
A_k = \sum_{n=0}^{N-1} a_n \cdot \omega^{nk} \mod p
$$
where:
- $N$ is the size of the input sequence and is a power of 2,
- $p$ is a prime number such that $p = kN + 1$ for some integer $k$, ensuring that $p$ supports the existence of $N$th roots of unity,
- $\omega$ is a primitive $N$th root of unity modulo $p$, meaning $\omega^N \equiv 1 \mod p$ and no smaller positive power of $\omega$ is congruent to 1 modulo $p$,
- $k$ ranges from 0 to $N-1$, and it indexes the output sequence.
The NTT is particularly useful because it enables efficient polynomial multiplication under modulo arithmetic, crucial for algorithms in cryptographic protocols, and other areas requiring fast modular arithmetic operations.
There also exists INTT, which is the inverse operation of NTT. INTT takes as input the output sequence of an NTT and reconstructs the original sequence.
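To ground the formula, here is a toy-sized naive $O(N^2)$ NTT and INTT in Rust over $p = 17$ with $N = 4$, where $\omega = 4$ is a primitive 4th root of unity modulo 17 ($4^2 \equiv -1$ and $4^4 \equiv 1 \pmod{17}$). This is purely illustrative; the fast algorithms ICICLE actually uses are described later on this page:
```rust
const P: u64 = 17; // prime with p = k*N + 1 for N = 4 (17 = 4*4 + 1)
fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}
/// Naive O(N^2) transform: A_k = sum_n a_n * omega^(n*k) mod p.
fn naive_ntt(a: &[u64], omega: u64) -> Vec<u64> {
    let n = a.len() as u64;
    (0..n)
        .map(|k| (0..n).fold(0, |acc, i| (acc + a[i as usize] * pow_mod(omega, i * k)) % P))
        .collect()
}
fn main() {
    let omega = 4; // primitive 4th root of unity mod 17
    let a = vec![1, 2, 3, 4];
    let big_a = naive_ntt(&a, omega);
    // INTT: the same transform with omega^{-1}, then scale by N^{-1} mod p.
    let omega_inv = pow_mod(omega, P - 2); // Fermat inverse
    let n_inv = pow_mod(a.len() as u64, P - 2);
    let recovered: Vec<u64> = naive_ntt(&big_a, omega_inv)
        .into_iter()
        .map(|x| x * n_inv % P)
        .collect();
    assert_eq!(recovered, a); // INTT reconstructs the original sequence
}
```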
# Using NTT
### Supported curves
NTT supports the following curves:
`bls12-377`, `bls12-381`, `bn-254`, `bw6-761`
### Examples
- [Rust API examples](https://github.com/ingonyama-zk/icicle/blob/d84ffd2679a4cb8f8d1ac2ad2897bc0b95f4eeeb/examples/rust/ntt/src/main.rs#L1)
- [C++ API examples](https://github.com/ingonyama-zk/icicle/blob/d84ffd2679a4cb8f8d1ac2ad2897bc0b95f4eeeb/examples/c%2B%2B/ntt/example.cu#L1)
## NTT API overview
```rust
pub fn ntt<F>(
input: &HostOrDeviceSlice<F>,
dir: NTTDir,
cfg: &NTTConfig<F>,
output: &mut HostOrDeviceSlice<F>,
) -> IcicleResult<()>
```
`ntt::ntt` expects:
`input` - buffer to read the inputs of the NTT from. <br/>
`dir` - whether to compute forward or inverse NTT. <br/>
`cfg` - config used to specify extra arguments of the NTT. <br/>
`output` - buffer to write the NTT outputs into. Must be of the same size as input.
The `input` and `output` buffers can be on device or on host. Being on host means that they will be transferred to device during runtime.
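Putting the signature together, here is a hedged usage sketch (`NTTDir::kForward` and the exact module paths are assumptions based on names used elsewhere on this page; consult the linked examples for the precise setup):
```rust
// Hedged sketch based on the signature above; allocating `input` / `output`
// (e.g. via HostOrDeviceSlice) and any domain setup are omitted.
let cfg = ntt::get_default_ntt_config::<ScalarField>();
ntt::ntt(&input, NTTDir::kForward, &cfg, &mut output).unwrap();
```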
### NTT Config
```rust
pub struct NTTConfig<'a, S> {
pub ctx: DeviceContext<'a>,
pub coset_gen: S,
pub batch_size: i32,
pub ordering: Ordering,
are_inputs_on_device: bool,
are_outputs_on_device: bool,
pub is_async: bool,
pub ntt_algorithm: NttAlgorithm,
}
```
The `NTTConfig` struct is a configuration object used to specify parameters for an NTT instance.
#### Fields
- **`ctx: DeviceContext<'a>`**: Specifies the device context, including the device ID and the stream ID.
- **`coset_gen: S`**: Defines the coset generator used for coset (i)NTTs. By default, this is set to `S::one()`, indicating that no coset is being used.
- **`batch_size: i32`**: Determines the number of NTTs to compute in a single batch. The default value is 1, meaning that operations are performed on individual inputs without batching. Batch processing can significantly improve performance by leveraging parallelism in GPU computations.
- **`ordering: Ordering`**: Controls the ordering of inputs and outputs for the NTT operation. This field can be used to specify decimation strategies (in time or in frequency) and the type of butterfly algorithm (Cooley-Tukey or Gentleman-Sande). The ordering is crucial for compatibility with various algorithmic approaches and can impact the efficiency of the NTT.
- **`are_inputs_on_device: bool`**: Indicates whether the input data has been preloaded on the device memory. If `false` inputs will be copied from host to device.
- **`are_outputs_on_device: bool`**: Indicates whether the output buffer resides in device memory. If `false`, outputs will be copied from device to host. If the inputs and outputs are the same pointer, the NTT will be computed in place.
- **`is_async: bool`**: Specifies whether the NTT operation should be performed asynchronously. When set to `true`, the NTT function will not block the CPU, allowing other operations to proceed concurrently. Asynchronous execution requires careful synchronization to ensure data integrity and correctness.
- **`ntt_algorithm: NttAlgorithm`**: Can be one of `Auto`, `Radix2`, `MixedRadix`.
`Auto` will select `Radix 2` or `Mixed Radix` algorithm based on heuristics.
`Radix2` and `MixedRadix` will force the use of that algorithm regardless of the input size or other considerations. You should use one of these options when you know for sure which algorithm you want to use.
#### Usage
Example initialization with default settings:
```rust
let default_config = NTTConfig::default();
```
Customizing the configuration:
```rust
let custom_config = NTTConfig {
ctx: custom_device_context,
coset_gen: my_coset_generator,
batch_size: 10,
ordering: Ordering::kRN,
are_inputs_on_device: true,
are_outputs_on_device: true,
is_async: false,
ntt_algorithm: NttAlgorithm::MixedRadix,
};
```
### Ordering
The `Ordering` enum defines how inputs and outputs are arranged for the NTT operation, offering flexibility in handling data according to different algorithmic needs or compatibility requirements. It primarily affects the sequencing of data points for the transform, which can influence both performance and the compatibility with certain algorithmic approaches. The available ordering options are:
- **`kNN` (Natural-Natural):** Both inputs and outputs are in their natural order. This is the simplest form of ordering, where data is processed in the sequence it is given, without any rearrangement.
- **`kNR` (Natural-Reversed):** Inputs are in natural order, while outputs are in bit-reversed order. This ordering is typically used in algorithms that benefit from having the output in a bit-reversed pattern.
- **`kRN` (Reversed-Natural):** Inputs are in bit-reversed order, and outputs are in natural order. This is often used with the Cooley-Tukey FFT algorithm.
- **`kRR` (Reversed-Reversed):** Both inputs and outputs are in bit-reversed order.
- **`kNM` (Natural-Mixed):** Inputs are provided in their natural order, while outputs are arranged in a digit-reversed (mixed) order. This ordering is good for mixed radix NTT operations, where the mixed or digit-reversed ordering of outputs is a generalization of the bit-reversal pattern seen in simpler, radix-2 cases.
- **`kMN` (Mixed-Natural):** Inputs are in a digit-reversed (mixed) order, while outputs are restored to their natural order. This ordering would primarily be used for mixed radix NTT operations.
The choice heavily depends on your algorithm and use case. For example, Cooley-Tukey will often use `kRN`, and Gentleman-Sande often uses `kNR`.
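For intuition, "bit-reversed order" means index $i$ moves to the index obtained by reversing its bits: for $N = 8$, index `0b001` swaps with `0b100`. A small self-contained Rust helper (illustrative only; ICICLE reorders internally based on the `Ordering` you select):
```rust
/// Permute a slice into bit-reversed index order (length must be a power of two).
fn bit_reverse_permute<T>(data: &mut [T]) {
    let n = data.len();
    let bits = n.trailing_zeros();
    for i in 0..n {
        let j = i.reverse_bits() >> (usize::BITS - bits);
        if i < j {
            data.swap(i, j);
        }
    }
}
fn main() {
    let mut v = [0, 1, 2, 3, 4, 5, 6, 7];
    bit_reverse_permute(&mut v);
    // 1 (0b001) swapped with 4 (0b100), 3 (0b011) with 6 (0b110).
    assert_eq!(v, [0, 4, 2, 6, 1, 5, 3, 7]);
}
```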
### Modes
NTT also supports two different modes: `Batch NTT` and `Single NTT`.
Batch NTT allows you to run many NTTs with a single API call; Single NTT will launch a single NTT computation.
You may toggle between single and batch NTT by simply configuring `batch_size` to be larger than 1 in your `NTTConfig`.
```rust
let mut cfg = ntt::get_default_ntt_config::<ScalarField>();
cfg.batch_size = 10; // your NTT using this config will run in batch mode
```
`batch_size=1` would keep our NTT in single NTT mode.
Deciding whether to use `batch NTT` vs `single NTT` is highly dependent on your application and use case.
**Single NTT Mode**
- Choose this mode when your application requires processing individual NTT operations in isolation.
**Batch NTT Mode**
- Batch NTT mode can significantly reduce read/write as well as computation overhead by executing multiple NTT operations in parallel.
- Batch mode may also offer better utilization of computational resources (memory and compute).
## Supported algorithms
Our NTT implementation supports two algorithms: `radix-2` and `mixed-radix`.
### Radix 2
At its core, the Radix-2 NTT algorithm divides the problem into smaller sub-problems, leveraging the properties of "divide and conquer" to reduce the overall computational complexity. The algorithm operates on sequences whose lengths are powers of two.
1. **Input Preparation:**
The input is a sequence of integers $a_0, a_1, \ldots, a_{N-1}$, where $N$ is a power of two.
2. **Recursive Decomposition:**
The algorithm recursively divides the input sequence into smaller sequences. At each step, it separates the sequence into even-indexed and odd-indexed elements, forming two subsequences that are then processed independently.
3. **Butterfly Operations:**
The core computational element of the Radix-2 NTT is the "butterfly" operation, which combines pairs of elements from the sequences obtained in the decomposition step.
Each butterfly operation involves multiplication by a "twiddle factor," which is a root of unity in the finite field, and addition or subtraction of the results, all performed modulo the prime modulus.
$$
X_k = (A_k + B_k \cdot W^k) \mod p
$$
where:
- $X_k$ - the output of the butterfly operation for the $k$-th element,
- $A_k$ - an element from the even-indexed subset,
- $B_k$ - an element from the odd-indexed subset,
- $W$ - the twiddle factor, a root of unity in the finite field,
- $p$ - the prime modulus,
- $k$ - the index of the current operation within the butterfly or the transform stage.
The twiddle factors are precomputed to save runtime and improve performance.
4. **Bit-Reversal Permutation:**
A final step involves rearranging the output sequence into the correct order. Due to the halving process in the decomposition steps, the elements of the transformed sequence are initially in a bit-reversed order. A bit-reversal permutation is applied to obtain the final sequence in natural order.
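A toy Rust rendering of a single butterfly over a small prime field may help: it produces both halves of the pair, $A + W \cdot B$ and $A - W \cdot B$ modulo $p$ (illustrative only; the real kernels batch many of these across the GPU):
```rust
/// One butterfly over Z_p, combining an even-indexed element `a` with an
/// odd-indexed element `b` using twiddle factor `w`.
fn butterfly(a: u64, b: u64, w: u64, p: u64) -> (u64, u64) {
    let t = b * w % p; // multiply by the twiddle factor
    ((a + t) % p, (a + p - t) % p) // sum and difference, both mod p
}
fn main() {
    // Toy field p = 17 with twiddle w = 4 (a 4th root of unity mod 17).
    assert_eq!(butterfly(3, 5, 4, 17), (6, 0));
}
```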
### Mixed Radix
The Mixed Radix NTT algorithm extends the concepts of the Radix-2 algorithm by allowing the decomposition of the input sequence based on various factors of its length. Specifically, ICICLE's implementation splits the input into blocks of sizes 16, 32 or 64, compared to Radix-2, which always splits such that we end with NTTs of size 2. This approach offers enhanced flexibility and efficiency, especially for input sizes that are composite numbers, by leveraging the "divide and conquer" strategy across multiple radixes.
The NTT blocks in Mixed Radix are implemented more efficiently, based on the Winograd NTT, and memory and register usage are also better optimized compared to Radix-2.
Mixed Radix can also reduce the number of stages required to compute large inputs.
1. **Input Preparation:**
The input to the Mixed Radix NTT is a sequence of integers $a_0, a_1, \ldots, a_{N-1}$, where $N$ is not strictly required to be a power of two. Instead, $N$ can be any composite number, ideally factorized into primes or powers of primes.
2. **Factorization and Decomposition:**
Unlike the Radix-2 algorithm, which strictly divides the computational problem into halves, the Mixed Radix NTT algorithm implements a flexible decomposition approach which isn't limited to prime factorization.
For example, an NTT of size 256 can be decomposed into two stages of $16 \times \text{NTT}_{16}$, leveraging a composite factorization strategy rather than decomposing into eight stages of $\text{NTT}_{2}$. This exemplifies the use of composite factors (in this case, $256 = 16 \times 16$) to apply smaller NTT transforms, optimizing computational efficiency by adapting the decomposition strategy to the specific structure of $N$.
3. **Butterfly Operations with Multiple Radixes:**
The Mixed Radix algorithm utilizes butterfly operations for various radix sizes. Each sub-transform involves specific butterfly operations characterized by multiplication with twiddle factors appropriate for the radix in question.
The generalized butterfly operation for a radix-$r$ element can be expressed as:
$$
X_{k,r} = \sum_{j=0}^{r-1} (A_{j,k} \cdot W^{jk}) \mod p
$$
where $X_{k,r}$ is the output of the radix-$r$ butterfly operation for the $k$-th set of inputs, $A_{j,k}$ represents the $j$-th input element for the $k$-th operation, $W$ is the twiddle factor, and $p$ is the prime modulus.
4. **Recombination and Reordering:**
After applying the appropriate butterfly operations across all decomposition levels, the Mixed Radix algorithm recombines the results into a single output sequence. Due to the varied sizes of the sub-transforms, a more complex reordering process may be required compared to Radix-2. This involves digit-reversal permutations to ensure that the final output sequence is correctly ordered.
### Which algorithm should I choose?
Radix-2 is faster for small NTTs. A small NTT would be around `logN = 16` with batch size 1. It's also more suited for inputs whose sizes are powers of 2 (e.g., 256, 512, 1024). Radix-2 won't necessarily perform better for smaller `logN` with larger batches.
Mixed Radix, on the other hand, performs better for larger NTTs with larger input sizes, which are not necessarily powers of 2.
Performance really depends on `logN` size, batch size, ordering, inverse, coset, coefficient field, and which GPU you are using.
For this reason we implemented our [heuristic auto-selection](https://github.com/ingonyama-zk/icicle/blob/774250926c00ffe84548bc7dd97aea5227afed7e/icicle/appUtils/ntt/ntt.cu#L474) which should choose the most efficient algorithm in most cases.
We still recommend you benchmark for your specific use case if you think a different configuration would yield better results.
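If you do want to pin an algorithm rather than rely on the heuristic, the `ntt_algorithm` config field shown earlier is the switch; a short sketch reusing the names from this page:
```rust
let mut cfg = ntt::get_default_ntt_config::<ScalarField>();
// Force Mixed Radix; NttAlgorithm::Radix2 or NttAlgorithm::Auto work likewise.
cfg.ntt_algorithm = NttAlgorithm::MixedRadix;
```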

View File

@@ -0,0 +1,10 @@
# ICICLE Primitives
This section of the documentation is dedicated to the ICICLE primitives; we will cover the usage and internal details of primitives such as hashing algorithms, MSM and NTT.
## Supported primitives
- [MSM](./msm)
- [Poseidon Hash](./poseidon.md)

View File

@@ -0,0 +1,226 @@
# Poseidon
[Poseidon](https://eprint.iacr.org/2019/458.pdf) is a popular hash in the ZK ecosystem, primarily because it's optimized to work over large prime fields, a common setting for ZK proofs, thereby minimizing the number of multiplicative operations required.
Poseidon has also been specifically designed to be efficient when implemented within ZK circuits; it uses far fewer constraints than other hash functions such as Keccak or SHA-256 in that context.
Poseidon has been used in many popular ZK protocols such as Filecoin and [Plonk](https://drive.google.com/file/d/1bZZvKMQHaZGA4L9eZhupQLyGINkkFG_b/view?usp=drive_open).
Our Poseidon implementation follows the optimized [Filecoin version](https://spec.filecoin.io/algorithms/crypto/poseidon/).
Let's understand how Poseidon works.
### Initialization
Poseidon starts with the initialization of its internal state, which is composed of the input elements and some pregenerated constants. An initial round constant is added to each element of the internal state; adding the round constants ensures the state is properly mixed from the outset.
This is done to prevent collisions and certain cryptographic attacks by ensuring that the internal state is sufficiently mixed and unpredictable.
![Alt text](image.png)
### Applying full and partial rounds
To generate a secure hash output, the algorithm goes through a series of "full rounds" and "partial rounds", as well as transformations between these sets of rounds:
First full rounds => apply S-box and round constants => partial rounds => last full rounds => apply S-box
#### Full rounds
![Alt text](image-1.png)
**Uniform Application of S-Box:** In full rounds, the S-box (a non-linear transformation) is applied uniformly to every element of the hash function's internal state. This ensures a high degree of mixing and diffusion, contributing to the hash function's security. The S-box involves raising each element of the state to a certain power `α`, a member of the finite field defined by the prime `p`; `α` can differ depending on the implementation and user configuration.
**Linear Transformation:** After applying the S-box, a linear transformation is performed on the state. This involves multiplying the state by an MDS (Maximum Distance Separable) matrix, which further diffuses the transformations applied by the S-box across the entire state.
**Addition of Round Constants:** Each element of the state is then modified by adding a unique round constant. These constants are different for each round and are precomputed as part of the hash function's initialization. The addition of round constants ensures that even minor changes to the input produce significant differences in the output.
#### Partial Rounds
**Selective Application of S-Box:** Partial rounds apply the S-box transformation to only one element of the internal state per round, rather than to all elements. This selective application significantly reduces the computational complexity of the hash function without compromising its security. The choice of which element to apply the S-box to can follow a specific pattern or be fixed, depending on the design of the hash function.
**Linear Transformation and Round Constants:** A linear transformation is performed and round constants are added. The linear transformation in partial rounds can be designed to be less computationally intensive (this is done by using a sparse matrix) than in full rounds, further optimizing the function's efficiency.
Users of Poseidon can often choose how many partial and full rounds they wish to apply; more full rounds will increase security but degrade performance. The choice and balance are highly dependent on the use case.
![Alt text](image-2.png)
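To make the round structure concrete, here is a minimal, self-contained Rust sketch of the permutation skeleton over a toy 64-bit prime field. The prime, round counts, constants and MDS matrix are placeholders for illustration; they are not secure parameters and are not the constants ICICLE ships.

```rust
const P: u128 = 0xffff_ffff_0000_0001; // toy 64-bit prime, illustration only

fn sbox(x: u128) -> u128 {
    // x^5 mod P (5 is the S-box exponent ICICLE supports); all values stay
    // below 2^64, so u128 products cannot overflow
    let x2 = x * x % P;
    let x4 = x2 * x2 % P;
    x4 * x % P
}

fn mds_mul(mds: &[Vec<u128>], state: &[u128]) -> Vec<u128> {
    mds.iter()
        .map(|row| {
            row.iter()
                .zip(state)
                .fold(0u128, |acc, (m, s)| (acc + m * s % P) % P)
        })
        .collect()
}

/// Poseidon permutation skeleton: half the full rounds, then the partial
/// rounds, then the remaining full rounds.
fn permutation(
    mut state: Vec<u128>,
    round_constants: &[Vec<u128>], // one vector of constants per round
    mds: &[Vec<u128>],
    full_rounds_half: usize,
    partial_rounds: usize,
) -> Vec<u128> {
    let total_rounds = 2 * full_rounds_half + partial_rounds;
    for r in 0..total_rounds {
        // 1. add round constants
        for (x, c) in state.iter_mut().zip(&round_constants[r]) {
            *x = (*x + c) % P;
        }
        // 2. S-box: every element in full rounds, a single element in partial rounds
        let is_full = r < full_rounds_half || r >= full_rounds_half + partial_rounds;
        if is_full {
            state.iter_mut().for_each(|x| *x = sbox(*x));
        } else {
            state[0] = sbox(state[0]);
        }
        // 3. linear layer: multiply by the MDS matrix
        state = mds_mul(mds, &state);
    }
    state
}
```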
## Using Poseidon
ICICLE Poseidon is implemented for the GPU, and parallelization is performed for each element of the state rather than for each state.
What that means is that we calculate multiple hash-sums over multiple pre-images in parallel, rather than going block by block over the input vector.
So for Poseidon of arity 2 and an input of size 1024 * 2, we would expect 1024 elements of output. Each block would be of size 2, resulting in 1024 Poseidon hashes being performed.
### Supported API
[`Rust`](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/rust/icicle-core/src/poseidon), [`C++`](https://github.com/ingonyama-zk/icicle/tree/main/icicle/appUtils/poseidon)
### Supported curves
Poseidon supports the following curves:
`bls12-377`, `bls12-381`, `bn-254`, `bw6-761`
### Constants
Poseidon is extremely customizable and using different constants will produce different hashes, security levels and performance results.
We support pre-calculated and optimized constants for each of the [supported curves](#supported-curves). The constants can be found [here](https://github.com/ingonyama-zk/icicle/tree/main/icicle/appUtils/poseidon/constants) and are clearly labeled per curve as `<curve_name>_poseidon.h`.
If you wish to generate your own constants you can use our python script which can be found [here](https://github.com/ingonyama-zk/icicle/blob/b6dded89cdef18348a5d4e2748b71ce4211c63ad/icicle/appUtils/poseidon/constants/generate_parameters.py#L1).
Prerequisites:
- Install python 3
- `pip install poseidon-hash`
- `pip install galois==0.3.7`
- `pip install numpy`
You will then need to modify the following values before running the script.
```python
# Modify these
arity = 11 # we support arity 2, 4, 8 and 11.
p = 0x73EDA753299D7D483339D80809A1D80553BDA402FFFE5BFEFFFFFFFF00000001 # bls12-381
# p = 0x12ab655e9a2ca55660b44d1e5c37b00159aa76fed00000010a11800000000001 # bls12-377
# p = 0x30644e72e131a029b85045b68181585d2833e84879b9709143e1f593f0000001 # bn254
# p = 0x1ae3a4617c510eac63b05c06ca1493b1a22d9f300f5138f1ef3622fba094800170b5d44300000008508c00000000001 # bw6-761
prime_bit_len = 255
field_bytes = 32
...
# primitive_element = None
primitive_element = 7 # bls12-381
# primitive_element = 22 # bls12-377
# primitive_element = 5 # bn254
# primitive_element = 15 # bw6-761
```
We only support `alpha = 5`, so if you want to use another alpha for the S-box, please reach out on Discord or open a GitHub issue.
### Rust API
This is the most basic way to use the Poseidon API.
```rust
// `F` is the scalar field of the curve in use.
let test_size = 1 << 10; // number of hashes to compute
let arity = 2u32;
let ctx = get_default_device_context();
let constants = load_optimized_poseidon_constants::<F>(arity, &ctx).unwrap();
let config = PoseidonConfig::default();
// Each hash consumes `arity` field elements, so the input holds
// test_size * arity elements and the output holds one digest per hash.
let inputs = vec![F::one(); test_size * arity as usize];
let outputs = vec![F::zero(); test_size];
let mut input_slice = HostOrDeviceSlice::on_host(inputs);
let mut output_slice = HostOrDeviceSlice::on_host(outputs);
poseidon_hash_many::<F>(
    &mut input_slice,
    &mut output_slice,
    test_size as u32, // number of hashes
    arity,
    &constants,
    &config,
)
.unwrap();
```
The `PoseidonConfig::default()` can be modified; by default, for example, the inputs and outputs are set to be on the `Host`.
```rust
impl<'a> Default for PoseidonConfig<'a> {
fn default() -> Self {
let ctx = get_default_device_context();
Self {
ctx,
are_inputs_on_device: false,
are_outputs_on_device: false,
input_is_a_state: false,
aligned: false,
loop_state: false,
is_async: false,
}
}
}
```
In the example above, `load_optimized_poseidon_constants::<F>(arity, &ctx).unwrap();` is used, which will load the correct constants based on arity and curve. It's also possible to [generate](#constants) your own constants and load them:
```rust
// `field_prefix`, `field_bytes`, `full_rounds_half` and `partial_rounds` are
// assumed to be defined elsewhere to match the constants file being loaded.
let ctx = get_default_device_context();
let cargo_manifest_dir = env!("CARGO_MANIFEST_DIR");
let constants_file = PathBuf::from(cargo_manifest_dir)
.join("tests")
.join(format!("{}_constants.bin", field_prefix));
let mut constants_buf = vec![];
File::open(constants_file)
.unwrap()
.read_to_end(&mut constants_buf)
.unwrap();
let mut custom_constants = vec![];
for chunk in constants_buf.chunks(field_bytes) {
custom_constants.push(F::from_bytes_le(chunk));
}
let custom_constants = create_optimized_poseidon_constants::<F>(
arity as u32,
&ctx,
full_rounds_half,
partial_rounds,
&mut custom_constants,
)
.unwrap();
```
For more examples using different configurations refer here.
## The Tree Builder
The tree builder allows you to build Merkle trees using Poseidon.
You can define both the tree's `height` and its `arity`. The tree `height` determines the number of layers in the tree, including the root and the leaf layer. The `arity` determines how many children each internal node can have.
```rust
let height = 20;
let arity = 2;
let leaves = vec![F::one(); 1 << (height - 1)];
let mut digests = vec![F::zero(); merkle_tree_digests_len(height, arity)];
let mut leaves_slice = HostOrDeviceSlice::on_host(leaves);
let ctx = get_default_device_context();
let constants = load_optimized_poseidon_constants::<F>(arity, &ctx).unwrap();
let mut config = TreeBuilderConfig::default();
config.keep_rows = 1;
build_poseidon_merkle_tree::<F>(&mut leaves_slice, &mut digests, height, arity, &constants, &config).unwrap();
println!("Root: {:?}", digests[0]);
```
Similar to Poseidon, you can also configure the Tree Builder via `TreeBuilderConfig::default()`:
- `keep_rows`: The number of rows which will be written to the output; `0` writes all rows.
- `are_inputs_on_device`: Have the inputs already been loaded to device memory?
- `is_async`: Should the TreeBuilder run asynchronously? `false` blocks the current CPU thread; `true` requires you to call `cudaStreamSynchronize` or `cudaDeviceSynchronize` to retrieve the result.
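For instance, here is a hedged sketch of an asynchronous build on a user-created stream. It assumes `TreeBuilderConfig` embeds a `DeviceContext` (`ctx`) like the other ICICLE configs, reuses the variables from the example above, and assumes `CudaStream::create`/`synchronize` from `icicle-cuda-runtime`.

```rust
let stream = CudaStream::create().unwrap();

let mut config = TreeBuilderConfig::default();
config.keep_rows = 1;
config.is_async = true;
config.ctx.stream = &stream; // run the build on our stream instead of the default

build_poseidon_merkle_tree::<F>(&mut leaves_slice, &mut digests, height, arity, &constants, &config).unwrap();

// The call returns immediately; the digests are valid only after synchronization.
stream.synchronize().unwrap();
```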
### Benchmarks
We ran the Poseidon tree builder on:
**CPU**: 12th Gen Intel(R) Core(TM) i9-12900K
**GPU**: RTX 3090 Ti
**Tree height**: 30 (2^29 elements)
The benchmarks include copying data from and to the device.
| Rows to keep parameter | Run time, Icicle | Supranational PC2 |
| ---------------------- | ---------------- | ----------------- |
| 10                     | 9.4 seconds      | 13.6 seconds      |
| 20                     | 9.5 seconds      | 13.6 seconds      |
| 29                     | 13.7 seconds     | 13.6 seconds      |

View File

@@ -0,0 +1,57 @@
# Rust bindings
Rust bindings allow you to use ICICLE as a Rust library.
`icicle-core` defines all interfaces, macros and common methods.
`icicle-cuda-runtime` defines `DeviceContext`, which can be used to manage a specific GPU as well as wrap common CUDA methods.
`icicle-curves` implements all interfaces and macros from `icicle-core` for each curve. For example, `icicle-bn254` implements the bn254 curve. Each curve has its own build script which will build the CUDA libraries for that curve as part of the Rust toolchain build.
## Using ICICLE Rust bindings in your project
Simply add the following to your `Cargo.toml`.
```toml
# GPU Icicle integration
icicle-cuda-runtime = { git = "https://github.com/ingonyama-zk/icicle.git" }
icicle-core = { git = "https://github.com/ingonyama-zk/icicle.git" }
icicle-bn254 = { git = "https://github.com/ingonyama-zk/icicle.git" }
```
`icicle-bn254` is the curve you wish to use, while `icicle-core` and `icicle-cuda-runtime` contain ICICLE utilities and CUDA wrappers.
If you wish to point to a specific ICICLE branch, add `branch = "<name_of_branch>"` or `tag = "<name_of_tag>"` to the ICICLE dependency. For a specific commit, add `rev = "<commit_id>"`.
When you build your project, ICICLE will be built as part of the build command.
## How do the Rust bindings work?
The Rust bindings are just Rust wrappers for the compiled ICICLE Core static libraries. We integrate the compilation of the static libraries into Rust's toolchain to make usage seamless and easy. This is achieved by [extending Rust's build command](https://github.com/ingonyama-zk/icicle/blob/main/wrappers/rust/icicle-curves/icicle-bn254/build.rs):
```rust
use cmake::Config;
use std::env::var;
fn main() {
println!("cargo:rerun-if-env-changed=CXXFLAGS");
println!("cargo:rerun-if-changed=../../../../icicle");
let cargo_dir = var("CARGO_MANIFEST_DIR").unwrap();
let profile = var("PROFILE").unwrap();
let out_dir = Config::new("../../../../icicle")
.define("BUILD_TESTS", "OFF") //TODO: feature
.define("CURVE", "bn254")
.define("CMAKE_BUILD_TYPE", "Release")
.build_target("icicle")
.build();
println!("cargo:rustc-link-search={}/build", out_dir.display());
println!("cargo:rustc-link-lib=ingo_bn254");
println!("cargo:rustc-link-lib=stdc++");
// println!("cargo:rustc-link-search=native=/usr/local/cuda/lib64");
println!("cargo:rustc-link-lib=cudart");
}
```

View File

@@ -0,0 +1,201 @@
# Multi GPU APIs
To learn more about the theory of Multi GPU programming refer to [this part](../multi-gpu.md) of documentation.
Here we will cover the core multi GPU APIs and an [example](#a-multi-gpu-example).
## Device management API
To streamline device management, we offer methods for dealing with devices as part of the `icicle-cuda-runtime` package.
#### [`set_device`](https://github.com/ingonyama-zk/icicle/blob/e6035698b5e54632f2c44e600391352ccc11cad4/wrappers/rust/icicle-cuda-runtime/src/device.rs#L6)
Sets the current CUDA device by its ID. Calling `set_device` binds the calling thread to the specified CUDA device.
**Parameters:**
- `device_id: usize`: The ID of the device to set as the current device. Device IDs start from 0.
**Returns:**
- `CudaResult<()>`: An empty result indicating success if the device is set successfully. In case of failure, returns a `CudaError`.
**Errors:**
- Returns a `CudaError` if the specified device ID is invalid or if a CUDA-related error occurs during the operation.
**Example:**
```rust
let device_id = 0; // Device ID to set
match set_device(device_id) {
Ok(()) => println!("Device set successfully."),
Err(e) => eprintln!("Failed to set device: {:?}", e),
}
```
#### [`get_device_count`](https://github.com/ingonyama-zk/icicle/blob/e6035698b5e54632f2c44e600391352ccc11cad4/wrappers/rust/icicle-cuda-runtime/src/device.rs#L10)
Retrieves the number of CUDA devices available on the machine.
**Returns:**
- `CudaResult<usize>`: The number of available CUDA devices. On success, contains the count of CUDA devices. On failure, returns a `CudaError`.
**Errors:**
- Returns a `CudaError` if a CUDA-related error occurs during the retrieval of the device count.
**Example:**
```rust
match get_device_count() {
Ok(count) => println!("Number of devices available: {}", count),
Err(e) => eprintln!("Failed to get device count: {:?}", e),
}
```
#### [`get_device`](https://github.com/ingonyama-zk/icicle/blob/e6035698b5e54632f2c44e600391352ccc11cad4/wrappers/rust/icicle-cuda-runtime/src/device.rs#L15)
Retrieves the ID of the current CUDA device.
**Returns:**
- `CudaResult<usize>`: The ID of the current CUDA device. On success, contains the device ID. On failure, returns a `CudaError`.
**Errors:**
- Returns a `CudaError` if a CUDA-related error occurs during the retrieval of the current device ID.
**Example:**
```rust
match get_device() {
Ok(device_id) => println!("Current device ID: {}", device_id),
Err(e) => eprintln!("Failed to get current device: {:?}", e),
}
```
## Device context API
The `DeviceContext` is embedded into `NTTConfig`, `MSMConfig` and `PoseidonConfig`, meaning you can simply pass a `device_id` to your existing config and the same computation will be triggered on a different device.
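For example, here is a hedged sketch of moving an NTT to the second GPU just by changing the config's device. It assumes `NTTConfig` provides a `default_for_device` constructor analogous to the `MSMConfig` one used later on this page, and an `ntt` entry point with the signature shown; check your version of the bindings.

```rust
// Build the config for GPU 1 instead of the default GPU 0; nothing else changes.
let cfg = NTTConfig::default_for_device(1);
ntt(&scalars, NTTDir::kForward, &cfg, &mut ntt_results).unwrap();
```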
#### [`DeviceContext`](https://github.com/ingonyama-zk/icicle/blob/e6035698b5e54632f2c44e600391352ccc11cad4/wrappers/rust/icicle-cuda-runtime/src/device_context.rs#L11)
Represents the configuration of a CUDA device, encapsulating the device's stream, ID, and memory pool. The default device is always `0`.
```rust
pub struct DeviceContext<'a> {
pub stream: &'a CudaStream,
pub device_id: usize,
pub mempool: CudaMemPool,
}
```
##### Fields
- **`stream: &'a CudaStream`**
A reference to a `CudaStream`. This stream is used for executing CUDA operations. By default, it points to the null stream, CUDA's default execution stream.
- **`device_id: usize`**
The index of the GPU currently in use. The default value is `0`, indicating the first GPU in the system.
In some cases, assuming `CUDA_VISIBLE_DEVICES` was configured, for example as `CUDA_VISIBLE_DEVICES=2,3,7` on a system with 8 GPUs, `device_id=0` will correspond to the GPU with ID 2. So the mapping may not always be a direct reflection of which GPUs are installed on the system.
- **`mempool: CudaMemPool`**
Represents the memory pool used for CUDA memory allocations. The default is set to a null pointer, which signifies the use of the default CUDA memory pool.
##### Implementation Notes
- The `DeviceContext` structure is cloneable and can be debugged, facilitating easier logging and duplication of contexts when needed.
#### [`DeviceContext::default_for_device(device_id: usize) -> DeviceContext<'static>`](https://github.com/ingonyama-zk/icicle/blob/e6035698b5e54632f2c44e600391352ccc11cad4/wrappers/rust/icicle-cuda-runtime/src/device_context.rs#L30)
Provides a default `DeviceContext` for a given device, ideal for straightforward setups.
#### Parameters
- **`device_id: usize`**: The ID of the device for which to create the context.
#### Returns
A `DeviceContext` instance configured with:
- The provided `device_id`.
- The default stream (`null_mut()`).
- The default memory pool (`null_mut()`).
#### [`check_device(device_id: i32)`](https://github.com/vhnatyk/icicle/blob/eef6876b037a6b0797464e7cdcf9c1ecfcf41808/wrappers/rust/icicle-cuda-runtime/src/device_context.rs#L42)
Validates that the specified `device_id` matches the ID of the currently active device, ensuring operations are targeted correctly.
#### Parameters
- **`device_id: i32`**: The device ID to verify against the currently active device.
#### Behavior
- **Panics** if the `device_id` does not match the active device's ID, preventing cross-device operation errors.
#### Example
```rust
let device_id: i32 = 0; // Example device ID
check_device(device_id);
// Ensures that the current context is correctly set for the specified device ID.
```
## A Multi GPU example
In this example we will show how you can:
1. Fetch the number of devices installed on a machine.
2. Launch a thread for every GPU and set an active device per thread.
3. Execute an MSM on each GPU.
```rust
...
// `stream`, `scalars_d`, `points` and `msm_results` are created in the elided setup above.
let device_count = get_device_count().unwrap();
(0..device_count)
.into_par_iter()
.for_each(move |device_id| {
set_device(device_id).unwrap();
// you can allocate points and scalars_d here
let mut cfg = MSMConfig::default_for_device(device_id);
cfg.ctx.stream = &stream;
cfg.is_async = true;
cfg.are_scalars_montgomery_form = true;
msm(&scalars_d, &HostOrDeviceSlice::on_host(points), &cfg, &mut msm_results).unwrap();
// collect and process results
})
...
```
We use `get_device_count` to fetch the number of connected devices; device IDs will be `0...device_count-1`.
[`into_par_iter`](https://docs.rs/rayon/latest/rayon/iter/trait.IntoParallelIterator.html#tymethod.into_par_iter) is a parallel iterator; you should expect the iterations to run concurrently, each on its own thread.
We then call `set_device(device_id).unwrap();`, which sets the context of that thread to the selected `device_id`.
Any data you now allocate from the context of this thread will be linked to the `device_id`. We create our `MSMConfig` with the selected device ID, `let mut cfg = MSMConfig::default_for_device(device_id);`; behind the scenes this creates a `DeviceContext` configured for that specific GPU.
We finally call our `msm` method.
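One caveat: since the example sets `cfg.is_async = true`, `msm` only enqueues work on the stream, so the results must be awaited before use. A hedged sketch, assuming `stream` is the `CudaStream` created in the elided setup:

```rust
// Block until all work queued on this stream (including the MSM) has finished.
stream.synchronize().unwrap();
```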

View File

@@ -0,0 +1,86 @@
# Supporting Additional Curves
We understand the need for ZK developers to use different curves, some common, some more exotic. For this reason we designed ICICLE to allow developers to add any curve they desire.
## ICICLE Core
ICICLE Core is very generic by design, so all algorithms and primitives are designed to work based on configuration files [selected at compile time](https://github.com/ingonyama-zk/icicle/blob/main/icicle/curves/curve_config.cuh). This is why we compile ICICLE Core per curve.
To add support for a new curve, you must create a new file under [`icicle/curves`](https://github.com/ingonyama-zk/icicle/tree/main/icicle/curves). The file should be named `<curve_name>_params.cuh`.
We also require some changes to [`curve_config.cuh`](https://github.com/ingonyama-zk/icicle/blob/main/icicle/curves/curve_config.cuh#L16-L29): we need to add a new curve ID.
```cpp
...
#define BN254 1
#define BLS12_381 2
#define BLS12_377 3
#define BW6_761 4
#define GRUMPKIN 5
#define <curve_name> 6
...
```
Make sure to modify the [rest of the file](https://github.com/ingonyama-zk/icicle/blob/4beda3a900eda961f39af3a496f8184c52bf3b41/icicle/curves/curve_config.cuh#L16-L29) accordingly.
Finally, we must modify [`CMakeLists.txt`](https://github.com/ingonyama-zk/icicle/blob/main/icicle/CMakeLists.txt#L64) to make sure we can compile our new curve.
```cmake
set(SUPPORTED_CURVES bn254;bls12_381;bls12_377;bw6_761;<curve_name>)
```
## Bindings
To support a new curve in the binding libraries, you first must support it in ICICLE Core.
### Rust
Create a new folder named `icicle-<curve_name>` under the [rust wrappers folder](https://github.com/ingonyama-zk/icicle/tree/main/wrappers/rust/icicle-curves). Your new directory should look like this.
```
└── rust
    └── icicle-curves
        └── icicle-<curve_name>
            ├── Cargo.toml
            ├── build.rs
            └── src
                ├── curve.rs
                ├── lib.rs
                ├── msm
                │   └── mod.rs
                └── ntt
                    └── mod.rs
```
Let's look at [`ntt/mod.rs`](https://github.com/ingonyama-zk/icicle/blob/main/wrappers/rust/icicle-curves/icicle-bn254/src/ntt/mod.rs) as an example.
```rust
...
extern "C" {
#[link_name = "bn254NTTCuda"]
fn ntt_cuda<'a>(
input: *const ScalarField,
size: usize,
is_inverse: bool,
config: &NTTConfig<'a, ScalarField>,
output: *mut ScalarField,
) -> CudaError;
#[link_name = "bn254DefaultNTTConfig"]
fn default_ntt_config() -> NTTConfig<'static, ScalarField>;
#[link_name = "bn254InitializeDomain"]
fn initialize_ntt_domain(primitive_root: ScalarField, ctx: &DeviceContext) -> CudaError;
}
...
```
Here you would need to replace `bn254NTTCuda` with `<curve_name>NTTCuda`. Most of these changes are pretty straightforward. One thing you should pay attention to is limb sizes, as these change for different curves. For example, `BN254` [has a limb size of 8](https://github.com/ingonyama-zk/icicle/blob/4beda3a900eda961f39af3a496f8184c52bf3b41/wrappers/rust/icicle-curves/icicle-bn254/src/curve.rs#L15), but for your curve this may be different.
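As a purely hypothetical illustration of what the limb definition in your new `curve.rs` might look like for a 254-bit scalar field (the constant name and value here are assumptions, not ICICLE's actual code):

```rust
// 8 limbs of 32 bits = 256 bits, enough to hold a 254-bit scalar such as bn254's.
// A wider field, e.g. bw6-761's, would need correspondingly more limbs.
pub(crate) const SCALAR_LIMBS: usize = 8;
```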
### Golang
Golang bindings are WIP in v1 and coming soon. Please check out a previous [release v0.1.0](https://github.com/ingonyama-zk/icicle/releases/tag/v0.1.0) for Golang bindings.

47
docs/docs/introduction.md Normal file
View File

@@ -0,0 +1,47 @@
---
slug: /
displayed_sidebar: GettingStartedSidebar
title: ''
---
# Welcome to Ingonyama's Developer Documentation
Ingonyama is a next-generation semiconductor company, focusing on Zero-Knowledge Proof hardware acceleration. We build accelerators for advanced cryptography, unlocking real-time applications. Our focus is on democratizing access to compute intensive cryptography and making it accessible for developers to build on top of.
Currently our flagship products are:
- **ICICLE**:
[ICICLE](https://github.com/ingonyama-zk/icicle) is a fully featured GPU accelerated cryptography library for building ZK provers. ICICLE allows you to accelerate your existing ZK protocols in a matter of hours, or to implement your protocol from scratch on GPU.
---
## Our current take on hardware acceleration
We believe GPUs are as important for ZK as for AI.
- GPUs are a perfect match for ZK compute - around 97% of ZK protocol runtime is parallel by nature.
- GPUs are simple for developers to use and scale compared to other hardware platforms.
- GPUs are extremely competitive in terms of power / performance and price (3x cheaper compared to FPGAs).
- GPUs are popular and readily available.
For a more in-depth understanding on this topic we suggest you read [our article on the subject](https://www.ingonyama.com/blog/revisiting-paradigm-hardware-acceleration-for-zero-knowledge-proofs).
Despite our current focus on GPUs we are still hard at work developing a ZPU (ZK Processing Unit), with the goal of offering a programmable hardware platform for ZK. To read more about ZPUs we suggest you read this [article](https://medium.com/@ingonyama/zpu-the-zero-knowledge-processing-unit-f886a48e00e0).
## ICICLE
[ICICLE](https://github.com/ingonyama-zk/icicle) is a cryptography library for ZK using GPUs.
ICICLE implements blazing fast cryptographic primitives such as EC operations, MSM, NTT, Poseidon hash and more on GPU.
ICICLE is designed to be easy to use, developers don't have to touch a single line of CUDA code. Our Rust and Golang bindings allow your team to transition from CPU to GPU with minimal changes.
Learn more about ICICLE and GPUs [here][ICICLE-OVERVIEW].
## Get in Touch
If you have any questions, ideas, or are thinking of building something in this space join the discussion on [Discord]. You can explore our code on [github](https://github.com/ingonyama-zk) or read some of [our research papers](https://github.com/ingonyama-zk/papers).
Follow us on [Twitter](https://x.com/Ingo_zk) and [YouTube](https://www.youtube.com/@ingo_ZK) and sign up for our [mailing list](https://wkf.ms/3LKCbdj) to get our latest announcements.
[ICICLE-OVERVIEW]: ./icicle/overview.md
[Discord]: https://discord.gg/6vYrE7waPj

171
docs/docusaurus.config.js Normal file
View File

@@ -0,0 +1,171 @@
// @ts-check
const lightCodeTheme = require('prism-react-renderer/themes/github');
const darkCodeTheme = require('prism-react-renderer/themes/dracula');
const math = require('remark-math');
const katex = require('rehype-katex');
/** @type {import('@docusaurus/types').Config} */
const config = {
title: 'Ingonyama Developer Documentation',
tagline: 'Ingonyama is a next-generation semiconductor company, focusing on Zero-Knowledge Proof hardware acceleration. We build accelerators for advanced cryptography, unlocking real-time applications.',
url: 'https://dev.ingonyama.com/',
baseUrl: '/',
onBrokenLinks: 'throw',
onBrokenMarkdownLinks: 'warn',
favicon: 'img/logo.png',
organizationName: 'ingonyama-zk',
projectName: 'developer-docs',
trailingSlash: false,
deploymentBranch: "main",
presets: [
[
'classic',
/** @type {import('@docusaurus/preset-classic').Options} */
({
docs: {
showLastUpdateAuthor: true,
showLastUpdateTime: true,
routeBasePath: '/',
remarkPlugins: [math, require('mdx-mermaid')],
rehypePlugins: [katex],
sidebarPath: require.resolve('./sidebars.js'),
editUrl: 'https://github.com/ingonyama-zk/developer-docs/tree/main',
},
blog: {
remarkPlugins: [math, require('mdx-mermaid')],
rehypePlugins: [katex],
showReadingTime: true,
editUrl: 'https://github.com/ingonyama-zk/developer-docs/tree/main',
},
pages: {},
theme: {
customCss: require.resolve('./src/css/custom.css'),
},
}),
],
],
stylesheets: [
{
href: 'https://cdn.jsdelivr.net/npm/katex@0.13.24/dist/katex.min.css',
type: 'text/css',
integrity:
'sha384-odtC+0UGzzFL/6PNoE8rX/SPcQDXBJ+uRepguP4QkPCm2LBxH3FA3y+fKSiJ+AmM',
crossorigin: 'anonymous',
},
],
scripts: [
{
src: 'https://plausible.io/js/script.js',
'data-domain':'ingonyama.com',
defer: true,
},
],
themeConfig:
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
metadata: [
{name: 'twitter:card', content: 'summary_large_image'},
{name: 'twitter:site', content: '@Ingo_zk'},
{name: 'twitter:title', content: 'Ingonyama Developer Documentation'},
{name: 'twitter:description', content: 'Ingonyama is a next-generation semiconductor company focusing on Zero-Knowledge Proof hardware acceleration...'},
{name: 'twitter:image', content: 'https://dev.ingonyama.com/img/logo.png'},
// title
{name: 'og:title', content: 'Ingonyama Developer Documentation'},
{name: 'og:description', content: 'Ingonyama is a next-generation semiconductor company focusing on Zero-Knowledge Proof hardware acceleration...'},
{name: 'og:image', content: 'https://dev.ingonyama.com/img/logo.png'},
],
hideableSidebar: true,
colorMode: {
defaultMode: 'dark',
respectPrefersColorScheme: false,
},
algolia: {
// The application ID provided by Algolia
appId: 'PZY4KJBBBK',
// Public API key: it is safe to commit it
apiKey: '2cc940a6e0ef5c117f4f44e7f4e6e20b',
indexName: 'ingonyama',
// Optional: see doc section below
contextualSearch: true,
// Optional: Specify domains where the navigation should occur through window.location instead on history.push. Useful when our Algolia config crawls multiple documentation sites and we want to navigate with window.location.href to them.
externalUrlRegex: 'external\\.com|domain\\.com',
// Optional: Replace parts of the item URLs from Algolia. Useful when using the same search index for multiple deployments using a different baseUrl. You can use regexp or string in the `from` param. For example: localhost:3000 vs myCompany.com/docs
replaceSearchResultPathname: {
from: '/docs/', // or as RegExp: /\/docs\//
to: '/',
},
// Optional: Algolia search parameters
searchParameters: {},
// Optional: path for search page that enabled by default (`false` to disable it)
searchPagePath: 'search',
},
navbar: {
title: 'Ingonyama Developer Documentation',
logo: {
alt: 'Ingonyama Logo',
src: 'img/logo.png',
},
items: [
{
position: 'left',
label: 'Docs',
to: '/',
},
{
href: 'https://github.com/ingonyama-zk',
position: 'right',
label: 'GitHub',
},
{
href: 'https://www.ingonyama.com/ingopedia/glossary',
position: 'right',
label: 'Ingopedia',
},
{
type: 'dropdown',
position: 'right',
label: 'Community',
items: [
{
label: 'Discord',
href: 'https://discord.gg/6vYrE7waPj',
},
{
label: 'Twitter',
href: 'https://x.com/Ingo_zk',
},
{
label: 'YouTube',
href: 'https://www.youtube.com/@ingo_ZK'
},
{
label: 'Mailing List',
href: 'https://wkf.ms/3LKCbdj',
}
]
},
],
},
footer: {
copyright: `Copyright © ${new Date().getFullYear()} Ingonyama, Inc. Built with Docusaurus.`,
},
prism: {
theme: lightCodeTheme,
darkTheme: darkCodeTheme,
},
image: 'img/logo.png',
}),
};
module.exports = config;

13681
docs/package-lock.json generated Normal file

File diff suppressed because it is too large

48
docs/package.json Normal file
View File

@@ -0,0 +1,48 @@
{
"name": "docusaurus",
"version": "0.0.0",
"private": true,
"description": "Ingonyama - developer docs",
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
"serve": "docusaurus serve",
"write-translations": "docusaurus write-translations",
"write-heading-ids": "docusaurus write-heading-ids",
"dev": "docusaurus start",
"format": "prettier --write '**/*.md'"
},
"dependencies": {
"@docusaurus/core": "2.0.0-beta.18",
"@docusaurus/preset-classic": "2.0.0-beta.18",
"@mdx-js/react": "^1.6.22",
"clsx": "^1.1.1",
"hast-util-is-element": "1.1.0",
"mdx-mermaid": "^1.2.2",
"mermaid": "^9.1.2",
"prism-react-renderer": "^1.3.1",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"rehype-katex": "5",
"remark-math": "3"
},
"browserslist": {
"production": [
">0.5%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
},
"devDependencies": {
"prettier": "^3.2.4"
}
}

10
docs/sandbox.config.json Normal file
View File

@@ -0,0 +1,10 @@
{
"infiniteLoopProtection": true,
"hardReloadOnChange": true,
"view": "browser",
"template": "docusaurus",
"node": "14",
"container": {
"node": "14"
}
}

134
docs/sidebars.js Normal file
View File

@@ -0,0 +1,134 @@
module.exports = {
GettingStartedSidebar: [
{
type: "doc",
label: "Introduction",
id: "introduction",
},
{
type: "category",
label: "ICICLE",
link: {
type: `doc`,
id: 'icicle/overview',
},
collapsed: false,
items: [
{
type: "doc",
label: "Getting started",
id: "icicle/introduction"
},
{
type: "doc",
label: "ICICLE Provers",
id: "icicle/integrations"
},
{
type: "doc",
label: "Golang bindings",
id: "icicle/golang-bindings",
},
{
type: "category",
label: "Rust bindings",
link: {
type: `doc`,
id: "icicle/rust-bindings",
},
collapsed: true,
items: [
{
type: "doc",
label: "Multi GPU Support",
id: "icicle/rust-bindings/multi-gpu",
}
]
},
{
type: "category",
label: "Primitives",
link: {
type: `doc`,
id: 'icicle/primitives/overview',
},
collapsed: true,
items: [
{
type: "doc",
label: "MSM",
id: "icicle/primitives/msm",
},
{
type: "doc",
label: "Poseidon Hash",
id: "icicle/primitives/poseidon",
},
{
type: "doc",
label: "NTT",
id: "icicle/primitives/ntt",
}
],
},
{
type: "doc",
label: "Multi GPU Support",
id: "icicle/multi-gpu",
},
{
type: "doc",
label: "Supporting additional curves",
id: "icicle/supporting-additional-curves",
},
{
type: "doc",
label: "Google Colab Instructions",
id: "icicle/colab-instructions",
},
]
},
{
type: "doc",
label: "ZK Containers",
id: "ZKContainers",
},
{
type: "doc",
label: "Ingonyama Grant program",
id: "grants",
},
{
type: "doc",
label: "Contributor guide",
id: "contributor-guide",
},
{
type: "category",
label: "Additional Resources",
collapsed: false,
items: [
{
type: "link",
label: "YouTube",
href: "https://www.youtube.com/@ingo_ZK"
},
{
type: "link",
label: "Ingonyama Blog",
href: "https://www.ingonyama.com/blog"
},
{
type: "link",
label: "Ingopedia",
href: "https://www.ingonyama.com/ingopedia"
},
{
href: 'https://github.com/ingonyama-zk',
type: "link",
label: 'GitHub',
}
]
}
],
};

59
docs/src/css/custom.css Normal file
View File

@@ -0,0 +1,59 @@
/**
* Any CSS included here will be global. The classic template
* bundles Infima by default. Infima is a CSS framework designed to
* work well for content-centric websites.
*/
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #FFCB00;
--ifm-color-primary-dark: #FFCB00;
--ifm-color-primary-darker: #FFCB00;
--ifm-color-primary-darkest: #FFCB00;
--ifm-color-primary-light: #FFCB00;
--ifm-color-primary-lighter: #FFCB00;
--ifm-color-primary-lightest: #FFCB00;
--ifm-code-font-size: 95%;
}
/* For readability concerns, you should choose a lighter palette in dark mode. */
[data-theme='dark'] {
--ifm-color-primary: #FFCB00;
--ifm-color-primary-dark: #FFCB00;
--ifm-color-primary-darker:#FFCB00;
--ifm-color-primary-darkest: #FFCB00;
--ifm-color-primary-light:#FFCB00;
--ifm-color-primary-lighter: #FFCB00;
--ifm-color-primary-lightest: #FFCB00;
}
.docusaurus-highlight-code-line {
background-color: rgba(0, 0, 0, 0.1);
display: block;
margin: 0 calc(-1 * var(--ifm-pre-padding));
padding: 0 var(--ifm-pre-padding);
}
[data-theme='dark'] .docusaurus-highlight-code-line {
background-color: rgba(0, 0, 0, 0.3);
}
/* Mermaid elements must be changed to be visible in dark mode */
[data-theme='dark'] .mermaid .messageLine0, .messageLine1 {
filter: invert(51%) sepia(84%) saturate(405%) hue-rotate(21deg) brightness(94%) contrast(91%) !important;
}
/* NOTE Must be a separate specification from the above or it won't toggle off */
[data-theme='dark'] .mermaid .flowchart-link {
filter: invert(51%) sepia(84%) saturate(405%) hue-rotate(21deg) brightness(94%) contrast(91%) !important;
}
[data-theme='dark'] .mermaid .cluster-label {
filter: invert(51%) sepia(84%) saturate(405%) hue-rotate(21deg) brightness(94%) contrast(91%) !important;
}
[data-theme='dark'] .mermaid .messageText {
stroke:none !important; fill:white !important;
}
/* Our additions */
.anchor {
scroll-margin-top: 50pt;
}

0
docs/static/.nojekyll vendored Normal file
View File

BIN  docs/static/img/apilevels.png vendored Normal file (binary file not shown; 170 KiB)
BIN  (unnamed image; binary file not shown; 103 KiB)
BIN  (unnamed image; binary file not shown; 76 KiB)
BIN  (unnamed image; binary file not shown; 204 KiB)
BIN  docs/static/img/colab_change_runtime.png vendored Normal file (binary file not shown; 26 KiB)
BIN  docs/static/img/logo.png vendored Normal file (binary file not shown; 116 KiB)
BIN  docs/static/img/t4_gpu.png vendored Normal file (binary file not shown; 34 KiB)

View File

@@ -2,10 +2,10 @@
status=0
if [[ $(codespell --skip ./**/target,./**/build -I .codespellignore 2>&1) ]];
if [[ $(codespell --skip ./**/target,./**/build,./docs/*.js,./docs/*.json -I .codespellignore 2>&1) ]];
then
echo "There are typos in some of the files you've changed. Please run the following to check what they are:"
echo "codespell --skip ./**/target,./**/build -I .codespellignore"
echo "codespell --skip ./**/target,./**/build,./docs/*.js,./docs/*.json -I .codespellignore"
echo ""
status=1
fi

View File

@@ -31,7 +31,5 @@ cargo bench
To run test
```sh
cargo test -- --test-threads=1
cargo test
```
The flag `--test-threads=1` is needed because currently some tests might interfere with one another inside the GPU.