Compare commits

...

1244 Commits

Author SHA1 Message Date
Baptiste Roux
52b8e81ccb fix(hpu): Correctly select adder configuration in ERC_20/ERC_20_SIMD
Add knobs to select the ripple-carry or Kogge-Stone adder in ERC_20/ERC_20_SIMD.
Previously, it was hardcoded to ripple carry, which degraded the latency
performance of ERC_20.
2025-12-24 10:38:38 +01:00
Baptiste Roux
b19a7773bb feat: Add IfThenZero impl for Cpu 2025-12-24 10:38:38 +01:00
pgardratzama
0342b0466d chore(hpu): fix panic msg 2025-12-24 10:38:38 +01:00
pgardratzama
edc9ef0026 fix(hpu): fix whitepaper erc20 for HPU using if_then_zero 2025-12-24 10:38:38 +01:00
Guillermo Oyarzun
92df46f8f2 fix(gpu): return to 64 regs in multi-bit pbs 2025-12-23 11:51:00 +01:00
David Testé
effb7ada6d chore(ci): fix argument name passed to data_extractor 2025-12-18 18:09:34 +01:00
Agnes Leroy
49be544297 fix(gpu): fix cpu memory leak in expand and rerand 2025-12-18 16:33:23 +01:00
David Testé
23600eb8e1 chore(ci): split gpu documentation benchmarks execution
This is done to mitigate H100x8-SXM5 server scarcity.
2025-12-18 14:56:15 +01:00
Agnes Leroy
9708cc7fe9 chore(gpu): remove core crypto from valgrind run 2025-12-18 13:01:12 +01:00
dependabot[bot]
4cdfccb659 chore(deps): bump actions/setup-node from 6.0.0 to 6.1.0
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 6.0.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](2028fbc5c2...395ad32622)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-18 11:05:56 +01:00
dependabot[bot]
031c3fe34f chore(deps): bump actions/checkout from 6.0.0 to 6.0.1
Bumps [actions/checkout](https://github.com/actions/checkout) from 6.0.0 to 6.0.1.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](1af3b93b68...8e8c483db8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-18 11:05:47 +01:00
dependabot[bot]
ea99307cf5 chore(deps): bump actions/stale from 10.1.0 to 10.1.1
Bumps [actions/stale](https://github.com/actions/stale) from 10.1.0 to 10.1.1.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](5f858e3efb...997185467f)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-18 11:05:36 +01:00
Enzo Di Maria
ca2a79f1fb refactor(gpu): Threshold for multi-GPU with Classical PBS 2025-12-18 09:27:09 +01:00
Enzo Di Maria
0a59e86675 fix(gpu): Using tbc for classical 64 bits pbs on H100 2025-12-17 19:18:01 +01:00
Nicolas Sarlin
312ce494bf chore(zk): add 1 * 64 benches with production CRS 2025-12-17 15:06:37 +01:00
Nicolas Sarlin
5f2e7e31f1 chore(zk): align wasm bench and integer bench 2025-12-17 15:06:37 +01:00
Agnes Leroy
cfa53682ae fix(gpu): add missing sync before free in oprf 2025-12-16 09:42:11 +01:00
Agnes Leroy
006d6cc300 fix(gpu): fix some cpu memory leaks 2025-12-15 14:27:35 -03:00
David Testé
b950b551e6 chore(ci): update node version to 24.12
To be able to get an npm version that allows trusted publishing
(>v11.5.1).
2025-12-15 16:08:29 +01:00
David Testé
95524966ca chore(ci): handle push to crates.io input in release workflow 2025-12-15 16:08:29 +01:00
Thomas Montaigu
d394af7f4d chore: bump dyn-stack to 0.13
Notable changes:
- StackReq methods no longer return Result<StackReq, SizeOverflow>;
  instead, the StackReq itself carries the invalid state.
  Now, it is when we create a PodBuffer that we can check/catch an
  invalid size requirement, by catching errors when calling
  `PodBuffer::try_new`. It is also possible to manually check that
  `stack_req != StackReq::OVERFLOW`.

- GlobalPodBuffer is now PodBuffer
2025-12-15 10:02:17 +01:00
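A minimal sketch of the checking pattern described in this commit message; the `StackReq::OVERFLOW` sentinel and `PodBuffer::try_new` names are taken from the message above, while the exact dyn-stack 0.13 signatures used here are an assumption, not a verified API:

```rust
// Sketch only: illustrates the dyn-stack 0.13 overflow-handling pattern
// described in the commit message above. Exact signatures are assumed.
use dyn_stack::{PodBuffer, StackReq};

fn alloc_scratch(n: usize) -> Option<PodBuffer> {
    // Building the requirement no longer returns Result<_, SizeOverflow>;
    // an over-large request is carried inside the StackReq itself.
    let req = StackReq::new::<u64>(n);

    // Either check the sentinel manually...
    if req == StackReq::OVERFLOW {
        return None;
    }
    // ...or catch the error when the buffer is actually allocated.
    PodBuffer::try_new(req).ok()
}
```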
Andrei Stoian
78d1ce18c1 feat(gpu): support keyswitch 64/32 2025-12-12 22:01:49 +01:00
Arthur Meyre
14d49f0891 chore: add possibility to manually populate tfhe FFT plan cache 2025-12-12 19:06:47 +01:00
Nicolas Sarlin
e544dfc08e fix(integer): handle large string size in DataKind::num_blocks 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
5891a4d78a fix(hl): handles empty compact lists 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
f17cd9bd37 fix(integer): handle message mod 0 in num_blocks 2025-12-12 13:10:14 +01:00
Thomas Montaigu
c083eb826d fix(zk-pok): Check Modulus of deserialized Fp 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
1479315725 fix(integer): handles num_blocks_per_integer being 0 in ct list upgrade 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
b5e5058759 fix(core): handle lwe dim of 0 when computing ct list size 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
d98033c71d fix(integer): check overflows when computing expected list size 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
c7b869c956 fix(core): use saturating_* to convert from lwe dim and size 2025-12-12 13:10:14 +01:00
Nicolas Sarlin
50b76817c9 fix(shortint): use saturating_sub to get degree from message modulus 2025-12-12 13:10:14 +01:00
David Testé
238f7d51f6 chore(ci): fix data extraction on backends comparison for docs 2025-12-12 12:24:47 +01:00
Thomas Montaigu
aa49d141c7 feat: add MetaParameterFinder 2025-12-12 11:20:03 +01:00
Thomas Montaigu
be1de6ef2b chore: impl Versionize for MetaParameters 2025-12-12 11:20:03 +01:00
Guillermo Oyarzun
11579bd3d0 feat(gpu): create noise and pfail tests for pbs + ks + ms 2025-12-12 09:41:11 +01:00
Agnes Leroy
b7a706a3db chore(bench): remove constraint in pcc to not use trivial name in bench 2025-12-11 14:41:07 +01:00
Agnes Leroy
8e4bec0b2a chore(bench): modify whitepaper erc20 to match newest litepaper version 2025-12-11 14:41:07 +01:00
David Testé
641d4988d7 chore(ci): use run-name to distinguish benchmark runs
This creates a run name in the GitHub Actions tab based on the inputs provided rather than just displaying the workflow filename.
2025-12-11 13:32:46 +01:00
Arthur Meyre
403cabb70a chore: bump CSPRNG to 0.8.0 since MSRV was bumped 2025-12-11 13:12:36 +01:00
Arthur Meyre
63b46c3b99 chore: bump tfhe-versionable to 0.7 since the MSRV was changed 2025-12-11 13:12:36 +01:00
Arthur Meyre
a5aa3c366f chore: bump tfhe-hpu-backend 2025-12-11 13:12:36 +01:00
Arthur Meyre
5818b08f2c chore: bump tfhe-cuda-backend to 0.13.0 2025-12-11 13:12:36 +01:00
Arthur Meyre
de439ff6a1 chore: fix repo name in hpu README 2025-12-11 13:12:36 +01:00
Enzo Di Maria
cf969ff930 refactor(gpu): creating benchmarks for match_value 2025-12-11 12:01:43 +01:00
David Testé
e969fb5a1a refactor(ci): make data extractor formatters modular on layers
This declutters the formatter.py file and will help add formatting
cases that are specific to a layer, like ZK-PoK benchmarks on the
integer layer.
2025-12-11 09:35:56 +01:00
Agnes Leroy
68fa268d99 chore(gpu): increase timeout for valgrind run 2025-12-11 08:58:27 +01:00
Agnes Leroy
1f0a83e4bb fix(gpu): fix some CPU memory leaks due to the use of new without delete 2025-12-11 08:58:13 +01:00
Thomas Montaigu
4f14343668 chore: impl Versionize for NormalizedHammingWeightBound 2025-12-10 10:41:37 +01:00
Thomas Montaigu
329677bd26 chore(makefile): warnings as error for tfhe_lints
So that the CI fails, just like it does for clippy
2025-12-10 10:41:37 +01:00
Arthur Meyre
60d92c4de0 chore: add main README link check to fpcc recipe 2025-12-09 16:33:37 +01:00
Arthur Meyre
8888c046de chore: make fpcc actually kind of fast
- fix install_typos.sh script
2025-12-09 16:33:37 +01:00
Arthur Meyre
4da1e19d5c chore: fix typo in js wasm API doc 2025-12-09 16:33:37 +01:00
Arthur Meyre
ca6999cb16 chore: fix links in README.md 2025-12-09 16:33:37 +01:00
David Testé
84d5955bf8 chore(ci): fix env variable values in cpu weekly benchmarks 2025-12-09 14:32:15 +01:00
David Testé
067070942f chore(ci): fix data_extractor related secret names 2025-12-09 14:32:01 +01:00
David Testé
b04585fa10 chore(ci): update feature on perf regression command generation
AVX512 feature is not considered a nightly one anymore.
2025-12-09 14:32:01 +01:00
David Testé
b20a33ca6c chore(ci): use pip-install option from setup-python action
This avoids calling pip directly in the steps using Python.
2025-12-09 14:32:01 +01:00
David Testé
5eb4cc5a22 chore(bench): add fast benchmark capability for hlapi
Run only a small subset of the current benchmarks to speed up developers' feedback
2025-12-09 11:34:53 +01:00
David Testé
b3dde277bc chore(ci): install linelint on github hosted runner instance
This is done to be able to run linelint for pull-requests from external contributors.
2025-12-09 10:49:09 +01:00
dependabot[bot]
781ca60e33 chore(deps): bump zizmorcore/zizmor-action from 0.2.0 to 0.3.0
Bumps [zizmorcore/zizmor-action](https://github.com/zizmorcore/zizmor-action) from 0.2.0 to 0.3.0.
- [Release notes](https://github.com/zizmorcore/zizmor-action/releases)
- [Commits](e673c3917a...e639db9933)

---
updated-dependencies:
- dependency-name: zizmorcore/zizmor-action
  dependency-version: 0.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-08 12:04:04 +01:00
dependabot[bot]
8ad8f7c505 chore(deps): bump actions/setup-python from 6.0.0 to 6.1.0
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 6.0.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](e797f83bcb...83679a892e)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-08 12:02:23 +01:00
Agnes Leroy
100b4200c2 chore(gpu): update number of streams in erc20 throughput bench 2025-12-08 09:21:55 +01:00
Enzo Di Maria
5273f61593 refactor(gpu): creating InternalCudaStreams to improve the management of multiple streams per GPU 2025-12-05 10:13:18 +01:00
David Testé
182aad99f1 chore(ci): add backends comparison table generation for docs
This adds backends comparison in the data extractor. It performs
a comparison on a fixed list of backends (CPU, GPU, HPU) for 64-bit precision
ciphertexts, as displayed in the tfhe-rs public documentation.

SVG table generation is automated via the documentation benchmark
workflow.
2025-12-04 17:59:27 +01:00
David Testé
e85fd936d0 chore(bench): suffix hlapi ops bench with measured type name 2025-12-04 17:59:27 +01:00
Nicolas Sarlin
4d6aeebd0c chore: detect degree in compact list conformance 2025-12-04 13:59:03 +01:00
Arthur Meyre
f8ee0b3649 chore: add comment on the value causing issues in 64 bits decomp tests 2025-12-04 09:43:32 +01:00
Arthur Meyre
37c872786e chore: add ref 2025-12-04 09:43:32 +01:00
Arthur Meyre
85a281efc5 chore: update FFT128 params to match production params in core
- allows us to be even more confident about fft128 tests
2025-12-04 09:43:32 +01:00
Arthur Meyre
a9d2137424 chore: add test and fix for specialized decomposition code
- add dedicated edge case for u128
- fix all implementations that can be used and run tests on the same
u128 edge case
2025-12-04 09:43:32 +01:00
Arthur Meyre
3dcdf9043a chore: add arithmetic right shift for split u128 values
- this applies a shift that extends the sign of the value interpreted as an
i128; this will be required for the fix of the specialized decomp code for
u128
2025-12-04 09:43:32 +01:00
Arthur Meyre
dfcceefa83 chore: add (previously) failing test case for u128 decomposition
- the code fix was already correct, but we add another edge case because we
have specialized decomposition code for u128 that we need to update and we
need a previously failing case; this serves as a reference for the fix of the
specialized implementation
2025-12-04 09:43:32 +01:00
dependabot[bot]
52a22ea82a chore(deps): bump actions/checkout from 5.0.0 to 6.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](08c6903cd8...1af3b93b68)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-04 09:42:59 +01:00
Enzo Di Maria
a96d68323b fix(gpu): No broadcast is needed because full prop is done on a single GPU 2025-12-04 09:17:40 +01:00
Nicolas Sarlin
0e1b064182 chore: fix test vectors lut 2025-12-03 17:44:46 +01:00
Nicolas Sarlin
86658e1999 fix(ci): new location for action/checkout git creds 2025-12-03 10:24:40 +01:00
pgardratzama
1f48456b17 feat(hpu): adds a PSI64 HPU now using ISC IOP Ack interrupt to internal core
AMC fw version update from 3.1.0 to 3.1.1
2025-12-03 09:33:44 +01:00
Enzo Di Maria
6752249f7f refactor(gpu): moving vector_comparisons's functions to the backend 2025-12-03 09:11:00 +01:00
aquint-zama
f5bfc7f79d chore: add SECURITY 2025-12-02 18:21:05 +01:00
Agnes Leroy
e6625521ad chore(gpu): add the possibility to run classical bench for erc20 and dex 2025-12-02 15:59:40 +01:00
David Testé
94ef3dab05 chore(ci): add several ci related folders in codeowners file 2025-12-02 15:06:22 +01:00
Enzo Di Maria
cc33161b97 refactor(gpu): moving cast_to_signed to the backend 2025-12-02 12:40:47 +01:00
dependabot[bot]
d5a3275a5a chore(deps): bump peter-evans/create-pull-request from 7.0.8 to 7.0.9
Bumps [peter-evans/create-pull-request](https://github.com/peter-evans/create-pull-request) from 7.0.8 to 7.0.9.
- [Release notes](https://github.com/peter-evans/create-pull-request/releases)
- [Commits](271a8d0340...84ae59a2cd)

---
updated-dependencies:
- dependency-name: peter-evans/create-pull-request
  dependency-version: 7.0.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-02 09:45:42 +01:00
Nicolas Sarlin
f9c2a5d423 chore: use dedicated types for compressed modswitched conformance 2025-12-01 14:39:06 +01:00
David Testé
dac999b279 chore(ci): handle parameters_check triggered from forked repo 2025-12-01 12:33:13 +01:00
David Testé
197443a1c0 chore(ci): update slab-github-runner action to v1.4.2 2025-11-28 16:50:40 +01:00
Agnes Leroy
1d98c2be33 chore(gpu): fix hl api filter for sanitizer tests and add core crypto ones 2025-11-28 16:12:39 +01:00
David Testé
637e829d5d chore(ci): remove unused environment variables
These variables are already declared in the sub-workflow.
2025-11-28 15:34:27 +01:00
David Testé
e6be7a4479 chore(ci): add missing action run url environment variable 2025-11-28 15:34:27 +01:00
Enzo Di Maria
3cfbaa40c3 refactor(gpu): unchecked_index_of_clear to backend 2025-11-28 14:34:53 +01:00
Arthur Meyre
a731be3878 chore: re-enable meta params tests 2025-11-27 18:19:24 +01:00
Andrei Stoian
e2063c8ef4 chore(gpu): bench KS latency batches 2025-11-27 17:32:44 +01:00
Nicolas Sarlin
d6a0a366b9 fix(ci): only check test vectors on x86 2025-11-27 15:48:54 +01:00
Enzo Di Maria
0aa0918fea refactor(gpu): vector_find's functions to backend 2025-11-27 13:36:10 +01:00
David Testé
54cb87c491 chore(ci): add placeholders for whitepaper benchmarks 2025-11-27 12:11:48 +01:00
Agnes Leroy
23c49e2d72 chore(gpu): fix product name for the cost of SXM5 2025-11-26 16:27:27 +01:00
David Testé
bbf484c7f6 chore(ci): move erc20 and dex gpu benchmarks to common files 2025-11-26 13:44:34 +01:00
Nicolas Sarlin
79a89c157a chore(fft-ntt): bump to 0.10.0 and 0.7.0 2025-11-26 13:32:14 +01:00
Agnes Leroy
ef30be5086 chore(gpu): reduce long run tests number of inputs to avoid timeout on 4090 2025-11-26 13:06:14 +01:00
Nicolas Sarlin
212b925b5e chore: use rust-toolchain.toml for default toolchain 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
f8a958663b chore(tfhe): rename nightly feature flag to avx512 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
851bd01873 chore(fft): rename nightly feature flag to avx512 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
8d1f6d4d06 chore(ntt): rename nightly feature flag to avx512 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
d888b7b673 chore(ci): remove unused files 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
8e566c5765 chore: update pulp and bytemuck 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
bf2e9ef504 chore(backward): each data generation crate has its own workspace 2025-11-26 11:28:21 +01:00
Nicolas Sarlin
01367368ed chore(zk): do not bench zkv1 at the integer level 2025-11-25 17:20:06 +01:00
Nicolas Sarlin
33f77458e9 chore(zk): fix elements count for zk throughput benches 2025-11-25 17:20:06 +01:00
Arthur Meyre
caf5e9d879 chore: fix scalar benchmarks generating fixed values
- this would not give an average runtime for scalar benchmarks and for
small precisions could give super good timings (for lucky values)
- the timings for other precisions could still be favorable or unfavorable
depending on the value that was drawn
2025-11-25 14:23:55 +01:00
Nicolas Sarlin
fc78450245 chore: test vector generation tool for core algorithms 2025-11-25 13:45:49 +01:00
David Testé
f45b7a9fdc chore(ci): print out parameters check analysis duration
The workflow might succeed even if it doesn't scan any parameter sets,
or scans a smaller one. In this case the analysis duration would be
much lower than usual. Since it is sent to a Slack channel, this
improved message would warn maintainers about a potential issue.
2025-11-25 12:33:15 +01:00
David Testé
8a424ce2cb chore(ci): run parameters check if parameters folder changes 2025-11-25 12:33:15 +01:00
Enzo Di Maria
b6fffa3d86 fix(gpu): wrong number of blocks in tmp_match_result 2025-11-25 10:50:30 +01:00
Thomas Montaigu
2002184da4 feat(Tag): impl From<&str> 2025-11-25 10:24:25 +01:00
Thomas Montaigu
614ca52749 feat(CompressedXofKeySet): impl key gen of ClientKey
This adds the implementation of the ClientKey generation
that respects the threshold specs; some points are:

* 2 random generators are used: one for everything public (masks),
  the other for everything private (private keys and noise).
  These generators are seeded once.
* binary secret keys are generated using
  `fill_slice_with_random_uniform_binary_bits` and support the pmax param
2025-11-25 10:24:25 +01:00
Guillermo Oyarzun
28ea0bc086 fix(gpu): force uint64 when calculating lwe chunksize 2025-11-25 09:52:16 +01:00
Arthur Meyre
48cc709cf9 test(core): add exhaustive decomposition test on several levels
- also add a verification that recomposition works in the previously added
edge case test
2025-11-24 18:40:36 +01:00
David Testé
6141ad2eee chore(bench): fix bench prefix pattern for hlapi ops
To follow the standard used by other HLAPI benchmarks and ease parsing for data_extractor.
2025-11-24 17:56:10 +01:00
Nicolas Sarlin
580e545b14 chore(wasm): install wasm bindgen before using wasm pack 2025-11-24 15:58:01 +01:00
David Testé
0b98ef98fc chore(ci): handle inputs vars safely in python script 2025-11-24 14:03:08 +01:00
David Testé
b0393c0acb chore(bench): run scalar ops in integer deduplicated cpu bench 2025-11-24 14:03:08 +01:00
David Testé
87bea9dcf1 chore(ci): remove 2m40 p-fail from core_crypto array generation 2025-11-24 14:03:08 +01:00
David Testé
b3c3647530 chore(ci): add workflow to update documentation benchmark tables
This new workflow can trigger all the benchmarks needed
to populate the benchmark tables in the documentation.
It can also generate SVG tables and store them as artifacts.
Optionally, it can open a pull-request to update the current
tables in the documentation.
2025-11-24 14:03:08 +01:00
David Testé
3c76dd8cad chore(ci): small fixes on data_extractor filename generation
This is done to ease automated SVG table generation for the tfhe-rs public documentation.
2025-11-24 14:03:08 +01:00
David Testé
a75b5107ff chore(docs): change svg benchmark table names
This is done to ease automated table generation through the continuous integration pipeline.
2025-11-24 14:03:08 +01:00
Enzo Di Maria
32b1a7ab1d refactor(gpu): unchecked_match_value_or to backend 2025-11-24 13:50:15 +01:00
David Testé
184f40439e chore(ci): use env to handle inputs securely in python script 2025-11-24 12:19:53 +01:00
Agnes Leroy
1ed8710868 chore(gpu): fix sanitizer script 2025-11-24 09:42:55 +01:00
David Testé
97214f801e chore(ci): teardown build instance only if setup has run
In case of a re-run attempt, if one asks for only failing jobs to
be re-run (e.g. the MacOS job on a GitHub hosted runner) and the
previous attempt was successful regarding setup-build-teardown,
then we would try to tear down again an instance that is already gone.
This would cause an error in the CI.

By checking the 'github.run_attempt' number, we ensure that we run
the teardown only if the setup has occurred during the same run
attempt.
2025-11-21 17:16:14 +01:00
David Testé
25a8bbfd89 chore(ci): make slack notify fail in case of error on teardown
On instance teardown we want to be informed of any failure, to
avoid having zombie instances running due to the Slack notify action's
errors being silenced.
2025-11-21 17:15:58 +01:00
David Testé
ef1e091ce2 chore(ci): put slack variables in global env for builds
Without them, we are unable to perform Slack notification.
2025-11-21 17:15:58 +01:00
Enzo Di Maria
1a7efbc21b fix(gpu): constant number of blocks for outputs of match_value 2025-11-21 15:56:54 +01:00
Guillermo Oyarzun
02312e23ea fix(gpu): fix pbs128 selection for small num samples 2025-11-21 10:57:14 +01:00
Agnes Leroy
e5742e63e9 chore(gpu): add compute-sanitizer run on H100 2025-11-21 09:15:04 +01:00
Agnes Leroy
bc0a4fb0d6 chore(gpu): add classical pbs tests to HL API, complement missing tests, classical in long run 2025-11-21 09:15:04 +01:00
Enzo Di Maria
c6709a82c0 refactor(gpu): match_value to backend with multiple streams 2025-11-20 17:11:15 +01:00
Agnes Leroy
0f5e538f9a chore(gpu): add better panic when calling old compression functions 2025-11-20 17:06:06 +01:00
David Testé
58378b7972 chore(bench): add dedicated targets for aes cuda benchmarks 2025-11-20 16:58:06 +01:00
Arthur Meyre
9b5df143cb chore: update dp_ks_pbs128_packingks test to take metaparameters 2025-11-20 16:03:50 +01:00
Arthur Meyre
cbac102a0d chore: update dp_ks_ms test to take metaparameters 2025-11-20 16:03:50 +01:00
Arthur Meyre
0da331590c chore: update cpk_ks_ms test to take metaparameters
- also added a seemingly missing sanity check for cpk_ks_ms
2025-11-20 16:03:50 +01:00
Arthur Meyre
e5a15b33d9 chore: fix incorrect number in seed_bytes for rerand noise check test
- has no impact on the correctness of the test; in addition, the size of the
array would have been the same due to integer division
2025-11-20 16:03:50 +01:00
Arthur Meyre
d68645c493 chore: update br_rerand_dp_ks_ms test to take metaparameters 2025-11-20 16:03:50 +01:00
Arthur Meyre
a72de66744 chore: update br_dp_packing_ms test to take metaparameters 2025-11-20 16:03:50 +01:00
Arthur Meyre
1b924fa872 chore: update br_dp_ks_ms test to take metaparameters 2025-11-20 16:03:50 +01:00
Arthur Meyre
560f595620 chore: remove temporary test param 2025-11-20 16:03:50 +01:00
Mayeul@Zama
f4665886ee fix(integer): fix StaticUnsignedBigInt cast into u128 2025-11-20 15:40:39 +01:00
Beka Barbakadze
80cacbd079 feat(gpu): add boolean bitops in cuda backend 2025-11-20 14:56:21 +01:00
tmontaigu
1b17cfc0f8 chore(c-api): add missing docs for some C function
Add explanations on the fact that config and config builder
do not need to be freed.
2025-11-20 14:46:54 +01:00
David Testé
a5c248566d chore(ci): run parameters_check workflow if files change 2025-11-20 12:36:21 +01:00
David Testé
eb2cf19d10 chore(ci): install rust toolchain for parameters checks
This is needed since our AWS EC2 image doesn't embed a Rust toolchain.
2025-11-20 11:01:37 +01:00
David Testé
4928efcca8 chore(ci): remove check on secret availability for params checks
This is needed only when the triggering event is pull_request and the PR comes from an external contributor.
2025-11-20 11:01:37 +01:00
Nicolas Sarlin
edb435bd46 chore: update msrv to 1.91.1 2025-11-20 09:29:37 +01:00
David Testé
071e70c037 chore(bench): fix benchmark id pattern for aes and aes256 2025-11-19 17:23:05 +01:00
Arthur Meyre
19c2146b2d chore: remove redundant log2/powf in params_to_file.rs 2025-11-19 12:59:26 +01:00
David Testé
c5bac7994a chore(ci): fix typos in newly checked ci folder 2025-11-19 11:51:02 +01:00
David Testé
bdef5b358e chore(ci): ignore only specific files in ci folder
The whole ci/ folder was ignored, which would cause issues with search capability in some IDEs.
2025-11-19 11:51:02 +01:00
Arthur Meyre
4229523c50 chore: add artificial intelligence contribution guidelines 2025-11-19 10:43:34 +01:00
Nicolas Sarlin
ac6178fd35 chore(zk): add batched mode for verification pairings 2025-11-19 09:24:13 +01:00
David Testé
893c321e60 chore(ci): update repositories before any other apt operation
This is done to avoid failures during dependencies' installation.
2025-11-18 14:13:02 +01:00
David Testé
51139e6e82 chore(ci): use builds result to pass branch protection
Instance teardown might fail regardless of the result of the build
commands. To make the CI green in pull-requests we need to check the
result of the build commands instead of the whole workflow result.
Doing so ensures a PR can pass branch protection rules even if
the teardown fails.
2025-11-18 11:44:56 +01:00
David Testé
5ce068642e chore(ci): replace workflow name in cpu weekly bench 2025-11-18 11:42:10 +01:00
David Testé
5c2fe133e1 chore(ci): use explicit boolean checking on scheduled cpu bench
It seems that a boolean output set by the user is not coerced into a
boolean type by the runner. Rather, it checks whether the associated
string ("true" or "false") is not empty.
2025-11-18 11:42:10 +01:00
David Testé
d12bbd848c chore(ci): fix op_flavor generation for weekly cpu bench
The sub-workflow is supposed to handle comma-separated values, not an array.
2025-11-18 11:42:10 +01:00
David Testé
c30e16db9c chore(ci): move workflows runners to aws ec2
Done to reduce CI running costs.
2025-11-18 10:44:50 +01:00
Mayeul@Zama
f9268b889f chore(bench): revert print bench id
This reverts commit ef07963767.
2025-11-17 11:23:50 +01:00
David Testé
1568f7c532 chore(ci): use aws ec2 as runner provider for cargo builds 2025-11-17 10:19:26 +01:00
Arthur Meyre
89bf6a3331 chore: add tool to update AP params' msg and carry moduli 2025-11-17 09:41:53 +01:00
Arthur Meyre
82cebb9b26 test(shortint): add compression atomic pattern for noise checks
- noise checks and pfail based on expected noise have been added
- compatible with KS PBS and KS32 PBS
2025-11-17 09:41:53 +01:00
Arthur Meyre
ba5f4850b9 chore: update naming of noise simulation primitive to avoid clashes
- makes it clearer which parameters some noise simulation primitives
are built from
2025-11-17 09:41:53 +01:00
Arthur Meyre
7197b85ec9 chore: lift some restrictions on confidence interval function
- if a value is computed it will be correct
- if the value is not finite (NaN or infinity) we panic with a message to
the user indicating what course of action they can take
- ideally we would want to use a scientific crate written in Rust; xsf-rust
seemed promising, but the dependency on clang + libclang is proving more
annoying than not. Given we would need a single function from xsf (and it's
hard to translate all the required pieces), we keep a sort of status quo
- statrs issue: https://github.com/statrs-dev/statrs/issues/361
2025-11-17 09:41:53 +01:00
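A standalone illustration of the panic policy described above (hypothetical function name, standard library only; the guidance message is illustrative, not the actual wording):

```rust
// Sketch of the behaviour described in the commit above: a computed value is
// returned as-is, while a non-finite value (NaN or infinity) triggers a panic
// with a message telling the user what they can do about it.
fn confidence_bound_or_panic(value: f64) -> f64 {
    assert!(
        value.is_finite(),
        "confidence interval computation produced a non-finite value ({value}); \
         consider adjusting the requested confidence level or sample size"
    );
    value
}
```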
Enzo Di Maria
54c8c5e020 chore(gpu): no crash with aes benches if oom error 2025-11-14 17:02:33 +01:00
David Testé
164fc26025 chore(ci): add placeholders for documentation benchmarks
This is done to be able to execute CI in further development.
2025-11-14 16:48:49 +01:00
David Testé
ad818ee117 chore(ci): add placeholder for cargo_build_common.yml
This is done to be able to execute the CI in further development.
Also, we won't have to temporarily lift the branch protection rules
to be able to merge, since this upcoming development is a rework of
the cargo_build.yml workflow.
2025-11-14 16:48:49 +01:00
Agnes Leroy
df73c36cbf fix(gpu): fix decomposition algorithm not matching the theory 2025-11-14 16:36:35 +01:00
David Testé
a33c12d5a9 chore(ci): fix zizmor findings in workflows 2025-11-14 15:24:10 +01:00
David Testé
522a612ad4 chore(ci): update zizmor and use zizmor-action in workflow 2025-11-14 15:24:10 +01:00
David Testé
f8c998f0da chore(ci): avoid unwanted cancellation in csprng tests 2025-11-14 15:18:04 +01:00
Arthur Meyre
84c80c529d chore: remove redundant clones
co-authored-by: Himess <95512809+Himess@users.noreply.github.com>
2025-11-14 14:14:39 +01:00
Arthur Meyre
c3c892708a chore: fix confusing comment in decomposer.rs
- the function is documented and the comment did not match; the behavior is
checked in a test
2025-11-14 14:14:24 +01:00
Agnes Leroy
4f9f4982f6 fix(gpu): fix memory leak in rerand 2025-11-14 14:00:01 +01:00
Arthur Meyre
d75844dea5 fix(core): fix decomposition algorithm not matching the theory
- the problem arose from a shift being done on an unsigned value, which did
not keep the signed characteristics of the represented signed value
- introduce an arithmetic_shift on the UnsignedInteger trait with a blanket
implementation
- add the edge case which revealed the issue
- the asm has been verified to only change for the shift operation being
applied, meaning no performance regression will occur
2025-11-14 13:52:17 +01:00
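For context on the shift issue described above, a small self-contained illustration (not the tfhe-rs code itself) of why a logical right shift on an unsigned value loses the sign of the signed value it represents, while an arithmetic shift preserves it:

```rust
// Standalone illustration of the bug class fixed above: a signed value stored
// in an unsigned integer must be shifted arithmetically (sign-extending),
// not logically, otherwise its sign information is lost.
fn main() {
    let signed: i64 = -1024; // the value we conceptually manipulate
    let raw: u64 = signed as u64; // same bits, stored as unsigned

    // Logical shift on the unsigned representation fills with zeros.
    let logical = raw >> 4;
    // Arithmetic shift: reinterpret as signed, shift, reinterpret back.
    let arithmetic = ((raw as i64) >> 4) as u64;

    assert_eq!(logical as i64, 1_152_921_504_606_846_912); // sign lost
    assert_eq!(arithmetic as i64, -64); // sign preserved
}
```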
David Testé
ef07963767 chore(bench): print bench id before running the benchmark
Done to circumvent a criterion limitation regarding automatic
truncation of long benchmark IDs.
Using a println() call, we ensure the complete name is displayed
before benchmark execution to ease manual parsing and debugging.
2025-11-14 13:45:04 +01:00
Nicolas Sarlin
6d2de330a4 feat(core): create Lwe ct from mod switched lwe 2025-11-14 10:57:33 +01:00
David Testé
405b50afbc chore(ci): fix cpu weekly benchmarks schedule groups handling
The steps responsible for setting the OP_FLAVOR and ALL_PRECISION
variables were never executed due to the use of a non-existent env
variable.
This caused the OP_FLAVOR value to be null and would thus trigger
errors on benchmarks that don't handle unknown values for
BENCH_OP_FLAVOR.

Also fixes the filename to parse for the additional boolean benchmark.
2025-11-12 15:37:08 +01:00
pgardratzama
4dcc428d46 chore(hpu): update PBS results with latest bitstream 2025-11-10 18:43:50 +01:00
pgardratzama
d38df76eb6 chore(hpu): adds a page about HPU PBS performance 2025-11-10 18:43:50 +01:00
pgardratzama
afaf761cdd chore(hpu): adds 3 custom IOps to measure PBS performance on HPU and updates the trace parser to handle 32b timestamp wrap 2025-11-10 18:43:50 +01:00
dependabot[bot]
2ca4a7fe1a chore(deps): bump rust-lang/crates-io-auth-action from 1.0.2 to 1.0.3
Bumps [rust-lang/crates-io-auth-action](https://github.com/rust-lang/crates-io-auth-action) from 1.0.2 to 1.0.3.
- [Release notes](https://github.com/rust-lang/crates-io-auth-action/releases)
- [Commits](041cce5b4b...b7e9a28ede)

---
updated-dependencies:
- dependency-name: rust-lang/crates-io-auth-action
  dependency-version: 1.0.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-10 12:52:11 +01:00
David Testé
d53bf79592 chore(bench): fix naming order for erc20 hpu benchmarks 2025-11-10 11:46:41 +01:00
pgardratzama
4eb4fa95e3 feat(hpu): new HPU bitstream with few optimizations (GRAM arb, ALU nb, BSK manager) 2025-11-10 09:14:18 +01:00
David Testé
4cc2df42ed chore(ci): make sage parameters dump ordered
This is done to ease line-by-line comparison between security check runs.
2025-11-07 17:24:19 +01:00
David Testé
40f500ef07 chore(ci): use tuniform value as xe value in parameters dump 2025-11-07 17:24:19 +01:00
Nicolas Sarlin
faaeab12d0 doc(core): update unix seeder doc 2025-11-07 15:44:23 +01:00
Mayeul@Zama
36fb820ed4 chore: fix new lints 2025-11-07 10:43:46 +01:00
Guillermo Oyarzun
12426573fa fix(gpu): add upper bound to lwe_chunk_size calculation 2025-11-07 09:29:40 +01:00
Guillermo Oyarzun
6f105cd82e fix(gpu): fix out of bounds in specialized classical pbs 2025-11-06 15:35:04 +01:00
Arthur Meyre
0cd0333875 chore: remove redundant Clone bound from get()
co-authored-by: VolodymyrBg <aqdrgg19@gmail.com>
2025-11-06 14:43:04 +01:00
Enzo Di Maria
4ff95e3a42 feat(gpu): AES 256 2025-11-05 13:37:08 +01:00
Baptiste Roux
f970031d33 chore(hpu): Update version of hw_regmap deps
This new version updates the Rust MSRV.
2025-11-04 15:26:27 +01:00
David Testé
9390c0ec68 chore(ci): refactor hpu benchmarks workflows
Following the same pattern as GPU benchmarks, HPU benchmarks rely
on a common workflow. All the manual launches via
workflow_dispatch event are now done in one place. That way, one
doesn't have to browse the workflow tree to find the right HPU
benchmark to trigger.
2025-11-04 12:29:43 +01:00
David Testé
0c977a3996 chore(bench): insert params name in bench id for hlapi
To ease parsing and filtering by third parties.
2025-11-04 10:53:25 +01:00
David Testé
de98c41e2f chore(ci): fix n3-h100-sxm5x8 hardware name in benchmarks 2025-11-04 10:53:03 +01:00
David Testé
0138425c60 chore(ci): set regression default target for gpu 2025-11-04 10:53:03 +01:00
Ben
5854c2c450 chore(docs): add example estimator call 2025-11-03 18:25:45 +01:00
Arthur Meyre
058965c9f2 chore: update lattice estimator commit 2025-11-03 18:25:45 +01:00
David Testé
c3017341bd chore(ci): refactor cpu benchmarks workflows
Following the same pattern as GPU benchmarks, CPU benchmarks rely on a common workflow. Weekly benchmarks are all gathered in one place. Also, all the manual launches via workflow_dispatch event are now done in one place. That way, one doesn't have to browse the workflow tree to find the right CPU benchmark to trigger.

Signed-off-by: David Testé <david.teste@zama.ai>
2025-11-03 16:14:02 +01:00
Arthur Meyre
00ce0deec9 chore: make typos version fixed
- add a script to properly install the correct version
- correct new typos
2025-11-03 14:58:23 +01:00
Nicolas Sarlin
67dc8583b1 chore(zk): parallelize verification pairings 2025-11-03 13:37:43 +01:00
Arthur Meyre
0ff5a9ef7c chore: fix typos
closes https://github.com/zama-ai/tfhe-rs/issues/2964
2025-10-31 14:25:34 +01:00
Nicolas Sarlin
83b82091bd chore: use common msrv for the workspace
Since cargo commands create a lock file using the smallest MSRV in the workspace, it
can prevent getting up-to-date dependencies.
2025-10-31 09:31:43 +01:00
Nicolas Sarlin
b8fd0e4240 chore: bump tfhe-versionable to 0.6.3 and tfhe-zk-pok to 0.8.0 2025-10-30 16:53:36 +01:00
Nicolas Sarlin
aff5b7f0c6 chore(backward): add data for the new zk proof 2025-10-30 16:53:36 +01:00
Nicolas Sarlin
b7fc208e40 chore(zk): match zkv2 hash impl with the description
- encode the position of bits proven to be 0 in the hashes
- hash the infinite norm instead of the euclidean one
- hash the value of k with the statement
2025-10-30 16:53:36 +01:00
Nicolas Sarlin
bcb1356b76 fix(versionable): handle #[default] in Versionize types 2025-10-30 16:53:36 +01:00
Mayeul@Zama
54626cab6d refactor(shortint): use ShortintBootstrappingKey in DecompressionKey 2025-10-30 16:52:44 +01:00
Nicolas Sarlin
bc493a5641 fix(shortint): avoid to crash when thread engine is reused 2025-10-30 14:51:01 +01:00
David Testé
073cba10d1 chore(ci): print stddev divergence in regression report 2025-10-30 14:06:30 +01:00
David Testé
2a8885aa9f chore(ci): run erc20 and dex throughput bench only on demand
Following the same pattern as other benchmarks.
2025-10-30 09:52:30 +01:00
David Testé
e17c481736 chore(ci): prefix regression ops results with layer name
This is done to avoid confusion for operations that might have the same name between layers. For example, the 'bitand' operation has the same name for the shortint and integer layers.
2025-10-30 09:51:44 +01:00
David Testé
2542ef38e6 chore(ci): add parameters filtering for data extractor
When doing regression generation, one can provide a global parameter set name pattern to filter head branch benchmark results.
This fixes the issue encountered when more than one parameter set is used to benchmark an operation, for example in the core_crypto or shortint tfhe-rs layers.
2025-10-30 09:51:44 +01:00
Enzo Di Maria
398c441c95 refactor(gpu): delete useless GPU params 2025-10-30 08:59:10 +01:00
Enzo Di Maria
026cc376ed refactor(gpu): multibit decompression 2025-10-30 08:59:10 +01:00
Pedro Alves
867f8fb579 feat(gpu): implement re-randomization
- exposed to integer and HL API
- test on the HL API
- benchmarks for GPU and CPU implementation
2025-10-29 17:55:45 -03:00
David Testé
3c32b15d02 chore(ci): print change thresholds in regression reports 2025-10-29 15:33:33 +01:00
David Testé
1823321aad chore(ci): skip regression operation with invalid data point 2025-10-29 15:33:33 +01:00
David Testé
67130646ad chore(ci): support shortint layer name parsing in data extractor 2025-10-29 15:33:33 +01:00
David Testé
f768fd1cdd chore(ci): set all operations for default cpu regression profile 2025-10-29 15:33:33 +01:00
Arthur Meyre
0223913aef chore: make functions consistent to generate keyswitching keys
- so that normal and seeded variants have similar APIs
2025-10-29 15:31:22 +01:00
Arthur Meyre
a41cd47b9e refactor(test): make modulus switch config system make more sense
- The config type can hold any type for the drift technique variant because
the bounds are too weird to set on the type; the functions making use of
the config type should properly declare the bounds
2025-10-29 15:31:22 +01:00
Arthur Meyre
d95b46cb9b refactor(test): factorize the any modulus switch function for noise checks 2025-10-29 15:31:22 +01:00
Guillermo Oyarzun
0f0438c8cf feat(gpu): add 1_1 classical pbs params for specialized version 2025-10-29 09:18:18 +01:00
Arthur Meyre
9d31e994aa chore(docs): make difference between benchmarks stand out more 2025-10-28 10:35:23 +01:00
Nicolas Sarlin
95593b1ea9 fix(zk): missing compressed proof version 2025-10-28 09:50:00 +01:00
Agnes Leroy
231d0c5e50 chore(gpu): disable lto in gpu bench compilation 2025-10-28 09:37:14 +01:00
David Testé
1d0a5c96a4 chore(ci): add bench type selection to core_crypto bench workflow 2025-10-27 18:09:54 +01:00
David Testé
b0b49ae533 chore(bench): new parameters set to run core_crypto bench for docs
This creates an extended parameters set to reflect what's displayed
in the documentation.
2025-10-27 17:25:41 +01:00
Pedro Alves
70773e442c fix(gpu): fix 128-bit compression benchmark 2025-10-27 17:06:45 +01:00
dependabot[bot]
7b797b8af9 chore(deps): bump actions/upload-artifact from 4.6.2 to 5.0.0
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.2 to 5.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](ea165f8d65...330a01c490)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-27 16:08:28 +01:00
dependabot[bot]
b6efb109aa chore(deps): bump actions/download-artifact from 5.0.0 to 6.0.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](634f93cb29...018cc2cf5b)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-27 16:08:19 +01:00
David Testé
fd6323b311 chore(ci): add throughput and hpu support to data extractor
Now throughput results can be fetched.
The HPU backend is supported for integer formatting.
2025-10-27 14:39:46 +01:00
Arthur Meyre
b02a3b16ff test: add rerand atomic pattern for noise checks
- make sure it works with KS32 parameters
2025-10-27 13:21:50 +01:00
Arthur Meyre
a95ee140f5 refactor: remove noise check function with PBS for sanity check
- it's a lot of code to "just" compute an additional PBS to make shortint
sanity checks, so run the function which gives the ms result, and complete
the AP by running the PBS as shortint would; this gets rid of a big function
that was doing the same thing
2025-10-27 13:21:50 +01:00
Guillermo Oyarzun
62780ac500 fix(gpu): fix decompression mem leak 2025-10-24 13:02:41 +02:00
Thomas Montaigu
c10f1def70 fix: Tag propagation in XofKeySet 2025-10-24 10:40:22 +02:00
Mayeul@Zama
31a0136655 test(all): test multi bit decompression 2025-10-24 09:28:17 +02:00
Mayeul@Zama
859d5e4e1f chore: add backward multi bit decompression keys 2025-10-24 09:28:17 +02:00
Mayeul@Zama
92dcd38e30 chore: add decompression_grouping_factor to TestCompressionParameterSet 2025-10-24 09:28:17 +02:00
Mayeul@Zama
777bbe437a feat(shortint): add multi bit decompression 2025-10-24 09:28:17 +02:00
Mayeul@Zama
3842032f08 chore(shortint): fix unused function 2025-10-24 09:28:17 +02:00
Arthur Meyre
23246f63f7 chore: update fast_dedup opset to match the latency benchmarks in the docs
- signed bench update
2025-10-23 10:42:19 +02:00
Arthur Meyre
11c79b5237 chore: update fast_dedup opset to match the latency benchmarks in the docs 2025-10-23 10:42:19 +02:00
Nicolas Sarlin
a694e08ddc fix(core): par_encrypt_and_prove was using sequential encryption 2025-10-23 10:08:06 +02:00
Guillermo Oyarzun
e12638dabe feat(gpu): extend specialized version to classical pbs 2025-10-22 09:20:40 +02:00
pgardratzama
79f1d22573 fix(hpu): scalar rot & shift were not doing anything and not tested in test/hpu.rs 2025-10-21 13:29:59 +02:00
pgardratzama
f9c89212ea fix(hpu): display name on shift looked wrong 2025-10-21 13:29:59 +02:00
pgardratzama
b918f77859 chore(hpu): add force_reload option in v80 config, remove added line in sim config 2025-10-21 13:29:59 +02:00
Helder Campos
054c5028a1 feat(hpu): Added the option to forcefully reload the HPU 2025-10-21 13:29:59 +02:00
Helder Campos
7b621e57b0 feat(hpu): LLT ROT/SHIFT IOPs 2025-10-21 13:29:59 +02:00
Agnes Leroy
b4b6275ca5 chore(gpu): remove device synchronize in drop for cudavec 2025-10-21 11:33:46 +02:00
Agnes Leroy
42644349ef chore(gpu): remove remaining async functions from the integer gpu api 2025-10-20 16:19:19 +02:00
Thomas Montaigu
20b7b06ffb chore: add check_fmt_js to pcc_batch 2025-10-20 14:37:36 +02:00
Thomas Montaigu
39fbc20360 fix(js): catch undefined variant using Option<>
In the JS ShortintParametersName, users could
make typo in the variant used e.g:
`ShortintParametersName.PARAM_MESSAGE_2_CARRY128`

In JS this returns `undefined` which is then later casted to an
int and it becomes 0, leading to match the first variant

We modify the input to receive an `Option<ShortintParametersName>`
as it seems to allow us to catch the `undefined` and return a proper
error
2025-10-20 14:37:36 +02:00
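A hedged sketch of the pattern this commit describes, with hypothetical names (`ExampleParametersName` and `example_params_from_name` are not real tfhe-rs exports); it relies on wasm-bindgen mapping a JS `undefined` argument to `None` for an `Option<Enum>` parameter:

```rust
// Illustrative only: hypothetical export showing how accepting Option<Enum>
// lets wasm-bindgen turn a JS `undefined` (e.g. from a typo'd variant name)
// into `None`, instead of it silently becoming variant 0.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
#[derive(Clone, Copy)]
pub enum ExampleParametersName {
    ParamMessage2Carry2 = 0,
    ParamMessage4Carry4 = 1,
}

#[wasm_bindgen]
pub fn example_params_from_name(
    name: Option<ExampleParametersName>,
) -> Result<u32, JsError> {
    match name {
        Some(n) => Ok(n as u32),
        // A JS `undefined` lands here, so a proper error can be returned.
        None => Err(JsError::new("unknown ExampleParametersName variant")),
    }
}
```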
dependabot[bot]
c3ae852aa2 chore(deps): bump peter-evans/create-or-update-comment
Bumps [peter-evans/create-or-update-comment](https://github.com/peter-evans/create-or-update-comment) from 4.0.0 to 5.0.0.
- [Release notes](https://github.com/peter-evans/create-or-update-comment/releases)
- [Commits](71345be026...e8674b0752)

---
updated-dependencies:
- dependency-name: peter-evans/create-or-update-comment
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 14:18:08 +02:00
Arthur Meyre
4a89792579 chore: use pull request permissions to be able to post comments
- the "print URL" comment failed, while the "bench failed" comment worked
difference is the former has an issues write permissions, and the latter
a pull request write permissions
2025-10-20 13:45:39 +02:00
Arthur Meyre
205b767fc1 chore: fix various target issues for benchmarks following renames
- renames were done to uniformize and make it easier to set up perf
regression measurements; some names were not updated, and this PR fixes that
2025-10-20 13:45:27 +02:00
Beka Barbakadze
39862c2861 fix(gpu): fix bug in are_all_comparison_blocks_true when number of blocks is 0 2025-10-20 13:26:50 +02:00
Thomas Montaigu
eed5a6c5ba chore(bench): add grep check for trivial in benches 2025-10-20 12:26:44 +02:00
Thomas Montaigu
0dd0ead4e2 chore(bench): remove trivial encryptions
It makes benches inaccurate.
2025-10-20 12:26:44 +02:00
dependabot[bot]
5d5e9d47e9 chore(deps): bump actions/setup-node from 5.0.0 to 6.0.0
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 5.0.0 to 6.0.0.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v5...2028fbc5c25fe9cf00d9f06a71cc4710d4507903)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 12:14:50 +02:00
dependabot[bot]
45b7491726 chore(deps): bump rust-lang/crates-io-auth-action from 1.0.1 to 1.0.2
Bumps [rust-lang/crates-io-auth-action](https://github.com/rust-lang/crates-io-auth-action) from 1.0.1 to 1.0.2.
- [Release notes](https://github.com/rust-lang/crates-io-auth-action/releases)
- [Commits](e919bc7605...041cce5b4b)

---
updated-dependencies:
- dependency-name: rust-lang/crates-io-auth-action
  dependency-version: 1.0.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 12:14:42 +02:00
dependabot[bot]
f84a4275ef chore(deps): bump actions/checkout from 4.2.2 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.2.2...08c6903cd8c0fde910a37f88322edcfb5dd907a8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-20 12:12:50 +02:00
Agnes Leroy
34ffbadc72 chore(gpu): remove async from div, even odd, ilog2 2025-10-20 11:34:37 +02:00
Agnes Leroy
4322214d8f chore(gpu): remove async bitop cmux comparisons neg 2025-10-20 11:34:37 +02:00
David Testé
cf20d73a5f chore(ci): show action run url on custom regression benchmark
When triggered via issue comment, a performance regression
benchmark doesn't appear in the check suite. To ease tracking,
a comment is created which displays the URL of the workflow run.
2025-10-20 09:18:17 +02:00
Agnes Leroy
c30835fc30 chore(gpu): remove async entry points for abs, add, sub, aes 2025-10-17 15:42:06 +02:00
David Testé
70b0c0ff19 chore(ci): echo post-commit checks sub-recipe names
This is done to improve readability in case of recipe failure.
2025-10-17 15:30:19 +02:00
David Testé
206553e9ee chore(ci): check for performance regression and create report
After running performance regression benchmarks, a performance
changes check is executed. It will fetch results data with an
external tool, then look for anomalies in the changes.
Finally, it will produce a report as an issue comment with any
anomaly displayed in a Markdown table. A folded section of the
report message contains all the results from the benchmark.

Note that a fully custom benchmark triggered from an issue comment
will not generate a report. In addition, the HPU performance
regression benchmark is not supported yet.
2025-10-17 15:05:24 +02:00
Agnes Leroy
f78bea23be chore(gpu): remove async functions in radix mod.rs 2025-10-17 13:22:05 +02:00
Thomas Montaigu
106b46be7c chore(docs): add KVStore docs 2025-10-17 13:05:52 +02:00
Nicolas Sarlin
2cdc804670 chore(backward): backward compat data targeted generation 2025-10-17 12:43:13 +02:00
David Testé
23d7e0d844 chore(ci): use trusted publishing for npm packages 2025-10-17 12:06:34 +02:00
David Testé
0e1082f465 chore(docs): update benchmark results for all backends
This also removes tables in PBS benchmarks for failure probability
of 2**-40.
2025-10-17 09:49:47 +02:00
Guillermo Oyarzun
c22e63895e fix(gpu): fix multi-gpu throughput benches with classical pbs 2025-10-16 17:55:10 +02:00
Arthur Meyre
375a4f80ae docs: add ReRand documentation 2025-10-16 16:50:19 +02:00
Arthur Meyre
21b6863c5d chore: update dedup tool to be smarter when finding already aliased params
- keep the alias if found
- update the imports if all items are found to be aliases
2025-10-16 15:23:36 +02:00
Arthur Meyre
20a91337c1 chore: prepare v1.5 2025-10-16 15:23:36 +02:00
Arthur Meyre
a8520a2e22 chore: make main compile with 2024 edition dependencies
- resolver 3 makes sure that incompatible dependencies (rust version wise)
are not selected
- fix a new lint
2025-10-16 15:04:37 +02:00
Agnes Leroy
c8db338376 chore(gpu): remove use of duplicate async in hl api and non async integer ops 2025-10-16 14:30:57 +02:00
Nicolas Sarlin
e849394ea7 chore(tfhe): remove tuniform example 2025-10-16 09:48:24 +02:00
Enzo Di Maria
126e779533 refactor(gpu): oprf_unsigned_custom_range + tests 2025-10-16 09:31:01 +02:00
Enzo Di Maria
353237c0d6 refactor(gpu): oprf_unsigned_custom_range 2025-10-16 09:31:01 +02:00
Agnes Leroy
7bad509f9a fix(gpu): fix perf regression introduced in cf3f25efdd 2025-10-16 09:21:05 +02:00
yuxizama
c99bc6d97f chore(docs): update doc designs 2025-10-15 17:03:20 +02:00
Andrei Stoian
a84cf4ed21 fix(gpu): coprocessor install workflow 2025-10-15 15:38:30 +02:00
Agnes Leroy
ab40df4b7f chore(gpu): change coprocessor gpu bench name to match other names 2025-10-15 14:38:53 +02:00
Thomas Montaigu
3b9eb360c1 chore(backward): regenerate KVStore backward data
This is because, now that the KVStore uses a BTreeMap,
which is a sorted collection, the serialization of the data
is deterministic.
2025-10-14 17:04:13 +02:00
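The determinism relied on here is a standard-library property of BTreeMap; a minimal, self-contained illustration (not tfhe-rs code):

```rust
use std::collections::BTreeMap;

// BTreeMap iterates its entries in key order, so the key/value lists derived
// from it (and therefore their serialization) are the same across runs,
// unlike a HashMap whose iteration order may vary.
fn main() {
    let mut store = BTreeMap::new();
    store.insert(3u64, "c");
    store.insert(1u64, "a");
    store.insert(2u64, "b");

    let keys: Vec<u64> = store.keys().copied().collect();
    assert_eq!(keys, vec![1, 2, 3]); // deterministic, sorted order
}
```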
Thomas Montaigu
498b0e6e5c refactor: use BTreeMap as internals of KVStore
This is to make the order of the key and value lists
deterministic when compressing
2025-10-14 17:04:13 +02:00
Arthur Meyre
a9d0b9a3fb chore: fix cmux docstrings
- mismatched text/rust blocks broke the docs on docs.rs
- should we remove all ```rust given it is implied by ``` ?
2025-10-14 10:25:42 +02:00
Agnes Leroy
cf3f25efdd chore(gpu): add missing syncs in linearalgebra functions and aes 2025-10-14 09:23:11 +02:00
Agnes Leroy
c3ed1a7558 chore(gpu): internal renaming 2025-10-14 09:23:11 +02:00
Agnes Leroy
6347f25668 chore(gpu): synchronize after every release 2025-10-14 09:23:11 +02:00
Arthur Meyre
58ae2f5359 chore: don't import deprecated GenericArray, use the aes crate Block instead
- allow deprecated methods for now since aes 0.9 is not out yet
2025-10-13 13:20:34 +02:00
dependabot[bot]
9ea5c04be6 chore(deps): bump foundry-rs/foundry-toolchain from 1.4.0 to 1.5.0
Bumps [foundry-rs/foundry-toolchain](https://github.com/foundry-rs/foundry-toolchain) from 1.4.0 to 1.5.0.
- [Release notes](https://github.com/foundry-rs/foundry-toolchain/releases)
- [Changelog](https://github.com/foundry-rs/foundry-toolchain/blob/master/RELEASE.md)
- [Commits](82dee4ba65...50d5a8956f)

---
updated-dependencies:
- dependency-name: foundry-rs/foundry-toolchain
  dependency-version: 1.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-13 12:29:29 +02:00
Enzo Di Maria
79fdb33632 refactor(gpu): tests and long run tests for oprf 2025-10-10 17:32:34 +02:00
Agnes Leroy
91b263d480 chore(gpu): split integer utilities file 2025-10-10 14:49:02 +02:00
Thomas Montaigu
41a41278e6 chore(docs): fix docs for docs.rs
doc_auto_cfg is no longer available in nightly >= 1.92

This prevents the docs from being built on docs.rs, as docs.rs
uses the latest nightly.

This commit also makes the `make doc` target use the latest
nightly so that we can catch these errors.
2025-10-10 13:07:30 +02:00
Andrei Stoian
30938eec74 chore(gpu): use active streams in int_radix_lut 2025-10-09 21:59:15 +02:00
Nicolas Sarlin
516789bd5d chore(backward): add data for ks32 noise squashing server key 2025-10-09 14:03:21 +02:00
dependabot[bot]
027792d659 chore(deps): bump docker/login-action from 3.5.0 to 3.6.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.5.0 to 3.6.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](184bdaa072...5e57cd1181)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-08 13:24:30 +02:00
dependabot[bot]
1ed9d6a85e chore(deps): bump actions/stale from 10.0.0 to 10.1.0
Bumps [actions/stale](https://github.com/actions/stale) from 10.0.0 to 10.1.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](3a9db7e6a4...5f858e3efb)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-08 13:24:21 +02:00
Thomas Montaigu
126138a59d chore: only run KVStore benches on CPU
As it's the only backend that supports it
2025-10-08 11:52:14 +02:00
Nicolas Sarlin
241685fccc chore(backward): add data for ks32 client key, server key and ct 2025-10-08 10:27:06 +02:00
Thomas Montaigu
e739f43ec5 chore: add CompressedKVStore backward compat tests 2025-10-07 16:36:36 +02:00
pgardratzama
3073d60f11 fix(hpu): work around a criterion assert by reducing the number of elements in the division & modulus throughput bench 2025-10-07 14:23:07 +02:00
Himess
a05d228899 docs(wasm): remove obsolete TODO in CompactPkeCrs::deserialize 2025-10-07 11:24:37 +02:00
Arthur Meyre
63055d5ca8 test: add KS32 compatibility for the dp_ks_pbs128_packingks AP 2025-10-07 10:22:38 +02:00
Arthur Meyre
46a3008739 test: add KS32 compatibility for the dp_ks_ms AP 2025-10-07 10:22:38 +02:00
Arthur Meyre
f2674da031 test: add KS32 compatibility for the br_dp_ks_ms AP 2025-10-07 10:22:38 +02:00
Arthur Meyre
12c2a2a8b7 feat: make FheUint/FheInt/FheBool compatible with AP params for conformance
- update From impl for conformance parameters to manage the AP params
2025-10-07 10:22:11 +02:00
pgardratzama
b61dd21ef7 fix(hpu): HPU HLAPI ERC20 bench was missing pbs-stats feature 2025-10-07 10:14:43 +02:00
pgardratzama
ca4159f123 fix(hpu): fix overflow flag of OVF_MUL & OVF_MULS, also update simulation HPU config 2025-10-07 10:14:43 +02:00
pgardratzama
ab25919187 fix(hpu): throughput benchmarks were done 1 IOp per 1 IOp... 2025-10-07 10:14:43 +02:00
pgardratzama
1b38f8ccfc fix(hpu): fix expected value of ilog2 & modulus operation 2025-10-07 10:14:43 +02:00
Nicolas Sarlin
6a676551d8 chore(shortint): add metaparams for ks32 2025-10-07 09:51:09 +02:00
Thomas Montaigu
afb79a0b1c chore(hlapi): export CompressedKVStore
Without this, users cannot use the CompressedKVStore type
that is required for KVStore deserialization.
2025-10-07 09:07:28 +02:00
Thomas Montaigu
0277403c45 feat: add From<MetaParameters> for Config
This conversion is important to enable high-level API users
to make use of meta parameters.
2025-10-06 16:47:13 +02:00
Thomas Montaigu
18159d6458 chore(MetaParameters)!: move re-rand ksk params in
Re-Randomization is something that requires a
dedicated public key.

Thus we move the parameters of the KSK
into the struct for dedicated PKE parameters.

BREAKING CHANGE: This is a breaking change regarding the latest alpha
released. But MetaParameters did not seem to be used directly in
fhevm/kms.
2025-10-06 16:47:13 +02:00
Nicolas Sarlin
728409aef8 chore(hl): add cpk tests for ks32 2025-10-06 13:59:15 +02:00
Thomas Montaigu
034f3b3c25 feat(xofkeyset): add ks32 support 2025-10-06 13:59:15 +02:00
Nicolas Sarlin
c30e9c39f6 feat(shortint): add compact pke for the ks32 atomic pattern 2025-10-06 13:59:15 +02:00
Arthur Meyre
1513c3bc8c chore: bump TFHE-rs to 1.4.0 2025-10-06 13:26:54 +02:00
Arthur Meyre
e07f07c4c8 chore: bump tfhe-cuda-backend to 0.12.0 2025-10-06 13:26:54 +02:00
Arthur Meyre
81cc0c31b4 chore: constrain bytemuck < 1.24.0 as we don't have avx512 updated code 2025-10-06 13:24:16 +02:00
tmontaigu
c95e38e26f feat(hlapi): add flip operation 2025-10-06 11:07:12 +02:00
Enzo Di Maria
f0f3dd76eb feat(gpu): aes 128 2025-10-06 09:31:36 +02:00
Andrei Stoian
0604d237eb chore(gpu): multi-gpu debug target 2025-10-03 16:48:42 +02:00
Thomas Montaigu
e523fd2cb6 feat: add KVStore to the high level api
* Added Value type name to crate::integer::KVStore impl of Named trait
  as well as a bool to check we deserialize the correct value type
  (Radix vs SignedRadix)
* Add KVStore to high_level_api
* Add KVStore hlapi benches
* Remove specialized `[add,mul,sub]_to_slot` as `map` is now the
  intended API.
    - mul_to_slot was way slower than using `map`
    - add/mul_to_slot were a bit faster (~5% latency-wise), but returned
      less information (no old_value, no new_value, no boolean to check
      if the key matched)
    - Some known improvements can be made to map, which should result in
      it being better than add/sub_to_slot
* Add FheIntegerType trait to make the KVStore generic over
  FheUint/FheInt, which should make GPU integration "easy"
2025-10-03 15:01:23 +02:00
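A hypothetical usage sketch inferred from the commit message above. Every `KVStore` constructor and method name below (`new`, `insert`, `map`) is an assumption made for illustration and may not match the exported API; only the general shape (clear keys, encrypted values, `map` as the intended update entry point returning old value, new value and a match flag) comes from the message.

```rust
// Hypothetical sketch, not the actual KVStore API.
use tfhe::prelude::*;
use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint32};

fn kv_store_sketch() {
    let (client_key, server_key) = generate_keys(ConfigBuilder::default().build());
    set_server_key(server_key);

    // Assumed constructor and insert: clear keys, encrypted FheUint32 values.
    let mut store = tfhe::KVStore::new();
    store.insert(42u64, FheUint32::encrypt(10u32, &client_key));

    // Assumed `map` semantics: apply the closure to the slot matching the
    // encrypted key, and get back the old value, the new value and a boolean
    // telling whether any key matched.
    let key = FheUint32::encrypt(42u32, &client_key);
    let _outcome = store.map(&key, |value| value + 1u32);
}
```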
David Testé
33dee7673c chore(ci): enable multi-bit parameters for core_crypto benchmarks
Classical, multi-bit, or both parameter types can now be run in
CPU core_crypto benchmarks at the user's discretion.
2025-10-03 12:17:42 +02:00
Mayeul@Zama
9b5596ca66 feat(integer): add oprf over any range 2025-10-03 10:00:18 +02:00
Nicolas Sarlin
aefec1fe64 feat(shortint): add ct compression for the ks32 atomic pattern 2025-10-03 09:59:50 +02:00
Agnes Leroy
f9e876730a chore(gpu): remove support for drift noise reduction 2025-10-03 09:45:20 +02:00
pgardratzama
f3cddb5635 chore(hpu): force benches to run on specific board 2025-10-02 13:20:36 +02:00
pgardratzama
a395cfe9bf chore(hpu): new runners missing in actionlint whitelist 2025-10-02 13:20:36 +02:00
pgardratzama
602c6faf8a chore(hpu): update hpu-backend dependencies, fix pcc 2025-10-02 13:20:36 +02:00
pgardratzama
563502a6a6 chore(hpu): update tfhe-hpu-backend version, readme and run-on-hpu doc 2025-10-02 13:20:36 +02:00
pgardratzama
5f30569452 fix(hpu): update AMC firmware in bitstream to lower polling period 2025-10-02 13:20:36 +02:00
pgardratzama
39b81a8ded feat(hpu): move to new bitstream at 400MHz with GRAM_NB 3
- update SIMD_N and min_batch_size to 12 which seems to give better
  latency and ERC20 throughput
- support IOp on several lines in ami /proc file
- reduce amount of ERC_20_SIMD per batch in HLAPI bench
2025-10-02 13:20:36 +02:00
pgardratzama
da223b36b6 fix(hpu): reduce polling period of backend on iop ack file from 10 to 2us 2025-10-02 13:20:36 +02:00
JJ-hw
db16276715 chore(hpu): Remove all references to U55C, which is not supported anymore. 2025-10-02 13:20:36 +02:00
pgardratzama
a59742f518 fix(hpu): uuid comparison is now done in lower case for both values (metadata, ami read) 2025-10-02 13:20:36 +02:00
pgardratzama
2bf595d0e2 fix(hpu): missing bench numbers for less_than & less_or_equal because lower != less 2025-10-02 13:20:36 +02:00
Nicolas Sarlin
fb2b1a13e7 chore(core): fix encryption of single lwe to use the noise generator
This is aligned with what is done with the list encryption
2025-10-02 10:36:12 +02:00
Arthur Meyre
9fdaa983e3 chore: fix october typos 2025-10-01 14:32:41 +02:00
Andrei Stoian
73de886c07 chore(gpu): make deterministic long run GPU test 2025-10-01 13:37:05 +02:00
Nicolas Sarlin
45a849ad36 feat(shortint): add noise squashing for ks32 2025-10-01 10:36:11 +02:00
Nicolas Sarlin
ef5b984448 docs(core): fix fft128 blind rotation doc 2025-10-01 10:36:11 +02:00
Thomas Montaigu
6abed1f228 chore: complete gpu meta params
Add noise-squashing params, noise-squashing compression and re-rand
2025-10-01 10:30:22 +02:00
Agnes Leroy
71b45c14da chore(gpu): refactor subset_first and subset 2025-09-30 12:21:39 +02:00
David Testé
4f5d711c4e chore(bench): add crs size in wasm zero-knowledge benchmark
Done to improve result display in Grafana.
2025-09-30 10:42:19 +02:00
Arthur Meyre
2602c9e1b3 fix(hlapi): clear rerand metadata once rerand is done 2025-09-29 18:17:35 +02:00
Arthur Meyre
06dffc60bd chore: bump version to 1.4.0-alpha.3 2025-09-29 18:17:35 +02:00
Arthur Meyre
2a82076121 fix(shortint): accept trivial ciphertexts for rerand
- make sure to set their noise to NOMINAL once rerand is done
2025-09-29 17:51:44 +02:00
Beka Barbakadze
7549474aac feat(gpu): Implements optimized division algorithm for message_2_carry_2, when 4 or more gpus are used 2025-09-29 15:16:34 +02:00
dependabot[bot]
4fcff55745 chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.25 to 4.0.0.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](fc87bb5b5a...9e9574ef04)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-29 14:21:26 +02:00
dependabot[bot]
3d345a648b chore(deps): bump actions/cache from 4.2.4 to 4.3.0
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.4 to 4.3.0.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](0400d5f644...0057852bfa)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-29 14:21:17 +02:00
dependabot[bot]
3975e0115b chore(deps): bump JS-DevTools/npm-publish from 4.1.0 to 4.1.1
Bumps [JS-DevTools/npm-publish](https://github.com/js-devtools/npm-publish) from 4.1.0 to 4.1.1.
- [Release notes](https://github.com/js-devtools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](1fe17a0931...7f8fe47b3b)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-version: 4.1.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-29 14:21:08 +02:00
David Testé
6494a82fb3 chore(ci): split cargo builds into several jobs
Post-commit checks were the bottleneck in terms of running duration.
It's now split into 7 batches to improve parallelism.
Builds that are specific to Ubuntu are run in their own jobs, so
that only the build_tfhe_full recipe call remains in the os matrix.
A final check is performed to ensure all the checks have passed;
this very job is used as the branch protection rule.
2025-09-29 11:22:56 +02:00
David Testé
8aa60f8882 chore(ci): use large runners for windows and macos builds 2025-09-29 11:22:56 +02:00
Agnes Leroy
15cab8b413 chore(gpu): get decompress size on gpu without calling on_gpu 2025-09-29 11:00:18 +02:00
Agnes Leroy
23d46ba2bc fix(gpu): fix oprf output degree 2025-09-29 08:33:25 +02:00
Agnes Leroy
daf0e79e4a fix(gpu): fix get oprf size on gpu 2025-09-29 08:33:25 +02:00
Arthur Meyre
c5ad73865c chore: prepare alpha.2
- bump tfhe-cuda-backend to 0.12.0-alpha.2
- bump tfhe to 1.4.0-alpha.2
2025-09-27 11:35:27 +02:00
Agnes Leroy
9aab79e23a chore(gpu): fix compilation warning 2025-09-26 17:04:17 +02:00
Arthur Meyre
6ca48132e1 chore: bump TFHE-rs to 1.4.0-alpha.1 2025-09-26 15:08:09 +02:00
Agnes Leroy
f53c75636d chore(gpu): refactor oprf test, remove unused arg and fix multi-GPU for oprf 2025-09-26 13:19:34 +02:00
Arthur Meyre
ce63cabc05 chore: bump tfhe-cuda-backend to 0.12.0-alpha.1 2025-09-26 10:39:24 +02:00
dependabot[bot]
4fec2e17ae chore(deps): bump JS-DevTools/npm-publish from 3.1.1 to 4.0.1
Bumps [JS-DevTools/npm-publish](https://github.com/js-devtools/npm-publish) from 3.1.1 to 4.0.1.
- [Release notes](https://github.com/js-devtools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](19c28f1ef1...ad693561f8)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-version: 4.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:23:05 +02:00
dependabot[bot]
e87c36beb4 chore(deps): bump docker/login-action from 3.3.0 to 3.5.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.5.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v3.3.0...184bdaa0721073962dff0199f1fb9940f07167d1)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:22:15 +02:00
dependabot[bot]
24c6ffc24a chore(deps): bump actions/checkout from 4.2.2 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.2.2...08c6903cd8c0fde910a37f88322edcfb5dd907a8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-25 18:22:07 +02:00
JJ-hw
3680f796af feat(hpu): Now the mockup takes into account the field position from the regmap toml to generate its register read and write answers. 2025-09-25 14:00:07 +02:00
JJ-hw
3ded3fe7c9 fix(hpu): (From hcampos-zama) Missing a field. 2025-09-25 14:00:07 +02:00
Nicolas Sarlin
451cfe3aba fix(core): removed sanity check for scalar size before zk pke encryption 2025-09-25 13:44:06 +02:00
David Testé
1e28cf7f3b chore(ci): use artifact for cuda backend to speed up publishing 2025-09-25 10:27:44 +02:00
Thomas Montaigu
91b62c737f chore(ci): increase timeout to 8hours for unsigned integer tests 2025-09-24 18:22:35 +02:00
Nicolas Sarlin
da12bb29d8 chore(core): fix typo in ms noise test comment 2025-09-24 17:20:05 +02:00
Arthur Meyre
d60028c47c chore: bump tfhe-cuda-backend to 0.12.0-alpha.0 2025-09-24 15:57:30 +02:00
Arthur Meyre
d5b5369a9a chore: bump tfhe-zk-pok to 0.7.3 2025-09-24 15:52:33 +02:00
David Testé
9457ca786c chore(ci): fix release workflows token permissions 2025-09-24 15:01:24 +02:00
Thomas Montaigu
8b5d7321fb chore: split up more xof key gen function 2025-09-24 14:08:13 +02:00
Thomas Montaigu
736185bb31 feat: make XofKeySet serializable 2025-09-24 14:08:13 +02:00
Thomas Montaigu
e4b230aaf1 chore(XofKetSet): generate mod switch key after BSK 2025-09-24 14:08:13 +02:00
Thomas Montaigu
7ed827808c feat: add noise squashing compression to xof keyset 2025-09-24 14:08:13 +02:00
Thomas Montaigu
6e7aaac90f feat: add re randomization key to XofKeySet 2025-09-24 14:08:13 +02:00
Thomas Montaigu
d1c190fac6 feat(hlapi): add XofKeySet
This adds a specialized struct that is able to generate keys for the
high_level_api in a way that is compatible with the NIST/MPC protocol

There are still things to be done in later commits:
- Backward compatibility
- NIST compliant ClientKey generation
2025-09-24 14:08:13 +02:00
Arthur Meyre
7e1c8f7db5 chore: make NoiseSimulationLwe/NoiseSimulationGlwe properly private
- this prevents submodules of the noise_simulation module from being able to
partially update an output
- switch the NEG_INFINITY default value for Variance to NAN; NAN will fail
all comparisons and absorb all computations, which is a nice way to
propagate an undefined noise value in our case
2025-09-24 10:42:39 +02:00
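A minimal, self-contained illustration of why NAN works as an "undefined noise" sentinel; this is plain std floating point behaviour, nothing TFHE-rs specific:

```rust
// NAN is absorbing under arithmetic and fails every comparison, so an
// uninitialized noise variance can never silently pass a bound check.
fn main() {
    let undefined_variance = f64::NAN;

    // Any computation involving NAN stays NAN.
    let propagated = undefined_variance * 2.0 + 1.0;
    assert!(propagated.is_nan());

    // All ordered comparisons with NAN are false, so a bound check cannot
    // accidentally accept an undefined variance.
    assert!(!(undefined_variance <= 2f64.powi(-40)));
}
```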
Arthur Meyre
d30c2060bf test: implemented noise simulation traits for shortint keys
- now manages mean reduction and shifted ms
2025-09-24 10:42:39 +02:00
Arthur Meyre
4ccd5ea262 chore: update noise formulas with latest automated code gen 2025-09-24 10:42:39 +02:00
Arthur Meyre
1ab3022df8 chore: update parameters with mean reduction technique
- parameters checked visually against 1.1 for those that differ, all seem ok
2025-09-24 10:42:39 +02:00
Arthur Meyre
a257849c66 chore: various README fixes
co-authored-by: Sumitds074 <sumitds074@gmail.com>
2025-09-23 21:04:27 +02:00
Arthur Meyre
0f4f8dd755 chore(versionable): bump version to 0.6.2 2025-09-23 21:03:30 +02:00
Nicolas Sarlin
aaaa929c2e chore(tfhe): prepare release 1.4.0-alpha.0 2025-09-23 16:35:42 +02:00
David Testé
d397ea3a39 chore(bench): handle ks32 atomic pattern in key size measurements 2025-09-23 12:01:33 +02:00
Arthur Meyre
3e25536021 test: add multi bit blind rotate traits
- given the nature of the mod switch it seems easier to think in terms of
mod switch + blind rotate; the classic PBS might get updated in a similar
way, to be determined
2025-09-23 10:36:53 +02:00
Arthur Meyre
1c19851491 test: add multi bit modswitch noise simulation traits 2025-09-23 10:36:53 +02:00
Agnes Leroy
4b0623da4a chore(gpu): remove unused variable 2025-09-22 16:36:34 +02:00
Guillermo Oyarzun
d415d47894 chore(gpu): remove unnecessary nvtx lib dependency 2025-09-22 16:34:57 +02:00
Nicolas Sarlin
e22f9c09e3 chore(ci): fix audit workflow name 2025-09-22 15:31:55 +02:00
Nicolas Sarlin
4d02d3abb4 fix(hpu): clippy lint 2025-09-22 14:02:41 +02:00
Nicolas Sarlin
ae6f96e0ec chore(core): use a single rng for cpk encryption 2025-09-22 14:02:41 +02:00
Nicolas Sarlin
70e1828c58 chore(backward): add backward compat tests for rerand 2025-09-22 14:02:41 +02:00
Nicolas Sarlin
1b1e6a7068 chore(shortint): add rerand to the meta parameters 2025-09-22 14:02:41 +02:00
Nicolas Sarlin
fc447fd2d0 fix: backward compatibility tests with cache misses 2025-09-22 14:02:41 +02:00
Arthur Meyre
d5e5902f61 feat: add ciphertexts re-randomization 2025-09-22 14:02:41 +02:00
Thomas Montaigu
9f54777ee1 feat(integer): add KVStore compression and serialization 2025-09-22 09:39:59 +02:00
Nicolas Sarlin
4a73b7bb4b fix(versionable): use full type path in proc macro
This avoids name clashes if user re-defines the type
2025-09-19 16:03:56 +02:00
Guillermo Oyarzun
022cb3b18a fix(gpu): avoid out of memory when benchmarking throughput 2025-09-19 14:44:12 +02:00
David Testé
c4feabbfa3 chore(ci): revert package-lock.json 2025-09-19 09:30:15 +02:00
David Testé
3c6ed37a18 chore(ci): factorize release workflows by using a sub-workflow 2025-09-18 17:52:34 +02:00
Agnes Leroy
fe6e81ff78 chore(gpu): post hackathon cleanup 2025-09-18 16:30:45 +02:00
Andrei Stoian
87c0d646a4 fix(gpu): coprocessor bench 2025-09-18 13:56:55 +02:00
Agnes Leroy
e5b39a6d4d fix(gpu): fix memory leak in multi-gpu calculations 2025-09-18 13:55:03 +02:00
Arthur Meyre
27e2fbd972 chore: add implementation note for the NTT formula 2025-09-18 09:51:53 +02:00
Arthur Meyre
f54fbf52ce chore: bump tfhe-ntt version to 0.6.1 2025-09-18 09:51:53 +02:00
Arthur Meyre
2a0dfa5b17 fix(ntt): same update for 64 bits code 2025-09-18 09:51:53 +02:00
Arthur Meyre
a4841036b7 fix: make sure computations don't overflow for certain primes for 32 bits
- The original code seemed to assume that the Barrett reduction would not
overflow if p <= 2^31; this is incorrect, but failures are rare
- The correctness constraint has a bound much smaller than 2^31; some
primes bigger than the derived threshold can still use the fast code
provided a certain criterion is respected, which corresponds to a "lucky" case
of the Barrett reduction, and the new code now manages this

The maths are explained in https://blog.zksecurity.xyz/posts/barrett-tighter-bound/
and copiously in comments in the code
2025-09-18 09:51:53 +02:00
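For context, a reference (slow but always correct) 32-bit modular multiplication against which any fast Barrett path must agree. This sketch does not reproduce the Barrett fast path or its tightened bound; see the linked blog post for that.

```rust
// Reference 32-bit modular multiplication using 64-bit widening; a fast
// Barrett path must agree with this for every prime it accepts, which is
// what the tightened bound guarantees.
fn mul_mod_reference(a: u32, b: u32, p: u32) -> u32 {
    ((a as u64 * b as u64) % p as u64) as u32
}

fn main() {
    // A prime just under 2^31: exactly the range where the old assumption
    // ("Barrett never overflows if p <= 2^31") could break down.
    let p: u32 = 2_147_483_647; // 2^31 - 1
    let (a, b) = (2_000_000_011u32 % p, 1_999_999_999u32 % p);
    assert!(mul_mod_reference(a, b, p) < p);
}
```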
Andrei Stoian
1dcc3c8c89 chore(gpu): structure to encapsulate streams 2025-09-18 09:43:17 +02:00
Nicolas Sarlin
1a2643d1da fix(ci): use precise wasm-bindgen version for the cli 2025-09-17 13:17:57 +02:00
David Testé
bc257904e3 chore(ci): fix issue_comment trigger event for regression bench 2025-09-17 12:15:32 +02:00
Arthur Meyre
8982844a5b chore: adapt naming of traits to better match current scheme
- Standard -> Classic when referring to original PBS implementation
2025-09-17 10:32:40 +02:00
Arthur Meyre
e80d2548af fix: fix noise simulation modulus instantiation 2025-09-17 10:32:40 +02:00
Arthur Meyre
c0ab0a5752 chore: split noise simulation primitives in sub modules
- keep things easier to manage in terms of file size and content density
2025-09-17 10:32:40 +02:00
Arthur Meyre
f7bfe2f10c chore: uniformize noise check tools naming 2025-09-17 10:32:40 +02:00
Arthur Meyre
29c390d92c chore: reorg noise check tools 2025-09-17 10:32:40 +02:00
Pedro Alves
becd08db71 fix(gpu): fix an overflow that may happen when the user tries to allocate a huge amount of blocks 2025-09-16 16:17:32 -03:00
David Testé
ffd7470ef1 chore(ci): check if regression workflow should be run early
Before, any issue comment or label event would trigger the verify-actor job. Then the next job, prepare-benchmarks, would check if the rest of the workflow should run. Moving this check into verify-actor ensures the whole workflow runs only if required.
2025-09-16 20:55:54 +02:00
David Testé
a3750504c4 chore(ci): use dedicated token to sync repositories 2025-09-16 18:53:17 +02:00
David Testé
378c5ccb73 chore(ci): perform sync on push without third-party action
This is done to better handle git-lfs related changes when syncing
with another repository.
2025-09-16 16:29:44 +02:00
David Testé
4ba1787e12 chore(bench): add crs size in zk-pke benchmark names
This is done to get more details about the benchmarks when parsing
results.
2025-09-16 16:06:41 +02:00
David Testé
366d359441 chore(bench): measure ciphertext and key sizes at a large scale
Ciphertext sizes are measured at the HLAPI layer with several
parameter sets.
Key sizes are measured at the shortint level.
This benchmark now has its own dedicated GitHub workflow that runs,
at least, on the 24th of each month.
2025-09-16 15:43:36 +02:00
dependabot[bot]
0ece9e684a chore(deps): bump tj-actions/changed-files from 46.0.5 to 47.0.0
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 46.0.5 to 47.0.0.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](ed68ef82c0...24d32ffd49)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-version: 47.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-16 14:02:32 +02:00
David Testé
f8684d1f67 chore(ci): add regression benchmark workflow
Regression benchmarks are meant to be run in pull-request. They
can be launched in two flavors:
 * issue comment: using command like "/bench --backend cpu"
 * adding a label: `bench-perfs-cpu` or `bench-perfs-gpu`

Benchmark definitions are written in TOML and located at
ci/regression.toml.
While not exhaustive, the file can be easily modified by reading the
embedded documentation.

"/bench" commands are parsed by a Python script located at
ci/perf_regression.py. This script produces output files that
contain cargo commands and a shell script generating custom
environment variables. The Python script and generated files are
meant to be used only by the workflow
benchmark_perf_regression.yml.
2025-09-16 13:33:49 +02:00
Nicolas Sarlin
b4066df77f chore(ci): run cargo audit 2025-09-16 12:03:32 +02:00
Pedro Alves
6b94872a00 fix(gpu): add an assert to be sure the carry part has correct size in expand 2025-09-15 12:57:11 -03:00
Nicolas Sarlin
d88caff6dd fix(ci): fix serde root crate in tfhe-lints 2025-09-15 15:18:58 +02:00
Thomas Montaigu
75a265f93b fix(integer): fix aggregate_one_hot_vector
`aggregate_one_hot_vector` was modified when the KVStore was
added to support inputs where information in the blocks was not packed.
To detect whether blocks were packed it relied on the degree value.

However, the inputs may come from LUTs that had a precise degree, which
could lead us to believe the inputs were not packed.

To fix this we split it into 2 functions:
* aggregate_one_hot_vector
* aggregate_and_unpack_one_hot_vector

and use the correct one when we know whether the inputs are packed
2025-09-15 10:27:24 +02:00
Nicolas Sarlin
bfbf638fed fix(zk): add a size check for the public key 2025-09-12 11:10:06 +02:00
David Testé
01651d6fb2 chore(ci): update lattice estimator version 2025-09-12 11:07:25 +02:00
Pedro Alves
b2624d1a76 chore(gpu): refactor the indexing logic for the LWE expand 2025-09-11 13:10:18 -03:00
tmontaigu
9fb7b56629 feat(integer): add KVStore
The KVStore is a Hash Table, with homomorphic capabilities

The keys are meant to be clear integers, the values are meant to be
Radix/SignedRadix ciphertexts.

The ServerKey now has functions to perform operations that modify
an existing key/value pair using an encrypted key.
2025-09-11 13:55:42 +02:00
Arthur Meyre
24feeb8609 chore(ci): avoid backward compat workflow cancel
- re-use formulas from the integer workflow which also executes on main
2025-09-11 10:43:23 +02:00
pgardratzama
757c2fc828 chore(hpu): make hpu integer bench fast by default 2025-09-10 22:24:31 +02:00
pgardratzama
4ff0d6cac2 feat(hpu): integer bench update (adds mod, div -> div_mod), erc20_simd simd batch size read from iop prototype 2025-09-10 22:24:31 +02:00
pgardratzama
1530f52c79 feat(hpu): adds support of ERC20 SIMD in hpu ERC20 bench 2025-09-10 22:24:31 +02:00
David Testé
9918dacd6a chore(ci): change workflow jobs naming convention
The term "bpr" means Branch Protection Rule. It helps one to
identify any job that must pass before being able to merge to the
base branch.
2025-09-10 15:36:45 +02:00
tmontaigu
2b503acf18 chore(shortint): add consts for MetaParameters 2025-09-10 15:15:06 +02:00
tmontaigu
57cc326a64 feat(shortint): MetaParameters struct
There are a lot of different parameter types in tfhe-rs, related to
different but linked features. Thus when some PBS parameters are
selected, compatible compression parameters must be selected from the
possible parameters.

To make things easier, the MetaParameters struct is added; it stores
in one place parameters that can be used together.
2025-09-10 15:15:06 +02:00
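A conceptual sketch of that grouping idea, with illustrative field names rather than the actual tfhe-rs definition:

```rust
// Conceptual sketch only: field names and types are illustrative,
// not the actual tfhe-rs MetaParameters definition.
#[allow(dead_code)]
struct MetaParametersSketch<PbsParams, CompressionParams> {
    // The main compute (PBS) parameters.
    pub compute: PbsParams,
    // Compression parameters known to be compatible with `compute`,
    // stored alongside so users cannot mix incompatible sets.
    pub compression: Option<CompressionParams>,
}
```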
Arthur Meyre
84eb8aeb63 test(shortint): add BR + DP + KS + MS noise checks
- sanity check, noise measurement and pfail are done
2025-09-10 14:50:28 +02:00
Arthur Meyre
f09acfa581 chore: rename test files to remove redundant name fragment 2025-09-10 14:50:28 +02:00
Arthur Meyre
8335a6b6b5 chore(ci): run backward compat tests on merge to main
- this is to prime cache and check backward data on merge to main
2025-09-10 14:49:50 +02:00
tmontaigu
f80fd157ae fix(c-api): add missing safe_deser for ServerKey 2025-09-10 13:40:44 +02:00
Agnes Leroy
0ed97cfba8 chore(gpu): update sxm5 cost 2025-09-10 10:49:25 +02:00
Agnes Leroy
daee3f1850 chore(gpu): fix out of memory error in 4090 doc tests 2025-09-10 10:46:04 +02:00
tmontaigu
e8dc403ebd feat(integer): add flip operation
Add the flip(condition: BooleanBlock, a: T, b: T) -> (T, T)
operation that homomorphically flips/swaps two values if the
given encrypted boolean encrypts true
2025-09-10 09:44:28 +02:00
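A minimal sketch of the clear-text semantics of `flip`; the homomorphic version operates on encrypted inputs, this only states the reference behaviour:

```rust
// Plaintext reference behaviour of the homomorphic `flip` described above:
// if the condition is true the two values are swapped, otherwise they are
// returned unchanged.
fn flip_reference<T>(condition: bool, a: T, b: T) -> (T, T) {
    if condition {
        (b, a)
    } else {
        (a, b)
    }
}

fn main() {
    assert_eq!(flip_reference(true, 1u32, 2u32), (2, 1));
    assert_eq!(flip_reference(false, 1u32, 2u32), (1, 2));
}
```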
Pedro Alves
63e5504c80 doc(gpu): add a section about noise squashing 2025-09-09 13:10:23 -03:00
Nicolas Sarlin
d664e4ada6 docs(safe_ser): document panics if max size is too large 2025-09-09 17:03:23 +02:00
Pedro Alves
c78cc2d2e9 chore(gpu): add a benchmark for 128-bit multi-bit noise squashing
- Also, remove the lut indexes concept from the 128-bit multi-bit pbs. The entire backend assumes it does not exist (as it doesn't for classical PBS), so keeping it here would be a bit error prone.
2025-09-09 07:51:35 -03:00
Pedro Alves
b566d78621 chore(gpu): improve the 128-bit multi-bit PBS core crypto test 2025-09-09 07:51:35 -03:00
Pedro Alves
7da6786d59 feat(gpu): add support to the 128-bit multi-bit PBS on HL's noise squashing 2025-09-09 07:51:35 -03:00
Himess
6edf6b9e26 chore: gate backward_compatibility_tests.rs with shortint feature 2025-09-09 09:35:59 +02:00
Himess
6fde90ad9c chore(clap): Replace use of deprecated attributes
Replace deprecated #[clap(...)] attributes with #[arg]/#[command] and remove redundant use of value_parser
2025-09-09 09:35:59 +02:00
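A small sketch of what that migration looks like on a clap 4 derive CLI; the struct, program name and flag are made up for illustration:

```rust
// Sketch of the clap attribute migration described above.
use clap::Parser;

#[derive(Parser)]
#[command(name = "bench-tool")] // formerly #[clap(name = "bench-tool")]
struct Args {
    // formerly #[clap(long, default_value_t = 1)]
    #[arg(long, default_value_t = 1)]
    iterations: u32,
}

fn main() {
    let args = Args::parse();
    println!("running {} iterations", args.iterations);
}
```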
Agnes Leroy
5d70ae4232 fix(gpu): add missing broadcast lut 2025-09-09 08:47:53 +02:00
David Testé
89b36ebca0 chore(bench): remove 2-bits size for full precision bench on gpu
The GPU backend cannot accept fewer than 2 blocks for integer
benchmarks. Since 2-bit precision benchmarks are run with
*_MESSAGE_2_CARRY_2_* parameters, they create only one block of
ciphertext, making the benchmarks unsuitable for the GPU backend.
2025-09-08 12:24:24 +02:00
dependabot[bot]
bfc97385f4 chore(deps): bump actions/stale from 9.1.0 to 10.0.0
Bumps [actions/stale](https://github.com/actions/stale) from 9.1.0 to 10.0.0.
- [Release notes](https://github.com/actions/stale/releases)
- [Changelog](https://github.com/actions/stale/blob/main/CHANGELOG.md)
- [Commits](5bef64f19d...3a9db7e6a4)

---
updated-dependencies:
- dependency-name: actions/stale
  dependency-version: 10.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-08 10:39:56 +02:00
dependabot[bot]
7ab763abba chore(deps): bump codecov/codecov-action from 5.5.0 to 5.5.1
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.5.0 to 5.5.1.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](fdcc847654...5a1091511a)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-08 10:39:41 +02:00
dependabot[bot]
a05db18ba3 chore(deps): bump actions/setup-node from 4.4.0 to 5.0.0
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4.4.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](49933ea528...a0853c2454)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-08 10:39:27 +02:00
Guillermo Oyarzun
a3168eb1b5 feat(gpu): enable lut generation with preallocated buffers 2025-09-08 10:01:34 +02:00
Arthur Meyre
7fccb851d7 fix(csprng): harmonize behavior for UnixSeeder between small and big endian
- bytes are generated in a given order and an endianness needs to be applied
to the buffer for the generated number to make sense
- Seed(pub u128) exposes that endianness so it needs to be consistent for
outside users
2025-09-08 09:40:00 +02:00
Arthur Meyre
a78d5cc57b fix(csprng): make Seed interface less confusing wrt endianness
- From a user perspective, giving the same u128 seed, e.g. 1u128, should produce
the same behavior no matter the endianness of the system
2025-09-08 09:40:00 +02:00
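A tiny illustration of the point: building the seed value from bytes with an explicit byte order gives the same `u128` on little- and big-endian hosts, which is the consistency the fix aims for. This is plain std behaviour, not the actual UnixSeeder code.

```rust
// Reinterpreting the buffer with native endianness would give different seed
// values on different hosts; an explicit byte order does not.
fn main() {
    let bytes = [1u8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    let seed_value = u128::from_le_bytes(bytes);
    assert_eq!(seed_value, 1u128);
}
```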
Nicolas Sarlin
9c0d078e1a chore(zk): bump tfhe-zk-pok to 0.7.2 2025-09-08 09:30:34 +02:00
Nicolas Sarlin
6016755f9d fix(js): bump wasm bindgen version 2025-09-05 17:55:33 +02:00
Nicolas Sarlin
adcf9bc1f3 fix(zk): handle limit cases in the four_squares algorithm 2025-09-05 15:34:44 +02:00
pgardratzama
0a1651adf3 fix(hpu): update firmware in bitstream to allow SIMD operations 2025-09-05 10:42:36 +02:00
pgardratzama
11b540c456 chore(hpu): adds cost of hpu setups 2025-09-05 10:42:36 +02:00
pgardratzama
bd7df4a03b chore(hpu): enable hpu hlapi workflow and throughput bench in integer workflow 2025-09-05 10:42:36 +02:00
pgardratzama
2279d0deb8 chore(hpu): update hpu firmware (fix 2 bits operations issue) 2025-09-05 10:42:36 +02:00
pgardratzama
6fe24c6ab3 chore(hpu): update hpu integer bench scalar op names 2025-09-05 10:42:36 +02:00
pgardratzama
46c6adb0dc feat(hpu): create a new workflow to launch HLAPI benches for HPU 2025-09-05 10:42:36 +02:00
pgardratzama
c6aa1adbe7 chore(hpu): update benches to run new operations 2025-09-05 10:42:36 +02:00
Helder Campos
d3a867ecfe feat(hpu): High bandwidth HPU 2025-09-05 10:42:36 +02:00
Helder Campos
a83c92f28f feat(hpu): Soft Reset Support and fix some runtime registers 2025-09-05 10:42:36 +02:00
Helder Campos
3b48ef301e feat(hpu): Made two SIMD IOPs, ADD and ERC20. 2025-09-05 10:42:36 +02:00
Helder Campos
827a6e912c feat(hpu): Adding a massively parallel multiplier operation 2025-09-05 10:42:36 +02:00
Guillermo Oyarzun
eeccace7b3 fix(gpu): add missing syncs when releasing scalar ops and returning to old lut release 2025-09-05 09:53:00 +02:00
dependabot[bot]
01d1fa96d7 chore(deps): bump on-headers and serve in /tfhe/web_wasm_parallel_tests
Bumps [on-headers](https://github.com/jshttp/on-headers) to 1.1.0 and updates ancestor dependency [serve](https://github.com/vercel/serve). These dependencies need to be updated together.


Updates `on-headers` from 1.0.2 to 1.1.0
- [Release notes](https://github.com/jshttp/on-headers/releases)
- [Changelog](https://github.com/jshttp/on-headers/blob/master/HISTORY.md)
- [Commits](https://github.com/jshttp/on-headers/compare/v1.0.2...v1.1.0)

Updates `serve` from 14.2.3 to 14.2.5
- [Release notes](https://github.com/vercel/serve/releases)
- [Commits](https://github.com/vercel/serve/compare/14.2.3...v14.2.5)

---
updated-dependencies:
- dependency-name: on-headers
  dependency-version: 1.1.0
  dependency-type: indirect
- dependency-name: serve
  dependency-version: 14.2.5
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-05 09:14:55 +02:00
Arthur Meyre
10789ba3d1 chore(ci): configure tfhe-ntt tests to have an avx512 + IFMA instance
- ubuntu-latest is replaced by m6i.4xlarge to make sure all code is tested
in the tfhe-ntt crate
2025-09-05 09:14:12 +02:00
David Testé
4a0658389e chore(bench): make bits to prove customizable in zk benchmarks
Some applications, like blockchain, may want to prove fewer bits
than the CRS size allows.
2025-09-05 09:03:24 +02:00
David Testé
97574bdae8 chore(bench): add noise squash benchmark with compressions
This new benchmark is extracted from a use case.
From a compressed ciphertext, it measures the decompression, then noise squashes it, and finally compresses the result again.
2025-09-04 15:13:08 +02:00
Guillermo Oyarzun
60d137de6e feat(gpu): use mempools to optimize mem reuse 2025-09-04 13:23:18 +02:00
Guillermo Oyarzun
c2e816a86c fix(gpu): change minimum number of elements in benches 2025-09-04 11:03:27 +02:00
Pedro Alves
b42ba79145 feat(gpu): implement support for 128-bit compression on the HL API 2025-09-03 14:33:08 -03:00
Agnes Leroy
69b055c03f chore(gpu): update parameters for classical pbs128 2025-09-03 17:22:52 +02:00
Nicolas Sarlin
e2c7359057 chore(csprng): use getrandom as random source for unix seeder 2025-09-03 17:21:22 +02:00
Guillermo Oyarzun
baad6a6b49 feat(gpu): change broadcast lut to communicate the minimum possible 2025-09-03 15:20:58 +02:00
Guillermo Oyarzun
88c3df8331 feat(gpu): improve communication scheme 2025-09-03 15:20:58 +02:00
Nicolas Sarlin
e3686ed4ba chore(fft): remove dead store in stockham dif16 2025-09-02 16:48:56 +02:00
Nicolas Sarlin
b8a9a15883 doc: explain how to run first example 2025-09-02 16:48:33 +02:00
Nicolas Sarlin
a7d931449a doc(core): remove warning about glwe polynomial size of 1 2025-09-02 15:49:15 +02:00
Nicolas Sarlin
099bccd85f chore(safe_ser): check serialization header version 2025-09-01 17:29:47 +02:00
Nicolas Sarlin
b9d75c9f8f fix: remove references to 2^-64 pfail for GPU 2025-09-01 17:01:15 +02:00
Nicolas Sarlin
543517cea5 chore(core): use checked_mul for container indexing 2025-09-01 15:36:44 +02:00
Nicolas Sarlin
fed5c1db1e fix(core): potential overflow for glwe encrypt on 32b platforms 2025-09-01 15:36:06 +02:00
Nicolas Sarlin
c9249fe991 chore(core): size checks in Fourier128GgswCiphertext::from_container 2025-09-01 15:35:58 +02:00
Nicolas Sarlin
d308305eb1 doc(core): add some "panics" comments 2025-09-01 15:35:41 +02:00
Nicolas Sarlin
f66730deb6 chore(core)!: add ExactSizeIterator to izip macro, renamed izip_eq 2025-09-01 15:35:41 +02:00
dependabot[bot]
cd92146c38 chore(deps): bump actions/cache from 4.2.0 to 4.2.4
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.0 to 4.2.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4.2.0...0400d5f644dc74513175e3cd8d07132dd4860809)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 4.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-01 11:26:00 +02:00
dependabot[bot]
568f77f5f6 chore(deps): bump actions/setup-node from 4.0.2 to 4.4.0
Bumps [actions/setup-node](https://github.com/actions/setup-node) from 4.0.2 to 4.4.0.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](60edb5dd54...49933ea528)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-version: 4.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-01 11:25:51 +02:00
dependabot[bot]
f610712e97 chore(deps): bump foundry-rs/foundry-toolchain from 1.3.1 to 1.4.0
Bumps [foundry-rs/foundry-toolchain](https://github.com/foundry-rs/foundry-toolchain) from 1.3.1 to 1.4.0.
- [Release notes](https://github.com/foundry-rs/foundry-toolchain/releases)
- [Changelog](https://github.com/foundry-rs/foundry-toolchain/blob/master/RELEASE.md)
- [Commits](de808b1eea...82dee4ba65)

---
updated-dependencies:
- dependency-name: foundry-rs/foundry-toolchain
  dependency-version: 1.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-01 11:25:42 +02:00
dependabot[bot]
5d8f0b8532 chore(deps): bump actions/checkout from 4.1.7 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.1.7 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4.1.7...08c6903cd8c0fde910a37f88322edcfb5dd907a8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-01 11:25:35 +02:00
dependabot[bot]
11c04d0cc9 chore(deps): bump docker/login-action from 3.3.0 to 3.5.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.5.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...184bdaa072)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-01 11:25:27 +02:00
Pedro Alves
57ea3e3e88 chore(gpu): refactor the entry points for PBS in the backend 2025-08-29 16:46:27 -03:00
Pedro Alves
cad4070ebe fix(gpu): fix the decompression function signature in the backend 2025-08-29 21:09:40 +02:00
Pedro Alves
94d24e1f8b feat(gpu): implement the centered modulus switch technique to classical PBS 2025-08-29 11:38:26 -03:00
Pedro Alves
9a1c0f48f4 feat(gpu): implement 128-bit compression and add it to the integer API 2025-08-29 11:26:07 -03:00
Guillermo Oyarzun
ff29535eb0 feat(gpu): enable specialized pbs for 4_1_1 params 2025-08-29 10:19:45 +02:00
Guillermo Oyarzun
a8f391a442 chore(gpu): update 4_1_1 params to match specialized pbs 2025-08-28 17:54:59 +02:00
Nicolas Sarlin
34743ea304 fix(backward): badly generated backward data 2025-08-28 17:54:59 +02:00
Agnes Leroy
f62e5b3e3b chore(gpu): fix oom in 4090 tests 2025-08-28 16:12:52 +02:00
Andrei Stoian
6a7244105a chore(gpu): fix coprocessor bench 2025-08-28 15:45:41 +02:00
Andrei Stoian
50cfb8021a fix: release sanitizer 2025-08-28 14:21:57 +02:00
Andrei Stoian
c06b513182 chore(gpu): add valgrind and fix leaks 2025-08-28 14:21:57 +02:00
Nicolas Sarlin
677da3855e chore(ci): update dylint 2025-08-28 08:41:48 +02:00
Nicolas Sarlin
c52e2e32d0 chore(ci): ignore cbor and bcode files in typo checker 2025-08-28 08:41:48 +02:00
Nicolas Sarlin
fa48444611 chore(ci): update toolchain to nightly-2025-08-26 2025-08-28 08:41:48 +02:00
Andrei Stoian
71f427de9e chore(gpu): add assert macro 2025-08-27 10:32:43 +02:00
Nicolas Sarlin
451458df97 chore(csprng): bump version to 0.7.0 2025-08-26 19:32:40 +02:00
Nicolas Sarlin
44ac59099b chore(csprng): run clippy without the software-prng feature 2025-08-26 19:32:40 +02:00
Nicolas Sarlin
bafce4657a fix(csprng): use u64 for ChildrenCount and BytesPerChild 2025-08-26 19:32:40 +02:00
dependabot[bot]
3b5545b7a6 chore(deps): bump codecov/codecov-action from 5.4.3 to 5.5.0
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.3 to 5.5.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](18283e04ce...fdcc847654)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-26 16:06:07 +02:00
dependabot[bot]
167e96a30c chore(deps): update dtolnay/rust-toolchain requirement to e97e2d8cc328f1b50210efc529dca0028893a2d9
Updates the requirements on [dtolnay/rust-toolchain](https://github.com/dtolnay/rust-toolchain) to permit the latest version.
- [Release notes](https://github.com/dtolnay/rust-toolchain/releases)
- [Commits](e97e2d8cc3)

---
updated-dependencies:
- dependency-name: dtolnay/rust-toolchain
  dependency-version: e97e2d8cc328f1b50210efc529dca0028893a2d9
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-26 16:05:48 +02:00
Enzo Di Maria
14063ca3b3 fix(gpu): fix perf of ilog2 backend 2025-08-26 14:53:08 +02:00
David Testé
f8cf613640 chore(ci): update lattice estimator version 2025-08-25 16:35:28 +02:00
Andrei Stoian
a3f8dc6c2a chore(gpu): add coprocessor benchmarks in tfhe-rs gpu ci 2025-08-25 15:59:45 +02:00
Andrei Stoian
f776c737a1 chore(gpu): fix typos 2025-08-25 10:02:07 +02:00
Guillermo Oyarzun
c1c7fe78ed fix(gpu): fix memory leak in count consecutive bits 2025-08-22 17:39:32 +02:00
Nicolas Sarlin
cc6b074f6d chore(core): add check for polynomial size in schoolbook mul 2025-08-22 16:53:56 +02:00
David Testé
4b6942a0f8 chore(bench): add unbounded oprf integer benchmarks
Also move Cuda OPRF benchmark into the same file as CPU implementation
2025-08-22 15:01:53 +02:00
Nicolas Sarlin
53da030831 fix(shortint): set correct degree for noise squashed decompressed ct 2025-08-22 09:42:59 +02:00
Nicolas Sarlin
cedcbb99e7 chore(core): add len checks for polynomial lists 2025-08-22 09:42:50 +02:00
Arthur Meyre
58e02e56d1 chore: fix some warnings appearing during compilation with 1.89
- linked to new lints/warnings for elided lifetimes our old nightly
toolchain does not know about
2025-08-21 18:13:05 +02:00
Guillermo Oyarzun
827cea966b chore(gpu): fix nvtx labels and a comment 2025-08-21 18:02:53 +02:00
tmontaigu
d389ea67a1 refactor!: Use NonZero<T> in DataKind
Change the type used to store a block count in
DataKind to NonZero. This makes it impossible to store
'empty' kinds such as DataKind::Unsigned(0), DataKind::Signed(0).

Also, when deserializing, if the count is zero an error will be
returned, adding an additional layer of sanitization.
2025-08-21 16:18:28 +02:00
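An illustrative sketch of the pattern, with simplified variant names rather than the real `DataKind` definition:

```rust
// Sketch only: variant and field names are simplified, not the exact
// tfhe-rs `DataKind` definition.
use std::num::NonZeroUsize;

#[allow(dead_code)]
enum DataKindSketch {
    Unsigned(NonZeroUsize),
    Signed(NonZeroUsize),
    Boolean,
}

// On deserialization, a zero block count is rejected instead of silently
// producing an "empty" kind.
fn unsigned_from_count(count: usize) -> Result<DataKindSketch, &'static str> {
    NonZeroUsize::new(count)
        .map(DataKindSketch::Unsigned)
        .ok_or("block count must be non-zero")
}

fn main() {
    assert!(unsigned_from_count(0).is_err());
    assert!(unsigned_from_count(4).is_ok());
}
```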
David Testé
0a28488079 chore(ci): add permission to github token to release crates
When using the crates.io trusted publishing feature, the GitHub token needs the `id-token: write` permission to be able to authenticate the workflow on the registry.
2025-08-21 09:04:39 +02:00
Nicolas Sarlin
8083990c30 chore(zk): prepare tfhe-zk-pok 0.7.1 2025-08-20 16:47:59 +02:00
Nicolas Sarlin
b67964f4a0 feat(zk): add ZeroizeZp type that is automatically zeroized on drop 2025-08-20 16:47:59 +02:00
David Testé
1647ec8f21 chore(bench): add 2 bits integer to full benchmarks
This is done to measure execution time on the FheBool equivalent for all operations.
2025-08-19 09:54:03 +02:00
Petar Ivanov
a77c66244c fix(core): improve FFT and NTT plan cache locking
Instead of always write-locking the plan maps first, read-lock them and
check if the entry for the given size is present. If not, write-lock and
insert it.

That reduces contention on the map lock, allowing multiple threads to
get an already created plan concurrently, without waiting on the write
lock.

Furthermore, use a (polynomial size, modulus) key for the NTT plan map,
avoiding an issue where the user would get the incorrect plan if a
different modulus is used for the same polynomial size.
2025-08-18 16:50:02 +02:00
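A minimal sketch of that double-checked locking pattern, using placeholder `Plan`/`make_plan` types rather than the actual tfhe-fft/tfhe-ntt ones:

```rust
// Sketch: read-lock first, only take the write lock when the plan for this
// (polynomial size, modulus) key is missing.
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct Plan; // placeholder for an FFT/NTT plan

fn make_plan(_size: usize, _modulus: u64) -> Plan {
    Plan
}

type PlanCache = RwLock<HashMap<(usize, u64), Arc<Plan>>>;

fn get_plan(cache: &PlanCache, size: usize, modulus: u64) -> Arc<Plan> {
    {
        // Fast path: most calls find an existing plan under the shared read
        // lock, so threads no longer contend on the write lock for cache hits.
        let map = cache.read().unwrap();
        if let Some(plan) = map.get(&(size, modulus)) {
            return Arc::clone(plan);
        }
    }
    // Slow path: take the write lock; `entry` also handles the race where
    // another thread inserted the plan between the two locks.
    let mut map = cache.write().unwrap();
    Arc::clone(
        map.entry((size, modulus))
            .or_insert_with(|| Arc::new(make_plan(size, modulus))),
    )
}
```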
dependabot[bot]
ce9647d3a9 chore(deps): bump actions/checkout from 4.2.2 to 5.0.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 4.2.2 to 5.0.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](11bd71901b...08c6903cd8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-18 14:11:55 +02:00
Nicolas Sarlin
7b7ad5bea0 chore(backward): test noise noise squashing for server key 2025-08-14 11:56:16 +02:00
Nicolas Sarlin
afd628c7b9 doc(backward): explain how to pull backward compat data 2025-08-14 11:56:16 +02:00
Mayeul@Zama
4909a8ef0e chore(backward): add data for multibit noise squashing 2025-08-14 11:56:16 +02:00
Mayeul@Zama
c8a9105953 chore(backward): add multi bit support 2025-08-14 11:56:16 +02:00
David Testé
b3f1a85e1d chore(bench): write parameters to disk for hlapi operations 2025-08-13 18:34:26 +02:00
Nicolas Sarlin
5fa8cc8563 fix(core): use of deprecated rayon repeatn 2025-08-13 15:00:15 +02:00
Arthur Meyre
a7dd071bd4 test(shortint): pbs 128 + compression test with new noise measurement 2025-08-13 09:16:49 +02:00
Arthur Meyre
eb6760a7c8 feat(core): add a primitive to build an LweCiphertextList from an Iterator 2025-08-13 09:16:49 +02:00
Arthur Meyre
7f0838270c chore(core): relax trait requirements for GLWE encryption/decryption 2025-08-13 09:16:49 +02:00
Arthur Meyre
1169096058 chore: fix whitespace in Makefile 2025-08-13 09:16:49 +02:00
Antoniu Pop
9316922e81 fix(benches): fix hlapi dex benchmark transfer function 2025-08-12 17:28:40 +01:00
dependabot[bot]
8ff73f7d73 chore(deps): bump actions/cache from 4.2.3 to 4.2.4
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.3 to 4.2.4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](5a3ec84eff...0400d5f644)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: 4.2.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 15:44:29 +02:00
dependabot[bot]
0c3bda3444 chore(deps): bump actions/download-artifact from 4.3.0 to 5.0.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.3.0 to 5.0.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](d3f86a106a...634f93cb29)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 5.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-12 15:44:22 +02:00
luory ✞
55eade03e6 chore: fix typo in comment section 2025-08-12 15:38:37 +02:00
David Testé
52b1946f25 chore(ci): use crates.io trusted publishing feature 2025-08-12 12:54:57 +02:00
Nicolas Sarlin
bc5c2f51ff fix(bench): store correct pfail from params 2025-08-12 09:44:37 +02:00
Enzo Di Maria
e5e54be4a4 refactor(gpu): moving unchecked_ilog2_async to the backend 2025-08-12 09:05:29 +02:00
Nicolas Sarlin
0aaadf04d9 chore(versionable): bump version to 0.6.1 2025-08-11 16:49:27 +02:00
Mayeul@Zama
4d1b917045 feat(shortint): add multibit noise squashing 2025-08-11 16:30:59 +02:00
Mayeul@Zama
a85b30a7b2 refactor(shortint): change NoiseSquashingPrivateKeyView fields 2025-08-11 16:30:59 +02:00
Mayeul@Zama
81fa0e43ee feat(core): add conformance for Fourier128LweMultiBitBootstrapKey 2025-08-11 16:30:59 +02:00
Mayeul@Zama
15bc0c6792 style(core): destructure conformance parameters 2025-08-11 16:30:59 +02:00
Mayeul@Zama
8b5de6d57d feat(core): add support for InputScalar!=OutputScalar for multi_bit_bootstrap 2025-08-11 16:30:59 +02:00
Nicolas Sarlin
54c6b9e50a feat(versionable): impl Versionize for Btree{Map, Set} 2025-08-11 13:47:27 +02:00
Arthur Meyre
e31333b2c7 feat: add missing into/from_raw_parts functions for compressed KSK material 2025-08-11 13:02:21 +02:00
Arthur Meyre
37ed32cf4f chore: fix typo in into_raw_parts function 2025-08-11 13:02:21 +02:00
Guillermo Oyarzun
4a3be71bd7 fix(gpu): create message extract lut only when needed 2025-08-11 10:38:31 +02:00
Arthur Meyre
a63207af9e chore(ci): add MSRV build to check we are compliant with what we announce
- have to downgrade param_dedup edition as 1.84 cannot handle 2024 in a
workspace
2025-08-08 18:06:29 +02:00
Arthur Meyre
4c4c7a47a5 chore(ci): remove old backward compat mechanism for branch fetching
- nowadays backward compat data is directly in the repo which made the old
mechanism obsolete
2025-08-08 18:06:29 +02:00
Arthur Meyre
dbc3924989 chore(ci): enable extended types in the docs.rs build 2025-08-08 18:06:29 +02:00
Arthur Meyre
04d4ccc16c chore(ci): remove TFHE_SPEC from Makefile
- this is a leftover from a complicated attempt at backward compatibility
no need to keep this
2025-08-08 18:06:29 +02:00
Arthur Meyre
9d4a9fe71e chore: check packing is possible before packing in integer noise squashing 2025-08-08 10:35:16 +02:00
David Testé
3b42f9873a chore(bench): write params to file for each zk benchmark on gpu
To be parsable each benchmark criterion ID must have their crypto
details written to a file.
2025-08-07 15:17:33 +02:00
pgardratzama
afd8f58a8d feat(hpu): update backend to support multiple V80 device, id of v80 is its serial number
- update psi64 to replace fw with stable version (3.1.0), remove psi16.hpu
2025-08-07 14:58:39 +02:00
Guillermo Oyarzun
1b92bcf476 feat(gpu): extra optimizations for 2_2 params kernels and bugs fixes 2025-08-07 09:34:32 +02:00
Guillermo Oyarzun
79d5db66d4 feat(gpu): use warp level optimizations for fft 2025-08-07 09:34:32 +02:00
Guillermo Oyarzun
d741e55218 feat(gpu): write specialized pbs keybundle for 2_2 params 2025-08-07 09:34:32 +02:00
Guillermo Oyarzun
ef5a391dc2 feat(gpu): write specialized pbs accumulate for 2_2 params 2025-08-07 09:34:32 +02:00
Enzo Di Maria
d1c417bf71 refactor(gpu): cleaning compression 2025-08-07 09:31:55 +02:00
Arthur Meyre
46a7229c81 chore: fix minimum version for cargo check
- this only works if the current major is the major we expect
2025-08-05 17:30:07 +02:00
Enzo Di Maria
852a06b330 refactor(gpu): orpf with grouped processing and for multi-gpu 2025-08-05 09:58:25 +02:00
Guillermo Oyarzun
ea200c3548 chore(gpu): enable nvidia mps in long run tests 2025-08-04 16:18:48 +02:00
Arthur Meyre
1a1b88362c chore: fix noise checks timeout again as there are TWO timeout locations 2025-08-04 09:48:39 +02:00
Mayeul@Zama
fe2dde0e0c chore(gpu): fix index type 2025-08-01 10:38:09 +02:00
Afounso Souza
e7e095b924 fix(gpu): fix typo
2025-08-01 10:21:54 +02:00
Andrei Stoian
7bf2ec6ff2 chore(gpu): fix warnings detection 2025-07-31 18:47:08 +02:00
Agnes Leroy
2d7e1b2293 chore(gpu): change active gpu count logic 2025-07-31 16:10:45 +01:00
Andrei Stoian
79aeeca3b2 fix(gpu): install gcc requested by the workflow 2025-07-31 15:52:39 +01:00
Andrei Stoian
e89d2f8b05 fix(gpu): install gcc requested by the workflow 2025-07-31 15:52:39 +01:00
Andrei Stoian
0b3ea4be9e fix(gpu): install gcc requested by the workflow 2025-07-31 15:52:39 +01:00
Andrei Stoian
68a7520e73 fix(gpu): install gcc requested by the workflow 2025-07-31 15:52:39 +01:00
Guillermo Oyarzun
a411e5720d fix(gpu): update soon deprecated version nvtx 2025-07-31 16:52:05 +02:00
Mayeul@Zama
5ee5569d0d chore: remove redundant gpu feature gate 2025-07-31 14:11:33 +01:00
Agnes Leroy
54d038ef30 chore(gpu): enhance scatter to check gpu count is ok 2025-07-31 13:11:52 +01:00
Agnes Leroy
b6e6abb066 chore(gpu): add corner case test for mul 2025-07-31 13:11:29 +01:00
Arthur Meyre
82a5cc7f2d chore(ci): increase timeout for noise checks 2025-07-31 12:00:15 +02:00
Guillermo Oyarzun
908922171d fix(gpu): remove unused pointer in squash and add some extra checks 2025-07-31 09:52:34 +01:00
Kendra Karol Sevilla
84f6a8082d fix(cuda): correct radix block mismatch check in LWE array validation 2025-07-31 09:28:48 +01:00
otc group
0bc59dca59 chore: fix typo in comment
2025-07-31 09:20:49 +01:00
Agnes Leroy
09ffc39b15 fix(gpu): fix inconsistent types 2025-07-31 08:14:45 +01:00
swarnabhasinha
099345df02 fix(api): Add min/max on owned types 2025-07-30 21:33:58 +02:00
Agnes Leroy
48c10e91f7 chore(gpu): fix gpu setup action for ci 2025-07-30 14:22:13 +01:00
Andrei Stoian
36eceaf05e feat(gpu): utility debug workflows in ci 2025-07-30 12:55:40 +01:00
Arthur Meyre
e8986cbd7c chore: setup CI for noise checks 2025-07-29 15:29:24 +02:00
Arthur Meyre
fc8063a59b test: add noise simulation framework for basic operators
- test secret key encryption + start of compute atomic pattern in shortint
- only supports classic PBS with drift mitigation currently
2025-07-29 15:29:24 +02:00
Arthur Meyre
65b034ef70 chore(core): update noise formulas 2025-07-29 15:29:24 +02:00
cryptoraph
2aa83c99ea fix(core_crypto): correct typo in GLWE keyswitch assertion message 2025-07-28 17:09:56 +02:00
cryptoraph
d78266e141 fix(cuda): correct typo in keyswitch error message 2025-07-28 17:09:56 +02:00
Pedro Alves
62e6504ef0 fix(gpu): fixes some wrong indexes used in cuda_set_device() 2025-07-28 12:43:13 +01:00
bigbear
209a8f1ad9 fix: correct typo in InvalidRangeError message 2025-07-25 17:18:57 +02:00
Guillermo Oyarzun
3621dd1ae7 chore(gpu): correct pfail in readme 2025-07-25 16:46:03 +02:00
Guillermo Oyarzun
b5a7199c15 chore(gpu): update cuda version in ci 2025-07-25 15:57:25 +02:00
Mayeul@Zama
09aaa4e045 chore: enable unused_imports lint in doctests 2025-07-23 11:09:23 +02:00
Mayeul@Zama
63203c58aa refactor(shortint): cleanup decompression key 2025-07-23 10:08:01 +02:00
Mayeul@Zama
03f8a134b3 chore: remove unused gitignore entry 2025-07-21 10:15:56 +02:00
Maximilian Hubert
981da1d3fc fix: fix typo 2025-07-18 18:16:52 +02:00
Nicolas Sarlin
0386090048 chore: missing from/into_raw_parts for noise squash comp priv key 2025-07-18 13:34:23 +02:00
Pedro Alves
7ecda32b41 fix(gpu): refactor broadcast_lut() so make it less error prone 2025-07-17 10:58:52 +01:00
tmontaigu
a4cfc12941 chore: add missing into/from_raw_parts for SQCompression 2025-07-17 11:20:30 +02:00
Alex Pikme
82b6f45785 fix: correct tuple element in izip! macro for 7 parameters 2025-07-17 10:54:45 +02:00
tmontaigu
82cc9d3884 docs: add UpgradeKeyChain 2025-07-16 17:57:10 +02:00
Agnes Leroy
c7785b7214 chore(gpu): add checks on noise levels in debug mode 2025-07-16 11:00:09 +01:00
Enzo Di Maria
a5c876fdac refactor(gpu): creating CudaScalarDivisorFFI for storing decomposed scalars and their metadata 2025-07-16 07:59:20 +01:00
Nicolas Sarlin
2d8ea2de16 feat(shortint): add pbs_order method to AtomicPatternKind 2025-07-15 17:35:47 +02:00
Andrei Stoian
494e0e0601 chore(gpu): add short op sequence test for GPU on PRs 2025-07-15 16:03:45 +02:00
tmontaigu
8c838da209 chore(integer): improve measurements
It seems that in
```rust
bench_group.bench_function(&bench_id, |b| {
  // some code
  b.iter(|| {
      // function to bench
  })
});
```
If we put code in the '// some code' part, it affects the measurements:
the slower this code is, the worse the measurements can be.

For many operations the gap is small (a few ms or no gap),
but for the division the gap was around 500ms.

So to reduce this, we move out what we can; moving
the keycache access is the most important aspect as it
costs around 70ms to 100ms.

A LazyCell is used so the keycache is only accessed if the bench is not
filtered out. This is the behaviour we had before this commit, and the
behaviour we want to keep so that running specific benches via regex
selection stays fast.

Also, for clean input benches, we use `iter` instead of `iter_batched`
as it makes more sense and should give more accurate results, since
iter_batched timing includes other things than just the timing of the
function.
2025-07-15 12:46:38 +02:00
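A sketch of the resulting pattern, assuming a criterion bench and std's `LazyCell`; the setup function, data, and bench name are placeholders, not the actual keycache code:

```rust
// Sketch: expensive setup (e.g. a keycache access) is wrapped in a LazyCell
// so it only runs if the bench is not filtered out, and `iter` is used for
// clean-input benches.
use std::cell::LazyCell;

use criterion::Criterion;

fn expensive_setup() -> Vec<u64> {
    // stand-in for a ~70-100ms keycache access
    (0..1_000).collect()
}

fn bench(c: &mut Criterion) {
    let data = LazyCell::new(expensive_setup);
    c.bench_function("my_bench", |b| {
        // First dereference triggers the setup exactly once, and only if
        // this bench actually runs.
        let data = &*data;
        b.iter(|| data.iter().sum::<u64>());
    });
}
```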
tmontaigu
c13587b713 fix(integer): fix non-parallel prop with noisy block 2025-07-15 12:43:41 +02:00
tmontaigu
8dea5cf145 feat(integer): truncate carry prop on trivial zeros
This changes the full_propagate_parallelized to not propagate
most significant blocks which are trivial zeros.

This is a small performance improvement, especially interesting
when having a bunch of FheUintX data, cast to FheUintY (Y > X)
and summing them (e.g. n FheUint2 values, cast to FheUint32 and doing the
sum to get the result on 32 bits)
2025-07-15 12:43:41 +02:00
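A sketch of the workload this targets, assuming the usual high level API setup with default parameters and the `CastFrom`/`FheDecrypt` traits from the prelude; the values are arbitrary:

```rust
// Small FheUint2 values cast up to FheUint32 and summed: casting up leaves
// the most significant blocks as trivial zeros, which the truncated carry
// propagation can now skip.
use tfhe::prelude::*;
use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint2, FheUint32};

fn main() {
    let (client_key, server_key) = generate_keys(ConfigBuilder::default().build());
    set_server_key(server_key);

    let small: Vec<FheUint2> = (1u8..=3)
        .map(|v| FheUint2::encrypt(v, &client_key))
        .collect();

    let sum = small
        .into_iter()
        .map(|v| FheUint32::cast_from(v))
        .reduce(|a, b| a + b)
        .unwrap();

    let clear: u32 = sum.decrypt(&client_key);
    assert_eq!(clear, 6);
}
```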
Agnes Leroy
0d41b4f445 chore(gpu): add bench command for cuda and update weekly bench 2025-07-11 14:04:32 +01:00
Agnes Leroy
068cbc0f41 chore(gpu): add hl api noise squash latency and throughput bench 2025-07-11 14:04:32 +01:00
Agnes Leroy
f8947ddff3 chore(gpu): remove nightly schedule now that ci is lighter 2025-07-11 12:43:36 +01:00
Pedro Alves
1b98312e2c fix(gpu): fix regression on ERC20 throughput
- partially revert changes done in fd79c4f972
- transfers for the GPU case should be measured using sequential
  operations (without rayon!)
2025-07-11 08:57:19 +01:00
Pedro Alves
d3dd010deb fix(gpu): reduces number of elements in the ZK throughput benchmark 2025-07-11 08:57:01 +01:00
Agnes Leroy
15762623d1 chore(gpu): minor refactor in sum ctxt 2025-07-10 16:24:02 +01:00
Beka Barbakadze
c6865ab880 fix(gpu): fix pbs128 multi-gpu bug
Signed-off-by: Beka Barbakadze <beka.barbakadze@zama.ai>
2025-07-10 15:54:27 +01:00
Enzo Di Maria
e376df2fa4 refactor(gpu): moving unsigned_scalar_div_rem and signed_scalar_div_rem to the backend 2025-07-10 09:24:13 +02:00
Arthur Meyre
bd739c2d48 chore(docs): uniformize paths in docs to use "-" instead of "_"
- this is to avoid conflicts with gitbook
2025-07-09 14:36:04 +02:00
Pedro Alves
9960f5e8b6 fix(gpu): Fix expand bench on multi-gpus 2025-07-09 09:17:55 +01:00
Nicolas Sarlin
776f08b534 chore(ci): remove close_data_pr workflow 2025-07-09 09:31:29 +02:00
David Testé
ac13eed3b1 chore(ci): allow git lfs sync between repositories
Since the integration of the HPU backend, some Git LFS references need to be synced along with the rest of the codebase. Using the valtech-sd/git-sync action, which is a fork of wei/git-sync, allows pushing git lfs references to another repository.
2025-07-09 09:07:48 +02:00
Arthur Meyre
17d3a492b6 chore: only run backward compat clippy on x86 machines
- older versions of the crates are only compilable with x86, disable on arm
for now
- revisit when the crates are split ?
2025-07-09 08:29:12 +02:00
Enzo Di Maria
ba87f1ba5e chore(gpu): removing useless arguments 2025-07-08 16:17:51 +02:00
Nicolas Sarlin
c70ad3374e chore(ci): allow workflows to run concurrently on main 2025-07-08 09:57:25 +02:00
Nicolas Sarlin
c7ec835e5f chore: adds params_to_file for noise squashing compression 2025-07-07 17:31:28 +02:00
Agnes Leroy
075b2259d3 chore(gpu): reduce ci time by reducing testing of unused parameters 2025-07-07 16:30:35 +01:00
Pedro Alves
23ebd42209 fix(gpu): fix compression throughput benchmark 2025-07-07 16:30:24 +01:00
Nicolas Sarlin
bb1ff363d3 chore(ci): use Cargo.lock for installed tools 2025-07-07 13:10:55 +02:00
Nicolas Sarlin
7bcd6b94da chore: use script to pull hpu files 2025-07-07 13:10:55 +02:00
Nicolas Sarlin
57cbab9fe1 chore(backward): integrate backward compat data
Code is taken from
59a6179831

Adapted to make ci work
2025-07-07 13:10:55 +02:00
Andrei Stoian
97ce0f6ecf feat(gpu): update GPU documentation 2025-07-07 09:44:43 +02:00
Nicolas Sarlin
b6c21ef1fe docs: describe noise squashed compression 2025-07-07 09:32:51 +02:00
Nicolas Sarlin
e599608831 chore(shortint): make decrypt_no_decode public 2025-07-07 09:30:14 +02:00
Arthur Meyre
f243491442 chore(docs): add features to the rust_configuration page 2025-07-04 17:06:15 +02:00
Arthur Meyre
b5248930a2 chore(docs): add handbook in explanation section 2025-07-04 17:06:15 +02:00
Arthur Meyre
2d280d98d2 chore(docs): add handbook in the security and cryptography section 2025-07-04 17:06:15 +02:00
Arthur Meyre
10b57f8a8e chore(docs): add link to GPU and HPU backend docs in the installation page 2025-07-04 17:06:15 +02:00
Arthur Meyre
242df05eb2 chore(docs): add links to GPU and HPU backend on front page 2025-07-04 17:06:15 +02:00
Arthur Meyre
899d4a7750 docs: add noise squashing documentation 2025-07-04 16:08:25 +02:00
Agnes Leroy
48dfeb21dc chore(gpu): refactor size tracker to avoid future bugs 2025-07-04 14:37:02 +01:00
Skylar Ray
a46ce3fb51 chore: fix typo in classic.rs 2025-07-04 13:33:15 +02:00
Arthur Meyre
192777bde6 chore(ci): handle unverified PRs to autoclose 2025-07-04 13:18:35 +02:00
Dmitry
3aa198311c fix: broken GPU arg due to typo 2025-07-04 11:04:14 +01:00
David Testé
7034d4ceb4 doc(bench): update benchmark results tables
All the results now use parameter sets with a p-fail of 2**-128.
CPU tables using parameter sets with p-fail 2**-64 are removed.
GPU tables for 1xH100 and 2xH100 are now replaced with the new
hardware standard: 8xH100-SXM5.
HPU results are added to the backend comparison table and integrate
the latest available operations.
2025-07-04 10:06:14 +02:00
Arthur Meyre
799ae92f59 chore: remove dead link from docs 2025-07-04 10:04:22 +02:00
Arthur Meyre
36e9371fdf test: use hamming weight = 1/2 for core noise tests
- allows for less variability and matches exactly what the noise
formulas expect for uniform binary secret keys
2025-07-04 09:55:35 +02:00
Pedro Alves
8c88678ee8 feat(gpu): implement 128-bit multi-bit PBS 2025-07-03 20:34:32 -03:00
leopardracer
e1beea5ecb chore: Update test_user_docs.rs 2025-07-03 20:08:13 +02:00
Agnes Leroy
701411044b chore(gpu): update SXM5 cost 2025-07-03 17:00:02 +01:00
JJ-hw
405fdec6b9 fix(hpu): Fix iop_propagate_msb_to_lsb_blockv: propagation in application was not done correctly 2025-07-03 14:31:59 +02:00
Agnes Leroy
b3355e2b2f chore(gpu): remove template from sum ciphertexts, add two missing delete 2025-07-03 12:51:29 +01:00
Agnes Leroy
e4d856afdf chore(gpu): update noise squashing parameters 2025-07-03 12:51:19 +01:00
Pedro Alves
22ddba7145 fix(gpu): refactor the (128-bit and regular) classical PBS entry point to remove the num_samples parameter
- fixes the throughput for those PBSs
- also fixes the throughput benchmark for regular PBSs
2025-07-03 08:23:09 -03:00
David Testé
d955696fe0 chore(bench): reduce number of bit sizes to benchmark
This is done to reduce execution time since 4-bit precision is not useful to measure.
2025-07-03 12:45:02 +02:00
Baptiste Roux
eb0b9643bb fix(hpu): Fix clippy_hpu_mockup makefile entry 2025-07-03 10:28:52 +02:00
Arthur Meyre
d68305e984 chore: change link to point to the FHE.org discord for support 2025-07-03 10:28:10 +02:00
Enzo Di Maria
3d64316c66 refactor(gpu): moving signed_scalar_div_async and get_signed_scalar_div to the backend 2025-07-03 08:52:04 +01:00
Agnes Leroy
4bba35e926 chore(gpu): remove m3_c3 & gf 3 params from multi-gpu tests to reduce ci time 2025-07-02 17:18:26 +01:00
Baptiste Roux
187159d9f9 chore(hpu): bump backend version 2025-07-02 17:31:45 +02:00
Nicolas Sarlin
0cf9f9f3bd chore(zk): bump tfhe-zk-pok to 0.7.0 2025-07-02 17:31:02 +02:00
tmontaigu
dcb6049441 chore: backward data test for CompressedSquashedNoiseCiphertextList 2025-07-02 16:51:05 +02:00
tmontaigu
7203cc3564 feat(hlapi): add CompressedSquashedNoiseCiphertextList 2025-07-02 16:51:05 +02:00
Agnes Leroy
b198c18498 chore(gpu): bump backend version 2025-07-02 15:34:10 +01:00
pgardratzama
916e6e6a61 chore(hpu): fix typo in comment of Event implementation
Co-authored-by: emmmm <155267286+eeemmmmmm@users.noreply.github.com>
2025-07-02 15:32:57 +02:00
pgardratzama
9ac776185a doc(hpu): fix spelling issue in data_versioning.md
Co-authored-by: futreall <86553580+futreall@users.noreply.github.com>
2025-07-02 15:32:57 +02:00
pgardratzama
28e44ca237 doc(hpu): Fix link to FPGA repository in the README
Co-authored-by: MozirDmitriy <dmitriymozir@gmail.com>
2025-07-02 15:32:57 +02:00
Baptiste Roux
6432b98591 chore(mockup): Add clippy target for tfhe_hpu_mockup
Also fix all clippy lints
2025-07-02 14:41:41 +02:00
Helder Campos
15cce9f641 fix(hpu): Fixing the llt scheduler
In RTL simulations, it is possible that a very strange HPU with a huge
number of batches and very few registers is randomized. In this case,
if the scheduler was configured to fill the batch before flushing, it
would run out of registers. The solution is to force a flush in this
scenario.
2025-07-02 14:41:41 +02:00
Baptiste Roux
5090e9152b chore: Revert "chore: allow to not perform the half case correction for mean compensation"
This reverts commit 00ffa3efdc.
2025-07-02 14:41:41 +02:00
Baptiste Roux
24572edb1c feat(hpu): Add support for centered modswitch.
Add new fields in HpuPBSParameters (log2_pfail and modulus_switch_type).
Also add a new parameter set definition in shortint for benchmark matching.

Remove the use of the use_mean_compensation register; this information is now embedded inside the parameter set definition.
Update the psi64.hpu archive with the newest bitstream
2025-07-02 14:41:41 +02:00
Helder Campos
303f67fe11 fix(hpu): Fixing the multiplication algorithm in LLT
It was failing before for nu > 5. Also corrected the initial degree
after the partial products, which decreases the number of PBSs to do
with nu > 5.
2025-07-02 14:41:41 +02:00
Arthur Meyre
86a40bcea9 chore: move gated import to section with feature gate in HL erc20 bench 2025-07-02 13:14:31 +02:00
Agnes Leroy
97c0290ff7 fix(gpu): revert avoid copy to host in sum ciphertexts
This reverts commit 2b57fc7bd8.
2025-07-02 08:30:12 +01:00
Agnes Leroy
3ba6a72166 chore(gpu): move sum ctxt lut allocation to host to save memory 2025-07-02 08:30:12 +01:00
tmontaigu
dbd158c641 feat(integer): add CompressedSquashedNoiseCiphertextList 2025-07-02 08:51:26 +02:00
Nicolas Sarlin
0a738c368a chore(backward): update backward data repo branch 2025-07-01 14:18:10 +02:00
Arthur Meyre
4325da72cf chore: allow to not perform the half case correction for mean compensation 2025-07-01 14:18:10 +02:00
Mayeul@Zama
e1620d4087 feat(shortint): add support for centered modulus switch in parameters 2025-07-01 14:18:10 +02:00
Mayeul@Zama
6805778cb8 feat: add centered modulus switch 2025-07-01 14:18:10 +02:00
Mayeul@Zama
802945fa52 feat(core): add missing APIs 2025-07-01 14:18:10 +02:00
Mayeul@Zama
fff86fb3b4 fix: fix feature gate 2025-07-01 14:18:10 +02:00
Nicolas Sarlin
950915a108 chore(ci): use the correct data branch in clippy_ws_tests 2025-07-01 14:18:10 +02:00
Andrei Stoian
5e6562878a chore(gpu): add cuda debug target for integer tests 2025-07-01 10:37:17 +02:00
Andrei Stoian
d0743e9d3d chore(gpu): refactor the gpu oom checker 2025-07-01 10:37:05 +02:00
Guillermo Oyarzun
981083360e feat(gpu): increase keyswitch occupancy 2025-07-01 09:54:14 +02:00
tmontaigu
848f9d165c feat: add upgrade key chain
This adds an UpgradeKeyChain struct
that can be used to easily upgrade parameters of ciphertexts
if some upgrade keys are provided
2025-07-01 09:37:16 +02:00
Beka Barbakadze
2b57fc7bd8 feat(gpu): Avoid copy to host in sum ciphertexts 2025-07-01 07:58:09 +01:00
Andrei Stoian
e3d90341cf chore(gpu): add abs to random op sequence test on GPU 2025-06-30 21:37:09 +02:00
Nicolas Sarlin
dd94d6f823 feat(zk)!: allow to forbid specific configs in zk conformance
BREAKING CHANGE:
- conformance for `CompactPkeProof` is now `CompactPkeProofConformanceParams`
- conformance for `shortint::ciphertext::zk::ProvenCompactCiphertextList` is now
	`ProvenCompactCiphertextListConformanceParams`
2025-06-30 18:05:27 +02:00
Helder Campos
25362b2db2 feat(hpu): Adding support for modulus switch mean compensation
Including the pfail 2e-128 parameter set.

Note: The HPU mockup still does not support mean compensation.
2025-06-30 16:01:39 +01:00
Arthur Meyre
fe5542f39e chore: add SLSA badge
Co-authored-by: Olexandr88 <radole1203@gmail.com>
2025-06-30 15:48:55 +02:00
Agnes Leroy
42112c53c2 chore(gpu): restore mul mem usage 2025-06-30 09:10:54 +01:00
Agnes Leroy
bc2e595cf5 fix(gpu): fix size tracker value 2025-06-27 17:12:11 +01:00
Enzo Di Maria
378b84946f refactor(gpu): moving get_scalar_div_size_on_gpu to backend and fixing gpu tests 2025-06-27 17:02:50 +02:00
Enzo Di Maria
8a4c5ba8ef refactor(gpu): moving unchecked_scalar_div_async to backend 2025-06-27 17:02:50 +02:00
Nicolas Sarlin
940a9ba860 chore(zk): enable tfhe-lints on zk pok 2025-06-27 14:34:25 +02:00
Nicolas Sarlin
c475dc058e feat(zk): add compact hash mode for zkv2 2025-06-27 14:34:25 +02:00
Arthur Meyre
215ded90c0 chore: make multi bit pbs 128 more flexible 2025-06-20 17:15:11 +02:00
Agnes Leroy
8a2d93aaa8 fix(gpu): compression memory check bug, size computation was incorrect 2025-06-20 15:45:01 +02:00
Arthur Meyre
5a48483247 fix(shortint): wrong LweDimension returned by prf multibit mod switched ct
- added multi bit param to uniformity PRF check
2025-06-20 12:08:19 +02:00
pgardratzama
702989f796 fix(hpu): it seems transfer_safe is not totally safe with HPU 2025-06-20 10:04:16 +02:00
pgardratzama
cb1e298ebe chore(hpu): modify workflow to fetch & pull bitstreams using to get git-lfs 2025-06-20 10:04:16 +02:00
Baptiste Roux
a271cedb05 fix(hpu): Remove some hardcoded filenames in tandem
Also enhance error handling related to user misconfiguration,
and fix a bug with ami devn reading
2025-06-20 09:04:22 +02:00
Arthur Meyre
9eb0e831f5 chore: fix use proper parameter for wasm bench
javascript and their nonsensical fallbacks be damned to eternal suffering
2025-06-19 19:34:04 +02:00
Enzo Di Maria
7e4abfa4ff refactor(gpu): moving extend_radix_with_sign_msb_async to backend 2025-06-19 14:51:02 +02:00
Nicolas Sarlin
ce7c15585e chore(zk): refactor hashes to reuse code between proof and verify 2025-06-19 13:48:20 +02:00
Nicolas Sarlin
58f7457660 chore(zk): rename verify_inner to verify_impl to match the proof 2025-06-19 13:48:20 +02:00
David Testé
2d224e75a1 chore(ci): set pull-requests permission to write in commit checks
This is mandatory according to the action documentation,
notably to be able to write issue comments within the pull request.
2025-06-19 13:45:44 +02:00
Agnes Leroy
e5a9145cce fix(gpu): fix perf regression introduced in 1936ec6d84 2025-06-19 13:34:36 +02:00
tmontaigu
f5f7213289 feat: improve division for 2_2 parameters
The improvement is to compute the quotient digit by digit and
not bit by bit.

This could also probably work for 3_3 and 4_4 but it is not a priority

This brings the 64-bit division down to ~5.5s from 8.6s
2025-06-19 13:03:40 +02:00
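As a rough illustration of the digit-by-digit idea (not the homomorphic implementation), here is a clear-value long division in base 4, i.e. one 2-bit block per step as with 2_2 parameters; the names and structure are mine, for exposition only.

```rust
/// Clear-value sketch: schoolbook long division processing one base-4 digit
/// (a 2-bit block, as in 2_2 parameters) per step instead of one bit per step.
fn div_rem_base4(numerator: u64, divisor: u64) -> (u64, u64) {
    assert_ne!(divisor, 0);
    let mut quotient: u64 = 0;
    let mut remainder: u128 = 0;
    let divisor_wide = divisor as u128;
    // 32 base-4 digits cover the 64-bit numerator, MSB first.
    for block in (0..32).rev() {
        let digit = ((numerator >> (2 * block)) & 0b11) as u128;
        remainder = (remainder << 2) | digit;
        // The quotient digit lies in 0..=3, so at most 3 conditional
        // subtractions are needed per block.
        let mut q_digit: u64 = 0;
        while remainder >= divisor_wide {
            remainder -= divisor_wide;
            q_digit += 1;
        }
        quotient = (quotient << 2) | q_digit;
    }
    (quotient, remainder as u64)
}

fn main() {
    assert_eq!(div_rem_base4(8_675_309, 97), (8_675_309 / 97, 8_675_309 % 97));
}
```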
tmontaigu
b917cf4530 feat(core): plug XofSeed 2025-06-19 09:57:29 +02:00
Mayeul@Zama
1873b627d6 chore: add TODO for Glwe MS 2025-06-18 16:54:12 +02:00
Mayeul@Zama
cb8d753ea6 refactor(core): cleanup unused function 2025-06-18 16:54:12 +02:00
Mayeul@Zama
88e8fa6da9 refactor(core): separate BR and MS 2025-06-18 16:54:12 +02:00
Mayeul@Zama
0ea7c29dbd refactor(core): separate BR and MS 2025-06-18 16:54:12 +02:00
Mayeul@Zama
d90bd8bf89 feat(core): add grouping_factor to MultiBitModulusSwitchedCt trait 2025-06-18 16:54:12 +02:00
Mayeul@Zama
bf5e4474a2 feat(core): add ModulusSwitchedCt trait 2025-06-18 16:54:12 +02:00
Mayeul@Zama
7fd5321b78 refactor(core): std_multi_bit_blind_rotate_assign takes msed input 2025-06-18 16:54:12 +02:00
Mayeul@Zama
c168dea284 refactor(core): rename MultiBitModulusSwitchedCt MultiBitModulusSwitchedLweCiphertext 2025-06-18 16:54:12 +02:00
Beka Barbakadze
1936ec6d84 refactor(gpu): refactor and optimize sum_ciphertext in cuda backend 2025-06-18 16:44:20 +02:00
Agnes Leroy
9864dba009 fix(gpu): fix degrees after scalar bitxor 2025-06-18 15:50:59 +02:00
Nicolas Sarlin
8c1ece4fd9 refactor(shortint): improve handling of empty compressed ct list 2025-06-18 11:08:59 +02:00
Nicolas Sarlin
343cad641c chore: TFHE-rs 1.3.0 2025-06-18 10:20:49 +02:00
David Testé
39d77299ed chore(bench): harmonize dex benchmark function names 2025-06-18 09:47:57 +02:00
Arthur Meyre
c841e3be6e chore: add new codeowners for HPU and CUDA code 2025-06-17 11:07:38 +02:00
Nicolas Sarlin
aaeb46f074 feat(shortint): add compression for squashed noise ciphertexts 2025-06-16 18:08:51 +02:00
Nicolas Sarlin
8f7281c219 fix(shortint): handle empty list in compression 2025-06-16 18:08:51 +02:00
tmontaigu
11e86e6162 chore(csprng): bump to 0.6.0
Some (breaking) changes were made to a trait in CSPRNG
2025-06-16 14:05:47 +02:00
David Testé
41c92e06a8 chore(ci): add missing env variable in cuda release workflow 2025-06-16 12:55:31 +02:00
Pedro Alves
f25f394763 feat(gpu): add support to GPU-accelerated expand to HL's CompactCiphertextList
- Drops integer's CudaCompactCiphertextList
2025-06-13 19:18:11 +02:00
pgardratzama
3230c4bb97 chore(hpu): devo was still there, tool compiled in release mode 2025-06-13 16:46:31 +02:00
Baptiste Roux
159a85fc8c chore(hpu): Fix some lint in hpu-v80 ffi 2025-06-13 16:46:31 +02:00
Helder Campos
4cec6fb247 chore(hpu): Fixed some README.md typos.
Also rename pdi_mgmt to hpu_archive_mgmt to match new naming
2025-06-13 16:46:31 +02:00
Baptiste Roux
16c997d686 feat(hpu): Add support for tandem pdi
Add support for hpu archive and Fpga reloading.
Rely on Tandem implementation for hot-reloading of FPGA.
Add reload procedure inside ffi/v80 backend.

Now, when starting an application on HpuV80, a version check is done first.
If the versions mismatch, a pdi reload is triggered
2025-06-13 16:46:31 +02:00
David Testé
dcd1af72d4 chore(ci): fix missing env variable for 4090 benchmark 2025-06-13 11:43:16 +02:00
Agnes Leroy
9bf9107e9e feat(gpu): add memory tracking functions for rand/rand bounded 2025-06-12 17:50:15 +02:00
Andrei Stoian
7986e0bf1d chore(gpu): skip packing ks test if it needs more ram than available 2025-06-12 17:47:10 +02:00
Agnes Leroy
55179c52a7 chore(gpu): fix ci on H100 2025-06-12 16:22:57 +02:00
Emmanuel Ferdman
c103f0380c fix(hpu): modernize logger interface
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-06-12 09:04:31 +02:00
Nicolas Sarlin
8024753be0 fix(zk): test failed with trivial ct equal to 0 2025-06-11 18:40:32 +02:00
Nicolas Sarlin
506fdfbdd1 chore(zk): use Shake256 XoF instead of rand to generate gamma values 2025-06-11 18:03:12 +02:00
David Testé
a0e08b80e6 chore(ci): use gh cli instead of api calls done with curl
This refactor simplifies reading of the data_pr_close workflow.
2025-06-11 16:55:37 +02:00
Enzo Di Maria
d06c3e3926 refactor(gpu): moving sub_assign_async to backend 2025-06-11 16:34:46 +02:00
Arthur Meyre
2bf9d25402 feat: add multi bit pbs 128 with GGSW preparation in FFT domain 2025-06-11 14:47:05 +02:00
Arthur Meyre
fe73f101cc test: add test for an adapted parameter set for std multi bit pbs 128 2025-06-11 14:47:05 +02:00
Arthur Meyre
d12de58284 feat: add std pbs 128 2025-06-11 14:47:05 +02:00
Arthur Meyre
d4ea8cd85f feat: add parallel conversion to Fourier 128 for LweMultiBitBootstrappingKey 2025-06-11 14:47:05 +02:00
Arthur Meyre
64f2befd6c feat: add Fourier 128 variant of LweMultiBitBootstrappingKey 2025-06-11 14:47:05 +02:00
Arthur Meyre
ce372dcea9 refactor(core): split the LWE multi bit bsk entity file
- to prepare for fft128 variant addition
2025-06-11 14:47:05 +02:00
Agnes Leroy
46b4958c9c feat(gpu): add memory tracking functions for booleans 2025-06-11 13:32:06 +02:00
Agnes Leroy
b25bcbc607 feat(gpu): add mem tracking for eq/ne 2025-06-11 13:32:06 +02:00
Agnes Leroy
5dfacc7975 feat(gpu): add memory tracking for compression and decompression 2025-06-11 11:49:09 +02:00
Guillermo Oyarzun
3d857f62cc refactor(gpu): return trivial indexes after ms noise reduction 2025-06-11 11:28:10 +02:00
Nicolas Sarlin
54c314cd71 chore(ci): always run zk tests 2025-06-11 10:29:53 +02:00
Nicolas Sarlin
38a9853140 chore(zk): check crs conformance in backward compat test 2025-06-11 10:29:53 +02:00
Nicolas Sarlin
360097d70e chore(zk): use random seed in tests 2025-06-11 10:29:53 +02:00
Nicolas Sarlin
c94a76a85a fix(zk): overflow in noise tests 2025-06-11 10:29:53 +02:00
Nicolas Sarlin
be1ade6dd2 chore(zk)!: use an 8-byte dsep and a 128-bit SID in hash functions
BREAKING_CHANGE:
- PublicParams::from_vec methods have been updated to take an 8-byte dsep and an
	SID. CRSs generated before this PR are still supported.
2025-06-11 10:29:53 +02:00
Pedro Alves
53845b298a fix(gpu): fix the packing keyswitch buffer not being allocated on large parameter sets 2025-06-11 08:58:09 +02:00
David Testé
11c0340eca chore(bench): plug server-side proof in zk benchmarks 2025-06-10 18:00:39 +02:00
Baptiste Roux
5e966a3d78 chore(hpu): changes based on code review 2025-06-10 17:43:35 +02:00
Baptiste Roux
443e02215f feat(hpu): Add recent IOp in integer benchmarks 2025-06-10 17:43:35 +02:00
Baptiste Roux
3c632c06ba chore(hpu): Fix/Changes to be compliant with CI 2025-06-10 17:43:35 +02:00
Baptiste Roux
833b593845 feat(hpu): Add support for Lead/Trail/Count/ilog2 in high-level API 2025-06-10 17:43:35 +02:00
JJ-hw
a20c90b090 feat(hpu): Add ILOG2/COUNT0/COUNT1/LEAD0/LEAD1/TRAIL0/TRAIL1 IOp.
Those IOps are tested within the new bitcnt category
2025-06-10 17:43:35 +02:00
Baptiste Roux
71e86f0522 feat(hpu): Add support for Shift/Rotate in high-level API
Scalar version not supported yet
2025-06-10 17:43:35 +02:00
Baptiste Roux
cb45f7f429 feat(hpu): Add Rot/Shift IOp
A proper implementation of the scalar version needs an update in the firmware,
and thus it hasn't been done yet.
2025-06-10 17:43:35 +02:00
Baptiste Roux
05a51d47fa feat(hpu): Add check with Pbs gid definition
Currently it's a runtime check, but it prevents assigning the same Gid to two different LUTs
2025-06-10 17:43:35 +02:00
Baptiste Roux
2eb1ccd128 feat(hpu): Add support for DivRem in high-level API 2025-06-10 17:43:35 +02:00
JJ-hw
b7a518b9ee chore(hpu): Cleanup code following clippy advices
Also applied cargo fmt
2025-06-10 17:43:35 +02:00
JJ-hw
39faca219f feat(hpu): Add modulo. Note: not an optimized version; uses the same algorithm as the division. 2025-06-10 17:43:35 +02:00
JJ-hw
fe7a8915bc feat(hpu): Add DIV/Divs IOp support
Those IOps output the quotient and remainder for a numerator and divisor of the same size.
2025-06-10 17:43:35 +02:00
Baptiste Roux
96c8c44c71 feat(hpu): Enable some erc20 impl
With the support of overflowing ops, those impls are now available on Hpu
2025-06-10 17:43:35 +02:00
Baptiste Roux
24d581afeb feat(hpu): Add support for Neg operation 2025-06-10 17:43:35 +02:00
Baptiste Roux
3c383ba18f feat(hpu): Add support for overflowing ops in the high-level-api 2025-06-10 17:43:35 +02:00
Baptiste Roux
7622122d90 feat(hpu): add support for overflowing iop
Add new Hpu IOps with overflowing flags. Currently there is only a simple Ilp implementation.
It should be extended to the llt one in the future.
2025-06-10 17:43:35 +02:00
Baptiste Roux
949d3e2153 feat(hpu): Add support for Min/Max in hl-api 2025-06-10 17:43:35 +02:00
Baptiste Roux
fb82c652e2 feat(hpu): Remove nops features
This feature was superseded by the trivial one.
The Nops option wasn't relevant or useful anymore.
2025-06-10 17:43:35 +02:00
Baptiste Roux
3af1937250 feat(hpu): add trivial execution in mockup
Enhance FW debugging with quicker execution time.
Also add a trivial option in the hpu regression test for quick regression
testing of the FW generation with the mockup.
2025-06-10 17:43:35 +02:00
Baptiste Roux
05f7869c88 feat(mockup): Add support for trivial ciphertext display
Here to enhance IOp firmware debug experience
2025-06-10 17:43:35 +02:00
Nicolas Sarlin
ab0ec4a238 chore(zk): mark non-pke proofs as experimental 2025-06-10 17:07:33 +02:00
dependabot[bot]
167329c52a chore(deps): bump dtolnay/rust-toolchain
Bumps [dtolnay/rust-toolchain](https://github.com/dtolnay/rust-toolchain) from 888c2e1ea69ab0d4330cbf0af1ecc7b68f368cc1 to b3b07ba8b418998c39fb20f53e8b695cdcc8de1b.
- [Release notes](https://github.com/dtolnay/rust-toolchain/releases)
- [Commits](888c2e1ea6...b3b07ba8b4)

---
updated-dependencies:
- dependency-name: dtolnay/rust-toolchain
  dependency-version: b3b07ba8b418998c39fb20f53e8b695cdcc8de1b
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-10 17:04:02 +02:00
Arthur Meyre
8a280642a7 chore(shortint): alias correct parameters for KS PKE to compute
- has no impact today as parameters were the same, however given the
comment it is best to expose the proper KS from PKE to compute parameters
2025-06-10 17:03:41 +02:00
Arthur Meyre
b29e82b96e test: add noise test check for High Level API 2025-06-10 17:03:41 +02:00
Arthur Meyre
9bda365691 chore(core): add noise distribution test tooling 2025-06-10 17:03:41 +02:00
Arthur Meyre
b686d5cb6a chore: update .gitignore 2025-06-10 17:03:41 +02:00
David Testé
2829c9cc92 chore(ci): parse all pbs counts files for dex benchmarks 2025-06-10 14:19:24 +02:00
Guillermo Oyarzun
0d81623a23 feat(gpu): add squash noise in the hlapi 2025-06-10 13:14:29 +02:00
David Testé
13d797fe9b chore(ci): add bench type to zk benchmarks artifact name
This is done to avoid name collision when both latency and throughput benchmarks are executed within the same workflow run.
2025-06-10 12:30:42 +02:00
Baptiste Roux
0ba2e5c6fd chore(hpu): fix issue with linter
The issue arose following a version update.
This function is clearly unused but kept for huge memory API coherency,
and should be used in the future.
2025-06-06 17:55:53 +02:00
Enzo Di Maria
ad3edf3cc3 refactor(gpu): moving scalar_mul_high_async to backend 2025-06-06 17:55:53 +02:00
Pedro Alves
16ff092ce4 fix(gpu): fix race condition on expand when on multi-gpu 2025-06-06 09:34:09 +02:00
Pedro Alves
f511bdc279 chore(gpu): add HL test for GPU expand and fix an issue with exception handling 2025-06-06 09:34:09 +02:00
Agnes Leroy
d3f36b1d86 chore(gpu): fallback to a6000 in case L40's are out of stock 2025-06-05 16:55:10 +02:00
Mayeul@Zama
fbb6b9ace3 refactor(shortint): rename apply_blind_rotate apply_ms_blind_rotate 2025-06-05 14:56:09 +02:00
Mayeul@Zama
d1d1579187 refactor(core): separate modulus_switch from multi_bit_blind_rotate_assign 2025-06-05 14:56:09 +02:00
Mayeul@Zama
a268dced53 refactor(core): refactor MultiBitModulusSwitchedCt 2025-06-05 14:56:09 +02:00
Mayeul@Zama
13b2372624 refactor(core): refactor packing 2025-06-05 14:56:09 +02:00
Agnes Leroy
e2d622b186 feat(gpu): add memory tracking for neg and scalar div 2025-06-05 13:35:01 +02:00
Agnes Leroy
f55007c6fb feat(gpu): add mem tracking for div 2025-06-05 13:35:01 +02:00
Agnes Leroy
3ca85bf904 feat(boolean): add move_to_current_device for booleans 2025-06-05 11:45:54 +02:00
Andrei Stoian
ec78318af3 chore(gpu): prevent nvToolsExt inclusion when not profiling
chore(gpu): prevent nvToolsExt inclusion when not profiling

fix(gpu): stdint
2025-06-05 11:45:32 +02:00
David Testé
8a312afbb7 chore(ci): fix benchmark matrix parameters generation
The previous implementation was done to please Zizmor and avoid
template-injection findings during analysis. This had a downside:
using the env directive implies a double interpolation that messes with
fromJSON() later and builds badly formatted matrix parameters.
2025-06-05 11:15:39 +02:00
David Testé
b61f1d864c chore(ci): check ks32 parameters with lattice estimator
A small refactoring has been done to handle ciphertext modulus in a more convenient way.
2025-06-04 17:19:17 +02:00
Agnes Leroy
15983a0718 feat(gpu): add memory tracking for mul/scalar mul 2025-06-04 16:42:51 +02:00
David Testé
856fc1a709 chore(ci): ignore stale action refs on rust-toolchain action
This action doesn't create releases, so the action refs don't point to a known tag.
If this zizmor finding is not ignored, the continuous integration pipeline breaks.
2025-06-04 11:48:01 +02:00
Pedro Alves
fe0a195630 chore(gpu): switches from the TBC PBS to the other variants for many inputs 2025-06-04 05:45:53 -03:00
tmontaigu
aca7e79585 feat(csprng): add Xof random generation
This adds a new kind of seed to the csprng.

When created with such a seed, the AES-CTR random generator
initialization changes:
- The AES key used is initialized differently
- The AES-CTR starts with a CTR that may not be 0

The changes make it so that the counter still goes from 0..MAX,
but now the AES-CTR encrypts the counter + some offset, allowing
both the regular behavior and the new one to be kept
2025-06-04 09:57:18 +02:00
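A conceptual sketch of the counter-plus-offset idea described above; this is not the tfhe-csprng code, and `permute_block` is only a stand-in for the AES permutation.

```rust
/// Illustrative counter-mode stream: the internal counter still runs over
/// 0..MAX, but the block fed to the cipher is `counter + start_offset`,
/// so an Xof-style seed can pick a non-zero starting point while keeping
/// the regular behavior when the offset is 0.
struct CtrStream {
    counter: u128,
    start_offset: u128,
}

impl CtrStream {
    fn next_block(&mut self) -> [u8; 16] {
        let block_index = self.counter.wrapping_add(self.start_offset);
        self.counter += 1;
        permute_block(block_index)
    }
}

/// Placeholder permutation standing in for AES (multiplication by an odd
/// constant is a bijection modulo 2^128, which is enough for the sketch).
fn permute_block(block_index: u128) -> [u8; 16] {
    block_index
        .wrapping_mul(0x9E37_79B9_7F4A_7C15_F39C_C060_5CED_C835)
        .to_le_bytes()
}

fn main() {
    let mut regular = CtrStream { counter: 0, start_offset: 0 };
    let mut offset = CtrStream { counter: 0, start_offset: 7 };
    // Same machinery, different starting point in the stream.
    assert_ne!(regular.next_block(), offset.next_block());
}
```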
tmontaigu
c0e89a53ef fix(csprng): fix an endianness issue for the counter
This commit fixes the endianness (little) used for the byte
representation of the counter in the AES-CTR mode.

This is so that the random bytes generated are the same no matter
the endianness of the system.

A test case with known answers is added, as well as a make command
to run the test in an emulated big-endian arch using the `cross`
utility.

This also includes a small refactor where the block cipher
no longer encrypts `AesIndex`. This is done as it makes more sense
(AES encrypts bytes, not numbers), and it allows moving and centralizing
the handling of endianness as well as where batches are created.
2025-06-04 09:57:18 +02:00
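A tiny sketch of the portability point above (my own example, assuming the counter is serialized explicitly): fixing the byte representation to little-endian makes the generated bytes identical on little- and big-endian hosts.

```rust
/// Serialize the AES-CTR counter with an explicit little-endian layout,
/// independent of the host endianness.
fn counter_to_block(counter: u128) -> [u8; 16] {
    counter.to_le_bytes()
}

fn main() {
    // A known-answer check like the one mentioned above pins down exact bytes,
    // so a big-endian run (e.g. emulated with `cross`) must match.
    let block = counter_to_block(0x0102);
    assert_eq!(block[0], 0x02);
    assert_eq!(block[1], 0x01);
    assert_eq!(block[15], 0x00);
}
```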
David Testé
312952007f chore(ci): lock zizmor version to avoid breaking ci pipelines
Newer versions of Zizmor can trigger errors due to new findings in workflows. To avoid breaking any ongoing pull request due to this unhandled update, the zizmor version is locked.
2025-06-03 12:29:36 +02:00
Enzo Di Maria
ff51ed3f34 refactor(gpu): moving trim_radix_blocks_lsb_async to backend 2025-06-03 11:42:18 +02:00
Agnes Leroy
9737bdcb98 fix(gpu): fix degrees after bitxor 2025-06-03 08:47:12 +02:00
tmontaigu
87a43a4900 chore(integer): add determinism check for sum 2025-06-02 17:37:21 +02:00
Agnes Leroy
345bdbf17f feat(gpu): add memory tracking function for cmux 2025-06-02 17:29:17 +02:00
Agnes Leroy
cc54ba2236 chore(gpu): fix overflow in div in long run tests 2025-06-02 17:05:09 +02:00
David Testé
11df6c69ee chore(ci): fix workflow security warnings
Since Zizmor v1.9.0, new pedantic warnings are detected especially
regarding template-injection patterns.
2025-06-02 14:46:14 +02:00
Guillermo Oyarzun
b76f4dbfe0 fix(gpu): fix hardcoded use of message modulus 2025-06-02 10:43:14 +02:00
Enzo Di Maria
be21c15c80 refactor(gpu): moving extend_radix_with_trivial_zero_blocks_msb to backend 2025-06-02 09:19:51 +02:00
tmontaigu
aa51b25313 chore(ci): fix test_user_docs run and add hpu
Due to the #[cfg] before the test_user_docs module, the module would
not actually be compiled (and thus not run the user doc tests) unless all
required features were activated when running.

So we remove these cfgs, as each hardware doc supports its own set of
features and it's better to have a test fail because a feature is
missing rather than silently run nothing.

Also, add commands and ci stuff to check the HPU docs
2025-05-30 16:36:56 +02:00
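A minimal sketch of the cfg pitfall being fixed here; the module and feature names are illustrative, not the actual test layout.

```rust
// Before: one cfg gating the whole module means the tests silently compile to
// nothing unless *every* listed feature is enabled at once.
#[cfg(all(feature = "gpu", feature = "hpu"))]
mod user_docs_all_features {
    // doc tests would live here, but vanish when any feature is missing
}

// After: each test carries only the feature it needs, so running with a single
// backend feature either runs its tests or fails loudly on a missing feature.
mod user_docs_per_feature {
    #[cfg(feature = "gpu")]
    #[test]
    fn gpu_doc_example() {}

    #[cfg(feature = "hpu")]
    #[test]
    fn hpu_doc_example() {}
}

fn main() {}
```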
tmontaigu
300c95fe3d fix(doc): finish HPU example fix 2025-05-30 16:36:56 +02:00
pgardratzama
524adda8f6 fix(doc): hpu example was not compiling 2025-05-30 16:36:56 +02:00
tmontaigu
dedcf205b4 feat(integer): improve default neg 2025-05-30 15:02:35 +02:00
tmontaigu
2c8d4c0fb0 feat(hlapi): add overflowing_neg 2025-05-30 15:02:35 +02:00
tmontaigu
3370fb5b7e feat(gpu): add overflowing_neg 2025-05-30 15:02:35 +02:00
tmontaigu
cd77eac42b feat(integer): add overflowing_neg 2025-05-30 15:02:35 +02:00
Baptiste Roux
40f20b4ecb fix(hpu): Rewrite hpu_bench iteration loop
The hpu_bench example was wrong for iter > 1 following clippy modifications.
NB: The vector is collected but intermediate values are explicitly dropped to enable long-running stress tests.
2025-05-28 14:45:45 +02:00
Agnes Leroy
59a78c76a9 fix(gpu): fix build after shift/rotate mem tracking merge 2025-05-28 12:08:09 +02:00
Pedro Alves
1025246b17 fix(gpu): fix a linking problem on Hopper GPUs 2025-05-28 09:27:33 +02:00
Agnes Leroy
338e9eaeef feat(gpu): add memory tracking functions for shift/rotate 2025-05-28 09:26:27 +02:00
David Testé
0bec4d2ba1 chore(ci): pin rust-toolchain action to v1 2025-05-27 17:31:33 +02:00
David Testé
c5fab98900 chore(ci): add token to do online workflow security checks 2025-05-27 17:31:33 +02:00
Nicolas Sarlin
14e1ee5bd3 fix(gpu): build with hpu and zk features 2025-05-27 16:10:38 +02:00
Pedro Alves
52bc778629 feat(gpu): completely remove the internal CUDA_STREAMS in the HL API
- From now on, the streams stored in the available cuda server key are the ones to be used
2025-05-27 10:29:34 -03:00
Pedro Alves
10405c9836 feat(gpu): improve test_specific_gpu_selection() so it always tests all possible GPU configurations 2025-05-27 10:29:34 -03:00
Pedro Alves
5eaf6cec55 feat(gpu): reintroduce the feature that allows a user to perform computation on multi-gpu using a custom selection of GPUs
This reverts commit a7d8d2b1d4.
2025-05-27 10:29:34 -03:00
Agnes Leroy
3bfacc1e9d chore(bench): add swap throughput benchmark 2025-05-27 12:08:31 +02:00
Agnes Leroy
a47a418d41 chore(gpu): rework dex bench to prepare throughput benchmark 2025-05-27 12:08:31 +02:00
David Testé
75b3141e19 chore(ci): fix command parsing for gpu benchmark common workflow
Quote escaping was flawed and would generate an array containing a single string instead of several strings separated by commas.
2025-05-27 10:14:06 +02:00
Agnes Leroy
d01328e0fe fix(gpu): fix overflow error in clear inputs remainder in long run tests 2025-05-26 22:51:18 +02:00
Agnes Leroy
6e102b5fa1 chore(gpu): fix oom error in ci 2025-05-26 22:50:55 +02:00
Pedro Alves
8aa6fa514e fix(gpu): add missing error checks after some kernels 2025-05-26 16:29:23 -03:00
Nicolas Sarlin
21a19cd3c5 chore(shortint): modswitch noise reduction key upgrade without clone 2025-05-26 16:53:35 +02:00
Nicolas Sarlin
f51c70d536 feat(shortint): adds generic client key for atomic pattern support 2025-05-26 16:53:35 +02:00
Agnes Leroy
66e3c02838 feat(gpu): add memory tracking functions for comparisons 2025-05-23 14:37:39 +02:00
Pedro Alves
408e81c45a feat(gpu): add support for GPU-accelerated expand on the HL Api
- includes documentation about GPU's accelerated expand on the HL API
- rework CudaKeySwitchingKey
- Cloning the key is no longer necessary on the HL API
2025-05-23 11:54:29 +02:00
dependabot[bot]
4152906c5d chore(deps): bump actions/upload-artifact from 4.6.0 to 4.6.2
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.0 to 4.6.2.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v4.6.0...ea165f8d65b6e75b540449e92b4886f43607fa02)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: 4.6.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-23 11:23:02 +02:00
dependabot[bot]
9fc8a0b5bc chore(deps): bump codecov/codecov-action from 5.4.2 to 5.4.3
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.2 to 5.4.3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](ad3126e916...18283e04ce)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.4.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-23 11:22:55 +02:00
dependabot[bot]
5dc3e59d13 chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.23 to 3.0.25.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](4830be28ce...fc87bb5b5a)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-version: 3.0.25
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-23 11:22:48 +02:00
Nicolas Sarlin
b40996a7e5 chore(shortint): prepare the v1.3 params folder 2025-05-23 10:57:56 +02:00
Pedro Alves
b066ef19fa fix(gpu): fix the internal benchmark 2025-05-23 10:32:24 +02:00
Nicolas Sarlin
25d008bae8 fix(bench): add missing internal keycache feature 2025-05-22 16:14:30 +02:00
David Testé
2749c1088c chore(ci): handle multi directories for parameters records 2025-05-22 15:03:02 +02:00
Guillermo Oyarzun
c19cd9f021 fix(gpu): add indexes to modulus switch noise reduction 2025-05-22 10:50:51 +02:00
Nicolas Sarlin
45fdba04b1 fix(gpu): allow to build with hpu feature enabled 2025-05-22 10:21:35 +02:00
youben11
69d46810b8 feat(core): chunked seeded_lwe_ksk generation 2025-05-21 18:06:58 +01:00
youben11
a16eeb983f feat(core): chunked lwe_ksk generation 2025-05-21 18:06:58 +01:00
Agnes Leroy
8278a9373c fix(gpu): fix degrees after abs 2025-05-21 15:46:18 +02:00
Arthur Meyre
e2a2768484 chore: fix typos
Co-authored-by: crStiv <cryptostiv7@gmail.com>
2025-05-21 13:06:42 +02:00
Arthur Meyre
57cfc38b66 chore: some more CODEOWNERS 2025-05-21 11:30:35 +02:00
Pedro Alves
259d125434 fix(gpu): fix pbs and ks benchmarks 2025-05-20 17:37:48 +02:00
Arthur Meyre
2571196b41 chore: fix ambiguous decrypt 2025-05-20 17:32:05 +02:00
Arthur Meyre
9f3dc6167d chore: remove raw decomposition
- this was left in by mistake
2025-05-20 17:32:05 +02:00
Agnes Leroy
59c17692a3 feat(gpu): add memory tracking functions for bitops 2025-05-20 16:16:22 +02:00
David Testé
e29d615b9d chore(bench): add suitable heuristic for zk throughput
The heuristic based on PBS count was flawed since a ZK verification operation will eat up to 32 threads on the machine. The previous heuristic could generate an input data vector way bigger than the total number of threads divided by 32. This in turn led to long benchmark execution times and generated bad results.
2025-05-20 15:02:59 +02:00
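A small sketch of the kind of sizing rule described above (the 32-thread figure is taken from the commit message; the function name is mine):

```rust
/// One ZK verification can saturate up to 32 threads, so cap the number of
/// elements in the throughput input vector at total_threads / 32 (at least 1).
fn zk_throughput_elements(total_threads: usize) -> usize {
    (total_threads / 32).max(1)
}

fn main() {
    assert_eq!(zk_throughput_elements(128), 4);
    assert_eq!(zk_throughput_elements(16), 1);
}
```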
tmontaigu
8caff604ed chore: use wrapping div in long_run 2025-05-20 14:36:22 +02:00
Agnes Leroy
16badf0c00 chore(gpu): add degree prints in long run tests in case of failure 2025-05-20 14:13:59 +02:00
Nicolas Sarlin
99a27c1cbe chore(hpu): fix Cargo.toml for release 2025-05-19 17:47:40 +02:00
Nicolas Sarlin
9131aaa383 fix(doc): uniformized readme file names 2025-05-19 15:22:34 +02:00
Nicolas Sarlin
a01949e630 fix(bench): compilation error without the internal-keycache feature 2025-05-19 09:50:29 +02:00
Arthur Meyre
30a58cdd1a chore: update version in docs to 1.2.0 2025-05-16 17:10:12 +02:00
Agnes Leroy
03325bf94e feat(gpu): add memory tracking functions for add/sub and scalar add/sub 2025-05-16 16:39:34 +02:00
Nicolas Sarlin
786fe66495 chore(zk): check that crs group element at index n is 0 2025-05-16 16:38:27 +02:00
Baptiste Roux
9ee8259002 feat(hpu): Add Hpu backend implementation
This backend abstracts communication with the Hpu Fpga hardware.
It defines its own entities to prevent circular dependencies with
tfhe-rs.
Object lifetime is handled through an Arc<Mutex<T>> wrapper, enforcing
that all objects currently alive in the Hpu Hw are also kept valid on the
host side.

It contains the second version of the HPU instruction set (HIS_V2.0):
* DOps have the following properties:
  + Templates as first-class citizens
  + Support for Immediate templates
  + Direct parser and conversion between Asm/Hex
  + Replace deku (and its associated endianness limitation) with
    bitfield_struct and manual parsing

* IOps have the following properties:
  + Support for a variable number of Destinations
  + Support for a variable number of Sources
  + Support for a variable number of Immediate values
  + Support for multiple bitwidths (not implemented yet in the Fpga
    firmware)

Details can be viewed in `backends/tfhe-hpu-backend/Readme.md`
2025-05-16 16:30:23 +02:00
Agnes Leroy
a7d8d2b1d4 feat(gpu): revert enables the user to perform computation on multi-gpu using a custom selection of GPUs
This reverts commit 0280dbeb41.
2025-05-15 18:01:17 +02:00
David Testé
8d1058364c chore(ci): fix env var usage in make recipe for gpu benchmarks 2025-05-15 11:15:45 +02:00
Pedro Alves
0280dbeb41 feat(gpu): enables the user to perform computation on multi-gpu using a custom selection of GPUs 2025-05-14 09:24:12 +02:00
David Testé
97b5973e4c chore(bench): store object measurements results in tfhe-benchmark 2025-05-13 16:05:16 +02:00
Agnes Leroy
406425dca4 chore(gpu): add hardware types for gpu bench 2025-05-13 11:51:24 +02:00
Agnes Leroy
fd79c4f972 chore(bench): parallelize transfer bench 2025-05-13 10:45:48 +02:00
David Testé
a96970e8c3 chore: update clap dependency version to 4.5.30 2025-05-13 10:35:51 +02:00
Agnes Leroy
67f11a44df chore(gpu): parallelize dex bench 2025-05-12 18:14:24 +02:00
David Testé
aa6dadfe69 chore(ci): ensure minimal permission for github default token
With the recent enforcement of least permissions for GITHUB_TOKEN, pull requests from external contributors would trigger a systematic error (i.e., on repository checkout) in the continuous integration pipeline.
Allowing contents:read fixes this behavior.
2025-05-12 18:07:02 +02:00
David Testé
ca1c5659a1 chore(ci): avoid double-quote on dry-run variable
If the DRY_RUN variable is empty and double-quoted to perform a safe expansion, then `cargo publish` treats the environment variable as `""` and thus fails on an unrecognized argument.
2025-05-12 15:25:17 +02:00
David Testé
031efaa39f chore(ci): remove misleading continue-on-error
These continue-on-error settings would lead to misleading reports in the Actions tab, since a successful workflow would be displayed on the global status page while a job may have failed inside.
2025-05-10 14:26:53 +02:00
Arthur Meyre
6cccaf3f66 chore: fix Makefile to specify toolchain for cargo xtask 2025-05-09 18:32:21 +02:00
Nicolas Sarlin
4e73b4c68c chore(gpu): bump cuda backend version to 0.10.0 2025-05-09 17:18:23 +02:00
Nicolas Sarlin
00b2c35f00 fix(shortint): store correct ap in ciphertext during encryption 2025-05-09 13:54:48 +02:00
David Testé
67ec4a28c1 chore(bench): move benchmarks to their own crate
This is done to speed up compilation by avoiding
recompiling tfhe each time a modification is made in a benchmark
file.
2025-05-09 13:46:27 +02:00
Arthur Meyre
d197a2aa73 chore: TFHE-rs 1.2.0
- update parameters deduped for classic and multi bit
2025-05-08 09:30:36 +02:00
Arthur Meyre
11703fe3c1 chore: update v1_1 parameters so that comments are doc comments
- this allows keeping relevant information with param_dedup; as param_dedup
uses syn, regular comments are lost since syn does not preserve comments in the AST
2025-05-08 09:30:36 +02:00
Arthur Meyre
d05ee42629 chore: add param_dedup to alias redundant parameter defs across versions 2025-05-08 09:30:36 +02:00
Agnes Leroy
014d18aae9 chore(bench): update pbs count parsing in dex benchmark 2025-05-07 16:44:31 +02:00
Nicolas Sarlin
5a62301968 refactor(zk): run pke_v2 verification inside dedicated thread pools
Reducing the number of available threads actually improves performance
2025-05-07 15:18:24 +02:00
Andrei Stoian
e7de363d0c feat(gpu): add poly product with circulant matrix 2025-05-07 10:10:45 +02:00
Arthur Meyre
9be9a5d2f4 feat(shortint): add CompressedAtomicPatternServerKey 2025-05-07 09:50:16 +02:00
Arthur Meyre
7724b7857f feat(shortint): allow the KS32 parameters to have non native KSK modulus 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
597c61bbdb chore(shortint): add tests for the KS32 AP 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
8a26df9177 chore(tests): add support for AP in tests and benches 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
c17a2527b7 feat(shortint): introduce the KS32 atomic pattern 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
0fd9537ae0 refactor(core): make ksk generation generic over the scalar type 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
3df5ea313a refactor(shortint): make modswitch compression generic over scalar 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
76f0b57f80 refactor(shortint): make oprf generic over the Scalar type 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
6cde78171f refactor(shortint): support any scalar in modswitch noise reduction 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
eb0087bd6a refactor(core): allow different input/output scalars in multibit br 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
8c5bf6b231 refactor(shortint): support any ciphertext modulus in the engine 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
ca31e5fbb5 feat(shortint): add the dynamic ap 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
19f0c649e6 refactor(shortint): engine can create any atomic pattern sk 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
c6a493954b feat(shortint): insert the AP inside the ServerKey 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
4df790550d feat(shortint): create atomic pattern trait and enum 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
056716fbb9 refactor(shortint): remove degree in generate_lookup_table_no_encode 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
0e70dd3641 refactor(shortint): use a dedicated type for lut size 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
604c3b0c75 refactor(shortint): function to directly set noise level to nominal
This allows calling it without having to define a max_noise_level
2025-05-06 14:48:07 +02:00
Nicolas Sarlin
0160289a14 refactor(shortint): use a single lwe buffer inside shortint engine
Since only one kind is used at a time, we don't need to allocate both
2025-05-06 14:48:07 +02:00
Nicolas Sarlin
8c3485e774 refactor(shortint): factorize generate_lookup_table 2025-05-06 14:48:07 +02:00
Nicolas Sarlin
d969fe94ab refactor(shortint): wrap PbsOrder into AtomicPattern in ciphertext 2025-05-06 14:48:07 +02:00
David Testé
ce6454cbb1 chore(ci): ignore a shellcheck rule in actionlint analysis 2025-05-06 14:06:17 +02:00
David Testé
664311228f chore(ci): pin dependencies that are directly downloaded 2025-05-06 14:06:17 +02:00
David Testé
1722d8e90e chore(ci): use slab script to send benchmark results to database 2025-05-06 14:06:17 +02:00
David Testé
b570bcd568 chore(ci): add checksum on cuda-keyring download 2025-05-06 14:06:17 +02:00
David Testé
5321f759d7 chore(ci): remove dependencies install on gpu h100 tests
This is redundant with the use of gpu_setup.yml action.
2025-05-06 14:06:17 +02:00
David Testé
6237d2d7c3 chore(ci): upgrade actionlint to v1.7.7
Usage of bash script to download and extract the final binary has
been dropped.
Instead, the tarball is directly fetched according to the
ACTIONLINT_VERSION value and the integrity of the tarball is
checked with a hardcoded SHA256 sum.
2025-05-06 14:06:17 +02:00
David Testé
1ca14e6db0 chore(ci): add workflow security checks with zizmor 2025-05-06 14:06:17 +02:00
David Testé
eea36b1b3d chore(ci): avoid sub workflow inheriting all available secrets 2025-05-06 14:06:17 +02:00
David Testé
76e76160ba chore(ci): add missing persist-credentials arg on checkout 2025-05-06 14:06:17 +02:00
David Testé
3f3b4aef41 chore(ci): fix template-injection and token permissions issues
This is part of a security issue remediation campaign after having
analyzed workflows using the zizmor cargo tool.
2025-05-06 14:06:17 +02:00
Agnes Leroy
97690ab3bd chore(gpu): write swap bench 2025-05-05 17:46:11 +02:00
Agnes Leroy
7e3a5fd55b feat(gpu): add necessary entry points for 128 bit compression 2025-05-05 17:10:10 +02:00
Agnes Leroy
d9a3bd438f docs(all): add hardware description in the summary bench page 2025-05-05 11:40:25 +02:00
Arthur Meyre
417bf2aac2 chore: fix clippy lint for aarch64 targets, use variables in format string 2025-04-30 14:34:18 +02:00
Nicolas Sarlin
0307a904ad fix(core): remove additional body coeff in multi bit ms compression 2025-04-30 11:43:40 +02:00
Agnes Leroy
9eaa77ddef feat(gpu): make all scratch functions return the amount of memory consumed for temporary buffers 2025-04-30 10:48:03 +02:00
Mayeul@Zama
eb31c3b8a1 chore(shortint): fix edge cases 2025-04-30 10:17:58 +02:00
David Testé
dc67ca721d chore: update toolchain to 2025-04-28 2025-04-29 17:36:08 +02:00
David Testé
f5a52128e2 chore(ci): log action to perform on approval for external pr
External contributors don't have access to secrets, so this workflow would fail when attempting to add/remove the 'approved' label on pull requests from forks.
This simple log message is here to remind maintainers to handle the 'approved' label manually to trigger the second CI pipeline.
2025-04-29 09:35:29 +02:00
Arthur Meyre
2cd16ac70a chore: update CODEOWNERS for more paths 2025-04-28 16:30:15 +02:00
dependabot[bot]
1196ea69c1 chore(deps): bump actions/download-artifact from 4.2.1 to 4.3.0
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.2.1 to 4.3.0.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](95815c38cf...d3f86a106a)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-28 16:14:39 +02:00
Agnes Leroy
5996686d1f fix(gpu): fix multi device execution with drift 2025-04-28 14:37:26 +02:00
Guspan Tanadi
80bfb4fecc docs: heading hint note 2025-04-25 16:08:23 +02:00
Nicolas Sarlin
780ec9c3ca chore(core): remove some pub(crate) in structs 2025-04-24 14:33:10 +02:00
Guillermo Oyarzun
25d1a4e4dd chore(gpu): add nvtx tool for profiling 2025-04-24 13:57:16 +02:00
Pedro Alves
ffdaf6ad13 chore(gpu): removes the alias synchronize_threads_in_block() 2025-04-23 15:21:17 -03:00
David Testé
4352e9adb7 docs: fix typo
This error isn't caught by the typos-cli tool.
2025-04-23 15:17:40 +02:00
David Testé
319504137a chore(ci): factorize usage of slack env variables 2025-04-23 15:13:37 +02:00
David Testé
5b57470652 chore(ci): fix slack notify in case of cancelled step
If a step is cancelled, it is not considered a failure by GitHub. So if a user cancelled a task or a job timed out, no Slack notification was sent and devs weren't able to track down these events.
2025-04-23 15:13:37 +02:00
Arthur Meyre
e4ec27f30e chore: add CODEOWNERS file to avoid unwanted changes to core_crypto source 2025-04-23 11:57:37 +02:00
Nicolas Sarlin
5179dce0a4 doc(core): fix badly closed tags in lwe_wopbs doc 2025-04-22 17:36:52 +02:00
dependabot[bot]
8a8fe6505b chore(deps): bump codecov/codecov-action from 5.4.0 to 5.4.2
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.4.0 to 5.4.2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](0565863a31...ad3126e916)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-version: 5.4.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-22 16:51:48 +02:00
Arthur Meyre
8bb1735caf chore(core): fix a wrong comment on gaussian generation 2025-04-22 13:29:47 +02:00
Arthur Meyre
df38290c37 feat(core): enable custom modulus generation for TUniform 2025-04-18 18:03:02 +02:00
Mayeul@Zama
2cbde1a56b chore(all): make clippy_rustdoc output less noisy 2025-04-18 16:03:00 +02:00
Nicolas Sarlin
db10a5fcc2 chore(tfhe): allow clippy::struct_field_names at the crate level 2025-04-18 09:20:47 +02:00
Agnes Leroy
4d108f4a88 chore(gpu): remove unused events for carry prop 2025-04-17 18:24:07 +02:00
Agnes Leroy
cd0e077f34 chore(gpu): reduce test threads for small instances case 2025-04-17 18:23:56 +02:00
Nicolas Sarlin
0a279711d8 chore: update toolchain to 2025-04-16 2025-04-16 14:08:48 +02:00
Arthur Meyre
b7dcbaafe7 test(core): use an exhaustive test for balanced decomposition
- we do an exhaustive sweep of all relevant values for a given decomposer
and verify the average is 0
2025-04-16 10:59:18 +02:00
Agnes Leroy
fc0ec1880d docs(gpu): small update in the integer benchmark page 2025-04-16 10:40:23 +02:00
Arthur Meyre
8433648538 feat(hl): add accessor on ServerKey to access NoiseSquashingKey 2025-04-16 09:43:40 +02:00
Arthur Meyre
330a2ddc1f fix(core): fix success probability for Ternary Uniform generation
- corrected from 1.0 to 0.75 given 3/4 of tries will yield a ternary value
- this would have caused a forked generator to fail almost certainly
2025-04-16 09:41:52 +02:00
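A clear-value sketch of the 3/4 figure (an illustration of the counting argument, not the library's sampler): a ternary value is drawn from two random bits and one of the four outcomes is rejected.

```rust
/// Map two random bits to a ternary value, rejecting one outcome out of four,
/// hence a per-try success probability of 0.75.
fn ternary_from_two_bits(bits: u8) -> Option<i8> {
    match bits & 0b11 {
        0b00 => Some(0),
        0b01 => Some(1),
        0b10 => Some(-1),
        _ => None, // rejected, retry with fresh bits
    }
}

fn main() {
    let successes = (0u8..4).filter(|b| ternary_from_two_bits(*b).is_some()).count();
    assert_eq!(successes, 3); // 3 out of 4 tries yield a ternary value
}
```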
Pedro Alves
a9b76a2d25 chore(gpu): add multi-bit parameter sets to be used with ZK's expand and rework expand throughput benchmark 2025-04-15 13:54:02 -03:00
dependabot[bot]
7410274126 chore(deps): bump rtCamp/action-slack-notify from 2.3.2 to 2.3.3
Bumps [rtCamp/action-slack-notify](https://github.com/rtcamp/action-slack-notify) from 2.3.2 to 2.3.3.
- [Release notes](https://github.com/rtcamp/action-slack-notify/releases)
- [Commits](c33737706d...e31e87e03d)

---
updated-dependencies:
- dependency-name: rtCamp/action-slack-notify
  dependency-version: 2.3.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-15 13:33:54 +02:00
dependabot[bot]
d93238812b chore(deps): bump tj-actions/changed-files from 46.0.3 to 46.0.5
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 46.0.3 to 46.0.5.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](823fcebdb3...ed68ef82c0)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-version: 46.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-04-15 13:33:46 +02:00
Arthur Meyre
dde4e5dd1b docs: add handbook reference 2025-04-14 09:24:34 +02:00
Pedro Alves
1b5f52c869 fix(gpu): fix a memory leak in ZK's expand 2025-04-11 17:01:13 +02:00
Kelong Cong
6777ffe7f9 chore(shortint): sanity check in NoiseSquashingPrivateKey::from_raw_parts 2025-04-11 15:22:13 +02:00
Kelong Cong
26b88fb413 chore(shortint): to/from_raw_parts for NoiseSquashingPrivateKey 2025-04-11 15:22:13 +02:00
Mayeul@Zama
d8d82c410e chore(core): remove useless export from test module 2025-04-11 14:54:02 +02:00
Nicolas Sarlin
0996604574 fix(integer): handles compression of empty ct in empty list 2025-04-11 11:51:27 +02:00
Agnes Leroy
21efad5fae chore(gpu): add bench command for zk-pok in workflow 2025-04-11 09:16:42 +02:00
yuxizama
851dd7c171 docs(gpu): regroup gpu docs
docs(gpu): a small fix
2025-04-10 15:48:14 +02:00
Pedro Alves
66ac6f8762 chore(gpu): update parameter sets in the C++ test/benchmark tools 2025-04-09 17:32:29 -03:00
tmontaigu
e567b7afd9 docs: add dot prod & scalar_select 2025-04-09 17:08:57 +02:00
Agnes Leroy
11a291906e doc(cpu): precise hpc7a.96xlarge description 2025-04-09 15:45:04 +02:00
Agnes Leroy
b24393f346 doc(gpu): add an example about device selection, update bench svg 2025-04-09 15:45:04 +02:00
Nicolas Sarlin
06f0b8e23e feat(shortint): add from_raw_parts for NoiseSquashingKey 2025-04-08 09:48:58 +02:00
Agnes Leroy
ab681bc17b chore(gpu): add leading zeros to dedup ops for bench 2025-04-07 17:38:02 +02:00
Beka Barbakadze
eeaffab7de feat(gpu): Implement 128 bit classic CG PBS 2025-04-07 17:19:18 +02:00
Agnes Leroy
75061e0914 chore(gpu): add a feature to build for multiple architectures 2025-04-07 13:58:53 +02:00
Nicolas Sarlin
a47ebe93aa chore(versionable): bump version to 0.6.0 2025-04-07 09:48:38 +02:00
Nicolas Sarlin
5f9ac48dbe feat(versionable): add skip attribute to skip field versioning 2025-04-07 09:48:38 +02:00
Baptiste Roux
e57b91eccd feat(ntt-bnf): Add back&Forth ntt implementation
This implementation works on a 2**k modulus and uses a modswitch before and
after every cmux. It mimics the HW implementation.

Also modified the bootstrapping key conversion to work correctly with a
ciphertext_modulus that is a power of two and with a width different from the
native one.

NB: After decomposition, a simple re-encoding of negative values with respect
    to the prime used is done instead of a full modswitch.
2025-04-07 09:07:47 +02:00
Pedro Alves
618e4b36a7 feat(gpu): implement ZK's expand 2025-04-05 19:57:39 -03:00
Guillermo Oyarzun
08681fb81f feat(gpu): add drift to 128-bit pbs tests 2025-04-03 16:33:18 +02:00
tmontaigu
1e8a12b1e9 feat(hlapi): bind some missing ops 2025-04-03 16:26:10 +02:00
tmontaigu
cecf1f24f3 refactor(hlapi): create 'meta' macro for scalar ops 2025-04-03 16:26:10 +02:00
Agnes Leroy
da3b1cdbb0 chore(gpu): fix bench workflow 2025-04-03 11:54:15 +02:00
Agnes Leroy
2921084ef9 chore(gpu): fix ks_pbs bench on gpu 2025-04-03 11:54:15 +02:00
Agnes Leroy
6b35616515 chore(gpu): add scalar ops to dedup bench ops 2025-04-03 11:54:15 +02:00
David Testé
f9202d524e chore(ci): fix handling of instance setup failure
If an instance that is not a single-h100 fails to start, the whole setup-instance job has to fail.
Only the single-h100 profile can use a permanent remote instance.
2025-04-03 09:33:14 +02:00
Agnes Leroy
eee819cd91 chore(gpu): decrease test threads for small instances 2025-04-02 18:00:50 +02:00
Guillermo Oyarzun
23cbafaa57 fix(gpu): update panic condition to apply only to tbc 2025-04-01 08:35:35 -03:00
Arthur Meyre
69438d40a8 chore(ci): fix data PR close workflow 2025-04-01 11:31:07 +02:00
Arthur Meyre
7354265c52 fix(ci): get head_ref to get a name for the backward compat branch 2025-04-01 11:03:27 +02:00
Arthur Meyre
e8576ca2e1 chore: bump version for release, remove alpha 2025-04-01 11:03:27 +02:00
Arthur Meyre
413559e536 test: update backward compat tests to test the new ciphertext types 2025-04-01 11:03:27 +02:00
Arthur Meyre
87e1b57803 feat(hl): add noise squashing primitives to the HL API 2025-04-01 11:03:27 +02:00
Arthur Meyre
7ae4f00805 refactor(integer): more stringent coherence tests for parameters/keys 2025-04-01 11:03:27 +02:00
Nicolas Sarlin
ce56ea2078 feat(hl): create FheTypes from i32 2025-04-01 10:00:38 +02:00
tmontaigu
f9795a6199 feat(hlapi): Add boolean-dot-prod 2025-03-31 23:03:57 +02:00
tmontaigu
c768af8093 feat: add OverflowingAdd trait
Add an OverflowingAdd trait and make UnsignedInteger
and SignedInteger depend on it.

This is so that other parts of the code can
use the OverflowingAdd trait without requiring
the big Unsigned/Signed bound
2025-03-31 23:03:57 +02:00
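A minimal sketch of what such a trait can look like, mirroring the standard library's overflowing_add; the actual trait bounds and method set in tfhe-rs may differ.

```rust
/// Illustrative trait: expose only the overflowing addition, so generic code
/// does not need the full UnsignedInteger/SignedInteger bound.
trait OverflowingAdd: Sized {
    /// Returns the wrapped sum and whether an overflow occurred.
    fn overflowing_add(self, rhs: Self) -> (Self, bool);
}

impl OverflowingAdd for u64 {
    fn overflowing_add(self, rhs: Self) -> (Self, bool) {
        let wrapped = self.wrapping_add(rhs);
        // For addition, the wrapped result is smaller than an operand iff it overflowed.
        (wrapped, wrapped < self)
    }
}

/// Generic user of the trait: sum values, bailing out on overflow.
fn checked_sum<T: OverflowingAdd + Copy>(zero: T, values: &[T]) -> Option<T> {
    let mut acc = zero;
    for &v in values {
        let (next, overflowed) = acc.overflowing_add(v);
        if overflowed {
            return None;
        }
        acc = next;
    }
    Some(acc)
}

fn main() {
    assert_eq!(checked_sum(0u64, &[1, 2, 3]), Some(6));
    assert_eq!(checked_sum(0u64, &[u64::MAX, 1]), None);
}
```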
tmontaigu
d0ce05027c feat(integer): boolean_scalar_dot_prod
Add a function to compute a dot product between a vector of encrypted
boolean values and a vector of clear values
2025-03-31 23:03:57 +02:00
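A clear-value sketch of the dot product being added (the encrypted-boolean version lives in the integer layer; this only shows the arithmetic being computed):

```rust
/// Sum of the clear values selected by the boolean vector.
fn boolean_scalar_dot_prod(selectors: &[bool], clears: &[u64]) -> u64 {
    selectors
        .iter()
        .zip(clears)
        .map(|(&keep, &value)| if keep { value } else { 0 })
        .sum()
}

fn main() {
    assert_eq!(boolean_scalar_dot_prod(&[true, false, true], &[10, 20, 30]), 40);
}
```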
tmontaigu
1771a400bc feat: add clear - ciphertext
Add integer and hlapi functions to perform
`clear - ciphertext`

As subtraction is not commutative, having a specialized version
is better.

As can be seen from the code, the real benefit is for the default
version, where the cost of `clear - ciphertext` is the same as
`clear + ciphertext`, which is better than transforming the clear into
a trivial ciphertext to perform the subtraction algorithm
2025-03-31 23:03:36 +02:00
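A usage sketch in the high-level API, under the assumption that the new `clear - ciphertext` path is exposed as a std `Sub` with the clear value on the left (the exact operator impls may differ):

```rust
use tfhe::prelude::*;
use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint32};

fn main() {
    let config = ConfigBuilder::default().build();
    let (client_key, server_key) = generate_keys(config);
    set_server_key(server_key);

    let ct = FheUint32::encrypt(3u32, &client_key);

    // Specialized path: costs the same as `clear + ciphertext`, instead of
    // trivially encrypting the clear and running the generic subtraction.
    let result = 10u32 - &ct;

    let clear: u32 = result.decrypt(&client_key);
    assert_eq!(clear, 7);
}
```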
David Testé
bed95d26f6 chore(bench): implement throughput benchmarks on core_crypto layer 2025-03-31 16:05:41 +02:00
dependabot[bot]
9f2e8128e6 chore(deps): bump tj-actions/changed-files from 46.0.2 to 46.0.3
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 46.0.2 to 46.0.3.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](26a38635fc...823fcebdb3)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-31 14:47:26 +02:00
Beka Barbakadze
8a0bf69f11 fix(gpu): fix max shared memory bug for CG PBS 2025-03-31 14:10:07 +02:00
dependabot[bot]
20602453ce chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.22 to 3.0.23.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](25ed13d062...4830be28ce)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-31 13:28:04 +02:00
tmontaigu
1e58803432 feat(hlapi): add scalar cmux
The current IfThenElse trait was too restrictive
to be able to use it to add these methods, so a new trait
was added.
This trait uses method names with the scalar prefix to avoid
conflicts and avoid introducing breaking changes
2025-03-31 10:02:05 +02:00
tmontaigu
1e5fb00715 fix(integer): scalar cmux
Default shortint ops were used, leading
to carries being cleaned internally which
we did not want
2025-03-31 10:02:05 +02:00
tmontaigu
78fc99aa79 feat(tfhe-ntt): Add custom root-of-unity for Solinas Prime
Those roots of unity enable friendly twiddle generation with low hamming weight,
and thus allow replacing some multiplications with simple shifts.

Co-authored-by: Baptiste Roux <baptiste.roux@zama.ai>
2025-03-28 17:23:29 +01:00
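A small sketch of the shift-instead-of-multiply idea, using the Goldilocks-style Solinas prime p = 2^64 - 2^32 + 1 as an assumed example (the commit does not name the prime): when a twiddle is a power of two, the modular multiplication reduces to a shift followed by a single reduction.

```
// Assumed example prime of Solinas form: p = 2^64 - 2^32 + 1.
const P: u128 = (1u128 << 64) - (1u128 << 32) + 1;

// Generic modular multiplication: a full 64x64-bit product plus reduction.
fn mul_mod(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P) as u64
}

// When the twiddle is 2^k, the product becomes a shift plus one reduction.
fn mul_by_pow2(a: u64, k: u32) -> u64 {
    (((a as u128) << k) % P) as u64
}

fn main() {
    let a = 0x1234_5678_9abc_def0u64 % (P as u64);
    let k = 17u32;
    let twiddle = (1u128 << k) as u64; // low-Hamming-weight twiddle: a power of two
    assert_eq!(mul_mod(a, twiddle), mul_by_pow2(a, k));
}
```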
youben11
2c3cf3bfd3 feat(core): chunked seeded lwe_bsk generation 2025-03-28 17:23:09 +01:00
youben11
6a252fc08b docs(core): fix lwe_bsk_chunk docs 2025-03-28 17:23:09 +01:00
youben11
a037ca5618 feat(core): make lwe_bsk_chunk safe_serializable (impl Named trait) 2025-03-28 17:23:09 +01:00
Nicolas Sarlin
1f381cf9da chore(tests): run clippy on workspace tests 2025-03-28 15:34:14 +01:00
Nicolas Sarlin
41a09317e7 fix(tests): re-enable backward tests that were skipped 2025-03-28 12:58:15 +01:00
Nicolas Sarlin
6ad29e4540 chore(shortint): fix typo in comment 2025-03-28 11:02:32 +01:00
Nicolas Sarlin
009257b63e chore(hl): remove unwrap in conformance checks 2025-03-28 11:02:32 +01:00
Agnes Leroy
3d1c25888c chore(gpu): reduce test threads for multi-gpu tests 2025-03-28 10:43:35 +01:00
Agnes Leroy
e0d442922e chore(gpu): fix integer throughput bench
- fix num sms for pcie H100 VM
- reduce minimum loading to avoid oom error on mul 256-bit
2025-03-28 10:43:30 +01:00
youben11
8e6663e9fb feat(core): chunked lwe_bsk generation 2025-03-27 18:17:18 +01:00
youben11
0d722d167e chore: ignore pycache folders
some make targets generate them
2025-03-27 18:17:18 +01:00
Arthur Meyre
21abcbdf4c feat(integer): add noise squashing to integer 2025-03-27 18:16:06 +01:00
Arthur Meyre
8e30d5e538 refactor(shortint): make NoiseSquashedCiphertext more future proof
- keep message/carry just in case... this also allows performing more runtime
checks
2025-03-27 18:16:06 +01:00
Arthur Meyre
d82a0a4b75 test(integer): add a corner case for signed radix decryption 2025-03-27 18:16:06 +01:00
David Testé
91dc4f44da chore: update tfhe-fft and tfhe-ntt minor version
This is done to get the current version of dependencies defined in workspace, especially pulp.
2025-03-27 15:57:00 +01:00
Guillermo Oyarzun
9eb6d5afd1 feat(gpu): add modulus switch noise reduction gpu 2025-03-27 10:55:51 +01:00
Agnes Leroy
ac4d36d6f6 chore(gpu): modify erc20 throughput bench for better multi-gpu performance 2025-03-26 15:40:04 +01:00
Agnes Leroy
6e158cd109 chore(gpu): use template for first/last iter in split classical PBS 2025-03-26 10:01:39 +01:00
Agnes Leroy
cdcf00af45 chore(gpu): detect if we are in first or last iter with template argument for split kernel multi-bit PBS 2025-03-25 12:16:51 +01:00
Agnes Leroy
78638a24d2 chore(gpu): reduce test threads for 4090 tests to avoid out of mem error 2025-03-25 09:42:58 +01:00
dependabot[bot]
84c12cca56 chore(deps): bump actions/download-artifact from 4.1.9 to 4.2.1
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.9 to 4.2.1.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](cc20338598...95815c38cf)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 17:52:27 +01:00
dependabot[bot]
7d05a427a5 chore(deps): bump actions/upload-artifact from 4.6.1 to 4.6.2
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.1 to 4.6.2.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](4cec3d8aa0...ea165f8d65)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 17:52:17 +01:00
dependabot[bot]
c7bc981f7f chore(deps): bump tj-actions/changed-files from 46.0.1 to 46.0.2
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 46.0.1 to 46.0.2.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](2f7c5bfce2...26a38635fc)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 17:52:04 +01:00
dependabot[bot]
f7210c80a9 chore(deps): bump actions/cache from 4.2.2 to 4.2.3
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.2 to 4.2.3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](d4323d4df1...5a3ec84eff)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-24 17:50:01 +01:00
Arthur Meyre
a11584f905 refactor(integer): provide recompose_unsigned, recompose_signed functions
- to decrypt values and make the recomposition flexible with respect to the
input type, e.g. using u128 for noise-squashed primitives
2025-03-24 17:12:18 +01:00
Arthur Meyre
f326aaf2f2 feat(shortint): add conformance for noise squashing keys 2025-03-24 17:12:18 +01:00
Arthur Meyre
3a63b96b77 refactor: make bootstrapping key conformance param generic over Scalar 2025-03-24 17:12:18 +01:00
Arthur Meyre
bbf30e5227 feat(shortint): add CompressedNoiseSquashingKey
- add the decompressed key in noise squashing and check its output as well;
the number of loops is halved as a result
2025-03-24 17:12:18 +01:00
Arthur Meyre
9f5266a56d fix: fix Named implementation for compression keys in integer
- for now keep the alias for backward compatibility
2025-03-24 17:12:18 +01:00
Arthur Meyre
065f863ba2 refactor(hl): rename some structs related to inner FheInt representations 2025-03-24 17:12:18 +01:00
Arthur Meyre
93dd4cf61a refactor(shortint): more sensible API for noise squashing private keygen 2025-03-24 17:12:18 +01:00
Arthur Meyre
3331ce2548 chore: expose noise squashing parameters and use in shortint test 2025-03-24 17:12:18 +01:00
Arthur Meyre
7e30816fe8 feat(integer): add raw parts APIs for compressed compression keys 2025-03-24 17:12:18 +01:00
Agnes Leroy
765d2b6dbe chore(gpu): relax too strict condition in copy slice 2025-03-24 14:58:45 +01:00
Beka Barbakadze
5207e55684 refactor(gpu): remove lwe_input_indexes, lwe_output_indexes and lut_vector_indexes for pbs128 2025-03-24 14:32:41 +01:00
David Testé
a4bd78912b chore: bump tfhe and tfhe-cuda-backend version to alpha.0 2025-03-24 13:18:46 +01:00
Arthur Meyre
c59fa4c479 chore(ci): make version formatting more resilient 2025-03-24 13:18:46 +01:00
tmontaigu
bbcad438fc feat: add trivial enc/dec for strings 2025-03-24 10:16:15 +01:00
tmontaigu
89016d7a07 feat(integer): add scalar cmux
Add variants of CMUX where one or two of the possible
output values are clear
2025-03-24 10:15:52 +01:00
tmontaigu
d454e67b89 fix(integer): block_rotate
The encrypted block_rotate/shift family of functions had a few bugs:

* It disallowed the use of 1_1 parameters even though it could support them
(given that another slight fix, explained below, was done)
* The offset at which shift bits were extracted was hard-coded for 2_2
* Directions were inverted, i.e., block_rotate_left would rotate right
2025-03-24 10:15:24 +01:00
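A clear-data sketch of the block-rotate semantics the fix above is about: the radix value is split into fixed-width blocks, the blocks are rotated as whole units, and left/right are distinct directions. This is illustrative only, not the encrypted implementation.

```
// A value is decomposed into `num_blocks` blocks of `bits_per_block` bits,
// blocks are rotated toward the MSB, then recomposed.
fn block_rotate_left(value: u64, bits_per_block: u32, num_blocks: u32, rot: u32) -> u64 {
    let mask = (1u64 << bits_per_block) - 1;
    let blocks: Vec<u64> = (0..num_blocks)
        .map(|i| (value >> (i * bits_per_block)) & mask)
        .collect();

    // Block i moves to position (i + rot) % num_blocks.
    let mut out = 0u64;
    for (i, &b) in blocks.iter().enumerate() {
        let dst = (i as u32 + rot) % num_blocks;
        out |= b << (dst * bits_per_block);
    }
    out
}

fn main() {
    // 4 blocks of 2 bits (2_2-style layout): 0b11_10_01_00
    let v = 0b1110_0100u64;
    // Rotating left by one block gives 0b10_01_00_11.
    assert_eq!(block_rotate_left(v, 2, 4, 1), 0b1001_0011u64);
}
```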
tmontaigu
f1cf021d18 refactor: move bit shift/rotate tests 2025-03-24 10:15:24 +01:00
Agnes Leroy
b1008824e2 chore(gpu): suppress warnings in pcc_gpu 2025-03-21 18:02:07 +01:00
Agnes Leroy
4928b1354e chore(gpu): add an alias for GPU compression parameters 2025-03-21 17:17:51 +01:00
Agnes Leroy
7d2a296d4d chore(gpu): reduce testing time after parameter update 2025-03-21 15:43:37 +01:00
Arthur Meyre
11cbffb3f2 chore(ci): fix size benchmark
- don't expand; we have tests for that, and the server key would be required
with the "new" parameters we are interested in
2025-03-21 14:37:34 +01:00
Agnes Leroy
a7111014e8 fix(gpu): fix corner case in sum ctxt 2025-03-21 10:14:38 +01:00
Arthur Meyre
7d3cdbf466 chore: bump tfhe-cuda-backend to version 0.9.0 2025-03-20 17:47:18 +01:00
Arthur Meyre
dc9afe1146 chore: bump to 1.1 and add V1_1 parameters
- add aliases for tests to avoid having to upgrade too many locations
2025-03-20 17:47:18 +01:00
David Testé
8287f59ebd chore(ci): update aws ami for cpu
This is done to update python modules: pip, wheel and setuptools.
2025-03-20 15:47:17 +01:00
David Testé
9282dc49bf chore(ci): cache backward compatibility data
Git LFS transfers use a lot of bandwidth. Since data used to test
backward compatibility won't change every day, we can leverage
GitHub cache action.
2025-03-20 15:47:17 +01:00
Agnes Leroy
71d8bcff89 fix(gpu): fix signed scalar comparison tests 2025-03-19 18:12:26 +01:00
Agnes Leroy
30319452a4 chore(gpu): remove last memcpy_to_cpu 2025-03-19 18:12:26 +01:00
Agnes Leroy
fb0e9a3b35 chore(gpu): pass scalars on cpu to c++ to avoid calling copy_to_cpu 2025-03-19 18:12:26 +01:00
Agnes Leroy
dbcbea78a5 chore(gpu): pass host scalar to scalar add to avoid overhead due to copy_to_cpu 2025-03-19 18:12:26 +01:00
Agnes Leroy
c9b9fc52d8 chore(gpu): store h_lut_indexes in buffer to avoid regression in perf 2025-03-19 18:12:26 +01:00
Agnes Leroy
f404e8f10d chore(gpu): avoid syncing too much in release 2025-03-19 18:12:26 +01:00
Agnes Leroy
31de8d52d8 fix(gpu): fix bug introduced when reworking host_compare_with_zero
Bug introduced in this commit: 5258acc08f
2025-03-19 17:17:55 +01:00
Arthur Meyre
fd866d18fe chore(ci): pin changed files action to a sha1 corresponding to a tag 2025-03-19 09:25:20 +01:00
Arthur Meyre
56572d0223 fix: a shortint docstring had values not matching the explanation 2025-03-17 17:55:23 +01:00
dependabot[bot]
f3e14dc311 chore(deps): bump dtolnay/rust-toolchain
Bumps [dtolnay/rust-toolchain](https://github.com/dtolnay/rust-toolchain) from a54c7afa936fefeb4456b2dd8068152669aa8203 to 888c2e1ea69ab0d4330cbf0af1ecc7b68f368cc1.
- [Release notes](https://github.com/dtolnay/rust-toolchain/releases)
- [Commits](a54c7afa93...888c2e1ea6)

---
updated-dependencies:
- dependency-name: dtolnay/rust-toolchain
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-17 17:55:09 +01:00
Mayeul@Zama
1600f8c995 chore: remove trivium from main workspace 2025-03-17 14:22:17 +01:00
Agnes Leroy
5258acc08f fix(gpu): fix the logic of host_compare_with_zero_equality in Cuda to match the CPU 2025-03-17 10:26:38 +01:00
David Testé
912af0e87e chore(ci): install dependencies as standalone job
Installing dependencies several times, due to the matrix strategy, led to job failures.
Now, if the workflow uses the remote instance, the dependencies will be installed only once.
2025-03-14 17:51:47 +01:00
tmontaigu
0fdda14495 fix: upcasting of signed integer when block decomposing
Some parts of the code did not use the correct way to
decompose a clear integer into blocks which could be encrypted
or used in scalar ops.

The sign extension was not always properly done, leading for example
to incorrect results when encrypting a negative integer stored in an i8
into a SignedRadixCiphertext with a num_blocks greater than what an i8 needs:

```
let ct = cks.encrypt_signed(-1i8, 16); // 2_2 parameters
let d: i32 = cks.decrypt_signed(&ct);
assert_eq!(d, i32::from(-1i8)); // Fails
```

To fix this, a BlockDecomposer::with_block_count function is added and used.
This function properly performs the sign extension when needed
2025-03-14 17:40:13 +01:00
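A clear-data sketch of the sign-extending decomposition the fix introduces: the signed value is first sign-extended to the target width, then split into blocks, so -1i8 over 16 two-bit blocks yields all-ones blocks. This is illustrative only; the actual BlockDecomposer::with_block_count implementation is not shown here.

```
// Sign-extend, then split into `num_blocks` blocks of `bits_per_block` bits.
fn decompose_signed(value: i8, bits_per_block: u32, num_blocks: u32) -> Vec<u64> {
    // Sign-extend to 64 bits, then reinterpret as unsigned for block extraction.
    let extended = value as i64 as u64;
    let mask = (1u64 << bits_per_block) - 1;
    (0..num_blocks)
        .map(|i| (extended >> (i * bits_per_block)) & mask)
        .collect()
}

fn main() {
    // -1i8 over 16 blocks of 2 bits: every block must be 0b11, so that a
    // wider decryption (e.g. into an i32) still recomposes to -1.
    let blocks = decompose_signed(-1i8, 2, 16);
    assert!(blocks.iter().all(|&b| b == 0b11));
}
```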
Carl-Zama
9886256242 feat(core): add glwe keyswitch 2025-03-13 14:45:31 +01:00
Carl-Zama
4b89766011 feat(core): add SliceSignedDecompositionIter 2025-03-13 14:45:31 +01:00
Agnes Leroy
5d3b4438d5 chore(gpu): fix cuda ks_pbs bench and rename workflow files 2025-03-13 14:11:51 +01:00
Agnes Leroy
e62710de12 chore(gpu): add benchmark for gpu pbs128 2025-03-13 14:11:51 +01:00
Agnes Leroy
a2f1825691 fix(gpu): fix bug in mul introduced during noise/degree refactor 2025-03-13 13:26:07 +01:00
Agnes Leroy
dcec10b0cf fix(gpu): fix scalar eq bug 2025-03-13 13:24:33 +01:00
Nicolas Sarlin
573ce0c803 chore(bench): add pbs-stats required feature 2025-03-13 09:34:00 +01:00
Beka Barbakadze
459969e9d2 feat(gpu): Implement 128 bit classic pbs 2025-03-12 22:13:22 +04:00
David Testé
8dadb626f2 chore(ci): add pull-request url to slack notification message
This adds context for Zama developers on Slack, so they can quickly go to the pull-request if the run was emitted from one.
2025-03-12 17:00:30 +01:00
Agnes Leroy
ba1235059a chore(gpu): update error messages about device index in integer/gpu/mod.rs 2025-03-12 12:16:00 +01:00
Agnes Leroy
d53db210de chore(gpu): fix multi-gpu integer throughput bench 2025-03-12 12:16:00 +01:00
Arthur Meyre
2258aa0cbe feat(shortint): add noise squashing capabilities
- noise squashing consists in running a PBS over a large modulus such as
2^128 with parameters which ensure a big gap (50+ bits) between the plaintext
and the noise; this can allow running a noise flooding
step in MPC to protect against certain key recovery attacks
2025-03-12 10:24:20 +01:00
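A back-of-the-envelope illustration of the "big gap" idea; all bit counts below are assumptions chosen only to make the arithmetic concrete, not the actual parameter values.

```
fn main() {
    let modulus_bits = 128u32;  // ciphertext modulus ~ 2^128
    let plaintext_bits = 4u32;  // message + carry carried in the top bits (assumed)
    let noise_bits = 70u32;     // assumed post-PBS noise magnitude

    // Room left between the plaintext and the noise; a 50+ bit gap is what
    // lets a noise-flooding step be applied safely afterwards.
    let gap = modulus_bits - plaintext_bits - noise_bits;
    println!("plaintext/noise gap: {gap} bits");
    assert!(gap >= 50);
}
```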
Arthur Meyre
54a7d4b57c feat(shortint): make encoding generic over Scalar to use it for u128 2025-03-12 10:24:20 +01:00
Arthur Meyre
18db93f8fa feat(core)!: support mixed scalar bootstrapping key generation
- make Numeric CastFrom<Self>; this is not breaking as it's equivalent to
From<Self> in Rust, which is blanket implemented
- mark CastFrom<Self> inline(always) for the implementations I could find
- update APIs for bootstrapping key generation to support having mixed
integer types for the two secret keys, i.e. having a u64 input key and
a u128 output key

BREAKING: this change is technically breaking for core
2025-03-12 10:24:20 +01:00
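To illustrate why requiring CastFrom<Self> is non-breaking, a sketch of a CastFrom trait with a blanket identity impl mirroring std's `impl<T> From<T> for T`; the trait shape is an assumption, not the actual tfhe-rs definition.

```
// Illustrative CastFrom trait (the actual tfhe-rs definition may differ).
pub trait CastFrom<T> {
    fn cast_from(value: T) -> Self;
}

// Blanket identity impl, mirroring how std provides `impl<T> From<T> for T`.
// Requiring `Numeric: CastFrom<Self>` is therefore not a breaking change:
// every type trivially satisfies it.
impl<T> CastFrom<T> for T {
    #[inline(always)]
    fn cast_from(value: T) -> Self {
        value
    }
}

fn main() {
    let x: u64 = u64::cast_from(42u64);
    assert_eq!(x, 42);
}
```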
Arthur Meyre
268b5892b7 refactor(core): rename files to avoid potential conflicts with exports 2025-03-12 10:24:20 +01:00
Arthur Meyre
a2beabf003 feat: make PBS 128 implems more flexible with respect to input 2025-03-12 10:24:20 +01:00
Arthur Meyre
311d666042 chore: fix a warning for gpu strings
- we don't currently have strings on GPU and so don't run clippy for them
2025-03-12 10:24:20 +01:00
Arthur Meyre
464f4ef9cf test(shortint): enable some standalone tests using ci_run_filter 2025-03-12 10:24:20 +01:00
David Testé
f8e56c104e chore(ci): fix slack notification message
There was a leftover from the first iteration of external contribution management.
2025-03-11 14:20:26 +01:00
Agnes Leroy
adfd8e8c86 fix(gpu): fix ilog2 result when input is 0
This commit reverts ilog2 back to what it was before 00037f3b14.
The implementation on GPU differs from the CPU one, though; we need to
dig further.
2025-03-11 13:52:03 +01:00
Agnes Leroy
473a4e383e chore(gpu): add C++ functions to pop/push/insert in radix ciphertext 2025-03-11 12:49:49 +01:00
Agnes Leroy
fca0cca071 chore(gpu): refactor div to track noise level & degree 2025-03-11 12:49:49 +01:00
David Testé
b7d33e6b3f docs: change svg benchmark tables appearance for pbs 2025-03-07 15:46:22 +01:00
Arthur Meyre
b0d7bb9f95 chore: pre-generate keyswitching keys for shortint tests
- we run into a cross-process race condition which corrupts the key file
- no Rust crate seems to help and Linux file locks are a mess to get right
- also avoid truncating the file when we are going to write to it; get a lock
first
2025-03-07 13:27:35 +01:00
Nicolas Sarlin
396f30ff5d feat(c_api): add new integer types 2025-03-07 11:07:19 +01:00
Nicolas Sarlin
10b82141eb chore(hl): add a feature for extended types 2025-03-07 11:07:19 +01:00
Nicolas Sarlin
e6e7081c7c feat(js): add new integer types 2025-03-07 11:07:19 +01:00
Nicolas Sarlin
20421747ed feat(hl): add new integer types 2025-03-07 11:07:19 +01:00
Agnes Leroy
59bb7ba35c chore(gpu): do not send slack message for external contributions for signed gpu tests 2025-03-06 13:38:54 +01:00
Agnes Leroy
80a1109260 chore(gpu): fix condition to trigger unsigned gpu test 2025-03-06 13:38:54 +01:00
David Testé
54396370a1 chore(ci): use new heuristic for throughput benchmarks
This is done to load the benchmark machine in a smarter way. It makes
sure to saturate the compute load of the benchmark machine while
keeping execution time reasonable.

The iter_batched() criterion method is used instead of iter() so that
benchmarks are compatible with other flavors of operations
(unchecked_* or smart_*).
2025-03-06 13:26:23 +01:00
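A minimal criterion sketch of the iter_batched() pattern described above, with a plain stand-in for an operation that consumes its operands (it assumes the criterion dev-dependency; the benchmarked function is purely illustrative, not a tfhe-rs operation).

```
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

// Stand-in for an operation that consumes (or mutates) its input, the way
// unchecked_*/smart_* integer ops consume fresh ciphertexts.
fn consuming_op(mut v: Vec<u64>) -> u64 {
    v.iter_mut().for_each(|x| *x = x.wrapping_mul(3));
    v.iter().sum()
}

fn bench(c: &mut Criterion) {
    c.bench_function("consuming_op", |b| {
        // iter_batched regenerates a fresh input for every measurement batch,
        // so operations that modify their operands stay benchmarkable.
        b.iter_batched(
            || vec![1u64; 1 << 10],      // setup: build a fresh input, not timed
            |input| consuming_op(input), // routine: the timed part
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```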
Agnes Leroy
3621d12c42 chore(ci): add hourly cost for sxm5 vms 2025-03-06 10:19:49 +01:00
Nicolas Sarlin
1f2e1537fa chore(ci): update tfhe-lints for newer compiler version 2025-03-06 09:48:18 +01:00
Arthur Meyre
52a1191474 chore(ci): force installation of toolchain for tfhe-lints
- also update toolchain.txt to match the tfhe-lint toolchain
2025-03-06 09:48:18 +01:00
Nicolas Sarlin
d06e8d1e87 chore(ci): re-enable tfhe_lints 2025-03-06 09:48:18 +01:00
David Testé
863234d134 docs: change svg benchmark tables appearance
Reduce the number of FheUint types displayed in the integer benchmark
tables. Increase font size and improve column fitting.
Remove the link to enlarge the image.
2025-03-05 18:41:33 +01:00
David Testé
fcfb77a8c5 chore(ci): fix permanent instance selection condition
Due to the 'continue-on-error' directive, the 'use-permanent-instance' step could not rely on the failure() function.
2025-03-05 18:13:20 +01:00
tmontaigu
98d58ada7a fix: BlockDecomposer
The BlockDecomposer gave the possibility, when the number of bits in the
original integer was not a multiple of the number of bits per block,
to force the extra bits of the last block to a particular value.

However, the way this was done could only work when setting these bits
to 1; when wanting to set them to 0 it would not work.

The good news is that we actually never wanted to set them to 0,
but it should still be fixed for completeness, and to allow other
features to be added without bugs
2025-03-05 14:27:56 +01:00
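An illustrative sketch of the padding-bit issue: OR-ing a mask can only force the extra bits of the last block to 1, while forcing them to 0 needs an AND with the mask's complement. This is not the actual BlockDecomposer code; names and shapes are assumptions.

```
// `used_bits` is the number of meaningful bits in the last block; the
// remaining bits up to `block_bits` must be forced to `pad_with` (0 or 1).
// Assumes block_bits < 64 so the shifts below don't overflow.
fn pad_last_block(block: u64, used_bits: u32, block_bits: u32, pad_with: bool) -> u64 {
    let pad_mask = ((1u64 << block_bits) - 1) & !((1u64 << used_bits) - 1);
    if pad_with {
        // OR-ing the mask is enough to force the extra bits to 1...
        block | pad_mask
    } else {
        // ...but forcing them to 0 needs an AND with the complement,
        // which is the case the original code did not handle.
        block & !pad_mask
    }
}

fn main() {
    // 4-bit block where only the low 3 bits are meaningful.
    assert_eq!(pad_last_block(0b1011, 3, 4, true), 0b1011);
    assert_eq!(pad_last_block(0b1011, 3, 4, false), 0b0011);
}
```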
Agnes Leroy
8962d1f925 chore(gpu): refactor full propagation to track noise / degree 2025-03-05 11:06:30 +01:00
Arthur Meyre
f7655cc749 fix(shortint): make noise_level field of Ciphertext private again
- this is required to make sure we have correctness checks on noise_level
updates if we enable them
2025-03-05 10:16:17 +01:00
Nicolas Sarlin
371e8238db chore(ci): disable dylint until rustup issue is fixed 2025-03-04 15:57:58 +01:00
Beka Barbakadze
c1d534efa4 refactor(gpu): refactor double2 operators to use cuda intrinsics 2025-03-03 17:29:39 +01:00
David Testé
47589ea9a7 chore(bench): run core_crypto benchmarks on all parameters p-fail
This also adds KS-PBS benchmarks.
2025-03-03 16:01:17 +01:00
Agnes Leroy
ce327b7b27 chore(gpu): refactor mul/scalar mul to track noise/degree 2025-03-03 13:51:00 +01:00
Arthur Meyre
877d0234ac fix: fix the atomic pattern used to cast in trivium and a test in shortint
- parameters are optimized for a clean ciphertext, the ciphertext being
keyswitched was noisy
2025-03-03 13:10:11 +01:00
dependabot[bot]
f457ac40e5 chore(deps): bump codecov/codecov-action from 5.3.1 to 5.4.0
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 5.3.1 to 5.4.0.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](13ce06bfc6...0565863a31)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 11:53:07 +01:00
dependabot[bot]
d9feb57b92 chore(deps): bump slsa-framework/slsa-github-generator
Bumps [slsa-framework/slsa-github-generator](https://github.com/slsa-framework/slsa-github-generator) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/slsa-framework/slsa-github-generator/releases)
- [Changelog](https://github.com/slsa-framework/slsa-github-generator/blob/main/CHANGELOG.md)
- [Commits](https://github.com/slsa-framework/slsa-github-generator/compare/v2.0.0...v2.1.0)

---
updated-dependencies:
- dependency-name: slsa-framework/slsa-github-generator
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 11:52:56 +01:00
dependabot[bot]
fa41fb3ad4 chore(deps): bump actions/cache from 4.2.1 to 4.2.2
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.1 to 4.2.2.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](0c907a75c2...d4323d4df1)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 11:52:45 +01:00
dependabot[bot]
375a482d0b chore(deps): bump actions/download-artifact from 4.1.8 to 4.1.9
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4.1.8 to 4.1.9.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](fa0a91b85d...cc20338598)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-03-03 11:52:37 +01:00
Beka Barbakadze
7e941b29c1 refactor(gpu): use hexes to initialize twiddles for 64 bit fft 2025-03-03 14:44:12 +04:00
David Testé
3897137a3f chore(ci): fallback on permanent h100 instance on shortage
When a shortage occurs on n3-H100x1 instances on Hyperstack, we'll
fall back on the permanent one registered on GitHub.
This can be done by using 'h100x1' as runner label to run a job on
it.
2025-03-03 11:38:32 +01:00
Beka Barbakadze
3988c85d6b feat(gpu): Implement fft128 in cuda backend 2025-03-03 12:27:46 +04:00
Agnes Leroy
c1bf43eac1 feat(gpu): add a function to set a CudaLweList to 0 2025-02-28 16:46:17 +01:00
Agnes Leroy
95863e1e36 chore(gpu): plug in signed gpu tests in the hl api 2025-02-28 13:42:52 +01:00
Pedro Alves
a508f4cadc fix(gpu): enforce tighter bounds on compression output 2025-02-28 07:12:36 -03:00
Agnes Leroy
dad278cdd3 chore(gpu): fix typo in doc 2025-02-28 11:12:17 +01:00
tmontaigu
699e24f735 docs: rename to README as its needed for link to work 2025-02-28 10:23:46 +01:00
Agnes Leroy
12ed899b34 chore(gpu): trigger long run tests every evening, edit workflow name 2025-02-27 17:22:02 +01:00
David Testé
8565b79a28 chore(ci): switch environment and add fallback for gpu profiles
Switch n3-H100-SXM5x8 to US-1 as CANADA is out of stock on this
instance.
Also, L40 instances fall back on n3-RTX-A6000x1 to mitigate
resource shortage issues.
2025-02-27 16:59:04 +01:00
Agnes Leroy
1d7f9f1152 chore(gpu): refactor comparisons to track noise/degree 2025-02-27 16:57:24 +01:00
tmontaigu
3ecdd0d1bc fix(c-api): add missing casts
cast_into FheUint{12, 512, 1024, 2048} were missing from the C API
2025-02-27 16:30:51 +01:00
J-B Orfila
14517ca111 docs: add link in the README 2025-02-27 15:09:41 +01:00
Agnes Leroy
a2eceabd82 fix(gpu): fix scalar comparisons with 1 block 2025-02-27 13:11:36 +01:00
Guillermo Oyarzun
968ab31f27 fix(cpu): fix corner case when estimating the num blocks required 2025-02-27 11:38:17 +01:00
Agnes Leroy
74d5a88f1b chore(gpu): replace asserts with panic 2025-02-27 11:36:59 +01:00
Agnes Leroy
e18ce00f63 chore(gpu): increase 4090 test timeout 2025-02-27 11:27:55 +01:00
tmontaigu
7ec8f901da docs(js): update JS example
The example was still using CompactFheUint32List,
which has been removed in favor of the more generic CompactCiphertextList
2025-02-27 10:54:08 +01:00
Arthur Meyre
610406ac27 chore: link CONTRIBUTING.md in the documentation 2025-02-26 16:07:44 +01:00
J-B Orfila
4162ff5b64 docs: security disclaimer updated 2025-02-26 16:07:31 +01:00
J-B Orfila
efd06c5b43 docs: correcting parameter section 2025-02-26 16:07:31 +01:00
Nicolas Sarlin
bd2a488f13 chore(doc): add a doc page about parameters 2025-02-26 16:07:31 +01:00
David Testé
9f48db2a90 chore(ci): fix workflow concurrency condition
Referencing the current branch using github.head_ref is a leftover
from handling the pull_request_target event. Since this event has been
removed, there is no need to be specific and we can instead use
'github.workflow_ref', which is more robust.
2025-02-26 14:11:42 +01:00
Pedro Alves
f962716fa5 feat(gpu): refactor the sample extract entry point so the user can pass how many LWEs should be extracted per GLWE 2025-02-26 11:58:47 +01:00
Arthur Meyre
ec3f3a1b52 chore(docs): use tilde requirements to minimize breakage on users' end 2025-02-25 17:59:23 +01:00
Arthur Meyre
ab36f36116 chore: update README 2025-02-25 17:59:23 +01:00
David Testé
06638c33d7 chore(ci): add contributing guidance 2025-02-25 17:21:42 +01:00
David Testé
e583212e6d docs: refactor and update benchmarks pages
Benchmark tables are rendered as descriptive SVG images.
Results are sorted by backend to give a clearer view in the content tree.
PBS benchmarks now display results for various p-fail values and several
precisions.
2025-02-25 12:47:12 +01:00
David Testé
486ec9f053 chore(ci): update cpu aws ami and install git-lfs
Several network errors occurred while trying to install git-lfs
from within the backward compatibility tests workflow. Having git-lfs
installed directly in the Amazon Machine Image fixes this issue.
2025-02-25 12:45:47 +01:00
Arthur Meyre
0216e640bf test: make the bound on the base variance check a bit looser
We have seen failures; we need proper confidence intervals on these tests
2025-02-24 17:47:30 +01:00
David Testé
d00224caa3 chore(ci): add should-run to tfhe-fft and tfhe-ntt tests
This is done to avoid testing the tfhe-fft/ntt crates if nothing
changes in their source files.
However, these tests are still run unconditionally on each push to the
main branch.
2025-02-24 16:35:31 +01:00
dependabot[bot]
bd06971680 chore(deps): bump actions/cache from 4.2.0 to 4.2.1
Bumps [actions/cache](https://github.com/actions/cache) from 4.2.0 to 4.2.1.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](1bd1e32a3b...0c907a75c2)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 11:46:53 +01:00
dependabot[bot]
58688cd401 chore(deps): bump actions/upload-artifact from 4.6.0 to 4.6.1
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 4.6.0 to 4.6.1.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](65c4c4a1dd...4cec3d8aa0)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-24 11:46:44 +01:00
Agnes Leroy
2757f7209a chore(gpu): update backend readme 2025-02-24 11:22:14 +01:00
Mayeul@Zama
b38b119746 chore(docs): add HL strings documentation 2025-02-24 10:58:29 +01:00
Pedro Alves
219c755a77 fix(gpu): fix wrong number of blocks used in cast 2025-02-21 20:09:54 -03:00
Mayeul@Zama
fc4abd5fb1 chore: update toolchain 2025-02-21 15:03:23 +01:00
Guillermo Oyarzun
5de1445cbf fix(gpu): fix wrong assert in division 2025-02-21 11:27:03 +01:00
Yuxi Zhao
6b21bff1e8 chore(docs): improve navigation 2025-02-20 17:29:36 +01:00
Arthur Meyre
a1dc260fb2 chore(ci): make md doctest checker a bit more versatile on user errors 2025-02-20 17:29:36 +01:00
David Testé
5d9af12f6e chore(ci): fix release workflow for tfhe-versionable
The tfhe-versionable crate depends on tfhe-versionable-derive.
The workflow now ensures that the derive crate is published before
attempting to package tfhe-versionable.

The dry-run option is removed since it cannot be used correctly due
to the aforementioned reason.
2025-02-20 11:44:58 +01:00
Guillermo Oyarzun
32c93876d7 feat(gpu): enable division in high level api 2025-02-20 10:33:07 +01:00
Guillermo Oyarzun
bede76be82 feat(gpu): enable if then else for boolean ciphertexts in hlapi 2025-02-19 12:50:38 +01:00
Guillermo Oyarzun
508713f926 fix(gpu): enable large integers for the classical pbs flavors 2025-02-19 06:52:49 -03:00
Guillermo Oyarzun
6d7b32dd0a fix(gpu): enable large integers for the other multi bit pbs 2025-02-19 06:52:49 -03:00
Pedro Alves
15f7ba20aa fix(gpu): Remove unnecessary and incorrect bound check for decompression
Removed unnecessary bounds check for the number of LWEs against polynomial size.
2025-02-19 06:17:11 -03:00
Arthur Meyre
4fa59cdd6d chore(ci): fix web packages publish with provenance
- re-enabled required permissions, notably write id-token
2025-02-18 16:18:59 +01:00
Arthur Meyre
69d5b7206e chore(ci): fix packaging job to also have exported env vars 2025-02-18 15:24:24 +01:00
Arthur Meyre
a9cb617fe8 chore(ci): fix cuda release workflow to have rust re-installed for cargo 2025-02-18 14:58:40 +01:00
Arthur Meyre
54962af887 chore: update copyright year to 2025
co-authored-by: wgyt <wgythe@gmail.com>
2025-02-18 13:19:28 +01:00
Arthur Meyre
cb7d77f59a feat: add 2^-128 parameters 2025-02-18 13:19:28 +01:00
Arthur Meyre
0ecd5e1508 chore: bump tfhe to 1.0.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
dc8b293895 chore: bump tfhe-cuda-backend to 0.8.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
4ca4203c02 chore: bump tfhe-zk-pok to 0.5.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
dfa6b2827a chore: bump tfhe-fft to 0.8.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
06ae56b389 chore: bump tfhe-ntt to 0.5.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
f0238bab16 chore: bump tfhe-versionable to 0.5.0 2025-02-18 13:19:28 +01:00
Arthur Meyre
e4e03277b5 fix(shortint): fix CompressedModulusSwitchNoiseReductionKey generation
- was using the wrong seeded encryption API resulting in garbage values
when decompressing
2025-02-18 13:19:28 +01:00
Arthur Meyre
49566cd7cf refactor(core): rename foot-gunny functions for seeded entities encryption 2025-02-18 13:19:28 +01:00
Guillermo Oyarzun
e0df5364f9 fix(gpu): enable large number of samples in pbs tbc 2025-02-18 07:26:28 -03:00
tmontaigu
4650a5e3e4 chore(hlapi): add FheTypes::AsciiString 2025-02-18 10:22:00 +01:00
tmontaigu
c86d2616c1 refactor(hlapi)!: introduce HlCompactable trait
The purpose of introducing this trait
is to purposefully create a breaking change
so that later we have more freedom to refactor
with less risk of breakage
2025-02-18 10:22:00 +01:00
tmontaigu
a40501691a feat(hlapi): allow strings in compact/compressed list 2025-02-18 10:22:00 +01:00
David Testé
c7b0fe37ec chore(ci): enable throughput benchmarks for zk-pok 2025-02-18 09:56:49 +01:00
Guillermo Oyarzun
0f44ffdf30 fix(gpu): enable larger number of samples in the keyswitch 2025-02-17 19:34:26 -03:00
tmontaigu
380bc9b91a fix: rotations of 1 blocks of 4_4 2025-02-17 17:27:43 +01:00
Nicolas Sarlin
0809eb942f chore!: homogenize conformance parameters
BREAKING CHANGE: renamed some conformance parameters public types
2025-02-17 15:07:09 +01:00
dependabot[bot]
fb730d2953 chore(deps): bump zgosalvez/github-actions-ensure-sha-pinned-actions
Bumps [zgosalvez/github-actions-ensure-sha-pinned-actions](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions) from 3.0.20 to 3.0.22.
- [Release notes](https://github.com/zgosalvez/github-actions-ensure-sha-pinned-actions/releases)
- [Commits](c3a2b64f69...25ed13d062)

---
updated-dependencies:
- dependency-name: zgosalvez/github-actions-ensure-sha-pinned-actions
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-02-17 14:13:41 +01:00
tmontaigu
1b7f7a5c8f feat: add strings to integer's compressed list 2025-02-14 16:29:56 +01:00
Guillermo Oyarzun
0d3d23daec chore(gpu): remove unused variables 2025-02-14 13:07:07 +01:00
Agnes Leroy
c5f44a6581 chore(gpu): refactor overflowing sub to track noise / degree 2025-02-14 12:03:20 +01:00
tmontaigu
cda43fd407 feat(strings): make strings compatible with compact list 2025-02-14 10:14:31 +01:00
Agnes Leroy
bfd3773322 chore(gpu): refactor arithmetic scalar shift 2025-02-13 20:58:12 +01:00
Agnes Leroy
a7c9357a02 fix(gpu): fix memory error in shift and rotate 2025-02-13 20:57:31 +01:00
Agnes Leroy
285fd2437e chore(gpu): add some checks on radix sizes for carry propagation 2025-02-13 20:57:31 +01:00
David Testé
7ee49387fe chore(ci): deduplicate parameters set to send to lattice estimator
From SageMath's point of view, some tfhe-rs parameter sets are
equivalent. We deduplicate those by storing their names in the tag
field. Grouping them that way decreases analysis time
dramatically.
2025-02-13 17:10:45 +01:00
Arthur Meyre
8756869fe3 fix: fix compression code for GPU which assumed a CPU data layout
- the CPU data layout is truncated to only store relevant bodies (i.e.
empty bodies are assumed to be 0) but the GPU CUDA code manages full GLWEs
only. To fix that, we manage the data layout during conversions to have
consistent behavior when copying the list to/from CPU/GPU. Compression code
has been fixed on the CPU side to have the proper length for the output
expected by the CUDA code
2025-02-13 17:06:19 +01:00
Mayeul@Zama
9e4b585468 chore(core): use real modulus in test 2025-02-13 16:21:26 +01:00
tmontaigu
37934e42c1 fix(integer): rotations/shifts < 2 blocks
This commit fixes a few bugs

* The shift/rotate functions used when blocks encrypt a number of bits
  that is a power of 2 were causing a panic when working on one block.
  - Also, when the number of blocks was low (e.g. 2 blocks with 2_2
    params) a noise cleaning step was wrongly skipped

* The function used when blocks encrypt a non-power-of-2 number of bits
also had a problem

The tests have been updated to test with different block sizes and check
the noise level

Overall these bugs only affected ciphertexts with low block counts
(e.g. FheUint2, FheUint4)
2025-02-13 12:59:26 +01:00
Mayeul@Zama
53a1f35d3b feat: update noise reduction to take input noise into account 2025-02-13 10:57:28 +01:00
Mayeul@Zama
4305f8d158 chore(core): refactor DispersionParameter 2025-02-13 10:57:28 +01:00
David Testé
eeb6c8a71f chore(ci): remove pull_request_target for external contributions
We use large GitHub-hosted runners to run the CI pipeline for external
contributions. This avoids possible secret exposure due to the usage
of the pull_request_target event. It also removes a layer of complexity
needed to ensure such secrets are not exposed.
The flow is also improved since tfhe-rs maintainers won't have to
relaunch failed jobs individually, thanks to the "approve and run"
button in the GitHub user interface.
2025-02-13 08:45:02 +01:00
tmontaigu
16d8af150c fix(gpu): compressed list gpu <-> cpu
Some counts were not copied from the correct
source to the correct destination.

And more importantly, the list on the CUDA side was stored
using a GlweCiphertextList but the data was compressed
(so the list was mostly empty). This use of a GlweList
instead of a specialized type led to problems when converting
to Cpu
2025-02-12 15:17:23 +01:00
tmontaigu
d0b0fe8edb fix(gpu): fix wrong degree after decompression
For Signed and Unsigned DataKind, the degree was
incorrectly set, leading to unneeded carry propagations
2025-02-12 15:17:23 +01:00
David Testé
3df08e9259 chore(ci): install github runner as ubuntu user for gpu workflows 2025-02-12 12:10:53 +01:00
2233 changed files with 655775 additions and 66875 deletions

12
.cargo/audit.toml Normal file
View File

@@ -0,0 +1,12 @@
[advisories]
ignore = [
# Ignoring unmaintained 'paste' advisory as it is a widely used, low-risk build dependency.
"RUSTSEC-2024-0436",
]
[output]
# Deny advisories that are warnings by default.
# At the moment this works if we allow paste, we might want to disable this in the future if it
# becomes too tedious
deny = ["warnings"]
quiet = false

3
.gitattributes vendored Normal file
View File

@@ -0,0 +1,3 @@
*.hpu filter=lfs diff=lfs merge=lfs -text
*.bcode filter=lfs diff=lfs merge=lfs -text
*.cbor filter=lfs diff=lfs merge=lfs -text

View File

@@ -6,6 +6,9 @@ self-hosted-runner:
- large_windows_16_latest
- large_ubuntu_16
- large_ubuntu_16-22.04
- v80-desktop
- v80-marais
- v80-couperin
# Configuration variables in array of strings defined in your repository or
# organization. `null` means disabling configuration variables check.
# Empty array means no configuration variable is allowed.

104
.github/actions/gpu_setup/action.yml vendored Normal file
View File

@@ -0,0 +1,104 @@
name: Setup Cuda
description: Setup Cuda on Hyperstack or GitHub instance
inputs:
cuda-version:
description: Version of Cuda to use
required: true
gcc-version:
description: Version of GCC to use
required: true
github-instance:
description: Instance is hosted on GitHub
default: 'false'
runs:
using: "composite"
steps:
# Mandatory on hyperstack since a bootable volume is not re-usable yet.
- name: Install dependencies
shell: bash
run: |
wget https://github.com/Kitware/CMake/releases/download/v"${CMAKE_VERSION}"/cmake-"${CMAKE_VERSION}"-linux-x86_64.sh
echo "${CMAKE_SCRIPT_SHA} cmake-${CMAKE_VERSION}-linux-x86_64.sh" > checksum
sha256sum -c checksum
sudo bash cmake-"${CMAKE_VERSION}"-linux-x86_64.sh --skip-license --prefix=/usr/ --exclude-subdir
sudo apt update
sudo apt remove -y unattended-upgrades
sudo apt install -y cmake-format libclang-dev
env:
CMAKE_VERSION: 3.29.6
CMAKE_SCRIPT_SHA: "6e4fada5cba3472ae503a11232b6580786802f0879cead2741672bf65d97488a"
- name: Install GCC
if: inputs.github-instance == 'true'
shell: bash
env:
GCC_VERSION: ${{ inputs.gcc-version }}
run: |
sudo apt-get install gcc-"${GCC_VERSION}" g++-"${GCC_VERSION}"
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-"${GCC_VERSION}" 20
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-"${GCC_VERSION}" 20
- name: Check GCC
shell: bash
env:
GCC_VERSION: ${{ inputs.gcc-version }}
run: |
which gcc-"${GCC_VERSION}"
- name: Install CUDA
if: inputs.github-instance == 'true'
shell: bash
env:
CUDA_VERSION: ${{ inputs.cuda-version }}
CUDA_KEYRING_PACKAGE: cuda-keyring_1.1-1_all.deb
CUDA_KEYRING_SHA: "d93190d50b98ad4699ff40f4f7af50f16a76dac3bb8da1eaaf366d47898ff8df"
run: |
# Use Sed to extract a value from a string, this cannot be done with the ${variable//search/replace} pattern.
# shellcheck disable=SC2001
TOOLKIT_VERSION="$(echo "${CUDA_VERSION}" | sed 's/\(.*\)\.\(.*\)/\1-\2/')"
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/${CUDA_KEYRING_PACKAGE}
echo "${CUDA_KEYRING_SHA} ${CUDA_KEYRING_PACKAGE}" > checksum
sha256sum -c checksum
sudo dpkg -i "${CUDA_KEYRING_PACKAGE}"
sudo apt update
sudo apt -y install cuda-toolkit-"${TOOLKIT_VERSION}"
- name: Export CUDA variables
shell: bash
run: |
find /usr/local -executable -name "nvcc"
CUDA_PATH=/usr/local/cuda-"${CUDA_VERSION}"
{
echo "CUDA_PATH=$CUDA_PATH";
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH";
echo "CUDA_MODULE_LOADER=EAGER";
echo "PATH=$PATH:$CUDA_PATH/bin";
} >> "${GITHUB_ENV}"
{
echo "PATH=$PATH:$CUDA_PATH/bin";
} >> "${GITHUB_PATH}"
env:
CUDA_VERSION: ${{ inputs.cuda-version }}
# Specify the correct host compilers
- name: Export gcc and g++ variables
shell: bash
run: |
{
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: ${{ inputs.gcc-version }}
- name: Check setup
shell: bash
run: |
which nvcc
- name: Check device is detected
shell: bash
run: nvidia-smi

View File

@@ -1,53 +0,0 @@
name: Setup Cuda
description: Setup Cuda on Hyperstack instance
inputs:
cuda-version:
description: Version of Cuda to use
required: true
gcc-version:
description: Version of GCC to use
required: true
cmake-version:
description: Version of cmake to use
default: 3.29.6
runs:
using: "composite"
steps:
# Mandatory on hyperstack since a bootable volume is not re-usable yet.
- name: Install dependencies
shell: bash
run: |
sudo apt update
sudo apt install -y checkinstall zlib1g-dev libssl-dev libclang-dev
wget https://github.com/Kitware/CMake/releases/download/v${{ inputs.cmake-version }}/cmake-${{ inputs.cmake-version }}.tar.gz
tar -zxvf cmake-${{ inputs.cmake-version }}.tar.gz
cd cmake-${{ inputs.cmake-version }}
./bootstrap
make -j"$(nproc)"
sudo make install
- name: Export CUDA variables
shell: bash
run: |
CUDA_PATH=/usr/local/cuda-${{ inputs.cuda-version }}
echo "CUDA_PATH=$CUDA_PATH" >> "${GITHUB_ENV}"
echo "$CUDA_PATH/bin" >> "${GITHUB_PATH}"
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
echo "CUDACXX=/usr/local/cuda-${{ inputs.cuda-version }}/bin/nvcc" >> "${GITHUB_ENV}"
# Specify the correct host compilers
- name: Export gcc and g++ variables
shell: bash
run: |
{
echo "CC=/usr/bin/gcc-${{ inputs.gcc-version }}";
echo "CXX=/usr/bin/g++-${{ inputs.gcc-version }}";
echo "CUDAHOSTCXX=/usr/bin/g++-${{ inputs.gcc-version }}";
echo "HOME=/home/ubuntu";
} >> "${GITHUB_ENV}"
- name: Check device is detected
shell: bash
run: nvidia-smi

View File

@@ -7,3 +7,5 @@ updates:
# Check for updates to GitHub Actions every sunday
interval: "weekly"
day: "sunday"
cooldown:
default-days: 7

View File

@@ -1,16 +1,22 @@
# Add labels in pull request
name: PR label manager
name: approve_label
on:
pull_request:
pull_request_review:
types: [submitted]
permissions: {}
# zizmor: ignore[concurrency-limits] this workflow needs to react to any event in a pull-request
jobs:
trigger-tests:
name: approve_label/trigger-tests
runs-on: ubuntu-latest
permissions:
pull-requests: write
pull-requests: write # Needed to apply or remove label
steps:
- name: Get current labels
uses: snnaplab/get-labels-action@f426df40304808ace3b5282d4f036515f7609576
@@ -34,3 +40,10 @@ jobs:
# We need to use a PAT to be able to trigger `labeled` event for the other workflow.
github_token: ${{ secrets.FHE_ACTIONS_TOKEN }}
labels: approved
- name: Check if maintainer needs to handle label manually
if: ${{ failure() }}
run: |
echo "Pull-request from an external contributor."
echo "A maintainer need to manually add/remove the 'approved' label."
exit 1

View File

@@ -1,5 +1,5 @@
# Run backward compatibility tests
name: Backward compatibility Tests on CPU
name: aws_tfhe_backward_compat_tests
env:
CARGO_TERM_COLOR: always
@@ -11,54 +11,37 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
paths:
- '**'
- '!.github/**'
- '!ci/**'
push:
branches:
- main
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (backward-compat-tests)
needs: check-user-permission
name: aws_tfhe_backward_compat_tests/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -67,73 +50,98 @@ jobs:
backend: aws
profile: cpu-small
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
backward-compat-tests:
name: Backward compatibility tests
name: aws_tfhe_backward_compat_tests/backward-compat-tests (bpr)
needs: [ setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
persist-credentials: 'true' # Needed to pull lfs data
token: ${{ env.CHECKOUT_TOKEN }}
# Cache key is an aggregated hash of lfs files hashes
- name: Get LFS data sha
id: hash-lfs-data
run: |
SHA=$(git lfs ls-files -l -I utils/tfhe-backward-compat-data | sha256sum | cut -d' ' -f1)
echo "sha=${SHA}" >> "${GITHUB_OUTPUT}"
- name: Retrieve data from cache
id: retrieve-data-cache
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
with:
path: |
utils/tfhe-backward-compat-data/**/*.cbor
utils/tfhe-backward-compat-data/**/*.bcode
key: ${{ steps.hash-lfs-data.outputs.sha }}
- name: Pull test data
if: steps.retrieve-data-cache.outputs.cache-hit != 'true'
run: |
make pull_backward_compat_data
# Pull token was stored by action/checkout to be used by lfs, we don't need it anymore
- name: Remove git credentials
run: | # Starting version 6.0, action/checkout uses a dedicated file for git credentials
git config --local --unset-all http.https://github.com/.extraheader || rm "${RUNNER_TEMP}"/git-credentials-*.config
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Install git-lfs
run: |
sudo apt update && sudo apt -y install git-lfs
- name: Use specific data branch
if: ${{ contains(github.event.pull_request.labels.*.name, 'data_PR') }}
env:
PR_BRANCH: ${{ github.head_ref || github.ref_name }}
run: |
echo "BACKWARD_COMPAT_DATA_BRANCH=${PR_BRANCH}" >> "${GITHUB_ENV}"
- name: Get backward compat branch
id: backward_compat_branch
run: |
BRANCH="$(make backward_compat_branch)"
echo "branch=${BRANCH}" >> "${GITHUB_OUTPUT}"
- name: Clone test data
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
persist-credentials: 'false'
repository: zama-ai/tfhe-backward-compat-data
path: tests/tfhe-backward-compat-data
lfs: 'true'
ref: ${{ steps.backward_compat_branch.outputs.branch }}
- name: Run backward compatibility tests
run: |
make test_backward_compatibility_ci
- name: Slack Notification
if: ${{ failure() }}
- name: Store data in cache
if: steps.retrieve-data-cache.outputs.cache-hit != 'true'
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
with:
path: |
utils/tfhe-backward-compat-data/**/*.cbor
utils/tfhe-backward-compat-data/**/*.bcode
key: ${{ steps.hash-lfs-data.outputs.sha }}
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Backward compatibility tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Backward compatibility tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (backward-compat-tests)
name: aws_tfhe_backward_compat_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, backward-compat-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -143,8 +151,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (backward-compat-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (backward-compat-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Run a small subset of tests to ensure quick feedback.
name: Fast AWS Tests on CPU
name: aws_tfhe_fast_tests
env:
CARGO_TERM_COLOR: always
@@ -11,32 +11,30 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_64-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: aws_tfhe_fast_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
csprng_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.csprng_any_changed }}
zk_pok_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.zk_pok_any_changed }}
@@ -65,16 +63,15 @@ jobs:
any_file_changed: ${{ env.IS_PULL_REQUEST == 'false' || steps.aggregated-changes.outputs.any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
dependencies:
@@ -137,35 +134,19 @@ jobs:
run: |
echo "any_changed=true" >> "$GITHUB_OUTPUT"
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (fast-tests)
name: aws_tfhe_fast_tests/setup-instance
if: github.event_name == 'workflow_dispatch' ||
(github.event_name != 'workflow_dispatch' && needs.should-run.outputs.any_file_changed == 'true')
needs: [ should-run, check-user-permission ]
needs: should-run
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -174,23 +155,29 @@ jobs:
backend: aws
profile: cpu-big
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
fast-tests:
name: Fast CPU tests
needs: [ should-run, setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -198,9 +185,11 @@ jobs:
if: needs.should-run.outputs.csprng_test == 'true'
run: |
make test_tfhe_csprng
make test_tfhe_csprng_big_endian
- name: Run tfhe-zk-pok tests
if: needs.should-run.outputs.zk_pok_test == 'true'
# Always run it to catch non deterministic bugs earlier
# if: needs.should-run.outputs.zk_pok_test == 'true'
run: |
make test_zk_pok
@@ -230,7 +219,7 @@ jobs:
- name: Node cache restoration
id: node-cache
uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
with:
path: |
~/.nvm
@@ -243,7 +232,7 @@ jobs:
make install_node
- name: Node cache save
uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
if: steps.node-cache.outputs.cache-hit != 'true'
with:
path: |
@@ -285,23 +274,32 @@ jobs:
run: |
make test_zk
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() && env.SECRETS_AVAILABLE == 'true' }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Fast AWS tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Fast AWS tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (fast-tests)
name: aws_tfhe_fast_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, fast-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -310,9 +308,8 @@ jobs:
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (fast-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (fast-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: AWS Unsigned Integer Tests on CPU
name: aws_tfhe_integer_tests
env:
CARGO_TERM_COLOR: always
@@ -10,59 +10,55 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
# We clear the cache to reduce memory pressure because of the numerous processes of cargo
# nextest
TFHE_RS_CLEAR_IN_MEMORY_KEY_CACHE: "1"
NO_BIG_PARAMS: FALSE
REF: ${{ github.event.pull_request.head.sha || github.sha }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_64-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull requests as well as contributions from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
push:
branches:
- main
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: aws_tfhe_integer_tests/should-run
if:
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'pull_request_target' && contains(github.event.label.name, 'approved')) ||
(github.event_name == 'pull_request' && contains(github.event.label.name, 'approved')) ||
github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
integer_test: ${{ github.event_name == 'workflow_dispatch' ||
steps.changed-files.outputs.integer_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
integer:
@@ -75,26 +71,9 @@ jobs:
- tfhe/src/integer/**
- .github/workflows/aws_tfhe_integer_tests.yml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (unsigned-integer-tests)
needs: [ should-run, check-user-permission ]
name: aws_tfhe_integer_tests/setup-instance
needs: should-run
if:
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs' && needs.should-run.outputs.integer_test == 'true') ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
@@ -102,11 +81,12 @@ jobs:
github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -115,28 +95,35 @@ jobs:
backend: aws
profile: cpu-big
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
unsigned-integer-tests:
name: Unsigned integer tests
name: aws_tfhe_integer_tests/unsigned-integer-tests
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 480 # 8 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: "false"
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Should skip big parameters set
if: github.event_name == 'pull_request_target'
if: github.event_name == 'pull_request'
run: |
echo "NO_BIG_PARAMS=TRUE" >> "${GITHUB_ENV}"
@@ -154,25 +141,34 @@ jobs:
- name: Run unsigned integer tests
run: |
AVX512_SUPPORT=ON NO_BIG_PARAMS=${{ env.NO_BIG_PARAMS }} BIG_TESTS_INSTANCE=TRUE make test_unsigned_integer_ci
AVX512_SUPPORT=ON NO_BIG_PARAMS="${NO_BIG_PARAMS}" BIG_TESTS_INSTANCE=TRUE make test_unsigned_integer_ci
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Unsigned Integer tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Unsigned Integer tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (unsigned-integer-tests)
name: aws_tfhe_integer_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [setup-instance, unsigned-integer-tests]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -182,8 +178,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (unsigned-integer-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (unsigned-integer-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,116 @@
name: aws_tfhe_noise_checks
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
permissions:
contents: read
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
setup-instance:
name: aws_tfhe_noise_checks/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
# We want an hpc7a: more compute, so it will be faster
profile: bench
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "Cannot run this without secrets"
exit 1
noise-checks:
name: aws_tfhe_noise_checks/noise-checks
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 1440
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Run noise checks
timeout-minutes: 1440
run: |
make test_noise_check
- name: Set pull-request URL
if: ${{ !success() }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ !success() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Noise checks tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: aws_tfhe_noise_checks/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, noise-checks ]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ !success() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (noise-checks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: AWS Signed Integer Tests on CPU
name: aws_tfhe_signed_integer_tests
env:
CARGO_TERM_COLOR: always
@@ -10,60 +10,56 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
# We clear the cache to reduce memory pressure because of the numerous processes of cargo
# nextest
TFHE_RS_CLEAR_IN_MEMORY_KEY_CACHE: "1"
NO_BIG_PARAMS: FALSE
REF: ${{ github.event.pull_request.head.sha || github.sha }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_64-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull requests as well as contributions from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
push:
branches:
- main
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: aws_tfhe_signed_integer_tests/should-run
if:
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
((github.event_name == 'pull_request_target' || github.event_name == 'pull_request_target') && contains(github.event.label.name, 'approved')) ||
(github.event_name == 'pull_request' && contains(github.event.label.name, 'approved')) ||
github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
integer_test: ${{ github.event_name == 'workflow_dispatch' ||
steps.changed-files.outputs.integer_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
integer:
@@ -76,26 +72,9 @@ jobs:
- tfhe/src/integer/**
- .github/workflows/aws_tfhe_signed_integer_tests.yml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (unsigned-integer-tests)
needs: [ should-run, check-user-permission ]
name: aws_tfhe_signed_integer_tests/setup-instance
needs: should-run
if:
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs' && needs.should-run.outputs.integer_test == 'true') ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
@@ -103,11 +82,12 @@ jobs:
github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -116,28 +96,34 @@ jobs:
backend: aws
profile: cpu-big
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
signed-integer-tests:
name: Signed integer tests
name: aws_tfhe_signed_integer_tests/signed-integer-tests
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: "false"
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Should skip big parameters set
if: github.event_name == 'pull_request_target'
if: github.event_name == 'pull_request'
run: |
echo "NO_BIG_PARAMS=TRUE" >> "${GITHUB_ENV}"
@@ -159,25 +145,34 @@ jobs:
- name: Run signed integer tests
run: |
AVX512_SUPPORT=ON NO_BIG_PARAMS=${{ env.NO_BIG_PARAMS }} BIG_TESTS_INSTANCE=TRUE make test_signed_integer_ci
AVX512_SUPPORT=ON NO_BIG_PARAMS="${NO_BIG_PARAMS}" BIG_TESTS_INSTANCE=TRUE make test_signed_integer_ci
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Signed Integer tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Signed Integer tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (signed-integer-tests)
name: aws_tfhe_signed_integer_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [setup-instance, signed-integer-tests]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -187,8 +182,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (signed-integer-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (signed-integer-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: AWS Tests on CPU
name: aws_tfhe_tests
env:
CARGO_TERM_COLOR: always
@@ -10,39 +10,36 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_64-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull requests as well as contributions from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
schedule:
# Nightly tests @ 1AM after each work day
- cron: "0 1 * * MON-FRI"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: aws_tfhe_tests/should-run
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
csprng_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.csprng_any_changed }}
zk_pok_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.zk_pok_any_changed }}
@@ -75,16 +72,15 @@ jobs:
any_file_changed: ${{ env.IS_PULL_REQUEST == 'false' || steps.aggregated-changes.outputs.any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
dependencies:
@@ -147,35 +143,19 @@ jobs:
run: |
echo "any_changed=true" >> "$GITHUB_OUTPUT"
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cpu-tests)
if: github.event_name != 'pull_request_target' ||
name: aws_tfhe_tests/setup-instance
if: github.event_name != 'pull_request' ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.any_file_changed == 'true')
needs: [ should-run, check-user-permission ]
needs: should-run
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -184,25 +164,31 @@ jobs:
backend: aws
profile: cpu-big
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cpu-tests:
name: CPU tests
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
name: aws_tfhe_tests/cpu-tests
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
needs: [ should-run, setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{github.event_name}}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}_${{github.event_name}}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -268,23 +254,32 @@ jobs:
make test_trivium
make test_kreyvium
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "CPU tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "CPU tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cpu-tests)
name: aws_tfhe_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cpu-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -294,8 +289,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cpu-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cpu-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: AWS WASM Tests on CPU
name: aws_tfhe_wasm_tests
env:
CARGO_TERM_COLOR: always
@@ -10,57 +10,36 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull requests as well as contributions from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (wasm-tests)
needs: check-user-permission
name: aws_tfhe_wasm_tests/setup-instance
if: ${{ github.event_name == 'workflow_dispatch' || contains(github.event.label.name, 'approved') }}
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -69,23 +48,29 @@ jobs:
backend: aws
profile: cpu-small
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
wasm-tests:
name: WASM tests
name: aws_tfhe_wasm_tests/wasm-tests
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}_${{github.event_name}}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -95,7 +80,7 @@ jobs:
- name: Node cache restoration
id: node-cache
uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
with:
path: |
~/.nvm
@@ -108,7 +93,7 @@ jobs:
make install_node
- name: Node cache save
uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
if: steps.node-cache.outputs.cache-hit != 'true'
with:
path: |
@@ -137,23 +122,32 @@ jobs:
run: |
make test_zk_wasm_x86_compat_ci
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "WASM tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "WASM tests finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (wasm-tests)
name: aws_tfhe_wasm_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, wasm-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -163,8 +157,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (wasm-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (wasm-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,146 +0,0 @@
# Run boolean benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Boolean benchmarks
on:
workflow_dispatch:
schedule:
# Weekly benchmarks will be triggered each Saturday at 1a.m.
- cron: '0 1 * * 6'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
setup-instance:
name: Setup instance (boolean-benchmarks)
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
boolean-benchmarks:
name: Execute boolean benchmarks in EC2
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
continue-on-error: true
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Run benchmarks with AVX512
run: |
make bench_boolean
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512
- name: Measure key sizes
run: |
make measure_boolean_key_sizes
- name: Parse key sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe/boolean_key_sizes.csv ${{ env.RESULTS_FILENAME }} \
--object-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_boolean
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Boolean benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (boolean-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, boolean-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (boolean-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,137 +0,0 @@
# Run core crypto benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Core crypto benchmarks
on:
workflow_dispatch:
schedule:
# Weekly benchmarks will be triggered each Saturday at 5a.m.
- cron: '0 5 * * 6'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
setup-instance:
name: Setup instance (core-crypto-benchmarks)
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
core-crypto-benchmarks:
name: Execute core crypto benchmarks in EC2
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Run benchmarks with AVX512
run: |
make bench_pbs
make bench_pbs128
make bench_ks
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--name-suffix avx512 \
--walk-subdirs
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_core_crypto
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "PBS benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (core-crypto-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, core-crypto-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (core-crypto-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

.github/workflows/benchmark_cpu.yml (new file, 89 lines)

@@ -0,0 +1,89 @@
# Run benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: benchmark_cpu
run-name: ${{ inputs.command }}::${{ inputs.bench_type }} (${{ inputs.op_flavor }}, ${{ inputs.precisions_set }}, ${{ inputs.params_type }})
on:
workflow_dispatch:
inputs:
command:
description: "Benchmark command to run"
type: choice
options:
- integer
- signed_integer
- integer_compression
- integer_zk
- shortint
- shortint_oprf
- hlapi
- hlapi_erc20
- hlapi_dex
- hlapi_noise_squash
- tfhe_zk_pok
- boolean
- pbs
- pbs128
- ks
- ks_pbs
op_flavor:
description: "Operations set to run"
type: choice
default: default
options:
- default
- fast_default
- smart
- unchecked
- misc
precisions_set:
description: "Bit precisions set"
type: choice
default: fast
options:
- fast
- all
- documentation
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
params_type:
description: "Parameters type"
type: choice
default: classical
options:
- classical
- multi_bit
- classical + multi_bit
- classical_documentation
- multi_bit_documentation
- classical_documentation + multi_bit_documentation
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
run-benchmarks:
name: benchmark_cpu/run-benchmarks
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: ${{ inputs.command }}
op_flavor: ${{ inputs.op_flavor }}
bench_type: ${{ inputs.bench_type }}
params_type: ${{ inputs.params_type }}
precisions_set: ${{ inputs.precisions_set }}
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}


@@ -0,0 +1,277 @@
# Run benchmarks on an instance and return parsed results to Slab CI bot.
name: benchmark_cpu_common
on:
workflow_call:
inputs:
command: # Any make recipes stripped of the "bench_" prefix in the Makefile
type: string # Use comma separated values to generate an array
required: true
op_flavor:
type: string # Use comma separated values to generate an array
default: default
bench_type:
type: string
default: latency
params_type:
type: string
default: classical
precisions_set:
type: string
default: fast
additional_recipe: # Make recipes to run aside the benchmarks.
type: string # Use comma separated values to generate an array
additional_file_to_parse: # Other files to parse, located under tfhe-benchmark/ directory
type: string # Use comma separated values to generate an array
additional_results_type:
type: string
default: object-size
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
prepare-matrix:
name: benchmark_cpu_common/prepare-matrix
runs-on: ubuntu-latest
outputs:
command: ${{ steps.set_matrix_args.outputs.command }}
op_flavor: ${{ steps.set_matrix_args.outputs.op_flavor }}
bench_type: ${{ steps.set_matrix_args.outputs.bench_type }}
params_type: ${{ steps.set_matrix_args.outputs.params_type }}
steps:
- name: Parse user inputs
shell: python
env:
INPUTS_COMMAND: ${{ inputs.command }}
INPUTS_OP_FLAVOR: ${{ inputs.op_flavor }}
INPUTS_BENCH_TYPE: ${{ inputs.bench_type }}
INPUTS_PARAMS_TYPE: ${{ inputs.params_type }}
run: |
import os
inputs_command = os.environ["INPUTS_COMMAND"]
inputs_op_flavor = os.environ["INPUTS_OP_FLAVOR"]
inputs_bench_type = os.environ["INPUTS_BENCH_TYPE"]
inputs_params_type = os.environ["INPUTS_PARAMS_TYPE"]
env_file = os.environ["GITHUB_ENV"]
split_command = inputs_command.replace(" ", "").split(",")
split_op_flavor = inputs_op_flavor.replace(" ", "").split(",")
if inputs_bench_type == "both":
bench_type = ["latency", "throughput"]
else:
bench_type = [inputs_bench_type, ]
if "+" in inputs_params_type:
split_params_type = inputs_params_type.replace(" ", "").split("+")
else:
split_params_type = [inputs_params_type, ]
with open(env_file, "a") as f:
for env_name, values_to_join in [
("COMMAND", split_command),
("OP_FLAVOR", split_op_flavor),
("BENCH_TYPE", bench_type),
("PARAMS_TYPE", split_params_type),
]:
f.write(f"""{env_name}=["{'", "'.join(values_to_join)}"]\n""")
- name: Set matrix arguments outputs
id: set_matrix_args
run: | # zizmor: ignore[template-injection] these env variables are safe
{
echo "command=${{ toJSON(env.COMMAND) }}";
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}";
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}";
echo "params_type=${{ toJSON(env.PARAMS_TYPE) }}";
} >> "${GITHUB_OUTPUT}"
setup-instance:
name: benchmark_cpu_common/setup-instance
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
integer-benchmarks:
name: benchmark_cpu_common/integer-benchmarks
needs: [ prepare-matrix, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 1440 # 24 hours
strategy:
max-parallel: 1
matrix:
command: ${{ fromJSON(needs.prepare-matrix.outputs.command) }}
op_flavor: ${{ fromJSON(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
params_type: ${{ fromJSON(needs.prepare-matrix.outputs.params_type) }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Run benchmarks with AVX512
run: |
make BIT_SIZES_SET="${PRECISIONS_SET}" BENCH_OP_FLAVOR="${OP_FLAVOR}" BENCH_TYPE="${BENCH_TYPE}" BENCH_PARAM_TYPE="${BENCH_PARAMS_TYPE}" bench_"${BENCH_COMMAND}"
env:
OP_FLAVOR: ${{ matrix.op_flavor }}
BENCH_TYPE: ${{ matrix.bench_type }}
BENCH_PARAMS_TYPE: ${{ matrix.params_type }}
BENCH_COMMAND: ${{ matrix.command }}
PRECISIONS_SET: ${{ inputs.precisions_set }}
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type "${BENCH_TYPE}"
env:
REF_NAME: ${{ github.ref_name }}
BENCH_TYPE: ${{ matrix.bench_type }}
- name: Run additional benchmarks
if: ${{ inputs.additional_recipe }}
run: |
targets_list="${targets}"
IFS=','
for target in $targets_list; do
make "$target"
done
env:
targets: ${{ inputs.additional_recipe }}
- name: Parse additional benchmarks results files
if: ${{ inputs.additional_file_to_parse }}
run: |
filenames_list="${filenames}"
IFS=','
for filename in $filenames_list; do
python3 ./ci/benchmark_parser.py "tfhe-benchmark/${filename}" "${RESULTS_FILENAME}" \
--"${results_type}" \
--append-results
done
env:
filenames: ${{ inputs.additional_file_to_parse }}
results_type: ${{ inputs.additional_results_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_${{ matrix.command }}_${{ matrix.op_flavor }}_${{ matrix.bench_type }}_${{ matrix.params_type }}
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "CPU benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: benchmark_cpu_common/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, integer-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cpu-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,223 @@
# Run CPU latency benchmarks on AWS VMs and return parsed results to Slab CI bot.
name: benchmark_cpu_weekly
on:
schedule:
# Weekly schedules are separated into two groups to avoid spawning too many machines at once and risking resource shortages.
# Group 1
# -------
# Weekly benchmarks will be triggered each Saturday at 1a.m.
- cron: '0 1 * * 6'
# Group 2
# -------
# Weekly benchmarks will be triggered each Sunday at 3a.m.
- cron: '0 3 * * 0'
# Quarterly benchmarks will be triggered right before the end of the quarter, on the 25th of the month at 4 a.m.
# These benchmarks take far longer to execute, hence they run only four times a year.
- cron: '0 4 25 MAR,JUN,SEP,DEC *'
permissions: {}
# zizmor: ignore[concurrency-limits] only GitHub can trigger this workflow
jobs:
prepare-inputs:
name: benchmark_cpu_weekly/prepare-inputs
runs-on: ubuntu-latest
outputs:
is_weekly_bench_group_1: ${{ steps.check_bench_group_1.outputs.is_weekly_bench_group_1 }}
is_weekly_bench_group_2: ${{ steps.check_bench_group_2.outputs.is_weekly_bench_group_2 }}
is_quarterly_bench: ${{ steps.check_quarterly_bench.outputs.is_quarterly_bench }}
op_flavor: ${{ steps.set_op_flavor.outputs.op_flavor }}
precisions_set: ${{ steps.set_precisions_set.outputs.precisions_set }}
steps:
- name: Check is weekly bench group 1
id: check_bench_group_1
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_weekly_bench_group_1=${{ github.event.schedule == '0 1 * * 6' }}" >> "${GITHUB_OUTPUT}"
- name: Check is weekly bench group 2
id: check_bench_group_2
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_weekly_bench_group_2=${{ github.event.schedule == '0 3 * * 0' }}" >> "${GITHUB_OUTPUT}"
- name: Check is quarterly bench
id: check_quarterly_bench
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_quarterly_bench=${{ github.event.schedule == '0 4 25 MAR,JUN,SEP,DEC *' }}" >> "${GITHUB_OUTPUT}"
- name: Weekly benchmarks
if: steps.check_bench_group_1.outputs.is_weekly_bench_group_1 == 'true' ||
steps.check_bench_group_2.outputs.is_weekly_bench_group_2 == 'true'
run: |
echo "OP_FLAVOR=default" >> "${GITHUB_ENV}"
echo "PRECISIONS_SET=fast" >> "${GITHUB_ENV}"
- name: Quarterly benchmarks
if: steps.check_quarterly_bench.outputs.is_quarterly_bench == 'true'
run: |
echo "OP_FLAVOR=\"default,unchecked\"" >> "${GITHUB_ENV}"
echo "PRECISIONS_SET=all" >> "${GITHUB_ENV}"
- name: Set operation flavor output
id: set_op_flavor
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "op_flavor=${{ env.OP_FLAVOR }}" >> "${GITHUB_OUTPUT}"
- name: Set bit precisions output
id: set_precisions_set
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "precisions_set=${{ env.PRECISIONS_SET }}" >> "${GITHUB_OUTPUT}"
run-benchmarks-integer:
name: benchmark_cpu_weekly/run-benchmarks-integer
if: github.repository == 'zama-ai/tfhe-rs'
&& (needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true' || needs.prepare-inputs.outputs.is_quarterly_bench == 'true')
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: integer,signed_integer,integer_compression
op_flavor: ${{ needs.prepare-inputs.outputs.op_flavor }}
precisions_set: ${{ needs.prepare-inputs.outputs.precisions_set }}
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-integer-zk-pke:
name: benchmark_cpu_weekly/run-benchmarks-integer-zk-pke
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: integer_zk
additional_file_to_parse: pke_zk_crs_sizes.csv
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-hlapi-erc20:
name: benchmark_cpu_weekly/run-benchmarks-hlapi-erc20
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: hlapi_erc20
additional_file_to_parse: erc20_pbs_count.csv
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-hlapi-dex:
name: benchmark_cpu_weekly/run-benchmarks-hlapi-dex
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: hlapi_dex
additional_file_to_parse: dex_swap_request_update_dex_balance_pbs_count.csv,dex_swap_request_finalize_pbs_count.csv,dex_swap_claim_prepare_pbs_count.csv,dex_swap_claim_update_dex_balance_pbs_count.csv
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-core-crypto:
name: benchmark_cpu_weekly/run-benchmarks-core-crypto
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: ks,pbs,pbs128,ks_pbs
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-shortint:
name: benchmark_cpu_weekly/run-benchmarks-shortint
if: github.repository == 'zama-ai/tfhe-rs'
&& (needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true' || needs.prepare-inputs.outputs.is_quarterly_bench == 'true')
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
op_flavor: ${{ needs.prepare-inputs.outputs.op_flavor }}
command: shortint
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-boolean:
name: benchmark_cpu_weekly/run-benchmarks-boolean
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: boolean
additional_recipe: measure_boolean_key_sizes
additional_file_to_parse: boolean_key_sizes.csv
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-tfhe-zk-pok:
name: benchmark_cpu_weekly/run-benchmarks-tfhe-zk-pok
if: github.repository == 'zama-ai/tfhe-rs'
&& needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_cpu_common.yml
with:
command: tfhe_zk_pok
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}


@@ -1,11 +1,11 @@
# Run all ERC20 benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: ERC20 benchmarks
# Run sizes benchmarks on an instance and return parsed results to Slab CI bot.
name: Ciphertext and Keys sizes benchmarks
on:
workflow_dispatch:
schedule:
# Weekly benchmarks will be triggered each Saturday at 5a.m.
- cron: '0 5 * * 6'
# Monthly benchmarks will be triggered on the 24th of each month at 1 a.m.
- cron: '0 1 24 * *'
env:
CARGO_TERM_COLOR: always
@@ -18,38 +18,38 @@ env:
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members and GitHub can trigger this workflow
jobs:
setup-instance:
name: Setup instance (erc20-benchmarks)
runs-on: ubuntu-latest
name: Setup instance (sizes-benchmarks)
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
profile: cpu-big
erc20-benchmarks:
name: Execute ERC20 benchmarks
sizes-benchmarks:
name: Execute sizes client benchmarks
needs: setup-instance
if: needs.setup-instance.result != 'skipped'
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
continue-on-error: true
timeout-minutes: 720 # 12 hours
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
@@ -57,76 +57,87 @@ jobs:
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Measure public key and ciphertext sizes in HL Api
run: |
make measure_hlapi_compact_pk_ct_sizes
- name: Parse key and ciphertext sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe-benchmark/hlapi_ct_key_sizes.csv "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "m6i.32xlarge" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--object-sizes
env:
REF_NAME: ${{ github.ref_name }}
- name: Measure key sizes in shortint
run: |
make measure_shortint_key_sizes
- name: Parse key sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe-benchmark/shortint_key_sizes.csv "${RESULTS_FILENAME}" \
--object-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_ct_key_sizes
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Run benchmarks
run: |
make bench_hlapi_erc20
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512
- name: Parse PBS counts
run: |
python3 ./ci/benchmark_parser.py tfhe/erc20_pbs_count.csv ${{ env.RESULTS_FILENAME }} \
--object-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_erc20
path: ${{ env.RESULTS_FILENAME }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "ERC20 benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Sizes benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (erc20-benchmarks)
name: Teardown instance (sizes-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, erc20-benchmarks ]
needs: [ setup-instance, sizes-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -136,8 +147,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (erc20-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (sizes-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,219 @@
# Run all benchmarks displayed in the public documentation.
name: benchmark_documentation
on:
workflow_dispatch:
inputs:
run-cpu-benchmarks:
description: "Run CPU benchmarks"
type: boolean
default: true
# GPU benchmarks are split because of resource scarcity.
run-gpu-integer-benchmarks:
description: "Run GPU integer benchmarks"
type: boolean
default: true
run-gpu-core-crypto-benchmarks:
description: "Run GPU core-crypto benchmarks"
type: boolean
default: true
run-hpu-benchmarks:
description: "Run HPU benchmarks"
type: boolean
default: true
generate-svgs:
description: "Generate SVG tables"
type: boolean
default: true
open-pr:
description: "Open a PR with the benchmark results"
type: boolean
default: false
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
run-benchmarks-cpu-integer:
name: benchmark_documentation/run-benchmarks-cpu-integer
uses: ./.github/workflows/benchmark_cpu_common.yml
if: inputs.run-cpu-benchmarks
with:
command: integer
op_flavor: fast_default
bench_type: both
precisions_set: documentation
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-gpu-integer:
name: benchmark_documentation/run-benchmarks-gpu-integer
uses: ./.github/workflows/benchmark_gpu_common.yml
if: inputs.run-gpu-integer-benchmarks
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: integer_multi_bit
op_flavor: fast_default
bench_type: both
precisions_set: documentation
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-hpu-integer:
name: benchmark_documentation/run-benchmarks-hpu-integer
uses: ./.github/workflows/benchmark_hpu_common.yml
if: inputs.run-hpu-benchmarks
with:
command: integer
op_flavor: default
bench_type: both
precisions_set: documentation
v80_pcie_dev: 24
v80_serial_number: XFL12NWY3ZKG
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
run-benchmarks-cpu-core-crypto:
name: benchmark_documentation/run-benchmarks-cpu-core-crypto
uses: ./.github/workflows/benchmark_cpu_common.yml
if: inputs.run-cpu-benchmarks
with:
command: pbs, ks_pbs
bench_type: latency
params_type: classical_documentation + multi_bit_documentation
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-gpu-core-crypto:
name: benchmark_documentation/run-benchmarks-gpu-core-crypto
uses: ./.github/workflows/benchmark_gpu_common.yml
if: inputs.run-gpu-core-crypto-benchmarks
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: pbs, ks_pbs
bench_type: latency
params_type: classical_documentation + multi_bit_documentation
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
generate-svgs-with-benchmarks-run:
name: benchmark-documentation/generate-svgs-with-benchmarks-run
if: ${{ always() &&
(inputs.run-cpu-benchmarks || inputs.run-gpu-integer-benchmarks || inputs.run-gpu-core-crypto-benchmarks || inputs.run-hpu-benchmarks) &&
inputs.generate-svgs }}
needs: [
run-benchmarks-cpu-integer, run-benchmarks-gpu-integer, run-benchmarks-hpu-integer,
run-benchmarks-cpu-core-crypto, run-benchmarks-gpu-core-crypto
]
uses: ./.github/workflows/generate_svgs.yml
with:
time_span_days: 5
generate-cpu-svgs: ${{ inputs.run-cpu-benchmarks }}
generate-gpu-svgs: ${{ inputs.run-gpu-integer-benchmarks || inputs.run-gpu-core-crypto-benchmarks }}
generate-hpu-svgs: ${{ inputs.run-hpu-benchmarks }}
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
generate-svgs-without-benchmarks-run:
name: benchmark-documentation/generate-svgs-without-benchmarks-run
if: ${{ !(inputs.run-cpu-benchmarks || inputs.run-gpu-integer-benchmarks || inputs.run-gpu-core-crypto-benchmarks || inputs.run-hpu-benchmarks) &&
inputs.generate-svgs }}
uses: ./.github/workflows/generate_svgs.yml
with:
time_span_days: 60
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
open-pr:
name: benchmark-documentation/open-pr
needs: [ generate-svgs-with-benchmarks-run, generate-svgs-without-benchmarks-run ]
if: ${{ always() && inputs.open-pr &&
(needs.generate-svgs-with-benchmarks-run.result == 'success' || needs.generate-svgs-without-benchmarks-run.result == 'success') }}
runs-on: ubuntu-latest
permissions:
contents: write # Needed to create a commit
pull-requests: write # Needed to open a pull-request
env:
PATH_TO_DOC_ASSETS: tfhe/docs/.gitbook/assets
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
- name: Download SVG tables
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
path: svg_tables
merge-multiple: 'true'
# Perform best effort to copy SVG tables. If the copy fails or files don't exist, the PR will still be created.
- name: Copy SVG tables to documentation location
run: |
cp -f svg_tables/*integer-benchmark*.svg "${PATH_TO_DOC_ASSETS}" 2>/dev/null || true
cp -f svg_tables/*pbs-benchmark-tuniform*.svg "${PATH_TO_DOC_ASSETS}" 2>/dev/null || true
cp -f svg_tables/cpu-gpu-hpu-integer-benchmark-fheuint64-tuniform-2m128-ciphertext.svg "${PATH_TO_DOC_ASSETS}" 2>/dev/null || true
- name: Get current date
id: get-date
run: |
echo "date=$(date '+%g_%m_%d_%Hh%Mm%Ss')" >> "${GITHUB_OUTPUT}"
- name: Create pull-request
uses: peter-evans/create-pull-request@84ae59a2cdc2258d6fa0732dd66352dddae2a412 # v7.0.9
with:
sign-commits: true # Commit will be signed by github-actions bot
add-paths: ${{ env.PATH_TO_DOC_ASSETS }}/*.svg
branch: gh-bot/docs/update-svg-tables-${{ steps.get-date.outputs.date }}
commit-message: |
chore(docs): update benchmark results for all backends
Automated documentation update from tfhe-rs CI pipeline.
title: |
[CI] chore(docs): update benchmark results for all backends
body: |
Documentation update triggered by GitHub workflow.
labels: documentation

.github/workflows/benchmark_gpu.yml

@@ -0,0 +1,127 @@
# Run CUDA benchmarks on a Hyperstack VM and return parsed results to Slab CI bot.
name: benchmark_gpu
run-name: ${{ inputs.command }}::${{ inputs.bench_type }} (${{ inputs.profile }}, ${{ inputs.op_flavor }}, ${{ inputs.precisions_set }}, ${{ inputs.params_type }})
on:
workflow_dispatch:
inputs:
profile:
description: "Instance type"
required: true
type: choice
options:
- "l40 (n3-L40x1)"
- "4-l40 (n3-L40x4)"
- "multi-a100-nvlink (n3-A100x8-NVLink)"
- "single-h100 (n3-H100x1)"
- "2-h100 (n3-H100x2)"
- "4-h100 (n3-H100x4)"
- "multi-h100 (n3-H100x8)"
- "multi-h100-nvlink (n3-H100x8-NVLink)"
- "multi-h100-sxm5 (n3-H100-SXM5x8)"
command:
description: "Benchmark command to run"
type: choice
default: integer
options:
- integer
- integer_compression
- pbs
- pbs128
- ks
- ks_pbs
- integer_zk
- integer_aes
- integer_aes256
- hlapi_erc20
- hlapi_dex
- hlapi_noise_squash
op_flavor:
description: "Operations set to run"
type: choice
default: default
options:
- default
- fast_default
- unchecked
precisions_set:
description: "Bit precisions set"
type: choice
default: fast
options:
- fast
- all
- documentation
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
params_type:
description: "Parameters type"
type: choice
default: multi_bit
options:
- classical
- multi_bit
- classical + multi_bit
- classical_documentation
- multi_bit_documentation
- classical_documentation + multi_bit_documentation
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
parse-inputs:
name: benchmark_gpu/parse-inputs
runs-on: ubuntu-latest
outputs:
profile: ${{ steps.parse_profile.outputs.profile }}
hardware_name: ${{ steps.parse_hardware_name.outputs.name }}
env:
INPUTS_PROFILE: ${{ inputs.profile }}
steps:
- name: Parse profile
id: parse_profile
run: |
# Use sed to extract a value from the string; this cannot be done with the ${variable//search/replace} pattern.
# shellcheck disable=SC2001
PROFILE=$(echo "${INPUTS_PROFILE}" | sed 's|\(.*\)[[:space:]](.*)|\1|')
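# Illustrative example (using one of the options listed above): for the input
# "multi-h100-sxm5 (n3-H100-SXM5x8)", PROFILE should become "multi-h100-sxm5".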
echo "profile=${PROFILE}" >> "${GITHUB_OUTPUT}"
- name: Parse hardware name
id: parse_hardware_name
run: |
# Use sed to extract a value from the string; this cannot be done with the ${variable//search/replace} pattern.
# shellcheck disable=SC2001
NAME=$(echo "${INPUTS_PROFILE}" | sed 's|.*[[:space:]](\(.*\))|\1|')
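# Illustrative example (same assumed input): for "multi-h100-sxm5 (n3-H100-SXM5x8)",
# NAME should become "n3-H100-SXM5x8".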
echo "name=${NAME}" >> "${GITHUB_OUTPUT}"
run-benchmarks:
name: benchmark_gpu/run-benchmarks
needs: parse-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: ${{ needs.parse-inputs.outputs.profile }}
hardware_name: ${{ needs.parse-inputs.outputs.hardware_name }}
command: ${{ inputs.command }}
op_flavor: ${{ inputs.op_flavor }}
bench_type: ${{ inputs.bench_type }}
params_type: ${{ inputs.params_type }}
precisions_set: ${{ inputs.precisions_set }}
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}


@@ -1,5 +1,5 @@
# Run benchmarks on an RTX 4090 machine and return parsed results to Slab CI bot.
name: TFHE Cuda Backend - 4090 benchmarks
name: benchmark_gpu_4090
env:
CARGO_TERM_COLOR: always
@@ -11,86 +11,59 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
FAST_BENCH: TRUE
REF: ${{ github.event.pull_request.head.sha || github.sha }}
BIT_SIZES_SET: FAST
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger the pull_request event on CI files to be able to test changes before merging to the main branch.
# The workflow would fail if changes came from a forked repository, since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull requests as well as contributions from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
schedule:
# Weekly benchmarks will be triggered each Friday at 9p.m.
- cron: "0 21 * * 5"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] each job manage its concurrency
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of the Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job; this also skips the following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
cuda-integer-benchmarks:
name: Cuda integer benchmarks (RTX 4090)
needs: check-user-permission
name: benchmark_gpu_4090/cuda-integer-benchmarks
if: ${{ github.event_name == 'workflow_dispatch' ||
github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs' ||
contains(github.event.label.name, '4090_bench') }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_cuda_integer_bench
group: ${{ github.workflow_ref }}_cuda_integer_bench
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ["self-hosted", "4090-desktop"]
timeout-minutes: 1440 # 24 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
echo "FAST_BENCH=TRUE" >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
@@ -103,18 +76,20 @@ jobs:
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "rtx4090" \
--backend gpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs
env:
REF_NAME: ${{ github.ref_name }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_integer_multi_bit_gpu_default
path: ${{ env.RESULTS_FILENAME }}
@@ -122,30 +97,33 @@ jobs:
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Integer RTX 4090 full benchmarks finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Integer RTX 4090 full benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
cuda-core-crypto-benchmarks:
name: Cuda core crypto benchmarks (RTX 4090)
name: benchmark_gpu_4090/cuda-core-crypto-benchmarks
if: ${{ github.event_name == 'workflow_dispatch' || github.event_name == 'schedule' || contains(github.event.label.name, '4090_bench') }}
needs: cuda-integer-benchmarks
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_cuda_core_crypto_bench
group: ${{ github.workflow_ref }}_cuda_core_crypto_bench
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ["self-hosted", "4090-desktop"]
timeout-minutes: 1440 # 24 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
@@ -153,19 +131,22 @@ jobs:
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
@@ -179,19 +160,20 @@ jobs:
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "rtx4090" \
--backend gpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs
env:
REF_NAME: ${{ github.ref_name }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_core_crypto
path: ${{ env.RESULTS_FILENAME }}
@@ -199,28 +181,23 @@ jobs:
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Core crypto RTX 4090 full benchmarks finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Core crypto RTX 4090 full benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
remove_github_label:
name: Remove 4090 bench label
if: ${{ always() && github.event_name == 'pull_request_target' }}
name: benchmark_gpu_4090/remove_github_label
if: ${{ always() && github.event_name == 'pull_request' }}
needs: [cuda-integer-benchmarks, cuda-core-crypto-benchmarks]
runs-on: ubuntu-latest
steps:


@@ -0,0 +1,340 @@
# Run benchmarks on CUDA instance and return parsed results to Slab CI bot.
name: benchmark_gpu_common
on:
workflow_call:
inputs:
backend:
type: string
default: hyperstack
profile:
type: string
required: true
hardware_name:
type: string
required: true
command: # Use comma-separated values to generate an array
type: string
required: true
op_flavor: # Use comma-separated values to generate an array
type: string
default: default
bench_type:
type: string
default: latency
params_type:
type: string
default: multi_bit
precisions_set:
type: string
default: fast
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
prepare-matrix:
name: benchmark_gpu_common/prepare-matrix
runs-on: ubuntu-latest
outputs:
command: ${{ steps.set_matrix_args.outputs.command }}
op_flavor: ${{ steps.set_matrix_args.outputs.op_flavor }}
bench_type: ${{ steps.set_matrix_args.outputs.bench_type }}
params_type: ${{ steps.set_matrix_args.outputs.params_type }}
steps:
- name: Parse user inputs
shell: python
env:
INPUTS_COMMAND: ${{ inputs.command }}
INPUTS_OP_FLAVOR: ${{ inputs.op_flavor }}
INPUTS_BENCH_TYPE: ${{ inputs.bench_type }}
INPUTS_PARAMS_TYPE: ${{ inputs.params_type }}
run: |
import os
inputs_command = os.environ["INPUTS_COMMAND"]
inputs_op_flavor = os.environ["INPUTS_OP_FLAVOR"]
inputs_bench_type = os.environ["INPUTS_BENCH_TYPE"]
inputs_params_type = os.environ["INPUTS_PARAMS_TYPE"]
env_file = os.environ["GITHUB_ENV"]
split_command = inputs_command.replace(" ", "").split(",")
split_op_flavor = inputs_op_flavor.replace(" ", "").split(",")
if inputs_bench_type == "both":
bench_type = ["latency", "throughput"]
else:
bench_type = [inputs_bench_type, ]
if "+" in inputs_params_type:
split_params_type = inputs_params_type.replace(" ", "").split("+")
else:
split_params_type = [inputs_params_type, ]
with open(env_file, "a") as f:
for env_name, values_to_join in [
("COMMAND", split_command),
("OP_FLAVOR", split_op_flavor),
("BENCH_TYPE", bench_type),
("PARAMS_TYPE", split_params_type),
]:
f.write(f"""{env_name}=["{'", "'.join(values_to_join)}"]\n""")
- name: Set matrix arguments outputs
id: set_matrix_args
run: | # zizmor: ignore[template-injection] these env variable are safe
{
echo "command=${{ toJSON(env.COMMAND) }}";
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}";
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}";
echo "params_type=${{ toJSON(env.PARAMS_TYPE) }}";
} >> "${GITHUB_OUTPUT}"
setup-instance:
name: benchmark_gpu_common/setup-instance
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
# Use the permanent remote instance label first, as the on-demand remote instance label output is set before the end of the start-remote-instance step.
# If the latter fails due to a failed GitHub Actions runner setup, we have to fall back on the permanent instance.
# Since the on-demand remote label is set before the failure, the logical OR must be done in this order,
# otherwise we would try to run the next job on a non-existing on-demand instance.
runner-name: ${{ steps.use-permanent-instance.outputs.runner_group || steps.start-remote-instance.outputs.label }}
remote-instance-outcome: ${{ steps.start-remote-instance.outcome }}
steps:
- name: Start remote instance
id: start-remote-instance
continue-on-error: true
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: ${{ inputs.backend }}
profile: ${{ inputs.profile }}
- name: Acknowledge remote instance failure
if: steps.start-remote-instance.outcome == 'failure' &&
inputs.profile != 'single-h100'
run: |
echo "Remote instance instance has failed to start (profile provided: '${INPUTS_PROFILE}')"
echo "Permanent instance instance cannot be used as a substitute (profile needed: 'single-h100')"
exit 1
env:
INPUTS_PROFILE: ${{ inputs.profile }}
# This will allow to fallback on permanent instances running on Hyperstack.
- name: Use permanent remote instance
id: use-permanent-instance
if: env.SECRETS_AVAILABLE == 'true' &&
steps.start-remote-instance.outcome == 'failure' &&
inputs.profile == 'single-h100'
run: |
echo "runner_group=h100x1" >> "$GITHUB_OUTPUT"
# Install dependencies only once, since cuda-benchmarks uses a matrix strategy and thus runs multiple times.
install-dependencies:
name: benchmark_gpu_common/install-dependencies
needs: [ setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
matrix:
# explicit include-based build matrix, of known valid options
include:
- cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
if: needs.setup-instance.outputs.remote-instance-outcome == 'success'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
cuda-benchmarks:
name: benchmark_gpu_common/cuda-benchmarks
needs: [ prepare-matrix, setup-instance, install-dependencies ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 1440 # 24 hours
strategy:
fail-fast: false
max-parallel: 1
matrix:
command: ${{ fromJSON(needs.prepare-matrix.outputs.command) }}
op_flavor: ${{ fromJSON(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
params_type: ${{ fromJSON(needs.prepare-matrix.outputs.params_type) }}
# explicit include-based build matrix, of known valid options
include:
- cuda: "12.8"
gcc: 11
env:
CUDA_PATH: /usr/local/cuda-${{ matrix.cuda }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
# Re-export environment variables, as the dependencies setup performed this task in the previous job.
# Local env variables are cleaned at the end of each job.
- name: Export CUDA variables
shell: bash
run: |
echo "CUDA_PATH=$CUDA_PATH" >> "${GITHUB_ENV}"
echo "PATH=$PATH:$CUDA_PATH/bin" >> "${GITHUB_PATH}"
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
echo "CUDA_MODULE_LOADER=EAGER" >> "${GITHUB_ENV}"
- name: Export gcc and g++ variables
shell: bash
run: |
{
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: ${{ matrix.gcc }}
- name: Install rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Run benchmarks
run: |
make BIT_SIZES_SET="${PRECISIONS_SET}" BENCH_OP_FLAVOR="${OP_FLAVOR}" BENCH_TYPE="${BENCH_TYPE}" BENCH_PARAM_TYPE="${BENCH_PARAMS_TYPE}" bench_"${BENCH_COMMAND}"_gpu
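# Illustrative expansion (hypothetical matrix entry only): with command=integer,
# op_flavor=default, bench_type=latency, params_type=multi_bit and precisions_set=fast,
# this should run:
#   make BIT_SIZES_SET="fast" BENCH_OP_FLAVOR="default" BENCH_TYPE="latency" BENCH_PARAM_TYPE="multi_bit" bench_integer_gpu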
env:
OP_FLAVOR: ${{ matrix.op_flavor }}
BENCH_TYPE: ${{ matrix.bench_type }}
BENCH_PARAMS_TYPE: ${{ matrix.params_type }}
BENCH_COMMAND: ${{ matrix.command }}
PRECISIONS_SET: ${{ inputs.precisions_set }}
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "${INPUTS_HARDWARE_NAME}" \
--backend gpu \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type "${BENCH_TYPE}"
env:
INPUTS_HARDWARE_NAME: ${{ inputs.hardware_name }}
REF_NAME: ${{ github.ref_name }}
BENCH_TYPE: ${{ matrix.bench_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_${{ matrix.command }}_${{ matrix.op_flavor }}_${{ inputs.profile }}_${{ matrix.bench_type }}_${{ matrix.params_type }}
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
slack-notify:
name: benchmark_gpu_common/slack-notify
needs: [ setup-instance, cuda-benchmarks ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-benchmarks.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-benchmarks.result }}
SLACK_MESSAGE: "Cuda benchmarks (${{ inputs.profile }}) finished with status: ${{ needs.cuda-benchmarks.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: benchmark_gpu_common/teardown-instance
if: ${{ always() && needs.setup-instance.outputs.remote-instance-outcome == 'success' }}
needs: [ setup-instance, cuda-benchmarks, slack-notify ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-${{ inputs.profile }}-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,333 @@
# Run all fhevm coprocessor benchmarks on a GPU instance on Hyperstack and return parsed results to Slab CI bot.
name: benchmark_gpu_coprocessor
on:
workflow_dispatch:
inputs:
profile:
description: "Instance type"
required: true
type: choice
options:
- "l40 (n3-L40x1)"
- "4-l40 (n3-L40x4)"
- "single-h100 (n3-H100x1)"
- "2-h100 (n3-H100x2)"
- "4-h100 (n3-H100x4)"
- "multi-h100 (n3-H100x8)"
- "multi-h100-nvlink (n3-H100x8-NVLink)"
- "multi-h100-sxm5 (n3-H100-SXM5x8)"
- "multi-h100-sxm5_fallback (n3-H100-SXM5x8)"
schedule:
# Weekly benchmarks will be triggered each Saturday at 1 a.m.
- cron: "0 1 * * 6"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
PROFILE_SCHEDULED_RUN: "multi-h100-sxm5 (n3-H100-SXM5x8)"
PROFILE_MANUAL_RUN: ${{ inputs.profile }}
IS_MANUAL_RUN: ${{ github.event_name == 'workflow_dispatch' }}
BENCHMARK_TYPE: "ALL"
OPTIMIZATION_TARGET: "throughput"
BATCH_SIZE: "5000"
SCHEDULING_POLICY: "MAX_PARALLELISM"
BENCHMARKS: "erc20"
BRANCH_NAME: ${{ github.ref_name }}
COMMIT_SHA: ${{ github.sha }}
SLAB_SECRET: ${{ secrets.JOB_SECRET }}
jobs:
parse-inputs:
name: benchmark_gpu_coprocessor/parse-inputs
runs-on: ubuntu-latest
permissions:
contents: 'read'
outputs:
profile: ${{ steps.parse_profile.outputs.profile }}
hardware_name: ${{ steps.parse_hardware_name.outputs.name }}
steps:
- name: Parse profile
id: parse_profile
run: |
if [[ ${IS_MANUAL_RUN} == true ]]; then
PROFILE_RAW="${PROFILE_MANUAL_RUN}"
else
PROFILE_RAW="${PROFILE_SCHEDULED_RUN}"
fi
# shellcheck disable=SC2001
PROFILE_VAL=$(echo "${PROFILE_RAW}" | sed 's|\(.*\)[[:space:]](.*)|\1|')
echo "profile=$PROFILE_VAL" >> "${GITHUB_OUTPUT}"
- name: Parse hardware name
id: parse_hardware_name
run: |
if [[ ${IS_MANUAL_RUN} == true ]]; then
PROFILE_RAW="${PROFILE_MANUAL_RUN}"
else
PROFILE_RAW="${PROFILE}"
fi
# shellcheck disable=SC2001
PROFILE_VAL=$(echo "${PROFILE_RAW}" | sed 's|.*[[:space:]](\(.*\))|\1|')
echo "name=$PROFILE_VAL" >> "${GITHUB_OUTPUT}"
setup-instance:
name: benchmark_gpu_coprocessor/setup-instance
needs: parse-inputs
runs-on: ubuntu-latest
permissions:
contents: 'read'
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label }}
steps:
- name: Start remote instance
id: start-remote-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: ${{ needs.parse-inputs.outputs.profile }}
benchmark-gpu:
name: benchmark_gpu_coprocessor/benchmark-gpu (bpr)
needs: [ parse-inputs, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
continue-on-error: true
timeout-minutes: 720 # 12 hours
permissions:
contents: 'read' # Needed to read repositories contents
packages: 'read' # Needed to get fhevm packages
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.8"
gcc: 11
env:
HW_NAME: "${{ needs.parse-inputs.outputs.hardware_name }}"
steps:
- name: Install git LFS
run: |
sudo apt-get remove -y unattended-upgrades
sudo apt-get update
sudo apt-get install -y git-lfs protobuf-compiler
git lfs install
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
path: tfhe-rs
persist-credentials: false
- name: Check fhEVM and TFHE-rs repos
run: |
pwd
ls
- name: Checkout fhevm
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
repository: zama-ai/fhevm
persist-credentials: 'false'
fetch-depth: 0
lfs: true
ref: antoniu/use-tfhe-main-benches
path: fhevm
- name: Get benchmark details
run: |
COMMIT_DATE_ENV=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${COMMIT_SHA}")
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$COMMIT_DATE_ENV";
echo "COMMIT_HASH=$(git rev-parse HEAD)";
} >> "${GITHUB_ENV}"
working-directory: tfhe-rs/
- name: Setup Hyperstack dependencies
uses: ./tfhe-rs/.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Check fhEVM and TFHE-rs repos
run: |
pwd
ls
mv tfhe-rs fhevm/coprocessor/
- name: Checkout LFS objects
run: git lfs checkout
working-directory: fhevm/
- name: Install rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Install cargo dependencies
run: |
sudo apt-get install -y protobuf-compiler pkg-config libssl-dev \
libclang-dev docker-compose-v2 docker.io acl
sudo usermod -aG docker "$USER"
newgrp docker
sudo setfacl --modify user:"$USER":rw /var/run/docker.sock
cargo install sqlx-cli
- name: Install foundry
uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e
- name: Cache cargo
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
restore-keys: ${{ runner.os }}-cargo-
- name: Login to GitHub Container Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Chainguard Registry
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: cgr.dev
username: ${{ secrets.CGR_USERNAME }}
password: ${{ secrets.CGR_PASSWORD }}
- name: Init database
run: make init_db
working-directory: fhevm/coprocessor/fhevm-engine/tfhe-worker
- name: Use Node.js
uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: 20.x
- name: Build contracts
env:
HARDHAT_NETWORK: hardhat
run: |
ls
pwd
cp ./host-contracts/.env.example ./host-contracts/.env
cd ./host-contracts
npm ci --include=optional
npm install && npm run deploy:emptyProxies && npx hardhat compile
working-directory: fhevm/
- name: Profile erc20 no-cmux benchmark on GPU
run: |
BENCHMARK_BATCH_SIZE="${BATCH_SIZE}" \
FHEVM_DF_SCHEDULE="${SCHEDULING_POLICY}" \
BENCHMARK_TYPE="THROUGHPUT_200" \
OPTIMIZATION_TARGET="${OPTIMIZATION_TARGET}" \
make -e "profile_erc20_gpu"
working-directory: fhevm/coprocessor/fhevm-engine/tfhe-worker
- name: Get nsys profile name
id: nsys_profile_name
run: echo "profile=coprocessor_profile_$(date +"%Y-%m-%d-%Hh").nsys-rep" >> "$GITHUB_OUTPUT"
- name: Timestamp nsys profile # zizmor: ignore[template-injection]
env:
REPORT_NAME: ${{ steps.nsys_profile_name.outputs.profile }}
run: |
mv report1.nsys-rep ${{ env.REPORT_NAME }}
working-directory: fhevm/coprocessor/fhevm-engine/tfhe-worker
- name: Upload profile artifact
env:
REPORT_NAME: ${{ steps.nsys_profile_name.outputs.profile }}
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ env.REPORT_NAME }}
path: fhevm/coprocessor/fhevm-engine/tfhe-worker/${{ env.REPORT_NAME }}
- name: Run latency benchmark on GPU
run: |
BENCHMARK_BATCH_SIZE="${BATCH_SIZE}" FHEVM_DF_SCHEDULE="${SCHEDULING_POLICY}" BENCHMARK_TYPE="LATENCY" OPTIMIZATION_TARGET="${OPTIMIZATION_TARGET}" make -e "benchmark_${BENCHMARKS}_gpu"
working-directory: fhevm/coprocessor/fhevm-engine/tfhe-worker
- name: Run throughput benchmarks on GPU
run: |
BENCHMARK_BATCH_SIZE="${BATCH_SIZE}" FHEVM_DF_SCHEDULE="${SCHEDULING_POLICY}" BENCHMARK_TYPE="THROUGHPUT_200" OPTIMIZATION_TARGET="${OPTIMIZATION_TARGET}" make -e "benchmark_${BENCHMARKS}_gpu"
working-directory: fhevm/coprocessor/fhevm-engine/tfhe-worker
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py coprocessor/fhevm-engine/target/criterion "${RESULTS_FILENAME}" \
--database coprocessor \
--hardware "${HW_NAME}" \
--backend gpu \
--project-version "${COMMIT_HASH}" \
--branch "${BRANCH_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs \
--crate "coprocessor/fhevm-engine/tfhe-worker" \
--name-suffix "operation_batch_size_${BATCH_SIZE}-schedule_${SCHEDULING_POLICY}-optimization_target_${OPTIMIZATION_TARGET}"
working-directory: fhevm/
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_${{ env.BENCHMARKS }}_${{ needs.parse-inputs.outputs.profile }}
path: fhevm/${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
env:
SLAB_URL: ${{ secrets.SLAB_URL }}
run: |
python3 slab/scripts/data_sender.py fhevm/"${RESULTS_FILENAME}" "${SLAB_SECRET}" \
--slab-url "${SLAB_URL}"
teardown-instance:
name: benchmark_gpu_coprocessor/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, benchmark-gpu ]
runs-on: ubuntu-latest
permissions:
contents: 'read'
steps:
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}


@@ -1,156 +0,0 @@
# Run core crypto benchmarks on an instance with CUDA and return parsed results to Slab CI bot.
name: Core crypto GPU benchmarks
on:
workflow_dispatch:
schedule:
# Weekly benchmarks will be triggered each Saturday at 1a.m.
- cron: '0 1 * * 6'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
setup-instance:
name: Setup instance (cuda-core-crypto-benchmarks)
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: single-h100
cuda-core-crypto-benchmarks:
name: Execute GPU core crypto benchmarks
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
gcc: 11
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Run benchmarks with AVX512
run: |
make bench_pbs_gpu
make bench_ks_gpu
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "n3-H100x1" \
--backend gpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--name-suffix avx512 \
--walk-subdirs
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_core_crypto
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
slack-notify:
name: Slack Notification
needs: [ setup-instance, cuda-core-crypto-benchmarks ]
runs-on: ubuntu-latest
if: ${{ !success() && !cancelled() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ needs.cuda-core-crypto-benchmarks.result }}
SLACK_MESSAGE: "PBS GPU benchmarks finished with status: ${{ needs.cuda-core-crypto-benchmarks.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (cuda-core-crypto-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-core-crypto-benchmarks, slack-notify ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-core-crypto-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,44 +0,0 @@
# Run CUDA ERC20 benchmarks on a Hyperstack VM and return parsed results to Slab CI bot.
name: Cuda ERC20 benchmarks
on:
workflow_dispatch:
inputs:
profile:
description: "Instance type"
required: true
type: choice
options:
- "l40 (n3-L40x1)"
- "single-h100 (n3-H100x1)"
- "2-h100 (n3-H100x2)"
- "4-h100 (n3-H100x4)"
- "multi-h100 (n3-H100x8)"
- "multi-h100-nvlink (n3-H100x8-NVLink)"
- "multi-h100-sxm5 (n3-H100x8-SXM5)"
jobs:
parse-inputs:
runs-on: ubuntu-latest
outputs:
profile: ${{ steps.parse_profile.outputs.profile }}
hardware_name: ${{ steps.parse_hardware_name.outputs.name }}
steps:
- name: Parse profile
id: parse_profile
run: |
echo "profile=$(echo '${{ inputs.profile }}' | sed 's|\(.*\)[[:space:]](.*)|\1|')" >> "${GITHUB_OUTPUT}"
- name: Parse hardware name
id: parse_hardware_name
run: |
echo "name=$(echo '${{ inputs.profile }}' | sed 's|.*[[:space:]](\(.*\))|\1|')" >> "${GITHUB_OUTPUT}"
run-benchmarks:
name: Run benchmarks
needs: parse-inputs
uses: ./.github/workflows/benchmark_gpu_erc20_common.yml
with:
profile: ${{ needs.parse-inputs.outputs.profile }}
hardware_name: ${{ needs.parse-inputs.outputs.hardware_name }}
secrets: inherit
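For reference, the two sed expressions in the parse-inputs job split the human-readable instance choice into the Slab profile and the hardware name. A Python equivalent (the sample string is one of the workflow_dispatch options above):

import re

choice = "single-h100 (n3-H100x1)"

profile = re.sub(r"(.*)\s\((.*)\)", r"\1", choice)       # -> "single-h100"
hardware_name = re.sub(r".*\s\((.*)\)", r"\1", choice)   # -> "n3-H100x1"

print(profile, hardware_name)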


@@ -1,182 +0,0 @@
# Run ERC20 benchmarks on an instance with CUDA and return parsed results to Slab CI bot.
name: Cuda ERC20 benchmarks - common
on:
workflow_call:
inputs:
backend:
type: string
default: hyperstack
profile:
type: string
required: true
hardware_name:
type: string
required: true
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
PARSE_INTEGER_BENCH_CSV_FILE: tfhe_rs_integer_benches_${{ github.sha }}.csv
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
setup-instance:
name: Setup instance (cuda-erc20-benchmarks)
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: ${{ inputs.backend }}
profile: ${{ inputs.profile }}
cuda-erc20-benchmarks:
name: Cuda ERC20 benchmarks (${{ inputs.profile }})
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
gcc: 11
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Run benchmarks
run: |
make bench_hlapi_erc20_gpu
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "${{ inputs.hardware_name }}" \
--backend gpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_erc20_${{ inputs.profile }}
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
slack-notify:
name: Slack Notification
needs: [ setup-instance, cuda-erc20-benchmarks ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-erc20-benchmarks.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ needs.cuda-erc20-benchmarks.result }}
SLACK_MESSAGE: "Cuda ERC20 benchmarks (${{ inputs.profile }}) finished with status: ${{ needs.cuda-erc20-benchmarks.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (cuda-erc20-${{ inputs.profile }}-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-erc20-benchmarks, slack-notify ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-erc20-${{ inputs.profile }}-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,35 +0,0 @@
# Run CUDA ERC20 benchmarks on multiple Hyperstack VMs and return parsed results to Slab CI bot.
name: Cuda ERC20 weekly benchmarks
on:
schedule:
# Weekly benchmarks will be triggered each Saturday at 5 a.m.
- cron: '0 5 * * 6'
jobs:
run-benchmarks-1-h100:
name: Run benchmarks (1xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_erc20_common.yml
with:
profile: single-h100
hardware_name: n3-H100x1
secrets: inherit
run-benchmarks-2-h100:
name: Run benchmarks (2xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_erc20_common.yml
with:
profile: 2-h100
hardware_name: n3-H100x2
secrets: inherit
run-benchmarks-8-h100:
name: Run benchmarks (8xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_erc20_common.yml
with:
profile: multi-h100
hardware_name: n3-H100x8
secrets: inherit


@@ -1,79 +0,0 @@
# Run CUDA benchmarks on a Hyperstack VM and return parsed results to Slab CI bot.
name: Cuda benchmarks
on:
workflow_dispatch:
inputs:
profile:
description: "Instance type"
required: true
type: choice
options:
- "l40 (n3-L40x1)"
- "single-h100 (n3-H100x1)"
- "2-h100 (n3-H100x2)"
- "4-h100 (n3-H100x4)"
- "multi-h100 (n3-H100x8)"
- "multi-h100-nvlink (n3-H100x8-NVLink)"
- "multi-h100-sxm5 (n3-H100x8-SXM5)"
- "multi-a100-nvlink (n3-A100x8-NVLink)"
command:
description: "Benchmark command to run"
type: choice
default: integer_multi_bit
options:
- integer
- integer_multi_bit
- integer_compression
- pbs
- ks
op_flavor:
description: "Operations set to run"
type: choice
default: default
options:
- default
- fast_default
- unchecked
all_precisions:
description: "Run all precisions"
type: boolean
default: false
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
jobs:
parse-inputs:
runs-on: ubuntu-latest
outputs:
profile: ${{ steps.parse_profile.outputs.profile }}
hardware_name: ${{ steps.parse_hardware_name.outputs.name }}
steps:
- name: Parse profile
id: parse_profile
run: |
echo "profile=$(echo '${{ inputs.profile }}' | sed 's|\(.*\)[[:space:]](.*)|\1|')" >> "${GITHUB_OUTPUT}"
- name: Parse hardware name
id: parse_hardware_name
run: |
echo "name=$(echo '${{ inputs.profile }}' | sed 's|.*[[:space:]](\(.*\))|\1|')" >> "${GITHUB_OUTPUT}"
run-benchmarks:
name: Run benchmarks
needs: parse-inputs
uses: ./.github/workflows/benchmark_gpu_integer_common.yml
with:
profile: ${{ needs.parse-inputs.outputs.profile }}
hardware_name: ${{ needs.parse-inputs.outputs.hardware_name }}
command: ${{ inputs.command }}
op_flavor: ${{ inputs.op_flavor }}
bench_type: ${{ inputs.bench_type }}
all_precisions: ${{ inputs.all_precisions }}
secrets: inherit
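This dispatch-only workflow can also be started from the command line. An illustrative invocation with the GitHub CLI (the file name benchmark_gpu.yml is an assumption; the input names and choices come from the workflow_dispatch block above):

import subprocess

subprocess.run(
    [
        "gh", "workflow", "run", "benchmark_gpu.yml",
        "-f", "profile=single-h100 (n3-H100x1)",
        "-f", "command=integer_multi_bit",
        "-f", "op_flavor=default",
        "-f", "all_precisions=false",
        "-f", "bench_type=latency",
    ],
    check=True,
)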


@@ -1,258 +0,0 @@
# Run integer benchmarks on CUDA instance and return parsed results to Slab CI bot.
name: Cuda benchmarks - common
on:
workflow_call:
inputs:
backend:
type: string
default: hyperstack
profile:
type: string
required: true
hardware_name:
type: string
required: true
command: # Use comma-separated values to generate an array
type: string
required: true
op_flavor: # Use comma-separated values to generate an array
type: string
required: true
bench_type:
type: string
default: latency
all_precisions:
type: boolean
default: false
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
FAST_BENCH: TRUE
jobs:
prepare-matrix:
name: Prepare operations matrix
runs-on: ubuntu-latest
outputs:
command: ${{ steps.set_command.outputs.command }}
op_flavor: ${{ steps.set_op_flavor.outputs.op_flavor }}
bench_type: ${{ steps.set_bench_type.outputs.bench_type }}
steps:
- name: Set single command
if: ${{ !contains(inputs.command, ',')}}
run: |
echo "COMMAND=[\"${{ inputs.command }}\"]" >> "${GITHUB_ENV}"
- name: Set multiple commands
if: ${{ contains(inputs.command, ',')}}
run: |
PARSED_COMMAND=$(echo "${{ inputs.command }}" | sed 's/[[:space:]]*,[[:space:]]*/\\", \\"/g')
echo "COMMAND=[\"${PARSED_COMMAND}\"]" >> "${GITHUB_ENV}"
- name: Set single operations flavor
if: ${{ !contains(inputs.op_flavor, ',')}}
run: |
echo "OP_FLAVOR=[\"${{ inputs.op_flavor }}\"]" >> "${GITHUB_ENV}"
- name: Set multiple operations flavors
if: ${{ contains(inputs.op_flavor, ',')}}
run: |
PARSED_OP_FLAVOR=$(echo "${{ inputs.op_flavor }}" | sed 's/[[:space:]]*,[[:space:]]*/", "/g')
echo "OP_FLAVOR=[\"${PARSED_OP_FLAVOR}\"]" >> "${GITHUB_ENV}"
- name: Set benchmark types
run: |
if [[ "${{ inputs.bench_type }}" == "both" ]]; then
echo "BENCH_TYPE=[\"latency\", \"throughput\"]" >> "${GITHUB_ENV}"
else
echo "BENCH_TYPE=[\"${{ inputs.bench_type }}\"]" >> "${GITHUB_ENV}"
fi
- name: Set command output
id: set_command
run: |
echo "command=${{ toJSON(env.COMMAND) }}" >> "${GITHUB_OUTPUT}"
- name: Set operation flavor output
id: set_op_flavor
run: |
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}" >> "${GITHUB_OUTPUT}"
- name: Set benchmark types output
id: set_bench_type
run: |
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}" >> "${GITHUB_OUTPUT}"
setup-instance:
name: Setup instance (cuda-${{ inputs.profile }}-benchmarks)
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: ${{ inputs.backend }}
profile: ${{ inputs.profile }}
cuda-benchmarks:
name: Cuda benchmarks (${{ inputs.profile }})
needs: [ prepare-matrix, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 1440 # 24 hours
continue-on-error: true
strategy:
fail-fast: false
max-parallel: 1
matrix:
command: ${{ fromJSON(needs.prepare-matrix.outputs.command) }}
op_flavor: ${{ fromJSON(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
# explicit include-based build matrix, of known valid options
include:
- os: ubuntu-22.04
cuda: "12.2"
gcc: 11
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Should run benchmarks with all precisions
if: inputs.all_precisions
run: |
echo "FAST_BENCH=FALSE" >> "${GITHUB_ENV}"
- name: Run benchmarks
run: |
make BENCH_OP_FLAVOR=${{ matrix.op_flavor }} BENCH_TYPE=${{ matrix.bench_type }} bench_${{ matrix.command }}_gpu
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "${{ inputs.hardware_name }}" \
--backend gpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type ${{ matrix.bench_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_${{ matrix.command }}_${{ matrix.op_flavor }}_${{ inputs.profile }}
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
slack-notify:
name: Slack Notification
needs: [ setup-instance, cuda-benchmarks ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-benchmarks.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ needs.cuda-benchmarks.result }}
SLACK_MESSAGE: "Cuda benchmarks (${{ inputs.profile }}) finished with status: ${{ needs.cuda-benchmarks.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (cuda-${{ inputs.profile }}-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-benchmarks, slack-notify ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-${{ inputs.profile }}-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,60 +0,0 @@
# Run CUDA benchmarks on multiple Hyperstack VMs and return parsed results to Slab CI bot.
name: Cuda weekly benchmarks
on:
schedule:
# Weekly benchmarks will be triggered each Saturday at 1 a.m.
- cron: '0 1 * * 6'
jobs:
run-benchmarks-1-h100:
name: Run benchmarks (1xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_integer_common.yml
with:
profile: single-h100
hardware_name: n3-H100x1
command: integer,integer_multi_bit
op_flavor: default
bench_type: latency
all_precisions: true
secrets: inherit
run-benchmarks-2-h100:
name: Run benchmarks (2xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_integer_common.yml
with:
profile: 2-h100
hardware_name: n3-H100x2
command: integer_multi_bit
op_flavor: default
bench_type: latency
all_precisions: true
secrets: inherit
run-benchmarks-8-h100:
name: Run benchmarks (8xH100)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_integer_common.yml
with:
profile: multi-h100
hardware_name: n3-H100x8
command: integer_multi_bit
op_flavor: default
bench_type: latency
all_precisions: true
secrets: inherit
run-benchmarks-l40:
name: Run benchmarks (L40)
if: github.repository == 'zama-ai/tfhe-rs'
uses: ./.github/workflows/benchmark_gpu_integer_common.yml
with:
profile: l40
hardware_name: n3-L40x1
command: integer_multi_bit,integer_compression,pbs,ks
op_flavor: default
bench_type: latency
all_precisions: true
secrets: inherit


@@ -0,0 +1,295 @@
# Run CUDA benchmarks on multiple Hyperstack VMs and return parsed results to Slab CI bot.
name: benchmark_gpu_weekly
on:
schedule:
# Weekly schedules are split into several groups to avoid spawning too many machines at once and risking resource shortages.
# Group 1
# -------
# Weekly benchmarks will be triggered each Saturday at 1 a.m.
- cron: '0 1 * * 6'
# Group 2
# -------
# Weekly benchmarks will be triggered each Sunday at 1 a.m.
- cron: '0 1 * * 0'
# Group 3
# -------
# Weekly benchmarks will be triggered each Sunday at 9 a.m.
- cron: '0 9 * * 0'
permissions: {}
# zizmor: ignore[concurrency-limits] only GitHub can trigger this workflow
jobs:
prepare-inputs:
name: benchmark_gpu_weekly/prepare-inputs
runs-on: ubuntu-latest
outputs:
is_weekly_bench_group_1: ${{ steps.check_bench_group_1.outputs.is_weekly_bench_group_1 }}
is_weekly_bench_group_2: ${{ steps.check_bench_group_2.outputs.is_weekly_bench_group_2 }}
is_weekly_bench_group_3: ${{ steps.check_bench_group_3.outputs.is_weekly_bench_group_3 }}
steps:
- name: Check is weekly bench group 1
id: check_bench_group_1
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_weekly_bench_group_1=${{ github.event.schedule == '0 1 * * 6' }}" >> "${GITHUB_OUTPUT}"
- name: Check is weekly bench group 2
id: check_bench_group_2
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_weekly_bench_group_2=${{ github.event.schedule == '0 1 * * 0' }}" >> "${GITHUB_OUTPUT}"
- name: Check is weekly bench group 3
id: check_bench_group_3
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "is_weekly_bench_group_3=${{ github.event.schedule == '0 9 * * 0' }}" >> "${GITHUB_OUTPUT}"
run-benchmarks-8-h100-sxm5-integer:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-sxm5-integer
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: integer_multi_bit
op_flavor: default
bench_type: both
precisions_set: fast
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-8-h100-sxm5-integer-compression:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-sxm5-integer-compression
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: integer_compression
op_flavor: default
bench_type: both
precisions_set: fast
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-8-h100-sxm5-integer-zk-aes:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-sxm5-integer-zk-aes
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: integer_zk,integer_aes,integer_aes256
op_flavor: default
bench_type: both
precisions_set: fast
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-8-h100-sxm5-noise-squash:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-sxm5-noise-squash
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100-sxm5
hardware_name: n3-H100-SXM5x8
command: hlapi_noise_squash
op_flavor: default
bench_type: both
precisions_set: fast
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-1-h100-core-crypto:
name: benchmark_gpu_weekly/run-benchmarks-1-h100-core-crypto (1xH100)
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_1 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: single-h100
hardware_name: n3-H100x1
command: pbs,pbs128,ks,ks_pbs
bench_type: latency
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
# -----------------------------------------------------
# ERC20 benchmarks
# -----------------------------------------------------
run-benchmarks-1-h100-erc20:
name: benchmark_gpu_weekly/run-benchmarks-1-h100-erc20
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: single-h100
hardware_name: n3-H100x1
command: hlapi_erc20
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-2-h100-erc20:
name: benchmark_gpu_weekly/run-benchmarks-2-h100-erc20
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: 2-h100
hardware_name: n3-H100x2
command: hlapi_erc20
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-8-h100-erc20:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-erc20
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100
hardware_name: n3-H100-SXM5x8
command: hlapi_erc20
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
# -----------------------------------------------------
# DEX benchmarks
# -----------------------------------------------------
run-benchmarks-1-h100-dex:
name: benchmark_gpu_weekly/run-benchmarks-1-h100-dex
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: single-h100
hardware_name: n3-H100x1
command: hlapi_dex
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-2-h100-dex:
name: benchmark_gpu_weekly/run-benchmarks-2-h100-dex
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: 2-h100
hardware_name: n3-H100x2
command: hlapi_dex
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
run-benchmarks-8-h100-dex:
name: benchmark_gpu_weekly/run-benchmarks-8-h100-dex
if: github.repository == 'zama-ai/tfhe-rs' &&
needs.prepare-inputs.outputs.is_weekly_bench_group_2 == 'true'
needs: prepare-inputs
uses: ./.github/workflows/benchmark_gpu_common.yml
with:
profile: multi-h100
hardware_name: n3-H100-SXM5x8
command: hlapi_dex
bench_type: both
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
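The prepare-inputs job above compares github.event.schedule against each cron entry to decide which group of jobs runs. A condensed sketch of that mapping (the group summaries are taken from the jobs shown above; group 3's contents are not listed here):

GROUP_BY_SCHEDULE = {
    "0 1 * * 6": 1,  # Saturday 1 a.m.: integer, compression, ZK/AES, noise squash, core crypto
    "0 1 * * 0": 2,  # Sunday 1 a.m.: ERC20 and DEX benchmarks
    "0 9 * * 0": 3,  # Sunday 9 a.m.: remaining group 3 jobs
}

def selected_group(event_schedule: str):
    return GROUP_BY_SCHEDULE.get(event_schedule)

print(selected_group("0 1 * * 0"))  # 2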

71
.github/workflows/benchmark_hpu.yml vendored Normal file

@@ -0,0 +1,71 @@
# Run benchmarks on a permanent HPU instance and return parsed results to Slab CI bot.
name: benchmark_hpu
run-name: ${{ inputs.command }}::${{ inputs.bench_type }} (${{ inputs.op_flavor }}, ${{ inputs.precisions_set }})
on:
workflow_dispatch:
inputs:
command:
description: "Benchmark command to run"
type: choice
default: integer
options:
- integer
- hlapi
- hlapi_erc20
op_flavor:
description: "Operations set to run"
type: choice
default: default
options:
- default
- fast_default
precisions_set:
description: "Bit precisions set"
type: choice
default: fast
options:
- fast
- all
- documentation
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
v80_pcie_dev:
description: "V80 PCIe device number"
default: 24
v80_serial_number:
description: "V80 serial number"
default: XFL12NWY3ZKG
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
run-benchmarks:
name: benchmark_hpu/run-benchmarks
uses: ./.github/workflows/benchmark_hpu_common.yml
with:
command: ${{ inputs.command }}
op_flavor: ${{ inputs.op_flavor }}
bench_type: ${{ inputs.bench_type }}
precisions_set: ${{ inputs.precisions_set }}
v80_pcie_dev: ${{ inputs.v80_pcie_dev }}
v80_serial_number: ${{ inputs.v80_serial_number }}
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}


@@ -0,0 +1,208 @@
# Run benchmarks on a permanent HPU instance and return parsed results to Slab CI bot.
name: benchmark_hpu_common
on:
workflow_call:
inputs:
command: # Use comma-separated values to generate an array
type: string
required: true
op_flavor: # Use comma-separated values to generate an array
type: string
default: default
bench_type:
type: string
default: latency
precisions_set:
type: string
default: fast
v80_pcie_dev:
type: string
default: 24
v80_serial_number:
type: string
default: XFL12NWY3ZKG
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
SSH_PRIVATE_KEY:
required: true
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
prepare-matrix:
name: benchmark_hpu_common/prepare-matrix
runs-on: ubuntu-latest
outputs:
command: ${{ steps.set_matrix_args.outputs.command }}
op_flavor: ${{ steps.set_matrix_args.outputs.op_flavor }}
bench_type: ${{ steps.set_matrix_args.outputs.bench_type }}
env:
INPUTS_COMMAND: ${{ inputs.command }}
INPUTS_OP_FLAVOR: ${{ inputs.op_flavor }}
steps:
- name: Parse user inputs
shell: python
env:
INPUTS_COMMAND: ${{ inputs.command }}
INPUTS_OP_FLAVOR: ${{ inputs.op_flavor }}
INPUTS_BENCH_TYPE: ${{ inputs.bench_type }}
run: |
import os
inputs_command = os.environ["INPUTS_COMMAND"]
inputs_op_flavor = os.environ["INPUTS_OP_FLAVOR"]
inputs_bench_type = os.environ["INPUTS_BENCH_TYPE"]
env_file = os.environ["GITHUB_ENV"]
split_command = inputs_command.replace(" ", "").split(",")
split_op_flavor = inputs_op_flavor.replace(" ", "").split(",")
if inputs_bench_type == "both":
bench_type = ["latency", "throughput"]
else:
bench_type = [inputs_bench_type, ]
with open(env_file, "a") as f:
for env_name, values_to_join in [
("COMMAND", split_command),
("OP_FLAVOR", split_op_flavor),
("BENCH_TYPE", bench_type),
]:
f.write(f"""{env_name}=["{'", "'.join(values_to_join)}"]\n""")
- name: Set matrix arguments outputs
id: set_matrix_args
run: | # zizmor: ignore[template-injection] these env variables are safe
{
echo "command=${{ toJSON(env.COMMAND) }}";
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}";
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}";
} >> "${GITHUB_OUTPUT}"
hpu-benchmarks:
name: benchmark_hpu_common/hpu-benchmarks
needs: prepare-matrix
runs-on: v80-marais
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
timeout-minutes: 1440 # 24 hours
strategy:
max-parallel: 1
matrix:
command: ${{ fromJSON(needs.prepare-matrix.outputs.command) }}
op_flavor: ${{ fromJSON(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
steps:
# Needed as long as hw_regmap repository is private
- name: Configure SSH
uses: webfactory/ssh-agent@a6f90b1f127823b31d4d4a8d96047790581349bd # v0.9.1
with:
ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
lfs: true
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Select HPU board
run: |
echo "V80_PCIE_DEV=${PCIE_DEV}" >> "${GITHUB_ENV}"
echo "V80_SERIAL_NUMBER=${SERIAL_NUMBER}" >> "${GITHUB_ENV}"
env:
PCIE_DEV: ${{ inputs.v80_pcie_dev }}
SERIAL_NUMBER: ${{ inputs.v80_serial_number }}
- name: Run benchmarks
run: |
echo "${V80_PCIE_DEV} ${V80_SERIAL_NUMBER}"
make pull_hpu_files
make BIT_SIZES_SET="${PRECISIONS_SET}" BENCH_OP_FLAVOR="${OP_FLAVOR}" BENCH_TYPE="${BENCH_TYPE}" BENCH_PARAM_TYPE="${BENCH_PARAMS_TYPE}" bench_"${BENCH_COMMAND}"_hpu
env:
OP_FLAVOR: ${{ matrix.op_flavor }}
BENCH_TYPE: ${{ matrix.bench_type }}
BENCH_COMMAND: ${{ matrix.command }}
PRECISIONS_SET: ${{ inputs.precisions_set }}
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "hpu_x1" \
--backend hpu \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs \
--bench-type "${BENCH_TYPE}"
env:
REF_NAME: ${{ github.ref_name }}
BENCH_TYPE: ${{ matrix.bench_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_${{ matrix.bench_type }}_integer_benchmarks
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
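Because the matrix is built from the parsed command, op_flavor and bench_type lists with max-parallel set to 1, the benchmark job runs one make invocation per combination. A sketch of that fan-out (the example inputs are hypothetical; BIT_SIZES_SET and BENCH_PARAM_TYPE are omitted for brevity):

from itertools import product

command = ["integer", "hlapi_erc20"]     # parsed from a comma-separated input
op_flavor = ["default"]
bench_type = ["latency", "throughput"]   # bench_type input "both"

for cmd, flavor, btype in product(command, op_flavor, bench_type):
    print(f"make BENCH_OP_FLAVOR={flavor} BENCH_TYPE={btype} bench_{cmd}_hpu")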


@@ -1,216 +0,0 @@
# Run all integer benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Integer benchmarks
on:
workflow_dispatch:
inputs:
all_precisions:
description: "Run all precisions"
type: boolean
default: false
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
schedule:
# Weekly benchmarks will be triggered each Saturday at 1 a.m.
- cron: '0 1 * * 6'
# Quarterly benchmarks will be triggered right before the end of the quarter, on the 25th of the month at 4 a.m.
# These benchmarks take far longer to execute, hence they are run only four times a year.
- cron: '0 4 25 MAR,JUN,SEP,DEC *'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
FAST_BENCH: TRUE
jobs:
prepare-matrix:
name: Prepare operations matrix
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
op_flavor: ${{ steps.set_op_flavor.outputs.op_flavor }}
bench_type: ${{ steps.set_bench_type.outputs.bench_type }}
steps:
- name: Weekly benchmarks
if: github.event.schedule == '0 1 * * 6'
run: |
echo "OP_FLAVOR=[\"default\"]" >> "${GITHUB_ENV}"
- name: Quarterly benchmarks
if: github.event.schedule == '0 4 25 MAR,JUN,SEP,DEC *'
run: |
echo "OP_FLAVOR=[\"default\", \"smart\", \"unchecked\", \"misc\"]" >> "${GITHUB_ENV}"
- name: Set benchmark types
if: github.event_name == 'workflow_dispatch'
run: |
echo "OP_FLAVOR=[\"default\"]" >> "${GITHUB_ENV}"
if [[ "${{ inputs.bench_type }}" == "both" ]]; then
echo "BENCH_TYPE=[\"latency\", \"throughput\"]" >> "${GITHUB_ENV}"
else
echo "BENCH_TYPE=[\"${{ inputs.bench_type }}\"]" >> "${GITHUB_ENV}"
fi
- name: Default benchmark type
if: github.event_name != 'workflow_dispatch'
run: |
echo "BENCH_TYPE=[\"latency\"]" >> "${GITHUB_ENV}"
- name: Set operation flavor output
id: set_op_flavor
run: |
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}" >> "${GITHUB_OUTPUT}"
- name: Set benchmark types output
id: set_bench_type
run: |
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}" >> "${GITHUB_OUTPUT}"
setup-instance:
name: Setup instance (integer-benchmarks)
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
integer-benchmarks:
name: Execute integer benchmarks for all operations flavor
needs: [ prepare-matrix, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
continue-on-error: true
timeout-minutes: 1440 # 24 hours
strategy:
max-parallel: 1
matrix:
command: [ integer, integer_multi_bit]
op_flavor: ${{ fromJson(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Should run benchmarks with all precisions
if: inputs.all_precisions
run: |
echo "FAST_BENCH=FALSE" >> "${GITHUB_ENV}"
- name: Run benchmarks with AVX512
run: |
make BENCH_OP_FLAVOR=${{ matrix.op_flavor }} BENCH_TYPE=${{ matrix.bench_type }} bench_${{ matrix.command }}
# Run these benchmarks only once per benchmark type
- name: Run compression benchmarks with AVX512
if: matrix.op_flavor == 'default' && matrix.command == 'integer'
run: |
make BENCH_TYPE=${{ matrix.bench_type }} bench_integer_compression
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type ${{ matrix.bench_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_${{ matrix.command }}_${{ matrix.op_flavor }}_${{ matrix.bench_type }}
path: ${{ env.RESULTS_FILENAME }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Integer full benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (integer-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, integer-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (integer-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,399 @@
# Run performance regression benchmarks and return parsed results to associated pull-request.
name: benchmark_perf_regression
on:
issue_comment:
types: [ created ]
pull_request:
types: [ labeled ]
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify-triggering-actor:
name: benchmark_perf_regression/verify-actor
if: (github.event_name == 'pull_request' &&
(contains(github.event.label.name, 'bench-perfs-cpu') ||
contains(github.event.label.name, 'bench-perfs-gpu'))) ||
(github.event.issue.pull_request && startsWith(github.event.comment.body, '/bench'))
uses: ./.github/workflows/verify_triggering_actor.yml
secrets:
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
prepare-benchmarks:
name: benchmark_perf_regression/prepare-benchmarks
needs: verify-triggering-actor
runs-on: ubuntu-latest
outputs:
commands: ${{ steps.set_commands.outputs.commands }}
slab-backend: ${{ steps.set_slab_details.outputs.backend }}
slab-profile: ${{ steps.set_slab_details.outputs.profile }}
hardware-name: ${{ steps.get_hardware_name.outputs.name }}
tfhe-backend: ${{ steps.set_regression_details.outputs.tfhe-backend }}
selected-regression-profile: ${{ steps.set_regression_details.outputs.selected-profile }}
custom-env: ${{ steps.get_custom_env.outputs.custom_env }}
permissions:
pull-requests: write # Needed to write a comment in a pull-request
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Acknowledge issue comment
if: github.event_name == 'issue_comment'
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
comment-id: ${{ github.event.comment.id }}
reactions: '+1'
- name: Display workflow run URL
if: github.event_name == 'issue_comment'
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
issue-number: ${{ github.event.issue.number }}
body: |
User triggered performance regression benchmark.
Workflow run URL: ${{ env.ACTION_RUN_URL }}
- name: Generate CPU benchmarks command from label
if: (github.event_name == 'pull_request' && contains(github.event.label.name, 'bench-perfs-cpu'))
run: |
echo "DEFAULT_BENCH_OPTIONS=--backend cpu" >> "${GITHUB_ENV}"
- name: Generate GPU benchmarks command from label
if: (github.event_name == 'pull_request' && contains(github.event.label.name, 'bench-perfs-gpu'))
run: |
echo "DEFAULT_BENCH_OPTIONS=--backend gpu" >> "${GITHUB_ENV}"
# TODO add support for HPU backend
- name: Install Python requirements
run: |
python3 -m pip install -r ci/perf_regression/requirements.txt
- name: Generate cargo commands and env from label
if: github.event_name == 'pull_request'
run: |
python3 ci/perf_regression/perf_regression.py parse_profile --issue-comment "/bench ${DEFAULT_BENCH_OPTIONS}"
echo "COMMANDS=$(cat ci/perf_regression/perf_regression_generated_commands.json)" >> "${GITHUB_ENV}"
- name: Dump issue comment into file # To avoid possible code-injection
if: github.event_name == 'issue_comment'
run: |
echo "${COMMENT_BODY}" >> dumped_comment.txt
env:
COMMENT_BODY: ${{ github.event.comment.body }}
- name: Generate cargo commands and env
if: github.event_name == 'issue_comment'
run: |
python3 ci/perf_regression/perf_regression.py parse_profile --issue-comment "$(cat dumped_comment.txt)"
echo "COMMANDS=$(cat ci/perf_regression/perf_regression_generated_commands.json)" >> "${GITHUB_ENV}"
- name: Set commands output
id: set_commands
run: | # zizmor: ignore[template-injection] this env variable is safe
echo "commands=${{ toJSON(env.COMMANDS) }}" >> "${GITHUB_OUTPUT}"
- name: Set Slab details outputs
id: set_slab_details
run: |
echo "backend=$(cat ci/perf_regression/perf_regression_slab_backend_config.txt)" >> "${GITHUB_OUTPUT}"
echo "profile=$(cat ci/perf_regression/perf_regression_slab_profile_config.txt)" >> "${GITHUB_OUTPUT}"
- name: Get hardware name
id: get_hardware_name
run: | # zizmor: ignore[template-injection] these interpolations are safe
HARDWARE_NAME=$(python3 ci/hardware_finder.py "${{ steps.set_slab_details.outputs.backend }}" "${{ steps.set_slab_details.outputs.profile }}");
echo "name=${HARDWARE_NAME}" >> "${GITHUB_OUTPUT}"
- name: Set regression details outputs
id: set_regression_details
run: |
echo "tfhe-backend=$(cat ci/perf_regression/perf_regression_tfhe_rs_backend_config.txt)" >> "${GITHUB_OUTPUT}"
echo "selected-profile=$(cat ci/perf_regression/perf_regression_selected_profile_config.txt)" >> "${GITHUB_OUTPUT}"
- name: Get custom env vars
id: get_custom_env
run: |
echo "custom_env=$(cat ci/perf_regression/perf_regression_custom_env.sh)" >> "${GITHUB_OUTPUT}"
setup-instance:
name: benchmark_perf_regression/setup-instance
needs: prepare-benchmarks
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: ${{ needs.prepare-benchmarks.outputs.slab-backend }}
profile: ${{ needs.prepare-benchmarks.outputs.slab-profile }}
install-cuda-dependencies-if-required:
name: benchmark_perf_regression/install-cuda-dependencies-if-required
needs: [ prepare-benchmarks, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
matrix:
# explicit include-based build matrix, of known valid options
include:
- cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
if: needs.prepare-benchmarks.outputs.slab-backend == 'hyperstack'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
regression-benchmarks:
name: benchmark_perf_regression/regression-benchmarks
needs: [ prepare-benchmarks, setup-instance, install-cuda-dependencies-if-required ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow_ref }}_${{ needs.prepare-benchmarks.outputs.slab-backend }}_${{ needs.prepare-benchmarks.outputs.slab-profile }}
cancel-in-progress: true
timeout-minutes: 720 # 12 hours
strategy:
fail-fast: false
max-parallel: 1
matrix:
command: ${{ fromJson(needs.prepare-benchmarks.outputs.commands) }}
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0 # Needed to get commit hash
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Export custom env variables
run: | # zizmor: ignore[template-injection] this env variable is safe
{
${{ needs.prepare-benchmarks.outputs.custom-env }}
} >> "$GITHUB_ENV"
# Re-export environment variables since the dependencies setup performed this task in the previous job.
# Local env variables are cleaned at the end of each job.
- name: Export CUDA variables
if: needs.prepare-benchmarks.outputs.slab-backend == 'hyperstack'
shell: bash
run: |
echo "CUDA_PATH=$CUDA_PATH" >> "${GITHUB_ENV}"
echo "PATH=$PATH:$CUDA_PATH/bin" >> "${GITHUB_PATH}"
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
echo "CUDA_MODULE_LOADER=EAGER" >> "${GITHUB_ENV}"
env:
CUDA_PATH: /usr/local/cuda-12.8
- name: Export gcc and g++ variables
if: needs.prepare-benchmarks.outputs.slab-backend == 'hyperstack'
shell: bash
run: |
{
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: 11
- name: Install rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Run regression benchmarks
run: |
make BENCH_CUSTOM_COMMAND="${BENCH_COMMAND}" bench_custom
env:
BENCH_COMMAND: ${{ matrix.command }}
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "${HARDWARE_NAME}" \
--backend "${TFHE_BACKEND}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--walk-subdirs \
--name-suffix regression \
--bench-type "${BENCH_TYPE}"
echo "RESULTS_FILE_SHA=$(sha256sum "${RESULTS_FILENAME}" | cut -d " " -f1)" >> "${GITHUB_ENV}"
env:
HARDWARE_NAME: ${{ needs.prepare-benchmarks.outputs.hardware-name }}
TFHE_BACKEND: ${{ needs.prepare-benchmarks.outputs.tfhe-backend }}
REF_NAME: ${{ github.head_ref || github.ref_name }}
BENCH_TYPE: ${{ env.__TFHE_RS_BENCH_TYPE }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_regression_${{ env.RESULTS_FILE_SHA }} # RESULTS_FILE_SHA is needed to avoid collisions between matrix.command runs
path: ${{ env.RESULTS_FILENAME }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
check-regressions:
name: benchmark_perf_regression/check-regressions
needs: [ prepare-benchmarks, regression-benchmarks ]
runs-on: ubuntu-latest
permissions:
pull-requests: write # Needed to write a comment in a pull-request
contents: read # Needed to set up Python dependencies
env:
REF_NAME: ${{ github.head_ref || github.ref_name }}
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Install recent Python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: '3.12'
pip-install: -r ci/data_extractor/requirements.txt -r ci/perf_regression/requirements.txt
- name: Fetch data
run: |
python3 ci/data_extractor/src/data_extractor.py regression_data \
--generate-regression-json \
--regression-profiles ci/regression.toml \
--regression-selected-profile "${REGRESSION_PROFILE}" \
--backend "${TFHE_BACKEND}" \
--hardware "${HARDWARE_NAME}" \
--branch "${REF_NAME}" \
--time-span-days 60
env:
REGRESSION_PROFILE: ${{ needs.prepare-benchmarks.outputs.selected-regression-profile }}
TFHE_BACKEND: ${{ needs.prepare-benchmarks.outputs.tfhe-backend }}
HARDWARE_NAME: ${{ needs.prepare-benchmarks.outputs.hardware-name }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
- name: Generate regression report
run: |
python3 ci/perf_regression/perf_regression.py check_regression \
--results-file regression_data.json \
--generate-report
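# The report is expected at ci/perf_regression/regression_report.md (the body-path below),
# which the next step posts as a comment on the pull request.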
- name: Write report in pull-request
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
issue-number: ${{ github.event.pull_request.number || github.event.issue.number }}
body-path: ci/perf_regression/regression_report.md
comment-on-failure:
name: benchmark_perf_regression/comment-on-failure
needs: [ prepare-benchmarks, setup-instance, regression-benchmarks, check-regressions ]
runs-on: ubuntu-latest
if: ${{ failure() && github.event_name == 'issue_comment' }}
continue-on-error: true
permissions:
pull-requests: write # Needed to write a comment in a pull-request
steps:
- name: Write failure message
uses: peter-evans/create-or-update-comment@e8674b075228eee787fea43ef493e45ece1004c9 # v5.0.0
with:
issue-number: ${{ github.event.issue.number }}
body: |
:x: Performance regression benchmark failed ([workflow run](${{ env.ACTION_RUN_URL }}))
slack-notify:
name: benchmark_perf_regression/slack-notify
needs: [ prepare-benchmarks, setup-instance, regression-benchmarks, check-regressions ]
runs-on: ubuntu-latest
if: ${{ failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: failure
SLACK_MESSAGE: "Performance regression benchmarks failed. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: benchmark_perf_regression/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, regression-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (regression-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,182 +0,0 @@
# Run all shortint benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Shortint full benchmarks
on:
workflow_dispatch:
schedule:
# Weekly benchmarks will be triggered each Saturday at 1a.m.
- cron: '0 1 * * 6'
# Quarterly benchmarks will be triggered right before the end of the quarter, on the 25th of the current month at 4a.m.
# These benchmarks take far longer to execute, hence they are run only four times a year.
- cron: '0 4 25 MAR,JUN,SEP,DEC *'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
prepare-matrix:
name: Prepare operations matrix
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
op_flavor: ${{ steps.set_op_flavor.outputs.op_flavor }}
steps:
- name: Weekly benchmarks
if: github.event_name == 'workflow_dispatch' ||
github.event.schedule == '0 1 * * 6'
run: |
echo "OP_FLAVOR=[\"default\"]" >> "${GITHUB_ENV}"
- name: Quarterly benchmarks
if: github.event.schedule == '0 4 25 MAR,JUN,SEP,DEC *'
run: |
echo "OP_FLAVOR=[\"default\", \"smart\", \"unchecked\"]" >> "${GITHUB_ENV}"
- name: Set operation flavor output
id: set_op_flavor
run: |
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}" >> "${GITHUB_OUTPUT}"
setup-instance:
name: Setup instance (shortint-benchmarks)
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
shortint-benchmarks:
name: Execute shortint benchmarks for all operations flavor
needs: [ prepare-matrix, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
continue-on-error: true
strategy:
max-parallel: 1
matrix:
op_flavor: ${{ fromJson(needs.prepare-matrix.outputs.op_flavor) }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Run benchmarks with AVX512
run: |
make BENCH_OP_FLAVOR=${{ matrix.op_flavor }} bench_shortint
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512
# This small benchmark needs to be executed only once.
- name: Measure key sizes
if: matrix.op_flavor == 'default'
run: |
make measure_shortint_key_sizes
- name: Parse key sizes results
if: matrix.op_flavor == 'default'
run: |
python3 ./ci/benchmark_parser.py tfhe/shortint_key_sizes.csv ${{ env.RESULTS_FILENAME }} \
--object-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_shortint_${{ matrix.op_flavor }}
path: ${{ env.RESULTS_FILENAME }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Shortint full benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (shortint-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, shortint-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (shortint-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,210 +0,0 @@
# Run all signed integer benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Signed Integer full benchmarks
on:
workflow_dispatch:
inputs:
all_precisions:
description: "Run all precisions"
type: boolean
default: false
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
schedule:
# Weekly benchmarks will be triggered each Saturday at 1a.m.
- cron: '0 1 * * 6'
# Quarterly benchmarks will be triggered right before the end of the quarter, on the 25th of the current month at 4a.m.
# These benchmarks take far longer to execute, hence they are run only four times a year.
- cron: '0 4 25 MAR,JUN,SEP,DEC *'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
FAST_BENCH: TRUE
jobs:
prepare-matrix:
name: Prepare operations matrix
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
op_flavor: ${{ steps.set_op_flavor.outputs.op_flavor }}
bench_type: ${{ steps.set_bench_type.outputs.bench_type }}
steps:
- name: Weekly benchmarks
if: github.event.schedule == '0 1 * * 6'
run: |
echo "OP_FLAVOR=[\"default\"]" >> "${GITHUB_ENV}"
- name: Quarterly benchmarks
if: github.event.schedule == '0 4 25 MAR,JUN,SEP,DEC *'
run: |
echo "OP_FLAVOR=[\"default\", \"unchecked\"]" >> "${GITHUB_ENV}"
- name: Set benchmark types
if: github.event_name == 'workflow_dispatch'
run: |
echo "OP_FLAVOR=[\"default\"]" >> "${GITHUB_ENV}"
if [[ "${{ inputs.bench_type }}" == "both" ]]; then
echo "BENCH_TYPE=[\"latency\", \"throughput\"]" >> "${GITHUB_ENV}"
else
echo "BENCH_TYPE=[\"${{ inputs.bench_type }}\"]" >> "${GITHUB_ENV}"
fi
- name: Default benchmark type
if: github.event_name != 'workflow_dispatch'
run: |
echo "BENCH_TYPE=[\"latency\"]" >> "${GITHUB_ENV}"
- name: Set operation flavor output
id: set_op_flavor
run: |
echo "op_flavor=${{ toJSON(env.OP_FLAVOR) }}" >> "${GITHUB_OUTPUT}"
- name: Set benchmark types output
id: set_bench_type
run: |
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}" >> "${GITHUB_OUTPUT}"
setup-instance:
name: Setup instance (signed-integer-benchmarks)
needs: prepare-matrix
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
signed-integer-benchmarks:
name: Execute signed integer benchmarks for all operations flavor
needs: [ prepare-matrix, setup-instance ]
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
continue-on-error: true
timeout-minutes: 1440 # 24 hours
strategy:
max-parallel: 1
matrix:
command: [ integer, integer_multi_bit ]
op_flavor: ${{ fromJSON(needs.prepare-matrix.outputs.op_flavor) }}
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Should run benchmarks with all precisions
if: inputs.all_precisions
run: |
echo "FAST_BENCH=FALSE" >> "${GITHUB_ENV}"
- name: Run benchmarks with AVX512
run: |
make BENCH_OP_FLAVOR=${{ matrix.op_flavor }} BENCH_TYPE=${{ matrix.bench_type }} bench_signed_${{ matrix.command }}
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type ${{ matrix.bench_type }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_${{ matrix.command }}_${{ matrix.op_flavor }}_${{ matrix.bench_type }}
path: ${{ env.RESULTS_FILENAME }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Signed integer full benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (integer-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, signed-integer-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (signed-integer-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,5 +1,5 @@
# Run FFT benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: FFT benchmarks
name: benchmark_tfhe_fft
env:
CARGO_TERM_COLOR: always
@@ -23,16 +23,21 @@ on:
# Job will be triggered each Thursday at 11p.m.
- cron: '0 23 * * 4'
permissions: {}
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-ec2:
name: Setup EC2 instance (fft-benchmarks)
setup-instance:
name: benchmark_tfhe_fft/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -42,25 +47,30 @@ jobs:
profile: bench
fft-benchmarks:
name: Execute FFT benchmarks in EC2
needs: setup-ec2
name: benchmark_tfhe_fft/fft-benchmarks
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
group: ${{ github.workflow_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: true
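# With this group expression, each push to main gets its own concurrency group (so main runs
# are not cancelled by later pushes), while runs on other refs share a group and cancel each other.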
runs-on: ${{ needs.setup-ec2.outputs.runner-name }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
@@ -74,23 +84,25 @@ jobs:
- name: Parse AVX512 results
run: |
python3 ./ci/fft_benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
python3 ./ci/fft_benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database concrete_fft \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--name-suffix avx512
env:
REF_NAME: ${{ github.ref_name }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_fft
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
@@ -100,45 +112,39 @@ jobs:
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on downloaded artifact"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe-fft benchmarks failed. (${{ env.ACTION_RUN_URL }})"
teardown-ec2:
name: Teardown EC2 instance (fft-benchmarks)
if: ${{ always() && needs.setup-ec2.result != 'skipped' }}
needs: [ setup-ec2, fft-benchmarks ]
teardown-instance:
name: benchmark_tfhe_fft/teardown-instance
if: ${{ always() && needs.setup-instance.result != 'skipped' }}
needs: [ setup-instance, fft-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-ec2.outputs.runner-name }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "EC2 teardown (fft-benchmarks) failed. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (fft-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,5 +1,5 @@
# Run NTT benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: NTT benchmarks
name: benchmark_tfhe_ntt
env:
CARGO_TERM_COLOR: always
@@ -23,16 +23,21 @@ on:
# Job will be triggered each Friday at 11p.m.
- cron: "0 23 * * 5"
permissions: {}
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-ec2:
name: Setup EC2 instance (ntt-benchmarks)
setup-instance:
name: benchmark_tfhe_ntt/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -42,25 +47,30 @@ jobs:
profile: bench
ntt-benchmarks:
name: Execute NTT benchmarks in EC2
needs: setup-ec2
name: benchmark_tfhe_ntt/ntt-benchmarks
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
group: ${{ github.workflow_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: true
runs-on: ${{ needs.setup-ec2.outputs.runner-name }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
@@ -74,23 +84,25 @@ jobs:
- name: Parse results
run: |
python3 ./ci/ntt_benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
python3 ./ci/ntt_benchmark_parser.py target/criterion "${RESULTS_FILENAME}" \
--database concrete_ntt \
--hardware "hpc7a.96xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--name-suffix avx512
env:
REF_NAME: ${{ github.ref_name }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_ntt
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
@@ -100,45 +112,39 @@ jobs:
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on downloaded artifact"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe-ntt benchmarks failed. (${{ env.ACTION_RUN_URL }})"
teardown-ec2:
name: Teardown EC2 instance (ntt-benchmarks)
if: ${{ always() && needs.setup-ec2.result != 'skipped' }}
needs: [setup-ec2, ntt-benchmarks]
teardown-instance:
name: benchmark_tfhe_ntt/teardown-instance
if: ${{ always() && needs.setup-instance.result != 'skipped' }}
needs: [setup-instance, ntt-benchmarks]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-ec2.outputs.runner-name }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "EC2 teardown (ntt-benchmarks) failed. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "EC2 teardown (ntt-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,174 +0,0 @@
# Run benchmarks of the tfhe-zk-pok crate on an instance and return parsed results to Slab CI bot.
name: tfhe-zk-pok benchmarks
on:
workflow_dispatch:
push:
branches:
- main
schedule:
# Weekly benchmarks will be triggered each Saturday at 3a.m.
- cron: '0 3 * * 6'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
PARSE_INTEGER_BENCH_CSV_FILE: tfhe_rs_integer_benches_${{ github.sha }}.csv
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
should-run:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch' ||
((github.event_name == 'push' || github.event_name == 'schedule') && github.repository == 'zama-ai/tfhe-rs')
outputs:
zk_pok_changed: ${{ steps.changed-files.outputs.zk_pok_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
with:
files_yaml: |
zk_pok:
- tfhe-zk-pok/**
- .github/workflows/benchmark_tfhe_zk_pok.yml
setup-instance:
name: Setup instance (tfhe-zk-pok-benchmarks)
runs-on: ubuntu-latest
needs: should-run
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'push' &&
github.repository == 'zama-ai/tfhe-rs' &&
needs.should-run.outputs.zk_pok_changed == 'true')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
tfhe-zk-pok-benchmarks:
name: Execute tfhe-zk-pok benchmarks
if: needs.setup-instance.result != 'skipped'
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{github.event_name}}_${{ github.ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Run benchmarks
run: |
make bench_tfhe_zk_pok
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--crate tfhe-zk-pok \
--hardware "hpc7a.96xlarge" \
--backend cpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_tfhe_zk_pok
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe-zk-pok benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (tfhe-zk-pok-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, tfhe-zk-pok-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (tfhe-zk-pok-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,5 +1,5 @@
# Run WASM client benchmarks on an instance and return parsed results to Slab CI bot.
name: WASM client benchmarks
name: benchmark_wasm_client
on:
workflow_dispatch:
@@ -21,19 +21,25 @@ env:
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members and GitHub can trigger this workflow
jobs:
should-run:
name: benchmark_wasm_client/should-run
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs')
permissions:
pull-requests: read
pull-requests: read # Needed to check for file changes
outputs:
wasm_bench: ${{ steps.changed-files.outputs.wasm_bench_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
@@ -41,7 +47,7 @@ jobs:
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
wasm_bench:
@@ -54,7 +60,7 @@ jobs:
- .github/workflows/wasm_client_benchmark.yml
setup-instance:
name: Setup instance (wasm-client-benchmarks)
name: benchmark_wasm_client/setup-instance
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs' && needs.should-run.outputs.wasm_bench)
@@ -65,7 +71,7 @@ jobs:
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -75,7 +81,7 @@ jobs:
profile: cpu-small
wasm-client-benchmarks:
name: Execute WASM client benchmarks
name: benchmark_wasm_client/wasm-client-benchmarks
needs: setup-instance
if: needs.setup-instance.result != 'skipped'
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
@@ -85,7 +91,7 @@ jobs:
browser: [ chrome, firefox ]
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
@@ -93,14 +99,17 @@ jobs:
- name: Get benchmark details
run: |
COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict "${SHA}");
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_DATE=${COMMIT_DATE}";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
env:
SHA: ${{ github.sha }}
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: nightly
@@ -110,7 +119,7 @@ jobs:
- name: Node cache restoration
id: node-cache
uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
with:
path: |
~/.nvm
@@ -123,7 +132,7 @@ jobs:
make install_node
- name: Node cache save
uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 #v4.2.0
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 #v4.3.0
if: steps.node-cache.outputs.cache-hit != 'true'
with:
path: |
@@ -133,47 +142,40 @@ jobs:
- name: Install web resources
run: |
make install_${{ matrix.browser }}_browser
make install_${{ matrix.browser }}_web_driver
make install_"${BROWSER}"_browser
make install_"${BROWSER}"_web_driver
env:
BROWSER: ${{ matrix.browser }}
- name: Run benchmarks
run: |
make bench_web_js_api_parallel_${{ matrix.browser }}_ci
make bench_web_js_api_parallel_"${BROWSER}"_ci
env:
BROWSER: ${{ matrix.browser }}
- name: Parse results
run: |
make parse_wasm_benchmarks
python3 ./ci/benchmark_parser.py tfhe/wasm_pk_gen.csv ${{ env.RESULTS_FILENAME }} \
python3 ./ci/benchmark_parser.py tfhe-benchmark/wasm_pk_gen.csv "${RESULTS_FILENAME}" \
--database tfhe_rs \
--hardware "m6i.4xlarge" \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--project-version "${COMMIT_HASH}" \
--branch "${REF_NAME}" \
--commit-date "${COMMIT_DATE}" \
--bench-date "${BENCH_DATE}" \
--key-gen
rm tfhe/wasm_pk_gen.csv
# Run these benchmarks only once
- name: Measure public key and ciphertext sizes in HL Api
if: matrix.browser == 'chrome'
run: |
make measure_hlapi_compact_pk_ct_sizes
- name: Parse key and ciphertext sizes results
if: matrix.browser == 'chrome'
run: |
python3 ./ci/benchmark_parser.py tfhe/hlapi_cpk_and_cctl_sizes.csv ${{ env.RESULTS_FILENAME }} \
--key-gen \
--append-results
rm tfhe-benchmark/wasm_pk_gen.csv
env:
REF_NAME: ${{ github.ref_name }}
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_wasm_${{ matrix.browser }}
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: zama-ai/slab
path: slab
@@ -183,26 +185,29 @@ jobs:
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
python3 slab/scripts/data_sender.py "${RESULTS_FILENAME}" "${JOB_SECRET}" \
--slab-url "${SLAB_URL}"
env:
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_URL: ${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "WASM benchmarks (${{ matrix.browser }}) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (wasm-client-benchmarks)
name: benchmark_wasm_client/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, wasm-client-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -212,8 +217,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (wasm-client-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,18 @@
# Run all benchmarks displayed in the tfhe-rs whitepaper.
name: benchmark_whitepaper
on:
workflow_dispatch:
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
placeholder-job:
name: benchmark_whitepaper/placeholder-job
runs-on: ubuntu-latest
steps:
- run: |
echo "Hello this is a Placeholder Workflow"


@@ -1,231 +0,0 @@
# Run PKE Zero-Knowledge benchmarks on an instance and return parsed results to Slab CI bot.
name: PKE ZK benchmarks
on:
workflow_dispatch:
inputs:
bench_type:
description: "Benchmarks type"
type: choice
default: latency
options:
- latency
- throughput
- both
push:
branches:
- main
schedule:
# Weekly benchmarks will be triggered each Saturday at 3a.m.
- cron: '0 3 * * 6'
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
PARSE_INTEGER_BENCH_CSV_FILE: tfhe_rs_integer_benches_${{ github.sha }}.csv
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
jobs:
should-run:
runs-on: ubuntu-latest
if: github.event_name == 'workflow_dispatch' ||
((github.event_name == 'push' || github.event_name == 'schedule') && github.repository == 'zama-ai/tfhe-rs')
outputs:
zk_pok_changed: ${{ steps.changed-files.outputs.zk_pok_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
with:
files_yaml: |
zk_pok:
- tfhe/Cargo.toml
- tfhe-csprng/**
- tfhe-fft/**
- tfhe-zk-pok/**
- tfhe/src/core_crypto/**
- tfhe/src/shortint/**
- tfhe/src/integer/**
- tfhe/src/zk.rs
- tfhe/benches/integer/zk_pke.rs
- .github/workflows/zk_pke_benchmark.yml
prepare-matrix:
name: Prepare operations matrix
runs-on: ubuntu-latest
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
outputs:
bench_type: ${{ steps.set_bench_type.outputs.bench_type }}
steps:
- name: Set benchmark types
if: github.event_name == 'workflow_dispatch'
run: |
if [[ "${{ inputs.bench_type }}" == "both" ]]; then
echo "BENCH_TYPE=[\"latency\", \"throughput\"]" >> "${GITHUB_ENV}"
else
echo "BENCH_TYPE=[\"${{ inputs.bench_type }}\"]" >> "${GITHUB_ENV}"
fi
- name: Default benchmark type
if: github.event_name != 'workflow_dispatch'
run: |
echo "BENCH_TYPE=[\"latency\"]" >> "${GITHUB_ENV}"
- name: Set benchmark types output
id: set_bench_type
run: |
echo "bench_type=${{ toJSON(env.BENCH_TYPE) }}" >> "${GITHUB_OUTPUT}"
setup-instance:
name: Setup instance (pke-zk-benchmarks)
runs-on: ubuntu-latest
needs: [ should-run, prepare-matrix ]
if: github.event_name == 'workflow_dispatch' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
(github.event_name == 'push' &&
github.repository == 'zama-ai/tfhe-rs' &&
needs.should-run.outputs.zk_pok_changed == 'true')
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: bench
pke-zk-benchmarks:
name: Execute PKE ZK benchmarks
if: needs.setup-instance.result != 'skipped'
needs: [ prepare-matrix, setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{github.event_name}}_${{ github.ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
max-parallel: 1
matrix:
bench_type: ${{ fromJSON(needs.prepare-matrix.outputs.bench_type) }}
steps:
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Get benchmark details
run: |
{
echo "BENCH_DATE=$(date --iso-8601=seconds)";
echo "COMMIT_DATE=$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})";
echo "COMMIT_HASH=$(git describe --tags --dirty)";
} >> "${GITHUB_ENV}"
- name: Install rust
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: nightly
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Run benchmarks with AVX512
run: |
make BENCH_TYPE=${{ matrix.bench_type }} bench_integer_zk
- name: Parse results
run: |
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware "hpc7a.96xlarge" \
--backend cpu \
--project-version "${{ env.COMMIT_HASH }}" \
--branch ${{ github.ref_name }} \
--commit-date "${{ env.COMMIT_DATE }}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--bench-type ${{ matrix.bench_type }}
- name: Parse CRS sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe/pke_zk_crs_sizes.csv ${{ env.RESULTS_FILENAME }} \
--object-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08
with:
name: ${{ github.sha }}_integer_zk
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
repository: zama-ai/slab
path: slab
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
python3 slab/scripts/data_sender.py ${{ env.RESULTS_FILENAME }} "${{ secrets.JOB_SECRET }}" \
--slab-url "${{ secrets.SLAB_URL }}"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "PKE ZK benchmarks finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (pke-zk-benchmarks)
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, pke-zk-benchmarks ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (pke-zk-benchmarks) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

.github/workflows/cargo_audit.yml

@@ -0,0 +1,44 @@
# Run cargo audit
name: cargo_audit
on:
workflow_dispatch:
schedule:
# runs every day at 4am UTC
- cron: '0 4 * * *'
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members and GitHub can trigger this workflow
jobs:
audit:
name: cargo_audit/audit
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Audit dependencies
run: |
make audit_dependencies
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "cargo-audit finished with status: ${{ job.status }}. ([action run](${{ env.ACTION_RUN_URL }}))"


@@ -1,4 +1,4 @@
name: Cargo Build TFHE-rs
name: cargo_build
on:
pull_request:
@@ -8,84 +8,155 @@ env:
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
jobs:
cargo-builds:
runs-on: ${{ matrix.os }}
permissions:
contents: read
jobs:
prepare-parallel-pcc-matrix:
name: cargo_build/prepare-parallel-pcc-matrix
runs-on: ubuntu-latest
outputs:
matrix_command: ${{ steps.set-pcc-commands-matrix.outputs.commands }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: "false"
token: ${{ env.CHECKOUT_TOKEN }}
# Fetch all the Make recipes that start with `pcc_batch_`
- name: Set pcc commands matrix
id: set-pcc-commands-matrix
run: |
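# The pipeline below turns matching recipe names into a JSON array. For a hypothetical
# Makefile defining `pcc_batch_core:` and `pcc_batch_integer:`, it would emit
# commands=["pcc_batch_core","pcc_batch_integer"], later expanded by fromJson() into the matrix.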
COMMANDS=$(grep -oE '^pcc_batch_[^:]*:' Makefile | sed 's/:/\"/; s/^/\"/' | paste -sd,)
echo "commands=[${COMMANDS}]" >> "$GITHUB_OUTPUT"
parallel-pcc-cpu:
name: cargo_build/parallel-pcc-cpu
needs: prepare-parallel-pcc-matrix
strategy:
matrix:
# GitHub macos-latest runners are now M1 Macs, so we use our own; we limit what runs so it stays fast
# even with a few PRs
os: [large_ubuntu_16, macos-latest, windows-latest]
command: ${{ fromJson(needs.prepare-parallel-pcc-matrix.outputs.matrix_command)}}
fail-fast: false
uses: ./.github/workflows/cargo_build_common.yml
with:
run-pcc-cpu-batch: ${{ matrix.command }}
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
pcc-hpu:
name: cargo_build/pcc-hpu
uses: ./.github/workflows/cargo_build_common.yml
with:
run-pcc-hpu: true
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
build-tfhe-full:
name: cargo_build/build-tfhe-full
uses: ./.github/workflows/cargo_build_common.yml
with:
run-build-tfhe-full: true
# GitHub macos-latest runners are now M1 Macs, so we use our own; we limit what runs so it stays fast
# even with a few PRs
extra-runners-to-use: macos-latest-xlarge,large_windows_16_latest
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
build:
name: cargo_build/build
uses: ./.github/workflows/cargo_build_common.yml
with:
run-build: true
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
build-layers:
name: cargo_build/build-layers
uses: ./.github/workflows/cargo_build_common.yml
with:
run-build-layers: true
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
build-c-api:
name: cargo_build/build-c-api
uses: ./.github/workflows/cargo_build_common.yml
with:
run-build-c-api: true
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
JOB_SECRET: ${{ secrets.JOB_SECRET }}
SLAB_ACTION_TOKEN: ${{ secrets.SLAB_ACTION_TOKEN }}
SLAB_URL: ${{ secrets.SLAB_URL }}
SLAB_BASE_URL: ${{ secrets.SLAB_BASE_URL }}
cargo-builds:
name: cargo_build/cargo-builds (bpr)
needs: [ parallel-pcc-cpu, pcc-hpu, build-tfhe-full, build, build-layers, build-c-api ]
if: ${{ always() }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
with:
toolchain: stable
- name: Install and run newline linter checks
if: ${{ contains(matrix.os, 'ubuntu') }}
- name: Check all builds success
if: needs.parallel-pcc-cpu.outputs.builds-result == 'success' &&
needs.pcc-hpu.outputs.builds-result == 'success' &&
needs.build-tfhe-full.outputs.builds-result == 'success' &&
needs.build.outputs.builds-result == 'success' &&
needs.build-layers.outputs.builds-result == 'success' &&
needs.build-c-api.outputs.builds-result == 'success'
run: |
wget https://github.com/fernandrone/linelint/releases/download/0.0.6/linelint-linux-amd64
echo "16b70fb7b471d6f95cbdc0b4e5dc2b0ac9e84ba9ecdc488f7bdf13df823aca4b linelint-linux-amd64" > checksum
sha256sum -c checksum || exit 1
chmod +x linelint-linux-amd64
mv linelint-linux-amd64 /usr/local/bin/linelint
make check_newline
echo "All tfhe-rs build checks passed"
- name: Run pcc checks
if: ${{ contains(matrix.os, 'ubuntu') }}
- name: Check builds failure
if: needs.parallel-pcc-cpu.outputs.builds-result != 'success' ||
needs.pcc-hpu.outputs.builds-result != 'success' ||
needs.build-tfhe-full.outputs.builds-result != 'success' ||
needs.build.outputs.builds-result != 'success' ||
needs.build-layers.outputs.builds-result != 'success' ||
needs.build-c-api.outputs.builds-result != 'success'
run: |
make pcc
- name: Build tfhe-csprng
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_tfhe_csprng
- name: Build Release core
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_core AVX512_SUPPORT=ON
make build_core_experimental AVX512_SUPPORT=ON
- name: Build Release boolean
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_boolean
- name: Build Release shortint
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_shortint
- name: Build Release integer
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_integer
- name: Build Release tfhe full
run: |
make build_tfhe_full
- name: Build Release c_api
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_c_api
- name: Build coverage tests
if: ${{ contains(matrix.os, 'ubuntu') }}
run: |
make build_tfhe_coverage
# The wasm build check is a bit annoying to set up here and is done during the tests in
# aws_tfhe_tests.yml
echo "Some tfhe-rs build checks failed"
exit 1

.github/workflows/cargo_build_common.yml

@@ -0,0 +1,258 @@
name: cargo_build_common
on:
workflow_call:
inputs:
run-pcc-cpu-batch:
type: string
run-pcc-hpu:
type: boolean
default: false
run-build:
type: boolean
default: false
run-build-layers:
type: boolean
default: false
run-build-c-api:
type: boolean
default: false
run-build-tfhe-full:
type: boolean
default: false
extra-runners-to-use: # Additional runners to run the build commands against
type: string # Use comma separated values to generate an array
default: ""
outputs:
builds-result:
description: "Result of builds job"
value: ${{ jobs.builds.outputs.result }}
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLAB_ACTION_TOKEN:
required: true
SLAB_BASE_URL:
required: true
SLAB_URL:
required: true
JOB_SECRET:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
LINELINT_VERSION: 0.0.6
LINELINT_CHECKSUM: "16b70fb7b471d6f95cbdc0b4e5dc2b0ac9e84ba9ecdc488f7bdf13df823aca4b"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
setup-instance:
name: cargo_build_common/setup-instance
if: inputs.run-pcc-cpu-batch || inputs.run-pcc-hpu || inputs.run-build || inputs.run-build-layers || inputs.run-build-tfhe-full || inputs.run-build-c-api
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
run_attempt: ${{ github.run_attempt }} # On a re-run with a successful previous run for this job, the run_attempt will not be incremented
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: cpu-small
# This instance will be spawned especially for pull-requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
prepare-matrix:
name: cargo_build_common/prepare-matrix
runs-on: ubuntu-latest
needs: setup-instance
outputs:
runners: ${{ steps.set_matrix_runners.outputs.runners }}
steps:
- name: Parse runners
shell: python
env:
INPUTS_EXTRA_RUNNERS_TO_USE: ${{ inputs.extra-runners-to-use }}
REMOTE_RUNNER_LABEL: ${{ needs.setup-instance.outputs.runner-name }}
run: |
import os
inputs_extra_runners = os.environ["INPUTS_EXTRA_RUNNERS_TO_USE"]
remote_runner_label = os.environ["REMOTE_RUNNER_LABEL"]
env_file = os.environ["GITHUB_ENV"]
runners = [remote_runner_label, ]
if inputs_extra_runners:
split_runners = inputs_extra_runners.replace(" ", "").split(",")
runners.extend(split_runners)
with open(env_file, "a") as f:
f.write(f"""RUNNERS=["{'", "'.join(runners)}"]\n""")
- name: Set matrix runners outputs
id: set_matrix_runners
run: | # zizmor: ignore[template-injection] these env variables are safe
echo "runners=${{ toJSON(env.RUNNERS) }}" >> "${GITHUB_OUTPUT}"
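For reference, a standalone sketch of the runner-list assembly performed by the prepare-matrix job above, assuming the same comma-separated extra-runners-to-use convention (illustrative only; the workflow itself writes the result through GITHUB_ENV and GITHUB_OUTPUT):

import json

def build_runner_matrix(remote_runner_label: str, extra_runners_to_use: str) -> str:
    # Always include the provisioned (or fallback) runner, then any extra runners
    # passed by the caller as comma-separated values, whitespace ignored.
    runners = [remote_runner_label]
    if extra_runners_to_use:
        runners.extend(extra_runners_to_use.replace(" ", "").split(","))
    return json.dumps(runners)

# Hypothetical Slab label plus two extra runners requested by the caller workflow.
print(build_runner_matrix("slab-aws-cpu-small-1234", "large_ubuntu_16, macos-latest"))
# -> ["slab-aws-cpu-small-1234", "large_ubuntu_16", "macos-latest"]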
builds:
name: cargo_build_common/builds
needs: [ setup-instance, prepare-matrix ]
runs-on: ${{ matrix.runner }}
strategy:
matrix:
runner: ${{ fromJSON(needs.prepare-matrix.outputs.runners) }}
fail-fast: false
outputs:
result: ${{ steps.set_builds_result.outputs.result }}
steps:
- name: Checkout tfhe-rs repo
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Install newline linter
if: inputs.run-build && matrix.runner == env.EXTERNAL_CONTRIBUTION_RUNNER
run: |
wget "https://github.com/fernandrone/linelint/releases/download/${LINELINT_VERSION}/linelint-linux-amd64"
echo "${LINELINT_CHECKSUM} linelint-linux-amd64" > checksum
sha256sum -c checksum
chmod +x linelint-linux-amd64
ln -s "$(pwd)/linelint-linux-amd64" /usr/local/bin/linelint
- name: Run pcc checks batch
if: inputs.run-pcc-cpu-batch
run: |
make "${COMMAND}"
env:
COMMAND: ${{ inputs.run-pcc-cpu-batch }}
- name: Run Hpu pcc checks
if: inputs.run-pcc-hpu
run: |
make pcc_hpu
- name: Build Release tfhe full
if: inputs.run-build-tfhe-full
run: |
make build_tfhe_full
- name: Run newline linter checks
if: inputs.run-build
run: |
make check_newline
- name: Build tfhe-csprng
if: inputs.run-build
run: |
make build_tfhe_csprng
- name: Build with MSRV
if: inputs.run-build
run: |
make build_tfhe_msrv
- name: Build coverage tests
if: inputs.run-build
run: |
make build_tfhe_coverage
- name: Build Release core
if: inputs.run-build-layers
run: |
make build_core AVX512_SUPPORT=ON
make build_core_experimental AVX512_SUPPORT=ON
- name: Build Release boolean
if: inputs.run-build-layers
run: |
make build_boolean
- name: Build Release shortint
if: inputs.run-build-layers
run: |
make build_shortint
- name: Build Release integer
if: inputs.run-build-layers
run: |
make build_integer
- name: Build Release c_api
if: inputs.run-build-c-api
run: |
make build_c_api
# The wasm build check is a bit annoying to set-up here and is done during the tests in
# aws_tfhe_tests.yml
- name: Set result output
id: set_builds_result
if: ${{ always() }}
run: | # zizmor: ignore[template-injection] this context variable is safe
echo "result=${{ job.status }}" >> "${GITHUB_OUTPUT}"
teardown-instance:
name: cargo_build_common/teardown-instance
if: ${{ always() &&
needs.setup-instance.result == 'success' &&
github.run_attempt == needs.setup-instance.outputs.run_attempt }} # Only run if setup-instance has been executed during this run attempt
needs: [setup-instance, builds]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cargo-builds) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
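
One detail worth noting in cargo_build_common.yml: teardown-instance only stops the remote instance when setup-instance actually executed in the current run attempt. On a re-run where setup-instance already succeeded, its recorded run_attempt output lags behind github.run_attempt, so teardown is skipped rather than trying to stop an instance that was never started. A small sketch of that predicate, assuming string values as in the workflow expressions:

def should_teardown(setup_result: str, current_run_attempt: str, setup_run_attempt: str) -> bool:
    # Mirrors: always() && setup-instance succeeded && run attempt matches.
    return setup_result == "success" and current_run_attempt == setup_run_attempt

# First attempt: the instance was just started, so it must be torn down.
assert should_teardown("success", "1", "1")
# Re-run of failed jobs: setup-instance kept its attempt-1 output, no new instance exists.
assert not should_teardown("success", "2", "1")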


@@ -1,18 +1,23 @@
# Build tfhe-fft
name: Cargo Build tfhe-fft
name: cargo_build_tfhe_fft
on:
pull_request:
env:
CARGO_TERM_COLOR: always
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
cargo-builds-fft:
name: cargo_build_tfhe_fft/cargo-builds-fft (bpr)
runs-on: ${{ matrix.runner_type }}
strategy:
@@ -21,7 +26,10 @@ jobs:
fail-fast: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af


@@ -1,25 +1,33 @@
# Build tfhe-ntt
name: Cargo Build tfhe-ntt
name: cargo_build_tfhe_ntt
on:
pull_request:
env:
CARGO_TERM_COLOR: always
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
cargo-builds-ntt:
name: cargo_build_tfhe_ntt/cargo-builds-ntt (bpr)
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
fail-fast: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af


@@ -1,25 +1,65 @@
# Test tfhe-fft
name: Cargo Test tfhe-fft
name: cargo_test_fft
on:
pull_request:
push:
branches:
- main
env:
CARGO_TERM_COLOR: always
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
group: ${{ github.workflow }}-${{ github.head_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: true
permissions:
contents: read
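The reworked concurrency group above appends the commit SHA only for pushes to main, so pull-request runs still cancel each other per branch while main runs are never cancelled by a later push. A sketch of how that expression evaluates (our reading of the "&& ... ||" ternary, illustrative only):

def concurrency_group(workflow: str, head_ref: str, ref: str, sha: str) -> str:
    # group: <workflow>-<head_ref><sha-if-main>
    suffix = sha if ref == "refs/heads/main" else ""
    return f"{workflow}-{head_ref}{suffix}"

# Pull request: runs share one group per branch, so a new push cancels the previous run.
assert concurrency_group("cargo_test_fft", "my-feature", "refs/pull/42/merge", "abc1234") == "cargo_test_fft-my-feature"
# Push to main: head_ref is empty and the SHA makes each group unique, so nothing is cancelled.
assert concurrency_group("cargo_test_fft", "", "refs/heads/main", "abc1234") == "cargo_test_fft-abc1234"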
jobs:
should-run:
name: cargo_test_fft/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read # Needed to check for file change
outputs:
fft_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.fft_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
fft:
- tfhe/Cargo.toml
- Makefile
- tfhe-fft/**
- '.github/workflows/cargo_test_fft.yml'
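The should-run job above lets pull requests skip the whole fft test matrix when none of the listed paths changed (pushes to main always run). A rough approximation of that filter, using fnmatch as a stand-in for the action's glob semantics and hypothetical file paths:

from fnmatch import fnmatch

FFT_FILTER = ["tfhe/Cargo.toml", "Makefile", "tfhe-fft/**", ".github/workflows/cargo_test_fft.yml"]

def fft_any_changed(changed_files: list) -> bool:
    return any(fnmatch(path, pattern) for path in changed_files for pattern in FFT_FILTER)

# A documentation-only change does not trigger the fft tests...
assert not fft_any_changed(["README.md"])
# ...but touching anything under tfhe-fft/ (or the Makefile) does.
assert fft_any_changed(["tfhe-fft/src/lib.rs"])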
cargo-tests-fft:
name: cargo_test_fft/cargo-tests-fft
needs: should-run
if: needs.should-run.outputs.fft_test == 'true'
runs-on: ${{ matrix.runner_type }}
strategy:
matrix:
runner_type: [ubuntu-latest, macos-latest, windows-latest]
runner_type: [ ubuntu-latest, macos-latest, windows-latest ]
fail-fast: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
@@ -27,45 +67,63 @@ jobs:
toolchain: stable
override: true
- name: Test debug
- name: Test avx2
run: |
make test_fft
- name: Test serialization
run: make test_fft_serde
- name: Test no-std
- name: Test no-std avx2
run: |
make test_fft_no_std
cargo-tests-fft-nightly:
runs-on: ${{ matrix.runner_type }}
strategy:
matrix:
runner_type: [ubuntu-latest, macos-latest, windows-latest]
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Test nightly
- name: Test avx512
run: |
make test_fft_nightly
make test_fft_avx512
- name: Test no-std nightly
- name: Test no-std avx512
run: |
make test_fft_no_std_nightly
make test_fft_no_std_avx512
cargo-tests-fft-node-js:
runs-on: "ubuntu-latest"
name: cargo_test_fft/cargo-tests-fft-node-js
needs: should-run
if: needs.should-run.outputs.fft_test == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Test node js
run: |
make install_node
make test_fft_node_js_ci
cargo-tests-fft-successful:
name: cargo_test_fft/cargo-tests-fft-successful (bpr)
needs: [ should-run, cargo-tests-fft, cargo-tests-fft-node-js ]
if: ${{ always() }}
runs-on: ubuntu-latest
steps:
- name: Tests do not need to run
if: needs.should-run.outputs.fft_test == 'false'
run: |
echo "tfhe-fft files haven't changed, tests don't need to run"
- name: Check all tests passed
if: needs.should-run.outputs.fft_test == 'true' &&
needs.cargo-tests-fft.result == 'success' &&
needs.cargo-tests-fft-node-js.result == 'success'
run: |
echo "All tfhe-fft tests passed"
- name: Check tests failure
if: needs.should-run.outputs.fft_test == 'true' &&
(needs.cargo-tests-fft.result != 'success' ||
needs.cargo-tests-fft-node-js.result != 'success')
run: |
echo "Some tfhe-fft tests failed"
exit 1


@@ -1,25 +1,96 @@
# Test tfhe-ntt
name: Cargo Test tfhe-ntt
name: cargo_test_ntt
on:
pull_request:
push:
branches:
- main
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
group: ${{ github.workflow }}-${{ github.head_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: true
permissions:
contents: read
jobs:
should-run:
name: cargo_test_ntt/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read # Needed to check for file change
outputs:
ntt_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.ntt_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: "false"
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
ntt:
- tfhe/Cargo.toml
- Makefile
- tfhe-ntt/**
- '.github/workflows/cargo_test_ntt.yml'
setup-instance:
name: cargo_test_ntt/setup-instance
needs: should-run
if: needs.should-run.outputs.ntt_test == 'true'
runs-on: ubuntu-latest
outputs:
matrix_os: ${{ steps.set-os-matrix.outputs.matrix_os }}
runner-name: ${{ steps.start-remote-instance.outputs.label }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: cpu-small
- name: Set os matrix
id: set-os-matrix
env:
SLAB_INSTANCE: ${{ steps.start-remote-instance.outputs.label }}
run: |
INSTANCE_TO_USE="${SLAB_INSTANCE:-ubuntu-latest}"
echo "matrix_os=[\"${INSTANCE_TO_USE}\", \"macos-latest\", \"windows-latest\"]" >> "$GITHUB_OUTPUT"
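The set-os-matrix step above keeps the macOS and Windows legs on GitHub-hosted runners and swaps the Linux leg onto the Slab-provisioned instance when a label is available. An illustrative equivalent:

import json

def os_matrix(slab_instance_label: str = "") -> str:
    # Fall back to the GitHub-hosted runner when no remote instance was started.
    instance_to_use = slab_instance_label or "ubuntu-latest"
    return json.dumps([instance_to_use, "macos-latest", "windows-latest"])

print(os_matrix("slab-aws-cpu-small-5678"))  # hypothetical label from start-remote-instance
# -> ["slab-aws-cpu-small-5678", "macos-latest", "windows-latest"]
print(os_matrix())
# -> ["ubuntu-latest", "macos-latest", "windows-latest"]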
cargo-tests-ntt:
name: cargo_test_ntt/cargo-tests-ntt
needs: [should-run, setup-instance]
if: needs.should-run.outputs.ntt_test == 'true'
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
os: ${{fromJson(needs.setup-instance.outputs.matrix_os)}}
fail-fast: false
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: "false"
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
@@ -27,28 +98,63 @@ jobs:
toolchain: stable
override: true
- name: Test debug
- name: Test avx2
run: make test_ntt
- name: Test no-std
- name: Test no-std avx2
run: make test_ntt_no_std
cargo-tests-ntt-nightly:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
- name: Test avx512
run: make test_ntt_avx512
- name: Test no-std avx512
run: make test_ntt_no_std_avx512
cargo-tests-ntt-successful:
name: cargo_test_ntt/cargo-tests-ntt-successful (bpr)
needs: [should-run, cargo-tests-ntt]
if: ${{ always() }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- name: Tests do not need to run
if: needs.should-run.outputs.ntt_test == 'false'
run: |
echo "tfhe-ntt files haven't changed, tests don't need to run"
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
- name: Check all tests success
if: needs.should-run.outputs.ntt_test == 'true' &&
needs.cargo-tests-ntt.result == 'success'
run: |
echo "All tfhe-ntt tests passed"
- name: Check tests failure
if: needs.should-run.outputs.ntt_test == 'true' &&
needs.cargo-tests-ntt.result != 'success'
run: |
echo "Some tfhe-ntt tests failed"
exit 1
teardown-instance:
name: cargo_test_ntt/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [setup-instance, cargo-tests-ntt-successful]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
toolchain: nightly
override: true
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Test nightly
run: make test_ntt_nightly
- name: Test no-std nightly
run: make test_ntt_no_std_nightly
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cargo-tests-ntt) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,39 +0,0 @@
# Check if an actor is a collaborator and has write access
name: Check Actor Permissions
on:
workflow_call:
inputs:
username:
type: string
default: ${{ github.triggering_actor }}
outputs:
is_authorized:
value: ${{ jobs.check-actor-permission.outputs.actor_authorized }}
secrets:
TOKEN:
required: true
jobs:
check-actor-permission:
runs-on: ubuntu-latest
outputs:
actor_authorized: ${{ steps.check-access.outputs.require-result }}
steps:
- name: Get User Permission
id: check-access
uses: actions-cool/check-user-permission@7b90a27f92f3961b368376107661682c441f6103 # v2.3.0
with:
require: write
username: ${{ inputs.username }}
env:
GITHUB_TOKEN: ${{ secrets.TOKEN }}
- name: Check User Permission
if: ${{ !(inputs.username == 'dependabot[bot]' || inputs.username == 'cla-bot[bot]') &&
steps.check-access.outputs.require-result == 'false' }}
run: |
echo "${{ inputs.username }} does not have permissions on this repo."
echo "Current permission level is ${{ steps.check-access.outputs.user-permission }}"
echo "Job originally triggered by ${{ github.actor }}"
exit 1


@@ -1,40 +0,0 @@
# Check if there is any change in CI files since last commit
name: Check changes in CI files
on:
workflow_call:
inputs:
checkout_ref:
type: string
required: true
outputs:
ci_file_changed:
value: ${{ jobs.check-changes.outputs.ci_file_changed }}
secrets:
REPO_CHECKOUT_TOKEN:
required: true
jobs:
check-changes:
runs-on: ubuntu-latest
permissions:
pull-requests: read
outputs:
ci_file_changed: ${{ steps.changed-files.outputs.ci_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ inputs.checkout_ref }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
with:
files_yaml: |
ci:
- .github/**
- ci/**


@@ -1,12 +1,19 @@
# Check commit and PR compliance
name: Check commit and PR compliance
name: check_commit
on:
pull_request:
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow (via manual approval for PR from forks)
jobs:
check-commit-pr:
name: Check commit and PR
name: check_commit/check-commit-pr (bpr)
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write # Permission needed to scan commits in a pull-request and write issue comment
steps:
- name: Check first line
uses: gsactions/commit-message-checker@16fa2d5de096ae0d35626443bcd24f1e756cafee


@@ -1,32 +0,0 @@
# Check if a pull request fulfill pre-conditions to be accepted
name: Check PR from fork
on:
pull_request_target:
paths:
- '.github/**'
- 'ci/**'
jobs:
# Fail if the triggering actor is not part of Zama organization.
check-user-permission:
name: Check event user permissions
uses: ./.github/workflows/check_actor_permissions.yml
with:
username: ${{ github.event.pull_request.user.login }}
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
write-comment:
name: Write PR comment
if: ${{ always() && needs.check-user-permission.outputs.is_authorized == 'false' }}
needs: check-user-permission
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- name: Write warning
uses: thollander/actions-comment-pull-request@24bffb9b452ba05a4f3f77933840a6a841d1b32b
with:
message: |
CI files have changed. Only Zama organization members are authorized to modify these files.


@@ -1,36 +1,56 @@
# Lint and check CI
name: CI Lint and Checks
name: ci_lint
on:
pull_request:
env:
ACTIONLINT_VERSION: 1.6.27
ACTIONLINT_VERSION: 1.7.7
ACTIONLINT_CHECKSUM: "023070a287cd8cccd71515fedc843f1985bf96c436b7effaecce67290e7e0757"
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
permissions:
contents: read
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow (via manual approval for PR from forks)
jobs:
lint-check:
name: Lint and checks
name: ci_lint/lint-check (bpr)
runs-on: ubuntu-latest
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Get actionlint
run: |
bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash) ${{ env.ACTIONLINT_VERSION }}
echo "f2ee6d561ce00fa93aab62a7791c1a0396ec7e8876b2a8f2057475816c550782 actionlint" > checksum
wget "https://github.com/rhysd/actionlint/releases/download/v${ACTIONLINT_VERSION}/actionlint_${ACTIONLINT_VERSION}_linux_amd64.tar.gz"
echo "${ACTIONLINT_CHECKSUM} actionlint_${ACTIONLINT_VERSION}_linux_amd64.tar.gz" > checksum
sha256sum -c checksum
tar -xf actionlint_"${ACTIONLINT_VERSION}"_linux_amd64.tar.gz actionlint
ln -s "$(pwd)/actionlint" /usr/local/bin/
- name: Lint workflows
run: |
make lint_workflow
- name: Get Zizmor version to use
id: get_zizmor
run: |
echo "version=$(make zizmor_version)" >> "${GITHUB_OUTPUT}"
- name: Check workflows security
uses: zizmorcore/zizmor-action@e639db99335bc9038abc0e066dfcd72e23d26fb4 # v0.3.0
with:
advanced-security: 'false' # Print results directly in logs
persona: pedantic
version: ${{ steps.get_zizmor.outputs.version }}
- name: Ensure SHA pinned actions
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@c3a2b64f69b7a1542a68f44d9edbd9ec3fc1455e # v3.0.20
uses: zgosalvez/github-actions-ensure-sha-pinned-actions@9e9574ef04ea69da568d6249bd69539ccc704e74 # v4.0.0
with:
allowlist: |
slsa-framework/slsa-github-generator
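
Both the linelint and actionlint installs in these workflows follow the same pattern: download a pinned release, verify it against a hard-coded SHA-256 checksum, and only then put it on the PATH. A hedged Python sketch of the verification step (the workflows use sha256sum -c):

import hashlib

def verify_sha256(file_path: str, expected_hex: str) -> None:
    # Equivalent in spirit to writing a checksum file and running sha256sum -c on it.
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_hex:
        raise RuntimeError(f"checksum mismatch for {file_path}: got {digest}")

# Example with the actionlint values pinned above (local file name is hypothetical).
# verify_sha256("actionlint_1.7.7_linux_amd64.tar.gz",
#               "023070a287cd8cccd71515fedc843f1985bf96c436b7effaecce67290e7e0757")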


@@ -1,4 +1,4 @@
name: Code Coverage
name: code_coverage
env:
CARGO_TERM_COLOR: always
@@ -10,22 +10,28 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Code coverage workflow is only run via workflow_dispatch event since its execution duration is not yet stable.
permissions:
contents: read
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
setup-instance:
name: Setup instance (code-coverage)
name: code_coverage/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -34,26 +40,29 @@ jobs:
backend: aws
profile: cpu-small
code-coverage:
name: Code coverage tests
code-coverage-tests:
name: code_coverage/code-coverage-tests
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.event_name }}_${{ github.ref }}
group: ${{ github.workflow_ref }}_${{ github.event_name }}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 5760 # 4 days
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
tfhe:
@@ -83,7 +92,7 @@ jobs:
make test_shortint_cov
- name: Upload tfhe coverage to Codecov
uses: codecov/codecov-action@13ce06bfc6bbe3ecf90edbbf1bc32fe5978ca1d3
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7
if: steps.changed-files.outputs.tfhe_any_changed == 'true'
with:
token: ${{ secrets.CODECOV_TOKEN }}
@@ -97,7 +106,7 @@ jobs:
make test_integer_cov
- name: Upload tfhe coverage to Codecov
uses: codecov/codecov-action@13ce06bfc6bbe3ecf90edbbf1bc32fe5978ca1d3
uses: codecov/codecov-action@5a1091511ad55cbe89839c7260b706298ca349f7
if: steps.changed-files.outputs.tfhe_any_changed == 'true'
with:
token: ${{ secrets.CODECOV_TOKEN }}
@@ -106,22 +115,22 @@ jobs:
files: integer/cobertura.xml
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Code coverage finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (code-coverage)
name: code_coverage/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, code-coverage ]
needs: [ setup-instance, code-coverage-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -131,8 +140,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (code-coverage) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (code-coverage-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: CSPRNG randomness testing Workflow
name: csprng_randomness_tests
env:
CARGO_TERM_COLOR: always
@@ -10,57 +10,34 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (csprng-randomness-tests)
needs: check-user-permission
name: csprng_randomness_tests/setup-instance
if: ${{ github.event_name == 'workflow_dispatch' || contains(github.event.label.name, 'approved') }}
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -69,23 +46,29 @@ jobs:
backend: aws
profile: cpu-small
# This instance will be spawned especially for pull-requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
csprng-randomness-tests:
name: CSPRNG randomness tests
name: csprng_randomness_tests/csprng-randomness-tests
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}_${{ github.sha }}_${{ github.event_name }}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -94,22 +77,23 @@ jobs:
make dieharder_csprng
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe-csprng randomness check finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "tfhe-csprng randomness check finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (csprng-randomness-tests)
name: csprng_randomness_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, csprng-randomness-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -119,8 +103,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (csprng-randomness-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (csprng-randomness-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,127 +0,0 @@
name: Close or Merge corresponding PR on the data repo
# When a PR with the data_PR tag is closed or merged, this will close the corresponding PR in the data repo.
env:
TARGET_REPO_API_URL: ${{ github.api_url }}/repos/zama-ai/tfhe-backward-compat-data
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
PR_BRANCH: ${{ github.head_ref || github.ref_name }}
CLOSE_TYPE: ${{ github.event.pull_request.merged && 'merge' || 'close' }}
# only trigger on pull request closed events
on:
pull_request:
types: [ closed ]
pull_request_target:
types: [ closed ]
# The same pattern is used for jobs that use the github api:
# - save the result of the API call in the env var "GH_API_RES". Since the var is multiline
# we use this trick: https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#example-of-a-multiline-string
# - "set +e" will make sure we reach the last "echo EOF" even in case of error
# - "set -o pipefail" makes a piped command return the exit code of the first failing command
# - 'RES="$?"' and 'exit $RES' are used to return the error code if a command failed. Without them, with "set +e"
# the script would always return 0 because of the "echo EOF".
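The trick referenced above is GitHub Actions' heredoc-style syntax for multiline values: append NAME<<DELIMITER, then the value, then the delimiter on its own line to the file behind GITHUB_ENV. A hedged Python equivalent of that mechanism (the deleted workflow did this in shell with echo and a curl pipeline):

import os

def export_multiline_env(name: str, value: str, delimiter: str = "EOF") -> None:
    # Later steps in the same job will see "name" as an environment variable.
    with open(os.environ["GITHUB_ENV"], "a", encoding="utf-8") as env_file:
        env_file.write(f"{name}<<{delimiter}\n{value}\n{delimiter}\n")

# Example: store a multiline JSON API response for use by the following steps.
# export_multiline_env("TARGET_REPO_PR", api_response_json)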
jobs:
auto_close_job:
if: ${{ contains(github.event.pull_request.labels.*.name, 'data_PR') }}
runs-on: ubuntu-latest
steps:
- name: Find corresponding Pull Request in the data repo
run: |
{
set +e
set -o pipefail
echo 'TARGET_REPO_PR<<EOF'
curl --fail-with-body --no-progress-meter -L -X GET \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${{ env.TARGET_REPO_API_URL }}/pulls\?head=${{ github.repository_owner }}:${{ env.PR_BRANCH }} | jq -e '.[0]' | sed 's/null/{ "message": "corresponding PR not found" }/'
RES="$?"
echo EOF
} >> "${GITHUB_ENV}"
exit $RES
- name: Comment on the PR to indicate the reason of the close
run: |
{
set +e
set -o pipefail
echo 'GH_API_RES<<EOF'
curl --fail-with-body --no-progress-meter -L -X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.FHE_ACTIONS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${{ fromJson(env.TARGET_REPO_PR).comments_url }} \
-d '{ "body": "PR ${{ env.CLOSE_TYPE }}d because the corresponding PR in main repo was ${{ env.CLOSE_TYPE }}d: ${{ github.repository }}#${{ github.event.number }}" }'
RES="$?"
echo EOF
} >> "${GITHUB_ENV}"
exit $RES
- name: Merge the Pull Request in the data repo
if: ${{ github.event.pull_request.merged }}
run: |
{
set +e
set -o pipefail
echo 'GH_API_RES<<EOF'
curl --fail-with-body --no-progress-meter -L -X PUT \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.FHE_ACTIONS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${{ fromJson(env.TARGET_REPO_PR).url }}/merge \
-d '{ "merge_method": "rebase" }'
RES="$?"
echo EOF
} >> "${GITHUB_ENV}"
exit $RES
- name: Close the Pull Request in the data repo
if: ${{ !github.event.pull_request.merged }}
run: |
{
set +e
set -o pipefail
echo 'GH_API_RES<<EOF'
curl --fail-with-body --no-progress-meter -L -X PATCH \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.FHE_ACTIONS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${{ fromJson(env.TARGET_REPO_PR).url }} \
-d '{ "state": "closed" }'
RES="$?"
echo EOF
} >> "${GITHUB_ENV}"
exit $RES
- name: Delete the associated branch in the data repo
run: |
{
set +e
set -o pipefail
echo 'GH_API_RES<<EOF'
curl --fail-with-body --no-progress-meter -L -X DELETE \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer ${{ secrets.FHE_ACTIONS_TOKEN }}" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${{ env.TARGET_REPO_API_URL }}/git/refs/heads/${{ env.PR_BRANCH }}
RES="$?"
echo EOF
} >> "${GITHUB_ENV}"
exit $RES
- name: Slack Notification
if: ${{ always() && job.status == 'failure' }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Failed to auto-${{ env.CLOSE_TYPE }} PR on data repo: ${{ fromJson(env.GH_API_RES || env.TARGET_REPO_PR).message }}"


@@ -0,0 +1,99 @@
name: generate_svg_common
on:
workflow_call:
inputs:
backend:
type: string
hardware_name:
type: string
layer:
type: string
pbs_kind: # Valid values are 'classical', 'multi_bit' or 'any'
type: string
grouping_factor: # Valid values are 2, 3, or 4
type: string
default: 4
bench_type: # Valid values are 'latency', 'throughput'
type: string
backend_comparison:
type: boolean
default: false
time_span_days:
type: string
default: 60
output_filename:
type: string
required: true
secrets:
DATA_EXTRACTOR_DATABASE_USER:
required: true
DATA_EXTRACTOR_DATABASE_HOST:
required: true
DATA_EXTRACTOR_DATABASE_PASSWORD:
required: true
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
generate-table:
name: generate_svg_common/generate-table
runs-on: ubuntu-latest
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
- name: Produce table from database
if: inputs.backend_comparison == false
run: |
python3 -m pip install -r ci/data_extractor/requirements.txt
python3 ci/data_extractor/src/data_extractor.py "${OUTPUT_FILENAME}" \
--generate-svg \
--branch "${REF_NAME}" \
--backend "${BACKEND}" \
--hardware "${HARDWARE_NAME}" \
--tfhe-rs-layer "${LAYER}" \
--pbs-kind "${PBS_KIND}" \
--grouping-factor "${GROUPING_FACTOR}" \
--bench-type "${BENCH_TYPE}" \
--time-span-days "${TIME_SPAN}"
env:
OUTPUT_FILENAME: ${{ inputs.output_filename }}
REF_NAME: ${{ github.ref_name }}
BACKEND: ${{ inputs.backend }}
HARDWARE_NAME: ${{ inputs.hardware_name }}
LAYER: ${{ inputs.layer }}
PBS_KIND: ${{ inputs.pbs_kind }}
GROUPING_FACTOR: ${{ inputs.grouping_factor }}
BENCH_TYPE: ${{ inputs.bench_type }}
TIME_SPAN: ${{ inputs.time_span_days }}
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
- name: Produce backends comparison table from database
if: inputs.backend_comparison == true
run: |
python3 -m pip install -r ci/data_extractor/requirements.txt
python3 ci/data_extractor/src/data_extractor.py "${OUTPUT_FILENAME}" \
--generate-svg \
--backends-comparison \
--time-span-days "${TIME_SPAN}"
env:
OUTPUT_FILENAME: ${{ inputs.output_filename }}
TIME_SPAN: ${{ inputs.time_span_days }}
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
- name: Upload tables
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: ${{ github.sha }}_${{ inputs.backend }}_${{ inputs.layer }}_${{ inputs.pbs_kind }}_${{ inputs.bench_type }}_tables
# This will upload all the generated files
path: ${{ inputs.output_filename }}*.svg
retention-days: 60

.github/workflows/generate_svgs.yml (new file)

@@ -0,0 +1,191 @@
# Generate benchmark SVGs for public documentation
name: generate_documentation_svgs
on:
workflow_call:
inputs:
time_span_days:
type: string
required: true
generate-cpu-svgs:
type: boolean
default: true
generate-gpu-svgs:
type: boolean
default: true
generate-hpu-svgs:
type: boolean
default: true
secrets:
DATA_EXTRACTOR_DATABASE_USER:
required: true
DATA_EXTRACTOR_DATABASE_HOST:
required: true
DATA_EXTRACTOR_DATABASE_PASSWORD:
required: true
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
# -----------------------------------------------------------
# Integer benchmarks tables
# -----------------------------------------------------------
cpu-integer-latency-table:
name: generate_documentation_svgs/cpu-integer-latency-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-cpu-svgs
with:
backend: cpu
hardware_name: hpc7a.96xlarge
layer: integer
pbs_kind: classical
bench_type: latency
time_span_days: ${{ inputs.time_span_days }}
output_filename: cpu-integer-benchmark-tuniform-2m128-latency
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
cpu-integer-throughput-table:
name: generate_documentation_svgs/cpu-integer-throughput-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-cpu-svgs
with:
backend: cpu
hardware_name: hpc7a.96xlarge
layer: integer
pbs_kind: classical
bench_type: throughput
time_span_days: ${{ inputs.time_span_days }}
output_filename: cpu-integer-benchmark-tuniform-2m128-throughput
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
gpu-integer-latency-table:
name: generate_documentation_svgs/gpu-integer-latency-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-gpu-svgs
with:
backend: gpu
hardware_name: n3-H100-SXM5x8
layer: integer
pbs_kind: multi_bit
grouping_factor: 4
bench_type: latency
time_span_days: ${{ inputs.time_span_days }}
output_filename: gpu-integer-benchmark-h100x8-sxm5-multi-bit-tuniform-2m128-latency
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
gpu-integer-throughput-table:
name: generate_documentation_svgs/gpu-integer-throughput-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-gpu-svgs
with:
backend: gpu
hardware_name: n3-H100-SXM5x8
layer: integer
pbs_kind: multi_bit
grouping_factor: 4
bench_type: throughput
time_span_days: ${{ inputs.time_span_days }}
output_filename: gpu-integer-benchmark-h100x8-sxm5-multi-bit-tuniform-2m128-throughput
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
hpu-integer-latency-table:
name: generate_documentation_svgs/hpu-integer-latency-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-hpu-svgs
with:
backend: hpu
hardware_name: hpu_x1
layer: integer
pbs_kind: classical
bench_type: latency
time_span_days: ${{ inputs.time_span_days }}
output_filename: hpu-integer-benchmark-hpux1-tuniform-2m128-latency
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
hpu-integer-throughput-table:
name: generate_documentation_svgs/hpu-integer-throughput-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-hpu-svgs
with:
backend: hpu
hardware_name: hpu_x1
layer: integer
pbs_kind: classical
bench_type: throughput
time_span_days: ${{ inputs.time_span_days }}
output_filename: hpu-integer-benchmark-hpux1-tuniform-2m128-throughput
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
backend-comparison-latency-table:
name: generate_documentation_svgs/backend-comparison-latency-table
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-cpu-svgs && inputs.generate-gpu-svgs && inputs.generate-hpu-svgs
with:
backend_comparison: true
time_span_days: ${{ inputs.time_span_days }}
output_filename: cpu-gpu-hpu-integer-benchmark-fheuint64-tuniform-2m128-ciphertext
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
# -----------------------------------------------------------
# PBS benchmarks tables
# -----------------------------------------------------------
cpu-pbs-tables:
name: generate_documentation_svgs/cpu-pbs-tables
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-cpu-svgs
with:
backend: cpu
hardware_name: hpc7a.96xlarge
layer: core_crypto
pbs_kind: any
grouping_factor: 4
bench_type: latency
time_span_days: ${{ inputs.time_span_days }}
output_filename: cpu-pbs-benchmark
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}
gpu-pbs-tables:
name: generate_documentation_svgs/gpu-pbs-tables
uses: ./.github/workflows/generate_svg_common.yml
if: inputs.generate-gpu-svgs
with:
backend: gpu
hardware_name: n3-H100-SXM5x8
layer: core_crypto
pbs_kind: any
grouping_factor: 4
bench_type: latency
time_span_days: ${{ inputs.time_span_days }}
output_filename: gpu-pbs-benchmark
secrets:
DATA_EXTRACTOR_DATABASE_USER: ${{ secrets.DATA_EXTRACTOR_DATABASE_USER }}
DATA_EXTRACTOR_DATABASE_HOST: ${{ secrets.DATA_EXTRACTOR_DATABASE_HOST }}
DATA_EXTRACTOR_DATABASE_PASSWORD: ${{ secrets.DATA_EXTRACTOR_DATABASE_PASSWORD }}


@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend on an RTX 4090 machine
name: Cuda - 4090 full tests
name: gpu_4090_tests
env:
CARGO_TERM_COLOR: always
@@ -11,70 +11,43 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
schedule:
# Nightly tests @ 1AM after each work day
- cron: "0 1 * * MON-FRI"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] only Zama organization members and GitHub can trigger this workflow
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
cuda-tests-linux:
name: CUDA tests (RTX 4090)
needs: check-user-permission
name: gpu_4090_tests/cuda-tests-linux
if: github.event_name == 'workflow_dispatch' ||
contains(github.event.label.name, '4090_test') ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: true
runs-on: ["self-hosted", "4090-desktop"]
timeout-minutes: 1440 # 24 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -103,15 +76,15 @@ jobs:
make test_high_level_api_gpu
- uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0
if: ${{ always() && github.event_name == 'pull_request_target' }}
if: ${{ always() && github.event_name == 'pull_request' }}
with:
labels: 4090_test
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "CUDA RTX 4090 tests finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "CUDA RTX 4090 tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -0,0 +1,153 @@
# Compile and test tfhe-cuda-backend on an AWS instance
name: gpu_code_validation_tests
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
schedule:
# every month
- cron: "0 0 1 * *"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: gpu_code_validation_tests/setup-instance
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' ||
(github.event.action == 'labeled' && github.event.label.name == 'approved')
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: single-h100
# This instance will be spawned especially for pull-requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: gpu_code_validation_tests/cuda-tests-linux
needs: [ setup-instance ]
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 14400
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Find tools
run: |
sudo apt update && sudo apt install -y valgrind
find /usr -executable -name "compute-sanitizer"
which valgrind
- name: Install latest stable
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Run memory sanitizer
run: |
make test_high_level_api_gpu_valgrind
slack-notify:
name: gpu_code_validation_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "GPU Memory Checks tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: gpu_code_validation_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend on an H100 VM on hyperstack
name: Cuda - Fast tests on H100
name: gpu_fast_h100_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_fast_h100_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file changes
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,36 +68,26 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-h100-tests)
needs: [ should-run, check-user-permission ]
if: github.event_name != 'pull_request_target' ||
name: gpu_fast_h100_tests/setup-instance
needs: should-run
if: github.event_name != 'pull_request' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
# Use the permanent remote instance label first, as the on-demand remote instance label output is set before the end of the start-remote-instance step.
# If the latter fails due to a failed GitHub Actions runner setup, we have to fall back to the permanent instance.
# Since the on-demand remote label is set before the failure, we have to do the logical OR in this order,
# otherwise we'll try to run the next job on a non-existent on-demand instance.
runner-name: ${{ steps.use-permanent-instance.outputs.runner_group || steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
remote-instance-outcome: ${{ steps.start-remote-instance.outcome }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
continue-on-error: true
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -110,13 +96,27 @@ jobs:
backend: hyperstack
profile: single-h100
# This allows falling back to permanent instances running on Hyperstack.
- name: Use permanent remote instance
id: use-permanent-instance
if: env.SECRETS_AVAILABLE == 'true' && steps.start-remote-instance.outcome == 'failure'
run: |
echo "runner_group=h100x1" >> "$GITHUB_OUTPUT"
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA H100 tests
name: gpu_fast_h100_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,31 +125,30 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
if: needs.setup-instance.outputs.remote-instance-outcome == 'success'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run core crypto and internal CUDA backend tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_core_crypto_gpu
@@ -169,27 +168,37 @@ jobs:
BIG_TESTS_INSTANCE=TRUE make test_high_level_api_gpu
slack-notify:
name: Slack Notification
name: gpu_fast_h100_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Fast H100 tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Fast H100 tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-h100-tests)
if: ${{ always() && needs.setup-instance.result == 'success' }}
name: gpu_fast_h100_tests/teardown-instance
if: ${{ always() && needs.setup-instance.outputs.remote-instance-outcome == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -199,8 +208,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend on an AWS instance
name: Cuda - Fast tests
name: gpu_fast_tests
env:
CARGO_TERM_COLOR: always
@@ -11,46 +11,43 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_fast_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file changes
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -70,35 +67,19 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-tests)
needs: [ should-run, check-user-permission ]
name: gpu_fast_tests/setup-instance
needs: should-run
if: github.event_name == 'workflow_dispatch' ||
needs.should-run.outputs.gpu_test == 'true'
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -107,13 +88,20 @@ jobs:
backend: hyperstack
profile: gpu-test
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA tests
name: gpu_fast_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -122,31 +110,31 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run core crypto and internal CUDA backend tests
run: |
make test_core_crypto_gpu
@@ -166,27 +154,37 @@ jobs:
make test_high_level_api_gpu
slack-notify:
name: Slack Notification
name: gpu_fast_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Base GPU tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Base GPU tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-tests)
name: gpu_fast_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -196,8 +194,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend on an H100 VM on hyperstack
name: Cuda - Full tests on H100
name: gpu_full_h100_tests
env:
CARGO_TERM_COLOR: always
@@ -15,16 +15,27 @@ env:
on:
workflow_dispatch:
permissions: {}
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: Setup instance (cuda-h100-tests)
name: gpu_full_h100_tests/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
# Use the permanent remote instance label first, as the on-demand remote instance label output is set before the end of the start-remote-instance step.
# If the latter fails due to a failed GitHub Actions runner setup, we have to fall back to the permanent instance.
# Since the on-demand remote label is set before the failure, we have to do the logical OR in this order,
# otherwise we'll try to run the next job on a non-existent on-demand instance.
runner-name: ${{ steps.use-permanent-instance.outputs.runner_group || steps.start-remote-instance.outputs.label }}
remote-instance-outcome: ${{ steps.start-remote-instance.outcome }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
continue-on-error: true
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -33,11 +44,18 @@ jobs:
backend: hyperstack
profile: single-h100
# This allows falling back to permanent instances running on Hyperstack.
- name: Use permanent remote instance
id: use-permanent-instance
if: env.SECRETS_AVAILABLE == 'true' && steps.start-remote-instance.outcome == 'failure'
run: |
echo "runner_group=h100x1" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA H100 tests
name: gpu_full_h100_tests/cuda-tests-linux
needs: [ setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -46,42 +64,29 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
# Mandatory on hyperstack since a bootable volume is not re-usable yet.
- name: Install dependencies
run: |
sudo apt update
sudo apt install -y checkinstall zlib1g-dev libssl-dev libclang-dev
wget https://github.com/Kitware/CMake/releases/download/v${{ env.CMAKE_VERSION }}/cmake-${{ env.CMAKE_VERSION }}.tar.gz
tar -zxvf cmake-${{ env.CMAKE_VERSION }}.tar.gz
cd cmake-${{ env.CMAKE_VERSION }}
./bootstrap
make -j"$(nproc)"
sudo make install
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
if: needs.setup-instance.outputs.remote-instance-outcome == 'success'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run core crypto, integer and internal CUDA backend tests
run: |
make test_gpu
@@ -99,26 +104,27 @@ jobs:
make test_high_level_api_gpu
slack-notify:
name: Slack Notification
name: gpu_full_h100_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Full H100 tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Full H100 tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (cuda-h100-tests)
name: gpu_full_h100_tests/teardown-instance
if: ${{ always() && needs.setup-instance.outputs.remote-instance-outcome == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -128,8 +134,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend on an AWS instance
name: Cuda - Full tests multi-GPU
name: gpu_full_multi_gpu_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_full_multi_gpu_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file changes
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,51 +68,42 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-tests-multi-gpu)
needs: [ should-run, check-user-permission ]
if: github.event_name != 'pull_request_target' ||
name: gpu_full_multi_gpu_tests/setup-instance
needs: should-run
if: github.event_name != 'pull_request' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: multi-gpu-test
profile: 4-l40
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA multi-GPU tests
name: gpu_full_multi_gpu_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,31 +112,29 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run multi-bit CUDA integer compression tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_integer_compression_gpu
@@ -157,7 +142,7 @@ jobs:
# No need to test core_crypto and classic PBS in integer since it's already tested on single GPU.
- name: Run multi-bit CUDA integer tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_integer_multi_bit_gpu_ci
BIG_TESTS_INSTANCE=TRUE NO_BIG_PARAMS_GPU=TRUE make test_integer_multi_bit_gpu_ci
- name: Run user docs tests
run: |
@@ -169,30 +154,40 @@ jobs:
- name: Run High Level API Tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_high_level_api_gpu
make test_high_level_api_gpu
slack-notify:
name: Slack Notification
name: gpu_full_multi_gpu_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Multi-GPU tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Multi-GPU tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-tests-multi-gpu)
name: gpu_full_multi_gpu_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -202,8 +197,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-tests-multi-gpu) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-tests-multi-gpu) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,4 +1,4 @@
name: Long Run Tests on GPU
name: gpu_integer_long_run_tests
env:
CARGO_TERM_COLOR: always
@@ -10,17 +10,26 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
IS_PR: ${{ github.event_name == 'pull_request' }}
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
schedule:
# Weekly tests will be triggered each Friday at 9p.m.
- cron: "0 21 * * 5"
# Nightly tests will be triggered each evening at 8 p.m.
- cron: "0 20 * * *"
pull_request:
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: Setup instance (gpu-tests)
name: gpu_integer_long_run_tests/setup-instance
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
runs-on: ubuntu-latest
@@ -29,20 +38,20 @@ jobs:
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: multi-gpu-test
profile: 4-l40
cuda-tests:
name: Long run GPU tests
name: gpu_integer_long_run_tests/cuda-tests
needs: [ setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{github.event_name}}_${{ github.ref }}
group: ${{ github.workflow_ref }}_${{github.event_name}}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -51,54 +60,59 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
timeout-minutes: 4320 # 72 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run tests
run: |
make test_integer_long_run_gpu
if [[ "${IS_PR}" == "true" ]]; then
make test_integer_short_run_gpu
else
make test_integer_long_run_gpu
fi
slack-notify:
name: Slack Notification
name: gpu_integer_long_run_tests/slack-notify
needs: [ setup-instance, cuda-tests ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests.result }}
SLACK_MESSAGE: "Integer GPU long run tests finished with status: ${{ needs.cuda-tests.result }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (gpu-tests)
name: gpu_integer_long_run_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -108,8 +122,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (gpu-long-run-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -0,0 +1,150 @@
# Compile and test tfhe-cuda-backend on an AWS instance
name: gpu_memory_sanitizer
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
pull_request:
types: [ labeled ]
workflow_dispatch:
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: gpu_memory_sanitizer/setup-instance
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' ||
(github.event.action == 'labeled' && github.event.label.name == 'approved')
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: gpu-test
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: gpu_memory_sanitizer/cuda-tests-linux
needs: [ setup-instance ]
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 240
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Find tools
run: |
find /usr -executable -name "compute-sanitizer"
- name: Install latest stable
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Run memory sanitizer
run: |
make test_high_level_api_gpu_sanitizer
slack-notify:
name: gpu_memory_sanitizer/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "GPU Memory Checks tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: gpu_memory_sanitizer/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (gpu-memory-sanitizer) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -0,0 +1,150 @@
# Compile and test tfhe-cuda-backend on an AWS instance
name: gpu_memory_sanitizer_h100
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
RUST_BACKTRACE: "full"
RUST_MIN_STACK: "8388608"
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
pull_request:
types: [ labeled ]
workflow_dispatch:
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: gpu_memory_sanitizer/setup-instance
runs-on: ubuntu-latest
if: github.event_name != 'pull_request' ||
(github.event.action == 'labeled' && github.event.label.name == 'approved')
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: hyperstack
profile: single-h100
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: gpu_memory_sanitizer/cuda-tests-linux
needs: [ setup-instance ]
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 240
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-22.04
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Find tools
run: |
find /usr -executable -name "compute-sanitizer"
- name: Install latest stable
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Run memory sanitizer
run: |
make test_high_level_api_gpu_sanitizer
slack-notify:
name: gpu_memory_sanitizer/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "GPU Memory Checks tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: gpu_memory_sanitizer/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (gpu-memory-sanitizer-h100) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Perfom tfhe-cuda-backend post-commit checks on an AWS instance
name: Cuda - Post-commit Checks
# Perform tfhe-cuda-backend post-commit checks on an AWS instance
name: gpu_pcc
env:
CARGO_TERM_COLOR: always
@@ -11,52 +11,34 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16-22.04"
CUDA_KEYRING_PACKAGE: cuda-keyring_1.1-1_all.deb
CUDA_KEYRING_SHA: "d93190d50b98ad4699ff40f4f7af50f16a76dac3bb8da1eaaf366d47898ff8df"
on:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow (via manual approval for PR from forks)
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-pcc)
needs: check-user-permission
name: gpu_pcc/setup-instance
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -65,11 +47,18 @@ jobs:
backend: aws
profile: gpu-build
# This instance will be spawned specifically for pull requests from forked repositories
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-pcc:
name: CUDA post-commit checks
name: gpu_pcc/cuda-pcc (bpr)
needs: setup-instance
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -85,18 +74,29 @@ jobs:
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Set up home
- name: Install CUDA
if: env.SECRETS_AVAILABLE == 'false'
shell: bash
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
# Use sed to extract a value from a string; this cannot be done with the ${variable//search/replace} pattern.
# shellcheck disable=SC2001
TOOLKIT_VERSION="$(echo "${CUDA_VERSION}" | sed 's/\(.*\)\.\(.*\)/\1-\2/')"
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/"${CUDA_KEYRING_PACKAGE}"
echo "${CUDA_KEYRING_SHA} ${CUDA_KEYRING_PACKAGE}" > checksum
sha256sum -c checksum
sudo dpkg -i "${CUDA_KEYRING_PACKAGE}"
sudo apt update
sudo apt -y install "cuda-toolkit-${TOOLKIT_VERSION}" cmake-format
env:
CUDA_VERSION: ${{ matrix.cuda }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -106,18 +106,21 @@ jobs:
echo "CUDA_PATH=$CUDA_PATH" >> "${GITHUB_ENV}"
echo "$CUDA_PATH/bin" >> "${GITHUB_PATH}"
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
echo "CUDACXX=/usr/local/cuda-${{ matrix.cuda }}/bin/nvcc" >> "${GITHUB_ENV}"
echo "CUDACXX=/usr/local/cuda-${CUDA_VERSION}/bin/nvcc" >> "${GITHUB_ENV}"
env:
CUDA_VERSION: ${{ matrix.cuda }}
# Specify the correct host compilers
- name: Export gcc and g++ variables
if: ${{ !cancelled() }}
run: |
{
echo "CC=/usr/bin/gcc-${{ matrix.gcc }}";
echo "CXX=/usr/bin/g++-${{ matrix.gcc }}";
echo "CUDAHOSTCXX=/usr/bin/g++-${{ matrix.gcc }}";
echo "HOME=/home/ubuntu";
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: ${{ matrix.gcc }}
- name: Run fmt checks
run: |
@@ -127,23 +130,36 @@ jobs:
run: |
make pcc_gpu
- name: Check build with hpu enabled
run: |
make clippy_gpu_hpu
- name: Set pull-request URL
if: ${{ failure() && github.event_name == 'pull_request' }}
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() && env.SECRETS_AVAILABLE == 'true' }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "CUDA AWS post-commit checks finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "CUDA AWS post-commit checks finished with status: ${{ job.status }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-pcc)
name: cuda_pcc/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-pcc ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -153,8 +169,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-pcc) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-pcc) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Signed integer GPU tests on an RTXA6000 VM on hyperstack with classical PBS
name: Cuda - Signed integer tests with classical PBS
name: gpu_signed_integer_classic_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_signed_integer_classic_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file changes
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,36 +68,20 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-signed-classic-tests)
needs: [ should-run, check-user-permission ]
if: github.event_name != 'pull_request_target' ||
name: gpu_signed_integer_classic_tests/setup-instance
needs: should-run
if: github.event_name != 'pull_request' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -110,13 +90,20 @@ jobs:
backend: hyperstack
profile: gpu-test
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA signed integer tests with classical PBS
name: gpu_signed_integer_classic_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,57 +112,65 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run signed integer tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_signed_integer_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_signed_integer_classic_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Integer GPU signed integer tests with classical PBS finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Integer GPU signed integer tests with classical PBS finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-signed-classic-tests)
name: gpu_signed_integer_classic_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -185,8 +180,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-signed-classic-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-signed-classic-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Signed integer GPU tests on an H100 VM on hyperstack
name: Cuda - Signed integer tests on H100
name: gpu_signed_integer_h100_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_signed_integer_h100_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,36 +68,26 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-h100-tests)
needs: [ should-run, check-user-permission ]
if: github.event_name != 'pull_request_target' ||
name: gpu_signed_integer_h100_tests/setup-instance
needs: should-run
if: github.event_name != 'pull_request' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
# Use permanent remote instance label first as on-demand remote instance label output is set before the end of start-remote-instance step.
# If the latter fails due to a failed GitHub action runner set up, we have to fallback on the permanent instance.
# Since the on-demand remote label is set before failure, we have to do the logical OR in this order,
# otherwise we'll try to run the next job on a non-existing on-demand instance.
runner-name: ${{ steps.use-permanent-instance.outputs.runner_group || steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
remote-instance-outcome: ${{ steps.start-remote-instance.outcome }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
continue-on-error: true
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -110,13 +96,27 @@ jobs:
backend: hyperstack
profile: single-h100
# This will allow to fallback on permanent instances running on Hyperstack.
- name: Use permanent remote instance
id: use-permanent-instance
if: env.SECRETS_AVAILABLE == 'true' && steps.start-remote-instance.outcome == 'failure'
run: |
echo "runner_group=h100x1" >> "$GITHUB_OUTPUT"
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA H100 signed integer tests
name: gpu_signed_integer_h100_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,57 +125,66 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
if: needs.setup-instance.outputs.remote-instance-outcome == 'success'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run signed integer multi-bit tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_signed_integer_multi_bit_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_signed_integer_h100_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Integer GPU H100 tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Integer GPU H100 tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-h100-tests)
if: ${{ always() && needs.setup-instance.result == 'success' }}
name: gpu_signed_integer_h100_tests/teardown-instance
if: ${{ always() && needs.setup-instance.outputs.remote-instance-outcome == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -185,8 +194,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend signed integer on an AWS instance
name: Cuda - Signed integer tests
name: gpu_signed_integer_tests
env:
CARGO_TERM_COLOR: always
@@ -11,51 +11,45 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
SLACKIFY_MARKDOWN: true
FAST_TESTS: TRUE
NIGHTLY_TESTS: FALSE
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
paths:
- '**'
- '!.github/**'
- '!ci/**'
schedule:
# Nightly tests @ 1AM after each work day
- cron: "0 1 * * MON-FRI"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_signed_integer_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -75,36 +69,20 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-signed-integer-tests)
name: gpu_signed_integer_tests/setup-instance
runs-on: ubuntu-latest
needs: [ should-run, check-user-permission ]
needs: should-run
if: (github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
github.event_name == 'workflow_dispatch' ||
needs.should-run.outputs.gpu_test == 'true'
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -113,13 +91,20 @@ jobs:
backend: hyperstack
profile: gpu-test
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-signed-integer-tests:
name: CUDA signed integer tests
name: gpu_signed_integer_tests/cuda-signed-integer-tests
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -128,31 +113,29 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
gcc: 11
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Should run nightly tests
if: github.event_name == 'schedule'
run: |
@@ -166,27 +149,37 @@ jobs:
make test_signed_integer_multi_bit_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_signed_integer_tests/slack-notify
needs: [ setup-instance, cuda-signed-integer-tests ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-signed-integer-tests.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-signed-integer-tests.result }}
SLACK_MESSAGE: "Base GPU tests finished with status: ${{ needs.cuda-signed-integer-tests.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Signed GPU tests finished with status: ${{ needs.cuda-signed-integer-tests.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-tests)
name: gpu_signed_integer_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-signed-integer-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -196,8 +189,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-signed-integer-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-signed-integer-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Test unsigned integers on an RTXA6000 VM on hyperstack with the classical PBS
name: Cuda - Unsigned integer tests with classical PBS
name: gpu_unsigned_integer_classic_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_unsigned_integer_classic_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,36 +68,20 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-unsigned-classic-tests)
needs: [ should-run, check-user-permission ]
name: gpu_unsigned_integer_classic_tests/setup-instance
needs: should-run
if: github.event_name == 'workflow_dispatch' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -110,13 +90,20 @@ jobs:
backend: hyperstack
profile: gpu-test
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA unsigned integer tests with classical PBS
name: gpu_unsigned_integer_classic_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,57 +112,65 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run unsigned integer tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_unsigned_integer_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_unsigned_integer_classic_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Unsigned integer GPU classic tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Unsigned integer GPU classic tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-unsigned-classic-tests)
name: gpu_unsigned_integer_classic_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -185,8 +180,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-unsigned-classic-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-unsigned-classic-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Test unsigned integers on an H100 VM on hyperstack
name: Cuda - Unsigned integer tests on H100
name: gpu_unsigned_integer_h100_tests
env:
CARGO_TERM_COLOR: always
@@ -11,48 +11,44 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' || github.event_name == 'pull_request_target' }}
REF: ${{ github.event.pull_request.head.sha || github.sha }}
SLACKIFY_MARKDOWN: true
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_unsigned_integer_h100_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -72,36 +68,26 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-h100-tests)
needs: [ should-run, check-user-permission ]
name: gpu_unsigned_integer_h100_tests/setup-instance
needs: should-run
if: github.event_name == 'workflow_dispatch' ||
(github.event.action != 'labeled' && needs.should-run.outputs.gpu_test == 'true') ||
(github.event.action == 'labeled' && github.event.label.name == 'approved' && needs.should-run.outputs.gpu_test == 'true')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
# Use permanent remote instance label first as on-demand remote instance label output is set before the end of start-remote-instance step.
# If the latter fails due to a failed GitHub action runner set up, we have to fallback on the permanent instance.
# Since the on-demand remote label is set before failure, we have to do the logical OR in this order,
# otherwise we'll try to run the next job on a non-existing on-demand instance.
runner-name: ${{ steps.use-permanent-instance.outputs.runner_group || steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
remote-instance-outcome: ${{ steps.start-remote-instance.outcome }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
continue-on-error: true
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -110,13 +96,27 @@ jobs:
backend: hyperstack
profile: single-h100
# This will allow to fallback on permanent instances running on Hyperstack.
- name: Use permanent remote instance
id: use-permanent-instance
if: env.SECRETS_AVAILABLE == 'true' && steps.start-remote-instance.outcome == 'failure'
run: |
echo "runner_group=h100x1" >> "$GITHUB_OUTPUT"
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-tests-linux:
name: CUDA H100 unsigned integer tests
name: gpu_unsigned_integer_h100_tests/cuda-tests-linux
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -125,57 +125,66 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
if: needs.setup-instance.outputs.remote-instance-outcome == 'success'
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Run unsigned integer multi-bit tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_unsigned_integer_multi_bit_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_unsigned_integer_h100_tests/slack-notify
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-tests-linux.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-tests-linux.result }}
SLACK_MESSAGE: "Unsigned integer GPU H100 tests finished with status: ${{ needs.cuda-tests-linux.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Unsigned integer GPU H100 tests finished with status: ${{ needs.cuda-tests-linux.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-h100-tests)
if: ${{ always() && needs.setup-instance.result == 'success' }}
name: gpu_unsigned_integer_h100_tests/teardown-instance
if: ${{ always() && needs.setup-instance.outputs.remote-instance-outcome == 'success' }}
needs: [ setup-instance, cuda-tests-linux ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
- name: Stop remote instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -185,8 +194,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-h100-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,5 +1,5 @@
# Compile and test tfhe-cuda-backend unsigned integer on an AWS instance
name: Cuda - Unsigned integer tests
name: gpu_unsigned_integer_tests
env:
CARGO_TERM_COLOR: always
@@ -11,52 +11,45 @@ env:
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
SLACKIFY_MARKDOWN: true
FAST_TESTS: TRUE
NIGHTLY_TESTS: FALSE
REF: ${{ github.event.pull_request.head.sha || github.sha }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
PULL_REQUEST_MD_LINK: ""
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "gpu_ubuntu-22.04"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
schedule:
# Nightly tests @ 1AM after each work day
- cron: "0 1 * * MON-FRI"
permissions:
contents: read
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
should-run:
name: gpu_unsigned_integer_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read
pull-requests: read # Needed to check for file change
outputs:
gpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.gpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@dcc7a0cba800f454d79fff4b993e8c3555bcc0a8
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
gpu:
@@ -76,36 +69,20 @@ jobs:
- scripts/integer-tests.sh
- ci/slab.toml
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
setup-instance:
name: Setup instance (cuda-unsigned-integer-tests)
needs: [ should-run, check-user-permission ]
name: gpu_unsigned_integer_tests/setup-instance
runs-on: ubuntu-latest
needs: should-run
if: (github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
github.event_name == 'workflow_dispatch' ||
needs.should-run.outputs.gpu_test == 'true'
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -114,13 +91,20 @@ jobs:
backend: hyperstack
profile: gpu-test
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cuda-unsigned-integer-tests:
name: CUDA unsigned integer tests
name: gpu_unsigned_integer_tests/cuda-unsigned-integer-tests
needs: [ should-run, setup-instance ]
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.setup-instance.result != 'skipped')
if: github.event_name != 'pull_request' ||
(github.event_name == 'pull_request' && needs.setup-instance.result != 'skipped')
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
strategy:
@@ -129,31 +113,29 @@ jobs:
matrix:
include:
- os: ubuntu-22.04
cuda: "12.2"
cuda: "12.8"
gcc: 11
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Setup Hyperstack dependencies
uses: ./.github/actions/hyperstack_setup
uses: ./.github/actions/gpu_setup
with:
cuda-version: ${{ matrix.cuda }}
gcc-version: ${{ matrix.gcc }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
github-instance: ${{ env.SECRETS_AVAILABLE == 'false' }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Enable nvidia multi-process service
run: |
nvidia-cuda-mps-control -d
- name: Should run nightly tests
if: github.event_name == 'schedule'
run: |
@@ -167,27 +149,37 @@ jobs:
make test_unsigned_integer_multi_bit_gpu_ci
slack-notify:
name: Slack Notification
name: gpu_unsigned_integer_tests/slack-notify
needs: [ setup-instance, cuda-unsigned-integer-tests ]
runs-on: ubuntu-latest
if: ${{ always() && needs.cuda-unsigned-integer-tests.result != 'skipped' && failure() }}
continue-on-error: true
steps:
- name: Set pull-request URL
if: env.SECRETS_AVAILABLE == 'true' && github.event_name == 'pull_request'
run: |
echo "PULL_REQUEST_MD_LINK=[pull-request](${PR_BASE_URL}${PR_NUMBER}), " >> "${GITHUB_ENV}"
env:
PR_BASE_URL: ${{ vars.PR_BASE_URL }}
PR_NUMBER: ${{ github.event.pull_request.number }}
- name: Send message
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
if: env.SECRETS_AVAILABLE == 'true'
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cuda-unsigned-integer-tests.result }}
SLACK_MESSAGE: "Unsigned integer GPU tests finished with status: ${{ needs.cuda-unsigned-integer-tests.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Unsigned integer GPU tests finished with status: ${{ needs.cuda-unsigned-integer-tests.result }}. (${{ env.PULL_REQUEST_MD_LINK }}[action run](${{ env.ACTION_RUN_URL }}))"
teardown-instance:
name: Teardown instance (cuda-tests)
name: gpu_unsigned_integer_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cuda-unsigned-integer-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -197,8 +189,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cuda-unsigned-integer-tests) finished with status: ${{ job.status }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Instance teardown (cuda-unsigned-integer-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

.github/workflows/hpu_hlapi_tests.yml (new file, 130 lines)
View File

@@ -0,0 +1,130 @@
# Test HPU backend HLAPI layer
name: hpu_hlapi_tests
on:
pull_request:
push:
branches:
- main
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
IS_PULL_REQUEST: ${{ github.event_name == 'pull_request' }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}${{ github.ref == 'refs/heads/main' && github.sha || '' }}
cancel-in-progress: true
permissions: {}
jobs:
should-run:
name: hpu_hlapi_tests/should-run
runs-on: ubuntu-latest
permissions:
pull-requests: read # Needed to check for file change
outputs:
hpu_test: ${{ env.IS_PULL_REQUEST == 'false' || steps.changed-files.outputs.hpu_any_changed }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@24d32ffd492484c1d75e0c0b894501ddb9d30d62 # v47.0.0
with:
files_yaml: |
hpu:
- tfhe/Cargo.toml
- Makefile
- backends/tfhe-hpu-backend/**
- mockups/tfhe-hpu-mockup/**
setup-instance:
name: hpu_hlapi_tests/setup-instance
needs: should-run
if:
needs.should-run.outputs.hpu_test == 'true' &&
((github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs') ||github.event_name == 'pull_request')
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: cpu-big
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
cargo-tests-hpu:
name: hpu_hlapi_tests/cargo-tests-hpu (bpr)
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install Rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
override: true
- name: Install Just
run: |
cargo install just
- name: Test HLAPI HPU
run: |
source setup_hpu.sh
just -f mockups/tfhe-hpu-mockup/Justfile BUILD_PROFILE=release mockup &
make HPU_CONFIG=sim test_high_level_api_hpu
make HPU_CONFIG=sim test_user_doc_hpu
teardown-instance:
name: hpu_hlapi_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [setup-instance, cargo-tests-hpu]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (hpu_hlapi_tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

View File

@@ -1,4 +1,4 @@
name: AWS Long Run Tests on CPU
name: integer_long_run_tests
env:
CARGO_TERM_COLOR: always
@@ -18,9 +18,14 @@ on:
# Weekly tests will be triggered each Friday at 9p.m.
- cron: "0 21 * * 5"
permissions: {}
# zizmor: ignore[concurrency-limits] concurrency is managed after instance setup to ensure safe provisioning
jobs:
setup-instance:
name: Setup instance (cpu-tests)
name: integer_long_run_tests/setup-instance
if: github.event_name != 'schedule' ||
(github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs')
runs-on: ubuntu-latest
@@ -29,7 +34,7 @@ jobs:
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -39,22 +44,22 @@ jobs:
profile: cpu-big
cpu-tests:
name: Long run CPU tests
name: integer_long_run_tests/cpu-tests
needs: [ setup-instance ]
concurrency:
group: ${{ github.workflow }}_${{github.event_name}}_${{ github.ref }}
group: ${{ github.workflow_ref }}_${{github.event_name}}
cancel-in-progress: true
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
timeout-minutes: 4320 # 72 hours
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -63,22 +68,22 @@ jobs:
make test_integer_long_run
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "CPU long run tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (cpu-tests)
name: integer_long_run_tests/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, cpu-tests ]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -88,8 +93,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (cpu-long-run-tests) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,21 +1,9 @@
name: Tests on M1 CPU
name: m1_tests
on:
workflow_dispatch:
# Trigger pull_request event on CI files to be able to test changes before merging to main branch.
# Workflow would fail if changes come from a forked repository since secrets are not available with this event.
pull_request:
types: [ labeled ]
paths:
- '.github/**'
- 'ci/**'
# General entry point for Zama's pull request as well as contribution from forks.
pull_request_target:
types: [ labeled ]
paths:
- '**'
- '!.github/**'
- '!ci/**'
# Have a nightly build for M1 tests
schedule:
# * is a special character in YAML so you have to quote this string
@@ -33,32 +21,18 @@ env:
# We clear the cache to reduce memory pressure because of the numerous processes of cargo
# nextest
TFHE_RS_CLEAR_IN_MEMORY_KEY_CACHE: "1"
REF: ${{ github.event.pull_request.head.sha || github.sha }}
CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN || secrets.GITHUB_TOKEN }}
concurrency:
group: ${{ github.workflow }}_${{ github.head_ref || github.ref }}
group: ${{ github.workflow_ref }}
cancel-in-progress: true
permissions:
contents: read
jobs:
check-ci-files:
uses: ./.github/workflows/check_ci_files_change.yml
with:
checkout_ref: ${{ github.event.pull_request.head.sha || github.sha }}
secrets:
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
# Fail if the triggering actor is not part of Zama organization.
# If pull_request_target is emitted and CI files have changed, skip this job. This would skip following jobs.
check-user-permission:
needs: check-ci-files
if: github.event_name != 'pull_request_target' ||
(github.event_name == 'pull_request_target' && needs.check-ci-files.outputs.ci_file_changed == 'false')
uses: ./.github/workflows/check_actor_permissions.yml
secrets:
TOKEN: ${{ secrets.GITHUB_TOKEN }}
cargo-builds-m1:
needs: check-user-permission
name: m1_tests/cargo-builds-m1
if: ${{ (github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') ||
github.event_name == 'workflow_dispatch' ||
contains(github.event.label.name, 'm1_test') }}
@@ -67,14 +41,13 @@ jobs:
timeout-minutes: 720
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: "false"
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ref: ${{ env.REF }}
token: ${{ env.CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -94,9 +67,7 @@ jobs:
run: |
make test_fft
make test_fft_serde
make test_fft_nightly
make test_fft_no_std
make test_fft_no_std_nightly
# we don't run the js stuff here as it's causing issues with the M1 config
- name: Run pcc NTT checks
@@ -206,28 +177,27 @@ jobs:
make test_integer_multi_bit_ci
remove_label:
name: Remove m1_test label
name: m1_tests/remove_label
runs-on: ubuntu-latest
needs:
- cargo-builds-m1
if: ${{ always() }}
steps:
- uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0
if: ${{ github.event_name == 'pull_request_target' }}
if: ${{ github.event_name == 'pull_request' }}
with:
labels: m1_test
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Slack Notification
if: ${{ needs.cargo-builds-m1.result != 'skipped' }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ needs.cargo-builds-m1.result }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "M1 tests finished with status: ${{ needs.cargo-builds-m1.result }} on '${{ env.BRANCH }}'. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "M1 tests finished with status: ${{ needs.cargo-builds-m1.result }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
MSG_MINIMAL: event,action url,commit
BRANCH: ${{ github.head_ref || github.ref }}
BRANCH: ${{ github.ref }}


@@ -1,164 +0,0 @@
# Publish new release of tfhe-rs on various platform.
name: Publish release
on:
workflow_dispatch:
inputs:
dry_run:
description: "Dry-run"
type: boolean
default: true
push_to_crates:
description: "Push to crate"
type: boolean
default: true
push_web_package:
description: "Push web js package"
type: boolean
default: true
push_node_package:
description: "Push node js package"
type: boolean
default: true
npm_latest_tag:
description: "Set NPM tag as latest"
type: boolean
default: false
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
NPM_TAG: ""
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package:
runs-on: ubuntu-latest
needs: verify_tag
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Prepare package
run: |
cargo package -p tfhe
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: Publish Release
needs: [package] # for comparing hashes
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Create NPM version tag
if: ${{ inputs.npm_latest_tag }}
run: |
echo "NPM_TAG=latest" >> "${GITHUB_ENV}"
- name: Download artifact
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: crate
path: target/package
- name: Publish crate.io package
if: ${{ inputs.push_to_crates }}
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe crate - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Build web package
if: ${{ inputs.push_web_package }}
run: |
make build_web_js_api_parallel
- name: Publish web package
if: ${{ inputs.push_web_package }}
uses: JS-DevTools/npm-publish@19c28f1ef146469e409470805ea4279d47c3d35c
with:
token: ${{ secrets.NPM_TOKEN }}
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
tag: ${{ env.NPM_TAG }}
provenance: true
- name: Build Node package
if: ${{ inputs.push_node_package }}
run: |
rm -rf tfhe/pkg
make build_node_js_api
sed -i 's/"tfhe"/"node-tfhe"/g' tfhe/pkg/package.json
- name: Publish Node package
if: ${{ inputs.push_node_package }}
uses: JS-DevTools/npm-publish@19c28f1ef146469e409470805ea4279d47c3d35c
with:
token: ${{ secrets.NPM_TOKEN }}
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
tag: ${{ env.NPM_TAG }}
provenance: true
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe release failed: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}


@@ -0,0 +1,141 @@
# Common workflow to make crate release
name: make_release_common
on:
workflow_call:
inputs:
package-name:
type: string
required: true
dry-run:
type: boolean
default: true
secrets:
REPO_CHECKOUT_TOKEN:
required: true
SLACK_CHANNEL:
required: true
BOT_USERNAME:
required: true
SLACK_WEBHOOK:
required: true
ALLOWED_TEAM:
required: true
READ_ORG_TOKEN:
required: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
verify-triggering-actor:
name: make_release_common/verify-triggering-actor
if: startsWith(github.ref, 'refs/tags/')
uses: ./.github/workflows/verify_triggering_actor.yml
secrets:
ALLOWED_TEAM: ${{ secrets.ALLOWED_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package:
name: make_release_common/package
runs-on: ubuntu-latest
needs: verify-triggering-actor
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Prepare package
env:
PACKAGE: ${{ inputs.package-name }}
run: |
cargo package -p "${PACKAGE}"
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: crate-${{ inputs.package-name }}
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
name: make_release_common/provenance
if: ${{ !inputs.dry-run }}
needs: package
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: make_release_common/publish-release
needs: package
runs-on: ubuntu-latest
permissions:
id-token: write # Needed for OIDC token exchange on crates.io
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Download artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: crate-${{ inputs.package-name }}
path: target/package
- name: Authenticate on registry
uses: rust-lang/crates-io-auth-action@b7e9a28eded4986ec6b1fa40eeee8f8f165559ec # v1.0.3
id: auth
- name: Publish crate.io package
env:
CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}
PACKAGE: ${{ inputs.package-name }}
DRY_RUN: ${{ inputs.dry-run && '--dry-run' || '' }}
run: |
# DRY_RUN expansion cannot be double quoted when variable contains empty string otherwise cargo publish
# would fail. This is safe since DRY_RUN is handled in the env section above.
# shellcheck disable=SC2086
cargo publish -p "${PACKAGE}" ${DRY_RUN}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661 # v2.3.3
env:
SLACK_COLOR: failure
SLACK_MESSAGE: "SLSA ${{ inputs.package-name }} - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661 # v2.3.3
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "${{ inputs.package-name }} release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: Publish CUDA release
name: make_release_cuda
on:
workflow_dispatch:
@@ -15,23 +15,29 @@ env:
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
verify-triggering-actor:
name: make_release_cuda/verify-triggering-actor
if: startsWith(github.ref, 'refs/tags/')
uses: ./.github/workflows/verify_triggering_actor.yml
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
setup-instance:
name: Setup instance (publish-cuda-release)
needs: verify_tag
name: make_release_cuda/setup-instance
needs: verify-triggering-actor
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-instance.outputs.label }}
steps:
- name: Start instance
id: start-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -41,7 +47,7 @@ jobs:
profile: gpu-build
package:
name: Package CUDA Release for provenance
name: make_release_cuda/package
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
outputs:
@@ -58,18 +64,14 @@ jobs:
CUDA_PATH: /usr/local/cuda-${{ matrix.cuda }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: 'false'
persist-credentials: "false"
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: dtolnay/rust-toolchain@a54c7afa936fefeb4456b2dd8068152669aa8203
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
@@ -80,45 +82,56 @@ jobs:
{
echo "CUDA_PATH=$CUDA_PATH";
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib:$LD_LIBRARY_PATH";
echo "CUDACXX=/usr/local/cuda-${{ matrix.cuda }}/bin/nvcc";
echo "CUDACXX=/usr/local/cuda-${CUDA_VERSION}/bin/nvcc";
} >> "${GITHUB_ENV}"
env:
CUDA_VERSION: ${{ matrix.cuda }}
# Specify the correct host compilers
- name: Export gcc and g++ variables
if: ${{ !cancelled() }}
run: |
{
echo "CC=/usr/bin/gcc-${{ matrix.gcc }}";
echo "CXX=/usr/bin/g++-${{ matrix.gcc }}";
echo "CUDAHOSTCXX=/usr/bin/g++-${{ matrix.gcc }}";
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
echo "HOME=/home/ubuntu";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: ${{ matrix.gcc }}
- name: Prepare package
run: |
cargo package -p tfhe-cuda-backend
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: crate-tfhe-cuda-backend
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
name: make_release_cuda/provenance
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.1.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish-cuda-release:
name: Publish CUDA Release
name: make_release_cuda/publish-cuda-release
needs: [setup-instance, package] # for comparing hashes
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
permissions:
id-token: write # Needed for OIDC token exchange on crates.io
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
@@ -127,13 +140,58 @@ jobs:
- os: ubuntu-22.04
cuda: "12.2"
gcc: 9
env:
CUDA_PATH: /usr/local/cuda-${{ matrix.cuda }}
steps:
- name: Install latest stable
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Export CUDA variables
if: ${{ !cancelled() }}
run: |
echo "$CUDA_PATH/bin" >> "${GITHUB_PATH}"
{
echo "CUDA_PATH=$CUDA_PATH";
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib:$LD_LIBRARY_PATH";
echo "CUDACXX=/usr/local/cuda-${CUDA_VERSION}/bin/nvcc";
} >> "${GITHUB_ENV}"
env:
CUDA_VERSION: ${{ matrix.cuda }}
# Specify the correct host compilers
- name: Export gcc and g++ variables
if: ${{ !cancelled() }}
run: |
{
echo "CC=/usr/bin/gcc-${GCC_VERSION}";
echo "CXX=/usr/bin/g++-${GCC_VERSION}";
echo "CUDAHOSTCXX=/usr/bin/g++-${GCC_VERSION}";
echo "HOME=/home/ubuntu";
} >> "${GITHUB_ENV}"
env:
GCC_VERSION: ${{ matrix.gcc }}
- name: Download artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: crate-tfhe-cuda-backend
path: target/package
- name: Authenticate on registry
uses: rust-lang/crates-io-auth-action@b7e9a28eded4986ec6b1fa40eeee8f8f165559ec # v1.0.3
id: auth
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
CARGO_REGISTRY_TOKEN: ${{ steps.auth.outputs.token }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe-cuda-backend --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
# DRY_RUN expansion cannot be double quoted when variable contains empty string otherwise cargo publish
# would fail. This is safe since DRY_RUN is handled in the env section above.
# shellcheck disable=SC2086
cargo publish -p tfhe-cuda-backend ${DRY_RUN}
- name: Generate hash
id: published_hash
@@ -142,32 +200,28 @@ jobs:
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661 # v2.3.3
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-cuda-backend crate - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
if: ${{ failure() || (cancelled() && github.event_name != 'pull_request') }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661 # v2.3.3
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe-cuda-backend release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
teardown-instance:
name: Teardown instance (publish-release)
name: make_release_cuda/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [ setup-instance, publish-cuda-release ]
needs: [setup-instance, publish-cuda-release]
runs-on: ubuntu-latest
steps:
- name: Stop instance
id: stop-instance
uses: zama-ai/slab-github-runner@79939325c3c429837c10d6041e4fd8589d328bac
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
@@ -177,8 +231,7 @@ jobs:
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (publish-cuda-release) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"

.github/workflows/make_release_hpu.yml (new file, 32 lines)

@@ -0,0 +1,32 @@
name: make_release_hpu
on:
workflow_dispatch:
inputs:
dry_run:
description: "Dry-run"
type: boolean
default: true
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
make-release:
name: make_release_hpu/make-release
uses: ./.github/workflows/make_release_common.yml
with:
package-name: "tfhe-hpu-backend"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}

.github/workflows/make_release_tfhe.yml (new file, 125 lines)

@@ -0,0 +1,125 @@
# Publish new release of tfhe-rs on various platform.
name: make_release_tfhe
on:
workflow_dispatch:
inputs:
dry_run:
description: "Dry-run"
type: boolean
default: true
push_to_crates:
description: "Push to crate"
type: boolean
default: true
push_web_package:
description: "Push web js package"
type: boolean
default: true
push_node_package:
description: "Push node js package"
type: boolean
default: true
npm_latest_tag:
description: "Set NPM tag as latest"
type: boolean
default: false
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
NPM_TAG: ""
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
make-release:
name: make_release_tfhe/make-release
uses: ./.github/workflows/make_release_common.yml
if: ${{ inputs.push_to_crates }}
with:
package-name: "tfhe"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
make-release-js:
name: make_release_tfhe/make-release-js
needs: make-release
if: ${{ always() && needs.make-release.result != 'failure' }}
runs-on: ubuntu-latest
# For provenance of npmjs publish
permissions:
contents: read
id-token: write # also needed for OIDC token exchange on crates.io and npmjs.com
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Create NPM version tag
if: ${{ inputs.npm_latest_tag }}
run: |
echo "NPM_TAG=latest" >> "${GITHUB_ENV}"
- name: Build web package
if: ${{ inputs.push_web_package }}
run: |
make build_web_js_api_parallel
- name: Authenticate on NPM
uses: actions/setup-node@395ad3262231945c25e8478fd5baf05154b1d79f # v6.1.0
with:
node-version: '24'
registry-url: 'https://registry.npmjs.org'
- name: Publish web package
if: ${{ inputs.push_web_package }}
uses: JS-DevTools/npm-publish@7f8fe47b3bea1be0c3aec2b717c5ec1f3e03410b
with:
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
tag: ${{ env.NPM_TAG }}
provenance: true
- name: Build Node package
if: ${{ inputs.push_node_package }}
run: |
rm -rf tfhe/pkg
make build_node_js_api
sed -i 's/"tfhe"/"node-tfhe"/g' tfhe/pkg/package.json
- name: Publish Node package
if: ${{ inputs.push_node_package }}
uses: JS-DevTools/npm-publish@7f8fe47b3bea1be0c3aec2b717c5ec1f3e03410b
with:
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
tag: ${{ env.NPM_TAG }}
provenance: true
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661 # v2.3.3
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "tfhe release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,4 +1,4 @@
name: Publish tfhe-csprng release
name: make_release_tfhe_csprng
on:
workflow_dispatch:
@@ -8,96 +8,25 @@ on:
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package:
runs-on: ubuntu-latest
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Prepare package
run: |
cargo package -p tfhe-csprng
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate-tfhe-csprng
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
make-release:
name: make_release_tfhe_csprng/make-release
uses: ./.github/workflows/make_release_common.yml
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: Publish tfhe-csprng Release
needs: [verify_tag, package]
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.FHE_ACTIONS_TOKEN }}
- name: Download artifact
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: crate-tfhe-csprng
path: target/package
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe-csprng --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-csprng - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-csprng release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
package-name: "tfhe-csprng"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}


@@ -1,5 +1,5 @@
# Publish new release of tfhe-fft
name: Publish tfhe-fft release
name: make_release_tfhe_fft
on:
workflow_dispatch:
@@ -9,95 +9,25 @@ on:
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package:
runs-on: ubuntu-latest
needs: verify_tag
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.FHE_ACTIONS_TOKEN }}
- name: Prepare package
run: |
cargo package -p tfhe-fft
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
make-release:
name: make_release_tfhe_fft/make-release
uses: ./.github/workflows/make_release_common.yml
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: Publish tfhe-fft Release
runs-on: ubuntu-latest
needs: [verify_tag, package] # for comparing hashes
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.FHE_ACTIONS_TOKEN }}
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe-fft --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-fft crate - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-fft release failed: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
package-name: "tfhe-fft"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}


@@ -1,5 +1,5 @@
# Publish new release of tfhe-ntt
name: Publish tfhe-ntt release
name: make_release_tfhe_ntt
on:
workflow_dispatch:
@@ -9,94 +9,25 @@ on:
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package:
runs-on: ubuntu-latest
needs: verify_tag
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
token: ${{ secrets.FHE_ACTIONS_TOKEN }}
- name: Prepare package
run: |
cargo package -p tfhe-ntt
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
make-release:
name: make_release_tfhe_ntt/make-release
uses: ./.github/workflows/make_release_common.yml
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: Publish tfhe-ntt Release
runs-on: ubuntu-latest
needs: [verify_tag, package] # for comparing hashes
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe-ntt --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-ntt crate - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-ntt release failed: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
package-name: "tfhe-ntt"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}

View File

@@ -1,4 +1,4 @@
name: Publish tfhe-versionable release
name: make_release_tfhe_versionable
on:
workflow_dispatch:
@@ -8,175 +8,44 @@ on:
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
make-release-derive:
name: make_release_tfhe_versionable/make-release-derive
uses: ./.github/workflows/make_release_common.yml
with:
package-name: "tfhe-versionable-derive"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
package-derive:
runs-on: ubuntu-latest
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Prepare package
run: |
cargo package -p tfhe-versionable-derive
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate-tfhe-versionable-derive
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance-derive:
needs: [package-derive]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
make-release:
name: make_release_tfhe_versionable/make-release
needs: make-release-derive
uses: ./.github/workflows/make_release_common.yml
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package-derive.outputs.hash }}
publish_release-derive:
name: Publish tfhe-versionable Release
needs: [verify_tag, package-derive] # for comparing hashes
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Download artifact
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: crate-tfhe-versionable-derive
path: target/package
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
cargo publish -p tfhe-versionable-derive --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package-derive.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-versionable-derive - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-versionable-derive release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
package:
runs-on: ubuntu-latest
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: Prepare package
run: |
cargo package -p tfhe-versionable
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate-tfhe-versionable
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
package-name: "tfhe-versionable"
dry-run: ${{ inputs.dry_run }}
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
publish_release:
name: Publish tfhe-versionable Release
needs: [package] # for comparing hashes
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: Download artifact
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: crate-tfhe-versionable
path: target/package
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
run: |
cargo publish -p tfhe-versionable --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Generate hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-versionable - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-versionable release finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}


@@ -1,4 +1,4 @@
name: Publish tfhe-zk-pok release
name: make_release_zk_pok
on:
workflow_dispatch:
@@ -8,94 +8,25 @@ on:
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
permissions: { }
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
package:
runs-on: ubuntu-latest
outputs:
hash: ${{ steps.hash.outputs.hash }}
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Prepare package
run: |
cargo package -p tfhe-zk-pok
- uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 # v4.6.0
with:
name: crate-zk-pok
path: target/package/*.crate
- name: generate hash
id: hash
run: cd target/package && echo "hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
provenance:
if: ${{ !inputs.dry_run }}
needs: [package]
uses: slsa-framework/slsa-github-generator/.github/workflows/generator_generic_slsa3.yml@v2.0.0
permissions:
# Needed to detect the GitHub Actions environment
actions: read
# Needed to create the provenance via GitHub OIDC
id-token: write
# Needed to upload assets/artifacts
contents: write
make-release:
name: make_release_zk_pok/make-release
uses: ./.github/workflows/make_release_common.yml
with:
# SHA-256 hashes of the Crate package.
base64-subjects: ${{ needs.package.outputs.hash }}
verify_tag:
uses: ./.github/workflows/verify_tagged_commit.yml
package-name: "tfhe-zk-pok"
dry-run: ${{ inputs.dry_run }}
permissions:
actions: read # Needed to detect the GitHub Actions environment
id-token: write # Needed to create the provenance via GitHub OIDC
contents: write # Needed to upload assets/artifacts
secrets:
RELEASE_TEAM: ${{ secrets.RELEASE_TEAM }}
BOT_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
REPO_CHECKOUT_TOKEN: ${{ secrets.REPO_CHECKOUT_TOKEN }}
ALLOWED_TEAM: ${{ secrets.RELEASE_TEAM }}
READ_ORG_TOKEN: ${{ secrets.READ_ORG_TOKEN }}
publish_release:
name: Publish tfhe-zk-pok Release
needs: [verify_tag, package] # for comparing hashes
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Download artifact
uses: actions/download-artifact@fa0a91b85d4f404e444e00e005971372dc801d16 # v4.1.8
with:
name: crate-zk-pok
path: target/package
- name: Publish crate.io package
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe-zk-pok --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Verify hash
id: published_hash
run: cd target/package && echo "pub_hash=$(sha256sum ./*.crate | base64 -w0)" >> "${GITHUB_OUTPUT}"
- name: Slack notification (hashes comparison)
if: ${{ needs.package.outputs.hash != steps.published_hash.outputs.pub_hash }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: failure
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "SLSA tfhe-zk-pok crate - hash comparison failure: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990 # v2.3.2
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "tfhe-zk-pok release failed: (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}


@@ -1,30 +1,82 @@
# Perform a security check on all the cryptographic parameters set
name: Parameters curves security check
name: parameters_check
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
# Secrets will be available only to zama-ai organization members
SECRETS_AVAILABLE: ${{ secrets.JOB_SECRET != '' }}
EXTERNAL_CONTRIBUTION_RUNNER: "large_ubuntu_16"
on:
pull_request:
paths:
- '.github/workflows/parameters_check.yml'
- 'ci/lattice_estimator.sage'
- 'tfhe/examples/utilities/params_to_file.rs'
- 'tfhe/src/shortint/parameters/*'
push:
branches:
- "main"
workflow_dispatch:
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members and GitHub can trigger this workflow
jobs:
setup-instance:
name: parameters_check/setup-instance
if:
(github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs') ||
github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
outputs:
runner-name: ${{ steps.start-remote-instance.outputs.label || steps.start-github-instance.outputs.runner_group }}
steps:
- name: Start remote instance
id: start-remote-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: start
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
backend: aws
profile: cpu-small
# This instance will be spawned especially for pull-request from forked repository
- name: Start GitHub instance
id: start-github-instance
if: env.SECRETS_AVAILABLE == 'false'
run: |
echo "runner_group=${EXTERNAL_CONTRIBUTION_RUNNER}" >> "$GITHUB_OUTPUT"
params-curves-security-check:
runs-on: large_ubuntu_16-22.04
name: parameters_check/params-curves-security-check
needs: setup-instance
runs-on: ${{ needs.setup-instance.outputs.runner-name }}
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: Install latest stable
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # zizmor: ignore[stale-action-refs] this action doesn't create releases
with:
toolchain: stable
- name: Checkout lattice-estimator
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8
with:
repository: malb/lattice-estimator
path: lattice_estimator
ref: 'e80ec6bbbba212428b0e92d0467c18629cf9ed67'
ref: '352ddaf4a288a0543f5d9eb588d2f89c7acec463'
persist-credentials: 'false'
- name: Install Sage
run: |
@@ -35,18 +87,67 @@ jobs:
run: |
CARGO_PROFILE=devo make write_params_to_file
- name: Get start time
if: ${{ always() }}
id: start-time
run: |
echo "value=$(date +%s)" >> "${GITHUB_OUTPUT}"
- name: Perform security check
run: |
PYTHONPATH=lattice_estimator sage ci/lattice_estimator.sage
- name: Get time elapsed
if: ${{ always() }}
shell: python
env:
START_DATE: ${{ steps.start-time.outputs.value }}
run: |
import datetime
import math
import os
env_file = os.environ["GITHUB_ENV"]
start_date = datetime.datetime.fromtimestamp(int(os.environ["START_DATE"]))
end_date = datetime.datetime.now()
total_minutes = math.floor((end_date - start_date).total_seconds() / 60)
with open(env_file, "a") as f:
f.write(f"TIME_ELAPSED={total_minutes}\n")
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@c33737706dea87cd7784c687dadc9adf1be59990
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Security check for parameters finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_MESSAGE: "Security check for parameters finished with status: ${{ job.status }} (analysis took: ${{ env.TIME_ELAPSED }} mins). (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
teardown-instance:
name: parameters_check/teardown-instance
if: ${{ always() && needs.setup-instance.result == 'success' }}
needs: [setup-instance, params-curves-security-check]
runs-on: ubuntu-latest
steps:
- name: Stop remote instance
id: stop-instance
if: env.SECRETS_AVAILABLE == 'true'
uses: zama-ai/slab-github-runner@973c1d22702de8d0acd2b34e83404c96ed92c264 # v1.4.2
with:
mode: stop
github-token: ${{ secrets.SLAB_ACTION_TOKEN }}
slab-url: ${{ secrets.SLAB_BASE_URL }}
job-secret: ${{ secrets.JOB_SECRET }}
label: ${{ needs.setup-instance.outputs.runner-name }}
- name: Slack Notification
if: ${{ failure() }}
uses: rtCamp/action-slack-notify@e31e87e03dd19038e411e38ae27cbad084a90661
env:
SLACK_COLOR: ${{ job.status }}
SLACK_MESSAGE: "Instance teardown (params-curves-security-check) finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"


@@ -1,12 +1,16 @@
# Placeholder workflow file allowing running it without having to merge to main first
name: Placeholder Workflow
name: placeholder_workflow
on:
workflow_dispatch:
permissions: {}
# zizmor: ignore[concurrency-limits] only Zama organization members can trigger this workflow
jobs:
placeholder:
name: Placeholder
name: placeholder_workflow/placeholder
runs-on: ubuntu-latest
steps:


@@ -1,5 +1,5 @@
# Sync repos
name: Sync repos
name: sync_on_push
on:
push:
@@ -7,28 +7,66 @@ on:
- 'main'
workflow_dispatch:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.sha }}
cancel-in-progress: ${{ github.event_name == 'push' }}
jobs:
sync-repo:
name: sync_on_push/sync-repo
if: ${{ github.repository == 'zama-ai/tfhe-rs' }}
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
persist-credentials: 'false'
token: ${{ secrets.REPO_CHECKOUT_TOKEN }}
- name: git-sync
uses: wei/git-sync@55c6b63b4f21607da0e9877ca9b4d11a29fc6d83
with:
source_repo: "zama-ai/tfhe-rs"
source_branch: "main"
destination_repo: "https://${{ secrets.BOT_USERNAME }}:${{ secrets.FHE_ACTIONS_TOKEN }}@github.com/${{ secrets.SYNC_DEST_REPO }}"
destination_branch: "main"
- name: git-sync tags
uses: wei/git-sync@55c6b63b4f21607da0e9877ca9b4d11a29fc6d83
with:
source_repo: "zama-ai/tfhe-rs"
source_branch: "refs/tags/*"
destination_repo: "https://${{ secrets.BOT_USERNAME }}:${{ secrets.FHE_ACTIONS_TOKEN }}@github.com/${{ secrets.SYNC_DEST_REPO }}"
destination_branch: "refs/tags/*"
env:
SOURCE_REPO: "zama-ai/tfhe-rs"
SOURCE_BRANCH: "main"
DESTINATION_BRANCH: "main"
USERNAME: ${{ secrets.BOT_USERNAME }}
TOKEN: ${{ secrets.SYNC_REPO_TOKEN }}
DEST_REPO: ${{ secrets.SYNC_DEST_REPO }}
run: |
echo ">>> Cloning source repo..."
git lfs install
git clone "https://${USERNAME}:${TOKEN}@github.com/${SOURCE_REPO}.git" ./tfhe-rs --origin source && cd ./tfhe-rs
git remote add destination "https://${USERNAME}:${TOKEN}@github.com/${DEST_REPO}.git"
echo ">>> Fetching all branches references down locally so subsequent commands can see them..."
git fetch source '+refs/heads/*:refs/heads/*' --update-head-ok
echo ">>> Print out all branches"
git --no-pager branch -a -vv
echo ">>> Fetching all LFS items from source..."
git lfs fetch --all source "${SOURCE_BRANCH}"
echo ">>> Pushing git changes..."
git push destination "${SOURCE_BRANCH}:${DESTINATION_BRANCH}" -f
echo ">>> Pushing all LFS items..."
git lfs push --all destination "${DESTINATION_BRANCH}"
- name: git-sync-tags
env:
SOURCE_REPO: "zama-ai/tfhe-rs"
SOURCE_BRANCH: "refs/tags/*"
DESTINATION_BRANCH: "refs/tags/*"
USERNAME: ${{ secrets.BOT_USERNAME }}
TOKEN: ${{ secrets.SYNC_REPO_TOKEN }}
DEST_REPO: ${{ secrets.SYNC_DEST_REPO }}
run: |
echo ">>> Cloning source repo..."
git lfs install
git clone "https://${USERNAME}:${TOKEN}@github.com/${SOURCE_REPO}.git" ./tfhe-rs-tag --origin source && cd ./tfhe-rs-tag
git remote add destination "https://${USERNAME}:${TOKEN}@github.com/${DEST_REPO}.git"
echo ">>> Fetching all branches references down locally so subsequent commands can see them..."
git fetch source '+refs/heads/*:refs/heads/*' --update-head-ok
echo ">>> Print out all branches"
git --no-pager branch -a -vv
echo ">>> Pushing git changes..."
git push destination "${SOURCE_BRANCH}:${DESTINATION_BRANCH}" -f

.github/workflows/unverified_prs.yml (new file)

@@ -0,0 +1,31 @@
# Close unverified PRs
name: unverified_prs
on:
schedule:
- cron: '30 1 * * *'
permissions: {}
# zizmor: ignore[concurrency-limits] only GitHub can trigger this workflow
jobs:
stale:
name: unverified_prs/stale
runs-on: ubuntu-latest
permissions:
issues: read # Needed to fetch all issues
pull-requests: write # Needed to write message and close the PR
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
with:
stale-pr-message: 'This PR is unverified and has been open for 2 days, it will now be closed. If you want to contribute please sign the CLA as indicated by the bot.'
days-before-stale: 2
days-before-close: 0
# We are not interested in suppressing issues, so we use a currently non-existent label
# if we ever accept issues to become stale/closable this label will be the signal for that
only-issue-labels: can-be-auto-closed
# Only unverified PRs are an issue
exempt-pr-labels: cla-signed
# We don't want people commenting to keep an unverified PR
ignore-updates: true


@@ -1,18 +1,22 @@
# Verify a tagged commit
name: Verify tagged commit
# Verify a triggering actor
name: verify_triggering_actor
on:
workflow_call:
secrets:
RELEASE_TEAM:
ALLOWED_TEAM:
required: true
READ_ORG_TOKEN:
required: true
permissions: {}
# zizmor: ignore[concurrency-limits] caller workflow is responsible for the concurrency
jobs:
checks:
check-actor:
name: verify_triggering_actor/check-actor
runs-on: ubuntu-latest
if: startsWith(github.ref, 'refs/tags/')
steps:
# Check triggering actor membership
- name: Actor verification
@@ -21,12 +25,15 @@ jobs:
with:
username: ${{ github.triggering_actor }}
org: ${{ github.repository_owner }}
team: ${{ secrets.RELEASE_TEAM }}
team: ${{ secrets.ALLOWED_TEAM }}
github_token: ${{ secrets.READ_ORG_TOKEN }}
- name: Actor authorized
run: |
if [ "${{ steps.actor_check.outputs.authorized }}" == "false" ]; then
echo "Actor '${{ github.triggering_actor }}' is not authorized to perform release"
if [ "${ACTOR_CHECK_OUTPUT}" == "false" ]; then
echo "Actor '${TRIGGERING_ACTOR}' is not authorized to perform release"
exit 1
fi
env:
TRIGGERING_ACTOR: ${{ github.triggering_actor }}
ACTOR_CHECK_OUTPUT: ${{ steps.actor_check.outputs.authorized }}

.gitignore

@@ -32,5 +32,12 @@ web-test-runner/
node_modules/
package-lock.json
# Dir used for backward compatibility test data
tests/tfhe-backward-compat-data/
# Python .env
.env
__pycache__
# File auto-generated by the lattice-estimator from lattice_estimator.sage
ci/lattice_estimator.sage.py
# In case someone clones the lattice-estimator locally to verify security
/lattice-estimator

.lfsconfig (new file)

@@ -0,0 +1,2 @@
[lfs]
fetchexclude = *


@@ -10,6 +10,7 @@ ignore:
- keys
- coverage
- utils/tfhe-lints/ui/main.stderr
- utils/tfhe-backward-compat-data/**/*.ron # ron files are autogenerated
rules:
# checks if file ends in a newline character

CODEOWNERS (new file)

@@ -0,0 +1,32 @@
# Specifying a path without code owners means that path won't have owners and is akin to a negation
# i.e. the `core_crypto` dir is owned and needs owner approval/review, but not the `gpu` sub dir
# See https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners#example-of-a-codeowners-file
/backends/tfhe-cuda-backend/ @agnesLeroy
/backends/tfhe-hpu-backend/ @zama-ai/hardware
/tfhe/examples/hpu @zama-ai/hardware
/tfhe/src/core_crypto/ @IceTDrinker
/tfhe/src/core_crypto/gpu @agnesLeroy
/tfhe/src/core_crypto/hpu @zama-ai/hardware
/tfhe/src/shortint/ @mayeul-zama
/tfhe/src/integer/ @tmontaigu
/tfhe/src/integer/gpu @agnesLeroy
/tfhe/src/integer/hpu @zama-ai/hardware
/tfhe/src/high_level_api/ @tmontaigu
/tfhe-benchmark/ @soonum
/Makefile @IceTDrinker @soonum
/mockups/tfhe-hpu-mockup @zama-ai/hardware
/.github/ @soonum
/ci/ @soonum
/scripts/ @soonum
/CODEOWNERS @IceTDrinker
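
A minimal, hypothetical sketch of the "negation" pattern described in the CODEOWNERS comment above; the paths and the @team-a handle are invented for illustration and are not taken from this repository:

/some/owned/dir/            @team-a
/some/owned/dir/generated/

With these two entries, pull requests touching /some/owned/dir/ require approval from @team-a, while pull requests that only touch /some/owned/dir/generated/ match the later, owner-less entry and therefore need no code-owner review.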

Some files were not shown because too many files have changed in this diff.