Compare commits

...

548 Commits

Author SHA1 Message Date
J-B Orfila
c58b0a3f68 chore(bench): add new parameter sets to bench 2023-08-09 09:56:15 +02:00
J-B Orfila
1f95e2d45a chore(tfhe): bump version to 0.3.1 2023-08-09 09:56:15 +02:00
tmontaigu
9bd9180261 feat(integer): make full_propagate_parallelized more parallel
Using the functions that were introduced recently,
it is possible to make the full_propagate_parallelized method
more parallel than it was, resulting in faster computations.

The new carry propagation should now cost about as much as
a default add + one PBS, so ~400ms for 256 bits instead of ~3s.

However, it is probably slower for smaller numbers of blocks (e.g. 4 blocks).

This is done by extracting carries and messages in parallel,
then adding the carries to the correct messages; the final step
is to use the single carry propagation function.
2023-08-09 09:56:15 +02:00
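As an aside for readers of this log, here is a clear-value sketch (our own illustration, not code from the crate) of the strategy this message describes: extract carries and messages, add each carry into the next block, then run one final propagation pass.

```rust
// Illustrative only: a clear-value analogue of the strategy described above,
// using 2-bit message blocks (modulus 4). Names and structure are ours, not
// the tfhe::integer implementation.
fn propagate(blocks: &mut Vec<u8>) {
    const MSG_MOD: u8 = 4;

    // Step 1: extract messages and carries (in the FHE version these two
    // extractions are PBSs that can run in parallel, block per block).
    let messages: Vec<u8> = blocks.iter().map(|b| b % MSG_MOD).collect();
    let carries: Vec<u8> = blocks.iter().map(|b| b / MSG_MOD).collect();

    // Step 2: add each block's carry into the next block's message.
    for i in 0..blocks.len() {
        let carry_in = if i == 0 { 0 } else { carries[i - 1] };
        blocks[i] = messages[i] + carry_in;
    }

    // Step 3: a last single-carry propagation pass cleans up any new carries.
    let mut carry = 0;
    for b in blocks.iter_mut() {
        let v = *b + carry;
        *b = v % MSG_MOD;
        carry = v / MSG_MOD;
    }
}

fn main() {
    // Three 2-bit blocks (least significant first) holding the value 46
    // with dirty carries after an addition.
    let mut blocks = vec![6, 6, 1];
    propagate(&mut blocks);
    assert_eq!(blocks, vec![2, 3, 2]); // 2 + 3*4 + 2*16 = 46
}
```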
tmontaigu
2d7251f88c feat(hlapi): add if_then_else 2023-08-09 09:56:15 +02:00
tmontaigu
59c5ef81e2 feat(integer): add if_then_else
This adds if_then_else (aka cmux / select)
to the integer API.

This also makes the min/max implementation use that
cmux instead of their own version of it, which
saves one PBS.
2023-08-09 09:56:15 +02:00
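For context, the arithmetic identity behind a cmux/select on clear values; this is only an illustration of the operation the commit exposes, not the crate's implementation.

```rust
// Illustrative only: the clear-value identity behind a cmux/select.
// With a boolean c in {0, 1}, if_then_else(c, a, b) can be computed
// arithmetically instead of branching, which is what an FHE scheme needs.
fn if_then_else(c: u64, a: u64, b: u64) -> u64 {
    debug_assert!(c <= 1);
    // Equivalent to `if c == 1 { a } else { b }`.
    c * a + (1 - c) * b
}

fn main() {
    assert_eq!(if_then_else(1, 10, 20), 10);
    assert_eq!(if_then_else(0, 10, 20), 20);
    // min/max can reuse the same select: min(a, b) = if_then_else(a < b, a, b).
}
```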
J-B Orfila
197afb62d0 feat(boolean): add KS-PBS pattern choice to boolean
Co-authored-by: tmontaigu <thomas.montaigu@laposte.net>
2023-08-09 09:56:15 +02:00
tmontaigu
205de1966a feat(hlapi): allow scalar ops on values up to U256
This enables using u128 and U256 as operands to
operations in the high-level API.

BREAKING CHANGE: this is a breaking change in the C API for scalar operations
on FheUint128 and FheUint256, as they previously required
a u64 and now take a U128 / U256 respectively.
2023-08-09 09:56:15 +02:00
J-B Orfila
89def834b6 docs: update the README for v0.3 2023-07-27 15:16:21 +02:00
Arthur Meyre
58b4089524 chore(docs): add remarks on smart operations taking mutable inputs 2023-07-26 11:57:53 +02:00
Arthur Meyre
98db328de2 fix(integer): set proper MaxDegree for CompressedServerKey
- add shortint API to generate a CompressedServerKey with MaxDegree
- add non regression test based on the user issue
- factorize MaxDegree computation for integer server keys
2023-07-26 10:00:24 +02:00
David Testé
f5fab4db99 chore(bench): run groups of benchmarks using env variable 2023-07-26 09:36:29 +02:00
Arthur Meyre
95f2eef94f chore(doc): fix multiplication typo 2023-07-25 20:51:15 +02:00
David Testé
2c10a792a5 chore(ci): trigger benchmarks only if layers have changed
For example, if only shortint layer related files have changed,
only the shortint benchmarks would be run on push.
However, if any files changed in the common_benches group then
all the benchmarks would be run.
2023-07-25 09:13:41 +02:00
dependabot[bot]
96689443ef chore(deps): bump tj-actions/changed-files from 37.1.2 to 37.4.0
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37.1.2 to 37.4.0.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](2a968ff601...de0eba3279)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-24 13:04:38 +02:00
Arthur Meyre
514cb9e6af feat(core): add concrete-cpu tests for wopbs
- manage luts a bit differently to match TFHE-rs wopbs implementation
2023-07-24 10:56:49 +02:00
Pakorn Nathong
c79da46bb2 feat(integer): expose scalar mul and sub trait 2023-07-24 10:56:14 +02:00
tmontaigu
a8449f1ded feat(integer): allow scalar shift/rotate with more unsigned types
This is mainly for convenience.

Also, Rust implements shift by u8, u16, ..., u128 for each type
(even shifts by i8...i128 are implemented).
2023-07-21 13:31:29 +02:00
tmontaigu
11517703e6 fix(integer): remove incorrect bounds
In 35c6aea84b the bounds for
the scalar_div family of functions were changed.

However, a few `u64: From<T>` bounds were
not removed, meaning the functions that still
had them were stuck with u64 as the maximum scalar value.

This commit removes the leftover bounds.
2023-07-20 17:10:26 +02:00
tmontaigu
35c6aea84b feat(integer): allow scalar_div/rem up to 256 bits 2023-07-20 13:55:21 +02:00
J-B Orfila
4e37f7e5bf docs(all): TFHE-rs v0.3.0 doc update 2023-07-19 11:08:36 +02:00
tmontaigu
a69a9c727b feat(integer): allow scalar_mul with U256
Same as scalar_sub, the UnsignedInteger bound was too strict, so we create a
`ScalarMultiplier` trait to allow using U256 as a scalar.
2023-07-17 16:30:17 +02:00
tmontaigu
bafb4f9e17 feat(integer): allow scalar sub with 256 bit scalar
The scalar (T) in scalar_sub could be at most a u128
because the bounds were `T: UnsignedInteger` and our
U256 does not implement this trait yet.

To make scalar_sub accept a U256 we create
a smaller trait.
2023-07-17 16:30:17 +02:00
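A hypothetical sketch of the "smaller trait" pattern the two commits above describe; the trait and method names below (`DecomposableScalar`, `to_words`) are placeholders of ours, not the actual tfhe::integer traits.

```rust
// Instead of requiring a full `UnsignedInteger`-style trait, a narrower
// trait only asks for what the scalar operation actually needs.
trait DecomposableScalar {
    /// Decompose the scalar into 64-bit words, least significant first.
    fn to_words(&self) -> Vec<u64>;
}

impl DecomposableScalar for u64 {
    fn to_words(&self) -> Vec<u64> {
        vec![*self]
    }
}

impl DecomposableScalar for u128 {
    fn to_words(&self) -> Vec<u64> {
        vec![*self as u64, (*self >> 64) as u64]
    }
}

// A wider type (e.g. a 256-bit integer backed by four u64 limbs) can then
// implement the same narrow trait and be accepted by scalar_sub-like APIs.
struct U256([u64; 4]);

impl DecomposableScalar for U256 {
    fn to_words(&self) -> Vec<u64> {
        self.0.to_vec()
    }
}

fn main() {
    assert_eq!(42u64.to_words(), vec![42]);
    assert_eq!((1u128 << 64).to_words(), vec![0, 1]);
    assert_eq!(U256([1, 2, 3, 4]).to_words(), vec![1, 2, 3, 4]);
}
```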
Arthur Meyre
0663d7ca0e chore(ci): use aws ami with the latest updates
- Ubuntu 22.04 based
2023-07-17 15:57:58 +02:00
Arthur Meyre
8112684aae chore(ci): yet another bench fix 2023-07-17 15:51:07 +02:00
Arthur Meyre
f2f4cb7937 chore(ci): select benches to run 2023-07-17 14:16:28 +02:00
Arthur Meyre
e0e6aa845a chore(core): track caller on CiphertextModulus methods that can fail 2023-07-17 14:16:12 +02:00
Arthur Meyre
1455da273d chore(ci): remove auto retry for wasm tests
- the pipe was masking potential test errors
2023-07-17 14:16:00 +02:00
Arthur Meyre
763ad60ff9 chore(ci): fix m1 workflow run condition 2023-07-17 11:07:32 +02:00
Arthur Meyre
6f4f923951 chore(ci): avoid removing labels when we are not on a PR 2023-07-17 11:07:32 +02:00
dependabot[bot]
5de984f7d6 chore(deps): bump tj-actions/changed-files from 37.0.5 to 37.1.2
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37.0.5 to 37.1.2.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](54849deb96...2a968ff601)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 10:32:28 +02:00
dependabot[bot]
640b849d4b chore(deps): bump JS-DevTools/npm-publish from 2.2.0 to 2.2.1
Bumps [JS-DevTools/npm-publish](https://github.com/js-devtools/npm-publish) from 2.2.0 to 2.2.1.
- [Release notes](https://github.com/js-devtools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](a25b4180b7...5a85faf05d)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-07-17 10:32:07 +02:00
Arthur Meyre
7d484583ff chore(ci): add more triggers for re-running benches
- the bench "dependency" tree is bigger than first assumed
2023-07-17 09:36:54 +02:00
Arthur Meyre
68aa2ba25a chore(ci): fix workflow dispatch not triggering bench start 2023-07-17 09:36:54 +02:00
Arthur Meyre
228f85d843 chore(tfhe): remove dbg! macro calls and add a Makefile check for it 2023-07-13 19:45:15 +02:00
Arthur Meyre
f982c58538 chore(shortint): make shortint div behavior match integer on div by zero 2023-07-13 13:22:43 +02:00
Arthur Meyre
e2e901c220 chore(ci): fix usage of changed files 2023-07-12 17:27:06 +02:00
Arthur Meyre
507c569eee chore(shortint): add more convenience parameter aliases 2023-07-12 17:26:52 +02:00
Arthur Meyre
c37d9c590b chore(hlapi): remove leftover empty file 2023-07-12 14:55:12 +02:00
Arthur Meyre
549e9e70da chore(benches): need to checkout repo to check changed files for benchmarks 2023-07-10 13:05:17 +02:00
Arthur Meyre
d56afcd8c3 chore(integer): disable most sequence default add tests
- those are too slow and not the most optimized option to perform those
operations
2023-07-10 09:34:10 +02:00
Arthur Meyre
2019cd1708 chore(ci): M1 don't run multibit integer tests (too slow) 2023-07-10 09:34:10 +02:00
Arthur Meyre
3cfee104cb chore(ci): forward profile to shortint and integer test scripts 2023-07-10 09:34:10 +02:00
Arthur Meyre
4b174d552a chore(ci): run all M1 tests in FAST_TESTS=TRUE mode for better coverage 2023-07-10 09:34:10 +02:00
Arthur Meyre
1764c88de0 chore(ci): run schedule build only on public repo 2023-07-10 09:34:10 +02:00
Arthur Meyre
e4af2bad0f chore(test): fix wopbs only test which was using a wrong set of parameters 2023-07-10 09:34:10 +02:00
Arthur Meyre
59ef915095 chore(ci): fix C API build system to manage profiles other than release 2023-07-10 09:34:10 +02:00
Arthur Meyre
10f034171f chore(ci): LTO is causing issues in M1 CI tests; use LTO off instead 2023-07-10 09:34:10 +02:00
Arthur Meyre
5e0aff616e chore(ci): run tests on M1 without integer as those are too long
- add a nightly trigger
2023-07-10 09:34:10 +02:00
Arthur Meyre
9687c55eb6 chore(ci): fix c_api_tests.sh to use threads on M1 properly 2023-07-10 09:34:10 +02:00
Arthur Meyre
222c5e1c19 chore(tfhe): misc fixes for error messages 2023-07-10 09:34:10 +02:00
Arthur Meyre
ea47265f15 chore(tfhe): remove unwarranted uses of unsafe when the code is not unsafe
- marking functions unsafe because the computations may be wrong due to a
bad choice of crypto parameters is not in line with the meaning of unsafe
in rust, so remove those uses
2023-07-10 09:33:18 +02:00
David Testé
465b79f42d chore(ci): trigger benchmarks only on specific file changes 2023-07-10 09:30:15 +02:00
tmontaigu
2557b29230 fix(shortint): Ciphertext::copy_from
Ciphertext::copy_from did not copy the degree,
resulting in potentially bad results for some operations.

This fixes it, and rewrites the code to use destructuring
in order to prevent such a thing from happening again
(with destructuring, if a member is not destructured,
a compile error is emitted).

Also we move the implementation of copy_from into
clone_from.
2023-07-09 20:15:07 +02:00
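An illustration of the destructuring safeguard mentioned above, on a toy struct; the real `Ciphertext` has different fields.

```rust
// Illustrative only: if a new field is added to the struct and the
// destructuring pattern is not updated, the code stops compiling instead of
// silently skipping the new field (which is how the degree got lost).
#[derive(Clone, Debug, PartialEq)]
struct Ciphertext {
    data: Vec<u64>,
    degree: usize,
}

impl Ciphertext {
    fn copy_from(&mut self, other: &Self) {
        // Exhaustive destructuring: every field must be listed.
        let Self { data, degree } = other;
        self.data.clone_from(data);
        self.degree = *degree;
    }
}

fn main() {
    let a = Ciphertext { data: vec![1, 2, 3], degree: 3 };
    let mut b = Ciphertext { data: vec![], degree: 0 };
    b.copy_from(&a);
    assert_eq!(a, b);
}
```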
tmontaigu
490bdaea30 fix(integer): fix U256::copy_to_be_byte_slice
There was a bug in to_be_bytes_slice: it was missing a
`slice.reverse()` (from_be_bytes correctly has it).

The from/to functions have been refactored to use
from_be_bytes / to_be_bytes, etc. from the stdlib so that
only one layer of endianness has to be managed.

The test value used did not catch that so we change the value
used to expose the problem.
2023-07-07 15:29:35 +02:00
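A clear-value illustration of why the missing `reverse()` mattered; the limb layout and function name below are ours, not the crate's.

```rust
// Illustrative only: a multi-limb integer stored as little-endian u64 limbs
// needs its whole byte string reversed to become big-endian; converting the
// limbs alone is not enough. A test value with distinct bytes exposes the
// difference, while a "symmetric" value would not.
fn copy_to_be_byte_slice(limbs_le: &[u64; 2], out: &mut [u8; 16]) {
    for (i, limb) in limbs_le.iter().enumerate() {
        out[i * 8..(i + 1) * 8].copy_from_slice(&limb.to_le_bytes());
    }
    // Without this final reverse, the limbs stay in little-endian order.
    out.reverse();
}

fn main() {
    // value = 0x0102030405060708_090A0B0C0D0E0F10, low limb first.
    let limbs_le = [0x090A0B0C0D0E0F10u64, 0x0102030405060708u64];
    let mut bytes = [0u8; 16];
    copy_to_be_byte_slice(&limbs_le, &mut bytes);
    assert_eq!(bytes[0], 0x01);
    assert_eq!(bytes[15], 0x10);
}
```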
tmontaigu
936ac05e51 chore(core): fix typo in SignedInteger trait doc 2023-07-07 10:10:12 +02:00
tmontaigu
d496cfa431 feat(hlapi): bind scalar_bitwise/div/rem operations 2023-07-06 17:57:58 +02:00
Arthur Meyre
16be1c1c1d chore(bench): enable auto integer multi bit bench launch 2023-07-06 17:06:43 +02:00
Arthur Meyre
f2f4e397f1 chore(tfhe): bump version to 0.3.0 2023-06-30 23:10:26 +02:00
David Testé
facc2a162f test(integer): add unit and doc test for bitnot operator 2023-06-30 19:42:18 +02:00
Arthur Meyre
5981a886fd chore(bench): add multi bit key size measurements 2023-06-30 18:37:52 +02:00
tmontaigu
e98315fa60 feat(integer): add division by encrypted value
Adds a simple and slow algorithm for division/remainder,
but at least it enables the use of these operators.

This also adds the same implementation in the clear,
so we will now be able to have U256 div.
2023-06-30 16:26:22 +02:00
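For context, a clear-value shift-and-subtract division of the "simple and slow" kind the message refers to; in the encrypted version each comparison and conditional subtraction becomes homomorphic work, which is what makes it slow.

```rust
// Illustrative only: schoolbook restoring division on clear values.
fn div_rem(num: u64, div: u64) -> (u64, u64) {
    assert_ne!(div, 0);
    let mut quotient: u64 = 0;
    let mut remainder: u128 = 0; // wide accumulator to avoid overflow
    // Process the numerator bit by bit, most significant bit first.
    for i in (0..64).rev() {
        remainder = (remainder << 1) | u128::from((num >> i) & 1);
        if remainder >= u128::from(div) {
            remainder -= u128::from(div);
            quotient |= 1 << i;
        }
    }
    (quotient, remainder as u64)
}

fn main() {
    assert_eq!(div_rem(1000, 7), (142, 6));
}
```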
Arthur Meyre
6b235f6fef chore(bench): fix issue due to overlapping merge 2023-06-30 13:15:36 +02:00
Arthur Meyre
4d376eea39 chore(bench): proper param name fix for WASM bench 2023-06-30 11:16:31 +02:00
tmontaigu
d93ddbe897 feat(integer): add scalar division/remainder 2023-06-30 09:46:47 +02:00
tmontaigu
189018ed05 feat(hlapi): allow use of multibit for integers 2023-06-30 09:45:14 +02:00
David Testé
fdae4e958c chore(ci): add bitnot operators to integer benchmarks 2023-06-30 09:32:42 +02:00
David Testé
d5ef359a04 chore(ci): use multi-bit params in shortint for pbs benchmarks
Use up-to-date crypto parameters for PBS benchmarks with multi-bit
instead of hardcoded ones.
2023-06-30 09:31:56 +02:00
J-B Orfila
a52cd6454d feat(shortint): add encrypt_message_and_carry 2023-06-29 17:34:36 +02:00
Arthur Meyre
142851792a chore(bench): fix param names 2023-06-29 16:20:58 +02:00
David Testé
e52bc09db5 chore(ci): add integer benchmarks with multi-bit parameters 2023-06-29 15:30:19 +02:00
Arthur Meyre
5bea1e0bc0 chore(ci): fix fast tests launching too many multi bit parameters 2023-06-28 19:14:20 +02:00
Arthur Meyre
224d81378a chore(docs): add information about the KS_PBS/PBS_KS naming "spec" 2023-06-28 19:14:20 +02:00
Arthur Meyre
011cb48ded chore(shortint): update exposed parameters 2023-06-28 19:14:20 +02:00
Arthur Meyre
da05f16c10 chore(shortint): add aliases for "old" parameter sets
- wopbs not included as it's due for a heavy rework
2023-06-28 19:14:20 +02:00
Arthur Meyre
ffc2472c95 chore(shortint): update keycache for CPK params, remove unusable params 2023-06-28 19:14:20 +02:00
Arthur Meyre
c0b82c77fb chore(shortint): plug cpk tests in scripts 2023-06-28 19:14:20 +02:00
Arthur Meyre
b09dc1f3ca chore(tfhe): rename params 2023-06-28 19:14:20 +02:00
J-B Orfila
a8e8a2e555 chore(shortint): update param compact key 2023-06-28 19:14:20 +02:00
David Testé
61819b2cea chore(ci): add ciphertext modulus for boolean crypto parameters 2023-06-28 18:14:27 +02:00
David Testé
1ee4440c0a chore(ci): put casting benchmarks into group to parse results
Fix usage of generics for crypto parameters in utilities.
2023-06-28 12:01:06 +02:00
David Testé
cbfaf63964 chore(ci): add pbs throughput benchmarks
This implies adding a conversion method to CiphertextModulus in
order to create the CryptoParametersRecord struct used in utilities.
2023-06-27 18:16:27 +02:00
Arthur Meyre
6ac96bb46a chore(tfhe): dump non deterministic key and use deterministic when required 2023-06-27 16:10:22 +02:00
David Testé
f9b49eeb39 chore(ci): add feature gate for shortint benchmarks in utilities 2023-06-27 16:00:04 +02:00
Arthur Meyre
fdda5c56f2 feat(multibit): give the possibility to select deterministic execution
BREAKING CHANGE:
shortint ServerKey serialization has changed due to the additional info for
deterministic execution carried by the MultiBit variant
2023-06-27 13:21:23 +02:00
tmontaigu
cb20b4ad3a fix(integer): fix strict assert in add_parallelized 2023-06-27 12:54:50 +02:00
David Testé
fb653ef9b2 chore(ci): write shortint casting benchmarks to json file 2023-06-27 12:33:54 +02:00
Arthur Meyre
2e58fe36a4 test(core): add test on noise variance for lwe encryption 2023-06-26 14:27:09 +02:00
tmontaigu
2cbd8c9fd5 feat(integer): implement more U256 operators
This implements the following operators for U256
- BitXor
- Mul
- is_power_of_two
- ilog2
- SubAssign
2023-06-23 18:59:41 +02:00
twiby
11ac8e6cb9 feat(trivium): add bench for casting and packing 2023-06-23 16:01:40 +02:00
twiby
5f635e97fa feat(apps): add Trivium application of TFHE 2023-06-23 16:01:40 +02:00
twiby
7426e441ba feat(hlapi): keys can be derefed into their underlying keys 2023-06-23 16:01:40 +02:00
twiby
8ae799c477 feat(hlapi): impl TryFrom operators for GenericInteger: RadixCiphertext and Vec<Ciphertext> 2023-06-23 16:01:40 +02:00
tmontaigu
ee232ed81e feat(integer): add scalar bitwise operations
Nothing much of interest in terms of performance;
we only use the fact that we can 'inspect' the scalar
to avoid unnecessary work.
2023-06-23 15:57:21 +02:00
tmontaigu
16f4c721ab chore(wasm): re-enable tests which were wrongly disabled
Also fix a small typo in an HLAPI error message
2023-06-23 15:02:32 +02:00
Arthur Meyre
7ea13715ee chore(ci): run example tests 2023-06-22 11:57:27 +02:00
Arthur Meyre
a924b6ebde chore(ci): fix actions URL in a few workflows 2023-06-22 10:31:35 +02:00
Arthur Meyre
2b5d39c927 chore(ci): make release idiot proof 2023-06-22 10:31:35 +02:00
Morten Dahl
aafcbd0a3f chore(docs): fix typo 2023-06-21 18:09:12 +02:00
Arthur Meyre
e810b42eb6 chore(tfhe): remove wildcard deps 2023-06-21 16:24:22 +02:00
tmontaigu
352b282149 feat(hlapi): bind not operator (!) 2023-06-20 21:16:40 +02:00
tmontaigu
bc5f648c35 feat(hlapi): bind scalar comparisons 2023-06-20 21:16:40 +02:00
tmontaigu
7f83761fde feat(hlapi): bind not equal 2023-06-20 21:16:40 +02:00
tmontaigu
8c3993def2 feat(hlapi): bind bit rotations 2023-06-20 15:52:47 +02:00
tmontaigu
a361ad339d feat(hlapi): bind shift by encrypted amount
To keep things easy we have to drop the part
where the macro generated the docs.
2023-06-20 15:52:47 +02:00
sarah el kazdadi
aa82d9f19c feat(multibit): implement deterministic multibit pbs 2023-06-20 13:23:44 +02:00
tmontaigu
eae2d8137b refactor(shortint): rename generate_accumulator into generate_lookup_table
This renames the generate_accumulator* family of
functions into `generate_lookup_table`.

The reasoning is that `generate_accumulator`
returns a `LookupTable` type, which you then use
with the `apply_lookup_table` function, which is not
coherent.

Accumulator was the name we had originally and consistently
used for these; however, lookup_table is probably easier to
understand / guess what it is about.
Also, it is the term used in Concrete, the other
Zama product.
2023-06-19 18:09:28 +02:00
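A hedged usage sketch of the renamed API, assuming the shortint prelude exposes `gen_keys`, the 2_2 parameter alias and the encrypt/decrypt helpers at this point in the history; exact paths and signatures may differ.

```rust
// Assumed imports: exact module paths may differ between versions.
use tfhe::shortint::prelude::*;

fn main() {
    let (cks, sks) = gen_keys(PARAM_MESSAGE_2_CARRY_2);

    let ct = cks.encrypt(3);

    // Formerly `generate_accumulator`: build a LookupTable for f(x) = x^2 % 4.
    let lut = sks.generate_lookup_table(|x| (x * x) % 4);

    // The name now matches the function it is used with.
    let ct_res = sks.apply_lookup_table(&ct, &lut);

    assert_eq!(cks.decrypt(&ct_res), (3 * 3) % 4);
}
```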
tmontaigu
ca127b2878 refactor(all): remove PBSOrder generic marker
The motivation for this change is that having Big/Small
as a generic parameter of types makes the code more complex
than it should be.

Also, it was not complete: not all structs had this generic parameter,
making the whole thing a bit clunkier.
2023-06-19 15:16:09 +02:00
Arthur Meyre
80b5ce7f63 chore(core): remove Deref and DerefMut for Polynomial
- add indexing and iteration primitives
2023-06-19 13:21:39 +02:00
dependabot[bot]
2469aa0c2a chore(deps): bump actions/checkout from 3.5.2 to 3.5.3
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.5.2 to 3.5.3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3.5.2...c85c95e3d7251135ab7dc9ce3241c5835cc595a9)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-19 11:47:07 +02:00
Arthur Meyre
48b307c627 chore(ci): fix wasm test docker 2023-06-19 11:46:52 +02:00
Arthur Meyre
8100b2d0de chore(ci): skip super slow integer tests in CI 2023-06-19 09:25:29 +02:00
Arthur Meyre
aab390470c chore(ci): add PR template 2023-06-19 09:25:16 +02:00
aquint-zama
f8f723f42d chore(readme): add citing tfhe-rs section 2023-06-16 14:14:13 +02:00
Arthur Meyre
1afdc71689 chore(tfhe): bump version for pre-release 2023-06-16 14:12:39 +02:00
tmontaigu
120f6b0304 feat(integer): add scalar comparisons
This adds scalar comparisons and min/max

- eq (==) and ne (!=) have similar performance
  to the non-scalar version, only gaining a few milliseconds
  when the scalar value is smaller than the encrypted value

- orderings (<, >, <=, >=) show a more interesting
  performance gain the smaller the scalar value
  is compared to the number of encrypted bits in the
  integer ciphertext.

  e.g.:
  comparing an encrypted U256 to a value <= u128::MAX
  brings the comparison time from 234 ms to 194 ms

  comparing an encrypted U256 to a value <= u64::MAX
  brings the comparison time from 234 ms to 169 ms
2023-06-16 10:41:39 +02:00
Petar Ivanov
1d817c45d5 chore(makefile): add experimental deterministic FFT target for the C API
`Duration` is only needed when `experimental-force_fft_algo_dif4` is
set. We add a cfg directive to avoid a compiler warning.
2023-06-15 16:14:11 +03:00
David Testé
5690796da4 chore(ci): write boolean keys results file at the correct location 2023-06-14 18:47:32 +02:00
twiby
00c4eb417b fix(shortint): key switching key doctests 2023-06-14 18:45:54 +02:00
David Testé
42374db7cb chore(ci): use correct path to wasm pk generation result file 2023-06-14 15:57:59 +02:00
twiby
f98127498e feat(integer): add CastingKey struct to allow users to switch between integer parameter sets.
Possibilities are limited: you can only cast to a parameter set with the exact same representation, i.e. the same message and carry size, the same number of blocks in a radix representation, and the same basis in a CRT representation.
2023-06-14 13:53:12 +02:00
twiby
d70729668e feat(shortint): add CastingKey struct to allow users to change server_key during a server circuit 2023-06-14 13:53:12 +02:00
twiby
8d339f2fbf feat(boolean): add CastingKey struct to allow users to change server_key during a server circuit 2023-06-14 13:53:12 +02:00
David Testé
c4a73f4f44 chore(ci): use correct relative path to wasm bench directory
The parsing program uses tfhe/ as its working directory. Thus, providing
a relative path starting with tfhe/ would result in an error while
trying to walk the directory.
2023-06-14 09:32:36 +02:00
tmontaigu
bd061dc85c refactor(hlapi): remove ServerKeyOp trait and macros
Since 8b3d31ae8a
integers use the same server key, so
the `GenericIntegerServerKey<P>` type was removed.

Since `GenericIntegerServerKey<P>` does not exist,
there is much less need for the collection
of server key traits (ServerKeyAdd, ServerKeySub).

This commit removes them, as well as the macro layer
that was used to implement them.
2023-06-13 12:57:03 +02:00
tmontaigu
2defb5a669 feat(c_api): safer destroys
This adds a null check in the different destroy functions
of the C API. The performance impact of this should
be negligible / nonexistent, but it should make usage more
ergonomic and safer.

Also, this renames the destroy functions from
`destroy_shortint_*` | `destroy_boolean_*` to
`shortint_destroy_*` | `boolean_destroy_*`, as it is more
coherent with the rest of the API, where functions start with `shortint`
or `boolean`.
2023-06-13 09:38:56 +02:00
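An illustration of the general "safer destroy" shape in a Rust-backed C API; the type and symbol names below are placeholders, not the actual tfhe C API.

```rust
// Illustrative only: a destroy entry point that treats a null pointer as a
// no-op (reported via a return code) instead of undefined behaviour.
pub struct ShortintCiphertext {
    _data: Vec<u64>,
}

/// # Safety
/// `ptr` must be null or a pointer previously handed out by this library.
#[no_mangle]
pub unsafe extern "C" fn shortint_destroy_ciphertext(ptr: *mut ShortintCiphertext) -> i32 {
    if ptr.is_null() {
        // Destroying a null pointer is now harmless.
        return 1;
    }
    // Reclaim ownership and drop the boxed value.
    drop(Box::from_raw(ptr));
    0
}

fn main() {
    // Calling with null no longer crashes; it just reports it.
    assert_eq!(unsafe { shortint_destroy_ciphertext(std::ptr::null_mut()) }, 1);
}
```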
David Testé
1b0f3631d4 chore(ci): fix wasm benchmarks and boolean keys measurement
Now use the CI version of the make recipe to run WASM client
benchmarks. In addition, boolean keys and wasm parsing are fixed
so that the benchmarks_parameters directory is created and populated
under the tfhe directory.
2023-06-12 18:30:36 +02:00
David Testé
06ddfe893a chore(ci): notify slack channel in case of benchmarks failure 2023-06-12 15:20:42 +02:00
David Testé
b69c9e7e7a chore(ci): create profile for wasm client bench and add in workflow 2023-06-12 15:20:13 +02:00
David Testé
18ed2e29a1 chore(ci): create fast feedback unit test profile
This is done to get quick feedback to developers in a Pull Request.
It tests the shortint level with only three sets of parameters. The integer
level is tested with only the default operations with two sets of
parameters.
This profile will be automatically triggered on each push in a
pull request. Conversely, the full suite of tests will also be
triggered automatically, but only once the review is approved.
2023-06-12 15:19:56 +02:00
Arthur Meyre
3a17ebd2fa feat(c_api): add entry point to generate LWE multi bit BSK 2023-06-12 14:18:13 +02:00
Agnes Leroy
dd15fd1b05 fix(core): fix multi bit bsk number of GGSWs check 2023-06-12 14:18:13 +02:00
dependabot[bot]
097ea6500c chore(deps): bump actions/checkout from 3.5.2 to 3.5.3
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.5.2 to 3.5.3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8e5e7e5ab8...c85c95e3d7)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-12 13:51:34 +02:00
Arthur Meyre
9e307a8945 chore(hlapi): add example to measure CPK and CCTL sizes
This also includes key generation time on the WASM web client side
2023-06-12 11:41:21 +02:00
Arthur Meyre
f8b497a4b8 chore(ci): fix integer bench workflow uploads 2023-06-12 10:23:13 +02:00
tmontaigu
189f02b696 refactor(hlapi): simplify wrapping of booleans
The way the boolean type was
wrapped was done the same way
shortints and integers were wrapped.

This was so that the internal code was
consistent despite not needing the same complexity.

Now that integers are wrapped differently, it makes
sense to remove the consistency constraint and
simplify the way booleans are wrapped in the HLAPI.
2023-06-09 17:04:29 +02:00
tmontaigu
5654fe7981 feat(integer): scalar_mul generic over UnsignedInteger
This makes the scalar_mul family of operations
accept any scalar of type T that implements
the UnsignedInteger trait.

This unlocks scalar multiplication with
the scalar being a u128.
2023-06-09 15:21:24 +02:00
Arthur Meyre
2b83a1fec0 chore(ci): add a parser to output csv files for integer benchmarks
- will simplify "just to see" benchmarks output parsing to share when
iterating on performance work
2023-06-09 14:27:49 +02:00
tmontaigu
efac3c842f feat(c_api): add #[repr(C)] for boolean parameters
The same thing was done in 3508019cd2
for shortint.

This does it for booleans
2023-06-09 11:53:22 +02:00
Arthur Meyre
7dbb4485bc chore(shortint): use the right noise at encryption time 2023-06-09 10:49:17 +02:00
aquint-zama
a5906bb7cb chore(tfhe): add a Code of Conduct 2023-06-08 14:06:29 +02:00
Jeremy Shulman
90b7494acd chore(doc): attach tutorials to doc 2023-06-08 14:05:46 +02:00
Arthur Meyre
3508019cd2 feat(core): Add Compact Public Key
- Based on "TFHE Public-Key Encryption Revisited "
  https://eprint.iacr.org/2023/603.pdf

Co-authored-by: tmontaigu <thomas.montaigu@laposte.net>
2023-06-07 19:47:50 +02:00
Arthur Meyre
200c8a177a feat(core): add std multi-bit bootstrapping 2023-06-07 16:12:37 +02:00
Arthur Meyre
2f6c1cf0b5 chore(ci): add docs alias make target for doc 2023-06-07 14:18:49 +02:00
tmontaigu
b96027f417 feat(integer): improve default sub latency 2023-06-07 11:04:11 +02:00
tmontaigu
90c850ca0d feat(integer): improve scalar add,sub and negation
- scalar_add now uses the same parallel carry propagation algorithm
  as the add function.

- scalar_sub now uses the same parallel carry propagation algorithm
  as the sub function.

- the 'default' negation function uses the now improved scalar_add
  to be faster

- unchecked_scalar_add, smart_scalar_add, checked_scalar_add, scalar_add
  have been updated to work on a generic scalar type, so they should work
  on u32, u64, u128, U256, etc.

- unchecked_scalar_sub, smart_scalar_sub, checked_scalar_sub, scalar_sub
  have been updated to work on a generic scalar type, so they should work
  on u32, u64, u128.
  As U256 does not yet implement the UnsignedInteger trait, it's not
  usable yet as a scalar type for the sub operation.

- The HLAPI is still locked to u64 scalars, it will be updated
  when most / all scalar ops are ready
2023-06-06 19:56:56 +02:00
Arthur Meyre
c8d3008a8d chore(shortint): proper ThreadCount serialization for bootstrapping key
- skip thread_count on serialization, deserialize using the function to
properly populate thread_count
2023-06-06 16:58:23 +02:00
David Testé
08c264f193 chore(ci): put wasm tests in their own workflow
This is mostly done to avoid failures in the AWS tests (core, boolean,
shortint, ...) workflow due to flaky WASM tests.
2023-06-06 14:02:52 +02:00
twiby
4ae202d8a4 refactor(tfhe): provide CiphertextBase with functions to convert from a generic type OpOrder to a specific struct.
This allows removing all calls to std::mem::transmute in shortint/engine/server_side/mod.rs, isolating unsafe blocks in the conversion functions. This makes the code safer and more likely to panic! in case of an error.
2023-06-06 12:19:56 +02:00
dependabot[bot]
7eb8601540 chore(deps): bump JS-DevTools/npm-publish from 2.1.0 to 2.2.0
Bumps [JS-DevTools/npm-publish](https://github.com/JS-DevTools/npm-publish) from 2.1.0 to 2.2.0.
- [Release notes](https://github.com/JS-DevTools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](541aa6b21b...a25b4180b7)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-06 10:23:30 +02:00
tmontaigu
8a1691c536 chore(wasm): remove serialization in web test
In the web wasm test we serialize the public key
to print its size (38_931_6265 bytes); this
means we hold the public key twice in RAM.

I suspect this causes frequent out-of-memory
errors, which then result in the
test timing out.

So we remove that, hoping it has a positive impact.
Arthur Meyre
d1cb55ba24 chore(tfhe): add multi bit shortint and integer tests
- default tests do not run multi bit PBS as it's not yet deterministic
- only radix parallel currently uses multi bit pbs in integer
- remove determinism checks for some unchecked ops
- 4_4 multi bit parameters are disabled for now as they seem to introduce
too much noise
2023-06-02 16:00:28 +02:00
Arthur Meyre
2b9a49db87 chore(tfhe): switch to using Into for PBS parameters conversion
- it seems generally better for some "Self conversions", i.e. Into<A> for A
seems to work better than From<A> for A
2023-06-02 16:00:28 +02:00
Arthur Meyre
62ddb24f00 chore(ci): add multibit to key cache generation 2023-06-02 16:00:28 +02:00
Arthur Meyre
c6ae463b41 feat(shortint): add the possibility to use multi bit PBS 2023-06-02 16:00:28 +02:00
tmontaigu
4947eefad4 fix(u256): align with rust for shift behaviours 2023-06-02 12:00:42 +02:00
tmontaigu
71209e3927 feat(integer): make scalar shift match rust when shift >= bit size
When the scalar value denoting the shift was greater than or equal to
the total number of bits in the ciphertext, we would return zeros.

To better match the Rust behaviour, as well as the behaviour of
the non-scalar shift / rotate, the scalar shift will now discard
any higher bits of the clear shift value.
2023-06-02 11:35:54 +02:00
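A clear-value illustration of the behaviour being matched: Rust masks the shift amount by the bit width, so the encrypted scalar shift now does the equivalent with the clear shift value.

```rust
// Illustrative only: a shift of 70 on a 64-bit value behaves like a shift of
// 70 % 64 = 6 instead of producing zero.
fn main() {
    let x: u64 = 0b1011;
    let shift = 70u32;

    // Keep only the low bits of the shift amount, like the encrypted version.
    let effective = shift % u64::BITS;
    assert_eq!(effective, 6);

    // `wrapping_shl` applies exactly this masking.
    assert_eq!(x.wrapping_shl(shift), x << effective);
}
```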
tmontaigu
2a66ea3d16 feat(integer): add shifts and rotates on encrypted values
This implementation is based on barrel shifters,
which are used in hardware.
2023-06-02 11:35:54 +02:00
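A clear-value barrel shifter, as an illustration of the hardware-style construction mentioned above; in the encrypted version each conditional stage becomes a homomorphic selection driven by one bit of the encrypted shift amount.

```rust
// Illustrative only: the shift amount is consumed bit by bit and each stage
// conditionally shifts by a power of two.
fn barrel_shift_left(value: u64, amount: u32) -> u64 {
    let mut result = value;
    // 6 stages are enough for a 64-bit value (2^6 = 64 possible amounts).
    for stage in 0..6 {
        let bit = (amount >> stage) & 1;
        let shifted = result << (1 << stage);
        // Select the shifted or unshifted value depending on this bit.
        result = if bit == 1 { shifted } else { result };
    }
    result
}

fn main() {
    assert_eq!(barrel_shift_left(3, 5), 3 << 5);
    assert_eq!(barrel_shift_left(1, 63), 1 << 63);
}
```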
tmontaigu
d4ff1f5595 feat(wasm): add parallelism in wasm API and add wasm for HLAPI
Co-authored-by: David Testé <david.teste@zama.ai>
2023-06-02 11:13:12 +02:00
Arthur Meyre
8ae92a960d chore(ci): add multibit workflow 2023-06-02 08:55:42 +02:00
tmontaigu
b042c2f7d6 refactor(integer): improve decomposition/recomposition into blocks
This new implementation should hopefully be a little bit easier to understand.

More importantly, it is more general/generic:
the previous implementation required the input type to be describable as u64 words,
while the new one works for any type (as long as the needed traits are implemented).

Also, the new implementation is separated from the encryption code,
meaning it will be usable by scalar operations, which will allow us
to deduplicate code and start making scalar ops support scalar values
that are on more than 64 bits.
2023-06-01 18:13:34 +02:00
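An illustrative clear-value round trip of the block decomposition/recomposition this refactor generalises; the function names below are ours, not the crate's.

```rust
// Illustrative only: split a clear value into radix blocks and put it back
// together, the kind of round trip generalised beyond "u64-word" types.
fn decompose(mut value: u128, bits_per_block: u32, num_blocks: usize) -> Vec<u64> {
    let mask = (1u128 << bits_per_block) - 1;
    let mut blocks = Vec::with_capacity(num_blocks);
    for _ in 0..num_blocks {
        blocks.push((value & mask) as u64);
        value >>= bits_per_block;
    }
    blocks
}

fn recompose(blocks: &[u64], bits_per_block: u32) -> u128 {
    // Fold from the most significant block down, shifting as we go.
    blocks
        .iter()
        .rev()
        .fold(0u128, |acc, &b| (acc << bits_per_block) | b as u128)
}

fn main() {
    let value = 0xDEAD_BEEFu128;
    let blocks = decompose(value, 2, 16); // 16 blocks of 2 bits = 32 bits
    assert_eq!(recompose(&blocks, 2), value);
}
```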
tmontaigu
e307da5c7f feat(integer): make eq (==) faster and add ne (!=) 2023-05-31 19:03:02 +02:00
Arthur Meyre
3d5b88d608 chore(core): encode the proper expectation wrt to ciphertext modulus
- we don't manage any non native moduli but rather native-compatible moduli
so update the asserts accordingly
2023-05-30 15:39:14 +02:00
Arthur Meyre
4fbf0691c5 chore(core): rename get_scaling_to_native_torus
- function now named get_power_of_two_scaling_to_native_torus to emphasize
it's reserved to power of 2 moduli
2023-05-30 15:39:14 +02:00
Arthur Meyre
5d277e85b9 feat(core): add non native decomposer 2023-05-30 15:39:14 +02:00
Arthur Meyre
778eea30e9 chore(tfhe): remove anyhow, just use Box<dyn std::error::Error> 2023-05-30 11:55:43 +02:00
tmontaigu
63247fa227 chore(sha256_example): use array_fn 2023-05-25 00:22:01 +02:00
David Testé
799291a1f0 docs(tfhe): format sha256_bool and add make recipes to run it 2023-05-25 00:22:01 +02:00
Sexosexosexo
509fe7a63e docs(tfhe): add boolean sha256 tutorial
Clap dev dependency added
2023-05-25 00:22:01 +02:00
tmontaigu
4eac45f0c6 fix(dark_market): fix change cwd logic 2023-05-24 23:30:26 +02:00
David Testé
ddb3451087 docs(tfhe): format dark market example add make recipe to run it 2023-05-24 23:30:26 +02:00
Yagiz Senal
e66a329e33 docs(tfhe): add dark market tutorial 2023-05-24 23:30:26 +02:00
David Testé
d79b1d9b19 docs(tfhe): format regex_engine and add make recipes to run it 2023-05-24 22:11:53 +02:00
Rick Klomp
b501cc078a docs(tfhe): add FHE Regex Pattern Matching Engine
this includes a tutorial and an example implementation for the regex bounty
2023-05-24 22:11:53 +02:00
tmontaigu
800878d89e feat(hlapi): add CompressedPublicKey decompression 2023-05-23 14:19:35 +02:00
tmontaigu
20d0e81bae feat(boolean): add CompressedPublicKey 2023-05-19 19:07:16 +02:00
tmontaigu
d3dbf4ecc9 feat(integer): allow decompressing CompressedPublicKey 2023-05-19 15:32:25 +02:00
tmontaigu
c20ca07cd3 chore(ci): reduce number of test-threads
Reduce the number of test-threads being spawned
to reduce the probability of tests getting killed due
to out-of-memory.
2023-05-17 15:58:27 +02:00
tmontaigu
9f6c7e9139 feat(hlapi): add CompressedServerKey
Now that WopPBS keys are optional in the hlapi
we can have a CompressedServerKey.
If a user tries to create a CompressedServerKey
but has enabled function evaluation on integers
(WopPBS), then it will panic, as WopPBS keys are not yet compressible.
And 'stuffing' the non-compressed wopbs key into the
compressed server key would defeat the purpose of the
compressed server key, as the WopPBS key makes up
the vast majority of the space used.

Also, having a CompressedServerKey is required to
be able to have a wasm API for the hlapi,
as wasm cannot generate a normal server key.
2023-05-17 11:15:37 +02:00
David Testé
3c8d6a6f8b chore(ci): handle aws tests in pull request from forked repository 2023-05-17 08:42:19 +02:00
Arthur Meyre
1c837fa6f0 test(core): add normality test based on Shapiro-Francia 2023-05-16 10:12:28 +02:00
tmontaigu
1ec7e4762a feat(integer): make wopbs compile on wasm
The goal here is just to make the code compile
and not allow js api to generate wopbs key yet.
2023-05-15 22:06:36 +02:00
tmontaigu
20fb697d57 refactor(hlapi): disable WopPBS by default in hlapi
In the HLAPI, WopPBS is enabled by default,
meaning the WopPBS key is generated when integers
are enabled.

This is not really good, as the wopbs key is huge
(~700MB with PARAM_2_2) and only used for function evaluation,
which does not scale for all types exposed by the hlapi
and is still a bit experimental, so not really advertised in the docs.

Also, keys for wopbs are not compressible yet
(that is why the HLAPI does not yet have a CompressedServerKey).

So disabling wopbs by default will make it possible to have a compressed server
key that actually compresses things.
2023-05-15 19:01:53 +02:00
tmontaigu
0429d56cf3 chore(U256): add small tests 2023-05-15 11:40:44 +02:00
tmontaigu
509bf3e284 docs(bench): update results of benchmarks in the docs 2023-05-12 21:58:47 +02:00
Arthur Meyre
b2fc1d5266 refactor(shortint): make a difference between PBS and Wopbs parameters
- preparatory work to manage several PBS implementations and harmonize
parameters management

BREAKING CHANGE:
- parameters structures changed
- gen_keys for integer now takes parameters by value to uniformize with
shortint
2023-05-12 17:20:05 +02:00
Arthur Meyre
62d94dbee8 chore(tfhe): fix double Example heading in docstring 2023-05-12 17:20:05 +02:00
Agnes Leroy
fbe911d7db chore(tfhe): hard set number of threads to 10 for the multi-bit PBS
It's the optimal value measured on an m6i.metal instance where we run the benchmarks
2023-05-12 15:12:11 +02:00
tmontaigu
ba72faf828 chore(readme): remove non-needed mut in boolean example 2023-05-11 22:25:12 +02:00
tmontaigu
c387b9340f feat(integer): improve mul and scalar mul
This improves the mul and scalar_mul algorithms
to be faster.

The improvement is made within the code
that was responsible for summing up all
the terms, by making better use of carries
and avoiding unnecessary propagations.

The scalar mul forwards the call to a left shift
when the scalar is a power of two, as it just costs one
PBS so it will always be faster.
For 64-bits, target-cpu=native + avx512:
- mul before: 3.4s
- mul after: 900ms
2023-05-11 14:07:11 +02:00
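A clear-value illustration of the power-of-two special case mentioned above; the helper name is ours.

```rust
// Illustrative only: multiplying by a power of two is exactly a left shift,
// which in the encrypted setting is much cheaper than a full multiplication.
fn scalar_mul(value: u64, scalar: u64) -> u64 {
    if scalar.is_power_of_two() {
        // ilog2 gives the shift amount for a power of two.
        value << scalar.ilog2()
    } else {
        value.wrapping_mul(scalar)
    }
}

fn main() {
    assert_eq!(scalar_mul(7, 8), 56);
    assert_eq!(scalar_mul(7, 10), 70);
}
```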
Arthur Meyre
cbb7d30fb8 chore(core): avoid having branching depending on secret values in PKE 2023-05-11 11:12:43 +02:00
David Testé
6e4a707eff chore(ci): compute throughput as operations per second
Since most of the operations take over 1 ms, there is no point
in computing the number of operations per millisecond.
2023-05-11 08:58:45 +02:00
tmontaigu
06b700f904 feat(integer): improve parallel algorithms for add/sub
This adds fully parallel algorithms for the addition and subtraction.

These algorithms take ciphertexts with clean carries and
return a sum ciphertext that also has clean carries.

The carries are propagated in parallel, using
parallel algorithms for prefix sum / cumulative sum.

One is based on Hillis and Steele; it is the fastest
but uses a lot of threads.

The other is based on Blelloch; it requires fewer threads but is a
bit slower.

256-bit addition using param_2_2 goes down from ~2.7s to:
- 364ms using Hillis and Steele
- 474ms using Blelloch

The commit also adds bitwise not, as it is necessary for the
subtraction.
2023-05-10 14:43:40 +02:00
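A clear-value illustration of the Hillis and Steele scan pattern mentioned above, applied here to a plain prefix sum; the addition circuit scans per-block carry information instead, but the log-depth structure is the same.

```rust
// Illustrative only: each round doubles the stride; in a parallel
// implementation every element of a round is computed concurrently,
// here we simulate the rounds sequentially.
fn hillis_steele_scan(input: &[u64]) -> Vec<u64> {
    let mut current = input.to_vec();
    let mut stride = 1;
    while stride < current.len() {
        let previous = current.clone();
        for i in stride..current.len() {
            current[i] = previous[i] + previous[i - stride];
        }
        stride *= 2;
    }
    current
}

fn main() {
    let prefix = hillis_steele_scan(&[1, 2, 3, 4, 5]);
    assert_eq!(prefix, vec![1, 3, 6, 10, 15]);
}
```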
dependabot[bot]
cfbabf7480 chore(deps): bump JS-DevTools/npm-publish from 2.0.0 to 2.1.0
Bumps [JS-DevTools/npm-publish](https://github.com/JS-DevTools/npm-publish) from 2.0.0 to 2.1.0.
- [Release notes](https://github.com/JS-DevTools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](0be441d808...541aa6b21b)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-05-09 11:15:16 +02:00
tmontaigu
291ed9026f feat(hlapi): add casting between integer types
This adds the casting of integer types.

Downcasting truncates blocks.
Upcasting appends 0s

Casting is done via the introduced `cast_from` associated
function and the `cast_into` method. They are the equivalent of
the `From` and `Into` traits.

It was not possible to implement casting by implementing the
standard `From` and `Into` traits as initially planned because:

```rust
impl<P1, P2> From<GenericInteger<P1>> for GenericInteger<P2>
where P1: IntegerParameter,
      P2: IntegerParameter, {
    fn from(_: GenericInteger<P1>) -> Self {
        todo!()
    }
}
```

As it conflicts with the blanket impl found in the stdlib,
`impl<T> From<T> for T;`, since P1 and P2 may be the same and we have no way
of telling the compiler to consider this impl only when P1 != P2.
So we had to create our own equivalents.
2023-05-05 21:43:10 +02:00
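A sketch of the workaround shape described above, with our own illustrative `CastFrom`/`CastInto` traits and a toy `Radix` type; it shows why locally defined traits avoid the stdlib blanket impl conflict, and is not the crate's actual definition.

```rust
// Illustrative only: because we control these traits, we can implement them
// even when the source and destination parameters could be equal, which
// `From`/`Into` forbid via `impl<T> From<T> for T`.
trait CastFrom<T> {
    fn cast_from(value: T) -> Self;
}

trait CastInto<T> {
    fn cast_into(self) -> T;
}

// Blanket-implement CastInto in terms of CastFrom, mirroring From/Into.
impl<T, U> CastInto<U> for T
where
    U: CastFrom<T>,
{
    fn cast_into(self) -> U {
        U::cast_from(self)
    }
}

struct Radix<const BLOCKS: usize> {
    blocks: Vec<u8>,
}

// Casting between any two block counts, including equal ones, is allowed.
// Downcasting truncates blocks, upcasting pads with zero blocks.
impl<const A: usize, const B: usize> CastFrom<Radix<A>> for Radix<B> {
    fn cast_from(value: Radix<A>) -> Self {
        let mut blocks = value.blocks;
        blocks.resize(B, 0);
        Self { blocks }
    }
}

fn main() {
    let small = Radix::<4> { blocks: vec![1, 2, 3, 4] };
    let big: Radix<8> = small.cast_into();
    assert_eq!(big.blocks, vec![1, 2, 3, 4, 0, 0, 0, 0]);
}
```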
tmontaigu
610f0010b8 refactor(hlapi): remove RefCell used to wrap integers
Our integer types are based on tfhe::integer.

Originally, operators (+, -, *, <<, etc) were mapped to "smart" operations
until commit "ee96a0ff185fedb9c4467a5b0c8195798c30b19f" where we swapped
to "default" ops.

The motivation for swapping from smart to default was that default ops'
timing is always the same, so it's easier to predict and reason about when
comparing with available benchmarks.
However, they may give worse performance depending
on the computations being done (addition/subtraction heavy or not).

In the High level API, we overloaded operators on const ref e.g. `&a + &b`
but as we initially mapped to smart operations we needed interior mutability.
RefCell was chosen, mainly because using Mutexes would have allowed
users to write "fake" parallel code, that is, the code compiles but is not
truly parallel due to the mutexes. RefCell makes writing parallel code
harder, but if you manage to do it, it's truly parallel.

After moving to default ops for the hlapi, we kept the inner RefCells
to allow time to decide to retract the change.

The final choice is to keep default ops as the default, so that means
we can remove the RefCells.

Entry points to smart operations will be added later to enable
'power users' to explicitly try them and see if they bring
improvement(s) for them.
Letting advanced users explicitly handle the mutability of smart
operations and choose their synchronisation is likely a better
choice than making it for them, as it's an important choice.
2023-05-05 18:21:43 +02:00
tmontaigu
8b3d31ae8a refactor(hlapi): use one unique key for integers
This refactor of the inner workings of the High Level API
makes it so that all integer types share and use the same key
to encrypt their blocks

Before this commit, users that wanted to use integers via the
hlapi needed to select a type amongst the ones available
and enable it (e.g. I want to use 16-bit integers so I call
enable_default_uint16).

This meant that if users wanted to use many integer types
they would have to manually enable them. Since each type had its
own key, it meant they were completely separate (no casting possible)
and the different key types `ClientKey`, `ServerKey` would become very
big. So in practice you would stick to one and only one integer type in your
program.

With these changes, users that wish to use integers will just need
to enable them (enable_default_integers()), and will get access to
all statically defined integer types (FheUint8, FheUint16, etc)
at the cost of one key for all of these types.

This reduces complexity and memory footprint, and
will enable later commits to introduce the ability to cast between
integer types.

BREAKING CHANGES:
 - Serialized keys are not backward compatible
 - enable_default_uint[8,16,etc] become enable_default_integers
2023-05-05 16:05:58 +02:00
sarah el kazdadi
98539aaa61 fix(pbs): fix bug in rounding code in f128 pbs 2023-05-05 15:32:59 +02:00
Arthur Meyre
9a80a01dc3 feat(integer): add trim/extend APIs for radix ciphertexts 2023-05-04 09:46:06 +02:00
tmontaigu
ecf9d50058 feat(integer): add parallelized scalar rotate_left/right
Like shifts, rotates are implemented by combining
a rotation of the blocks and bivariate PBSs in case
the rotation amount `n` is not a multiple of the number
of bits in a block.

Since the behaviour of rotations is to 'cycle' bits
back to the end/beginning of the 'bit slice' (i.e. no bit is ever lost like
it can be with shifts), the performance is always the same when
(n % nb_bits_in_block) != 0. However, the implementation is simpler.

So, assuming a machine where the number of threads
is >= the ciphertext's number of blocks, the operation
costs one bivariate PBS.
2023-05-02 17:52:44 +02:00
Arthur Meyre
65e4aab38d chore(shortint): fix docstrings which were mixing big and small key params 2023-05-02 17:21:39 +02:00
Arthur Meyre
ac348870ba refactor(shortint): add encryption key choice in parameters
BREAKING CHANGE:
- Parameters layout change
- C API removal of SHORTINT_NATIVE_MODULUS which was a leftover from
a refactor
2023-05-02 17:21:39 +02:00
Arthur Meyre
6adfcaa5f7 chore(tfhe): bump version to 0.3.0 2023-05-02 11:10:14 +02:00
Arthur Meyre
bc6bbe66d9 chore(shortint): fix some wopbs function signature 2023-05-02 11:10:04 +02:00
Arthur Meyre
871d4aea17 refactor(core): refactor CiphertextModulus to be less error prone 2023-04-28 16:40:33 +02:00
Arthur Meyre
f81376b762 chore(ci): start benches only on our repo 2023-04-28 11:17:31 +02:00
Arthur Meyre
64813bae18 chore(tfhe): as seen there are uses of ilog2 which come from rust 1.67 2023-04-28 11:01:06 +02:00
Arthur Meyre
16ce2a8a3f refactor(wopbs): manage LUTs for wopbs to avoid copies 2023-04-28 09:45:11 +02:00
tmontaigu
f018987eac feat(integer): improve scalar shifts performances
This reworks the left/right scalar shift method.

This new implementation takes more advantage of the
radix representation property and uses a combo
of shifting/moving blocks via a rotate_left/right,
optionally followed by in-block shifting and propagation
to other blocks.

This new implementation requires the carries to be empty
(not sure what the preconditions were for the previous implementation)
and the output will also have clean carries.
Requiring empty carries allows doing the shift in a way
that scales well with the number of blocks, as we can use truly parallel
operations.

This means that the time required to shift depends
on the shift value, which should not be a security problem
as it is a clear value. (The previous implementation also
had timing that depended on the shift value.)

There are two possible scenarios:
- The shift only requires moving blocks -> it's fast
- The shift requires moving + in-block shifting -> slower,
  but still faster than the previous implementation.

The worst case is when the shift is less than the number of bits
in a block.

This also changes the type of the `shift` parameter from
`usize` to `u64` to be consistent with other scalar operations.

The following pseudo-bench code on 64 bits
gives the following time ranges:

With changes:
```
unchecked_left: BenchStat {
    min: 16.743µs
    max: 526.518563ms
    mean: 150.77871ms
}
unchecked_left_parallelized: BenchStat {
    min: 17.291µs
    max: 94.408455ms
    mean: 30.092279ms
}
unchecked_right: BenchStat {
    min: 16.723µs
    max: 548.417332ms
    mean: 160.345234ms
}
unchecked_right_parallelized: BenchStat {
    min: 16.978µs
    max: 97.955322ms
    mean: 33.500562ms
}
Measured in 37.890296743s
```

Previous code:
```
unchecked_left: BenchStat {
    min: 1.055401595s
    max: 1.156574075s
    mean: 1.085648592s
}
unchecked_left_parallelized: BenchStat {
    min: 559.636545ms
    max: 630.83338ms
    mean: 584.747893ms
}
unchecked_right: BenchStat {
    min: 1.055041354s
    max: 2.314891255s
    mean: 1.644513996s
}
unchecked_right_parallelized: BenchStat {
    min: 562.017144ms
    max: 1.275945891s
    mean: 894.812286ms
}
Measured in 421.412913883s
```

```rust
use rand::Rng;
use std::time::Instant;
use tfhe::integer::{gen_keys, RadixClientKey};
use tfhe::shortint::parameters::PARAM_MESSAGE_2_CARRY_2;

const NB_CTXT: usize = 32;
const NB_TEST: usize = 100;

// Default gives None for min/max and zero for sum/count.
#[derive(Default)]
struct BenchStat {
    min: Option<std::time::Duration>,
    max: Option<std::time::Duration>,
    sum: std::time::Duration,
    count: u32,
}

impl BenchStat {
    fn update(&mut self, elapsed: std::time::Duration) {
        if self.min.is_none() {
            self.min = Some(elapsed);
        } else {
            self.min = self.min.map(|l| l.min(elapsed));
        }
        if self.max.is_none() {
            self.max = Some(elapsed);
        } else {
            self.max = self.max.map(|l| l.max(elapsed));
        }
        self.sum += elapsed;
        self.count += 1;
    }

    fn print(&self) {
        println!("BenchStat {{");
        println!("    min: {:?}", self.min.unwrap());
        println!("    max: {:?}", self.max.unwrap());
        println!("    mean: {:?}", self.sum / self.count);
        println!("}}");
    }
}

type ShiftType = u64;

fn main() {
    let mut unchecked_left_timing = BenchStat::default();
    let mut unchecked_left_parallelized_timing = BenchStat::default();
    let mut unchecked_right_timing = BenchStat::default();
    let mut unchecked_right_parallelized_timing = BenchStat::default();

    let total = Instant::now();

    let param = PARAM_MESSAGE_2_CARRY_2;
    let (cks, sks) = gen_keys(&param);
    let cks = RadixClientKey::from((cks, NB_CTXT));

    let mut rng = rand::thread_rng();

    //Nb of bits to shift
    let tmp_f64 = param.message_modulus.0 as f64;
    let nb_bits = tmp_f64.log2().floor() as usize * NB_CTXT;
    let modulus = (param.message_modulus.0 as u128).pow(NB_CTXT as u32);
    assert_eq!(nb_bits, 64);

    for i in 0..NB_TEST {
        println!("{} / {NB_TEST}", i + 1);
        let clear = rng.gen::<u128>() % modulus;
        let scalar = rng.gen::<u128>() % nb_bits as u128;

        println!("clear: {clear}, scalar: {scalar}");

        let ct = cks.encrypt(clear);

        {
            let before = Instant::now();
            let ct_res = sks.unchecked_scalar_left_shift(&ct, scalar as ShiftType);
            unchecked_left_timing.update(before.elapsed());
            //assert!(ct_res.block_carries_are_empty());
            let dec_res: u128 = cks.decrypt(&ct_res);
            assert_eq!((clear << scalar) % modulus, dec_res);

            let before = Instant::now();
            let ct_res = sks.unchecked_scalar_left_shift_parallelized(&ct, scalar as ShiftType);
            unchecked_left_parallelized_timing.update(before.elapsed());
            //assert!(ct_res.block_carries_are_empty());
            let dec_res: u128 = cks.decrypt(&ct_res);
            assert_eq!((clear << scalar) % modulus, dec_res);
        }

        {
            let before = Instant::now();
            let ct_res = sks.unchecked_scalar_right_shift(&ct, scalar as ShiftType);
            unchecked_right_timing.update(before.elapsed());
            // assert!(ct_res.block_carries_are_empty());
            let dec_res: u128 = cks.decrypt(&ct_res);
            assert_eq!((clear >> scalar) % modulus, dec_res);

            let before = Instant::now();
            let ct_res = sks.unchecked_scalar_right_shift_parallelized(&ct, scalar as ShiftType);
            unchecked_right_parallelized_timing.update(before.elapsed());
            //assert!(ct_res.block_carries_are_empty());
            let dec_res: u128 = cks.decrypt(&ct_res);
            assert_eq!((clear >> scalar) % modulus, dec_res);
        }
    }

    print!("unchecked_left: ");
    unchecked_left_timing.print();
    print!("unchecked_left_parallelized: ");
    unchecked_left_parallelized_timing.print();

    print!("unchecked_right: ");
    unchecked_right_timing.print();
    print!("unchecked_right_parallelized: ");
    unchecked_right_parallelized_timing.print();

    println!("Measured in {:?}", total.elapsed());
}
```

BREAKING CHANGE: parameter type changed from usize to u64
2023-04-27 17:18:55 +02:00
Arthur Meyre
20f6c5419b chore(core): re-enable split pbs for u128 2023-04-25 17:42:40 +02:00
Arthur Meyre
58b530f40b chore(doc): fix docstring ref 2023-04-25 17:42:40 +02:00
Arthur Meyre
689ad195f3 refactor(integer): remove usage of Mutex for determinism 2023-04-25 15:25:51 +02:00
sarah el kazdadi
4a1eda25d3 fix(split): fix split pbs backward conversion 2023-04-24 16:08:23 +02:00
Arthur Meyre
af936df064 chore(core): change rng tests to better avoid false failures
- we still check we generate non zero values but add retry conditions or
have less stringent checks, to allow some values to be zero for example as
it's a valid value that can be generated
- each test suite (test and doctest) for these tests ran 1000 times without
failure
2023-04-24 13:26:35 +02:00
dependabot[bot]
0233a69ea6 chore(deps): bump JS-DevTools/npm-publish from 1.4.3 to 2.0.0
Bumps [JS-DevTools/npm-publish](https://github.com/JS-DevTools/npm-publish) from 1.4.3 to 2.0.0.
- [Release notes](https://github.com/JS-DevTools/npm-publish/releases)
- [Changelog](https://github.com/JS-DevTools/npm-publish/blob/main/CHANGELOG.md)
- [Commits](0f451a9417...0be441d808)

---
updated-dependencies:
- dependency-name: JS-DevTools/npm-publish
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-24 12:56:59 +02:00
Arthur Meyre
f72a6ec835 chore(doc): fix typo 2023-04-24 09:30:16 +02:00
David Testé
25a2586eae chore(ci): publish tfhe release on-demand
This will perform on-demand release publication.
It will publish on the following channels:
 * crates.io
 * web and node package on npmjs
2023-04-21 14:39:36 +02:00
Arthur Meyre
c112a43a63 chore(core): add more sanity checks on RNG 2023-04-21 14:36:14 +02:00
Arthur Meyre
2813812380 fix(core): fix rng 2023-04-21 14:36:14 +02:00
tmontaigu
84a6036789 feat(boolean): add BooleanEngine::replace_thread_local
This new associated function allows replacing
the engine used in the thread.
2023-04-20 15:16:13 +02:00
David Testé
658368d0b6 chore(ci): create dummy release workflow
This is done to be able to test the effective workflow
implementation in a development branch.
2023-04-20 10:26:31 +02:00
Arthur Meyre
9368049adf chore(core): disable split pbs128 2023-04-19 18:38:22 +02:00
David Testé
5e8ca0b52c chore(ci): fix decomposition basis and add bit size to params
Decomposition basis wasn't correctly set to handle CRT. Now it uses
a Vec that would be displayed as a string in the database.
In addition, the bit size has been added to ease comparisons between
the various parameter sets in Grafana.
2023-04-19 17:58:22 +02:00
Arthur Meyre
605cd5b3b0 chore(doc): updated benchmarks for min to reflect the fix done to min/max 2023-04-19 16:56:46 +02:00
David Testé
4bfe9c22d4 chore(ci): remove unused env variable in boolean benchmarks 2023-04-19 16:28:13 +02:00
Arthur Meyre
1c0b36c672 chore(bench): only run avx512 benches 2023-04-19 09:23:01 +02:00
Arthur Meyre
7dccb01a8d fix(integer): fix mul correctness
- update benches accordingly
2023-04-19 09:23:01 +02:00
Arthur Meyre
7bff348367 chore(bench): more multi-bit bench params 2023-04-19 09:03:58 +02:00
tmontaigu
74a5a278b6 fix(hlapi): use correct number of blocks for FheUint32
The FheUint32 was wrongly defined as being 32 blocks of 2 bits
when it should have been 16 blocks.
2023-04-18 18:40:39 +02:00
tmontaigu
426ced3295 feat(hlapi): add trivial encryptions 2023-04-18 15:07:06 +02:00
tmontaigu
7af5fcc7eb feat(integer): add trivial encryption 2023-04-18 15:07:06 +02:00
Arthur Meyre
12d9947149 chore(integer): add non regression test for scalar mul fix 2023-04-18 10:46:19 +02:00
dependabot[bot]
7c54896e68 chore(deps): bump actions/checkout from 3.5.0 to 3.5.2
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.5.0 to 3.5.2.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](8f4b7f8486...8e5e7e5ab8)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-04-17 11:17:58 +02:00
J-B Orfila
04533dedfe chore(doc): fix typo 2023-04-14 18:30:30 +02:00
J-B Orfila
d01be35557 chore(doc): fix TOML 2023-04-14 18:30:30 +02:00
J-B Orfila
9fc32e2f52 chore(doc): fix dead links 2023-04-14 13:37:03 +02:00
Arthur Meyre
e1e78b8b9d chore(integer): restore empty carry check for default comparator tests
- only extract-assign the message instead of doing a full propagate, as carries
are supposed to be zero (though the degree will have grown)
2023-04-14 13:34:35 +02:00
J-B Orfila
a2384e0d1f chore(doc): last fixes 2023-04-13 14:38:42 +02:00
J-B Orfila
37da2f1f1e chore(doc): bench integers added 2023-04-13 14:38:42 +02:00
J-B Orfila
8c775e5a27 chore(doc): add default benches 2023-04-13 14:38:42 +02:00
J-B Orfila
43ba7e103d chore(doc): 0.2 doc 2023-04-13 14:38:42 +02:00
Arthur Meyre
448e634748 fix(integer): fix scalar mul bug when representing integers > 64 bits
- a product was overflowing, we now compute a progressive division with
the same effect and stop once we reach zero to limit the number of
generated tasks
2023-04-13 13:26:48 +02:00
Arthur Meyre
6268752ac9 fix(integer): fix radix wopbs table size issue 2023-04-13 11:03:33 +02:00
David Testé
e0ed2d91c6 chore(ci): add shortint default ops to benchmarks 2023-04-12 19:11:15 +02:00
Arthur Meyre
fef389e002 chore(core): more reasonable LWE sub test
- otherwise we are just checking that x.wrapping_sub(x) == 0
2023-04-12 16:21:20 +02:00
Arthur Meyre
ae30f7c086 chore(bench): use clean inputs for default ops bench
- by design default ops are made to work best on clean CTs
2023-04-12 15:48:00 +02:00
Arthur Meyre
3f719a30f6 chore(tfhe): update check toolchain 2023-04-12 15:47:46 +02:00
tmontaigu
d28880ac30 chore(makefile): allow passing cargo profile
This allows invoking the Makefile with a cargo profile,
e.g.:
- `make CARGO_PROFILE=devo build_integer`
- `make CARGO_PROFILE=dev build_integer`
- `make CARGO_PROFILE=release build_integer`

By default, the release profile is still used.
2023-04-12 12:39:54 +02:00
Arthur Meyre
ca9cdc0e73 chore(tfhe): add fpcc target to have a fast pcc locally 2023-04-12 11:21:10 +02:00
Arthur Meyre
f768e62d89 refactor(tfhe): add support for power of 2 q for LWE linalg + KS + PBS 2023-04-11 23:01:25 +02:00
tmontaigu
ee96a0ff18 chore(hlapi): use 'default' ops 2023-04-11 21:56:01 +02:00
J-B Orfila
ee944b3129 chore(ci): add default op 2023-04-11 21:35:56 +02:00
David Testé
672f855770 chore(ci): make curl based job step fails upon 4xx or 5xx response 2023-04-11 21:35:56 +02:00
David Testé
362992a4ba chore(ci): benchmark only fastest integer operations
This is done to speed up execution and to avoid having the benchmark
job running for more than 6 hours in GitHub Actions. The selected
operations set gathers the ones that most users would look for, i.e.
the fastest and smartest ones.
2023-04-11 21:35:56 +02:00
David Testé
2b24eb304d chore(ci): record benchmarks parameters to be stored in database
This is done to comply with the new Zama benchmark standard.
The exhaustive parameters list is stored so that, once it's parsed and sent
to the database, one can easily filter results on such parameters in the
visualization tool.
2023-04-11 21:35:56 +02:00
Arthur Meyre
b484b8a851 chore(core): add multi bit PBS bench structure 2023-04-11 21:35:56 +02:00
Arthur Meyre
6dea738725 chore(integer): fix default scalar_mul missing full propagate 2023-04-11 21:29:12 +02:00
Arthur Meyre
3bb342879f chore(tfhe): temporarily disable integer 3_3 tests 2023-04-11 21:29:12 +02:00
Jérémy Zaccherini
9f024e2dac chore(tfhe): update design and links of the README.md 2023-04-11 21:28:44 +02:00
tmontaigu
190b483d23 chore(tfhe): rename typed_api to high_level_api
high_level_api makes it easier to understand
what this API brings (at least more than typed_api does)
and is the term used in the documentation
2023-04-11 20:57:36 +02:00
Arthur Meyre
e799d240a7 chore(c_api): allow to build in a simple cargo command, requires nightly 2023-04-11 19:51:51 +02:00
Arthur Meyre
16596137c1 chore(integer): disable smart_add for params 1_1 which is very slow 2023-04-11 19:05:17 +02:00
Arthur Meyre
03cd7ef15a feat(integer): add default scalar shift ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
4cda0a7211 feat(integer): add default sub op 2023-04-11 19:05:17 +02:00
Arthur Meyre
9b668c1d50 feat(integer): add default scalar ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
dc4d9c7968 feat(integer): add default neg op 2023-04-11 19:05:17 +02:00
Arthur Meyre
e3e7abd652 feat(integer): add default mul ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
4265fbe67e feat(integer): add "default" radix_parallel comparison ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
337400ce3d feat(integer): add "default" radix_parallel bitwise ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
be650d8e6b feat(integer): add "default" radix_parallel add ops 2023-04-11 19:05:17 +02:00
Arthur Meyre
47604a6297 feat(shortint): add "default" sub operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
95d6fc5b1b feat(shortint): add "default" shift operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
19a6855b82 chore(shortint): add default scalar ops tests 2023-04-11 19:01:12 +02:00
Arthur Meyre
f894c33bfd feat(shortint): add "default" scalar sub operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
6578aff8a4 feat(shortint): add "default" scalar mul operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
9096c62f32 feat(shortint): add "default" scalar add operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
22f186af17 feat(shortint): add "default" neg operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
7820523d1f feat(shortint): add "default" mul ops 2023-04-11 19:01:12 +02:00
Arthur Meyre
c0386c7e54 feat(shortint): add "default" div and mod operations 2023-04-11 19:01:12 +02:00
Arthur Meyre
1ea73a68c4 feat(shortint): add "default" comp_op 2023-04-11 19:01:12 +02:00
Arthur Meyre
6a02ae04e1 feat(shortint): add "default" bitwise ops 2023-04-11 19:01:12 +02:00
Arthur Meyre
becd11b45f feat(shortint): add "default" add and add_assign operators 2023-04-11 19:01:12 +02:00
Arthur Meyre
366964f1e6 feat(shortint): add function to check if a ciphertext has an empty carry 2023-04-11 19:01:12 +02:00
Arthur Meyre
32f8561af1 chore(tfhe): add devo profile to be able to iterate faster on tests 2023-04-11 19:01:12 +02:00
tmontaigu
063ad26b9e feat(tfhe): add CompressedPublicKey 2023-04-11 18:04:42 +02:00
tmontaigu
dba18a889a feat(hlapi): add 32, 64, 128 bits types 2023-04-11 16:58:32 +02:00
tmontaigu
0f5e1f0141 feat(c_api): add a C API of the high level API
One notable change is that, since this C API
relies a lot on macro_rules! to be generated,
we have to activate cbindgen's `expand` option,
which will use cargo-expand to expand macros.

However, this means we can't call cbindgen from the build.rs,
as it seems to lead to an infinite loop
(build.rs calls cbindgen which calls cargo-expand which calls build.rs...)

So we call the cbindgen binary via the Makefile.
2023-04-11 13:41:18 +02:00
J-B Orfila
d4c7aff90b fix(integer): fix unchecked_add in unchecked_mul 2023-04-07 15:55:08 +02:00
Arthur Meyre
1d9f8c57da chore(core): fix multi bit parameters 2023-04-07 11:55:33 +02:00
J-B Orfila
aa58748d33 refactor(integer): simplify PublicKey API 2023-04-07 11:55:33 +02:00
tmontaigu
412463ed27 chore(shortint): remove the Default impl for Parameters
The rationale behind this is that `shortint::Parameters::default()`
does not convey how many bits of message
and carry this parameter set provides, and so might lead to
errors/confusion.

Instead, users will be forced to use the parameter name, like
`PARAM_MESSAGE_2_CARRY_2`, which is less ambiguous.

This is obviously a breaking change.
2023-04-07 10:26:33 +02:00
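For reference, a minimal sketch of what a call site looks like after this change, assuming the `tfhe::shortint` prelude exposes `gen_keys` and the `PARAM_MESSAGE_2_CARRY_2` constant mentioned above (exact paths and signatures may differ between versions):

```rust
// Sketch only: the parameter set is now always spelled out explicitly,
// `shortint::Parameters::default()` no longer exists.
use tfhe::shortint::prelude::*;

fn main() {
    let (client_key, server_key) = gen_keys(PARAM_MESSAGE_2_CARRY_2);

    let ct_a = client_key.encrypt(3);
    let ct_b = client_key.encrypt(2);
    let ct_res = server_key.unchecked_add(&ct_a, &ct_b);

    let clear: u64 = client_key.decrypt(&ct_res);
    assert_eq!(clear, 5);
}
```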
sarah el kazdadi
72e7f16179 feat(core): implement 128bit pbs 2023-04-06 17:14:11 +02:00
Arthur Meyre
a1fcfcc55e chore(core): lower the noise in multibit test to avoid bad decryptions 2023-04-06 15:50:03 +02:00
Arthur Meyre
5ede4d6b0c chore(core): reverse the order in which we encrypt KS levels
- allows avoiding a reversal of the iterator, potentially improving cache
access during a keyswitch

BREAKING CHANGE: the keyswitch key level order has been reversed

TODO: fix the mismatch between DecompositionTerm and DecompositionIter for
the meaning of a decomposition level, see
https://github.com/zama-ai/tfhe-rs-internal/issues/72
2023-04-06 14:47:29 +02:00
tmontaigu
9430e6dcf8 chore(integer): annotate decrypt_radix type in tests
When working on the integer part of the crate,
if you introduced a compile error (as is common when working things out),
Rust's type inference would not fully work and a call to
`let dec = cks.decrypt_radix(&ctxt);` would fail to deduce the type
of `dec`.

This resulted in many errors in the compiler output about
"type annotations needed", requiring you to scroll up a fair amount
to see the error messages you actually care about.

This commit adds these missing type annotations so the errors won't
appear, leaving less noise.
2023-04-06 10:56:19 +02:00
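The inference problem described above is generic Rust behaviour, so it can be reproduced without the crate itself. In the sketch below, `ClientKey` is a hypothetical stand-in (not the real integer client key); only the shape of `decrypt_radix` matters:

```rust
/// Hypothetical stand-in: decryption is generic over the recovered clear
/// type, like the real `decrypt_radix`.
struct ClientKey;

impl ClientKey {
    fn decrypt_radix<T: From<u8>>(&self, _ct: &[u64]) -> T {
        T::from(5u8)
    }
}

fn main() {
    let cks = ClientKey;
    let ctxt = vec![0u64; 4];

    // Without an annotation, the type of `dec` must be inferred from later
    // usage; if that usage fails to compile, rustc reports
    // "type annotations needed" here instead of the error you care about.
    // let dec = cks.decrypt_radix(&ctxt);

    // With the annotation added by this commit, the line stands on its own.
    let dec: u64 = cks.decrypt_radix(&ctxt);
    assert_eq!(dec, 5);
}
```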
tmontaigu
74f47e3655 feat(tfhe): add compressed ciphertexts in HL API 2023-04-05 16:55:28 +02:00
tmontaigu
0d57da7608 chore(tfhe): add typed_api test in ci 2023-04-05 16:48:42 +02:00
tmontaigu
25a43181e0 doc(tfhe): add high level api docs 2023-04-05 16:14:43 +02:00
tmontaigu
b8e64377fa doc(tfhe): add integer example + mention --release 2023-04-04 13:45:01 +02:00
David Testé
c206aa89b8 chore(ci): test core_crypto with avx512 2023-04-04 09:03:27 +02:00
Arthur Meyre
f39e318019 chore(core): update parameters for multi-bit PBS tests 2023-04-03 17:41:59 +02:00
Arthur Meyre
40b9497dbf chore(core): remove feature gate for multi-bit PBS 2023-04-03 17:41:59 +02:00
tmontaigu
e1cfb0e3f7 doc(integer): reorganize user documentation 2023-04-03 13:32:28 +02:00
tmontaigu
a410aaaed6 feat(typed_api): add missing Serialize/Deserialize
The "top level" key types (ClientKey, ServerKey and PublicKey)
were missing serde::{Serialize, Deserialize} implementations
2023-03-31 18:02:54 +02:00
tmontaigu
d7a4e87efb feat(typed_api): plug choice of big/small ciphertext 2023-03-31 16:29:34 +02:00
tmontaigu
3bc1536fa6 feat(integer): improve RadixClientKey 2023-03-31 16:29:33 +02:00
Arthur Meyre
6633496e7b chore(tfhe): remove mut keyword for cks and sks that don't need them 2023-03-30 16:19:17 +02:00
tmontaigu
14f7ca7492 feat(integer): plug shortint big/small in integer 2023-03-30 12:06:56 +02:00
David Testé
accd3cfb3f chore(ci): add windows as target build platform 2023-03-29 15:44:59 +02:00
Arthur Meyre
7f050c0fe9 chore(core_crypto): enable the choice of a single fixed fft algorithm 2023-03-29 12:48:49 +02:00
tmontaigu
c4769cbc0f fix(js): bump nvm 2023-03-27 12:17:07 +02:00
tmontaigu
1633eb573f feat(integer): parallelized bitwise operations 2023-03-27 12:17:07 +02:00
sarah el kazdadi
10174cdac6 feat(fft): update concrete-fft to 0.2.1 2023-03-27 11:00:37 +02:00
tmontaigu
475b838943 chore(makefile): add --all-targets switch to build command
This adds the --all-targets flag to the cargo build commands
invoked by the Makefile so that, when running
`make build_boolean`, the lib, tests, benches and examples are built.

See `cargo help build`:

```
--all-targets
    Build all targets. This is equivalent to specifying --lib --bins --tests --benches --examples.
```
2023-03-27 10:59:49 +02:00
dependabot[bot]
1bdc447915 chore(deps): bump actions/checkout from 3.4.0 to 3.5.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](24cb908017...8f4b7f8486)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-27 09:49:45 +02:00
Arthur Meyre
a04d68f1fb feat(shortint): add support for small LWE key encryption 2023-03-23 16:45:39 +01:00
tmontaigu
42b569bcd7 feat(tfhe): add typed API
the `typed_api` module is basically the concrete 0.2 codebase
with modifications
2023-03-23 11:49:50 +01:00
tmontaigu
8999ea3766 chore(integer): add getters to client keys 2023-03-22 13:14:17 +01:00
tmontaigu
b0d059eef1 chore(boolean): add missing PublicKey derives 2023-03-22 13:14:16 +01:00
Arthur Meyre
64f9dc0813 refactor(tfhe): rename with_z function to with_correcting_term 2023-03-22 11:40:58 +01:00
David Testé
52afc382a0 fix(integer): stop decomposing before overflow
This only happens on binary scalar operations over 64 bits of
precision.
2023-03-22 10:43:57 +01:00
tmontaigu
1e94d80044 fix(shortint): correct incoherences in bivariate pbs shifts
A bivariate PBS is a univariate PBS where we encode
the lhs and rhs values into a single value:
`univariate_value = (lhs * shift) + rhs`

Some places shifted the lhs by the parameter's message modulus
while others shifted by rhs.degree + 1; this could lead to incoherences
and wrong results in some cases.

This commit adds a `BivariateAccumulator` that stores the shift
value that was used to create the LUT, to avoid said incoherences.

Also, the bivariate function family that expected
a univariate closure `Fn(u64) -> u64` will now expect a
bivariate closure `Fn(u64, u64) -> u64`, so that it is less
error prone: the user does not need to figure out the
shift to be used.
2023-03-21 13:25:48 +01:00
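The packing described above can be checked with plain integers. A small sketch (plain `u64` arithmetic, not the actual shortint LUT machinery) showing why the same shift must be used to build and to apply the lookup table:

```rust
/// Pack (lhs, rhs) into a single value: univariate_value = lhs * shift + rhs.
/// `rhs` must be strictly smaller than `shift` for unpacking to be exact.
fn pack(lhs: u64, rhs: u64, shift: u64) -> u64 {
    lhs * shift + rhs
}

/// Turn a bivariate function into a univariate one while remembering the
/// shift used for packing (the role played by `BivariateAccumulator`).
fn to_univariate(f: fn(u64, u64) -> u64, shift: u64) -> impl Fn(u64) -> u64 {
    move |packed| f(packed / shift, packed % shift)
}

fn main() {
    // Toy "message modulus" of 4, i.e. 2 bits of message per value.
    fn add_mod4(lhs: u64, rhs: u64) -> u64 {
        (lhs + rhs) % 4
    }

    let shift = 4u64; // must match on the packing and unpacking sides
    let lut = to_univariate(add_mod4, shift);

    let (lhs, rhs) = (3u64, 2u64);
    assert_eq!(lut(pack(lhs, rhs, shift)), add_mod4(lhs, rhs));

    // Packing with one shift but unpacking with another (the incoherence
    // this commit fixes) produces a wrong result:
    let wrong = to_univariate(add_mod4, shift + 1);
    assert_ne!(wrong(pack(lhs, rhs, shift)), add_mod4(lhs, rhs));
}
```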
Arthur Meyre
68fa6b78a4 feat(tfhe): introduce experimental feature approach for multi_bit_pbs 2023-03-20 16:47:33 +01:00
Arthur Meyre
75f05c0f3a feat(core): add multi-bit BSK generation and PBS threaded implementation 2023-03-20 16:47:33 +01:00
Arthur Meyre
bf6f699e8c refactor(fft): update fft code to use FourierPolynomialSize 2023-03-20 16:47:33 +01:00
Arthur Meyre
d3b3c5ab21 chore(core): fix ciphertext typo 2023-03-20 16:47:33 +01:00
Arthur Meyre
ceb26def05 feat(core): add constant GGSW ciphertext decryption 2023-03-20 16:47:33 +01:00
Arthur Meyre
638f210555 chore(core): fix typo 2023-03-20 16:47:33 +01:00
tmontaigu
1294727b11 chore(core_crypto): fix overflows in tests
These overflows appeared in debug builds,
and are easily fixed by using explicit wrapping operations
or correct values.
2023-03-20 12:44:15 +01:00
Arthur Meyre
e954247f1b chore(ci): CI at the speed of light
- use a 128 vcpu instance
- update script to have a no compromise test run
- update Makefile to be able to run the "no compromise" CI mode
2023-03-20 11:24:37 +01:00
dependabot[bot]
8d9ba2a1f9 chore(deps): bump actions/checkout from 3.3.0 to 3.4.0
Bumps [actions/checkout](https://github.com/actions/checkout) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](ac59398561...24cb908017)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-03-20 10:16:16 +01:00
sarah el kazdadi
34fc96319d fix(tfhe): fix faulty comparison in avx512 code 2023-03-17 16:14:41 +01:00
Arthur Meyre
13ad7d5468 chore(ci): change ubuntu mirror urls as the original ones are too slow 2023-03-16 17:18:08 +01:00
Arthur Meyre
9151eb72b3 chore(ci): silence skipped M1 tests due to cla-bot label 2023-03-16 17:17:57 +01:00
Rui LOPES
8d8b8ab511 fix(build): remove -- flag from make targets that do not use wasm-pack 2023-03-16 17:09:15 +01:00
Rui LOPES
0c30e7525a fix(build): pass the --features arguments to the wasm-pack command in Makefile js targets 2023-03-16 17:09:15 +01:00
tmontaigu
385c907807 fix(shortint): remove wrong large_mod in cmp operations 2023-03-13 13:22:02 +01:00
Arthur Meyre
6266d18211 chore(tfhe): fix typos 2023-03-13 09:54:41 +01:00
tmontaigu
0a39f369d2 fix(integer): make radix encryption / decryption work on big endian 2023-03-10 14:09:13 +01:00
tmontaigu
bb6663cfe5 chore(integer): simplify radix decryption 2023-03-09 15:44:33 +01:00
tmontaigu
06713fa42d fix(integer): make radix encryption work on big endian 2023-03-09 15:44:33 +01:00
tmontaigu
b59afc7eee feat(integer): add PublicKey 2023-03-09 15:44:33 +01:00
tmontaigu
2ede9fb852 chore(integer): move u256 into its own mod 2023-03-09 15:44:33 +01:00
tmontaigu
ccf21c1716 feat(integer): add compressed ciphertexts 2023-03-09 15:44:33 +01:00
tmontaigu
f3dc9e52f6 feat(integer): add min,max and comparisons ops 2023-03-09 15:44:33 +01:00
tmontaigu
195efaf09c chore(integer): refactor benches 2023-03-09 15:44:32 +01:00
tmontaigu
3c9325f939 feat(tfhe): arbitrary sized integer encryption 2023-03-08 09:47:19 +01:00
aquint-zama
a542b64dea chore(docs): minor fixes 2023-03-07 15:53:00 +01:00
Arthur Meyre
e8a560b887 refactor(integer): rewrite extract_bits to avoid ciphertext copies 2023-03-07 10:08:53 +01:00
Arthur Meyre
14da0ca001 feat(integer): add concrete-integer as integer module 2023-03-07 10:08:53 +01:00
Arthur Meyre
5d8a138c69 chore(tfhe): update copyright year 2023-03-03 15:44:31 +01:00
David Testé
10b0ff7f8b chore(ci): split sync repo url into several secrets
Enforcing the usage of fine-grained tokens means that a token always
has an expiration date, so it must be updated from time to time.
The SYNC_DEST_REPO secret would have contained such a fine-grained
token. By splitting this secret and using
CONCRETE_ACTIONS_TOKEN there is no need to update SYNC_DEST_REPO
each time the token is updated.
2023-03-03 10:08:13 +01:00
David Testé
2279da604b chore(ci): benchmark more operations in shortint
The following operations have been added:
 * unchecked_neg
 * unchecked_div
 * unchecked_greater
 * unchecked_less
 * unchecked_equal
 * unchecked_scalar_div
 * unchecked_scalar_mod
 * unchecked_scalar_left_shift
 * unchecked_scalar_right_shift
2023-03-02 14:41:00 +01:00
David Testé
f21fb9068c chore(ci): benchmark some operations with more crypto parameters 2023-03-01 14:21:24 +01:00
Arthur Meyre
87b9431881 chore(thfe): add integer workflow to make it available for slab-ci 2023-03-01 08:58:24 +01:00
tmontaigu
c2c43a2313 refactor(shortint): reduce memory usage of buffers
Replace the BTreeMap of buffers with a Memory struct
that contains a Vec that is resized/sliced and converted
to views, akin to what already exists in the boolean module.

This has the advantage of making the memory held by the engine smaller
when using multiple keys.
Now, the memory held will be the maximum buffer size needed across all the parameters used,
instead of the sum of the buffer sizes of all the parameters used.
2023-02-28 18:06:45 +01:00
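A hypothetical sketch of the approach described above (the `Memory` struct and `as_buffers` method are illustrative names, not the actual TFHE-rs types): one backing `Vec` that only ever grows to the largest requirement seen so far and is re-sliced into views on demand:

```rust
/// Illustrative buffer pool: one flat allocation, grown to the maximum
/// requirement ever requested, then sliced into the views a computation needs.
struct Memory {
    storage: Vec<u64>,
}

impl Memory {
    fn new() -> Self {
        Self { storage: Vec::new() }
    }

    /// Return two disjoint scratch buffers of the requested sizes.
    /// The allocation only grows; switching between parameter sets re-uses
    /// the same memory instead of keeping one buffer per key in a map.
    fn as_buffers(&mut self, ks_len: usize, pbs_len: usize) -> (&mut [u64], &mut [u64]) {
        let needed = ks_len + pbs_len;
        if self.storage.len() < needed {
            self.storage.resize(needed, 0);
        }
        let (ks, rest) = self.storage.split_at_mut(ks_len);
        (ks, &mut rest[..pbs_len])
    }
}

fn main() {
    let mut mem = Memory::new();
    let (ks, pbs) = mem.as_buffers(512, 2048); // first parameter set
    assert_eq!((ks.len(), pbs.len()), (512, 2048));
    let (ks, pbs) = mem.as_buffers(256, 1024); // second set, no extra growth
    assert_eq!((ks.len(), pbs.len()), (256, 1024));
}
```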
tmontaigu
9db7a42f8b fix(shortint): use correct lwe dimension in key id
In the KeyId that we use to identify the buffers needed
for the bootstrap/keyswitch, we were storing the LWE dimension
of the output of an LWE bootstrap.

However, what is stored and used as a value of the BTreeMap is a buffer
meant to store the output of an LWE keyswitch.

The fix is to store the output LWE keyswitch dimension as part
of the KeyId instead, as it is the correct one.
2023-02-28 18:06:45 +01:00
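A minimal illustration of the fix with hypothetical types (this is not the actual shortint engine code): the map key must describe what the stored buffer is sized for, i.e. the keyswitch output LWE dimension:

```rust
use std::collections::BTreeMap;

/// Hypothetical key: after the fix it records the *keyswitch* output LWE
/// dimension, because that is what the buffer stored in the map is sized for.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct KeyId {
    ks_output_lwe_dimension: usize,
}

fn main() {
    let mut buffers: BTreeMap<KeyId, Vec<u64>> = BTreeMap::new();

    let id = KeyId { ks_output_lwe_dimension: 742 };
    let buf = buffers
        .entry(id)
        .or_insert_with(|| vec![0u64; id.ks_output_lwe_dimension + 1]);
    assert_eq!(buf.len(), 743);
}
```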
David Testé
a47d8e3ee1 chore(ci): reduce pbs benchmark execution duration
When using a criterion sample size of 5000, the benchmark duration
for PBS using shortint can be very long (3620s for
MESSAGE_4_CARRY_4). Switching to a sample size of 2000 would cut
down all of the benchmark durations by a factor of at least 2.
2023-02-24 15:47:00 +01:00
David Testé
b2407d530e chore(ci): provide hardware name for benchmarks with avx512
This also prints a human-friendly error from the parser if the hardware
cannot be found in the product list.
2023-02-24 12:28:53 +01:00
David Testé
97830e934a chore(ci): compute throughput on boolean and shortint benchmarks 2023-02-24 11:41:20 +01:00
David Testé
91d04d97e9 chore(ci): add aws profile for pbs benchmarks using slab 2023-02-24 11:41:20 +01:00
David Testé
a228f24abc chore(ci): make cli argument --throughput optional 2023-02-24 11:41:20 +01:00
David Testé
8ee7b14abe chore(ci): benchmark pbs with cost per ms and per dollar spent
Here we benchmark a fixed number of PBS with boolean and shortint
flavors on an AWS EC2 instance. Once measurements are done, we compute
the number of operations per millisecond and also operations per
dollar we can perform for a given set of cryptographic parameters
and EC2 instance type. Data are then sent to Slab, which in turn sends
them to a database to be plotted in Grafana.
2023-02-23 18:31:23 +01:00
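The derived metrics described above reduce to simple arithmetic. A sketch with made-up numbers (the PBS count, elapsed time, and instance price below are placeholders, not measured values):

```rust
fn main() {
    // Placeholder measurements: how many PBS were run and how long it took.
    let pbs_count = 100_000u64;
    let elapsed_ms = 650_000.0_f64;

    // Placeholder on-demand price of the EC2 instance type, in $/hour.
    let instance_price_per_hour = 5.0_f64;

    // Operations per millisecond.
    let ops_per_ms = pbs_count as f64 / elapsed_ms;

    // Operations per dollar: ops/ms times the number of ms one dollar buys.
    let ms_per_dollar = 3_600_000.0 / instance_price_per_hour;
    let ops_per_dollar = ops_per_ms * ms_per_dollar;

    println!("{ops_per_ms:.3} PBS/ms, {ops_per_dollar:.0} PBS/$");
}
```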
Arthur Meyre
85dc0f0164 fix(core_crypto): correct PFPKSK list serial generation
- add an equivalence keygen test between serial and parallel, as we now almost
exclusively use the parallel version ourselves
2023-02-21 17:06:10 +01:00
aquint-zama
c6eb6da0a0 chore(doc): fix shortint params example 2023-02-21 16:48:57 +01:00
sarah el kazdadi
acfe8697b7 feat(core): speed up karatsuba multiplication 2023-02-14 10:24:22 +01:00
Arthur Meyre
8c4ecb805f chore(tfhe): bump criterion version to remove outdated dep from dep tree 2023-02-10 15:57:05 +01:00
Arthur Meyre
0ad2d8cef2 chore(tfhe): upgrade csprng version to avoid indirect deprecated aes dep 2023-02-09 17:12:58 +01:00
Arthur Meyre
1931315f73 chore(ci): change docker image mirrors for JS test for faster CI 2023-02-08 11:07:51 +01:00
Arthur Meyre
af865f8d75 refactor(polynomials): plug karatsuba algorithm for polynomial mul
- remove the key cache, as generating keys is faster and incurs fewer issues for cache
coherency and re-use
2023-02-08 11:07:51 +01:00
Arthur Meyre
f8f6323ad4 chore(ci): re-organize tests a bit for better parallelism usage 2023-02-08 11:07:51 +01:00
Arthur Meyre
b29008830c refactor(core): implement missing traits for u128/i128 to make them usable
- enables the use of u128 in ciphertexts
- add encryption test based on shortint 2_2 params
2023-02-06 11:08:04 +01:00
Arthur Meyre
a43dbebd1b chore(tfhe): TFHE-rs uses GATs, so needs rust >= 1.65 2023-02-02 17:34:37 +01:00
Arthur Meyre
d224821aaa chore(tfhe): update testing script to allow custom RUSTFLAGS 2023-02-02 17:34:08 +01:00
Arthur Meyre
d24896ed09 chore(doc): fix code example where useless mut were used 2023-02-02 17:33:53 +01:00
tmontaigu
106624048c refactor(all): only depend on bincode when needed 2023-02-01 10:03:41 +01:00
tmontaigu
5849cc9e7d refactor(all): derive serde::{Serialize, Deserialize}
This replaces our manual implementations of serde's
Serialize and Deserialize traits with 'derives'.

The manual implementations were needed when using concrete-core,
but as tfhe-rs does not use concrete-core's engines we can
simply derive the implementations.
2023-02-01 10:03:41 +01:00
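A generic before/after sketch of the change (the entity below is hypothetical, not an actual TFHE-rs type; serde and bincode are assumed as dependencies):

```rust
use serde::{Deserialize, Serialize};

// Before: a hand-written `impl Serialize` / `impl<'de> Deserialize` pair was
// needed to bridge concrete-core types.
// After: plain data-holding entities can simply derive both traits.
#[derive(Serialize, Deserialize, Clone, Debug, PartialEq)]
struct LweCiphertextData {
    body_and_mask: Vec<u64>,
}

fn main() -> bincode::Result<()> {
    let ct = LweCiphertextData { body_and_mask: vec![1, 2, 3] };
    let bytes = bincode::serialize(&ct)?;
    let back: LweCiphertextData = bincode::deserialize(&bytes)?;
    assert_eq!(ct, back);
    Ok(())
}
```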
Arthur Meyre
02e6d3c955 feat(c_api): expose create_trivial for shortint in C api 2023-01-31 11:22:06 +01:00
Arthur Meyre
3acaa2e242 chore(ci): make no_tfhe_typo mac friendly 2023-01-31 10:18:35 +01:00
Arthur Meyre
e293dc2bc1 chore(tfhe): update check toolchain after new stable rust release 2023-01-31 10:18:35 +01:00
Arthur Meyre
d9e0220dce chore(shortint): update CI test cases 2023-01-30 17:00:10 +01:00
J-B Orfila
2539e3e0c7 fix(shortint): add degree management in KS-PBS 2023-01-30 11:49:28 +01:00
Arthur Meyre
28cacfca86 chore(doc): fix docstring add some links to methods in lwe_wopbs 2023-01-27 15:58:29 +01:00
Arthur Meyre
e894bb0b11 docs(core): add blind_rotate_assign doctest 2023-01-27 15:58:29 +01:00
Arthur Meyre
313ccf3014 feat(core): add add_external_product_assign 2023-01-27 15:58:29 +01:00
Arthur Meyre
305baa1a6b feat(core): expose the cmux operation 2023-01-27 15:58:29 +01:00
Arthur Meyre
357dee3197 feat(core): add conversion functions for GgswCiphertext 2023-01-27 15:58:29 +01:00
Arthur Meyre
6580d652bb chore(core): fix an import in lwe_bootstrap_key_conversion 2023-01-27 15:58:29 +01:00
Arthur Meyre
5db0584356 refactor(fft): rename new and add an Owned alias for fourier GGSW 2023-01-27 15:58:29 +01:00
J-B Orfila
b691bc9820 feat(core_crypto): lwe_sub 2023-01-26 09:55:07 +01:00
aquint-zama
0653c7c896 chore(doc): update README twitter badge
Twitter API closed to 3rd parties
see https://github.com/badges/shields/issues/8837
2023-01-24 18:33:20 +01:00
J-B Orfila
bd9e453615 fix(shortint): fix smart_mul_lsb conditions 2023-01-23 10:31:37 +01:00
Arthur Meyre
4673a6349e chore(tfhe): harden github actions versions, enable dependabot for GHA 2023-01-13 17:22:45 +01:00
aquint-zama
b63181b21a chore(doc): update cover image 2023-01-13 14:29:43 +01:00
Arthur Meyre
0ae2722729 chore(tfhe): update README 2023-01-13 09:21:32 +01:00
Arthur Meyre
5945a52eba feat(tfhe): add WASM and C API bindings and tests 2023-01-13 09:21:19 +01:00
Arthur Meyre
384850f7fa feat(boolean): add CompressedCiphertext 2023-01-13 09:21:19 +01:00
Arthur Meyre
4a88290a97 feat(shortint): add CompressedCiphertext 2023-01-13 09:21:19 +01:00
Arthur Meyre
62843a4ef6 feat(tfhe): add SeededLweCiphertext in core_crypto 2023-01-13 09:21:19 +01:00
J-B Orfila
f5653f551d doc(core_crypto): gitbook 2023-01-12 17:41:35 +01:00
Arthur Meyre
43670d7b15 docs(tfhe): add user docs for JS on WASM API and limitations in a tutorial 2023-01-12 10:49:07 +01:00
Arthur Meyre
97e2d96661 doc(tfhe): update PBS docstring to demonstrate seeded bsk decompression 2023-01-12 10:49:07 +01:00
Arthur Meyre
c90e0626f9 refactor(tfhe): update wopbs primitive docstring and arg order 2023-01-12 10:49:07 +01:00
Arthur Meyre
da9ae6a70d refactor(tfhe): move SeededLwePublicKey generation
- match the organization of other seeded/generation modules
- update module docstring to include Seeded entities where relevant
2023-01-12 10:49:07 +01:00
Arthur Meyre
c5dbbaa071 docs(core): update docstrings, add missing doctests for lwe_linear_algebra 2023-01-12 10:49:07 +01:00
Arthur Meyre
a0dae1c9ae docs(tfhe): updated user documentation and API documentation 2023-01-12 10:49:07 +01:00
Arthur Meyre
b2e3773c40 feat(tfhe): add CompressedServerKey to Boolean +C API +WASM API
- rename wasm functions to remove redundant boolean and shortint naming
- update C API tests for Boolean to include CompressedServerKey generation
and serde
2023-01-05 15:22:54 +01:00
Arthur Meyre
a66d377599 feat(shortint): add CompressedServerKey to shortint +C API +WASM API 2023-01-05 15:22:54 +01:00
Arthur Meyre
8b7b3d02b7 refactor(tfhe): change new method naming for secret keys
- new -> new_empty_key so that it's obvious the key will be empty
- add static methods on secret keys to easily generate them
2023-01-05 15:22:54 +01:00
Arthur Meyre
82b3d2154e refactor(tfhe): make the seeders module more ergonomic to use 2023-01-05 15:22:54 +01:00
Arthur Meyre
702360c03f chore(tfhe): correct docstrings 2023-01-05 15:22:54 +01:00
Arthur Meyre
d065e98888 chore(ci): rustdoc warnings as error 2023-01-05 15:22:54 +01:00
Arthur Meyre
7dee0a9202 chore(ci): sync tags from public to internal repo 2023-01-04 10:14:53 +01:00
Arthur Meyre
4a5be86cfa test(c_api): add public key serde in shortint test 2023-01-04 09:38:31 +01:00
Arthur Meyre
ccc41a89af refactor(core_crypto): add several useful structs to the prelude
- add main high level random generators as well as the underlying activated
byte random generator
- add SignedDecomposer which helps with rounding
2023-01-04 09:38:31 +01:00
Arthur Meyre
c9258e7515 chore(tfhe): add doc test for new_seeder 2023-01-04 09:38:31 +01:00
Arthur Meyre
00c31f4802 refactor(tfhe): move seeders module to core_crypto and add to prelude 2023-01-04 09:38:31 +01:00
Arthur Meyre
d09169d6bc chore(tfhe): rename scratch -> requirement
- renamed wopbs primitives which did not follow the naming convention
2023-01-04 09:38:31 +01:00
Arthur Meyre
729d019bc1 chore(tfhe): rename some primitives whose functionality changed 2023-01-04 09:38:31 +01:00
Arthur Meyre
823fb6d989 chore(tools): add .editorconfig 2023-01-04 09:38:31 +01:00
David Testé
0876d7fec0 chore(ci): measure and report key sizes used in benchmarks
Sizes of bootstrapping and key switching keys used in benchmarks are
measured and then sent to Slab to be stored in our benchmark
database.
2023-01-03 18:34:41 +01:00
Arthur Meyre
c302a4f871 chore(tfhe): fix thfe typo 2023-01-03 16:55:47 +01:00
Arthur Meyre
8f12073bce feat(tfhe): add SeededLweKeyswitchKey
- add generation equivalence test
2023-01-02 13:42:09 +01:00
Arthur Meyre
2614d6430a chore(tfhe): update check toolchain 2023-01-02 13:42:09 +01:00
Arthur Meyre
87c153423e feat(tfhe): add missing encryption functions for CompressedPublicKey 2023-01-02 13:42:09 +01:00
Arthur Meyre
c94922d6a2 feat(tfhe): add SeededGgswCiphertextList, SeededLweBootstrapKey 2023-01-02 13:42:09 +01:00
J-B Orfila
aeff001bf6 docs(crypto_api): add lwe_bootstrap_key gen doctest 2023-01-02 13:42:09 +01:00
Arthur Meyre
f3d1b1bc49 feat(tfhe): add SeededGgswCiphertext 2023-01-02 13:42:09 +01:00
Arthur Meyre
4e4b15a8be feat(tfhe): add SeededGlweCiphertextList 2023-01-02 13:42:09 +01:00
Arthur Meyre
268371fda6 feat(tfhe): add SeededGlweCiphertext 2023-01-02 13:42:09 +01:00
Arthur Meyre
d773d3e7ff feat(tfhe): add CompressedPublicKey for Shortint 2023-01-02 13:42:09 +01:00
Arthur Meyre
d2392e887f feat(tfhe): js tests, remove server key requirement for shortint PK 2023-01-02 13:42:09 +01:00
Arthur Meyre
6cf14a5161 feat(core): add SeededLwePublicKey 2023-01-02 13:42:09 +01:00
Arthur Meyre
ae76230bd9 feat(core): add SeededLweCiphertextList 2023-01-02 13:42:09 +01:00
Arthur Meyre
cbf846dea7 chore(docs): fix a clippy lint for docstrings 2023-01-02 13:42:09 +01:00
Arthur Meyre
952f70fdf9 chore(tfhe): rename lwe_linear_algebra algorithms 2023-01-02 13:42:09 +01:00
Arthur Meyre
914007383f chore(ci): fix shellcheck lints in workflows 2023-01-02 13:42:09 +01:00
Arthur Meyre
3fd6b0d917 chore(ci): update m1 workflow 2023-01-02 13:42:09 +01:00
Arthur Meyre
fd4139dadc chore(ci): target to check all targets (bench, test, etc.) for clippy lints 2023-01-02 13:42:09 +01:00
Arthur Meyre
5c81e04c0b docs(tfhe): add various docstrings
- add docstring for lwe_keyswitch
- add docstring for lwe_keyswitch_key_generation
- add docstring for lwe_secret_key_generation
2023-01-02 13:42:09 +01:00
Arthur Meyre
c6fb496ea1 chore(ci): restore boolean tests on CPU machine
- fix exit code of toolchain installation in case of failure
2023-01-02 13:42:09 +01:00
Arthur Meyre
d7226bcfb9 docs(tfhe): add docstrings for lwe_encryption 2023-01-02 13:42:09 +01:00
Arthur Meyre
f792cc2737 fix(tfhe): fix various docstring content and LweMask creation bug 2023-01-02 13:42:09 +01:00
Arthur Meyre
3a2434b5ff chore(tfhe): rename some buffers to avoid confusion about their usage 2023-01-02 13:42:09 +01:00
Arthur Meyre
712af5d2b9 docs(tfhe): add docstring for glwe_sample_extraction 2023-01-02 13:42:09 +01:00
Arthur Meyre
2bdad26a9a docs(tfhe): add PolynomialList docstrings 2023-01-02 13:42:09 +01:00
Arthur Meyre
bd1a5b9a87 docs(tfhe): add docstring for Polynomial 2023-01-02 13:42:09 +01:00
Arthur Meyre
ad59566621 fix(tfhe): make seeders module public 2023-01-02 13:42:09 +01:00
Arthur Meyre
62803dfb82 docs(tfhe): add docstring for glwe_secret_key_generation module 2023-01-02 13:42:09 +01:00
Arthur Meyre
913f1d517a docs(tfhe): add glwe encryption formal definitions and docstrings
- correct some an -> a
2023-01-02 13:42:09 +01:00
Arthur Meyre
e624a74871 chore(docs): fix GGSW docstring to have actual GlweSecretKey generation 2023-01-02 13:42:09 +01:00
Arthur Meyre
5d52a23c0b docs(tfhe): add link for GGSW encryption algorithm definition
- document helper function for ggsw encryption
2023-01-02 13:42:09 +01:00
Arthur Meyre
a6091682d1 docs(tfhe): docstring for Plaintext
- add more sensible bounds for Plaintext and add PlaintextRef and
PlaintextRefMut for a more homogeneous and less confusing dev experience
2023-01-02 13:42:09 +01:00
J-B Orfila
7e3cc2d6e9 docs(crypto_api): add ggsw encryption doctest 2023-01-02 13:42:09 +01:00
Arthur Meyre
0e64b38f30 docs(tfhe): docstring for LweSecretKey 2023-01-02 13:42:09 +01:00
Arthur Meyre
f0165e62d3 docs(tfhe): correct a -> an 2023-01-02 13:42:09 +01:00
Arthur Meyre
db2a7a4582 docs(tfhe): add disclaimer about parameters being toy example parameters 2023-01-02 13:42:09 +01:00
Arthur Meyre
0e1f54ef54 docs(tfhe): add docstrings for LwePublicKey 2023-01-02 13:42:09 +01:00
Arthur Meyre
44091cb038 docs(tfhe): docstring for LwePrivateFunctionalPackingKeyswitchKey 2023-01-02 13:42:09 +01:00
Arthur Meyre
3c6c90b0c5 docs(tfhe): docstring for LwePrivateFunctionalPackingKeyswitchKeyList 2023-01-02 13:42:09 +01:00
Arthur Meyre
c43d84491a docs(tfhe): add LweKeyswitchKey docstring
- fix method naming
2023-01-02 13:42:09 +01:00
Arthur Meyre
4ef7a73efe chore(tools): add tasks tools to escape latex equations in docs
- add all checks to pcc and run that in CI
2023-01-02 13:42:09 +01:00
Arthur Meyre
1a72c4a814 docs(tfhe): add GswCiphertext for formal definitions 2023-01-02 13:42:09 +01:00
Arthur Meyre
d8abb9c2b2 docs(tfhe): add docstrings for LweCiphertext 2023-01-02 13:42:09 +01:00
Arthur Meyre
740dee2267 docs(tfhe): add LweCiphertextList docstring 2023-01-02 13:42:09 +01:00
Arthur Meyre
387c025e90 docs(tfhe): add LweBootstrapKey docstrings
- update wording for `new` functions, the allocated vector is not empty.
2023-01-02 13:42:09 +01:00
Arthur Meyre
a0dee63a2f docs(tfhe): add docstring for GlweSecretKey
- update docstring to indicate useful functions to fill structs
- fix GlweMask docstring
2023-01-02 13:42:09 +01:00
Arthur Meyre
9d9c407f7f chore(tfhe): update wording to use imperative form in docstrings 2023-01-02 13:42:09 +01:00
Arthur Meyre
ff062a33f9 refactor(core): use from_le_bytes for gaussian RNG (see uniform RNG)
- avoids small allocations, uses std::mem::size_of for size
2023-01-02 13:42:09 +01:00
Arthur Meyre
e353af5a72 docs(tfhe): add GlweCiphertext documentation 2023-01-02 13:42:09 +01:00
Arthur Meyre
d274891948 chore(tfhe): finish GlweSize/PolynomialSize ordering consistency 2023-01-02 13:42:09 +01:00
Arthur Meyre
3129d18247 chore(ci): add test compilation checks 2023-01-02 13:42:09 +01:00
Arthur Meyre
59925e4273 docs(tfhe): add docstring for GlweCiphertextList
- uniformize orders of GlweSize and PolynomialSize arguments for GLWE-like
entities
2023-01-02 13:42:09 +01:00
Arthur Meyre
15864202d7 chore(tfhe): change update wording for in place random noise addition 2023-01-02 13:42:09 +01:00
Arthur Meyre
68dce4eeb8 chore(tfhe): change "in place" naming for "assign" following rust style 2023-01-02 13:42:09 +01:00
Arthur Meyre
390fffac88 docs(tfhe): add docstrings for GgswCiphertext, import formal definition 2023-01-02 13:42:09 +01:00
Arthur Meyre
1cb8aa026f chore(tfhe): misc fixes 2023-01-02 13:42:09 +01:00
Arthur Meyre
7db702cebf docs(core): bring back some doc strings for random generators 2023-01-02 13:42:09 +01:00
Arthur Meyre
583bfaa643 feat(tfhe): add karatsuba multiplication for polynomials 2023-01-02 13:42:09 +01:00
Arthur Meyre
1b3baf5635 docs(tfhe): update polynomial and slice algorithms naming
- update docstrings to be better rendered in html.
2023-01-02 13:42:09 +01:00
Arthur Meyre
0481fdadfb docs(tfhe): update name in module documentation 2023-01-02 13:42:09 +01:00
Arthur Meyre
3ce9017784 docs(tfhe): update entities documentation 2023-01-02 13:42:09 +01:00
Arthur Meyre
9e5de38050 docs(tfhe): update common traits docs 2023-01-02 13:42:09 +01:00
Arthur Meyre
afc19a9b5b docs(core): add docstring and tests for GgswCiphertextList 2023-01-02 13:42:09 +01:00
Arthur Meyre
48f7457330 feat(core): add prelude 2023-01-02 13:42:09 +01:00
Arthur Meyre
fe31bbf7c1 chore(core): update Plaintext docstring 2023-01-02 13:42:09 +01:00
J-B Orfila
80a426f1df docs(crypto): doctests slice algorithms 2023-01-02 13:42:09 +01:00
Arthur Meyre
525225a4b2 refactor(tfhe): rename polynomial primitives and add docstrings + tests 2023-01-02 13:42:09 +01:00
Arthur Meyre
c222459d07 chore(tfhe): derive PartialEq and Eq for all entities by default 2023-01-02 13:42:09 +01:00
Arthur Meyre
aec0a17a1c chore(tfhe): update rand to avoid deprecation warnings 2023-01-02 13:42:09 +01:00
Arthur Meyre
dbfc0b969b refactor(thfe): remove deprecation on MonomialDegree 2023-01-02 13:42:09 +01:00
Arthur Meyre
499f904a61 refactor(tfhe): move parameters and dispersion modules 2023-01-02 13:42:09 +01:00
Arthur Meyre
778414da89 refactor(tfhe): only one instance of FftBuffers, use for simple PBS algo 2023-01-02 13:42:09 +01:00
Arthur Meyre
a31087badf chore(doc): deny doc broken links crate-wide 2023-01-02 13:42:09 +01:00
Arthur Meyre
2dd6c237f9 chore(tfhe): add convenience traits to commons::traits for glob import 2023-01-02 13:42:09 +01:00
Arthur Meyre
286d016003 chore(tools): add convenience pcc and conformance targets 2023-01-02 13:42:09 +01:00
Arthur Meyre
03f63ec202 chore(tfhe): fix refactor TODOs 2023-01-02 13:42:09 +01:00
Arthur Meyre
4d08b61064 refactor(tfhe): unplug core and remove unused parts 2023-01-02 13:42:09 +01:00
Arthur Meyre
91b310289d refactor(boolean): unplug core engines 2023-01-02 13:42:09 +01:00
Arthur Meyre
bdd4461702 refactor(tfhe): unplug CUDA from boolean and remove the CUDA backend 2023-01-02 13:42:09 +01:00
Arthur Meyre
c6060eb478 refactor(tfhe): refactor serialization, unplug core_crypto::prelude 2023-01-02 13:42:09 +01:00
Arthur Meyre
8ac33a9f63 refactor(tfhe): entities Clone + Debug and default parallel + serialization 2023-01-02 13:42:09 +01:00
J-B Orfila
ba984c2537 feat(core): blind rotate binding 2023-01-02 13:42:09 +01:00
Arthur Meyre
c933f6d900 refactor(tfhe): Change Base naming scheme 2023-01-02 13:42:09 +01:00
Arthur Meyre
b182d8ef05 refactor(tfhe): remove core engines from ShortintEngine 2023-01-02 13:42:09 +01:00
Arthur Meyre
4aef755a81 refactor(tfhe): migrate PFPKSK 2023-01-02 13:42:09 +01:00
Arthur Meyre
67e9b02283 refactor(tfhe): plug woPBS primitives 2023-01-02 13:42:09 +01:00
Arthur Meyre
00bbfd1545 refactor(tfhe): plug fft backend with new primitives
- uniformize fft caches to avoid serialization problems
2023-01-02 13:42:09 +01:00
Arthur Meyre
a239b9e386 chore(tfhe): remove binary naming 2023-01-02 13:42:09 +01:00
Arthur Meyre
04415320d9 refactor(tfhe): add allocate and encrypt for BSK
- use new generation when creating ServerKey in shortint
- next step requires taking parts of the FFT backend for the refactor
2023-01-02 13:42:09 +01:00
Arthur Meyre
4a0fb6b42e refactor(tfhe): add parallel bootstrap key generation
- add equivalence test between refactored sequential and parallel BSK
generation
2023-01-02 13:42:09 +01:00
Arthur Meyre
b445e349a6 chore(tfhe): update associated types name for contiguous container traits 2023-01-02 13:42:09 +01:00
Arthur Meyre
2aa84d2b3c refactor(tfhe): reproduce sequential BSK generation 2023-01-02 13:42:09 +01:00
Arthur Meyre
d0d0b542ac refactor(tfhe): add GGSW encryption with coherency test between old and new 2023-01-02 13:42:09 +01:00
Arthur Meyre
07f496ac23 chore(tfhe): minor fixes 2023-01-02 13:42:09 +01:00
Arthur Meyre
b3e456de28 refactor(tfhe): rewrite lwe keyswitch algorithm with new system 2023-01-02 13:42:09 +01:00
Arthur Meyre
4bdb507086 chore(tfhe): make imports globs for ease of use 2023-01-02 13:42:09 +01:00
Arthur Meyre
98d2e358bb chore(ci): fix tooling with minimum version for GATs requirements 2023-01-02 13:42:09 +01:00
Arthur Meyre
be4e1a878d refactor(tfhe): add refactored LweKeyswitchKey generation algorithm 2023-01-02 13:42:09 +01:00
Arthur Meyre
120e7b5a6b refactor(tfhe): transition GlweSecretKey
- serialization work still pending
2023-01-02 13:42:09 +01:00
Arthur Meyre
3185310610 refactor(shortint): change the LweCiphertext type 2023-01-02 13:42:09 +01:00
Arthur Meyre
1c40890aeb refactor(tfhe): first step of progressive refactor
- provide new structs and compatibility layers to convert between types
as much as possible
- we are missing key view types in public APIs, making this a bit tricky in
that particular case
2023-01-02 13:42:09 +01:00
Arthur Meyre
be7f26a30f refactor(core): introduce new modules for progressive rework
- the strategy is to have new entities for which the required algorithms will be
implemented, re-using existing private implementations
- when algorithms are missing, conversion functions will at first be used to
be able to switch back to the old system and use existing primitives
2023-01-02 13:42:09 +01:00
Petar Ivanov
6a3d579749 fix(tools): fix arch detection script for aarch64
On Linux with Apple M1, the output of `uname -a` is:

```
Linux ... aarch64 aarch64 aarch64 GNU/Linux
```

Therefore, recognize that output as aarch64.
2022-12-16 13:53:13 +01:00
Jeremy Bradley-Silverio Donato
9e04f031e8 chore(tfhe): Update README.md 2022-12-14 16:18:05 +01:00
Arthur Meyre
506fd88468 chore(ci): sync repos on push 2022-12-05 17:51:04 +01:00
J-B Orfila
da1592f997 chore(all): update root licence 2022-12-02 15:06:00 +01:00
David Testé
d38edb5096 chore(ci): do not parse report dir when walking subdirectories 2022-11-30 18:03:25 +01:00
J-B Orfila
6d6fcb9562 chore(all): licence updated 2022-11-30 17:43:19 +01:00
David Testé
9a2212e305 chore(ci): parse subdirectories for shortint benchmark results 2022-11-30 15:53:42 +01:00
J-B Orfila
b8d437cbde fix(doc): update pk encryption example for shortint 2022-11-30 15:00:56 +01:00
Alexandre Quint
b89ca6fd87 chore(doc): language edits
GitBook: [#1] TFHE-rs edits - JS
2022-11-30 14:14:56 +01:00
David Testé
d92bcb3ef4 chore(ci): create benchmark aws profile using ec2 m6i.metal 2022-11-23 19:01:36 +01:00
David Testé
34011798f5 chore(ci): change benchmark parser input name
The use of "schema" was incorrect since it's meant to be used as
database name when sending data to Slab.
2022-11-23 19:01:36 +01:00
David Testé
3e192630d5 chore(ci): fix repositories checkout
There are no submodules in tfhe-rs, nor any need to authenticate
to get access to it. The right secret is used to check out Slab.
2022-11-23 19:01:36 +01:00
David Testé
76ec565217 chore(ci): add workflow to trigger all benchmarks automatically 2022-11-23 19:01:36 +01:00
Arthur Meyre
0c7159c040 chore(tfhe): fix README 2022-11-23 13:32:06 +01:00
David Testé
8ea446a105 chore(ci): add benchmark workflow for boolean and shortint
These workflows are meant to be triggered by the Slab CI bot server.
2022-11-23 11:46:36 +01:00
Arthur Meyre
3c5ffca775 chore(ci): add clippy_all, upgrade slab workflows, change cpu instance 2022-11-22 14:59:59 +01:00
Arthur Meyre
cc67dc9bb6 feat(wasm): add boolean server key primitives 2022-11-16 13:21:27 +01:00
Arthur Meyre
0891ea5551 chore(wasm): fix clippy lints 2022-11-16 13:21:27 +01:00
Arthur Meyre
45fb747c20 chore(ci): add commit checks for all branches 2022-11-16 11:13:58 +01:00
Arthur Meyre
dc9c651d3b chore(tfhe): fix Makefile typo 2022-11-16 11:13:58 +01:00
Arthur Meyre
95646ca03a chore(ci): update workflows 2022-11-16 11:13:58 +01:00
Arthur Meyre
2d4b8e3aa3 chore(tfhe): update version to 0.2.0 2022-11-14 09:56:22 +01:00
Arthur Meyre
352a2c69ab chore(doc): fix docs.rs build by adding katex header 2022-11-14 09:22:38 +01:00
J-B Orfila
d7b7b84f5b fix(thfe): update public key parameters 2022-11-10 20:16:30 +01:00
Arthur Meyre
ca16a80dfb chore(crate): fix description metadata 2022-11-10 20:15:35 +01:00
787 changed files with 126929 additions and 51462 deletions

2
.cargo/config.toml Normal file
View File

@@ -0,0 +1,2 @@
[alias]
xtask = "run --manifest-path ./tasks/Cargo.toml --"

View File

@@ -8,10 +8,10 @@ slow-timeout = "5m"
[[profile.ci.overrides]]
filter = 'test(/^.*param_message_1_carry_[567]$/) or test(/^.*param_message_4_carry_4$/)'
filter = 'test(/^.*param_message_1_carry_[567]_ks_pbs$/) or test(/^.*param_message_4_carry_4_ks_pbs$/)'
retries = 3
[[profile.ci.overrides]]
filter = 'test(/^.*param_message_[23]_carry_[23]$/)'
filter = 'test(/^.*param_message_[23]_carry_[23]_ks_pbs$/)'
retries = 1

15
.editorconfig Normal file
View File

@@ -0,0 +1,15 @@
# EditorConfig is awesome: https://EditorConfig.org
# top-most EditorConfig file
root = true
# Unix-style newlines with a newline ending every file
[*]
end_of_line = lf
insert_final_newline = true
# 4 space indentation
[*.rs]
charset = utf-8
indent_style = space
indent_size = 4

9
.github/dependabot.yaml vendored Normal file
View File

@@ -0,0 +1,9 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
# Check for updates to GitHub Actions every sunday
interval: "weekly"
day: "sunday"

13
.github/pull_request_template.md vendored Normal file
View File

@@ -0,0 +1,13 @@
<!-- Feel free to delete the template if the PR (bumping a version e.g.) does not fit the template -->
closes: _please link all relevant issues_
### PR content/description
### Check-list:
* [ ] Tests for the changes have been added (for bug fixes / features)
* [ ] Docs have been added / updated (for bug fixes / features)
* [ ] Relevant issues are marked as resolved/closed, related issues are linked in the description
* [ ] Check for breaking changes (including serialization changes) and add them to commit message following the conventional commit [specification][conventional-breaking]
[conventional-breaking]: https://www.conventionalcommits.org/en/v1.0.0/#commit-message-with-description-and-breaking-change-footer

View File

@@ -0,0 +1,119 @@
# Run a small subset of shortint and integer tests to ensure quick feedback.
name: Fast AWS Tests on CPU
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# All the inputs are provided by Slab
inputs:
instance_id:
description: "AWS instance ID"
type: string
instance_image_id:
description: "AWS instance AMI ID"
type: string
instance_type:
description: "AWS instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: 'Slab request ID'
type: string
fork_repo:
description: 'Name of forked repo as user/repo'
type: string
fork_git_sha:
description: 'Git SHA to checkout from fork'
type: string
jobs:
fast-tests:
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_${{ inputs.instance_image_id }}_${{ inputs.instance_type }}
cancel-in-progress: true
runs-on: ${{ inputs.runner_name }}
steps:
# Step used for log purpose.
- name: Instance configuration used
run: |
echo "ID: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
echo "Fork repo: ${{ inputs.fork_repo }}"
echo "Fork git sha: ${{ inputs.fork_git_sha }}"
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: ${{ inputs.fork_repo }}
ref: ${{ inputs.fork_git_sha }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Run core tests
run: |
AVX512_SUPPORT=ON make test_core_crypto
- name: Run boolean tests
run: |
make test_boolean
- name: Run user docs tests
run: |
make test_user_doc
- name: Run js on wasm API tests
run: |
make test_nodejs_wasm_api_in_docker
- name: Gen Keys if required
run: |
make gen_key_cache
- name: Run shortint tests
run: |
BIG_TESTS_INSTANCE=TRUE FAST_TESTS=TRUE make test_shortint_ci
- name: Run integer tests
run: |
BIG_TESTS_INSTANCE=TRUE FAST_TESTS=TRUE make test_integer_ci
- name: Run shortint multi-bit tests
run: |
BIG_TESTS_INSTANCE=TRUE FAST_TESTS=TRUE make test_shortint_multi_bit_ci
- name: Run integer multi-bit tests
run: |
BIG_TESTS_INSTANCE=TRUE FAST_TESTS=TRUE make test_integer_multi_bit_ci
- name: Run high-level API tests
run: |
make test_high_level_api
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Fast AWS tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

View File

@@ -0,0 +1,86 @@
name: AWS Integer Tests on CPU
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# All the inputs are provided by Slab
inputs:
instance_id:
description: "AWS instance ID"
type: string
instance_image_id:
description: "AWS instance AMI ID"
type: string
instance_type:
description: "AWS instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: 'Slab request ID'
type: string
fork_repo:
description: 'Name of forked repo as user/repo'
type: string
fork_git_sha:
description: 'Git SHA to checkout from fork'
type: string
jobs:
integer-tests:
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_${{ inputs.instance_image_id }}_${{ inputs.instance_type }}
cancel-in-progress: true
runs-on: ${{ inputs.runner_name }}
steps:
# Step used for log purpose.
- name: Instance configuration used
run: |
echo "ID: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
echo "Fork repo: ${{ inputs.fork_repo }}"
echo "Fork git sha: ${{ inputs.fork_git_sha }}"
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: ${{ inputs.fork_repo }}
ref: ${{ inputs.fork_git_sha }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Gen Keys if required
run: |
make gen_key_cache
- name: Run integer tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_integer_ci
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Integer tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

View File

@@ -0,0 +1,90 @@
name: AWS Multi Bit Tests on CPU
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# All the inputs are provided by Slab
inputs:
instance_id:
description: "AWS instance ID"
type: string
instance_image_id:
description: "AWS instance AMI ID"
type: string
instance_type:
description: "AWS instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: 'Slab request ID'
type: string
fork_repo:
description: 'Name of forked repo as user/repo'
type: string
fork_git_sha:
description: 'Git SHA to checkout from fork'
type: string
jobs:
multi-bit-tests:
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_${{ inputs.instance_image_id }}_${{ inputs.instance_type }}
cancel-in-progress: true
runs-on: ${{ inputs.runner_name }}
steps:
# Step used for log purpose.
- name: Instance configuration used
run: |
echo "ID: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
echo "Fork repo: ${{ inputs.fork_repo }}"
echo "Fork git sha: ${{ inputs.fork_git_sha }}"
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: ${{ inputs.fork_repo }}
ref: ${{ inputs.fork_git_sha }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Gen Keys if required
run: |
make GEN_KEY_CACHE_MULTI_BIT_ONLY=TRUE gen_key_cache
- name: Run shortint multi-bit tests
run: |
make test_shortint_multi_bit_ci
- name: Run integer multi-bit tests
run: |
make test_integer_multi_bit_ci
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Shortint tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

View File

@@ -22,36 +22,56 @@ on:
runner_name:
description: "Action runner name"
type: string
request_id:
description: 'Slab request ID'
type: string
fork_repo:
description: 'Name of forked repo as user/repo'
type: string
fork_git_sha:
description: 'Git SHA to checkout from fork'
type: string
jobs:
shortint-tests:
concurrency:
group: ${{ github.ref }}_${{ github.event.inputs.instance_image_id }}_${{ github.event.inputs.instance_type }}
group: ${{ github.workflow }}_${{ github.ref }}_${{ inputs.instance_image_id }}_${{ inputs.instance_type }}
cancel-in-progress: true
runs-on: ${{ github.event.inputs.runner_name }}
runs-on: ${{ inputs.runner_name }}
steps:
# Step used for log purpose.
- name: Instance configuration used
run: |
echo "ID: ${{ github.event.inputs.instance_id }}"
echo "AMI: ${{ github.event.inputs.instance_image_id }}"
echo "Type: ${{ github.event.inputs.instance_type }}"
echo "ID: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
echo "Fork repo: ${{ inputs.fork_repo }}"
echo "Fork git sha: ${{ inputs.fork_git_sha }}"
- uses: actions/checkout@v2
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: ${{ inputs.fork_repo }}
ref: ${{ inputs.fork_git_sha }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@v1
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Run core tests
run: |
make test_core_crypto
AVX512_SUPPORT=ON make test_core_crypto
- name: Run boolean tests
run: |
make test_boolean
- name: Run C API tests
run: |
@@ -61,33 +81,21 @@ jobs:
run: |
make test_user_doc
- name: Install AWS CLI
run: |
apt update
apt install -y awscli
- name: Configure AWS credentials from Test account
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_IAM_ID }}
aws-secret-access-key: ${{ secrets.AWS_IAM_KEY }}
role-to-assume: concrete-lib-ci
aws-region: eu-west-3
role-duration-seconds: 10800
- name: Download keys locally
run: aws s3 cp --recursive --no-progress s3://concrete-libs-keycache ./keys
- name: Gen Keys if required
run: |
make gen_key_cache
- name: Sync keys
run: aws s3 sync ./keys s3://concrete-libs-keycache
- name: Run shortint tests
run: |
make test_shortint_ci
BIG_TESTS_INSTANCE=TRUE make test_shortint_ci
- name: Run high-level API tests
run: |
BIG_TESTS_INSTANCE=TRUE make test_high_level_api
- name: Run example tests
run: |
make test_examples
- name: Slack Notification
if: ${{ always() }}

View File

@@ -1,113 +0,0 @@
# Compile and test project on an AWS instance
name: AWS tests on GPU
# This workflow is meant to be run via Zama CI bot Slab.
on:
workflow_dispatch:
inputs:
instance_id:
description: "AWS instance ID"
type: string
instance_image_id:
description: "AWS instance AMI ID"
type: string
instance_type:
description: "AWS EC2 instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
env:
CARGO_TERM_COLOR: always
RUSTFLAGS: "-C target-cpu=native"
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-tests-linux:
concurrency:
group: ${{ github.ref }}_${{ github.event.inputs.instance_image_id }}_${{ github.event.inputs.instance_type }}
cancel-in-progress: true
name: Test code in EC2
runs-on: ${{ github.event.inputs.runner_name }}
strategy:
fail-fast: false
# explicit include-based build matrix, of known valid options
matrix:
include:
- os: ubuntu-20.04
cuda: "11.8"
old_cuda: "11.1"
cuda_arch: "70"
gcc: 8
env:
CUDA_PATH: /usr/local/cuda-${{ matrix.cuda }}
OLD_CUDA_PATH: /usr/local/cuda-${{ matrix.old_cuda }}
steps:
- name: EC2 instance configuration used
run: |
echo "IDs: ${{ github.event.inputs.instance_id }}"
echo "AMI: ${{ github.event.inputs.instance_image_id }}"
echo "Type: ${{ github.event.inputs.instance_type }}"
- uses: actions/checkout@v2
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Export CUDA variables
run: |
echo "CUDA_PATH=$CUDA_PATH" >> "${GITHUB_ENV}"
echo "$CUDA_PATH/bin" >> "${GITHUB_PATH}"
echo "LD_LIBRARY_PATH=$CUDA_PATH/lib:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
# Specify the correct host compilers
- name: Export gcc and g++ variables
run: |
echo "CC=/usr/bin/gcc-${{ matrix.gcc }}" >> "${GITHUB_ENV}"
echo "CXX=/usr/bin/g++-${{ matrix.gcc }}" >> "${GITHUB_ENV}"
echo "CUDAHOSTCXX=/usr/bin/g++-${{ matrix.gcc }}" >> "${GITHUB_ENV}"
echo "CUDACXX=$CUDA_PATH/bin/nvcc" >> "${GITHUB_ENV}"
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@v1
with:
toolchain: stable
default: true
- name: Cuda clippy
run: |
make clippy_cuda
- name: Run core cuda tests
run: |
make test_core_crypto_cuda
- name: Test tfhe-rs/boolean with cpu
run: |
make test_boolean
- name: Test tfhe-rs/boolean with cuda backend with CUDA 11.8
run: |
make test_boolean_cuda
- name: Export variables for CUDA 11.1
run: |
echo "CUDA_PATH=$OLD_CUDA_PATH" >> "${GITHUB_ENV}"
echo "LD_LIBRARY_PATH=$OLD_CUDA_PATH/lib:$LD_LIBRARY_PATH" >> "${GITHUB_ENV}"
echo "CUDACXX=$OLD_CUDA_PATH/bin/nvcc" >> "${GITHUB_ENV}"
- name: Test tfhe-rs/boolean with cuda backend with CUDA 11.1
run: |
cargo clean
make test_boolean_cuda
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "(Slab ci-bot beta) AWS tests GPU finished with status ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

View File

@@ -0,0 +1,87 @@
name: AWS WASM Tests on CPU
env:
CARGO_TERM_COLOR: always
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
RUSTFLAGS: "-C target-cpu=native"
on:
# Allows you to run this workflow manually from the Actions tab as an alternative.
workflow_dispatch:
# All the inputs are provided by Slab
inputs:
instance_id:
description: "AWS instance ID"
type: string
instance_image_id:
description: "AWS instance AMI ID"
type: string
instance_type:
description: "AWS instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: 'Slab request ID'
type: string
fork_repo:
description: 'Name of forked repo as user/repo'
type: string
fork_git_sha:
description: 'Git SHA to checkout from fork'
type: string
jobs:
wasm-tests:
concurrency:
group: ${{ github.workflow }}_${{ github.ref }}_${{ inputs.instance_image_id }}_${{ inputs.instance_type }}
cancel-in-progress: true
runs-on: ${{ inputs.runner_name }}
steps:
# Step used for log purpose.
- name: Instance configuration used
run: |
echo "ID: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
echo "Fork repo: ${{ inputs.fork_repo }}"
echo "Fork git sha: ${{ inputs.fork_git_sha }}"
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: ${{ inputs.fork_repo }}
ref: ${{ inputs.fork_git_sha }}
- name: Set up home
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install latest stable
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Run js on wasm API tests
run: |
make test_nodejs_wasm_api_in_docker
- name: Run parallel wasm tests
run: |
make install_node
make ci_test_web_js_api_parallel
- name: Slack Notification
if: ${{ always() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "WASM tests finished with status: ${{ job.status }}. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

.github/workflows/boolean_benchmark.yml

@@ -0,0 +1,127 @@
# Run boolean benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Boolean benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-boolean-benchmarks:
name: Execute boolean benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run benchmarks with AVX512
run: |
make AVX512_SUPPORT=ON bench_boolean
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--throughput
- name: Measure key sizes
run: |
make measure_boolean_key_sizes
- name: Parse key sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe/boolean_key_sizes.csv ${{ env.RESULTS_FILENAME }} \
--key-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_boolean
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Boolean benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
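
Note on the "Send data to Slab" step above: the results file is signed before being POSTed so that the Slab CI bot can authenticate the payload via the shared JOB_SECRET. As a rough sketch only, assuming hmac_calculator.sh computes a plain HMAC-SHA256 over the raw file bytes (the script itself is not part of this diff), the signature could be reproduced in Python as:

import hmac
import hashlib

def slab_signature(results_path: str, job_secret: str) -> str:
    # Assumption: mirrors slab/scripts/hmac_calculator.sh by computing an
    # HMAC-SHA256 of the raw file bytes, keyed with the shared secret, and
    # hex-encoding it for the "X-Hub-Signature-256: sha256=<hex>" header.
    with open(results_path, "rb") as f:
        payload = f.read()
    return hmac.new(job_secret.encode(), payload, hashlib.sha256).hexdigest()

The same header and store_data_v2 command are reused by every benchmark workflow in this changeset, so the curl call only differs in the file being sent.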


@@ -17,53 +17,40 @@ jobs:
strategy:
matrix:
os: [ubuntu-latest, macos-latest]
os: [ubuntu-latest, macos-latest, windows-latest]
fail-fast: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
- name: Get rust toolchain to use for checks and lints
id: toolchain
- name: Run pcc checks
run: |
echo "rs-toolchain=$(make rs_toolchain)" >> "${GITHUB_OUTPUT}"
make pcc
- name: Check format
- name: Build Release core
run: |
make check_fmt
- name: Build doc
run: |
make doc
- name: Clippy boolean
run: |
make clippy_boolean
make build_core AVX512_SUPPORT=ON
make build_core_experimental AVX512_SUPPORT=ON
- name: Build Release boolean
run: |
make build_boolean
- name: Clippy shortint
run: |
make clippy_shortint
- name: Build Release shortint
run: |
make build_shortint
- name: Clippy shortint and boolean
- name: Build Release integer
run: |
make clippy
make build_integer
- name: Build Release shortint and boolean
- name: Build Release tfhe full
run: |
make build_boolean_and_shortint
- name: C API Clippy
run: |
make clippy_c_api
make build_tfhe_full
- name: Build Release c_api
run: |
make build_c_api
# The wasm build check is a bit annoying to set up here and is done during the tests in
# aws_tfhe_tests.yml


@@ -2,18 +2,15 @@
name: Check commit and PR compliance
on:
pull_request:
branches:
- main
- dev
jobs:
check-commit-pr:
name: Check commit and PR
runs-on: ubuntu-latest
steps:
- name: Check first line
uses: gsactions/commit-message-checker@v1
uses: gsactions/commit-message-checker@16fa2d5de096ae0d35626443bcd24f1e756cafee
with:
pattern: '^((feat|fix|chore|refactor|style|test|docs|doc)\(\w+\)\:) .+$'
pattern: '^((feat|fix|chore|refactor|style|test|docs|doc)(\(\w+\))?\:) .+$'
flags: "gs"
error: 'Your first line has to contain a commit type and scope like "feat(my_feature): msg".'
excludeDescription: "true" # optional: this excludes the description body of a pull request
@@ -22,7 +19,7 @@ jobs:
accessToken: ${{ secrets.GITHUB_TOKEN }} # github access token is only required if checkAllCommitMessages is true
- name: Check line length
uses: gsactions/commit-message-checker@v1
uses: gsactions/commit-message-checker@16fa2d5de096ae0d35626443bcd24f1e756cafee
with:
pattern: '(^.{0,74}$\r?\n?){0,20}'
flags: "gm"

.github/workflows/integer_benchmark.yml

@@ -0,0 +1,129 @@
# Run integer benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Integer benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
PARSE_INTEGER_BENCH_CSV_FILE: tfhe_rs_integer_benches_${{ github.sha }}.csv
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-integer-benchmarks:
name: Execute integer benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run benchmarks with AVX512
run: |
make AVX512_SUPPORT=ON bench_integer
- name: Parse benchmarks to csv
run: |
make PARSE_INTEGER_BENCH_CSV_FILE=${{ env.PARSE_INTEGER_BENCH_CSV_FILE }} \
parse_integer_benches
- name: Upload csv results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_csv_integer
path: ${{ env.PARSE_INTEGER_BENCH_CSV_FILE }}
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--throughput
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_integer
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Integer benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}


@@ -0,0 +1,129 @@
# Run integer benchmarks with multi-bit cryptographic parameters on an AWS instance and return parsed results to Slab CI bot.
name: Integer Multi-bit benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
PARSE_INTEGER_BENCH_CSV_FILE: tfhe_rs_integer_benches_${{ github.sha }}.csv
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-integer-benchmarks:
name: Execute integer multi-bit benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run multi-bit benchmarks with AVX512
run: |
make AVX512_SUPPORT=ON bench_integer_multi_bit
- name: Parse benchmarks to csv
run: |
make PARSE_INTEGER_BENCH_CSV_FILE=${{ env.PARSE_INTEGER_BENCH_CSV_FILE }} \
parse_integer_benches
- name: Upload csv results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_csv_integer
path: ${{ env.PARSE_INTEGER_BENCH_CSV_FILE }}
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--throughput
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_integer
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Integer benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}


@@ -4,11 +4,19 @@ on:
workflow_dispatch:
pull_request:
types: [labeled]
# Have a nightly build for M1 tests
schedule:
# * is a special character in YAML so you have to quote this string
# At 22:00 every day
# Timezone is UTC, so Paris time is +2 during the summer and +1 during winter
- cron: "0 22 * * *"
env:
CARGO_TERM_COLOR: always
RUSTFLAGS: "-C target-cpu=native"
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
CARGO_PROFILE: release_lto_off
FAST_TESTS: "TRUE"
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref }}
@@ -16,62 +24,54 @@ concurrency:
jobs:
cargo-builds:
if: "github.event_name != 'pull_request' || contains(github.event.label.name, 'm1_test')"
if: ${{ (github.event_name == 'schedule' && github.repository == 'zama-ai/tfhe-rs') || github.event_name == 'workflow_dispatch' || contains(github.event.label.name, 'm1_test') }}
runs-on: ["self-hosted", "m1mac"]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
- name: Install latest stable
uses: actions-rs/toolchain@v1
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: stable
default: true
- name: Build doc
- name: Run pcc checks
run: |
make doc
make pcc
- name: Clippy boolean
- name: Build Release core
run: |
make clippy_boolean
make build_core
- name: Build Release boolean
run: |
make build_boolean
- name: Clippy shortint
run: |
make clippy_shortint
- name: Build Release shortint
run: |
make build_shortint
- name: Clippy shortint and boolean
- name: Build Release integer
run: |
make clippy
make build_integer
- name: Build Release shortint and boolean
- name: Build Release tfhe full
run: |
make build_boolean_and_shortint
- name: C API Clippy
run: |
make clippy_c_api
make build_tfhe_full
- name: Build Release c_api
run: |
make build_c_api
- name: Test tfhe-rs/boolean with cpu
run: |
make test_boolean
- name: Run core tests
run: |
make test_core_crypto
- name: Run boolean tests
run: |
make test_boolean
- name: Run C API tests
run: |
make test_c_api
@@ -80,29 +80,34 @@ jobs:
run: |
make test_user_doc
- name: Configure AWS credentials from Test account
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_IAM_ID }}
aws-secret-access-key: ${{ secrets.AWS_IAM_KEY }}
role-to-assume: concrete-lib-ci
aws-region: eu-west-3
role-duration-seconds: 10800
- name: Download keys locally
run: aws s3 cp --recursive --no-progress s3://concrete-libs-keycache ./keys
# JS tests are more easily launched in docker; we won't test that on M1 as docker is pretty
# slow on Apple machines due to the virtualization layer.
- name: Gen Keys if required
run: |
make gen_key_cache
- name: Sync keys
run: aws s3 sync ./keys s3://concrete-libs-keycache
- name: Run shortint tests
run: |
make test_shortint_ci
- name: Run integer tests
run: |
make test_integer_ci
- name: Gen Keys if required
run: |
make GEN_KEY_CACHE_MULTI_BIT_ONLY=TRUE gen_key_cache
- name: Run shortint multi bit tests
run: |
make test_shortint_multi_bit_ci
# # These multi bit integer tests are too slow on M1 with low core count and low RAM
# - name: Run integer multi bit tests
# run: |
# make test_integer_multi_bit_ci
remove_label:
name: Remove m1_test label
runs-on: ubuntu-latest
@@ -110,13 +115,14 @@ jobs:
- cargo-builds
if: ${{ always() }}
steps:
- uses: actions-ecosystem/action-remove-labels@v1
- uses: actions-ecosystem/action-remove-labels@2ce5d41b4b6aa8503e285553f75ed56e0a40bae0
if: ${{ github.event_name == 'pull_request' }}
with:
labels: m1_test
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Slack Notification
if: ${{ always() }}
if: ${{ needs.cargo-builds.result != 'skipped' }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:

.github/workflows/make_release.yml

@@ -0,0 +1,84 @@
# Publish new release of tfhe-rs on various platform.
name: Publish release
on:
workflow_dispatch:
inputs:
dry_run:
description: "Dry-run"
type: boolean
default: true
push_to_crates:
description: "Push to crate"
type: boolean
default: true
push_web_package:
description: "Push web js package"
type: boolean
default: true
push_node_package:
description: "Push node js package"
type: boolean
default: true
env:
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
publish_release:
name: Publish Release
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Publish crates.io package
if: ${{ inputs.push_to_crates }}
env:
CRATES_TOKEN: ${{ secrets.CARGO_REGISTRY_TOKEN }}
DRY_RUN: ${{ inputs.dry_run && '--dry-run' || '' }}
run: |
cargo publish -p tfhe --token ${{ env.CRATES_TOKEN }} ${{ env.DRY_RUN }}
- name: Build web package
if: ${{ inputs.push_web_package }}
run: |
make build_web_js_api
- name: Publish web package
if: ${{ inputs.push_web_package }}
uses: JS-DevTools/npm-publish@5a85faf05d2ade2d5b6682bfe5359915d5159c6c
with:
token: ${{ secrets.NPM_TOKEN }}
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
- name: Build Node package
if: ${{ inputs.push_node_package }}
run: |
rm -rf tfhe/pkg
make build_node_js_api
sed -i 's/"tfhe"/"node-tfhe"/g' tfhe/pkg/package.json
- name: Publish Node package
if: ${{ inputs.push_node_package }}
uses: JS-DevTools/npm-publish@5a85faf05d2ade2d5b6682bfe5359915d5159c6c
with:
token: ${{ secrets.NPM_TOKEN }}
package: tfhe/pkg/package.json
dry-run: ${{ inputs.dry_run }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Integer benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}
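
The cargo publish step above sets DRY_RUN with ${{ inputs.dry_run && '--dry-run' || '' }}. GitHub Actions expressions have no if/else operator, so the a && b || c idiom is the usual substitute (it behaves like a ternary as long as b is truthy, which '--dry-run' is). The same selection expressed in Python, purely as an illustration of the logic rather than the workflow's own code:

def dry_run_flag(dry_run: bool) -> str:
    # Equivalent of `${{ inputs.dry_run && '--dry-run' || '' }}`:
    # pass --dry-run to cargo publish only when the dry_run input is true.
    return "--dry-run" if dry_run else ""

print(dry_run_flag(True))   # --dry-run
print(dry_run_flag(False))  # empty string: the flag is simply omitted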

.github/workflows/pbs_benchmark.yml

@@ -0,0 +1,117 @@
# Run PBS benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: PBS benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-pbs-benchmarks:
name: Execute PBS benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run benchmarks with AVX512
run: |
make AVX512_SUPPORT=ON bench_pbs
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--name-suffix avx512 \
--walk-subdirs \
--throughput
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_pbs
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on downloaded artifact"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "PBS benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

.github/workflows/shortint_benchmark.yml

@@ -0,0 +1,127 @@
# Run shortint benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: Shortint benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-shortint-benchmarks:
name: Execute shortint benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run benchmarks with AVX512
run: |
make AVX512_SUPPORT=ON bench_shortint
- name: Parse results
run: |
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py target/criterion ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--walk-subdirs \
--name-suffix avx512 \
--throughput
- name: Measure key sizes
run: |
make measure_shortint_key_sizes
- name: Parse key sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe/shortint_key_sizes.csv ${{ env.RESULTS_FILENAME }} \
--key-sizes \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_shortint
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "Shortint benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

.github/workflows/start_benchmarks.yml

@@ -0,0 +1,109 @@
# Start all benchmark jobs on Slab CI bot.
name: Start all benchmarks
on:
push:
branches:
- "main"
workflow_dispatch:
inputs:
# The input name must be the name of the slab command to launch
boolean_bench:
description: "Run Boolean benches"
type: boolean
default: true
shortint_bench:
description: "Run shortint benches"
type: boolean
default: true
integer_bench:
description: "Run integer benches"
type: boolean
default: true
integer_multi_bit_bench:
description: "Run integer multi bit benches"
type: boolean
default: true
pbs_bench:
description: "Run PBS benches"
type: boolean
default: true
wasm_client_bench:
description: "Run WASM client benches"
type: boolean
default: true
jobs:
start-benchmarks:
if: ${{ (github.event_name == 'push' && github.repository == 'zama-ai/tfhe-rs') || github.event_name == 'workflow_dispatch' }}
strategy:
matrix:
command: [boolean_bench, shortint_bench, integer_bench, integer_multi_bit_bench, pbs_bench, wasm_client_bench]
runs-on: ubuntu-latest
steps:
- name: Checkout tfhe-rs
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Check for file changes
id: changed-files
uses: tj-actions/changed-files@de0eba32790fb9bf87471b32855a30fc8f9d5fc6
with:
files_yaml: |
common_benches:
- toolchain.txt
- Makefile
- ci/slab.toml
- tfhe/Cargo.toml
- tfhe/src/core_crypto/**
- .github/workflows/start_benchmarks.yml
boolean_bench:
- tfhe/src/boolean/**
- tfhe/benches/boolean/**
- .github/workflows/boolean_benchmark.yml
shortint_bench:
- tfhe/src/shortint/**
- tfhe/benches/shortint/**
- .github/workflows/shortint_benchmark.yml
integer_bench:
- tfhe/src/shortint/**
- tfhe/src/integer/**
- tfhe/benches/integer/**
- .github/workflows/integer_benchmark.yml
integer_multi_bit_bench:
- tfhe/src/shortint/**
- tfhe/src/integer/**
- tfhe/benches/integer/**
- .github/workflows/integer_benchmark.yml
pbs_bench:
- tfhe/src/core_crypto/**
- tfhe/benches/core_crypto/**
- .github/workflows/pbs_benchmark.yml
wasm_client_bench:
- tfhe/web_wasm_parallel_tests/**
- .github/workflows/wasm_client_benchmark.yml
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Start AWS job in Slab
# If manually triggered, check that the current bench has been requested
# Otherwise, if it's on push, check that files relevant to benchmarks have changed
if: (github.event_name == 'workflow_dispatch' && github.event.inputs[matrix.command] == 'true') || (github.event_name == 'push' && (steps.changed-files.outputs.common_benches_any_changed == 'true' || steps.changed-files.outputs[format('{0}_any_changed', matrix.command)] == 'true'))
shell: bash
run: |
echo -n '{"command": "${{ matrix.command }}", "git_ref": "${{ github.ref }}", "sha": "${{ github.sha }}"}' > command.json
SIGNATURE="$(slab/scripts/hmac_calculator.sh command.json '${{ secrets.JOB_SECRET }}')"
curl -v -k \
--fail-with-body \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: start_aws" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @command.json \
${{ secrets.SLAB_URL }}
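
The "Start AWS job in Slab" step only runs when its long if: condition holds: on workflow_dispatch it requires the matching boolean input to be checked, and on push it requires either the common_benches file group or the bench-specific file group (from the changed-files step) to have changed. A restatement of that gate as a hypothetical Python helper, for readability only:

def should_start_bench(event_name: str, command: str,
                       dispatch_inputs: dict, changed_outputs: dict) -> bool:
    # Manual trigger: only start the benches whose input was set to true.
    if event_name == "workflow_dispatch":
        return dispatch_inputs.get(command) == "true"
    # Push to main: start if common files or this bench group's files changed.
    if event_name == "push":
        return (changed_outputs.get("common_benches_any_changed") == "true"
                or changed_outputs.get(f"{command}_any_changed") == "true")
    return False

# Example: a push touching only tfhe/src/boolean/** starts boolean_bench but not pbs_bench.
print(should_start_bench("push", "boolean_bench", {}, {"boolean_bench_any_changed": "true"}))
print(should_start_bench("push", "pbs_bench", {}, {"boolean_bench_any_changed": "true"}))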

.github/workflows/sync_on_push.yml

@@ -0,0 +1,37 @@
# Sync repos
name: Sync repos
on:
push:
branches:
- 'main'
workflow_dispatch:
jobs:
sync-repo:
if: ${{ github.repository == 'zama-ai/tfhe-rs' }}
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Save repo
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: repo-archive
path: '.'
- name: git-sync
uses: wei/git-sync@55c6b63b4f21607da0e9877ca9b4d11a29fc6d83
with:
source_repo: "zama-ai/tfhe-rs"
source_branch: "main"
destination_repo: "https://${{ secrets.BOT_USERNAME }}:${{ secrets.CONCRETE_ACTIONS_TOKEN }}@github.com/${{ secrets.SYNC_DEST_REPO }}"
destination_branch: "main"
- name: git-sync tags
uses: wei/git-sync@55c6b63b4f21607da0e9877ca9b4d11a29fc6d83
with:
source_repo: "zama-ai/tfhe-rs"
source_branch: "refs/tags/*"
destination_repo: "https://${{ secrets.BOT_USERNAME }}:${{ secrets.CONCRETE_ACTIONS_TOKEN }}@github.com/${{ secrets.SYNC_DEST_REPO }}"
destination_branch: "refs/tags/*"


@@ -0,0 +1,34 @@
# Trigger an AWS build each time commits are pushed to a pull request.
name: PR AWS build trigger
on:
pull_request:
pull_request_review:
types: [submitted]
jobs:
trigger-tests:
runs-on: ubuntu-latest
permissions:
pull-requests: write
steps:
- name: Launch fast tests
if: ${{ github.event_name == 'pull_request' }}
uses: mshick/add-pr-comment@a65df5f64fc741e91c59b8359a4bc56e57aaf5b1
with:
allow-repeats: true
message: |
@slab-ci cpu_fast_test
- name: Launch full tests suite
if: ${{ github.event_name == 'pull_request_review' && github.event.review.state == 'approved' }}
uses: mshick/add-pr-comment@a65df5f64fc741e91c59b8359a4bc56e57aaf5b1
with:
allow-repeats: true
message: |
Pull Request has been approved :tada:
Launching full test suite...
@slab-ci cpu_test
@slab-ci cpu_integer_test
@slab-ci cpu_multi_bit_test
@slab-ci cpu_wasm_test
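
The two steps above drive the CI entirely through pull-request comments: fast tests are requested on every push to a PR, and the full suite (cpu_test, cpu_integer_test, cpu_multi_bit_test, cpu_wasm_test) once the review is approved. The Slab bot that consumes these comments is not part of this changeset; a hypothetical sketch of how such "@slab-ci <command>" lines could be extracted from a comment body:

import re

def slab_commands(comment_body: str) -> list:
    # Hypothetical parser: the real Slab bot's comment handling is external to this repo.
    return re.findall(r"^@slab-ci\s+(\S+)", comment_body, flags=re.MULTILINE)

body = "Pull Request has been approved :tada:\n@slab-ci cpu_test\n@slab-ci cpu_wasm_test"
print(slab_commands(body))  # ['cpu_test', 'cpu_wasm_test']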


@@ -0,0 +1,128 @@
# Run WASM client benchmarks on an AWS instance and return parsed results to Slab CI bot.
name: WASM client benchmarks
on:
workflow_dispatch:
inputs:
instance_id:
description: "Instance ID"
type: string
instance_image_id:
description: "Instance AMI ID"
type: string
instance_type:
description: "Instance product type"
type: string
runner_name:
description: "Action runner name"
type: string
request_id:
description: "Slab request ID"
type: string
env:
CARGO_TERM_COLOR: always
RESULTS_FILENAME: parsed_benchmark_results_${{ github.sha }}.json
ACTION_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
jobs:
run-wasm-client-benchmarks:
name: Execute WASM client benchmarks in EC2
runs-on: ${{ github.event.inputs.runner_name }}
if: ${{ !cancelled() }}
steps:
- name: Instance configuration used
run: |
echo "IDs: ${{ inputs.instance_id }}"
echo "AMI: ${{ inputs.instance_image_id }}"
echo "Type: ${{ inputs.instance_type }}"
echo "Request ID: ${{ inputs.request_id }}"
- name: Get benchmark date
run: |
echo "BENCH_DATE=$(date --iso-8601=seconds)" >> "${GITHUB_ENV}"
- name: Checkout tfhe-rs repo with tags
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
fetch-depth: 0
- name: Set up home
# "Install rust" step require root user to have a HOME directory which is not set.
run: |
echo "HOME=/home/ubuntu" >> "${GITHUB_ENV}"
- name: Install rust
uses: actions-rs/toolchain@16499b5e05bf2e26879000db0c1d13f7e13fa3af
with:
toolchain: nightly
override: true
- name: Run benchmarks
run: |
make install_node
make ci_bench_web_js_api_parallel
- name: Parse results
run: |
make parse_wasm_benchmarks
COMMIT_DATE="$(git --no-pager show -s --format=%cd --date=iso8601-strict ${{ github.sha }})"
COMMIT_HASH="$(git describe --tags --dirty)"
python3 ./ci/benchmark_parser.py tfhe/wasm_pk_gen.csv ${{ env.RESULTS_FILENAME }} \
--database tfhe_rs \
--hardware ${{ inputs.instance_type }} \
--project-version "${COMMIT_HASH}" \
--branch ${{ github.ref_name }} \
--commit-date "${COMMIT_DATE}" \
--bench-date "${{ env.BENCH_DATE }}" \
--key-gen
- name: Measure public key and ciphertext sizes in HL Api
run: |
make measure_hlapi_compact_pk_ct_sizes
- name: Parse key and ciphertext sizes results
run: |
python3 ./ci/benchmark_parser.py tfhe/hlapi_cpk_and_cctl_sizes.csv ${{ env.RESULTS_FILENAME }} \
--key-gen \
--append-results
- name: Upload parsed results artifact
uses: actions/upload-artifact@0b7f8abb1508181956e8e162db84b466c27e18ce
with:
name: ${{ github.sha }}_wasm
path: ${{ env.RESULTS_FILENAME }}
- name: Checkout Slab repo
uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9
with:
repository: zama-ai/slab
path: slab
token: ${{ secrets.CONCRETE_ACTIONS_TOKEN }}
- name: Send data to Slab
shell: bash
run: |
echo "Computing HMac on results file"
SIGNATURE="$(slab/scripts/hmac_calculator.sh ${{ env.RESULTS_FILENAME }} '${{ secrets.JOB_SECRET }}')"
echo "Sending results to Slab..."
curl -v -k \
-H "Content-Type: application/json" \
-H "X-Slab-Repository: ${{ github.repository }}" \
-H "X-Slab-Command: store_data_v2" \
-H "X-Hub-Signature-256: sha256=${SIGNATURE}" \
-d @${{ env.RESULTS_FILENAME }} \
${{ secrets.SLAB_URL }}
- name: Slack Notification
if: ${{ failure() }}
continue-on-error: true
uses: rtCamp/action-slack-notify@12e36fc18b0689399306c2e0b3e0f2978b7f1ee7
env:
SLACK_COLOR: ${{ job.status }}
SLACK_CHANNEL: ${{ secrets.SLACK_CHANNEL }}
SLACK_ICON: https://pbs.twimg.com/profile_images/1274014582265298945/OjBKP9kn_400x400.png
SLACK_MESSAGE: "WASM benchmarks failed. (${{ env.ACTION_RUN_URL }})"
SLACK_USERNAME: ${{ secrets.BOT_USERNAME }}
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }}

.gitignore

@@ -3,7 +3,13 @@ target/
.vscode/
# Path we use for internal-keycache during tests
keys/
./keys/
# In case of symlinked keys
./keys
**/Cargo.lock
**/*.bin
# Some of our bench outputs
/tfhe/benchmarks_parameters
**/*.csv

CODE_OF_CONDUCT.md

@@ -0,0 +1,131 @@
# Contributor Covenant Code of Conduct
## Our pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our standards
Examples of behavior that contributes to a positive environment for our
community include:
- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
- The use of sexualized language or imagery, and sexual attention or advances of
any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address,
without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting us anonymously through [this form](https://forms.gle/569j3cZqGRFgrR3u9).
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][mozilla coc].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][faq]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[faq]: https://www.contributor-covenant.org/faq
[homepage]: https://www.contributor-covenant.org
[mozilla coc]: https://github.com/mozilla/diversity
[translations]: https://www.contributor-covenant.org/translations
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html


@@ -1,9 +1,19 @@
[workspace]
resolver = "2"
members = ["tfhe"]
members = ["tfhe", "tasks", "apps/trivium"]
[profile.bench]
lto = "fat"
[profile.release]
lto = "fat"
[profile.release_lto_off]
inherits = "release"
lto = "off"
# Compiles much faster for tests and allows reasonable performance for iterating
[profile.devo]
inherits = "dev"
opt-level = 3
lto = "off"


@@ -1,6 +1,6 @@
BSD 3-Clause Clear License
Copyright © 2022 ZAMA.
Copyright © 2023 ZAMA.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
@@ -16,7 +16,7 @@ materials provided with the distribution.
3. Neither the name of ZAMA nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior written permission.
NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE*.
NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE.
THIS SOFTWARE IS PROVIDED BY THE ZAMA AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
@@ -26,8 +26,3 @@ OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CA
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*In addition to the rights carried by this license, ZAMA grants to the user a non-exclusive,
free and non-commercial license on all patents filed in its name relating to the open-source
code (the "Patents") for the sole purpose of evaluation, development, research, prototyping
and experimentation.

Makefile

@@ -1,13 +1,39 @@
SHELL:=$(shell /usr/bin/env which bash)
RS_CHECK_TOOLCHAIN:=$(shell cat toolchain.txt)
OS:=$(shell uname)
RS_CHECK_TOOLCHAIN:=$(shell cat toolchain.txt | tr -d '\n')
CARGO_RS_CHECK_TOOLCHAIN:=+$(RS_CHECK_TOOLCHAIN)
TARGET_ARCH_FEATURE:=$(shell ./scripts/get_arch_feature.sh)
RS_BUILD_TOOLCHAIN:=$(shell \
( (echo $(TARGET_ARCH_FEATURE) | grep -q x86) && echo stable) || echo $(RS_CHECK_TOOLCHAIN))
CARGO_RS_BUILD_TOOLCHAIN:=+$(RS_BUILD_TOOLCHAIN)
CARGO_PROFILE?=release
MIN_RUST_VERSION:=$(shell grep rust-version tfhe/Cargo.toml | cut -d '=' -f 2 | xargs)
AVX512_SUPPORT?=OFF
WASM_RUSTFLAGS:=
BIG_TESTS_INSTANCE?=FALSE
GEN_KEY_CACHE_MULTI_BIT_ONLY?=FALSE
PARSE_INTEGER_BENCH_CSV_FILE?=tfhe_rs_integer_benches.csv
FAST_TESTS?=FALSE
BENCH_OP_FLAVOR?=DEFAULT
# This is done to avoid forgetting it, we still precise the RUSTFLAGS in the commands to be able to
# copy paste the command in the termianl and change them if required without forgetting the flags
export RUSTFLAGS:=-C target-cpu=native
# copy paste the command in the terminal and change them if required without forgetting the flags
export RUSTFLAGS?=-C target-cpu=native
ifeq ($(AVX512_SUPPORT),ON)
AVX512_FEATURE=nightly-avx512
else
AVX512_FEATURE=
endif
ifeq ($(GEN_KEY_CACHE_MULTI_BIT_ONLY),TRUE)
MULTI_BIT_ONLY=--multi-bit-only
else
MULTI_BIT_ONLY=
endif
# Variables used only for regex_engine example
REGEX_STRING?=''
REGEX_PATTERN?=''
.PHONY: rs_check_toolchain # Echo the rust toolchain used for checks
rs_check_toolchain:
@@ -21,21 +47,37 @@ rs_build_toolchain:
install_rs_check_toolchain:
@rustup toolchain list | grep -q "$(RS_CHECK_TOOLCHAIN)" || \
rustup toolchain install --profile default "$(RS_CHECK_TOOLCHAIN)" || \
echo "Unable to install $(RS_CHECK_TOOLCHAIN) toolchain, check your rustup installation. \
Rustup can be downloaded at https://rustup.rs/"
( echo "Unable to install $(RS_CHECK_TOOLCHAIN) toolchain, check your rustup installation. \
Rustup can be downloaded at https://rustup.rs/" && exit 1 )
.PHONY: install_rs_build_toolchain # Install the toolchain used for builds
install_rs_build_toolchain:
@rustup toolchain list | grep -q "$(RS_BUILD_TOOLCHAIN)" || \
@( rustup toolchain list | grep -q "$(RS_BUILD_TOOLCHAIN)" && \
./scripts/check_cargo_min_ver.sh \
--rust-toolchain "$(CARGO_RS_BUILD_TOOLCHAIN)" \
--min-rust-version "$(MIN_RUST_VERSION)" ) || \
rustup toolchain install --profile default "$(RS_BUILD_TOOLCHAIN)" || \
echo "Unable to install $(RS_BUILD_TOOLCHAIN) toolchain, check your rustup installation. \
Rustup can be downloaded at https://rustup.rs/"
( echo "Unable to install $(RS_BUILD_TOOLCHAIN) toolchain, check your rustup installation. \
Rustup can be downloaded at https://rustup.rs/" && exit 1 )
.PHONY: install_cargo_nextest # Install cargo nextest used for shortint tests
install_cargo_nextest: install_rs_build_toolchain
@cargo nextest --version > /dev/null 2>&1 || \
cargo $(CARGO_RS_BUILD_TOOLCHAIN) install cargo-nextest --locked || \
echo "Unable to install cargo nextest, unknown error."
( echo "Unable to install cargo nextest, unknown error." && exit 1 )
.PHONY: install_wasm_pack # Install wasm-pack to build JS packages
install_wasm_pack: install_rs_build_toolchain
@wasm-pack --version > /dev/null 2>&1 || \
cargo $(CARGO_RS_BUILD_TOOLCHAIN) install wasm-pack || \
( echo "Unable to install cargo wasm-pack, unknown error." && exit 1 )
.PHONY: install_node # Install last version of NodeJS via nvm
install_node:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | $(SHELL)
source ~/.bashrc
$(SHELL) -i -c 'nvm install node' || \
( echo "Unable to install node, unknown error." && exit 1 )
.PHONY: fmt # Format rust code
fmt: install_rs_check_toolchain
@@ -45,6 +87,15 @@ fmt: install_rs_check_toolchain
check_fmt: install_rs_check_toolchain
cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" fmt --check
.PHONY: clippy_core # Run clippy lints on core_crypto with and without experimental features
clippy_core: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
--features=$(TARGET_ARCH_FEATURE) \
-p tfhe -- --no-deps -D warnings
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
--features=$(TARGET_ARCH_FEATURE),experimental \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy_boolean # Run clippy lints enabling the boolean features
clippy_boolean: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
@@ -57,10 +108,16 @@ clippy_shortint: install_rs_check_toolchain
--features=$(TARGET_ARCH_FEATURE),shortint \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy # Run clippy lints enabling the boolean, shortint
clippy: install_rs_check_toolchain
.PHONY: clippy_integer # Run clippy lints enabling the integer features
clippy_integer: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint \
--features=$(TARGET_ARCH_FEATURE),integer \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy # Run clippy lints enabling the boolean, shortint, integer
clippy: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy --all-targets \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy_c_api # Run clippy lints enabling the boolean, shortint and the C API
@@ -69,83 +126,397 @@ clippy_c_api: install_rs_check_toolchain
--features=$(TARGET_ARCH_FEATURE),boolean-c-api,shortint-c-api \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy_cuda # Run clippy lints enabling the boolean, shortint, cuda and c API features
clippy_cuda: install_rs_check_toolchain
.PHONY: clippy_js_wasm_api # Run clippy lints enabling the boolean, shortint, integer and the js wasm API
clippy_js_wasm_api: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
--features=$(TARGET_ARCH_FEATURE),cuda,boolean-c-api,shortint-c-api \
--features=boolean-client-js-wasm-api,shortint-client-js-wasm-api,integer-client-js-wasm-api \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy_tasks # Run clippy lints on helper tasks crate.
clippy_tasks:
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy \
-p tasks -- --no-deps -D warnings
.PHONY: clippy_all_targets # Run clippy lints on all targets (benches, examples, etc.)
clippy_all_targets:
RUSTFLAGS="$(RUSTFLAGS)" cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" clippy --all-targets \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer,internal-keycache \
-p tfhe -- --no-deps -D warnings
.PHONY: clippy_all # Run all clippy targets
clippy_all: clippy clippy_boolean clippy_shortint clippy_integer clippy_all_targets clippy_c_api \
clippy_js_wasm_api clippy_tasks clippy_core
.PHONY: clippy_fast # Run main clippy targets
clippy_fast: clippy clippy_all_targets clippy_c_api clippy_js_wasm_api clippy_tasks clippy_core
.PHONY: gen_key_cache # Run the script to generate keys and cache them for shortint tests
gen_key_cache: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) run --release \
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example generates_test_keys \
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache -p tfhe
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache -p tfhe -- \
$(MULTI_BIT_ONLY)
.PHONY: build_core # Build core_crypto without experimental features
build_core: install_rs_build_toolchain install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE) -p tfhe
@if [[ "$(AVX512_SUPPORT)" == "ON" ]]; then \
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),$(AVX512_FEATURE) -p tfhe; \
fi
.PHONY: build_core_experimental # Build core_crypto with experimental features
build_core_experimental: install_rs_build_toolchain install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),experimental -p tfhe
@if [[ "$(AVX512_SUPPORT)" == "ON" ]]; then \
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),experimental,$(AVX512_FEATURE) -p tfhe; \
fi
.PHONY: build_boolean # Build with boolean enabled
build_boolean: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --release \
--features=$(TARGET_ARCH_FEATURE),boolean -p tfhe
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean -p tfhe --all-targets
.PHONY: build_shortint # Build with shortint enabled
build_shortint: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --release \
--features=$(TARGET_ARCH_FEATURE),shortint -p tfhe
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),shortint -p tfhe --all-targets
.PHONY: build_boolean_and_shortint # Build with boolean and shortint enabled
build_boolean_and_shortint: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --release \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint -p tfhe
.PHONY: build_integer # Build with integer enabled
build_integer: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),integer -p tfhe --all-targets
.PHONY: build_c_api # Build the C API for boolean and shortint
build_c_api: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --release
--features=$(TARGET_ARCH_FEATURE),boolean-c-api,shortint-c-api -p tfhe
.PHONY: build_tfhe_full # Build with boolean, shortint and integer enabled
build_tfhe_full: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer -p tfhe --all-targets
.PHONY: test_core_crypto # Run the tests of the core_crypto module
test_core_crypto: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --release \
--features=$(TARGET_ARCH_FEATURE) -p tfhe -- core_crypto::
.PHONY: build_c_api # Build the C API for boolean, shortint and integer
build_c_api: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean-c-api,shortint-c-api,high-level-c-api \
-p tfhe
.PHONY: test_core_crypto_cuda # Run the tests of the core_crypto module with cuda enabled
test_core_crypto_cuda: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --release \
--features=$(TARGET_ARCH_FEATURE),cuda -p tfhe -- core_crypto::backends::cuda::
.PHONY: build_c_api_experimental_deterministic_fft # Build the C API for boolean, shortint and integer with experimental deterministic FFT
build_c_api_experimental_deterministic_fft: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) build --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean-c-api,shortint-c-api,high-level-c-api,experimental-force_fft_algo_dif4 \
-p tfhe
.PHONY: build_web_js_api # Build the js API targeting the web browser
build_web_js_api: install_rs_build_toolchain install_wasm_pack
cd tfhe && \
RUSTFLAGS="$(WASM_RUSTFLAGS)" rustup run "$(RS_BUILD_TOOLCHAIN)" \
wasm-pack build --release --target=web \
-- --features=boolean-client-js-wasm-api,shortint-client-js-wasm-api,integer-client-js-wasm-api
.PHONY: build_web_js_api_parallel # Build the js API targeting the web browser with parallelism support
build_web_js_api_parallel: install_rs_check_toolchain install_wasm_pack
cd tfhe && \
rustup component add rust-src --toolchain $(RS_CHECK_TOOLCHAIN) && \
RUSTFLAGS="$(WASM_RUSTFLAGS) -C target-feature=+atomics,+bulk-memory,+mutable-globals" rustup run $(RS_CHECK_TOOLCHAIN) \
wasm-pack build --release --target=web \
-- --features=boolean-client-js-wasm-api,shortint-client-js-wasm-api,integer-client-js-wasm-api,parallel-wasm-api \
-Z build-std=panic_abort,std
.PHONY: build_node_js_api # Build the js API targeting nodejs
build_node_js_api: install_rs_build_toolchain install_wasm_pack
cd tfhe && \
RUSTFLAGS="$(WASM_RUSTFLAGS)" rustup run "$(RS_BUILD_TOOLCHAIN)" \
wasm-pack build --release --target=nodejs \
-- --features=boolean-client-js-wasm-api,shortint-client-js-wasm-api,integer-client-js-wasm-api
.PHONY: test_core_crypto # Run the tests of the core_crypto module including experimental ones
test_core_crypto: install_rs_build_toolchain install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),experimental -p tfhe -- core_crypto::
@if [[ "$(AVX512_SUPPORT)" == "ON" ]]; then \
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),experimental,$(AVX512_FEATURE) -p tfhe -- core_crypto::; \
fi
.PHONY: test_boolean # Run the tests of the boolean module
test_boolean: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean -p tfhe -- boolean::
.PHONY: test_boolean_cuda # Run the tests of the boolean module with cuda enabled
test_boolean_cuda: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --release \
--features=$(TARGET_ARCH_FEATURE),boolean,cuda -p tfhe -- boolean::
.PHONY: test_c_api_rs # Run the rust tests for the C API
test_c_api_rs: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean-c-api,shortint-c-api,high-level-c-api \
-p tfhe \
c_api
.PHONY: test_c_api_c # Run the C tests for the C API
test_c_api_c: build_c_api
./scripts/c_api_tests.sh
.PHONY: test_c_api # Run all the tests for the C API
test_c_api: test_c_api_rs test_c_api_c
.PHONY: test_shortint_ci # Run the tests for shortint ci
test_shortint_ci: install_rs_build_toolchain install_cargo_nextest
BIG_TESTS_INSTANCE="$(BIG_TESTS_INSTANCE)" \
FAST_TESTS="$(FAST_TESTS)" \
./scripts/shortint-tests.sh --rust-toolchain $(CARGO_RS_BUILD_TOOLCHAIN) \
--cargo-profile "$(CARGO_PROFILE)"
.PHONY: test_shortint_multi_bit_ci # Run the tests for shortint ci running only multibit tests
test_shortint_multi_bit_ci: install_rs_build_toolchain install_cargo_nextest
BIG_TESTS_INSTANCE="$(BIG_TESTS_INSTANCE)" \
FAST_TESTS="$(FAST_TESTS)" \
./scripts/shortint-tests.sh --rust-toolchain $(CARGO_RS_BUILD_TOOLCHAIN) \
--cargo-profile "$(CARGO_PROFILE)" --multi-bit
.PHONY: test_shortint # Run all the tests for shortint
test_shortint: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache -p tfhe -- shortint::
.PHONY: test_integer_ci # Run the tests for integer ci
test_integer_ci: install_rs_build_toolchain install_cargo_nextest
BIG_TESTS_INSTANCE="$(BIG_TESTS_INSTANCE)" \
FAST_TESTS="$(FAST_TESTS)" \
./scripts/integer-tests.sh --rust-toolchain $(CARGO_RS_BUILD_TOOLCHAIN) \
--cargo-profile "$(CARGO_PROFILE)"
.PHONY: test_integer_multi_bit_ci # Run the tests for integer ci running only multibit tests
test_integer_multi_bit_ci: install_rs_build_toolchain install_cargo_nextest
BIG_TESTS_INSTANCE="$(BIG_TESTS_INSTANCE)" \
FAST_TESTS="$(FAST_TESTS)" \
./scripts/integer-tests.sh --rust-toolchain $(CARGO_RS_BUILD_TOOLCHAIN) \
--cargo-profile "$(CARGO_PROFILE)" --multi-bit
.PHONY: test_integer # Run all the tests for integer
test_integer: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),integer,internal-keycache -p tfhe -- integer::
.PHONY: test_high_level_api # Run all the tests for high_level_api
test_high_level_api: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer,internal-keycache -p tfhe \
-- high_level_api::
.PHONY: test_user_doc # Run tests from the .md documentation
test_user_doc: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) --doc \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer,internal-keycache -p tfhe \
-- test_user_docs::
.PHONY: test_regex_engine # Run tests for regex_engine example
test_regex_engine: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--example regex_engine \
--features=$(TARGET_ARCH_FEATURE),integer
.PHONY: test_sha256_bool # Run tests for sha256_bool example
test_sha256_bool: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
--example sha256_bool \
--features=$(TARGET_ARCH_FEATURE),boolean
.PHONY: test_examples # Run tests for examples
test_examples: test_sha256_bool test_regex_engine
.PHONY: test_trivium # Run tests for trivium
test_trivium: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
trivium --features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer \
-- --test-threads=1
.PHONY: test_kreyvium # Run tests for kreyvium
test_kreyvium: install_rs_build_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --profile $(CARGO_PROFILE) \
kreyvium --features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer \
-- --test-threads=1
.PHONY: doc # Build rust doc
doc: install_rs_check_toolchain
RUSTDOCFLAGS="--html-in-header katex-header.html -Dwarnings" \
cargo "$(CARGO_RS_CHECK_TOOLCHAIN)" doc \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,integer --no-deps
.PHONY: docs # Build rust doc alias for doc
docs: doc
.PHONY: format_doc_latex # Format the documentation latex equations to avoid broken rendering.
format_doc_latex:
cargo xtask format_latex_doc
@"$(MAKE)" --no-print-directory fmt
@printf "\n===============================\n\n"
@printf "Please manually inspect changes made by format_latex_doc, rustfmt can break equations \
if the line length is exceeded\n"
@printf "\n===============================\n"
.PHONY: check_compile_tests # Build tests in debug without running them
check_compile_tests:
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_BUILD_TOOLCHAIN) test --no-run \
--features=$(TARGET_ARCH_FEATURE),experimental,boolean,shortint,integer,internal-keycache \
-p tfhe
@if [[ "$(OS)" == "Linux" || "$(OS)" == "Darwin" ]]; then \
"$(MAKE)" build_c_api; \
./scripts/c_api_tests.sh --build-only; \
fi
.PHONY: build_nodejs_test_docker # Build a docker image with tools to run nodejs tests for wasm API
build_nodejs_test_docker:
DOCKER_BUILDKIT=1 docker build --build-arg RUST_TOOLCHAIN="$(RS_BUILD_TOOLCHAIN)" \
-f docker/Dockerfile.wasm_tests -t tfhe-wasm-tests .
.PHONY: test_nodejs_wasm_api_in_docker # Run tests for the nodejs on wasm API in a docker container
test_nodejs_wasm_api_in_docker: build_nodejs_test_docker
if [[ -t 1 ]]; then RUN_FLAGS="-it"; else RUN_FLAGS="-i"; fi && \
docker run --rm "$${RUN_FLAGS}" \
-v "$$(pwd)":/tfhe-wasm-tests/tfhe-rs \
-v tfhe-rs-root-target-cache:/root/tfhe-rs-target \
-v tfhe-rs-pkg-cache:/tfhe-wasm-tests/tfhe-rs/tfhe/pkg \
-v tfhe-rs-root-cargo-registry-cache:/root/.cargo/registry \
-v tfhe-rs-root-cache:/root/.cache \
tfhe-wasm-tests /bin/bash -i -c 'make test_nodejs_wasm_api'
.PHONY: test_nodejs_wasm_api # Run tests for the nodejs on wasm API
test_nodejs_wasm_api: build_node_js_api
cd tfhe && node --test js_on_wasm_tests
.PHONY: test_web_js_api_parallel # Run tests for the web wasm api
test_web_js_api_parallel: build_web_js_api_parallel
$(MAKE) -C tfhe/web_wasm_parallel_tests test
.PHONY: ci_test_web_js_api_parallel # Run tests for the web wasm api
ci_test_web_js_api_parallel: build_web_js_api_parallel
source ~/.nvm/nvm.sh && \
nvm use node && \
$(MAKE) -C tfhe/web_wasm_parallel_tests test-ci
.PHONY: no_tfhe_typo # Check we did not invert the h and f in tfhe
no_tfhe_typo:
@./scripts/no_tfhe_typo.sh
.PHONY: no_dbg_log # Check we did not leave dbg macro calls in the rust code
no_dbg_log:
@./scripts/no_dbg_calls.sh
#
# Benchmarks
#
.PHONY: bench_integer # Run benchmarks for integer
bench_integer: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" __TFHE_RS_BENCH_OP_FLAVOR=$(BENCH_OP_FLAVOR) \
cargo $(CARGO_RS_CHECK_TOOLCHAIN) bench \
--bench integer-bench \
--features=$(TARGET_ARCH_FEATURE),integer,internal-keycache,$(AVX512_FEATURE) -p tfhe --
.PHONY: bench_integer_multi_bit # Run benchmarks for integer using multi-bit parameters
bench_integer_multi_bit: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" __TFHE_RS_BENCH_TYPE=MULTI_BIT __TFHE_RS_BENCH_OP_FLAVOR=$(BENCH_OP_FLAVOR) \
cargo $(CARGO_RS_CHECK_TOOLCHAIN) bench \
--bench integer-bench \
--features=$(TARGET_ARCH_FEATURE),integer,internal-keycache,$(AVX512_FEATURE) -p tfhe --
.PHONY: bench_shortint # Run benchmarks for shortint
bench_shortint: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" __TFHE_RS_BENCH_OP_FLAVOR=$(BENCH_OP_FLAVOR) \
cargo $(CARGO_RS_CHECK_TOOLCHAIN) bench \
--bench shortint-bench \
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache,$(AVX512_FEATURE) -p tfhe
.PHONY: bench_boolean # Run benchmarks for boolean
bench_boolean: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) bench \
--bench boolean-bench \
--features=$(TARGET_ARCH_FEATURE),boolean,internal-keycache,$(AVX512_FEATURE) -p tfhe
.PHONY: bench_pbs # Run benchmarks for PBS
bench_pbs: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) bench \
--bench pbs-bench \
--features=$(TARGET_ARCH_FEATURE),boolean,shortint,internal-keycache,$(AVX512_FEATURE) -p tfhe
.PHONY: bench_web_js_api_parallel # Run benchmarks for the web wasm api
bench_web_js_api_parallel: build_web_js_api_parallel
$(MAKE) -C tfhe/web_wasm_parallel_tests bench
.PHONY: ci_bench_web_js_api_parallel # Run benchmarks for the web wasm api
ci_bench_web_js_api_parallel: build_web_js_api_parallel
source ~/.nvm/nvm.sh && \
nvm use node && \
$(MAKE) -C tfhe/web_wasm_parallel_tests bench-ci
#
# Utility tools
#
.PHONY: measure_hlapi_compact_pk_ct_sizes # Measure sizes of public keys and ciphertext for high-level API
measure_hlapi_compact_pk_ct_sizes: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example hlapi_compact_pk_ct_sizes \
--features=$(TARGET_ARCH_FEATURE),integer,internal-keycache
.PHONY: measure_shortint_key_sizes # Measure sizes of bootstrapping and key switching keys for shortint
measure_shortint_key_sizes: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example shortint_key_sizes \
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache
.PHONY: measure_boolean_key_sizes # Measure sizes of bootstrapping and key switching keys for boolean
measure_boolean_key_sizes: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example boolean_key_sizes \
--features=$(TARGET_ARCH_FEATURE),boolean,internal-keycache
.PHONY: parse_integer_benches # Run python parser to output a csv containing integer benches data
parse_integer_benches:
python3 ./ci/parse_integer_benches_to_csv.py \
--criterion-dir target/criterion \
--output-file "$(PARSE_INTEGER_BENCH_CSV_FILE)"
.PHONY: parse_wasm_benchmarks # Parse benchmarks performed with WASM web client into a CSV file
parse_wasm_benchmarks: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example wasm_benchmarks_parser \
--features=$(TARGET_ARCH_FEATURE),shortint,internal-keycache \
-- web_wasm_parallel_tests/test/benchmark_results
#
# Real use case examples
#
.PHONY: regex_engine # Run regex_engine example
regex_engine: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example regex_engine \
--features=$(TARGET_ARCH_FEATURE),integer \
-- $(REGEX_STRING) $(REGEX_PATTERN)
.PHONY: dark_market # Run dark market example
dark_market: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example dark_market \
--features=$(TARGET_ARCH_FEATURE),integer,internal-keycache \
-- fhe-modified fhe-parallel plain fhe
.PHONY: sha256_bool # Run sha256_bool example
sha256_bool: install_rs_check_toolchain
RUSTFLAGS="$(RUSTFLAGS)" cargo $(CARGO_RS_CHECK_TOOLCHAIN) run --profile $(CARGO_PROFILE) \
--example sha256_bool \
--features=$(TARGET_ARCH_FEATURE),boolean
.PHONY: pcc # pcc stands for pre commit checks
pcc: no_tfhe_typo no_dbg_log check_fmt doc clippy_all check_compile_tests
.PHONY: fpcc # pcc stands for pre commit checks, the f stands for fast
fpcc: no_tfhe_typo no_dbg_log check_fmt doc clippy_fast check_compile_tests
.PHONY: conformance # Automatically fix problems that can be fixed
conformance: fmt
.PHONY: help # Generate list of targets with descriptions
help:
@grep '^\.PHONY: .* #' Makefile | sed 's/\.PHONY: \(.*\) # \(.*\)/\1\t\2/' | expand -t30 | sort

README.md

@@ -1,31 +1,25 @@
<p align="center">
<!-- product name logo -->
<img width=600 src="https://user-images.githubusercontent.com/86411313/201107820-b1b861be-6b3f-46cc-bccd-ed051201781a.png">
<img width=600 src="https://user-images.githubusercontent.com/5758427/231206749-8f146b97-3c5a-4201-8388-3ffa88580415.png">
</p>
<hr/>
<p align="center">
<a href="https://docs.zama.ai/tfhe-rs"> 📒 Read documentation</a> | <a href="https://zama.ai/community"> 💛 Community support</a>
</p>
<p align="center">
<!-- Version badge using shields.io -->
<a href="https://github.com/zama-ai/tfhe-rs/releases">
<img src="https://img.shields.io/github/v/release/zama-ai/tfhe-rs?style=flat-square">
</a>
<!-- Link to docs badge using shields.io -->
<a href="https://docs.zama.ai/tfhe-rs">
<img src="https://img.shields.io/badge/read-documentation-yellow?style=flat-square">
</a>
<!-- Community forum badge using shields.io -->
<a href="https://community.zama.ai">
<img src="https://img.shields.io/badge/community%20forum-online-brightgreen?style=flat-square">
</a>
<!-- Open source badge using shields.io -->
<a href="https://docs.zama.ai/tfhe-rs/developers/contributing">
<img src="https://img.shields.io/badge/we're%20open%20source-contributing.md-blue?style=flat-square">
</a>
<!-- Follow on twitter badge using shields.io -->
<a href="https://twitter.com/zama_fhe">
<img src="https://img.shields.io/twitter/follow/zama_fhe?color=blue&style=flat-square">
<!-- Zama Bounty Program -->
<a href="https://github.com/zama-ai/bounty-program">
<img src="https://img.shields.io/badge/Contribute-Zama%20Bounty%20Program-yellow?style=flat-square">
</a>
</p>
<hr/>
**TFHE-rs** is a pure Rust implementation of TFHE for boolean and integer
arithmetics over encrypted data. It includes:
- a **Rust** API
- a **C** API
@@ -33,75 +27,107 @@ arithmetics over encrypted data. It includes:
**TFHE-rs** is meant for developers and researchers who want full control over
what they can do with TFHE, while not having to worry about the low level
implementation. The goal is to have a stable, simple, high-performance, and
production-ready library for all the advanced features of TFHE.
## Getting Started
The steps to run a first example are described below.
### Cargo.toml configuration
To use the latest version of `TFHE-rs` in your project, you first need to add it as a dependency in your `Cargo.toml`:
+ For x86_64-based machines running Unix-like OSes:
```toml
tfhe = { version = "*", features = ["boolean", "shortint", "integer", "x86_64-unix"] }
```
+ For Apple Silicon or aarch64-based machines running Unix-like OSes:
```toml
tfhe = { version = "*", features = ["boolean", "shortint", "integer", "aarch64-unix"] }
```
Note: users with ARM devices must compile `TFHE-rs` with the `nightly` toolchain.
+ For x86_64-based machines with the [`rdseed instruction`](https://en.wikipedia.org/wiki/RDRAND) running Windows:
```toml
tfhe = { version = "*", features = ["boolean", "shortint", "integer", "x86_64"] }
```
Note: aarch64-based machines are not yet supported for Windows, as an entropy source to seed the [CSPRNGs](https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator) used in TFHE-rs is currently missing on that platform.
## A simple example
Here is a full example:
``` rust
use tfhe::prelude::*;
use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint32, FheUint8};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Basic configuration to use homomorphic integers
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
// Key generation
let (client_key, server_keys) = generate_keys(config);
let clear_a = 1344u32;
let clear_b = 5u32;
let clear_c = 7u8;
// Encrypting the input data using the (private) client_key
// FheUint32: Encrypted equivalent to u32
let mut encrypted_a = FheUint32::try_encrypt(clear_a, &client_key)?;
let encrypted_b = FheUint32::try_encrypt(clear_b, &client_key)?;
// FheUint8: Encrypted equivalent to u8
let encrypted_c = FheUint8::try_encrypt(clear_c, &client_key)?;
// On the server side:
set_server_key(server_keys);
// Clear equivalent computations: 1344 * 5 = 6720
let encrypted_res_mul = &encrypted_a * &encrypted_b;
// Clear equivalent computations: 6720 >> 5 = 210
encrypted_a = &encrypted_res_mul >> &encrypted_b;
// Clear equivalent computations: let casted_a = 210 as u8;
let casted_a: FheUint8 = encrypted_a.cast_into();
// Clear equivalent computations: min(210, 7) = 7
let encrypted_res_min = &casted_a.min(&encrypted_c);
// Operation between clear and encrypted data:
// Clear equivalent computations: 7 & 1 = 1
let encrypted_res = encrypted_res_min & 1_u8;
// Decrypting on the client side:
let clear_res: u8 = encrypted_res.decrypt(&client_key);
assert_eq!(clear_res, 1_u8);
Ok(())
}
```
To run this code, use the following command:
<p align="center"> <code> cargo run --release </code> </p>
Note that when running code that uses `tfhe-rs`, it is highly recommended to run in release mode with cargo's `--release` flag to get the best possible performance.
## Contributing
There are two ways to contribute to TFHE-rs:
- you can open issues to report bugs or typos, or to suggest new ideas
- you can ask to become an official contributor by emailing [hello@zama.ai](mailto:hello@zama.ai).
(becoming an approved contributor involves signing our Contributor License Agreement (CLA))
@@ -112,6 +138,24 @@ Only approved contributors can send pull requests, so please make sure to get in
This library uses several dependencies and we would like to thank the contributors of those
libraries.
## Need support?
<a target="_blank" href="https://community.zama.ai">
<img src="https://user-images.githubusercontent.com/5758427/231115030-21195b55-2629-4c01-9809-be5059243999.png">
</a>
## Citing TFHE-rs
To cite TFHE-rs in academic papers, please use the following entry:
```text
@Misc{TFHE-rs,
title={{TFHE-rs: A Pure Rust Implementation of the TFHE Scheme for Boolean and Integer Arithmetics Over Encrypted Data}},
author={Zama},
year={2022},
note={\url{https://github.com/zama-ai/tfhe-rs}},
}
```
## License
This software is distributed under the BSD-3-Clause-Clear license. If you have any questions,

apps/trivium/Cargo.toml Normal file

@@ -0,0 +1,24 @@
[package]
name = "tfhe-trivium"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
rayon = { version = "1.7.0"}
[target.'cfg(target_arch = "x86_64")'.dependencies.tfhe]
path = "../../tfhe"
features = [ "boolean", "shortint", "integer", "x86_64" ]
[target.'cfg(target_arch = "aarch64")'.dependencies.tfhe]
path = "../../tfhe"
features = [ "boolean", "shortint", "integer", "aarch64-unix" ]
[dev-dependencies]
criterion = { version = "0.4", features = [ "html_reports" ]}
[[bench]]
name = "trivium"
harness = false

apps/trivium/README.md Normal file

@@ -0,0 +1,204 @@
# FHE boolean Trivium implementation using TFHE-rs
The cleartext boolean Trivium is available to be built using the function `TriviumStream::<bool>::new`.
This takes as input 2 arrays of 80 bool: the Trivium key and the IV. After initialization, it returns a TriviumStream on
which the user can call `next`, getting the next bit of the cipher stream, or `next_64`, which will compute 64 values at once,
using multithreading to accelerate the computation.
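Below is a minimal sketch of the cleartext variant; it assumes the constructor mirrors the Kreyvium implementation added in this same PR (80-bit key and IV passed by value as bool arrays, no server key), and the all-zero key/IV are placeholders for illustration only:
```rust
use tfhe_trivium::TriviumStream;

fn main() {
    // 80-bit key and IV given as bool arrays (all-zero placeholders, not real test vectors)
    let key = [false; 80];
    let iv = [false; 80];

    // Cleartext Trivium: no FHE key is involved at this level
    let mut trivium = TriviumStream::<bool>::new(key, iv);

    // One keystream bit at a time, or 64 bits at once (computed with multithreading)
    let _bit: bool = trivium.next();
    let _bits: Vec<bool> = trivium.next_64();
}
```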
Similarly, the function `TriviumStream::<FheBool>::new` returns an equivalent object running in FHE space. Its arguments are
2 arrays of 80 FheBool representing the encrypted Trivium key and the encrypted IV. It also requires a reference to the server key of the
current scheme. This means that any user of this feature must also have the `tfhe-rs` crate as a dependency.
An example of a Rust `main` using the FHE version is shown below:
```rust
use tfhe::{ConfigBuilder, generate_keys, FheBool};
use tfhe::prelude::*;
use tfhe_trivium::TriviumStream;
fn get_hexadecimal_string_from_lsb_first_stream(a: Vec<bool>) -> String {
assert!(a.len() % 8 == 0);
let mut hexadecimal: String = "".to_string();
for test in a.chunks(8) {
// Encoding is bytes in LSB order
match test[4..8] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => ()
};
match test[0..4] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => ()
};
}
return hexadecimal;
}
fn main() {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [false; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i+2], 16).unwrap();
for j in 0..8 {
key[8*(i>>1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [false; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i+2], 16).unwrap();
for j in 0..8 {
iv[8*(i>>1) + j] = val % 2 == 1;
val >>= 1;
}
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let cipher_iv = iv.map(|x| FheBool::encrypt(x, &client_key));
let mut trivium = TriviumStream::<FheBool>::new(cipher_key, cipher_iv, &server_key);
let mut vec = Vec::<bool>::with_capacity(64*8);
while vec.len() < 64*8 {
let cipher_outputs = trivium.next_64();
for c in cipher_outputs {
vec.push(c.decrypt(&client_key))
}
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64*2]);
}
```
# FHE byte Trivium implementation
The same objects have also been implemented to stream bytes instead of booleans. They can be constructed and used in the same way via the functions `TriviumStreamByte::<u8>::new` and
`TriviumStreamByte::<FheUint8>::new`, with the same arguments as before. The `FheUint8` version is significantly slower than the `FheBool` version, because it does not run
with the same cryptographic parameters. Its interest lies in its trans-ciphering capabilities: `TriviumStreamByte<FheUint8>` implements the trait `TransCiphering`,
meaning it implements the function `trans_encrypt_64`. This function takes as input a `FheUint64` and outputs a `FheUint64`, the output being
encrypted via tfhe and Trivium. For convenience we also provide `trans_decrypt_64`, but this is of course the exact same function.
Sizes other than 64 bits are expected to be available in the future.
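A minimal trans-ciphering sketch, adapted from the benchmark code added in this PR (the all-zero key, IV and message are placeholders for illustration only):
```rust
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheUint64, FheUint8};
use tfhe_trivium::{TransCiphering, TriviumStreamByte};

fn main() {
    // Integer parameters are required for FheUint8 / FheUint64
    let config = ConfigBuilder::all_disabled().enable_default_integers().build();
    let (client_key, server_key) = generate_keys(config);

    // 80-bit key and IV, given as 10 bytes each (all-zero placeholders)
    let key = [0u8; 10];
    let iv = [0u8; 10];

    // The key is encrypted byte by byte; the IV stays in the clear
    let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
    let mut trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);

    // Trans-encrypt a 64-bit encrypted message with the Trivium keystream
    let message = FheUint64::try_encrypt(0u64, &client_key).unwrap();
    let trans_ciphered = trivium.trans_encrypt_64(message);
    let _clear: u64 = trans_ciphered.decrypt(&client_key);
}
```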
# FHE shortint Trivium implementation
The same implementation is also available for generic Ciphertexts representing bits (meant to be used with the parameters `PARAM_MESSAGE_1_CARRY_1_KS_PBS`). It uses a lower-level API
of tfhe-rs, so the syntax is a little different. It also implements the `TransCiphering` trait. For optimization purposes, it does not internally run on the same
cryptographic parameters as the high-level API of tfhe-rs. As such, it requires a casting key to switch from one parameter space to another, which makes
its setup a little more intricate.
Example code:
```rust
use tfhe::shortint::prelude::*;
use tfhe::shortint::CastingKey;
use tfhe::{ConfigBuilder, generate_keys, FheUint64};
use tfhe::prelude::*;
use tfhe_trivium::TriviumStreamShortint;
fn test_shortint() {
let config = ConfigBuilder::all_disabled().enable_default_integers().build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = CastingKey::new((&client_key, &server_key), (&hl_client_key, &hl_server_key));
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i+2], 16).unwrap();
for j in 0..8 {
key[8*(i>>1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i+2], 16).unwrap();
for j in 0..8 {
iv[8*(i>>1) + j] = val % 2;
val >>= 1;
}
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| client_key.encrypt(x));
let cipher_iv = iv.map(|x| client_key.encrypt(x));
let mut ciphered_message = vec![FheUint64::try_encrypt(0u64, &hl_client_key).unwrap(); 9];
let mut trivium = TriviumStreamShortint::new(cipher_key, cipher_iv, &server_key, &ksk);
let mut vec = Vec::<u64>::with_capacity(8);
while vec.len() < 8 {
let trans_ciphered_message = trivium.trans_encrypt_64(ciphered_message.pop().unwrap(), &hl_server_key);
vec.push(trans_ciphered_message.decrypt(&hl_client_key));
}
let hexadecimal = get_hexagonal_string_from_u64(vec);
assert_eq!(output_0_63, hexadecimal[0..64*2]);
}
```
# FHE Kreyvium implementation using tfhe-rs crate
This works in exactly the same way as the Trivium implementation, except that the key and IV now need to be 128 bits. It is available for the same internal types as Trivium, with similar syntax (see the sketch below).
`KreyviumStreamByte<FheUint8>` and `KreyviumStreamShortint` also implement the `TransCiphering` trait.
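A minimal sketch of the boolean FHE variant, adapted from the Kreyvium benchmark code added in this PR (all-zero key and IV as placeholders, not real test vectors):
```rust
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheBool};
use tfhe_trivium::KreyviumStream;

fn main() {
    let config = ConfigBuilder::all_disabled().enable_default_bool().build();
    let (client_key, server_key) = generate_keys(config);

    // 128-bit key and IV (all-zero placeholders)
    let key = [false; 128];
    let iv = [false; 128];

    // The key is encrypted bit by bit; the IV stays in the clear
    let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
    let mut kreyvium = KreyviumStream::<FheBool>::new(cipher_key, iv, &server_key);

    // Get the next 64 keystream bits, computed in parallel
    let bits = kreyvium.next_64();
    let _first: bool = bits[0].decrypt(&client_key);
}
```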
# Testing
If you wish to run the tests of this app, please run `cargo test -r trivium -- --test-threads=1`, as multithreading causes interference between several Trivium instances running at the same time.


@@ -0,0 +1,75 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheBool};
use tfhe_trivium::KreyviumStream;
use criterion::Criterion;
pub fn kreyvium_bool_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [false; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [false; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let mut kreyvium = KreyviumStream::<FheBool>::new(cipher_key, iv, &server_key);
c.bench_function("kreyvium bool generate 64 bits", |b| {
b.iter(|| kreyvium.next_64())
});
}
pub fn kreyvium_bool_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [false; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [false; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
c.bench_function("kreyvium bool warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let _kreyvium = KreyviumStream::<FheBool>::new(cipher_key, iv, &server_key);
})
});
}


@@ -0,0 +1,96 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheUint64, FheUint8};
use tfhe_trivium::{KreyviumStreamByte, TransCiphering};
use criterion::Criterion;
pub fn kreyvium_byte_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.enable_function_evaluation_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let mut kreyvium = KreyviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
c.bench_function("kreyvium byte generate 64 bits", |b| {
b.iter(|| kreyvium.next_64())
});
}
pub fn kreyvium_byte_trans(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.enable_function_evaluation_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let ciphered_message = FheUint64::try_encrypt(0u64, &client_key).unwrap();
let mut kreyvium = KreyviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
c.bench_function("kreyvium byte transencrypt 64 bits", |b| {
b.iter(|| kreyvium.trans_encrypt_64(ciphered_message.clone()))
});
}
pub fn kreyvium_byte_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.enable_function_evaluation_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
c.bench_function("kreyvium byte warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let _kreyvium = KreyviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
})
});
}


@@ -0,0 +1,155 @@
use tfhe::prelude::*;
use tfhe::shortint::prelude::*;
use tfhe::shortint::KeySwitchingKey;
use tfhe::{generate_keys, ConfigBuilder, FheUint64};
use tfhe_trivium::{KreyviumStreamShortint, TransCiphering};
use criterion::Criterion;
pub fn kreyvium_shortint_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
c.bench_function("kreyvium 1_1 warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| client_key.encrypt(x));
let _kreyvium = KreyviumStreamShortint::new(
cipher_key,
iv,
server_key.clone(),
ksk.clone(),
hl_server_key.clone(),
);
})
});
}
pub fn kreyvium_shortint_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let cipher_key = key.map(|x| client_key.encrypt(x));
let mut kreyvium = KreyviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
c.bench_function("kreyvium 1_1 generate 64 bits", |b| {
b.iter(|| kreyvium.next_64())
});
}
pub fn kreyvium_shortint_trans(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let cipher_key = key.map(|x| client_key.encrypt(x));
let ciphered_message = FheUint64::try_encrypt(0u64, &hl_client_key).unwrap();
let mut kreyvium = KreyviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
c.bench_function("kreyvium 1_1 transencrypt 64 bits", |b| {
b.iter(|| kreyvium.trans_encrypt_64(ciphered_message.clone()))
});
}


@@ -0,0 +1,53 @@
use criterion::{criterion_group, criterion_main};
mod trivium_bool;
criterion_group!(
trivium_bool,
trivium_bool::trivium_bool_gen,
trivium_bool::trivium_bool_warmup
);
mod kreyvium_bool;
criterion_group!(
kreyvium_bool,
kreyvium_bool::kreyvium_bool_gen,
kreyvium_bool::kreyvium_bool_warmup
);
mod trivium_shortint;
criterion_group!(
trivium_shortint,
trivium_shortint::trivium_shortint_gen,
trivium_shortint::trivium_shortint_warmup,
trivium_shortint::trivium_shortint_trans
);
mod kreyvium_shortint;
criterion_group!(
kreyvium_shortint,
kreyvium_shortint::kreyvium_shortint_gen,
kreyvium_shortint::kreyvium_shortint_warmup,
kreyvium_shortint::kreyvium_shortint_trans
);
mod trivium_byte;
criterion_group!(
trivium_byte,
trivium_byte::trivium_byte_gen,
trivium_byte::trivium_byte_trans,
trivium_byte::trivium_byte_warmup
);
mod kreyvium_byte;
criterion_group!(
kreyvium_byte,
kreyvium_byte::kreyvium_byte_gen,
kreyvium_byte::kreyvium_byte_trans,
kreyvium_byte::kreyvium_byte_warmup
);
criterion_main!(
trivium_bool,
trivium_shortint,
trivium_byte,
kreyvium_bool,
kreyvium_shortint,
kreyvium_byte,
);


@@ -0,0 +1,75 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheBool};
use tfhe_trivium::TriviumStream;
use criterion::Criterion;
pub fn trivium_bool_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [false; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [false; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let mut trivium = TriviumStream::<FheBool>::new(cipher_key, iv, &server_key);
c.bench_function("trivium bool generate 64 bits", |b| {
b.iter(|| trivium.next_64())
});
}
pub fn trivium_bool_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [false; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [false; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
c.bench_function("trivium bool warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let _trivium = TriviumStream::<FheBool>::new(cipher_key, iv, &server_key);
})
});
}


@@ -0,0 +1,93 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheUint64, FheUint8};
use tfhe_trivium::{TransCiphering, TriviumStreamByte};
use criterion::Criterion;
pub fn trivium_byte_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let mut trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
c.bench_function("trivium byte generate 64 bits", |b| {
b.iter(|| trivium.next_64())
});
}
pub fn trivium_byte_trans(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let ciphered_message = FheUint64::try_encrypt(0u64, &client_key).unwrap();
let mut trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
c.bench_function("trivium byte transencrypt 64 bits", |b| {
b.iter(|| trivium.trans_encrypt_64(ciphered_message.clone()))
});
}
pub fn trivium_byte_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
c.bench_function("trivium byte warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let _trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
})
});
}


@@ -0,0 +1,155 @@
use tfhe::prelude::*;
use tfhe::shortint::prelude::*;
use tfhe::shortint::KeySwitchingKey;
use tfhe::{generate_keys, ConfigBuilder, FheUint64};
use tfhe_trivium::{TransCiphering, TriviumStreamShortint};
use criterion::Criterion;
pub fn trivium_shortint_warmup(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
c.bench_function("trivium 1_1 warmup", |b| {
b.iter(|| {
let cipher_key = key.map(|x| client_key.encrypt(x));
let _trivium = TriviumStreamShortint::new(
cipher_key,
iv,
server_key.clone(),
ksk.clone(),
hl_server_key.clone(),
);
})
});
}
pub fn trivium_shortint_gen(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let cipher_key = key.map(|x| client_key.encrypt(x));
let mut trivium = TriviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
c.bench_function("trivium 1_1 generate 64 bits", |b| {
b.iter(|| trivium.next_64())
});
}
pub fn trivium_shortint_trans(c: &mut Criterion) {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let cipher_key = key.map(|x| client_key.encrypt(x));
let ciphered_message = FheUint64::try_encrypt(0u64, &hl_client_key).unwrap();
let mut trivium = TriviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
c.bench_function("trivium 1_1 transencrypt 64 bits", |b| {
b.iter(|| trivium.trans_encrypt_64(ciphered_message.clone()))
});
}


@@ -0,0 +1,257 @@
//! This module implements the Kreyvium stream cipher, using booleans or FheBool
//! for the representation of the inner bits.
use crate::static_deque::StaticDeque;
use tfhe::prelude::*;
use tfhe::{set_server_key, unset_server_key, FheBool, ServerKey};
use rayon::prelude::*;
/// Internal trait specifying which operations are necessary for KreyviumStream generic type
pub trait KreyviumBoolInput<OpOutput>:
Sized
+ Clone
+ std::ops::BitXor<Output = OpOutput>
+ std::ops::BitAnd<Output = OpOutput>
+ std::ops::Not<Output = OpOutput>
{
}
impl KreyviumBoolInput<bool> for bool {}
impl KreyviumBoolInput<bool> for &bool {}
impl KreyviumBoolInput<FheBool> for FheBool {}
impl KreyviumBoolInput<FheBool> for &FheBool {}
/// KreyviumStream: a struct implementing the Kreyvium stream cipher, using T for the internal
/// representation of bits (bool or FheBool). To be able to compute FHE operations, it also owns
/// an Option for a ServerKey.
pub struct KreyviumStream<T> {
a: StaticDeque<93, T>,
b: StaticDeque<84, T>,
c: StaticDeque<111, T>,
k: StaticDeque<128, T>,
iv: StaticDeque<128, T>,
fhe_key: Option<ServerKey>,
}
impl KreyviumStream<bool> {
/// Constructor for `KreyviumStream<bool>`: arguments are the secret key and the input vector.
/// Outputs a KreyviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(mut key: [bool; 128], mut iv: [bool; 128]) -> KreyviumStream<bool> {
// Initialization of Kreyvium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_register = [false; 93];
let mut b_register = [false; 84];
let mut c_register = [false; 111];
for i in 0..93 {
a_register[i] = key[128 - 93 + i];
}
for i in 0..84 {
b_register[i] = iv[128 - 84 + i];
}
for i in 0..44 {
c_register[111 - 44 + i] = iv[i];
}
for i in 0..66 {
c_register[i + 1] = true;
}
key.reverse();
iv.reverse();
KreyviumStream::<bool>::new_from_registers(
a_register, b_register, c_register, key, iv, None,
)
}
}
impl KreyviumStream<FheBool> {
/// Constructor for `KreyviumStream<FheBool>`: arguments are the encrypted secret key and input
/// vector, and the FHE server key.
/// Outputs a KreyviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(
mut key: [FheBool; 128],
mut iv: [bool; 128],
sk: &ServerKey,
) -> KreyviumStream<FheBool> {
set_server_key(sk.clone());
// Initialization of Kreyvium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_register = [false; 93].map(|x| FheBool::encrypt_trivial(x));
let mut b_register = [false; 84].map(|x| FheBool::encrypt_trivial(x));
let mut c_register = [false; 111].map(|x| FheBool::encrypt_trivial(x));
for i in 0..93 {
a_register[i] = key[128 - 93 + i].clone();
}
for i in 0..84 {
b_register[i] = FheBool::encrypt_trivial(iv[128 - 84 + i]);
}
for i in 0..44 {
c_register[111 - 44 + i] = FheBool::encrypt_trivial(iv[i]);
}
for i in 0..66 {
c_register[i + 1] = FheBool::encrypt_trivial(true);
}
key.reverse();
iv.reverse();
let iv = iv.map(|x| FheBool::encrypt_trivial(x));
unset_server_key();
KreyviumStream::<FheBool>::new_from_registers(
a_register,
b_register,
c_register,
key,
iv,
Some(sk.clone()),
)
}
}
impl<T> KreyviumStream<T>
where
T: KreyviumBoolInput<T> + std::marker::Send + std::marker::Sync,
for<'a> &'a T: KreyviumBoolInput<T>,
{
/// Internal generic constructor: arguments are already prepared registers, and an optional FHE
/// server key
fn new_from_registers(
a_register: [T; 93],
b_register: [T; 84],
c_register: [T; 111],
k_register: [T; 128],
iv_register: [T; 128],
key: Option<ServerKey>,
) -> Self {
let mut ret = Self {
a: StaticDeque::<93, T>::new(a_register),
b: StaticDeque::<84, T>::new(b_register),
c: StaticDeque::<111, T>::new(c_register),
k: StaticDeque::<128, T>::new(k_register),
iv: StaticDeque::<128, T>::new(iv_register),
fhe_key: key,
};
ret.init();
ret
}
/// The specification of Kreyvium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes one turn of the stream, updating registers and outputting the new bit.
pub fn next(&mut self) -> T {
match &self.fhe_key {
Some(sk) => set_server_key(sk.clone()),
None => (),
};
let [o, a, b, c] = self.get_output_and_values(0);
self.a.push(a);
self.b.push(b);
self.c.push(c);
self.k.shift();
self.iv.shift();
o
}
/// Computes a potential future step of Kreyvium, n terms in the future. This does not update
/// the registers, but rather returns, along with the output, the three values that will later
/// be used to update the registers when the time is right. This function is meant to be called
/// in parallel.
fn get_output_and_values(&self, n: usize) -> [T; 4] {
assert!(n < 65);
let (((temp_a, temp_b), (temp_c, a_and)), (b_and, c_and)) = rayon::join(
|| {
rayon::join(
|| {
rayon::join(
|| &self.a[65 - n] ^ &self.a[92 - n],
|| &self.b[68 - n] ^ &self.b[83 - n],
)
},
|| {
rayon::join(
|| &(&self.c[65 - n] ^ &self.c[110 - n]) ^ &self.k[127 - n],
|| &(&self.a[91 - n] & &self.a[90 - n]) ^ &self.iv[127 - n],
)
},
)
},
|| {
rayon::join(
|| &self.b[82 - n] & &self.b[81 - n],
|| &self.c[109 - n] & &self.c[108 - n],
)
},
);
let ((o, a), (b, c)) = rayon::join(
|| {
rayon::join(
|| &(&temp_a ^ &temp_b) ^ &temp_c,
|| &temp_c ^ &(&c_and ^ &self.a[68 - n]),
)
},
|| {
rayon::join(
|| &temp_a ^ &(&a_and ^ &self.b[77 - n]),
|| &temp_b ^ &(&b_and ^ &self.c[86 - n]),
)
},
);
[o, a, b, c]
}
/// This calls `get_output_and_values` in parallel 64 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[T; 4]> {
(0..64)
.into_par_iter()
.map(|x| self.get_output_and_values(x))
.rev()
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<T> {
match &self.fhe_key {
Some(sk) => {
rayon::broadcast(|_| set_server_key(sk.clone()));
}
None => (),
}
let mut values = self.get_64_output_and_values();
match &self.fhe_key {
Some(_) => {
rayon::broadcast(|_| unset_server_key());
}
None => (),
}
let mut ret = Vec::<T>::with_capacity(64);
while let Some([o, a, b, c]) = values.pop() {
ret.push(o);
self.a.push(a);
self.b.push(b);
self.c.push(c);
}
self.k.n_shifts(64);
self.iv.n_shifts(64);
ret
}
}
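As a quick orientation for the API above, here is a minimal clear-data sketch (no FHE involved), mirroring the flow used by the tests further down; the crate path `tfhe_trivium` is assumed here and should be adjusted to wherever this module is actually exposed:

use tfhe_trivium::KreyviumStream; // crate path assumed

fn main() {
    // Placeholder all-zero 128-bit key and IV.
    let key = [false; 128];
    let iv = [false; 128];

    // The constructor already runs the 1152 warm-up steps described above.
    let mut kreyvium = KreyviumStream::<bool>::new(key, iv);

    // Keystream bits can be drawn one at a time...
    let first_bit = kreyvium.next();

    // ...or 64 at a time (oldest bit first, newest last).
    let more_bits = kreyvium.next_64();
    assert_eq!(more_bits.len(), 64);
    println!("first keystream bit: {first_bit}");
}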


@@ -0,0 +1,297 @@
//! This module implements the Kreyvium stream cipher, using u8 or FheUint8
//! for the representation of the inner bits.
use crate::static_deque::{StaticByteDeque, StaticByteDequeInput};
use tfhe::prelude::*;
use tfhe::{set_server_key, unset_server_key, FheUint8, ServerKey};
use rayon::prelude::*;
/// Internal trait specifying which operations are necessary for KreyviumStreamByte generic type
pub trait KreyviumByteInput<OpOutput>:
Sized
+ Send
+ Sync
+ Clone
+ StaticByteDequeInput<OpOutput>
+ std::ops::BitXor<Output = OpOutput>
+ std::ops::BitAnd<Output = OpOutput>
+ std::ops::Shr<u8, Output = OpOutput>
+ std::ops::Shl<u8, Output = OpOutput>
+ std::ops::Add<Output = OpOutput>
{
}
impl KreyviumByteInput<u8> for u8 {}
impl KreyviumByteInput<u8> for &u8 {}
impl KreyviumByteInput<FheUint8> for FheUint8 {}
impl KreyviumByteInput<FheUint8> for &FheUint8 {}
/// KreyviumStreamByte: a struct implementing the Kreyvium stream cipher, using T for the internal
/// representation of bits (u8 or FheUint8). To be able to compute FHE operations, it also owns
/// an Option for a ServerKey.
/// Since the original Kreyvium registers' sizes are not a multiple of 8, these registers (which
/// store byte-like objects) have a size that is one eighth of the closest multiple of 8 above the
/// originals' sizes.
pub struct KreyviumStreamByte<T> {
a_byte: StaticByteDeque<12, T>,
b_byte: StaticByteDeque<11, T>,
c_byte: StaticByteDeque<14, T>,
k_byte: StaticByteDeque<16, T>,
iv_byte: StaticByteDeque<16, T>,
fhe_key: Option<ServerKey>,
}
impl KreyviumStreamByte<u8> {
/// Constructor for `KreyviumStreamByte<u8>`: arguments are the secret key and the input vector.
/// Outputs a KreyviumStreamByte object already initialized (1152 steps have been run before
/// returning)
pub fn new(key_bytes: [u8; 16], iv_bytes: [u8; 16]) -> KreyviumStreamByte<u8> {
// Initialization of Kreyvium registers: a gets part of the secret key, b part of
// the IV, and c the rest of the IV followed by ones.
let mut a_byte_reg = [0u8; 12];
let mut b_byte_reg = [0u8; 11];
let mut c_byte_reg = [0u8; 14];
// Copy key bits into a register
for b in 0..12 {
a_byte_reg[b] = key_bytes[b + 4];
}
// Copy iv bits into a register
for b in 0..11 {
b_byte_reg[b] = iv_bytes[b + 5];
}
// Copy a lot of ones in the c register
c_byte_reg[0] = 252;
for b in 1..8 {
c_byte_reg[b] = 255;
}
// Copy iv bits in the c register
c_byte_reg[8] = (iv_bytes[0] << 4) | 31;
for b in 9..14 {
c_byte_reg[b] = (iv_bytes[b - 9] >> 4) | (iv_bytes[b - 8] << 4);
}
// Key and iv are stored in reverse in their shift registers
let mut key = key_bytes.map(|b| b.reverse_bits());
let mut iv = iv_bytes.map(|b| b.reverse_bits());
key.reverse();
iv.reverse();
let mut ret = KreyviumStreamByte::<u8>::new_from_registers(
a_byte_reg, b_byte_reg, c_byte_reg, key, iv, None,
);
ret.init();
ret
}
}
impl KreyviumStreamByte<FheUint8> {
/// Constructor for `KreyviumStreamByte<FheUint8>`: arguments are the encrypted secret key and
/// input vector, and the FHE server key.
/// Outputs a KreyviumStreamByte object already initialized (1152 steps have been run before
/// returning)
pub fn new(
key_bytes: [FheUint8; 16],
iv_bytes: [u8; 16],
server_key: &ServerKey,
) -> KreyviumStreamByte<FheUint8> {
set_server_key(server_key.clone());
// Initialization of Kreyvium registers: a gets part of the secret key, b part of
// the IV, and c the rest of the IV followed by ones.
let mut a_byte_reg = [0u8; 12].map(|x| FheUint8::encrypt_trivial(x));
let mut b_byte_reg = [0u8; 11].map(|x| FheUint8::encrypt_trivial(x));
let mut c_byte_reg = [0u8; 14].map(|x| FheUint8::encrypt_trivial(x));
// Copy key bits into a register
for b in 0..12 {
a_byte_reg[b] = key_bytes[b + 4].clone();
}
// Copy iv bits into a register
for b in 0..11 {
b_byte_reg[b] = FheUint8::encrypt_trivial(iv_bytes[b + 5]);
}
// Copy a lot of ones in the c register
c_byte_reg[0] = FheUint8::encrypt_trivial(252u8);
for b in 1..8 {
c_byte_reg[b] = FheUint8::encrypt_trivial(255u8);
}
// Copy iv bits in the c register
c_byte_reg[8] = FheUint8::encrypt_trivial((&iv_bytes[0] << 4u8) | 31u8);
for b in 9..14 {
c_byte_reg[b] =
FheUint8::encrypt_trivial((&iv_bytes[b - 9] >> 4u8) | (&iv_bytes[b - 8] << 4u8));
}
// Key and iv are stored in reverse in their shift registers
let mut key = key_bytes.map(|b| b.map(|x| (x as u8).reverse_bits() as u64));
let mut iv = iv_bytes.map(|x| FheUint8::encrypt_trivial(x.reverse_bits()));
key.reverse();
iv.reverse();
unset_server_key();
let mut ret = KreyviumStreamByte::<FheUint8>::new_from_registers(
a_byte_reg,
b_byte_reg,
c_byte_reg,
key,
iv,
Some(server_key.clone()),
);
ret.init();
ret
}
}
impl<T> KreyviumStreamByte<T>
where
T: KreyviumByteInput<T> + Send,
for<'a> &'a T: KreyviumByteInput<T>,
{
/// Internal generic constructor: arguments are already prepared registers, and an optional FHE
/// server key
fn new_from_registers(
a_register: [T; 12],
b_register: [T; 11],
c_register: [T; 14],
k_register: [T; 16],
iv_register: [T; 16],
sk: Option<ServerKey>,
) -> Self {
Self {
a_byte: StaticByteDeque::<12, T>::new(a_register),
b_byte: StaticByteDeque::<11, T>::new(b_register),
c_byte: StaticByteDeque::<14, T>::new(c_register),
k_byte: StaticByteDeque::<16, T>::new(k_register),
iv_byte: StaticByteDeque::<16, T>::new(iv_register),
fhe_key: sk,
}
}
/// The specification of Kreyvium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes 8 potential future steps of Kreyvium, b*8 terms in the future. This does not update
/// the registers, but rather returns, along with the output, the three values that will be used
/// to update the registers when the time is right. This function is meant to be used in
/// parallel.
fn get_output_and_values(&self, b: usize) -> [T; 4] {
let n = b * 8 + 7;
assert!(n < 65);
let (((k, iv), (a1, a2, a3, a4, a5)), ((b1, b2, b3, b4, b5), (c1, c2, c3, c4, c5))) =
rayon::join(
|| {
rayon::join(
|| (self.k_byte.byte(127 - n), self.iv_byte.byte(127 - n)),
|| Self::get_bytes(&self.a_byte, [91 - n, 90 - n, 68 - n, 65 - n, 92 - n]),
)
},
|| {
rayon::join(
|| Self::get_bytes(&self.b_byte, [82 - n, 81 - n, 77 - n, 68 - n, 83 - n]),
|| {
Self::get_bytes(
&self.c_byte,
[109 - n, 108 - n, 86 - n, 65 - n, 110 - n],
)
},
)
},
);
let (((temp_a, temp_b), (temp_c, a_and)), (b_and, c_and)) = rayon::join(
|| {
rayon::join(
|| rayon::join(|| a4 ^ a5, || b4 ^ b5),
|| rayon::join(|| c4 ^ c5 ^ k, || a1 & a2 ^ iv),
)
},
|| rayon::join(|| b1 & b2, || c1 & c2),
);
let (temp_a_2, temp_b_2, temp_c_2) = (temp_a.clone(), temp_b.clone(), temp_c.clone());
let ((o, a), (b, c)) = rayon::join(
|| {
rayon::join(
|| (temp_a_2 ^ temp_b_2) ^ temp_c_2,
|| temp_c ^ ((c_and) ^ a3),
)
},
|| rayon::join(|| temp_a ^ (a_and ^ b3), || temp_b ^ (b_and ^ c3)),
);
[o, a, b, c]
}
/// This calls `get_output_and_values` in parallel 8 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[T; 4]> {
(0..8)
.into_par_iter()
.map(|i| self.get_output_and_values(i))
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits (in 8 bytes) all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<T> {
match &self.fhe_key {
Some(sk) => {
rayon::broadcast(|_| set_server_key(sk.clone()));
}
None => (),
}
let values = self.get_64_output_and_values();
match &self.fhe_key {
Some(_) => {
rayon::broadcast(|_| unset_server_key());
}
None => (),
}
let mut bytes = Vec::<T>::with_capacity(8);
for [o, a, b, c] in values {
self.a_byte.push(a);
self.b_byte.push(b);
self.c_byte.push(c);
bytes.push(o);
}
self.k_byte.n_shifts(8);
self.iv_byte.n_shifts(8);
bytes
}
/// Reconstructs a group of 5 bytes in parallel.
fn get_bytes<const N: usize>(
reg: &StaticByteDeque<N, T>,
offsets: [usize; 5],
) -> (T, T, T, T, T) {
let mut ret = offsets
.par_iter()
.rev()
.map(|&i| reg.byte(i))
.collect::<Vec<_>>();
(
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
)
}
}
impl KreyviumStreamByte<FheUint8> {
pub fn get_server_key(&self) -> &ServerKey {
&self.fhe_key.as_ref().unwrap()
}
}
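A similar clear-data sketch for the byte-oriented API above (same assumed crate path): the 128-bit key and IV are passed as 16 raw bytes each, and every `next_64` call yields 8 keystream bytes.

use tfhe_trivium::KreyviumStreamByte; // crate path assumed

fn main() {
    // Placeholder all-zero key and IV, given as bytes.
    let key_bytes = [0u8; 16];
    let iv_bytes = [0u8; 16];

    let mut kreyvium = KreyviumStreamByte::<u8>::new(key_bytes, iv_bytes);

    // One call produces 64 keystream bits packed into 8 bytes.
    let bytes = kreyvium.next_64();
    assert_eq!(bytes.len(), 8);
}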


@@ -0,0 +1,205 @@
use crate::static_deque::StaticDeque;
use tfhe::shortint::prelude::*;
use rayon::prelude::*;
/// KreyviumStreamShortint: a struct implementing the Kreyvium stream cipher, using the shortint
/// Ciphertext type for the internal representation of bits (each ciphertext encrypting a single
/// bit). To be able to compute FHE operations, it also owns a ServerKey.
pub struct KreyviumStreamShortint {
a: StaticDeque<93, Ciphertext>,
b: StaticDeque<84, Ciphertext>,
c: StaticDeque<111, Ciphertext>,
k: StaticDeque<128, Ciphertext>,
iv: StaticDeque<128, Ciphertext>,
internal_server_key: ServerKey,
transciphering_casting_key: KeySwitchingKey,
hl_server_key: tfhe::ServerKey,
}
impl KreyviumStreamShortint {
/// Constructor for KreyviumStreamShortint: arguments are the secret key and the input vector,
/// and a ServerKey reference. Outputs a KreyviumStream object already initialized (1152
/// steps have been run before returning)
pub fn new(
mut key: [Ciphertext; 128],
mut iv: [u64; 128],
sk: ServerKey,
ksk: KeySwitchingKey,
hl_sk: tfhe::ServerKey,
) -> Self {
// Initialization of Kreyvium registers: a gets part of the secret key, b part of
// the IV, and c the rest of the IV followed by ones.
let mut a_register: [Ciphertext; 93] = [0; 93].map(|x| sk.create_trivial(x));
let mut b_register: [Ciphertext; 84] = [0; 84].map(|x| sk.create_trivial(x));
let mut c_register: [Ciphertext; 111] = [0; 111].map(|x| sk.create_trivial(x));
for i in 0..93 {
a_register[i] = key[128 - 93 + i].clone();
}
for i in 0..84 {
b_register[i] = sk.create_trivial(iv[128 - 84 + i]);
}
for i in 0..44 {
c_register[111 - 44 + i] = sk.create_trivial(iv[i]);
}
for i in 0..66 {
c_register[i + 1] = sk.create_trivial(1);
}
key.reverse();
iv.reverse();
let iv = iv.map(|x| sk.create_trivial(x));
let mut ret = Self {
a: StaticDeque::<93, Ciphertext>::new(a_register),
b: StaticDeque::<84, Ciphertext>::new(b_register),
c: StaticDeque::<111, Ciphertext>::new(c_register),
k: StaticDeque::<128, Ciphertext>::new(key),
iv: StaticDeque::<128, Ciphertext>::new(iv),
internal_server_key: sk,
transciphering_casting_key: ksk,
hl_server_key: hl_sk,
};
ret.init();
ret
}
/// The specification of Kreyvium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes one turn of the stream, updating registers and outputting the new bit.
pub fn next(&mut self) -> Ciphertext {
let [o, a, b, c] = self.get_output_and_values(0);
self.a.push(a);
self.b.push(b);
self.c.push(c);
o
}
/// Computes a potential future step of Kreyvium, n terms in the future. This does not update
/// the registers, but rather returns, along with the output, the three values that will be used
/// to update the registers when the time is right. This function is meant to be used in
/// parallel.
fn get_output_and_values(&self, n: usize) -> [Ciphertext; 4] {
let (k, iv) = (&self.k[127 - n], &self.iv[127 - n]);
let (a1, a2, a3, a4, a5) = (
&self.a[65 - n],
&self.a[92 - n],
&self.a[91 - n],
&self.a[90 - n],
&self.a[68 - n],
);
let (b1, b2, b3, b4, b5) = (
&self.b[68 - n],
&self.b[83 - n],
&self.b[82 - n],
&self.b[81 - n],
&self.b[77 - n],
);
let (c1, c2, c3, c4, c5) = (
&self.c[65 - n],
&self.c[110 - n],
&self.c[109 - n],
&self.c[108 - n],
&self.c[86 - n],
);
let temp_a = self.internal_server_key.unchecked_add(a1, a2);
let temp_b = self.internal_server_key.unchecked_add(b1, b2);
let mut temp_c = self.internal_server_key.unchecked_add(c1, c2);
self.internal_server_key
.unchecked_add_assign(&mut temp_c, k);
let ((new_a, new_b), (new_c, o)) = rayon::join(
|| {
rayon::join(
|| {
let mut new_a = self.internal_server_key.unchecked_bitand(c3, c4);
self.internal_server_key
.unchecked_add_assign(&mut new_a, a5);
self.internal_server_key.add_assign(&mut new_a, &temp_c);
new_a
},
|| {
let mut new_b = self.internal_server_key.unchecked_bitand(a3, a4);
self.internal_server_key
.unchecked_add_assign(&mut new_b, b5);
self.internal_server_key
.unchecked_add_assign(&mut new_b, &temp_a);
self.internal_server_key.add_assign(&mut new_b, iv);
new_b
},
)
},
|| {
rayon::join(
|| {
let mut new_c = self.internal_server_key.unchecked_bitand(b3, b4);
self.internal_server_key
.unchecked_add_assign(&mut new_c, c5);
self.internal_server_key
.unchecked_add_assign(&mut new_c, &temp_b);
self.internal_server_key.clear_carry_assign(&mut new_c);
new_c
},
|| {
self.internal_server_key.bitxor(
&self.internal_server_key.unchecked_add(&temp_a, &temp_b),
&temp_c,
)
},
)
},
);
[o, new_a, new_b, new_c]
}
/// This calls `get_output_and_values` in parallel 64 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[Ciphertext; 4]> {
(0..64)
.into_par_iter()
.map(|x| self.get_output_and_values(x))
.rev()
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<Ciphertext> {
let mut values = self.get_64_output_and_values();
let mut ret = Vec::<Ciphertext>::with_capacity(64);
while let Some([o, a, b, c]) = values.pop() {
ret.push(o);
self.a.push(a);
self.b.push(b);
self.c.push(c);
}
self.k.n_shifts(64);
self.iv.n_shifts(64);
ret
}
pub fn get_internal_server_key(&self) -> &ServerKey {
&self.internal_server_key
}
pub fn get_casting_key(&self) -> &KeySwitchingKey {
&self.transciphering_casting_key
}
pub fn get_hl_server_key(&self) -> &tfhe::ServerKey {
&self.hl_server_key
}
}
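The shortint implementation above leans on the fact that, for bits, XOR is simply addition modulo 2: the `unchecked_add` calls accumulate a small sum, and the carry-clearing or default operations (`clear_carry_assign`, `add_assign`, `bitxor`) bring the result back into the one-bit message space. A clear-data sketch of that identity (plain Rust, no FHE, hypothetical helper name):

// For bits, folding with XOR and summing modulo 2 give the same result.
fn xor_via_addition(bits: &[u8]) -> u8 {
    (bits.iter().map(|&b| u32::from(b)).sum::<u32>() % 2) as u8
}

fn main() {
    for x in 0..2u8 {
        for y in 0..2u8 {
            for z in 0..2u8 {
                assert_eq!(x ^ y ^ z, xor_via_addition(&[x, y, z]));
            }
        }
    }
}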


@@ -0,0 +1,11 @@
mod kreyvium;
pub use kreyvium::KreyviumStream;
mod kreyvium_byte;
pub use kreyvium_byte::KreyviumStreamByte;
mod kreyvium_shortint;
pub use kreyvium_shortint::KreyviumStreamShortint;
#[cfg(test)]
mod test;


@@ -0,0 +1,378 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheBool, FheUint64, FheUint8};
use crate::{KreyviumStream, KreyviumStreamByte, KreyviumStreamShortint, TransCiphering};
// Values for these tests come from the github repo renaud1239/Kreyvium,
// commit fd6828f68711276c25f55e605935028f5e843f43
fn get_hexadecimal_string_from_lsb_first_stream(a: Vec<bool>) -> String {
assert!(a.len() % 8 == 0);
let mut hexadecimal: String = "".to_string();
for test in a.chunks(8) {
// Encoding is bytes in LSB order
match test[4..8] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => (),
};
match test[0..4] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => (),
};
}
return hexadecimal;
}
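The nibble-by-nibble match above spells the conversion out explicitly; an equivalent, more compact helper (hypothetical name, same LSB-first-within-byte convention) could look like this:

fn lsb_first_stream_to_hex(bits: &[bool]) -> String {
    assert!(bits.len() % 8 == 0);
    bits.chunks(8)
        .map(|byte_bits| {
            // Within each chunk, bit 0 is the least significant bit of the byte.
            let byte = byte_bits
                .iter()
                .enumerate()
                .fold(0u8, |acc, (i, &b)| acc | ((b as u8) << i));
            format!("{byte:02X}")
        })
        .collect()
}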
fn get_hexagonal_string_from_bytes(a: Vec<u8>) -> String {
assert!(a.len() % 8 == 0);
let mut hexadecimal: String = "".to_string();
for test in a {
hexadecimal.push_str(&format!("{:02X?}", test));
}
return hexadecimal;
}
fn get_hexagonal_string_from_u64(a: Vec<u64>) -> String {
let mut hexadecimal: String = "".to_string();
for test in a {
hexadecimal.push_str(&format!("{:016X?}", test));
}
return hexadecimal;
}
#[test]
fn kreyvium_test_1() {
let key = [false; 128];
let iv = [false; 128];
let output = "26DCF1F4BC0F1922";
let mut kreyvium = KreyviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(64);
while vec.len() < 64 {
vec.push(kreyvium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_2() {
let mut key = [false; 128];
let iv = [false; 128];
key[0] = true;
let output = "4FD421D4DA3D2C8A";
let mut kreyvium = KreyviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(64);
while vec.len() < 64 {
vec.push(kreyvium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_3() {
let key = [false; 128];
let mut iv = [false; 128];
iv[0] = true;
let output = "C9217BA0D762ACA1";
let mut kreyvium = KreyviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(64);
while vec.len() < 64 {
vec.push(kreyvium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_4() {
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [false; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [false; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let output = "D1F0303482061111";
let mut kreyvium = KreyviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(64);
while vec.len() < 64 {
vec.push(kreyvium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(hexadecimal, output);
}
#[test]
fn kreyvium_test_fhe_long() {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [false; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [false; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let output = "D1F0303482061111";
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let mut kreyvium = KreyviumStream::<FheBool>::new(cipher_key, iv, &server_key);
let mut vec = Vec::<bool>::with_capacity(64);
while vec.len() < 64 {
let cipher_outputs = kreyvium.next_64();
for c in cipher_outputs {
vec.push(c.decrypt(&client_key))
}
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output, hexadecimal);
}
use tfhe::shortint::prelude::*;
#[test]
fn kreyvium_test_shortint_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0; 128];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0; 128];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let output = "D1F0303482061111".to_string();
let cipher_key = key.map(|x| client_key.encrypt(x));
let ciphered_message = FheUint64::try_encrypt(0u64, &hl_client_key).unwrap();
let mut kreyvium = KreyviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
let trans_ciphered_message = kreyvium.trans_encrypt_64(ciphered_message);
let ciphered_message = trans_ciphered_message.decrypt(&hl_client_key);
let hexadecimal = get_hexagonal_string_from_u64(vec![ciphered_message]);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_clear_byte() {
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key_bytes = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key_bytes[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv_bytes = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv_bytes[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let output = "D1F0303482061111".to_string();
let mut kreyvium = KreyviumStreamByte::<u8>::new(key_bytes, iv_bytes);
let mut vec = Vec::<u8>::with_capacity(8);
while vec.len() < 8 {
let outputs = kreyvium.next_64();
for c in outputs {
vec.push(c)
}
}
let hexadecimal = get_hexagonal_string_from_bytes(vec);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_byte_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.enable_function_evaluation_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key_bytes = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key_bytes[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv_bytes = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv_bytes[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let cipher_key = key_bytes.map(|x| FheUint8::encrypt(x, &client_key));
let output = "D1F0303482061111".to_string();
let mut kreyvium = KreyviumStreamByte::<FheUint8>::new(cipher_key, iv_bytes, &server_key);
let mut vec = Vec::<u8>::with_capacity(8);
while vec.len() < 8 {
let cipher_outputs = kreyvium.next_64();
for c in cipher_outputs {
vec.push(c.decrypt(&client_key))
}
}
let hexadecimal = get_hexagonal_string_from_bytes(vec);
assert_eq!(output, hexadecimal);
}
#[test]
fn kreyvium_test_fhe_byte_transciphering_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.enable_function_evaluation_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB000000000000".to_string();
let mut key = [0u8; 16];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC000000000000".to_string();
let mut iv = [0u8; 16];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let output = "D1F0303482061111".to_string();
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let ciphered_message = FheUint64::try_encrypt(0u64, &client_key).unwrap();
let mut kreyvium = KreyviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
let trans_ciphered_message = kreyvium.trans_encrypt_64(ciphered_message);
let ciphered_message = trans_ciphered_message.decrypt(&client_key);
let hexadecimal = get_hexagonal_string_from_u64(vec![ciphered_message]);
assert_eq!(output, hexadecimal);
}

apps/trivium/src/lib.rs Normal file

@@ -0,0 +1,10 @@
mod static_deque;
mod kreyvium;
pub use kreyvium::{KreyviumStream, KreyviumStreamByte, KreyviumStreamShortint};
mod trivium;
pub use trivium::{TriviumStream, TriviumStreamByte, TriviumStreamShortint};
mod trans_ciphering;
pub use trans_ciphering::TransCiphering;


@@ -0,0 +1,4 @@
mod static_deque;
pub use static_deque::StaticDeque;
mod static_byte_deque;
pub use static_byte_deque::{StaticByteDeque, StaticByteDequeInput};


@@ -0,0 +1,141 @@
//! This module implements the StaticByteDeque struct: a deque of bytes. The idea
//! is that this is a wrapper around StaticDeque, but StaticByteDeque has an additional
//! functionality: it can construct "intermediate" bytes, made of parts of two adjacent bytes.
//! It effectively pretends to store bits, and allows accessing them in chunks of 8 consecutive bits.
use crate::static_deque::StaticDeque;
use tfhe::FheUint8;
/// Internal trait specifying which operations are needed by StaticByteDeque
pub trait StaticByteDequeInput<OpOutput>:
Clone
+ std::ops::Shr<u8, Output = OpOutput>
+ std::ops::Shl<u8, Output = OpOutput>
+ std::ops::BitOr<Output = OpOutput>
{
}
impl StaticByteDequeInput<u8> for u8 {}
impl StaticByteDequeInput<u8> for &u8 {}
impl StaticByteDequeInput<FheUint8> for FheUint8 {}
impl StaticByteDequeInput<FheUint8> for &FheUint8 {}
/// Here T must represent a type covering a byte, like u8 or FheUint8.
#[derive(Clone)]
pub struct StaticByteDeque<const N: usize, T> {
deque: StaticDeque<N, T>,
}
impl<const N: usize, T> StaticByteDeque<N, T>
where
T: StaticByteDequeInput<T>,
for<'a> &'a T: StaticByteDequeInput<T>,
{
/// Constructor always uses a fully initialized array, the first element of
/// which is oldest, the last is newest
pub fn new(_arr: [T; N]) -> Self {
Self {
deque: StaticDeque::<N, T>::new(_arr),
}
}
/// Elements are pushed via a byte element (covering 8 underlying bits)
pub fn push(&mut self, val: T) {
self.deque.push(val)
}
/// Computes n shifts in a row
pub fn n_shifts(&mut self, n: usize) {
self.deque.n_shifts(n);
}
/// Getter for the internal memory
#[allow(dead_code)]
fn get_arr(&self) -> &[T; N] {
self.deque.get_arr()
}
/// Returns a byte that is all zeros, except possibly a one at bit position i % 8
/// when the i-th bit stored in the deque is set
#[allow(dead_code)]
fn bit(&self, i: usize) -> T
where
for<'a> &'a T: std::ops::BitAnd<u8, Output = T>,
{
let byte: &T = &self.deque[i / 8];
let bit_selector: u8 = 1u8 << (i % 8);
byte & bit_selector
}
/// This function reconstructs an intermediate byte if necessary
pub fn byte(&self, i: usize) -> T {
let byte: &T = &self.deque[i / 8];
let bit_idx: u8 = (i % 8) as u8;
if bit_idx == 0 {
return byte.clone();
}
let byte_next: &T = &self.deque[i / 8 + 1];
return (byte << bit_idx) | (byte_next >> (8 - bit_idx as u8));
}
}
#[cfg(test)]
mod tests {
use crate::static_deque::StaticByteDeque;
#[test]
fn byte_deque_test() {
let mut deque = StaticByteDeque::<3, u8>::new([2, 64, 128]);
deque.push(4);
// Youngest: 4
assert!(deque.bit(0) == 0);
assert!(deque.bit(1) == 0);
assert!(deque.bit(2) > 0);
assert!(deque.bit(3) == 0);
assert!(deque.bit(4) == 0);
assert!(deque.bit(5) == 0);
assert!(deque.bit(6) == 0);
assert!(deque.bit(7) == 0);
// second youngest: 128
assert!(deque.bit(8 + 0) == 0);
assert!(deque.bit(8 + 1) == 0);
assert!(deque.bit(8 + 2) == 0);
assert!(deque.bit(8 + 3) == 0);
assert!(deque.bit(8 + 4) == 0);
assert!(deque.bit(8 + 5) == 0);
assert!(deque.bit(8 + 6) == 0);
assert!(deque.bit(8 + 7) > 0);
// oldest: 64
assert!(deque.bit(16 + 0) == 0);
assert!(deque.bit(16 + 1) == 0);
assert!(deque.bit(16 + 2) == 0);
assert!(deque.bit(16 + 3) == 0);
assert!(deque.bit(16 + 4) == 0);
assert!(deque.bit(16 + 5) == 0);
assert!(deque.bit(16 + 6) > 0);
assert!(deque.bit(16 + 7) == 0);
assert_eq!(deque.byte(0), 4u8);
assert_eq!(deque.byte(1), 9u8);
assert_eq!(deque.byte(2), 18u8);
assert_eq!(deque.byte(3), 36u8);
assert_eq!(deque.byte(4), 72u8);
assert_eq!(deque.byte(5), 144u8);
assert_eq!(deque.byte(6), 32u8);
assert_eq!(deque.byte(7), 64u8);
assert_eq!(deque.byte(8), 128u8);
assert_eq!(deque.byte(9), 0u8);
assert_eq!(deque.byte(10), 1u8);
assert_eq!(deque.byte(11), 2u8);
assert_eq!(deque.byte(12), 4u8);
assert_eq!(deque.byte(13), 8u8);
assert_eq!(deque.byte(14), 16u8);
assert_eq!(deque.byte(15), 32u8);
assert_eq!(deque.byte(16), 64u8);
}
}
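To make the `byte` reconstruction above concrete, here is a clear-data sketch of the same shift-and-or recombination (hypothetical standalone function, not part of the crate):

// Recombine bits of two adjacent stored bytes, as StaticByteDeque::byte does
// for a bit offset `bit_idx` inside the newer byte.
fn intermediate_byte(newer: u8, older: u8, bit_idx: u8) -> u8 {
    assert!(bit_idx < 8);
    if bit_idx == 0 {
        newer
    } else {
        (newer << bit_idx) | (older >> (8 - bit_idx))
    }
}

fn main() {
    // Mirrors the unit test above: with 4 as the newest byte and 128 right behind it,
    // byte(1) is expected to be 9.
    assert_eq!(intermediate_byte(4, 128, 1), 9);
}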


@@ -0,0 +1,135 @@
//! This module implements the StaticDeque struct: a deque utility whose size
//! is known at compile time. Construction, push, and indexing are publicly
//! available.
use core::ops::{Index, IndexMut};
/// StaticDeque: a struct implementing a deque whose size is known at compile time.
/// It has 2 members: the static array containing the data (never empty), and a cursor
/// equal to the index of the oldest element (and the next one to be overwritten).
#[derive(Clone)]
pub struct StaticDeque<const N: usize, T> {
arr: [T; N],
cursor: usize,
}
impl<const N: usize, T> StaticDeque<N, T> {
/// Constructor always uses a fully initialized array, the first element of
/// which is oldest, the last is newest
pub fn new(_arr: [T; N]) -> Self {
Self {
arr: _arr,
cursor: 0,
}
}
/// Push a new element to the deque, overwriting the oldest at the same time.
pub fn push(&mut self, val: T) {
self.arr[self.cursor] = val;
self.shift();
}
/// Shift: equivalent to pushing the oldest element
pub fn shift(&mut self) {
self.n_shifts(1);
}
/// Computes n shifts in a row
pub fn n_shifts(&mut self, n: usize) {
self.cursor += n;
self.cursor %= N;
}
/// Getter for the internal memory
#[allow(dead_code)]
pub fn get_arr(&self) -> &[T; N] {
&self.arr
}
}
/// Index trait for the StaticDeque: 0 is the youngest element, N-1 is the oldest; indexing at N
/// or above will panic.
impl<const N: usize, T> Index<usize> for StaticDeque<N, T> {
type Output = T;
/// 0 is youngest
fn index(&self, i: usize) -> &T {
if i >= N {
panic!("Index {:?} too high for size {:?}", i, N);
}
&self.arr[(N + self.cursor - i - 1) % N]
}
}
/// IndexMut trait for the StaticDeque: 0 is the youngest element, N-1 is the oldest; indexing at N
/// or above will panic.
impl<const N: usize, T> IndexMut<usize> for StaticDeque<N, T> {
/// 0 is youngest
fn index_mut(&mut self, i: usize) -> &mut T {
if i >= N {
panic!("Index {:?} too high for size {:?}", i, N);
}
&mut self.arr[(N + self.cursor - i - 1) % N]
}
}
#[cfg(test)]
mod tests {
use crate::static_deque::StaticDeque;
#[test]
fn test_static_deque() {
let a = [1, 2, 3, 4, 5, 6];
let mut static_deque = StaticDeque::new(a);
for i in 7..11 {
static_deque.push(i);
}
assert_eq!(*static_deque.get_arr(), [7, 8, 9, 10, 5, 6]);
for i in 11..15 {
static_deque.push(i);
}
assert_eq!(*static_deque.get_arr(), [13, 14, 9, 10, 11, 12]);
assert_eq!(static_deque[0], 14);
assert_eq!(static_deque[1], 13);
assert_eq!(static_deque[2], 12);
assert_eq!(static_deque[3], 11);
assert_eq!(static_deque[4], 10);
assert_eq!(static_deque[5], 9);
}
#[test]
fn test_static_deque_indexmut() {
let a = [1, 2, 3, 4, 5, 6];
let mut static_deque = StaticDeque::new(a);
for i in 7..11 {
static_deque.push(i);
}
assert_eq!(*static_deque.get_arr(), [7, 8, 9, 10, 5, 6]);
for i in 11..15 {
static_deque.push(i);
}
assert_eq!(*static_deque.get_arr(), [13, 14, 9, 10, 11, 12]);
static_deque[1] = 100;
assert_eq!(static_deque[0], 14);
assert_eq!(static_deque[1], 100);
assert_eq!(static_deque[2], 12);
assert_eq!(static_deque[3], 11);
assert_eq!(static_deque[4], 10);
assert_eq!(static_deque[5], 9);
}
#[test]
#[should_panic]
fn test_static_deque_index_fail() {
let a = [1, 2, 3, 4, 5, 6];
let static_deque = StaticDeque::new(a);
let _ = static_deque[6];
}
}
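For intuition, the indexing convention above ("0 is the youngest") matches reading a bounded VecDeque from the back; a small standard-library-only sketch of the same scenario as the first test (the VecDeque here is just an analogy, not used by the crate):

use std::collections::VecDeque;

fn main() {
    // Start from [1..=6], then push 7..=14, dropping the oldest element each time.
    let mut reference: VecDeque<i32> = (1..=6).collect();
    for i in 7..15 {
        reference.pop_front();
        reference.push_back(i);
    }
    // "Index 0 is the youngest" corresponds to reading from the back:
    assert_eq!(reference[reference.len() - 1], 14); // like static_deque[0]
    assert_eq!(reference[reference.len() - 2], 13); // like static_deque[1]
}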


@@ -0,0 +1,118 @@
//! This module contains extensions of some TriviumStream or KreyviumStream objects,
//! for which trans ciphering is available.
use crate::{KreyviumStreamByte, KreyviumStreamShortint, TriviumStreamByte, TriviumStreamShortint};
use tfhe::shortint::Ciphertext;
use tfhe::{set_server_key, unset_server_key, FheUint64, FheUint8, ServerKey};
use rayon::prelude::*;
/// Trait specifying the interface for trans ciphering a FheUint64 object. Since it is meant
/// to be used with stream ciphers, encryption and decryption are by default the same.
pub trait TransCiphering {
fn trans_encrypt_64(&mut self, cipher: FheUint64) -> FheUint64;
fn trans_decrypt_64(&mut self, cipher: FheUint64) -> FheUint64 {
self.trans_encrypt_64(cipher)
}
}
fn transcipher_from_fheu8_stream(
stream: Vec<FheUint8>,
cipher: FheUint64,
fhe_server_key: &ServerKey,
) -> FheUint64 {
assert_eq!(stream.len(), 8);
set_server_key(fhe_server_key.clone());
rayon::broadcast(|_| set_server_key(fhe_server_key.clone()));
let keystream: FheUint64 = stream
.into_par_iter()
.enumerate()
.map(|(i, x)| FheUint64::cast_from(x) << (8 * (7 - i) as u8))
.reduce_with(|a, b| a | b)
.unwrap();
// XOR the input cipher once with the fully packed 64-bit keystream word.
let ret = &cipher ^ &keystream;
unset_server_key();
rayon::broadcast(|_| unset_server_key());
ret
}
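The packing step above relies on the shifted bytes occupying disjoint bit ranges, so OR-ing them assembles the 64-bit keystream word, which is then XORed once with the input cipher. A clear-data sketch of that arithmetic (hypothetical helper, plain integers instead of FHE types):

// Byte 0 of the stream lands in the most significant byte of the word.
fn pack_and_xor(cipher: u64, keystream_bytes: [u8; 8]) -> u64 {
    let keystream = keystream_bytes
        .iter()
        .enumerate()
        .map(|(i, &b)| u64::from(b) << (8 * (7 - i)))
        .fold(0u64, |acc, shifted| acc | shifted);
    cipher ^ keystream
}

fn main() {
    assert_eq!(
        pack_and_xor(0, [0xAB, 0, 0, 0, 0, 0, 0, 0x01]),
        0xAB00_0000_0000_0001
    );
}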
fn transcipher_from_1_1_stream(
stream: Vec<Ciphertext>,
cipher: FheUint64,
hl_server_key: &ServerKey,
internal_server_key: &tfhe::shortint::ServerKey,
casting_key: &tfhe::shortint::KeySwitchingKey,
) -> FheUint64 {
assert_eq!(stream.len(), 64);
let pairs = (0..32)
.into_par_iter()
.map(|i| {
let byte_idx = 7 - i / 4;
let pair_idx = i % 4;
let b0 = &stream[8 * byte_idx + 2 * pair_idx];
let b1 = &stream[8 * byte_idx + 2 * pair_idx + 1];
casting_key.cast(
&internal_server_key
.unchecked_add(b0, &internal_server_key.unchecked_scalar_mul(b1, 2)),
)
})
.collect::<Vec<_>>();
set_server_key(hl_server_key.clone());
let ret = &cipher ^ &FheUint64::try_from(pairs).unwrap();
unset_server_key();
ret
}
impl TransCiphering for TriviumStreamByte<FheUint8> {
/// `TriviumStreamByte<FheUint8>`: since a full step outputs 8 bytes, these bytes are packed
/// into a 64-bit word (byte i shifted left by 8*(7-i) bits) and XORed with the input cipher
fn trans_encrypt_64(&mut self, cipher: FheUint64) -> FheUint64 {
transcipher_from_fheu8_stream(self.next_64(), cipher, self.get_server_key())
}
}
impl TransCiphering for KreyviumStreamByte<FheUint8> {
/// `KreyviumStreamByte<FheUint8>`: since a full step outputs 8 bytes, these bytes are packed
/// into a 64-bit word (byte i shifted left by 8*(7-i) bits) and XORed with the input cipher
fn trans_encrypt_64(&mut self, cipher: FheUint64) -> FheUint64 {
transcipher_from_fheu8_stream(self.next_64(), cipher, self.get_server_key())
}
}
impl TransCiphering for TriviumStreamShortint {
/// TriviumStreamShortint: since a full step outputs 64 shortints, these bits
/// are paired 2 by 2 in the HL parameter space and packed in a full word,
/// and XORed with the input cipher
fn trans_encrypt_64(&mut self, cipher: FheUint64) -> FheUint64 {
transcipher_from_1_1_stream(
self.next_64(),
cipher,
self.get_hl_server_key(),
self.get_internal_server_key(),
self.get_casting_key(),
)
}
}
impl TransCiphering for KreyviumStreamShortint {
/// KreyviumStreamShortint: since a full step outputs 64 shortints, these bits
/// are paired 2 by 2 in the HL parameter space and packed in a full word,
/// and XORed with the input cipher
fn trans_encrypt_64(&mut self, cipher: FheUint64) -> FheUint64 {
transcipher_from_1_1_stream(
self.next_64(),
cipher,
self.get_hl_server_key(),
self.get_internal_server_key(),
self.get_casting_key(),
)
}
}
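A minimal end-to-end sketch of the trait above, following the same steps as the transciphering tests later in this diff (crate path assumed, all-zero key/IV as placeholders):

use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheUint64, FheUint8};
use tfhe_trivium::{TransCiphering, TriviumStreamByte}; // crate path assumed

fn main() {
    let config = ConfigBuilder::all_disabled()
        .enable_default_integers()
        .build();
    let (client_key, server_key) = generate_keys(config);

    // Placeholder 80-bit key (10 encrypted bytes) and clear IV.
    let key = [0u8; 10].map(|x| FheUint8::encrypt(x, &client_key));
    let iv = [0u8; 10];
    let mut trivium = TriviumStreamByte::<FheUint8>::new(key, iv, &server_key);

    // Homomorphically XOR 64 keystream bits into an encrypted 64-bit word.
    let message = FheUint64::try_encrypt(0u64, &client_key).unwrap();
    let trans_ciphered = trivium.trans_encrypt_64(message);
    let _clear: u64 = trans_ciphered.decrypt(&client_key);
}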


@@ -0,0 +1,11 @@
mod trivium;
pub use trivium::TriviumStream;
mod trivium_byte;
pub use trivium_byte::TriviumStreamByte;
mod trivium_shortint;
pub use trivium_shortint::TriviumStreamShortint;
#[cfg(test)]
mod test;


@@ -0,0 +1,412 @@
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheBool, FheUint64, FheUint8};
use crate::{TransCiphering, TriviumStream, TriviumStreamByte, TriviumStreamShortint};
// Values for these tests come from the github repo cantora/avr-crypto-lib, commit 2a5b018,
// file testvectors/trivium-80.80.test-vectors
fn get_hexadecimal_string_from_lsb_first_stream(a: Vec<bool>) -> String {
assert!(a.len() % 8 == 0);
let mut hexadecimal: String = "".to_string();
for test in a.chunks(8) {
// Encoding is bytes in LSB order
match test[4..8] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => (),
};
match test[0..4] {
[false, false, false, false] => hexadecimal.push('0'),
[true, false, false, false] => hexadecimal.push('1'),
[false, true, false, false] => hexadecimal.push('2'),
[true, true, false, false] => hexadecimal.push('3'),
[false, false, true, false] => hexadecimal.push('4'),
[true, false, true, false] => hexadecimal.push('5'),
[false, true, true, false] => hexadecimal.push('6'),
[true, true, true, false] => hexadecimal.push('7'),
[false, false, false, true] => hexadecimal.push('8'),
[true, false, false, true] => hexadecimal.push('9'),
[false, true, false, true] => hexadecimal.push('A'),
[true, true, false, true] => hexadecimal.push('B'),
[false, false, true, true] => hexadecimal.push('C'),
[true, false, true, true] => hexadecimal.push('D'),
[false, true, true, true] => hexadecimal.push('E'),
[true, true, true, true] => hexadecimal.push('F'),
_ => (),
};
}
return hexadecimal;
}
fn get_hexagonal_string_from_bytes(a: Vec<u8>) -> String {
assert!(a.len() % 8 == 0);
let mut hexadecimal: String = "".to_string();
for test in a {
hexadecimal.push_str(&format!("{:02X?}", test));
}
return hexadecimal;
}
fn get_hexagonal_string_from_u64(a: Vec<u64>) -> String {
let mut hexadecimal: String = "".to_string();
for test in a {
hexadecimal.push_str(&format!("{:016X?}", test));
}
return hexadecimal;
}
#[test]
fn trivium_test_1() {
let key = [false; 80];
let iv = [false; 80];
let output_0_63 = "FBE0BF265859051B517A2E4E239FC97F563203161907CF2DE7A8790FA1B2E9CDF75292030268B7382B4C1A759AA2599A285549986E74805903801A4CB5A5D4F2".to_string();
let output_192_255 = "0F1BE95091B8EA857B062AD52BADF47784AC6D9B2E3F85A9D79995043302F0FDF8B76E5BC8B7B4F0AA46CD20DDA04FDD197BC5E1635496828F2DBFB23F6BD5D0".to_string();
let output_256_319 = "80F9075437BAC73F696D0ABE3972F5FCE2192E5FCC13C0CB77D0ABA09126838D31A2D38A2087C46304C8A63B54109F679B0B1BC71E72A58D6DD3E0A3FF890D4A".to_string();
let output_448_511 = "68450EB0910A98EF1853E0FC1BED8AB6BB08DF5F167D34008C2A85284D4B886DD56883EE92BF18E69121670B4C81A5689C9B0538373D22EB923A28A2DB44C0EB".to_string();
let mut trivium = TriviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(512 * 8);
while vec.len() < 512 * 8 {
vec.push(trivium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
assert_eq!(output_192_255, hexadecimal[192 * 2..256 * 2]);
assert_eq!(output_256_319, hexadecimal[256 * 2..320 * 2]);
assert_eq!(output_448_511, hexadecimal[448 * 2..512 * 2]);
}
#[test]
fn trivium_test_2() {
let mut key = [false; 80];
let iv = [false; 80];
key[7] = true;
let output_0_63 = "38EB86FF730D7A9CAF8DF13A4420540DBB7B651464C87501552041C249F29A64D2FBF515610921EBE06C8F92CECF7F8098FF20CCCC6A62B97BE8EF7454FC80F9".to_string();
let output_192_255 = "EAF2625D411F61E41F6BAEEDDD5FE202600BD472F6C9CD1E9134A745D900EF6C023E4486538F09930CFD37157C0EB57C3EF6C954C42E707D52B743AD83CFF297".to_string();
let output_256_319 = "9A203CF7B2F3F09C43D188AA13A5A2021EE998C42F777E9B67C3FA221A0AA1B041AA9E86BC2F5C52AFF11F7D9EE480CB1187B20EB46D582743A52D7CD080A24A".to_string();
let output_448_511 = "EBF14772061C210843C18CEA2D2A275AE02FCB18E5D7942455FF77524E8A4CA51E369A847D1AEEFB9002FCD02342983CEAFA9D487CC2032B10192CD416310FA4".to_string();
let mut trivium = TriviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(512 * 8);
while vec.len() < 512 * 8 {
vec.push(trivium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
assert_eq!(output_192_255, hexadecimal[192 * 2..256 * 2]);
assert_eq!(output_256_319, hexadecimal[256 * 2..320 * 2]);
assert_eq!(output_448_511, hexadecimal[448 * 2..512 * 2]);
}
#[test]
fn trivium_test_3() {
let key = [false; 80];
let mut iv = [false; 80];
iv[7] = true;
let output_0_63 = "F8901736640549E3BA7D42EA2D07B9F49233C18D773008BD755585B1A8CBAB86C1E9A9B91F1AD33483FD6EE3696D659C9374260456A36AAE11F033A519CBD5D7".to_string();
let output_192_255 = "87423582AF64475C3A9C092E32A53C5FE07D35B4C9CA288A89A43DEF3913EA9237CA43342F3F8E83AD3A5C38D463516F94E3724455656A36279E3E924D442F06".to_string();
let output_256_319 = "D94389A90E6F3BF2BB4C8B057339AAD8AA2FEA238C29FCAC0D1FF1CB2535A07058BA995DD44CFC54CCEC54A5405B944C532D74E50EA370CDF1BA1CBAE93FC0B5".to_string();
let output_448_511 = "4844151714E56A3A2BBFBA426A1D60F9A4F265210A91EC29259AE2035234091C49FFB1893FA102D425C57C39EB4916F6D148DC83EBF7DE51EEB9ABFE045FB282".to_string();
let mut trivium = TriviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(512 * 8);
while vec.len() < 512 * 8 {
vec.push(trivium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
assert_eq!(output_192_255, hexadecimal[192 * 2..256 * 2]);
assert_eq!(output_256_319, hexadecimal[256 * 2..320 * 2]);
assert_eq!(output_448_511, hexadecimal[448 * 2..512 * 2]);
}
#[test]
fn trivium_test_4() {
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [false; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [false; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let output_65472_65535 = "C04C24A6938C8AF8A491D5E481271E0E601338F01067A86A795CA493AA4FF265619B8D448B706B7C88EE8395FC79E5B51AB40245BBF7773AE67DF86FCFB71F30".to_string();
let output_65536_65599 = "011A0D7EC32FA102C66C164CFCB189AED9F6982E8C7370A6A37414781192CEB155C534C1C8C9E53FDEADF2D3D0577DAD3A8EB2F6E5265F1E831C86844670BC69".to_string();
let output_131008_131071 = "48107374A9CE3AAF78221AE77789247CF6896A249ED75DCE0CF2D30EB9D889A0C61C9F480E5C07381DED9FAB2AD54333E82C89BA92E6E47FD828F1A66A8656E0".to_string();
let mut trivium = TriviumStream::<bool>::new(key, iv);
let mut vec = Vec::<bool>::with_capacity(131072 * 8);
while vec.len() < 131072 * 8 {
vec.push(trivium.next());
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
assert_eq!(output_65472_65535, hexadecimal[65472 * 2..65536 * 2]);
assert_eq!(output_65536_65599, hexadecimal[65536 * 2..65600 * 2]);
assert_eq!(output_131008_131071, hexadecimal[131008 * 2..131072 * 2]);
}
#[test]
fn trivium_test_clear_byte() {
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let output_65472_65535 = "C04C24A6938C8AF8A491D5E481271E0E601338F01067A86A795CA493AA4FF265619B8D448B706B7C88EE8395FC79E5B51AB40245BBF7773AE67DF86FCFB71F30".to_string();
let output_65536_65599 = "011A0D7EC32FA102C66C164CFCB189AED9F6982E8C7370A6A37414781192CEB155C534C1C8C9E53FDEADF2D3D0577DAD3A8EB2F6E5265F1E831C86844670BC69".to_string();
let output_131008_131071 = "48107374A9CE3AAF78221AE77789247CF6896A249ED75DCE0CF2D30EB9D889A0C61C9F480E5C07381DED9FAB2AD54333E82C89BA92E6E47FD828F1A66A8656E0".to_string();
let mut trivium = TriviumStreamByte::<u8>::new(key, iv);
let mut vec = Vec::<u8>::with_capacity(131072);
while vec.len() < 131072 {
let outputs = trivium.next_64();
for c in outputs {
vec.push(c)
}
}
let hexadecimal = get_hexagonal_string_from_bytes(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
assert_eq!(output_65472_65535, hexadecimal[65472 * 2..65536 * 2]);
assert_eq!(output_65536_65599, hexadecimal[65536 * 2..65600 * 2]);
assert_eq!(output_131008_131071, hexadecimal[131008 * 2..131072 * 2]);
}
#[test]
fn trivium_test_fhe_long() {
let config = ConfigBuilder::all_disabled().enable_default_bool().build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [false; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [false; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val: u8 = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2 == 1;
val >>= 1;
}
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| FheBool::encrypt(x, &client_key));
let mut trivium = TriviumStream::<FheBool>::new(cipher_key, iv, &server_key);
let mut vec = Vec::<bool>::with_capacity(64 * 8);
while vec.len() < 64 * 8 {
let cipher_outputs = trivium.next_64();
for c in cipher_outputs {
vec.push(c.decrypt(&client_key))
}
}
let hexadecimal = get_hexadecimal_string_from_lsb_first_stream(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
}
#[test]
fn trivium_test_fhe_byte_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let mut trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
let mut vec = Vec::<u8>::with_capacity(64);
while vec.len() < 64 {
let cipher_outputs = trivium.next_64();
for c in cipher_outputs {
vec.push(c.decrypt(&client_key))
}
}
let hexadecimal = get_hexagonal_string_from_bytes(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
}
#[test]
fn trivium_test_fhe_byte_transciphering_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (client_key, server_key) = generate_keys(config);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0u8; 10];
for i in (0..key_string.len()).step_by(2) {
key[i >> 1] = u8::from_str_radix(&key_string[i..i + 2], 16).unwrap();
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0u8; 10];
for i in (0..iv_string.len()).step_by(2) {
iv[i >> 1] = u8::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| FheUint8::encrypt(x, &client_key));
let mut ciphered_message = vec![FheUint64::try_encrypt(0u64, &client_key).unwrap(); 9];
let mut trivium = TriviumStreamByte::<FheUint8>::new(cipher_key, iv, &server_key);
let mut vec = Vec::<u64>::with_capacity(8);
while vec.len() < 8 {
let trans_ciphered_message = trivium.trans_encrypt_64(ciphered_message.pop().unwrap());
vec.push(trans_ciphered_message.decrypt(&client_key));
}
let hexadecimal = get_hexagonal_string_from_u64(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
}
use tfhe::shortint::prelude::*;
#[test]
fn trivium_test_shortint_long() {
let config = ConfigBuilder::all_disabled()
.enable_default_integers()
.build();
let (hl_client_key, hl_server_key) = generate_keys(config);
let underlying_ck: tfhe::shortint::ClientKey = (*hl_client_key.as_ref()).clone().into();
let underlying_sk: tfhe::shortint::ServerKey = (*hl_server_key.as_ref()).clone().into();
let (client_key, server_key): (ClientKey, ServerKey) = gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let ksk = KeySwitchingKey::new(
(&client_key, &server_key),
(&underlying_ck, &underlying_sk),
PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS,
);
let key_string = "0053A6F94C9FF24598EB".to_string();
let mut key = [0; 80];
for i in (0..key_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&key_string[i..i + 2], 16).unwrap();
for j in 0..8 {
key[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let iv_string = "0D74DB42A91077DE45AC".to_string();
let mut iv = [0; 80];
for i in (0..iv_string.len()).step_by(2) {
let mut val = u64::from_str_radix(&iv_string[i..i + 2], 16).unwrap();
for j in 0..8 {
iv[8 * (i >> 1) + j] = val % 2;
val >>= 1;
}
}
let output_0_63 = "F4CD954A717F26A7D6930830C4E7CF0819F80E03F25F342C64ADC66ABA7F8A8E6EAA49F23632AE3CD41A7BD290A0132F81C6D4043B6E397D7388F3A03B5FE358".to_string();
let cipher_key = key.map(|x| client_key.encrypt(x));
let mut ciphered_message = vec![FheUint64::try_encrypt(0u64, &hl_client_key).unwrap(); 9];
let mut trivium = TriviumStreamShortint::new(cipher_key, iv, server_key, ksk, hl_server_key);
let mut vec = Vec::<u64>::with_capacity(8);
while vec.len() < 8 {
let trans_ciphered_message = trivium.trans_encrypt_64(ciphered_message.pop().unwrap());
vec.push(trans_ciphered_message.decrypt(&hl_client_key));
}
let hexadecimal = get_hexagonal_string_from_u64(vec);
assert_eq!(output_0_63, hexadecimal[0..64 * 2]);
}


@@ -0,0 +1,225 @@
//! This module implements the Trivium stream cipher, using booleans or FheBool
//! for the representation of the inner bits.
use crate::static_deque::StaticDeque;
use tfhe::prelude::*;
use tfhe::{set_server_key, unset_server_key, FheBool, ServerKey};
use rayon::prelude::*;
/// Internal trait specifying which operations are necessary for TriviumStream generic type
pub trait TriviumBoolInput<OpOutput>:
Sized
+ Clone
+ std::ops::BitXor<Output = OpOutput>
+ std::ops::BitAnd<Output = OpOutput>
+ std::ops::Not<Output = OpOutput>
{
}
impl TriviumBoolInput<bool> for bool {}
impl TriviumBoolInput<bool> for &bool {}
impl TriviumBoolInput<FheBool> for FheBool {}
impl TriviumBoolInput<FheBool> for &FheBool {}
/// TriviumStream: a struct implementing the Trivium stream cipher, using T for the internal
/// representation of bits (bool or FheBool). To be able to compute FHE operations, it also owns
/// an Option for a ServerKey.
pub struct TriviumStream<T> {
a: StaticDeque<93, T>,
b: StaticDeque<84, T>,
c: StaticDeque<111, T>,
fhe_key: Option<ServerKey>,
}
impl TriviumStream<bool> {
/// Constructor for `TriviumStream<bool>`: arguments are the secret key and the input vector.
/// Outputs a TriviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(key: [bool; 80], iv: [bool; 80]) -> TriviumStream<bool> {
// Initialization of Trivium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_register = [false; 93];
let mut b_register = [false; 84];
let mut c_register = [false; 111];
for i in 0..80 {
a_register[93 - 80 + i] = key[i];
b_register[84 - 80 + i] = iv[i];
}
c_register[0] = true;
c_register[1] = true;
c_register[2] = true;
TriviumStream::<bool>::new_from_registers(a_register, b_register, c_register, None)
}
}
impl TriviumStream<FheBool> {
/// Constructor for `TriviumStream<FheBool>`: arguments are the encrypted secret key and input
/// vector, and the FHE server key.
/// Outputs a TriviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(key: [FheBool; 80], iv: [bool; 80], sk: &ServerKey) -> TriviumStream<FheBool> {
set_server_key(sk.clone());
// Initialization of Trivium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_register = [false; 93].map(|x| FheBool::encrypt_trivial(x));
let mut b_register = [false; 84].map(|x| FheBool::encrypt_trivial(x));
let mut c_register = [false; 111].map(|x| FheBool::encrypt_trivial(x));
for i in 0..80 {
a_register[93 - 80 + i] = key[i].clone();
b_register[84 - 80 + i] = FheBool::encrypt_trivial(iv[i]);
}
c_register[0] = FheBool::try_encrypt_trivial(true).unwrap();
c_register[1] = FheBool::try_encrypt_trivial(true).unwrap();
c_register[2] = FheBool::try_encrypt_trivial(true).unwrap();
unset_server_key();
TriviumStream::<FheBool>::new_from_registers(
a_register,
b_register,
c_register,
Some(sk.clone()),
)
}
}
impl<T> TriviumStream<T>
where
T: TriviumBoolInput<T> + std::marker::Send + std::marker::Sync,
for<'a> &'a T: TriviumBoolInput<T>,
{
/// Internal generic constructor: arguments are already prepared registers, and an optional FHE
/// server key
fn new_from_registers(
a_register: [T; 93],
b_register: [T; 84],
c_register: [T; 111],
key: Option<ServerKey>,
) -> Self {
let mut ret = Self {
a: StaticDeque::<93, T>::new(a_register),
b: StaticDeque::<84, T>::new(b_register),
c: StaticDeque::<111, T>::new(c_register),
fhe_key: key,
};
ret.init();
ret
}
/// The specification of Trivium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes one turn of the stream, updating registers and outputting the new bit.
pub fn next(&mut self) -> T {
match &self.fhe_key {
Some(sk) => set_server_key(sk.clone()),
None => (),
};
let [o, a, b, c] = self.get_output_and_values(0);
self.a.push(a);
self.b.push(b);
self.c.push(c);
o
}
/// Computes a potential future step of Trivium, n terms in the future. This does not update
/// the registers, but rather returns, along with the output, the three values that will be used
/// to update the registers when the time is right. This function is meant to be used in
/// parallel.
fn get_output_and_values(&self, n: usize) -> [T; 4] {
assert!(n < 65);
let (((temp_a, temp_b), (temp_c, a_and)), (b_and, c_and)) = rayon::join(
|| {
rayon::join(
|| {
rayon::join(
|| &self.a[65 - n] ^ &self.a[92 - n],
|| &self.b[68 - n] ^ &self.b[83 - n],
)
},
|| {
rayon::join(
|| &self.c[65 - n] ^ &self.c[110 - n],
|| &self.a[91 - n] & &self.a[90 - n],
)
},
)
},
|| {
rayon::join(
|| &self.b[82 - n] & &self.b[81 - n],
|| &self.c[109 - n] & &self.c[108 - n],
)
},
);
let ((o, a), (b, c)) = rayon::join(
|| {
rayon::join(
|| &(&temp_a ^ &temp_b) ^ &temp_c,
|| &temp_c ^ &(&c_and ^ &self.a[68 - n]),
)
},
|| {
rayon::join(
|| &temp_a ^ &(&a_and ^ &self.b[77 - n]),
|| &temp_b ^ &(&b_and ^ &self.c[86 - n]),
)
},
);
[o, a, b, c]
}
/// This calls `get_output_and_values` in parallel 64 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[T; 4]> {
(0..64)
.into_par_iter()
.map(|x| self.get_output_and_values(x))
.rev()
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<T> {
match &self.fhe_key {
Some(sk) => {
rayon::broadcast(|_| set_server_key(sk.clone()));
}
None => (),
}
let mut values = self.get_64_output_and_values();
match &self.fhe_key {
Some(_) => {
rayon::broadcast(|_| unset_server_key());
}
None => (),
}
let mut ret = Vec::<T>::with_capacity(64);
while let Some([o, a, b, c]) = values.pop() {
ret.push(o);
self.a.push(a);
self.b.push(b);
self.c.push(c);
}
ret
}
}
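For reference, the tap indices used above match the eSTREAM Trivium specification once register a holds state bits s1..s93, b holds s94..s177 and c holds s178..s288 (0-based indexing). A sequential clear-text sketch of a single step, shown only for comparison with the parallel n-steps-ahead code, could look like:
use std::collections::VecDeque;

// One plain-bool Trivium step; a, b and c are expected to hold 93, 84 and 111 bits.
fn trivium_step(a: &mut VecDeque<bool>, b: &mut VecDeque<bool>, c: &mut VecDeque<bool>) -> bool {
    let t1 = a[65] ^ a[92];                      // s66 ^ s93
    let t2 = b[68] ^ b[83];                      // s162 ^ s177
    let t3 = c[65] ^ c[110];                     // s243 ^ s288
    let z = t1 ^ t2 ^ t3;                        // output (keystream) bit
    let new_a = t3 ^ (c[108] & c[109]) ^ a[68];  // fed back into register a
    let new_b = t1 ^ (a[90] & a[91]) ^ b[77];    // fed back into register b
    let new_c = t2 ^ (b[81] & b[82]) ^ c[86];    // fed back into register c
    a.pop_back(); a.push_front(new_a);
    b.pop_back(); b.push_front(new_b);
    c.pop_back(); c.push_front(new_c);
    z
}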


@@ -0,0 +1,241 @@
//! This module implements the Trivium stream cipher, using u8 or FheUint8
//! for the representation of the inner bits.
use crate::static_deque::{StaticByteDeque, StaticByteDequeInput};
use tfhe::prelude::*;
use tfhe::{set_server_key, unset_server_key, FheUint8, ServerKey};
use rayon::prelude::*;
/// Internal trait specifying which operations are necessary for TriviumStreamByte generic type
pub trait TriviumByteInput<OpOutput>:
Sized
+ Clone
+ Send
+ Sync
+ StaticByteDequeInput<OpOutput>
+ std::ops::BitXor<Output = OpOutput>
+ std::ops::BitAnd<Output = OpOutput>
+ std::ops::Shr<u8, Output = OpOutput>
+ std::ops::Shl<u8, Output = OpOutput>
+ std::ops::Add<Output = OpOutput>
{
}
impl TriviumByteInput<u8> for u8 {}
impl TriviumByteInput<u8> for &u8 {}
impl TriviumByteInput<FheUint8> for FheUint8 {}
impl TriviumByteInput<FheUint8> for &FheUint8 {}
/// TriviumStreamByte: a struct implementing the Trivium stream cipher, using T for the internal
/// representation of bits (u8 or FheUint8). To be able to compute FHE operations, it also owns
/// an Option for a ServerKey.
/// Since the original Trivium registers' sizes are not multiples of 8, these registers (which
/// store byte-like objects) have a size that is an eighth of the closest multiple of 8 above the
/// original sizes.
pub struct TriviumStreamByte<T> {
a_byte: StaticByteDeque<12, T>,
b_byte: StaticByteDeque<11, T>,
c_byte: StaticByteDeque<14, T>,
fhe_key: Option<ServerKey>,
}
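Concretely, ceil(93 / 8) = 12 bytes (96 bits) for register a, ceil(84 / 8) = 11 bytes (88 bits) for b and ceil(111 / 8) = 14 bytes (112 bits) for c, which matches the deque sizes above.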
impl TriviumStreamByte<u8> {
/// Constructor for `TriviumStreamByte<u8>`: arguments are the secret key and the input vector.
/// Outputs a TriviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(key: [u8; 10], iv: [u8; 10]) -> TriviumStreamByte<u8> {
// Initialization of Trivium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_byte_reg = [0u8; 12];
let mut b_byte_reg = [0u8; 11];
let mut c_byte_reg = [0u8; 14];
for i in 0..10 {
a_byte_reg[12 - 10 + i] = key[i];
b_byte_reg[11 - 10 + i] = iv[i];
}
// Magic number 14, aka 00001110: this represents the 3 ones at the beginning of the c
// register, with additional zeros to make the register's size a multiple of 8.
c_byte_reg[0] = 14;
let mut ret =
TriviumStreamByte::<u8>::new_from_registers(a_byte_reg, b_byte_reg, c_byte_reg, None);
ret.init();
ret
}
}
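A short usage sketch for the clear byte-level variant (illustrative, not part of this diff), using the constructor signature above:
let key = [0u8; 10];                              // 80-bit key packed into 10 bytes
let iv = [0u8; 10];                               // 80-bit IV packed into 10 bytes
let mut trivium = TriviumStreamByte::<u8>::new(key, iv);
let keystream_bytes: Vec<u8> = trivium.next_64(); // 8 keystream bytes per call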
impl TriviumStreamByte<FheUint8> {
/// Constructor for `TriviumStream<FheUint8>`: arguments are the encrypted secret key and input
/// vector, and the FHE server key.
/// Outputs a TriviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(
key: [FheUint8; 10],
iv: [u8; 10],
server_key: &ServerKey,
) -> TriviumStreamByte<FheUint8> {
set_server_key(server_key.clone());
// Initialization of Trivium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_byte_reg = [0u8; 12].map(|x| FheUint8::encrypt_trivial(x));
let mut b_byte_reg = [0u8; 11].map(|x| FheUint8::encrypt_trivial(x));
let mut c_byte_reg = [0u8; 14].map(|x| FheUint8::encrypt_trivial(x));
for i in 0..10 {
a_byte_reg[12 - 10 + i] = key[i].clone();
b_byte_reg[11 - 10 + i] = FheUint8::encrypt_trivial(iv[i]);
}
// Magic number 14, aka 00001110: this represents the 3 ones at the beginning of the c
// register, with additional zeros to make the register's size a multiple of 8.
c_byte_reg[0] = FheUint8::encrypt_trivial(14u8);
unset_server_key();
let mut ret = TriviumStreamByte::<FheUint8>::new_from_registers(
a_byte_reg,
b_byte_reg,
c_byte_reg,
Some(server_key.clone()),
);
ret.init();
ret
}
}
impl<T> TriviumStreamByte<T>
where
T: TriviumByteInput<T> + Send,
for<'a> &'a T: TriviumByteInput<T>,
{
/// Internal generic constructor: arguments are already prepared registers and an optional FHE
/// server key.
fn new_from_registers(
a_register: [T; 12],
b_register: [T; 11],
c_register: [T; 14],
sk: Option<ServerKey>,
) -> Self {
Self {
a_byte: StaticByteDeque::<12, T>::new(a_register),
b_byte: StaticByteDeque::<11, T>::new(b_register),
c_byte: StaticByteDeque::<14, T>::new(c_register),
fhe_key: sk,
}
}
/// The specification of Trivium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes 8 potential future steps of Trivium, b*8 terms in the future. This does not update
/// the registers; instead it returns the output together with the three values that will be
/// used to update the registers when the time is right. This function is meant to be used in
/// parallel.
fn get_output_and_values(&self, b: usize) -> [T; 4] {
let n = b * 8 + 7;
assert!(n < 65);
let ((a1, a2, a3, a4, a5), ((b1, b2, b3, b4, b5), (c1, c2, c3, c4, c5))) = rayon::join(
|| Self::get_bytes(&self.a_byte, [91 - n, 90 - n, 68 - n, 65 - n, 92 - n]),
|| {
rayon::join(
|| Self::get_bytes(&self.b_byte, [82 - n, 81 - n, 77 - n, 68 - n, 83 - n]),
|| Self::get_bytes(&self.c_byte, [109 - n, 108 - n, 86 - n, 65 - n, 110 - n]),
)
},
);
let (((temp_a, temp_b), (temp_c, a_and)), (b_and, c_and)) = rayon::join(
|| {
rayon::join(
|| rayon::join(|| a4 ^ a5, || b4 ^ b5),
|| rayon::join(|| c4 ^ c5, || a1 & a2),
)
},
|| rayon::join(|| b1 & b2, || c1 & c2),
);
let (temp_a_2, temp_b_2, temp_c_2) = (temp_a.clone(), temp_b.clone(), temp_c.clone());
let ((o, a), (b, c)) = rayon::join(
|| {
rayon::join(
|| (temp_a_2 ^ temp_b_2) ^ temp_c_2,
|| temp_c ^ ((c_and) ^ a3),
)
},
|| rayon::join(|| temp_a ^ (a_and ^ b3), || temp_b ^ (b_and ^ c3)),
);
[o, a, b, c]
}
/// This calls `get_output_and_values` in parallel 8 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[T; 4]> {
(0..8)
.into_par_iter()
.map(|i| self.get_output_and_values(i))
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits (in 8 bytes) all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<T> {
match &self.fhe_key {
Some(sk) => {
rayon::broadcast(|_| set_server_key(sk.clone()));
}
None => (),
}
let values = self.get_64_output_and_values();
match &self.fhe_key {
Some(_) => {
rayon::broadcast(|_| unset_server_key());
}
None => (),
}
let mut bytes = Vec::<T>::with_capacity(8);
for [o, a, b, c] in values {
self.a_byte.push(a);
self.b_byte.push(b);
self.c_byte.push(c);
bytes.push(o);
}
bytes
}
/// Reconstructs a group of 5 bytes in parallel.
fn get_bytes<const N: usize>(
reg: &StaticByteDeque<N, T>,
offsets: [usize; 5],
) -> (T, T, T, T, T) {
let mut ret = offsets
.par_iter()
.rev()
.map(|&i| reg.byte(i))
.collect::<Vec<_>>();
(
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
ret.pop().unwrap(),
)
}
}
impl TriviumStreamByte<FheUint8> {
pub fn get_server_key(&self) -> &ServerKey {
&self.fhe_key.as_ref().unwrap()
}
}


@@ -0,0 +1,189 @@
use crate::static_deque::StaticDeque;
use tfhe::shortint::prelude::*;
use rayon::prelude::*;
/// TriviumStreamShortint: a struct implementing the Trivium stream cipher, using shortint
/// Ciphertexts for the internal representation of bits (each ciphertext is intended to hold a
/// single bit). To be able to compute FHE operations, it also owns a ServerKey.
pub struct TriviumStreamShortint {
a: StaticDeque<93, Ciphertext>,
b: StaticDeque<84, Ciphertext>,
c: StaticDeque<111, Ciphertext>,
internal_server_key: ServerKey,
transciphering_casting_key: KeySwitchingKey,
hl_server_key: tfhe::ServerKey,
}
impl TriviumStreamShortint {
/// Constructor for TriviumStreamShortint: arguments are the encrypted secret key, the input
/// vector, the shortint server key, the key switching key and the high-level server key.
/// Outputs a TriviumStream object already initialized (1152 steps have been run before
/// returning)
pub fn new(
key: [Ciphertext; 80],
iv: [u64; 80],
sk: ServerKey,
ksk: KeySwitchingKey,
hl_sk: tfhe::ServerKey,
) -> Self {
// Initialization of Trivium registers: a has the secret key, b the input vector,
// and c a few ones.
let mut a_register: [Ciphertext; 93] = [0; 93].map(|x| sk.create_trivial(x));
let mut b_register: [Ciphertext; 84] = [0; 84].map(|x| sk.create_trivial(x));
let mut c_register: [Ciphertext; 111] = [0; 111].map(|x| sk.create_trivial(x));
for i in 0..80 {
a_register[93 - 80 + i] = key[i].clone();
b_register[84 - 80 + i] = sk.create_trivial(iv[i]);
}
c_register[0] = sk.create_trivial(1);
c_register[1] = sk.create_trivial(1);
c_register[2] = sk.create_trivial(1);
let mut ret = Self {
a: StaticDeque::<93, Ciphertext>::new(a_register),
b: StaticDeque::<84, Ciphertext>::new(b_register),
c: StaticDeque::<111, Ciphertext>::new(c_register),
internal_server_key: sk,
transciphering_casting_key: ksk,
hl_server_key: hl_sk,
};
ret.init();
ret
}
/// The specification of Trivium includes running 1152 (= 18*64) unused steps to mix up the
/// registers, before starting the proper stream
fn init(&mut self) {
for _ in 0..18 {
self.next_64();
}
}
/// Computes one turn of the stream, updating registers and outputting the new bit.
pub fn next(&mut self) -> Ciphertext {
let [o, a, b, c] = self.get_output_and_values(0);
self.a.push(a);
self.b.push(b);
self.c.push(c);
o
}
/// Computes a potential future step of Trivium, n terms in the future. This does not update
/// the registers; instead it returns the output together with the three values that will be
/// used to update the registers when the time is right. This function is meant to be used in
/// parallel.
fn get_output_and_values(&self, n: usize) -> [Ciphertext; 4] {
let (a1, a2, a3, a4, a5) = (
&self.a[65 - n],
&self.a[92 - n],
&self.a[91 - n],
&self.a[90 - n],
&self.a[68 - n],
);
let (b1, b2, b3, b4, b5) = (
&self.b[68 - n],
&self.b[83 - n],
&self.b[82 - n],
&self.b[81 - n],
&self.b[77 - n],
);
let (c1, c2, c3, c4, c5) = (
&self.c[65 - n],
&self.c[110 - n],
&self.c[109 - n],
&self.c[108 - n],
&self.c[86 - n],
);
let temp_a = self.internal_server_key.unchecked_add(a1, a2);
let temp_b = self.internal_server_key.unchecked_add(b1, b2);
let temp_c = self.internal_server_key.unchecked_add(c1, c2);
let ((new_a, new_b), (new_c, o)) = rayon::join(
|| {
rayon::join(
|| {
let mut new_a = self.internal_server_key.unchecked_bitand(c3, c4);
self.internal_server_key
.unchecked_add_assign(&mut new_a, a5);
self.internal_server_key
.unchecked_add_assign(&mut new_a, &temp_c);
self.internal_server_key.clear_carry_assign(&mut new_a);
new_a
},
|| {
let mut new_b = self.internal_server_key.unchecked_bitand(a3, a4);
self.internal_server_key
.unchecked_add_assign(&mut new_b, b5);
self.internal_server_key
.unchecked_add_assign(&mut new_b, &temp_a);
self.internal_server_key.clear_carry_assign(&mut new_b);
new_b
},
)
},
|| {
rayon::join(
|| {
let mut new_c = self.internal_server_key.unchecked_bitand(b3, b4);
self.internal_server_key
.unchecked_add_assign(&mut new_c, c5);
self.internal_server_key
.unchecked_add_assign(&mut new_c, &temp_b);
self.internal_server_key.clear_carry_assign(&mut new_c);
new_c
},
|| {
self.internal_server_key.bitxor(
&self.internal_server_key.unchecked_add(&temp_a, &temp_b),
&temp_c,
)
},
)
},
);
[o, new_a, new_b, new_c]
}
/// This calls `get_output_and_values` in parallel 64 times, and stores all results in a Vec.
fn get_64_output_and_values(&self) -> Vec<[Ciphertext; 4]> {
(0..64)
.into_par_iter()
.map(|x| self.get_output_and_values(x))
.rev()
.collect()
}
/// Computes 64 turns of the stream, outputting the 64 bits all at once in a
/// Vec (first value is oldest, last is newest)
pub fn next_64(&mut self) -> Vec<Ciphertext> {
let mut values = self.get_64_output_and_values();
let mut ret = Vec::<Ciphertext>::with_capacity(64);
while let Some([o, a, b, c]) = values.pop() {
ret.push(o);
self.a.push(a);
self.b.push(b);
self.c.push(c);
}
ret
}
pub fn get_internal_server_key(&self) -> &ServerKey {
&self.internal_server_key
}
pub fn get_casting_key(&self) -> &KeySwitchingKey {
&self.transciphering_casting_key
}
pub fn get_hl_server_key(&self) -> &tfhe::ServerKey {
&self.hl_server_key
}
}

ci/benchmark_parser.py (new file, 384 lines)

@@ -0,0 +1,384 @@
"""
benchmark_parser
----------------
Parse criterion benchmark or keys size results.
"""
import argparse
import csv
import pathlib
import json
import sys
ONE_HOUR_IN_NANOSECONDS = 3600E9
parser = argparse.ArgumentParser()
parser.add_argument('results',
help='Location of criterion benchmark results directory. '
'If the --key-sizes or --key-gen option is used, the value must point to '
'a CSV file.')
parser.add_argument('output_file', help='File storing parsed results')
parser.add_argument('-d', '--database', dest='database',
help='Name of the database used to store results')
parser.add_argument('-w', '--hardware', dest='hardware',
help='Hardware reference used to perform benchmark')
parser.add_argument('-V', '--project-version', dest='project_version',
help='Commit hash reference')
parser.add_argument('-b', '--branch', dest='branch',
help='Git branch name on which benchmark was performed')
parser.add_argument('--commit-date', dest='commit_date',
help='Timestamp of commit hash used in project_version')
parser.add_argument('--bench-date', dest='bench_date',
help='Timestamp when benchmark was run')
parser.add_argument('--name-suffix', dest='name_suffix', default='',
help='Suffix to append to each of the result test names')
parser.add_argument('--append-results', dest='append_results', action='store_true',
help='Append parsed results to an existing file')
parser.add_argument('--walk-subdirs', dest='walk_subdirs', action='store_true',
help='Check for results in subdirectories')
parser.add_argument('--key-sizes', dest='key_sizes', action='store_true',
help='Parse only the results regarding keys size measurements')
parser.add_argument('--key-gen', dest='key_gen', action='store_true',
help='Parse only the results regarding keys generation time measurements')
parser.add_argument('--throughput', dest='throughput', action='store_true',
help='Compute and append number of operations per second and '
'operations per dollar')
parser.add_argument('--backend', dest='backend', default='cpu',
help='Backend on which benchmarks have run')
def recursive_parse(directory, walk_subdirs=False, name_suffix="", compute_throughput=False,
hardware_hourly_cost=None):
"""
Parse all the benchmark results in a directory. It will attempt to parse all the files having a
.json extension at the top-level of this directory.
:param directory: path to directory that contains raw results as :class:`pathlib.Path`
:param walk_subdirs: traverse results subdirectories if parameters change for a benchmark case.
:param name_suffix: a :class:`str` suffix to apply to each test name found
:param compute_throughput: compute number of operations per second and operations per
dollar
:param hardware_hourly_cost: hourly cost of the hardware used in dollar
:return: tuple of :class:`list` as (data points, parsing failures)
"""
excluded_directories = ["child_generate", "fork", "parent_generate", "report"]
result_values = []
parsing_failures = []
bench_class = "evaluate"
for dire in directory.iterdir():
if dire.name in excluded_directories or not dire.is_dir():
continue
for subdir in dire.iterdir():
if walk_subdirs:
if subdir.name == "new":
pass
else:
subdir = subdir.joinpath("new")
if not subdir.exists():
continue
elif subdir.name != "new":
continue
full_name, test_name = parse_benchmark_file(subdir)
if test_name is None:
parsing_failures.append((full_name, "'function_id' field is null in report"))
continue
try:
params, display_name, operator = get_parameters(test_name)
except Exception as err:
parsing_failures.append((full_name, f"failed to get parameters: {err}"))
continue
for stat_name, value in parse_estimate_file(subdir).items():
test_name_parts = list(filter(None, [test_name, stat_name, name_suffix]))
result_values.append(
_create_point(
value,
"_".join(test_name_parts),
bench_class,
"latency",
operator,
params,
display_name=display_name
)
)
if stat_name == "mean" and compute_throughput:
test_suffix = "ops-per-sec"
test_name_parts.append(test_suffix)
result_values.append(
_create_point(
compute_ops_per_second(value),
"_".join(test_name_parts),
bench_class,
"throughput",
operator,
params,
display_name="_".join([display_name, test_suffix])
)
)
test_name_parts.pop()
if hardware_hourly_cost is not None:
test_suffix = "ops-per-dollar"
test_name_parts.append(test_suffix)
result_values.append(
_create_point(
compute_ops_per_dollar(value, hardware_hourly_cost),
"_".join(test_name_parts),
bench_class,
"throughput",
operator,
params,
display_name="_".join([display_name, test_suffix])
)
)
return result_values, parsing_failures
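In practice, for a criterion results tree this walks paths of the form <results>/<benchmark_case>/new/{benchmark.json, estimates.json}; with --walk-subdirs it also descends one extra parameter-specific directory level before looking for the new/ subdirectory.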
def _create_point(value, test_name, bench_class, bench_type, operator, params, display_name=None):
return {
"value": value,
"test": test_name,
"name": display_name,
"class": bench_class,
"type": bench_type,
"operator": operator,
"params": params}
def parse_benchmark_file(directory):
"""
Parse the file containing details of the parameters used for a benchmark.
:param directory: directory where a benchmark case's results are located, as :class:`pathlib.Path`
:return: tuple of (full benchmark id, test name), both as :class:`str`
"""
raw_res = _parse_file_to_json(directory, "benchmark.json")
return raw_res["full_id"], raw_res["function_id"]
def parse_estimate_file(directory):
"""
Parse file containing timing results for a benchmark.
:param directory: directory where a benchmark case's results are located, as :class:`pathlib.Path`
:return: :class:`dict` of data points
"""
raw_res = _parse_file_to_json(directory, "estimates.json")
return {
stat_name: raw_res[stat_name]["point_estimate"]
for stat_name in ("mean", "std_dev")
}
def _parse_key_results(result_file, bench_type):
"""
Parse file containing results about operation on keys. The file must be formatted as CSV.
:param result_file: results file as :class:`pathlib.Path`
:return: tuple of :class:`list` as (data points, parsing failures)
"""
result_values = []
parsing_failures = []
with result_file.open() as csv_file:
reader = csv.reader(csv_file)
for (test_name, value) in reader:
try:
params, display_name, operator = get_parameters(test_name)
except Exception as err:
parsing_failures.append((test_name, f"failed to get parameters: {err}"))
continue
result_values.append({
"value": int(value),
"test": test_name,
"name": display_name,
"class": "keygen",
"type": bench_type,
"operator": operator,
"params": params})
return result_values, parsing_failures
def parse_key_sizes(result_file):
"""
Parse file containing key sizes results. The file must be formatted as CSV.
:param result_file: results file as :class:`pathlib.Path`
:return: tuple of :class:`list` as (data points, parsing failures)
"""
return _parse_key_results(result_file, "keysize")
def parse_key_gen_time(result_file):
"""
Parse file containing key generation time results. The file must be formatted as CSV.
:param result_file: results file as :class:`pathlib.Path`
:return: tuple of :class:`list` as (data points, parsing failures)
"""
return _parse_key_results(result_file, "latency")
def get_parameters(bench_id):
"""
Get benchmarks parameters recorded for a given benchmark case.
:param bench_id: function name used for the benchmark case
:return: :class:`tuple` as ``(benchmark parameters, display name, operator type)``
"""
params_dir = pathlib.Path("tfhe", "benchmarks_parameters", bench_id)
params = _parse_file_to_json(params_dir, "parameters.json")
display_name = params.pop("display_name")
operator = params.pop("operator_type")
# Put cryptographic parameters at the same level as the other parameters
crypto_params = params.pop("crypto_parameters")
params.update(crypto_params)
return params, display_name, operator
def compute_ops_per_dollar(data_point, product_hourly_cost):
"""
Compute the number of operations per dollar for a given ``data_point``.
:param data_point: timing value measured during benchmark in nanoseconds
:param product_hourly_cost: cost in dollar per hour of hardware used
:return: number of operations per dollar
"""
return ONE_HOUR_IN_NANOSECONDS / (product_hourly_cost * data_point)
def compute_ops_per_second(data_point):
"""
Compute the number of operations per second for a given ``data_point``.
:param data_point: timing value measured during benchmark in nanoseconds
:return: number of operations per second
"""
return 1E9 / data_point
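As a worked example with a hypothetical 20 ms (2E7 ns) data point: 1E9 / 2E7 = 50 operations per second, and, using the m6i.metal rate of 7.168 $/h from ci/ec2_products_cost.json, 3600E9 / (7.168 * 2E7) is roughly 25,000 operations per dollar.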
def _parse_file_to_json(directory, filename):
result_file = directory.joinpath(filename)
return json.loads(result_file.read_text())
def dump_results(parsed_results, filename, input_args):
"""
Dump parsed results formatted as JSON to file.
:param parsed_results: :class:`list` of data points
:param filename: filename for dump file as :class:`pathlib.Path`
:param input_args: CLI input arguments
"""
for point in parsed_results:
point["backend"] = input_args.backend
if input_args.append_results:
parsed_content = json.loads(filename.read_text())
parsed_content["points"].extend(parsed_results)
filename.write_text(json.dumps(parsed_content))
else:
filename.parent.mkdir(parents=True, exist_ok=True)
series = {
"database": input_args.database,
"hardware": input_args.hardware,
"project_version": input_args.project_version,
"branch": input_args.branch,
"insert_date": input_args.bench_date,
"commit_date": input_args.commit_date,
"points": parsed_results,
}
filename.write_text(json.dumps(series))
def check_mandatory_args(input_args):
"""
Check for availability of required input arguments; the program will exit if one of them is
not present. If the `append_results` flag is set, all the required arguments are ignored.
:param input_args: CLI input arguments
"""
if input_args.append_results:
return
missing_args = []
for arg_name in vars(input_args):
if arg_name in ["results_dir", "output_file", "name_suffix",
"append_results", "walk_subdirs", "key_sizes",
"key_gen", "throughput"]:
continue
if not getattr(input_args, arg_name):
missing_args.append(arg_name)
if missing_args:
for arg_name in missing_args:
print(f"Missing required argument: --{arg_name.replace('_', '-')}")
sys.exit(1)
if __name__ == "__main__":
args = parser.parse_args()
check_mandatory_args(args)
#failures = []
raw_results = pathlib.Path(args.results)
if args.key_sizes or args.key_gen:
if args.key_sizes:
print("Parsing key sizes results... ")
results, failures = parse_key_sizes(raw_results)
if args.key_gen:
print("Parsing key generation time results... ")
results, failures = parse_key_gen_time(raw_results)
else:
print("Parsing benchmark results... ")
hardware_cost = None
if args.throughput:
print("Throughput computation enabled")
ec2_costs = json.loads(
pathlib.Path("ci/ec2_products_cost.json").read_text(encoding="utf-8"))
try:
hardware_cost = abs(ec2_costs[args.hardware])
print(f"Hardware hourly cost: {hardware_cost} $/h")
except KeyError:
print(f"Cannot find hardware hourly cost for '{args.hardware}'")
sys.exit(1)
results, failures = recursive_parse(raw_results, args.walk_subdirs, args.name_suffix,
args.throughput, hardware_cost)
print("Parsing results done")
output_file = pathlib.Path(args.output_file)
print(f"Dump parsed results into '{output_file.resolve()}' ... ", end="")
dump_results(results, output_file, args)
print("Done")
if failures:
print("\nParsing failed for some results")
print("-------------------------------")
for name, error in failures:
print(f"[{name}] {error}")
sys.exit(1)


@@ -0,0 +1,3 @@
{
"m6i.metal": 7.168
}


@@ -0,0 +1,90 @@
import argparse
from pathlib import Path
import json
def main(args):
criterion_dir = Path(args.criterion_dir)
output_file = Path(args.output_file)
data = []
for json_file in sorted(criterion_dir.glob("**/*.json")):
if json_file.parent.name == "base" or json_file.name != "benchmark.json":
continue
try:
bench_data = json.loads(json_file.read_text())
estimate_file = json_file.with_name("estimates.json")
estimate_data = json.loads(estimate_file.read_text())
bench_function_id = bench_data["function_id"]
split = bench_function_id.split("::")
(_, function_name, parameter_set, bits) = split
(bits, _) = bits.split("_")
bits = int(bits)
estimate_mean_ms = estimate_data["mean"]["point_estimate"] / 1000000
estimate_lower_bound_ms = (
estimate_data["mean"]["confidence_interval"]["lower_bound"] / 1000000
)
estimate_upper_bound_ms = (
estimate_data["mean"]["confidence_interval"]["upper_bound"] / 1000000
)
data.append(
(
function_name,
parameter_set,
bits,
estimate_mean_ms,
estimate_lower_bound_ms,
estimate_upper_bound_ms,
)
)
except:
pass
if len(data) == 0:
print("No integer bench found, skipping writing output file")
return
with open(output_file, "w", encoding="utf-8") as output:
output.write(
"function_name,parameter_set,bits,mean_ms,"
"confidence_interval_lower_bound_ms,confidence_interval_upper_bound_ms\n"
)
# Sort by func_name, bit width and then parameters
data.sort(key=lambda x: (x[0], x[2], x[1]))
for dat in data:
(
function_name,
parameter_set,
bits,
estimate_mean_ms,
estimate_lower_bound_ms,
estimate_upper_bound_ms,
) = dat
output.write(
f"{function_name},{parameter_set},{bits},{estimate_mean_ms},"
f"{estimate_lower_bound_ms},{estimate_upper_bound_ms}\n"
)
if __name__ == "__main__":
parser = argparse.ArgumentParser("Parse criterion results to csv file")
parser.add_argument(
"--criterion-dir",
type=str,
default="target/criterion",
help="Where to look for criterion result json files",
)
parser.add_argument(
"--output-file",
type=str,
default="parsed_benches.csv",
help="Path of the output file, will be csv formatted",
)
main(parser.parse_args())


@@ -1,21 +1,69 @@
[profile.cpu-big]
region = "eu-west-3"
image_id = "ami-04deffe45b5b236fd"
instance_type = "c5a.8xlarge"
image_id = "ami-0ab73f5bd11708a85"
instance_type = "m6i.32xlarge"
[profile.gpu]
region = "us-east-1"
image_id = "ami-0ae662beb44082155"
instance_type = "p3.2xlarge"
subnet_id = "subnet-8123c9e7"
security_group = "sg-0466d33ced960ba35"
[profile.cpu-small]
region = "eu-west-3"
image_id = "ami-0ab73f5bd11708a85"
instance_type = "m6i.4xlarge"
[profile.bench]
region = "eu-west-3"
image_id = "ami-0ab73f5bd11708a85"
instance_type = "m6i.metal"
[command.cpu_test]
workflow = "aws_tfhe_tests.yml"
profile = "cpu-big"
check_run_name = "Shortint CPU AWS Tests"
check_run_name = "CPU AWS Tests"
[command.gpu_test]
workflow = "aws_tfhe_tests_w_gpu.yml"
profile = "gpu"
check_run_name = "AWS tests GPU (Slab)"
[command.cpu_integer_test]
workflow = "aws_tfhe_integer_tests.yml"
profile = "cpu-big"
check_run_name = "CPU Integer AWS Tests"
[command.cpu_multi_bit_test]
workflow = "aws_tfhe_multi_bit_tests.yml"
profile = "cpu-big"
check_run_name = "CPU AWS Multi Bit Tests"
[command.cpu_wasm_test]
workflow = "aws_tfhe_wasm_tests.yml"
profile = "cpu-small"
check_run_name = "CPU AWS WASM Tests"
[command.cpu_fast_test]
workflow = "aws_tfhe_fast_tests.yml"
profile = "cpu-big"
check_run_name = "CPU AWS Fast Tests"
[command.integer_bench]
workflow = "integer_benchmark.yml"
profile = "bench"
check_run_name = "Integer CPU AWS Benchmarks"
[command.integer_multi_bit_bench]
workflow = "integer_multi_bit_benchmark.yml"
profile = "bench"
check_run_name = "Integer multi bit CPU AWS Benchmarks"
[command.shortint_bench]
workflow = "shortint_benchmark.yml"
profile = "bench"
check_run_name = "Shortint CPU AWS Benchmarks"
[command.boolean_bench]
workflow = "boolean_benchmark.yml"
profile = "bench"
check_run_name = "Boolean CPU AWS Benchmarks"
[command.pbs_bench]
workflow = "pbs_benchmark.yml"
profile = "bench"
check_run_name = "PBS CPU AWS Benchmarks"
[command.wasm_client_bench]
workflow = "wasm_client_benchmark.yml"
profile = "cpu-small"
check_run_name = "WASM Client AWS Benchmarks"


@@ -0,0 +1,39 @@
FROM ubuntu:22.04
ENV TZ=Europe/Paris
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
# Replace the default archive.ubuntu.com with a French mirror:
# the original archive showed performance issues and is farther away
RUN sed -i 's|^deb http://archive.ubuntu.com/ubuntu/|deb http://mirror.ubuntu.ikoula.com/|g' /etc/apt/sources.list && \
sed -i 's|^deb http://security.ubuntu.com/ubuntu/|deb http://mirror.ubuntu.ikoula.com/|g' /etc/apt/sources.list
ENV CARGO_TARGET_DIR=/root/tfhe-rs-target
ARG RUST_TOOLCHAIN="stable"
WORKDIR /tfhe-wasm-tests
RUN apt-get update && \
apt-get install -y \
build-essential \
curl \
git \
python3 \
python3-pip \
python3-venv && \
rm -rf /var/lib/apt/lists/*
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs > install-rustup.sh && \
chmod +x install-rustup.sh && \
./install-rustup.sh -y --default-toolchain "${RUST_TOOLCHAIN}" \
-c rust-src -t wasm32-unknown-unknown && \
. "$HOME/.cargo/env" && \
cargo install wasm-pack && \
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh > install-node.sh && \
chmod +x install-node.sh && \
./install-node.sh && \
. "$HOME/.nvm/nvm.sh" && \
bash -i -c 'nvm install node && nvm use node'
WORKDIR /tfhe-wasm-tests/tfhe-rs/


@@ -2,8 +2,37 @@
set -e
function usage() {
echo "$0: build and/or run the C API tests"
echo
echo "--help Print this message"
echo "--build-only Pass to only build the tests without running them"
echo
}
BUILD_ONLY=0
while [ -n "$1" ]
do
case "$1" in
"--help" | "-h" )
usage
exit 0
;;
"--build-only" )
BUILD_ONLY=1
;;
*)
echo "Unknown param : $1"
exit 1
;;
esac
shift
done
CURR_DIR="$(dirname "$0")"
ARCH_FEATURE="$("${CURR_DIR}/get_arch_feature.sh")"
REPO_ROOT="${CURR_DIR}/.."
TFHE_BUILD_DIR="${REPO_ROOT}/tfhe/build/"
@@ -11,10 +40,20 @@ mkdir -p "${TFHE_BUILD_DIR}"
cd "${TFHE_BUILD_DIR}"
cmake .. -DCMAKE_BUILD_TYPE=RELEASE
RUSTFLAGS="-C target-cpu=native" cargo ${1:+"${1}"} build \
--release --features="${ARCH_FEATURE}",boolean-c-api,shortint-c-api -p tfhe
cmake .. -DCMAKE_BUILD_TYPE=RELEASE -DCARGO_PROFILE="${CARGO_PROFILE}"
make -j
make "test"
if [[ "${BUILD_ONLY}" == "1" ]]; then
exit 0
fi
nproc_bin=nproc
# macOS detects CPUs differently
if [[ $(uname) == "Darwin" ]]; then
nproc_bin="sysctl -n hw.logicalcpu"
fi
# Let's go parallel
ARGS="-j$(${nproc_bin})" make test

scripts/check_cargo_min_ver.sh (new executable file, 62 lines)

@@ -0,0 +1,62 @@
#!/usr/bin/env bash
set -e
CURR_DIR="$(dirname "$0")"
REL_CARGO_TOML_PATH="${CURR_DIR}/../tfhe/Cargo.toml"
MIN_RUST_VERSION="$(grep rust-version "${REL_CARGO_TOML_PATH}" | cut -d '=' -f 2 | xargs)"
function usage() {
echo "$0: check minimum cargo version"
echo
echo "--help Print this message"
echo "--rust-toolchain The toolchain to check the version for with leading"
echo "--min-rust-version Check toolchain version is >= to this version, default is ${MIN_RUST_VERSION}"
echo
}
RUST_TOOLCHAIN=""
while [ -n "$1" ]
do
case "$1" in
"--help" | "-h" )
usage
exit 0
;;
"--rust-toolchain" )
shift
RUST_TOOLCHAIN="$1"
;;
"--min-rust-version" )
shift
MIN_RUST_VERSION="$1"
;;
*)
echo "Unknown param : $1"
exit 1
;;
esac
shift
done
if [[ "${RUST_TOOLCHAIN::1}" != "+" ]]; then
RUST_TOOLCHAIN="+${RUST_TOOLCHAIN}"
fi
ver_string="$(cargo ${RUST_TOOLCHAIN:+"${RUST_TOOLCHAIN}"} --version | \
cut -d ' ' -f 2 | cut -d '-' -f 1)"
ver_major="$(echo "${ver_string}" | cut -d '.' -f 1)"
ver_minor="$(echo "${ver_string}" | cut -d '.' -f 2)"
min_ver_major="$(echo "${MIN_RUST_VERSION}" | cut -d '.' -f 1)"
min_ver_minor="$(echo "${MIN_RUST_VERSION}" | cut -d '.' -f 2)"
if [[ "${ver_major}" -ge "${min_ver_major}" ]] && [[ "${ver_minor}" -ge "${min_ver_minor}" ]]; then
exit 0
fi
exit 1
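For example (illustrative values), a 1.72.0 toolchain checked against a minimum of 1.66 passes, since major 1 >= 1 and minor 72 >= 66, and the script exits 0; otherwise it exits 1.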


@@ -4,7 +4,7 @@ set -e
ARCH_FEATURE=x86_64
IS_AARCH64="$( (uname -a | grep -c arm64) || true)"
IS_AARCH64="$( (uname -a | grep -c "arm64\|aarch64") || true)"
if [[ "${IS_AARCH64}" != "0" ]]; then
ARCH_FEATURE=aarch64

scripts/integer-tests.sh (new executable file, 165 lines)

@@ -0,0 +1,165 @@
#!/bin/bash
set -e
function usage() {
echo "$0: shortint test runner"
echo
echo "--help Print this message"
echo "--rust-toolchain The toolchain to run the tests with default: stable"
echo "--multi-bit Run multi-bit tests only: default off"
echo "--cargo-profile The cargo profile used to build tests"
echo
}
RUST_TOOLCHAIN="+stable"
multi_bit=""
not_multi_bit="_multi_bit"
cargo_profile="release"
while [ -n "$1" ]
do
case "$1" in
"--help" | "-h" )
usage
exit 0
;;
"--rust-toolchain" )
shift
RUST_TOOLCHAIN="$1"
;;
"--multi-bit" )
multi_bit="_multi_bit"
not_multi_bit=""
;;
"--cargo-profile" )
shift
cargo_profile="$1"
;;
*)
echo "Unknown param : $1"
exit 1
;;
esac
shift
done
if [[ "${RUST_TOOLCHAIN::1}" != "+" ]]; then
RUST_TOOLCHAIN="+${RUST_TOOLCHAIN}"
fi
CURR_DIR="$(dirname "$0")"
ARCH_FEATURE="$("${CURR_DIR}/get_arch_feature.sh")"
nproc_bin=nproc
# macOS detects CPUs differently
if [[ $(uname) == "Darwin" ]]; then
nproc_bin="sysctl -n hw.logicalcpu"
fi
n_threads="$(${nproc_bin})"
if uname -a | grep "arm64"; then
if [[ $(uname) == "Darwin" ]]; then
# Keys are 4.7 gigs at max, CI M1 macs only have 8 gigs of RAM
n_threads=1
fi
else
# Keys are 4.7 gigs at max, test machine has 32 gigs of RAM
n_threads=6
fi
if [[ "${BIG_TESTS_INSTANCE}" != TRUE ]]; then
if [[ "${FAST_TESTS}" != TRUE ]]; then
# block pbs are too slow for high params
# mul_crt_4_4 is extremely flaky (~80% failure)
# test_wopbs_bivariate_crt_wopbs_param_message tests generate tables that are too big at the moment
# test_integer_smart_mul_param_message_4_carry_4_ks_pbs is too slow
# so is test_integer_default_add_sequence_multi_thread_param_message_4_carry_4_ks_pbs
filter_expression="""\
test(/^integer::.*${multi_bit}/) \
${not_multi_bit:+"and not test(~${not_multi_bit})"} \
and not test(/.*_block_pbs(_base)?_param_message_[34]_carry_[34]_ks_pbs$/) \
and not test(~mul_crt_param_message_4_carry_4_ks_pbs) \
and not test(/.*test_wopbs_bivariate_crt_wopbs_param_message_[34]_carry_[34]_ks_pbs$/) \
and not test(/.*test_integer_smart_mul_param_message_4_carry_4_ks_pbs$/) \
and not test(/.*test_integer_default_add_sequence_multi_thread_param_message_4_carry_4_ks_pbs$/)"""
else
# test only fast default operations with only two set of parameters
filter_expression="""\
test(/^integer::.*${multi_bit}/) \
${not_multi_bit:+"and not test(~${not_multi_bit})"} \
and test(/.*_default_.*?_param${multi_bit}_message_[2-3]_carry_[2-3]${multi_bit:+"_group_2"}_ks_pbs/) \
and not test(/.*_param_message_[14]_carry_[14]_ks_pbs$/) \
and not test(/.*default_add_sequence_multi_thread_param_message_3_carry_3_ks_pbs$/)"""
fi
cargo "${RUST_TOOLCHAIN}" nextest run \
--tests \
--cargo-profile "${cargo_profile}" \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",integer,internal-keycache \
--test-threads "${n_threads}" \
-E "$filter_expression"
if [[ "${multi_bit}" == "" ]]; then
cargo "${RUST_TOOLCHAIN}" test \
--profile "${cargo_profile}" \
--package tfhe \
--features="${ARCH_FEATURE}",integer,internal-keycache \
--doc \
-- integer::
fi
else
if [[ "${FAST_TESTS}" != TRUE ]]; then
# block pbs are too slow for high params
# mul_crt_4_4 is extremely flaky (~80% failure)
# test_wopbs_bivariate_crt_wopbs_param_message tests generate tables that are too big at the moment
# test_integer_smart_mul_param_message_4_carry_4_ks_pbs is too slow
# so is test_integer_default_add_sequence_multi_thread_param_message_4_carry_4_ks_pbs
filter_expression="""\
test(/^integer::.*${multi_bit}/) \
${not_multi_bit:+"and not test(~${not_multi_bit})"} \
and not test(/.*_block_pbs(_base)?_param_message_[34]_carry_[34]_ks_pbs$/) \
and not test(~mul_crt_param_message_4_carry_4_ks_pbs) \
and not test(/.*test_wopbs_bivariate_crt_wopbs_param_message_[34]_carry_[34]_ks_pbs$/) \
and not test(/.*test_integer_smart_mul_param_message_4_carry_4_ks_pbs$/) \
and not test(/.*test_integer_default_add_sequence_multi_thread_param_message_4_carry_4_ks_pbs$/)"""
else
# test only fast default operations with only two set of parameters
filter_expression="""\
test(/^integer::.*${multi_bit}/) \
${not_multi_bit:+"and not test(~${not_multi_bit})"} \
and test(/.*_default_.*?_param${multi_bit}_message_[2-3]_carry_[2-3]${multi_bit:+"_group_2"}_ks_pbs/) \
and not test(/.*_param_message_[14]_carry_[14]_ks_pbs$/) \
and not test(/.*default_add_sequence_multi_thread_param_message_3_carry_3_ks_pbs$/)"""
fi
num_cpu_threads="$(${nproc_bin})"
num_threads=$((num_cpu_threads * 2 / 3))
cargo "${RUST_TOOLCHAIN}" nextest run \
--tests \
--cargo-profile "${cargo_profile}" \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",integer,internal-keycache \
--test-threads $num_threads \
-E "$filter_expression"
if [[ "${multi_bit}" == "" ]]; then
cargo "${RUST_TOOLCHAIN}" test \
--profile "${cargo_profile}" \
--package tfhe \
--features="${ARCH_FEATURE}",integer,internal-keycache \
--doc \
-- --test-threads="$(${nproc_bin})" integer::
fi
fi
echo "Test ran in $SECONDS seconds"

scripts/no_dbg_calls.sh (new executable file, 20 lines)

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -e
THIS_SCRIPT_NAME="$(basename "$0")"
TMP_FILE="$(mktemp)"
COUNT="$(git grep -rniI "dbg!" . | grep -v "${THIS_SCRIPT_NAME}" | \
tee "${TMP_FILE}" | wc -l | tr -d '[:space:]')"
cat "${TMP_FILE}"
rm -rf "${TMP_FILE}"
if [[ "${COUNT}" == "0" ]]; then
exit 0
else
echo "dbg macro calls detected, see output log above"
exit 1
fi

scripts/no_tfhe_typo.sh (new executable file, 20 lines)

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
set -e
THIS_SCRIPT_NAME="$(basename "$0")"
TMP_FILE="$(mktemp)"
COUNT="$(git grep -rniI "thfe\|tfhr\|thfr" . | grep -v "${THIS_SCRIPT_NAME}" | \
tee "${TMP_FILE}" | wc -l | tr -d '[:space:]')"
cat "${TMP_FILE}"
rm -rf "${TMP_FILE}"
if [[ "${COUNT}" == "0" ]]; then
exit 0
else
echo "tfhe typo detected, see output log above"
exit 1
fi


@@ -2,6 +2,54 @@
set -e
function usage() {
echo "$0: shortint test runner"
echo
echo "--help Print this message"
echo "--rust-toolchain The toolchain to run the tests with default: stable"
echo "--multi-bit Run multi-bit tests only: default off"
echo "--cargo-profile The cargo profile used to build tests"
echo
}
RUST_TOOLCHAIN="+stable"
multi_bit=""
cargo_profile="release"
while [ -n "$1" ]
do
case "$1" in
"--help" | "-h" )
usage
exit 0
;;
"--rust-toolchain" )
shift
RUST_TOOLCHAIN="$1"
;;
"--multi-bit" )
multi_bit="_multi_bit"
;;
"--cargo-profile" )
shift
cargo_profile="$1"
;;
*)
echo "Unknown param : $1"
exit 1
;;
esac
shift
done
if [[ "${RUST_TOOLCHAIN::1}" != "+" ]]; then
RUST_TOOLCHAIN="+${RUST_TOOLCHAIN}"
fi
CURR_DIR="$(dirname "$0")"
ARCH_FEATURE="$("${CURR_DIR}/get_arch_feature.sh")"
@@ -12,49 +60,135 @@ if [[ $(uname) == "Darwin" ]]; then
nproc_bin="sysctl -n hw.logicalcpu"
fi
n_threads="$(${nproc_bin})"
n_threads_small="$(${nproc_bin})"
n_threads_big="${n_threads_small}"
# TODO: automate thread selection by measuring host machine RAM and loading the key sizes from the
# 'keys' cache directory, keeping a safety margin for test execution
if uname -a | grep "arm64"; then
if [[ $(uname) == "Darwin" ]]; then
# Keys are 2 gigs at max, CI M1 macs only have 8 gigs of RAM, so be a bit conservative here
n_threads_small=3
# Keys are 4.7 gigs at max, CI M1 macs only have 8 gigs of RAM
n_threads=1
n_threads_big=1
fi
else
# Keys are 4.7 gigs at max, test machine has 32 gigs of RAM
n_threads=6
# Keys are 4.7 gigs at max, test machine has 64 gigs of RAM
n_threads_big=13
fi
filter_expression=''\
'('\
' test(/^shortint::server_key::.*_param_message_1_carry_1$/)'\
'or test(/^shortint::server_key::.*_param_message_1_carry_2$/)'\
'or test(/^shortint::server_key::.*_param_message_1_carry_3$/)'\
'or test(/^shortint::server_key::.*_param_message_1_carry_4$/)'\
'or test(/^shortint::server_key::.*_param_message_1_carry_5$/)'\
'or test(/^shortint::server_key::.*_param_message_1_carry_6$/)'\
'or test(/^shortint::server_key::.*_param_message_2_carry_2$/)'\
'or test(/^shortint::server_key::.*_param_message_3_carry_3$/)'\
'or test(/^shortint::server_key::.*_param_message_4_carry_4$/)'\
')'\
'and not test(~smart_add_and_mul)' # This test is too slow
if [[ "${BIG_TESTS_INSTANCE}" != TRUE ]]; then
if [[ "${FAST_TESTS}" != TRUE ]]; then
filter_expression_small_params="""\
(\
test(/^shortint::.*_param${multi_bit}_message_1_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_4${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_5${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_6${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
)\
and not test(~smart_add_and_mul)""" # This test is too slow
else
filter_expression_small_params="""\
(\
test(/^shortint::.*_param${multi_bit}_message_2_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
)\
and not test(~smart_add_and_mul)""" # This test is too slow
fi
export RUSTFLAGS="-C target-cpu=native"
# Run only tests (no examples or benches) with small params and more threads
cargo "${RUST_TOOLCHAIN}" nextest run \
--tests \
--cargo-profile "${cargo_profile}" \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--test-threads "${n_threads_small}" \
-E "${filter_expression_small_params}"
# Run only tests (no examples or benches)
cargo ${1:+"${1}"} nextest run \
--tests \
--release \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--test-threads "${n_threads}" \
-E "${filter_expression}"
if [[ "${FAST_TESTS}" != TRUE ]]; then
filter_expression_big_params="""\
(\
test(/^shortint::.*_param${multi_bit}_message_4_carry_4${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
) \
and not test(~smart_add_and_mul)"""
cargo ${1:+"${1}"} test \
--release \
--package tfhe \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--doc \
shortint::
# Run only tests (no examples or benches) with big params and fewer threads
cargo "${RUST_TOOLCHAIN}" nextest run \
--tests \
--cargo-profile "${cargo_profile}" \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--test-threads "${n_threads_big}" \
-E "${filter_expression_big_params}"
if [[ "${multi_bit}" == "" ]]; then
cargo "${RUST_TOOLCHAIN}" test \
--profile "${cargo_profile}" \
--package tfhe \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--doc \
-- shortint::
fi
fi
else
if [[ "${FAST_TESTS}" != TRUE ]]; then
filter_expression="""\
(\
test(/^shortint::.*_param${multi_bit}_message_1_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_4${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_5${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_1_carry_6${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_3_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_4_carry_4${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
)\
and not test(~smart_add_and_mul)""" # This test is too slow
else
filter_expression="""\
(\
test(/^shortint::.*_param${multi_bit}_message_2_carry_1${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_2${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
or test(/^shortint::.*_param${multi_bit}_message_2_carry_3${multi_bit:+"_group_[0-9]"}(_compact_pk)?_ks_pbs/) \
)\
and not test(~smart_add_and_mul)""" # This test is too slow
fi
# Run only tests (no examples or benches) with small params and more threads
cargo "${RUST_TOOLCHAIN}" nextest run \
--tests \
--cargo-profile "${cargo_profile}" \
--package tfhe \
--profile ci \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--test-threads "$(${nproc_bin})" \
-E "${filter_expression}"
if [[ "${multi_bit}" == "" ]]; then
cargo "${RUST_TOOLCHAIN}" test \
--profile "${cargo_profile}" \
--package tfhe \
--features="${ARCH_FEATURE}",shortint,internal-keycache \
--doc \
-- --test-threads="$(${nproc_bin})" shortint::
fi
fi
echo "Test ran in $SECONDS seconds"

tasks/Cargo.toml (new file, 12 lines)

@@ -0,0 +1,12 @@
[package]
name = "tasks"
version = "0.0.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
clap = "3.1"
lazy_static = "1.4"
log = "0.4"
simplelog = "0.12"


@@ -0,0 +1,453 @@
use crate::utils::project_root;
use std::io::{Error, ErrorKind};
use std::{fmt, fs};
fn recurse_find_rs_files(
root_dir: std::path::PathBuf,
rs_files: &mut Vec<std::path::PathBuf>,
at_root: bool,
) {
for curr_entry in root_dir.read_dir().unwrap() {
let curr_path = curr_entry.unwrap().path().canonicalize().unwrap();
if curr_path.is_file() {
if let Some(extension) = curr_path.extension() {
if extension == "rs" {
rs_files.push(curr_path);
}
}
} else if curr_path.is_dir() {
if at_root {
// Hardcoded ignores for root .git and target
match curr_path.file_name().unwrap().to_str().unwrap() {
".git" => continue,
"target" => continue,
_ => recurse_find_rs_files(curr_path.to_path_buf(), rs_files, false),
};
} else {
recurse_find_rs_files(curr_path.to_path_buf(), rs_files, false);
}
}
}
}
#[derive(Debug)]
struct LatexEscapeToolError {
details: String,
}
impl LatexEscapeToolError {
fn new(msg: &str) -> LatexEscapeToolError {
LatexEscapeToolError {
details: msg.to_string(),
}
}
}
impl fmt::Display for LatexEscapeToolError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}", self.details)
}
}
impl std::error::Error for LatexEscapeToolError {}
const DOC_TEST_START: &str = "///";
const DOC_COMMENT_START: &str = "//!";
const BACKSLASH_UTF8_LEN: usize = '\\'.len_utf8();
enum LineType {
DocTest { code_block_limit: bool },
DocComment { code_block_limit: bool },
EmptyLine,
Other,
}
fn get_line_type_and_trimmed_line(line: &str) -> (LineType, &str) {
let mut trimmed_line = line.trim_start();
let line_type = if trimmed_line.starts_with(DOC_COMMENT_START) {
trimmed_line = trimmed_line
.strip_prefix(DOC_COMMENT_START)
.unwrap()
.trim_start();
let has_code_block_limit = trimmed_line.starts_with("```");
LineType::DocComment {
code_block_limit: has_code_block_limit,
}
} else if trimmed_line.starts_with(DOC_TEST_START) {
trimmed_line = trimmed_line
.strip_prefix(DOC_TEST_START)
.unwrap()
.trim_start();
let has_code_block_limit = trimmed_line.starts_with("```");
LineType::DocTest {
code_block_limit: has_code_block_limit,
}
} else if trimmed_line.is_empty() {
LineType::EmptyLine
} else {
LineType::Other
};
(line_type, trimmed_line)
}
struct CommentContent<'a> {
is_in_code_block: bool,
line_start: &'a str,
line_content: &'a str,
}
fn find_contiguous_doc_comment<'a>(
lines: &[&'a str],
start_line_idx: usize,
) -> (Vec<CommentContent<'a>>, usize) {
let mut doc_comment_end_line_idx = start_line_idx + 1;
let mut is_in_code_block = false;
let mut contiguous_doc_comment = Vec::<CommentContent>::new();
for (line_idx, line) in lines.iter().enumerate().skip(start_line_idx) {
let (line_type, line_content) = get_line_type_and_trimmed_line(line);
let line_start = &line[..line.len() - line_content.len()];
// If there is an empty line we are still in the DocComment
let line_type = if let LineType::EmptyLine = line_type {
LineType::DocComment {
code_block_limit: false,
}
} else {
line_type
};
match line_type {
LineType::DocComment { code_block_limit } => {
if code_block_limit {
// We have found a code block limit, either starting or ending, toggle the
// flag
is_in_code_block = !is_in_code_block;
};
contiguous_doc_comment.push(CommentContent {
is_in_code_block,
line_start,
line_content,
});
// For now the only thing we know is that the next line is potentially the end of
// the comment block; this is required so that a file made of one giant comment block
// still gets the proper bound
doc_comment_end_line_idx = line_idx + 1;
}
_ => {
// We are sure that the current line is the end of the comment block
doc_comment_end_line_idx = line_idx;
break;
}
};
}
(contiguous_doc_comment, doc_comment_end_line_idx)
}
fn find_contiguous_doc_test<'a>(
lines: &[&'a str],
start_line_idx: usize,
) -> (Vec<CommentContent<'a>>, usize) {
let mut doc_test_end_line_idx = start_line_idx + 1;
let mut is_in_code_block = false;
let mut contiguous_doc_test = Vec::<CommentContent>::new();
for (line_idx, line) in lines.iter().enumerate().skip(start_line_idx) {
let (line_type, line_content) = get_line_type_and_trimmed_line(line);
let line_start = &line[..line.len() - line_content.len()];
// If there is an empty line we are still in the DocTest
let line_type = if let LineType::EmptyLine = line_type {
LineType::DocTest {
code_block_limit: false,
}
} else {
line_type
};
match line_type {
LineType::DocTest { code_block_limit } => {
if code_block_limit {
// We have found a code block limit, either starting or ending, toggle the
// flag
is_in_code_block = !is_in_code_block;
};
contiguous_doc_test.push(CommentContent {
is_in_code_block,
line_start,
line_content,
});
// For now the only thing we know is that the next line is potentially the end of
// the comment block; this is required so that a file made of one giant comment block
// still gets the proper bound
doc_test_end_line_idx = line_idx + 1;
}
_ => {
// We are sure that the current line is the end of the comment block
doc_test_end_line_idx = line_idx;
break;
}
};
}
(contiguous_doc_test, doc_test_end_line_idx)
}
fn find_contiguous_part_in_doc_test_or_comment(
part_is_code_block: bool,
full_doc_comment_content: &Vec<CommentContent>,
part_start_idx: usize,
) -> (usize, usize) {
let mut next_line_idx = part_start_idx + 1;
loop {
// We have exhausted the doc comment content, break
if next_line_idx == full_doc_comment_content.len() {
break;
}
let CommentContent {
is_in_code_block: next_line_is_in_code_block,
line_start: _,
line_content: _,
} = full_doc_comment_content[next_line_idx];
// We check if the next line is in a different part, if so we break
if next_line_is_in_code_block != part_is_code_block {
break;
}
next_line_idx += 1;
}
// next_line_idx points to the end of the part and is therefore returned as the part_stop_idx
(part_start_idx, next_line_idx)
}
enum LatexEquationKind {
Inline,
Multiline,
NotAnEquation,
}
fn escape_underscores_rewrite_equations(
comment_to_rewrite: &[CommentContent],
rewritten_content: &mut String,
) -> Result<(), LatexEscapeToolError> {
let mut latex_equation_kind = LatexEquationKind::NotAnEquation;
for CommentContent {
is_in_code_block: _,
line_start,
line_content,
} in comment_to_rewrite.iter()
{
rewritten_content.push_str(line_start);
let mut previous_char = '\0';
let mut chars = line_content.chars().peekable();
while let Some(current_char) = chars.next() {
match (previous_char, current_char) {
('$', '$') => {
match latex_equation_kind {
LatexEquationKind::Inline => {
// Problem: we found an opening '$$' after an opening '$', return an error
return Err(LatexEscapeToolError::new(
"Found an opening '$' without a corresponding closing '$'",
));
}
LatexEquationKind::Multiline => {
// Closing $$, no more in a latex equation
latex_equation_kind = LatexEquationKind::NotAnEquation
}
LatexEquationKind::NotAnEquation => {
// Opening $$, in a multiline latex equation
latex_equation_kind = LatexEquationKind::Multiline
}
};
}
(_, '$') => {
let is_inline_marker = chars.peek() != Some(&'$');
if is_inline_marker {
match latex_equation_kind {
LatexEquationKind::Multiline => {
// Problem: we found an opening '$' after an opening '$$', return an error
return Err(LatexEscapeToolError::new(
"Found an opening '$$' without a corresponding closing '$$'",
));
}
LatexEquationKind::Inline => {
// Closing $, no more in a latex equation
latex_equation_kind = LatexEquationKind::NotAnEquation
}
LatexEquationKind::NotAnEquation => {
// Opening $, in an inline latex equation
latex_equation_kind = LatexEquationKind::Inline
}
};
}
// If the marker is not an inline marker but a multiline marker, let the other
// case manage it at the next iteration
}
// If the _ is not escaped and we are in an equation we need to escape it
(prev, '_') if prev != '\\' => match latex_equation_kind {
LatexEquationKind::NotAnEquation => (),
_ => rewritten_content.push('\\'),
},
_ => (),
}
rewritten_content.push(current_char);
previous_char = current_char;
}
}
Ok(())
}
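As a before/after illustration (hypothetical doc line, not from the repository): an input such as
//! Computes $x_1 + y_2$ for index i_0
is rewritten to
//! Computes $x\_1 + y\_2$ for index i_0
that is, underscores inside $...$ or $$...$$ equations are escaped, while identifiers outside equations (and code blocks, which are handled separately below) are left untouched.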
fn process_doc_lines_until_impossible<'a>(
lines: &[&'a str],
rewritten_content: &'a mut String,
comment_search_fn: fn(&[&'a str], usize) -> (Vec<CommentContent<'a>>, usize),
start_line_idx: usize,
) -> Result<usize, LatexEscapeToolError> {
let (full_doc_content, doc_end_line_idx) = comment_search_fn(lines, start_line_idx);
// Now we find code block parts OR pure comment parts
let mut current_line_in_doc_idx = 0;
while current_line_in_doc_idx < full_doc_content.len() {
let CommentContent {
is_in_code_block,
line_start: _,
line_content: _,
} = full_doc_content[current_line_in_doc_idx];
let (current_part_start_idx, current_part_stop_idx) =
find_contiguous_part_in_doc_test_or_comment(
is_in_code_block,
&full_doc_content,
current_line_in_doc_idx,
);
let current_part_content = &full_doc_content[current_part_start_idx..current_part_stop_idx];
// The current part is a code block
if is_in_code_block {
for CommentContent {
is_in_code_block: _,
line_start,
line_content,
} in current_part_content.iter()
{
// We can just push the content unmodified
rewritten_content.push_str(line_start);
rewritten_content.push_str(line_content);
}
} else {
// The part is a pure comment, we need to rewrite equations
escape_underscores_rewrite_equations(current_part_content, rewritten_content)?;
}
current_line_in_doc_idx += current_part_content.len();
}
Ok(doc_end_line_idx)
}
fn process_non_doc_lines_until_impossible(
lines: &Vec<&str>,
rewritten_content: &mut String,
mut line_idx: usize,
) -> usize {
while line_idx < lines.len() {
let line = lines[line_idx];
match get_line_type_and_trimmed_line(line) {
(LineType::Other, _) => {
rewritten_content.push_str(line);
line_idx += 1;
}
_ => break,
};
}
line_idx
}
fn escape_underscore_in_latex_doc_in_file(
file_path: &std::path::Path,
) -> Result<(), LatexEscapeToolError> {
let file_name = file_path.to_str().unwrap();
let content = std::fs::read_to_string(file_name).unwrap();
let number_of_underscores = content.matches('_').count();
let potential_additional_capacity_required = number_of_underscores * BACKSLASH_UTF8_LEN;
// Enough for the length of the original string + the length if we had to escape *all* `_`,
// which won't happen, but this avoids reallocations
let mut rewritten_content =
String::with_capacity(content.len() + potential_additional_capacity_required);
let content_by_lines: Vec<&str> = content.split_inclusive('\n').collect();
let mut line_idx = 0_usize;
while line_idx < content_by_lines.len() {
let line = content_by_lines[line_idx];
let (line_type, _) = get_line_type_and_trimmed_line(line);
line_idx = match line_type {
LineType::DocComment {
code_block_limit: _,
} => process_doc_lines_until_impossible(
&content_by_lines,
&mut rewritten_content,
find_contiguous_doc_comment,
line_idx,
)?,
LineType::DocTest {
code_block_limit: _,
} => process_doc_lines_until_impossible(
&content_by_lines,
&mut rewritten_content,
find_contiguous_doc_test,
line_idx,
)?,
LineType::Other => process_non_doc_lines_until_impossible(
&content_by_lines,
&mut rewritten_content,
line_idx,
),
LineType::EmptyLine => {
rewritten_content.push_str(line);
line_idx + 1
}
};
}
fs::write(file_name, rewritten_content).unwrap();
Ok(())
}
pub fn escape_underscore_in_latex_doc() -> Result<(), Error> {
let project_root = project_root();
let mut src_files: Vec<std::path::PathBuf> = Vec::new();
recurse_find_rs_files(project_root, &mut src_files, true);
println!("Found {} files to process.", src_files.len());
let mut files_with_problems: Vec<(std::path::PathBuf, LatexEscapeToolError)> = Vec::new();
println!("Processing...");
for file in src_files.into_iter() {
if let Err(err) = escape_underscore_in_latex_doc_in_file(&file) {
files_with_problems.push((file, err));
}
}
println!("Done!");
if !files_with_problems.is_empty() {
for (file_with_problem, error) in files_with_problems.iter() {
println!(
"File: {}, has error: {}",
file_with_problem.display(),
error
);
}
return Err(Error::new(
ErrorKind::InvalidInput,
"Issues while processing files, check log.",
));
}
Ok(())
}

tasks/src/main.rs

@@ -0,0 +1,88 @@
#[macro_use]
extern crate lazy_static;
use clap::{Arg, Command};
use log::LevelFilter;
use simplelog::{ColorChoice, CombinedLogger, Config, TermLogger, TerminalMode};
use std::collections::HashMap;
use std::path::PathBuf;
use std::sync::atomic::AtomicBool;
use std::sync::atomic::Ordering::Relaxed;
mod format_latex_doc;
mod utils;
// -------------------------------------------------------------------------------------------------
// CONSTANTS
// -------------------------------------------------------------------------------------------------
lazy_static! {
static ref DRY_RUN: AtomicBool = AtomicBool::new(false);
static ref ROOT_DIR: PathBuf = utils::project_root();
static ref ENV_TARGET_NATIVE: utils::Environment = {
let mut env = HashMap::new();
env.insert("RUSTFLAGS", "-Ctarget-cpu=native");
env
};
}
// -------------------------------------------------------------------------------------------------
// MACROS
// -------------------------------------------------------------------------------------------------
#[macro_export]
macro_rules! cmd {
(<$env: ident> $cmd: expr) => {
$crate::utils::execute($cmd, Some(&*$env), Some(&*$crate::ROOT_DIR))
};
($cmd: expr) => {
$crate::utils::execute($cmd, None, Some(&*$crate::ROOT_DIR))
};
}
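// Usage sketch (illustrative commands): `cmd!("cargo build")` runs the command from the project root,
// while `cmd!(<ENV_TARGET_NATIVE> "cargo bench")` additionally sets RUSTFLAGS=-Ctarget-cpu=native.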
// -------------------------------------------------------------------------------------------------
// MAIN
// -------------------------------------------------------------------------------------------------
fn main() -> Result<(), std::io::Error> {
// We parse the input args
let matches = Command::new("tasks")
.about("Rust scripts runner")
.arg(
Arg::new("verbose")
.short('v')
.long("verbose")
.help("Prints debug messages"),
)
.arg(
Arg::new("dry-run")
.long("dry-run")
.help("Do not execute the commands"),
)
.subcommand(Command::new("format_latex_doc").about("Escape underscores in latex equations"))
.arg_required_else_help(true)
.get_matches();
// We initialize the logger with proper verbosity
let verb = if matches.contains_id("verbose") {
LevelFilter::Debug
} else {
LevelFilter::Info
};
CombinedLogger::init(vec![TermLogger::new(
verb,
Config::default(),
TerminalMode::Mixed,
ColorChoice::Auto,
)])
.unwrap();
// We set the dry-run mode if present
if matches.contains_id("dry-run") {
DRY_RUN.store(true, Relaxed);
}
if matches.subcommand_matches("format_latex_doc").is_some() {
format_latex_doc::escape_underscore_in_latex_doc()?;
}
Ok(())
}
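A minimal invocation sketch for this task runner, assuming the crate above is exposed in the workspace as the `tasks` package:
cargo run --release -p tasks -- format_latex_doc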

tasks/src/utils.rs

@@ -0,0 +1,50 @@
use log::{debug, info};
use std::collections::HashMap;
use std::io::{Error, ErrorKind};
use std::path::{Path, PathBuf};
use std::process::{Command, Stdio};
use std::sync::atomic::Ordering::Relaxed;
pub type Environment = HashMap<&'static str, &'static str>;
#[allow(dead_code)]
pub fn execute(cmd: &str, env: Option<&Environment>, cwd: Option<&PathBuf>) -> Result<(), Error> {
info!("Executing {}", cmd);
debug!("Env {:?}", env);
debug!("Cwd {:?}", cwd);
if crate::DRY_RUN.load(Relaxed) {
info!("Skipping execution because of --dry-run mode");
return Ok(());
}
let mut command = Command::new("sh");
command
.arg("-c")
.arg(cmd)
.stderr(Stdio::inherit())
.stdout(Stdio::inherit());
if let Some(env) = env {
for (key, val) in env.iter() {
command.env(key, val);
}
}
if let Some(cwd) = cwd {
command.current_dir(cwd);
}
let output = command.output()?;
if !output.status.success() {
Err(Error::new(
ErrorKind::Other,
"Command exited with nonzero status.",
))
} else {
Ok(())
}
}
pub fn project_root() -> PathBuf {
Path::new(&env!("CARGO_MANIFEST_DIR"))
.ancestors()
.nth(1)
.unwrap()
.to_path_buf()
}

tfhe/Cargo.toml

@@ -1,6 +1,6 @@
[package]
name = "tfhe"
version = "0.1.0"
version = "0.3.1"
edition = "2021"
readme = "../README.md"
keywords = ["fully", "homomorphic", "encryption", "fhe", "cryptography"]
@@ -8,57 +8,88 @@ homepage = "https://zama.ai/"
documentation = "https://docs.zama.ai/tfhe-rs"
repository = "https://github.com/zama-ai/tfhe-rs"
license = "BSD-3-Clause-Clear"
description = "Concrete is a fully homomorphic encryption (FHE) library that implements Zama's variant of TFHE."
description = "TFHE-rs is a fully homomorphic encryption (FHE) library that implements Zama's variant of TFHE."
build = "build.rs"
exclude = ["/docs/", "/c_api_tests/", "/CMakeLists.txt"]
exclude = [
"/docs/",
"/c_api_tests/",
"/CMakeLists.txt",
"/js_on_wasm_tests/",
"/web_wasm_parallel_tests/",
]
rust-version = "1.67"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dev-dependencies]
rand = "0.7"
kolmogorov_smirnov = "1.1.0"
rand = "0.8.5"
rand_distr = "0.4.3"
paste = "1.0.7"
lazy_static = { version = "1.4.0" }
criterion = "0.3.5"
criterion = "0.4.0"
doc-comment = "0.3.3"
serde_json = "1.0.94"
clap = { version = "4.2.7", features = ["derive"] }
# Used in user documentation
bincode = "1.3.3"
fs2 = { version = "0.4.3"}
fs2 = { version = "0.4.3" }
itertools = "0.10.5"
num_cpus = "1.15"
# For erf and normality test
libm = "0.2.6"
test-case = "3.1.0"
combine = "4.6.6"
env_logger = "0.10.0"
log = "0.4.19"
[build-dependencies]
cbindgen = { version = "0.24.3", optional = true }
[dependencies]
concrete-csprng = { version = "0.2.1" }
concrete-cuda = { version = "0.1.1", optional = true }
concrete-csprng = { version = "0.3.0", features = [
"generator_fallback",
"parallel",
] }
lazy_static = { version = "1.4.0", optional = true }
serde = { version = "1.0", optional = true }
rayon = { version = "1.5.0", optional = true }
serde = { version = "1.0", features = ["derive"] }
rayon = { version = "1.5.0" }
bincode = { version = "1.3.3", optional = true }
concrete-fft = { version = "0.1", optional = true }
aligned-vec = "0.5"
dyn-stack = { version = "0.8", optional = true }
concrete-fft = { version = "0.2.1", features = ["serde", "fft128"] }
pulp = "0.11"
aligned-vec = { version = "0.5", features = ["serde"] }
dyn-stack = { version = "0.9" }
once_cell = "1.13"
paste = "1.0.7"
fs2 = { version = "0.4.3", optional = true }
# While we wait for repeat_n in the Rust standard library
itertools = "0.10.5"
# wasm deps
wasm-bindgen = { version = "0.2.63", features = [
wasm-bindgen = { version = "0.2.86", features = [
"serde-serialize",
], optional = true }
wasm-bindgen-rayon = { version = "1.0", optional = true }
js-sys = { version = "0.3", optional = true }
console_error_panic_hook = { version = "0.1.7", optional = true }
serde-wasm-bindgen = { version = "0.4", optional = true }
getrandom = { version = "0.2.8", optional = true }
bytemuck = "1.13.1"
[features]
boolean = ["minimal_core_crypto_features"]
shortint = ["minimal_core_crypto_features"]
internal-keycache = ["lazy_static", "fs2"]
boolean = []
shortint = []
integer = ["shortint"]
internal-keycache = ["lazy_static", "fs2", "bincode"]
__c_api = ["cbindgen", "minimal_core_crypto_features"]
# Experimental section
experimental = []
experimental-force_fft_algo_dif4 = []
# End experimental section
__c_api = ["cbindgen", "bincode"]
boolean-c-api = ["boolean", "__c_api"]
shortint-c-api = ["shortint", "__c_api"]
high-level-c-api = ["boolean-c-api", "shortint-c-api", "integer", "__c_api"]
__wasm_api = [
"wasm-bindgen",
@@ -67,87 +98,43 @@ __wasm_api = [
"serde-wasm-bindgen",
"getrandom",
"getrandom/js",
"bincode",
]
boolean-client-js-wasm-api = ["boolean", "__wasm_api"]
shortint-client-js-wasm-api = ["shortint", "__wasm_api"]
integer-client-js-wasm-api = ["integer", "__wasm_api"]
high-level-client-js-wasm-api = ["boolean", "shortint", "integer", "__wasm_api"]
parallel-wasm-api = ["wasm-bindgen-rayon"]
cuda = ["backend_cuda"]
nightly-avx512 = ["backend_fft_nightly_avx512"]
# A pure-rust CPU backend.
backend_default = ["concrete-csprng/generator_soft"]
# An accelerated backend, using the `concrete-fft` library.
backend_fft = ["concrete-fft", "dyn-stack"]
backend_fft_serialization = [
"bincode",
"concrete-fft/serde",
"aligned-vec/serde",
"__commons_serialization",
]
backend_fft_nightly_avx512 = ["concrete-fft/nightly"]
# Enables the parallel engine in default backend.
backend_default_parallel = ["__commons_parallel"]
nightly-avx512 = ["concrete-fft/nightly", "pulp/nightly"]
# Enable the x86_64 specific accelerated implementation of the random generator for the default
# backend
backend_default_generator_x86_64_aesni = [
"concrete-csprng/generator_x86_64_aesni",
]
generator_x86_64_aesni = ["concrete-csprng/generator_x86_64_aesni"]
# Enable the aarch64 specific accelerated implementation of the random generator for the default
# backend
backend_default_generator_aarch64_aes = [
"concrete-csprng/generator_aarch64_aes",
]
# Enable the serialization engine in the default backend.
backend_default_serialization = ["bincode", "__commons_serialization"]
# A GPU backend, relying on Cuda acceleration
backend_cuda = ["concrete-cuda"]
generator_aarch64_aes = ["concrete-csprng/generator_aarch64_aes"]
# Private features
__profiling = []
__private_docs = []
__commons_parallel = ["rayon", "concrete-csprng/parallel"]
__commons_serialization = ["serde", "serde/derive"]
seeder_unix = ["concrete-csprng/seeder_unix"]
seeder_x86_64_rdseed = ["concrete-csprng/seeder_x86_64_rdseed"]
minimal_core_crypto_features = [
"backend_default",
"backend_default_parallel",
"backend_default_serialization",
"backend_fft",
"backend_fft_serialization",
]
# These target_arch features enable a set of public features for concrete-core if users want a known
# good/working configuration for concrete-core.
# These target_arch features enable a set of public features for tfhe if users want a known
# good/working configuration for tfhe.
# For a target_arch that does not yet have such a feature, one can still enable features manually or
# create a feature for said target_arch to make its use simpler.
x86_64 = [
"minimal_core_crypto_features",
"backend_default_generator_x86_64_aesni",
"seeder_x86_64_rdseed",
]
x86_64 = ["generator_x86_64_aesni", "seeder_x86_64_rdseed"]
x86_64-unix = ["x86_64", "seeder_unix"]
# CUDA builds are Unix only at the moment
x86_64-unix-cuda = ["x86_64-unix", "cuda"]
aarch64 = [
"minimal_core_crypto_features",
"backend_default_generator_aarch64_aes",
]
aarch64 = ["generator_aarch64_aes"]
aarch64-unix = ["aarch64", "seeder_unix"]
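# e.g. a typical non-wasm build on an x86_64 Linux host could use:
# cargo build --release --features=x86_64-unix,boolean,shortint,integer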
[package.metadata.docs.rs]
# TODO: manage builds for docs.rs based on their documentation https://docs.rs/about
features = ["x86_64-unix", "boolean", "shortint"]
features = ["x86_64-unix", "boolean", "shortint", "integer"]
rustdoc-args = ["--html-in-header", "katex-header.html"]
###########
@@ -156,6 +143,23 @@ rustdoc-args = ["--html-in-header", "katex-header.html"]
# #
###########
[[bench]]
name = "pbs-bench"
path = "benches/core_crypto/pbs_bench.rs"
harness = false
required-features = ["boolean", "shortint", "internal-keycache"]
[[bench]]
name = "dev-bench"
path = "benches/core_crypto/dev_bench.rs"
harness = false
required-features = ["experimental", "internal-keycache"]
[[bench]]
name = "pbs128-bench"
path = "benches/core_crypto/pbs128_bench.rs"
harness = false
[[bench]]
name = "boolean-bench"
path = "benches/boolean/bench.rs"
@@ -168,12 +172,68 @@ path = "benches/shortint/bench.rs"
harness = false
required-features = ["shortint", "internal-keycache"]
[[bench]]
name = "integer-bench"
path = "benches/integer/bench.rs"
harness = false
required-features = ["integer", "internal-keycache"]
[[bench]]
name = "keygen"
path = "benches/keygen/bench.rs"
harness = false
required-features = ["shortint", "internal-keycache"]
[[bench]]
name = "utilities"
path = "benches/utilities.rs"
harness = false
required-features = ["boolean", "shortint", "integer", "internal-keycache"]
# Examples used as tools
[[example]]
name = "generates_test_keys"
name = "wasm_benchmarks_parser"
path = "examples/utilities/wasm_benchmarks_parser.rs"
required-features = ["shortint", "internal-keycache"]
[[example]]
name = "generates_test_keys"
path = "examples/utilities/generates_test_keys.rs"
required-features = ["shortint", "internal-keycache"]
[[example]]
name = "boolean_key_sizes"
path = "examples/utilities/boolean_key_sizes.rs"
required-features = ["boolean", "internal-keycache"]
[[example]]
name = "shortint_key_sizes"
path = "examples/utilities/shortint_key_sizes.rs"
required-features = ["shortint", "internal-keycache"]
[[example]]
name = "hlapi_compact_pk_ct_sizes"
path = "examples/utilities/hlapi_compact_pk_ct_sizes.rs"
required-features = ["integer", "internal-keycache"]
[[example]]
name = "micro_bench_and"
path = "examples/utilities/micro_bench_and.rs"
required-features = ["boolean"]
# Real use-case examples
[[example]]
name = "dark_market"
required-features = ["integer", "internal-keycache"]
[[example]]
name = "regex_engine"
required-features = ["integer"]
[[example]]
name = "sha256_bool"
required-features = ["boolean"]
[lib]

LICENSE

@@ -1,32 +1,28 @@
BSD 3-Clause Clear License
Copyright © 2022 ZAMA.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
3. Neither the name of ZAMA nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior written permission.
NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE*.
THIS SOFTWARE IS PROVIDED BY THE ZAMA AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
ZAMA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*In addition to the rights carried by this license, ZAMA grants to the user a non-exclusive,
free and non-commercial license on all patents filed in its name relating to the open-source
code (the "Patents") for the sole purpose of evaluation, development, research, prototyping
and experimentation.
BSD 3-Clause Clear License
Copyright © 2023 ZAMA.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.
3. Neither the name of ZAMA nor the names of its contributors may be used to endorse
or promote products derived from this software without specific prior written permission.
NO EXPRESS OR IMPLIED LICENSES TO ANY PARTY'S PATENT RIGHTS ARE GRANTED BY THIS LICENSE.
THIS SOFTWARE IS PROVIDED BY THE ZAMA AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
ZAMA OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

tfhe/benches/boolean/bench.rs

@@ -1,20 +1,50 @@
#[path = "../utilities.rs"]
mod utilities;
use crate::utilities::{write_to_json, CryptoParametersRecord, OperatorType};
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use tfhe::boolean::client_key::ClientKey;
use tfhe::boolean::parameters::{BooleanParameters, DEFAULT_PARAMETERS, TFHE_LIB_PARAMETERS};
use tfhe::boolean::prelude::BinaryBooleanGates;
use tfhe::boolean::parameters::{
BooleanParameters, DEFAULT_PARAMETERS, PARAMETERS_ERROR_PROB_2_POW_MINUS_165,
PARAMETERS_ERROR_PROB_2_POW_MINUS_165_KS_PBS,
};
use tfhe::boolean::prelude::{BinaryBooleanGates, DEFAULT_PARAMETERS_KS_PBS, TFHE_LIB_PARAMETERS};
use tfhe::boolean::server_key::ServerKey;
criterion_group!(
gates_benches,
bench_default_parameters,
bench_tfhe_lib_parameters
bench_default_parameters_ks_pbs,
bench_low_prob_parameters,
bench_low_prob_parameters_ks_pbs,
bench_tfhe_lib_parameters,
);
criterion_main!(gates_benches);
/// Helper function to write boolean benchmark parameters to disk in JSON format.
pub fn write_to_json_boolean<T: Into<CryptoParametersRecord<u32>>>(
bench_id: &str,
params: T,
params_alias: impl Into<String>,
display_name: impl Into<String>,
) {
write_to_json(
bench_id,
params,
params_alias,
display_name,
&OperatorType::Atomic,
1,
vec![1],
);
}
// Put all `bench_function` in one place
// so the keygen is only run once per parameter set, saving time.
fn bench_gates(c: &mut Criterion, params: BooleanParameters, parameter_name: &str) {
fn benchs(c: &mut Criterion, params: BooleanParameters, parameter_name: &str) {
let mut bench_group = c.benchmark_group("gates_benches");
let cks = ClientKey::new(&params);
let sks = ServerKey::new(&cks);
@@ -22,39 +52,59 @@ fn bench_gates(c: &mut Criterion, params: BooleanParameters, parameter_name: &st
let ct2 = cks.encrypt(false);
let ct3 = cks.encrypt(true);
let id = format!("AND gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.and(&ct1, &ct2))));
let id = format!("AND::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.and(&ct1, &ct2))));
write_to_json_boolean(&id, params, parameter_name, "and");
let id = format!("NAND gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.nand(&ct1, &ct2))));
let id = format!("NAND::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.nand(&ct1, &ct2))));
write_to_json_boolean(&id, params, parameter_name, "nand");
let id = format!("OR gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.or(&ct1, &ct2))));
let id = format!("OR::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.or(&ct1, &ct2))));
write_to_json_boolean(&id, params, parameter_name, "or");
let id = format!("XOR gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.xor(&ct1, &ct2))));
let id = format!("XOR::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.xor(&ct1, &ct2))));
write_to_json_boolean(&id, params, parameter_name, "xor");
let id = format!("XNOR gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.xnor(&ct1, &ct2))));
let id = format!("XNOR::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.xnor(&ct1, &ct2))));
write_to_json_boolean(&id, params, parameter_name, "xnor");
let id = format!("NOT gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.not(&ct1))));
let id = format!("NOT::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.not(&ct1))));
write_to_json_boolean(&id, params, parameter_name, "not");
let id = format!("MUX gate {}", parameter_name);
c.bench_function(&id, |b| b.iter(|| black_box(sks.mux(&ct1, &ct2, &ct3))));
let id = format!("MUX::{parameter_name}");
bench_group.bench_function(&id, |b| b.iter(|| black_box(sks.mux(&ct1, &ct2, &ct3))));
write_to_json_boolean(&id, params, parameter_name, "mux");
}
#[cfg(not(feature = "cuda"))]
fn bench_default_parameters(c: &mut Criterion) {
bench_gates(c, DEFAULT_PARAMETERS, "DEFAULT_PARAMETERS");
benchs(c, DEFAULT_PARAMETERS, "DEFAULT_PARAMETERS");
}
#[cfg(feature = "cuda")]
fn bench_default_parameters(_: &mut Criterion) {
let _ = DEFAULT_PARAMETERS; // to avoid unused import warnings
println!("DEFAULT_PARAMETERS not benched as they are not compatible with the cuda feature.");
fn bench_default_parameters_ks_pbs(c: &mut Criterion) {
benchs(c, DEFAULT_PARAMETERS_KS_PBS, "DEFAULT_PARAMETERS_KS_PBS");
}
fn bench_low_prob_parameters(c: &mut Criterion) {
benchs(
c,
PARAMETERS_ERROR_PROB_2_POW_MINUS_165,
"PARAMETERS_ERROR_PROB_2_POW_MINUS_165_KS_PBS",
);
}
fn bench_low_prob_parameters_ks_pbs(c: &mut Criterion) {
benchs(
c,
PARAMETERS_ERROR_PROB_2_POW_MINUS_165_KS_PBS,
"PARAMETERS_ERROR_PROB_2_POW_MINUS_165_KS_PBS",
);
}
fn bench_tfhe_lib_parameters(c: &mut Criterion) {
bench_gates(c, TFHE_LIB_PARAMETERS, "TFHE_LIB_PARAMETERS");
benchs(c, TFHE_LIB_PARAMETERS, "TFHE_LIB_PARAMETERS");
}

tfhe/benches/core_crypto/dev_bench.rs

@@ -0,0 +1,332 @@
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use tfhe::core_crypto::prelude::*;
criterion_group!(
boolean_like_pbs_group,
multi_bit_pbs::<u32>,
pbs::<u32>,
mem_optimized_pbs::<u32>
);
criterion_group!(
shortint_like_pbs_group,
multi_bit_pbs::<u64>,
pbs::<u64>,
mem_optimized_pbs::<u64>
);
criterion_main!(boolean_like_pbs_group, shortint_like_pbs_group);
fn get_bench_params<Scalar: Numeric>() -> (
LweDimension,
StandardDev,
DecompositionBaseLog,
DecompositionLevelCount,
GlweDimension,
PolynomialSize,
LweBskGroupingFactor,
ThreadCount,
) {
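// 64-bit scalars use shortint-like parameters and 32-bit scalars boolean-like ones, matching the two criterion groups above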
if Scalar::BITS == 64 {
(
LweDimension(742),
StandardDev(0.000007069849454709433),
DecompositionBaseLog(3),
DecompositionLevelCount(5),
GlweDimension(1),
PolynomialSize(1024),
LweBskGroupingFactor(2),
ThreadCount(5),
)
} else if Scalar::BITS == 32 {
(
LweDimension(778),
StandardDev(0.000003725679281679651),
DecompositionBaseLog(18),
DecompositionLevelCount(1),
GlweDimension(3),
PolynomialSize(512),
LweBskGroupingFactor(2),
ThreadCount(5),
)
} else {
unreachable!()
}
}
fn multi_bit_pbs<Scalar: UnsignedTorus + CastInto<usize> + CastFrom<usize> + Sync>(
c: &mut Criterion,
) {
// DISCLAIMER: these toy example parameters are not guaranteed to be secure or yield correct
// computations
// Define parameters for LweBootstrapKey creation
let (
mut input_lwe_dimension,
lwe_modular_std_dev,
decomp_base_log,
decomp_level_count,
glwe_dimension,
polynomial_size,
grouping_factor,
thread_count,
) = get_bench_params::<Scalar>();
let ciphertext_modulus = CiphertextModulus::new_native();
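// Multi-bit PBS requires the input LWE dimension to be a multiple of the grouping factor, pad it if needed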
while input_lwe_dimension.0 % grouping_factor.0 != 0 {
input_lwe_dimension = LweDimension(input_lwe_dimension.0 + 1);
}
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
// Create the LweSecretKey
let input_lwe_secret_key =
allocate_and_generate_new_binary_lwe_secret_key(input_lwe_dimension, &mut secret_generator);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
glwe_dimension,
polynomial_size,
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
let multi_bit_bsk = FourierLweMultiBitBootstrapKey::new(
input_lwe_dimension,
glwe_dimension.to_glwe_size(),
polynomial_size,
decomp_base_log,
decomp_level_count,
grouping_factor,
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
lwe_modular_std_dev,
ciphertext_modulus,
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
glwe_dimension.to_glwe_size(),
polynomial_size,
ciphertext_modulus,
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
ciphertext_modulus,
);
let id = format!("Multi Bit PBS {}", Scalar::BITS);
#[allow(clippy::unit_arg)]
{
c.bench_function(&id, |b| {
b.iter(|| {
multi_bit_programmable_bootstrap_lwe_ciphertext(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&multi_bit_bsk,
thread_count,
);
black_box(&mut out_pbs_ct);
})
});
}
}
fn pbs<Scalar: UnsignedTorus + CastInto<usize>>(c: &mut Criterion) {
// DISCLAIMER: these toy example parameters are not guaranteed to be secure or yield correct
// computations
// Define parameters for LweBootstrapKey creation
let (
input_lwe_dimension,
lwe_modular_std_dev,
decomp_base_log,
decomp_level_count,
glwe_dimension,
polynomial_size,
_,
_,
) = get_bench_params::<Scalar>();
let ciphertext_modulus = CiphertextModulus::new_native();
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
// Create the LweSecretKey
let input_lwe_secret_key =
allocate_and_generate_new_binary_lwe_secret_key(input_lwe_dimension, &mut secret_generator);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
glwe_dimension,
polynomial_size,
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
// Create the empty bootstrapping key in the Fourier domain
let fourier_bsk = FourierLweBootstrapKey::new(
input_lwe_dimension,
glwe_dimension.to_glwe_size(),
polynomial_size,
decomp_base_log,
decomp_level_count,
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
lwe_modular_std_dev,
ciphertext_modulus,
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
glwe_dimension.to_glwe_size(),
polynomial_size,
ciphertext_modulus,
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
ciphertext_modulus,
);
let id = format!("PBS {}", Scalar::BITS);
{
c.bench_function(&id, |b| {
b.iter(|| {
programmable_bootstrap_lwe_ciphertext(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&fourier_bsk,
);
black_box(&mut out_pbs_ct);
})
});
}
}
fn mem_optimized_pbs<Scalar: UnsignedTorus + CastInto<usize>>(c: &mut Criterion) {
// DISCLAIMER: these toy example parameters are not guaranteed to be secure or yield correct
// computations
// Define parameters for LweBootstrapKey creation
let (
input_lwe_dimension,
lwe_modular_std_dev,
decomp_base_log,
decomp_level_count,
glwe_dimension,
polynomial_size,
_,
_,
) = get_bench_params::<Scalar>();
let ciphertext_modulus = CiphertextModulus::new_native();
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
// Create the LweSecretKey
let input_lwe_secret_key =
allocate_and_generate_new_binary_lwe_secret_key(input_lwe_dimension, &mut secret_generator);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
glwe_dimension,
polynomial_size,
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
// Create the empty bootstrapping key in the Fourier domain
let fourier_bsk = FourierLweBootstrapKey::new(
input_lwe_dimension,
glwe_dimension.to_glwe_size(),
polynomial_size,
decomp_base_log,
decomp_level_count,
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
lwe_modular_std_dev,
ciphertext_modulus,
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
glwe_dimension.to_glwe_size(),
polynomial_size,
ciphertext_modulus,
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
ciphertext_modulus,
);
let mut buffers = ComputationBuffers::new();
let fft = Fft::new(fourier_bsk.polynomial_size());
let fft = fft.as_view();
buffers.resize(
programmable_bootstrap_lwe_ciphertext_mem_optimized_requirement::<Scalar>(
fourier_bsk.glwe_size(),
fourier_bsk.polynomial_size(),
fft,
)
.unwrap()
.unaligned_bytes_required(),
);
let id = format!("PBS mem-optimized {}", Scalar::BITS);
{
c.bench_function(&id, |b| {
b.iter(|| {
programmable_bootstrap_lwe_ciphertext_mem_optimized(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&fourier_bsk,
fft,
buffers.stack(),
);
black_box(&mut out_pbs_ct);
})
});
}
}

tfhe/benches/core_crypto/pbs128_bench.rs

@@ -0,0 +1,108 @@
use criterion::{criterion_group, criterion_main, Criterion};
use dyn_stack::PodStack;
fn sqr(x: f64) -> f64 {
x * x
}
fn criterion_bench(c: &mut Criterion) {
{
use tfhe::core_crypto::fft_impl::fft128::crypto::bootstrap::bootstrap_scratch;
use tfhe::core_crypto::prelude::*;
type Scalar = u128;
let small_lwe_dimension = LweDimension(742);
let glwe_dimension = GlweDimension(1);
let polynomial_size = PolynomialSize(2048);
let lwe_modular_std_dev = StandardDev(sqr(0.000007069849454709433));
let pbs_base_log = DecompositionBaseLog(23);
let pbs_level = DecompositionLevelCount(1);
let ciphertext_modulus = CiphertextModulus::new_native();
let mut boxed_seeder = new_seeder();
let seeder = boxed_seeder.as_mut();
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let small_lwe_sk =
LweSecretKey::generate_new_binary(small_lwe_dimension, &mut secret_generator);
let glwe_sk = GlweSecretKey::<Vec<Scalar>>::generate_new_binary(
glwe_dimension,
polynomial_size,
&mut secret_generator,
);
let big_lwe_sk = glwe_sk.into_lwe_secret_key();
let fourier_bsk = Fourier128LweBootstrapKey::new(
small_lwe_dimension,
glwe_dimension.to_glwe_size(),
polynomial_size,
pbs_base_log,
pbs_level,
);
let fft = Fft128::new(polynomial_size);
let fft = fft.as_view();
let message_modulus: Scalar = 1 << 4;
let input_message: Scalar = 3;
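// Encode the message in the most significant bits: here delta = 2^127 / 2^4 = 2^123 and the encoded plaintext is 3 * delta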
let delta: Scalar = (1 << (Scalar::BITS - 1)) / message_modulus;
let plaintext = Plaintext(input_message * delta);
let lwe_ciphertext_in: LweCiphertextOwned<Scalar> = allocate_and_encrypt_new_lwe_ciphertext(
&small_lwe_sk,
plaintext,
lwe_modular_std_dev,
ciphertext_modulus,
&mut encryption_generator,
);
let accumulator: GlweCiphertextOwned<Scalar> = GlweCiphertextOwned::new(
Scalar::ONE,
glwe_dimension.to_glwe_size(),
polynomial_size,
ciphertext_modulus,
);
let mut pbs_out: LweCiphertext<Vec<Scalar>> = LweCiphertext::new(
0,
big_lwe_sk.lwe_dimension().to_lwe_size(),
ciphertext_modulus,
);
let mut buf = vec![
0u8;
bootstrap_scratch::<Scalar>(
fourier_bsk.glwe_size(),
fourier_bsk.polynomial_size(),
fft
)
.unwrap()
.unaligned_bytes_required()
];
c.bench_function("pbs128", |b| {
b.iter(|| {
fourier_bsk.bootstrap(
&mut pbs_out,
&lwe_ciphertext_in,
&accumulator,
fft,
PodStack::new(&mut buf),
)
});
});
}
}
criterion_group!(benches, criterion_bench);
criterion_main!(benches);

tfhe/benches/core_crypto/pbs_bench.rs

@@ -0,0 +1,550 @@
#[path = "../utilities.rs"]
mod utilities;
use crate::utilities::{write_to_json, CryptoParametersRecord, OperatorType};
use rayon::prelude::*;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use serde::Serialize;
use tfhe::boolean::parameters::{
BooleanParameters, DEFAULT_PARAMETERS, PARAMETERS_ERROR_PROB_2_POW_MINUS_165,
};
use tfhe::core_crypto::prelude::*;
use tfhe::shortint::keycache::NamedParam;
use tfhe::shortint::parameters::*;
use tfhe::shortint::ClassicPBSParameters;
const SHORTINT_BENCH_PARAMS: [ClassicPBSParameters; 15] = [
PARAM_MESSAGE_1_CARRY_0_KS_PBS,
PARAM_MESSAGE_1_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_0_KS_PBS,
PARAM_MESSAGE_2_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_0_KS_PBS,
PARAM_MESSAGE_3_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_0_KS_PBS,
PARAM_MESSAGE_4_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_4_KS_PBS,
PARAM_MESSAGE_5_CARRY_0_KS_PBS,
PARAM_MESSAGE_6_CARRY_0_KS_PBS,
PARAM_MESSAGE_7_CARRY_0_KS_PBS,
PARAM_MESSAGE_8_CARRY_0_KS_PBS,
];
const BOOLEAN_BENCH_PARAMS: [(&str, BooleanParameters); 2] = [
("BOOLEAN_DEFAULT_PARAMS", DEFAULT_PARAMETERS),
(
"BOOLEAN_TFHE_LIB_PARAMS",
PARAMETERS_ERROR_PROB_2_POW_MINUS_165,
),
];
criterion_group!(
name = pbs_group;
config = Criterion::default().sample_size(2000);
targets = mem_optimized_pbs::<u64>, mem_optimized_pbs::<u32>
);
criterion_group!(
name = multi_bit_pbs_group;
config = Criterion::default().sample_size(2000);
targets = multi_bit_pbs::<u64>,
multi_bit_pbs::<u32>,
multi_bit_deterministic_pbs::<u64>,
multi_bit_deterministic_pbs::<u32>,
);
criterion_group!(
name = pbs_throughput_group;
config = Criterion::default().sample_size(100);
targets = pbs_throughput::<u64>, pbs_throughput::<u32>
);
criterion_main!(pbs_group, multi_bit_pbs_group, pbs_throughput_group);
fn benchmark_parameters<Scalar: UnsignedInteger>(
) -> Vec<(&'static str, CryptoParametersRecord<Scalar>)> {
if Scalar::BITS == 64 {
SHORTINT_BENCH_PARAMS
.iter()
.map(|params| {
(
params.name(),
<ClassicPBSParameters as Into<PBSParameters>>::into(*params)
.to_owned()
.into(),
)
})
.collect()
} else if Scalar::BITS == 32 {
BOOLEAN_BENCH_PARAMS
.iter()
.map(|(name, params)| (*name, params.to_owned().into()))
.collect()
} else {
vec![]
}
}
fn throughput_benchmark_parameters<Scalar: UnsignedInteger>(
) -> Vec<(&'static str, CryptoParametersRecord<Scalar>)> {
if Scalar::BITS == 64 {
vec![
PARAM_MESSAGE_1_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_3_KS_PBS,
]
.iter()
.map(|params| {
(
params.name(),
<ClassicPBSParameters as Into<PBSParameters>>::into(*params)
.to_owned()
.into(),
)
})
.collect()
} else if Scalar::BITS == 32 {
BOOLEAN_BENCH_PARAMS
.iter()
.map(|(name, params)| (*name, params.to_owned().into()))
.collect()
} else {
vec![]
}
}
fn multi_bit_benchmark_parameters<Scalar: UnsignedInteger + Default>() -> Vec<(
&'static str,
CryptoParametersRecord<Scalar>,
LweBskGroupingFactor,
)> {
if Scalar::BITS == 64 {
vec![
PARAM_MULTI_BIT_MESSAGE_1_CARRY_1_GROUP_2_KS_PBS,
PARAM_MULTI_BIT_MESSAGE_2_CARRY_2_GROUP_2_KS_PBS,
PARAM_MULTI_BIT_MESSAGE_3_CARRY_3_GROUP_2_KS_PBS,
PARAM_MULTI_BIT_MESSAGE_1_CARRY_1_GROUP_3_KS_PBS,
PARAM_MULTI_BIT_MESSAGE_2_CARRY_2_GROUP_3_KS_PBS,
PARAM_MULTI_BIT_MESSAGE_3_CARRY_3_GROUP_3_KS_PBS,
]
.iter()
.map(|params| {
(
params.name(),
<MultiBitPBSParameters as Into<PBSParameters>>::into(*params)
.to_owned()
.into(),
params.grouping_factor,
)
})
.collect()
} else {
// For now there are no parameters available to test multi bit PBS on 32 bits.
vec![]
}
}
fn mem_optimized_pbs<Scalar: UnsignedTorus + CastInto<usize> + Serialize>(c: &mut Criterion) {
let bench_name = "PBS_mem-optimized";
let mut bench_group = c.benchmark_group(bench_name);
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
for (name, params) in benchmark_parameters::<Scalar>().iter() {
// Create the LweSecretKey
let input_lwe_secret_key = allocate_and_generate_new_binary_lwe_secret_key(
params.lwe_dimension.unwrap(),
&mut secret_generator,
);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
params.glwe_dimension.unwrap(),
params.polynomial_size.unwrap(),
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
// Create the empty bootstrapping key in the Fourier domain
let fourier_bsk = FourierLweBootstrapKey::new(
params.lwe_dimension.unwrap(),
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
params.pbs_base_log.unwrap(),
params.pbs_level.unwrap(),
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in: LweCiphertextOwned<Scalar> = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
params.lwe_modular_std_dev.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
let mut buffers = ComputationBuffers::new();
let fft = Fft::new(fourier_bsk.polynomial_size());
let fft = fft.as_view();
buffers.resize(
programmable_bootstrap_lwe_ciphertext_mem_optimized_requirement::<Scalar>(
fourier_bsk.glwe_size(),
fourier_bsk.polynomial_size(),
fft,
)
.unwrap()
.unaligned_bytes_required(),
);
let id = format!("{bench_name}_{name}");
{
bench_group.bench_function(&id, |b| {
b.iter(|| {
programmable_bootstrap_lwe_ciphertext_mem_optimized(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&fourier_bsk,
fft,
buffers.stack(),
);
black_box(&mut out_pbs_ct);
})
});
}
let bit_size = (params.message_modulus.unwrap_or(2) as u32).ilog2();
write_to_json(
&id,
*params,
*name,
"pbs",
&OperatorType::Atomic,
bit_size,
vec![bit_size],
);
}
}
fn multi_bit_pbs<
Scalar: UnsignedTorus + CastInto<usize> + CastFrom<usize> + Default + Sync + Serialize,
>(
c: &mut Criterion,
) {
let bench_name = "multi_bits_PBS";
let mut bench_group = c.benchmark_group(bench_name);
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
for (name, params, grouping_factor) in multi_bit_benchmark_parameters::<Scalar>().iter() {
// Create the LweSecretKey
let input_lwe_secret_key = allocate_and_generate_new_binary_lwe_secret_key(
params.lwe_dimension.unwrap(),
&mut secret_generator,
);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
params.glwe_dimension.unwrap(),
params.polynomial_size.unwrap(),
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
let multi_bit_bsk = FourierLweMultiBitBootstrapKey::new(
params.lwe_dimension.unwrap(),
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
params.pbs_base_log.unwrap(),
params.pbs_level.unwrap(),
*grouping_factor,
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
params.lwe_modular_std_dev.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
let id = format!("{bench_name}_{name}_parallelized");
bench_group.bench_function(&id, |b| {
b.iter(|| {
multi_bit_programmable_bootstrap_lwe_ciphertext(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&multi_bit_bsk,
ThreadCount(10),
);
black_box(&mut out_pbs_ct);
})
});
let bit_size = params.message_modulus.unwrap().ilog2();
write_to_json(
&id,
*params,
*name,
"pbs",
&OperatorType::Atomic,
bit_size,
vec![bit_size],
);
}
}
fn multi_bit_deterministic_pbs<
Scalar: UnsignedTorus + CastInto<usize> + CastFrom<usize> + Default + Serialize + Sync,
>(
c: &mut Criterion,
) {
let bench_name = "multi_bits_deterministic_PBS";
let mut bench_group = c.benchmark_group(bench_name);
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
for (name, params, grouping_factor) in multi_bit_benchmark_parameters::<Scalar>().iter() {
// Create the LweSecretKey
let input_lwe_secret_key = allocate_and_generate_new_binary_lwe_secret_key(
params.lwe_dimension.unwrap(),
&mut secret_generator,
);
let output_glwe_secret_key: GlweSecretKeyOwned<Scalar> =
allocate_and_generate_new_binary_glwe_secret_key(
params.glwe_dimension.unwrap(),
params.polynomial_size.unwrap(),
&mut secret_generator,
);
let output_lwe_secret_key = output_glwe_secret_key.into_lwe_secret_key();
let multi_bit_bsk = FourierLweMultiBitBootstrapKey::new(
params.lwe_dimension.unwrap(),
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
params.pbs_base_log.unwrap(),
params.pbs_level.unwrap(),
*grouping_factor,
);
// Allocate a new LweCiphertext and encrypt our plaintext
let lwe_ciphertext_in = allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
params.lwe_modular_std_dev.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
&mut encryption_generator,
);
let accumulator = GlweCiphertext::new(
Scalar::ZERO,
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
// Allocate the LweCiphertext to store the result of the PBS
let mut out_pbs_ct = LweCiphertext::new(
Scalar::ZERO,
output_lwe_secret_key.lwe_dimension().to_lwe_size(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
);
let id = format!("{bench_name}_{name}_parallelized");
bench_group.bench_function(&id, |b| {
b.iter(|| {
multi_bit_deterministic_programmable_bootstrap_lwe_ciphertext(
&lwe_ciphertext_in,
&mut out_pbs_ct,
&accumulator.as_view(),
&multi_bit_bsk,
ThreadCount(10),
);
black_box(&mut out_pbs_ct);
})
});
let bit_size = params.message_modulus.unwrap().ilog2();
write_to_json(
&id,
*params,
*name,
"pbs",
&OperatorType::Atomic,
bit_size,
vec![bit_size],
);
}
}
fn pbs_throughput<Scalar: UnsignedTorus + CastInto<usize> + Sync + Send + Serialize>(
c: &mut Criterion,
) {
let bench_name = "PBS_throughput";
let mut bench_group = c.benchmark_group(bench_name);
// Create the PRNG
let mut seeder = new_seeder();
let seeder = seeder.as_mut();
let mut encryption_generator =
EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed(), seeder);
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(seeder.seed());
for (name, params) in throughput_benchmark_parameters::<Scalar>().iter() {
let input_lwe_secret_key = allocate_and_generate_new_binary_lwe_secret_key(
params.lwe_dimension.unwrap(),
&mut secret_generator,
);
let glwe_secret_key = GlweSecretKey::new_empty_key(
Scalar::ZERO,
params.glwe_dimension.unwrap(),
params.polynomial_size.unwrap(),
);
let big_lwe_sk = glwe_secret_key.into_lwe_secret_key();
let big_lwe_dimension = big_lwe_sk.lwe_dimension();
const NUM_CTS: usize = 512;
let lwe_vec: Vec<_> = (0..NUM_CTS)
.map(|_| {
allocate_and_encrypt_new_lwe_ciphertext(
&input_lwe_secret_key,
Plaintext(Scalar::ZERO),
params.lwe_modular_std_dev.unwrap(),
tfhe::core_crypto::prelude::CiphertextModulus::new_native(),
&mut encryption_generator,
)
})
.collect();
let mut output_lwe_list = LweCiphertextList::new(
Scalar::ZERO,
big_lwe_dimension.to_lwe_size(),
LweCiphertextCount(NUM_CTS),
params.ciphertext_modulus.unwrap(),
);
let lwe_vec = lwe_vec;
let fft = Fft::new(params.polynomial_size.unwrap());
let fft = fft.as_view();
let mut vec_buffers: Vec<_> = (0..NUM_CTS)
.map(|_| {
let mut buffers = ComputationBuffers::new();
buffers.resize(
programmable_bootstrap_lwe_ciphertext_mem_optimized_requirement::<Scalar>(
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
fft,
)
.unwrap()
.unaligned_bytes_required(),
);
buffers
})
.collect();
let glwe = GlweCiphertext::new(
Scalar::ONE << 60,
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
params.ciphertext_modulus.unwrap(),
);
let fbsk = FourierLweBootstrapKey::new(
params.lwe_dimension.unwrap(),
params.glwe_dimension.unwrap().to_glwe_size(),
params.polynomial_size.unwrap(),
params.pbs_base_log.unwrap(),
params.pbs_level.unwrap(),
);
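// Each iteration bootstraps `chunk_size` of the NUM_CTS pre-encrypted ciphertexts in parallel with rayon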
for chunk_size in [1, 16, 32, 64, 128, 256, 512] {
let id = format!("{bench_name}_{name}_{chunk_size}chunk");
{
bench_group.bench_function(&id, |b| {
b.iter(|| {
lwe_vec
.par_iter()
.zip(output_lwe_list.par_iter_mut())
.zip(vec_buffers.par_iter_mut())
.take(chunk_size)
.for_each(|((input_lwe, mut out_lwe), buffer)| {
programmable_bootstrap_lwe_ciphertext_mem_optimized(
input_lwe,
&mut out_lwe,
&glwe,
&fbsk,
fft,
buffer.stack(),
);
});
black_box(&mut output_lwe_list);
})
});
}
let bit_size = (params.message_modulus.unwrap_or(2) as u32).ilog2();
write_to_json(
&id,
*params,
*name,
"pbs",
&OperatorType::Atomic,
bit_size,
vec![bit_size],
);
}
}
}

tfhe/benches/integer/bench.rs

@@ -0,0 +1,977 @@
#![allow(dead_code)]
#[path = "../utilities.rs"]
mod utilities;
use crate::utilities::{write_to_json, OperatorType};
use std::env;
use criterion::{criterion_group, Criterion};
use itertools::iproduct;
use rand::rngs::ThreadRng;
use rand::Rng;
use std::vec::IntoIter;
use tfhe::integer::keycache::KEY_CACHE;
use tfhe::integer::{RadixCiphertext, ServerKey};
use tfhe::shortint::keycache::NamedParam;
#[allow(unused_imports)]
use tfhe::shortint::parameters::{
PARAM_MESSAGE_1_CARRY_1_KS_PBS, PARAM_MESSAGE_2_CARRY_2_KS_PBS, PARAM_MESSAGE_3_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_4_KS_PBS, PARAM_MULTI_BIT_MESSAGE_2_CARRY_2_GROUP_2_KS_PBS,
};
/// An iterator that yields a succession of combinations
/// of parameters and the number of blocks (num_block) needed to reach a given ciphertext bit_size
/// in radix decomposition
struct ParamsAndNumBlocksIter {
params_and_bit_sizes:
itertools::Product<IntoIter<tfhe::shortint::PBSParameters>, IntoIter<usize>>,
}
impl Default for ParamsAndNumBlocksIter {
fn default() -> Self {
let is_multi_bit = match env::var("__TFHE_RS_BENCH_TYPE") {
Ok(val) => val.to_lowercase() == "multi_bit",
Err(_) => false,
};
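// e.g. setting __TFHE_RS_BENCH_TYPE=multi_bit in the environment selects the multi-bit parameter set below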
if is_multi_bit {
let params = vec![PARAM_MULTI_BIT_MESSAGE_2_CARRY_2_GROUP_2_KS_PBS.into()];
let bit_sizes = vec![8, 16, 32, 40, 64];
let params_and_bit_sizes = iproduct!(params, bit_sizes);
Self {
params_and_bit_sizes,
}
} else {
// FIXME Only one set of parameters is tested since we want to benchmark only the quickest
// operations.
let params = vec![
PARAM_MESSAGE_2_CARRY_2_KS_PBS.into(),
// PARAM_MESSAGE_3_CARRY_3_KS_PBS.into(),
// PARAM_MESSAGE_4_CARRY_4_KS_PBS.into(),
];
let bit_sizes = vec![8, 16, 32, 40, 64, 128, 256];
let params_and_bit_sizes = iproduct!(params, bit_sizes);
Self {
params_and_bit_sizes,
}
}
}
}
impl Iterator for ParamsAndNumBlocksIter {
type Item = (tfhe::shortint::PBSParameters, usize, usize);
fn next(&mut self) -> Option<Self::Item> {
let (param, bit_size) = self.params_and_bit_sizes.next()?;
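// e.g. with PARAM_MESSAGE_2_CARRY_2_KS_PBS (message_modulus = 4, i.e. 2 bits per block), a 64-bit value needs ceil(64 / 2) = 32 blocks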
let num_block =
(bit_size as f64 / (param.message_modulus().0 as f64).log(2.0)).ceil() as usize;
Some((param, num_block, bit_size))
}
}
/// Base function to bench a server key function that is a binary operation, input ciphertexts will
/// contain non-zero carries
fn bench_server_key_binary_function_dirty_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext, &mut RadixCiphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_two_values = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let mut ct_0 = cks.encrypt_radix(clear_0, num_block);
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_1 = tfhe::integer::U256::from((clearlow, clearhigh));
let mut ct_1 = cks.encrypt_radix(clear_1, num_block);
// Raise the degree, so as to ensure worst case path in operations
let mut carry_mod = param.carry_modulus().0;
while carry_mod > 0 {
// Raise the degree, so as to ensure worst case path in operations
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_2 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_2 = cks.encrypt_radix(clear_2, num_block);
sks.unchecked_add_assign(&mut ct_0, &ct_2);
sks.unchecked_add_assign(&mut ct_1, &ct_2);
carry_mod -= 1;
}
(ct_0, ct_1)
};
b.iter_batched(
encrypt_two_values,
|(mut ct_0, mut ct_1)| {
binary_op(&sks, &mut ct_0, &mut ct_1);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
/// Base function to bench a server key function that is a binary operation, input ciphertext will
/// contain only zero carries
fn bench_server_key_binary_function_clean_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext, &mut RadixCiphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_two_values = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_0 = cks.encrypt_radix(clear_0, num_block);
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_1 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_1 = cks.encrypt_radix(clear_1, num_block);
(ct_0, ct_1)
};
b.iter_batched(
encrypt_two_values,
|(mut ct_0, mut ct_1)| {
binary_op(&sks, &mut ct_0, &mut ct_1);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
/// Base function to bench a server key function that is a unary operation, input ciphertexts will
/// contain non-zero carries
fn bench_server_key_unary_function_dirty_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
unary_fn: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_one_value = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let mut ct_0 = cks.encrypt_radix(clear_0, num_block);
// Raise the degree, so as to ensure worst case path in operations
let mut carry_mod = param.carry_modulus().0;
while carry_mod > 0 {
// Raise the degree, so as to ensure worst case path in operations
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_2 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_2 = cks.encrypt_radix(clear_2, num_block);
sks.unchecked_add_assign(&mut ct_0, &ct_2);
carry_mod -= 1;
}
ct_0
};
b.iter_batched(
encrypt_one_value,
|mut ct_0| {
unary_fn(&sks, &mut ct_0);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
/// Base function to bench a server key function that is a unary operation, input ciphertext will
/// contain only zero carries
fn bench_server_key_unary_function_clean_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
unary_fn: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_one_value = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
cks.encrypt_radix(clear_0, num_block)
};
b.iter_batched(
encrypt_one_value,
|mut ct_0| {
unary_fn(&sks, &mut ct_0);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
fn bench_server_key_binary_scalar_function_dirty_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext, u64),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_one_value = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let mut ct_0 = cks.encrypt_radix(clear_0, num_block);
// Raise the degree, so as to ensure worst case path in operations
let mut carry_mod = param.carry_modulus().0;
while carry_mod > 0 {
// Add random ciphertexts without propagating, so the carries saturate (worst-case degree)
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_2 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_2 = cks.encrypt_radix(clear_2, num_block);
sks.unchecked_add_assign(&mut ct_0, &ct_2);
carry_mod -= 1;
}
let clear_1 = rng.gen::<u64>();
(ct_0, clear_1)
};
b.iter_batched(
encrypt_one_value,
|(mut ct_0, clear_1)| {
binary_op(&sks, &mut ct_0, clear_1);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
fn bench_server_key_binary_scalar_function_clean_inputs<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
) where
F: Fn(&ServerKey, &mut RadixCiphertext, u64),
{
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_one_value = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_0 = cks.encrypt_radix(clear_0, num_block);
let clear_1 = rng.gen::<u64>();
(ct_0, clear_1)
};
b.iter_batched(
encrypt_one_value,
|(mut ct_0, clear_1)| {
binary_op(&sks, &mut ct_0, clear_1);
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
// Functions used to select a scalar in different ways, depending on the benchmark context.
fn default_scalar(rng: &mut ThreadRng, _clear_bit_size: usize) -> u64 {
rng.gen::<u64>()
}
fn shift_scalar(_rng: &mut ThreadRng, _clear_bit_size: usize) -> u64 {
// Shifting by one is the worst case scenario.
1
}
fn mul_scalar(rng: &mut ThreadRng, _clear_bit_size: usize) -> u64 {
loop {
let scalar = rng.gen_range(3u64..=u64::MAX);
// If the scalar is a power of two, the multiplication is just a shift, which is a happy path.
if !scalar.is_power_of_two() {
return scalar;
}
}
}
fn div_scalar(rng: &mut ThreadRng, clear_bit_size: usize) -> u64 {
loop {
let scalar = rng.gen_range(1..=u64::MAX);
// Reject scalars that reduce to 0 modulo the message space (2^clear_bit_size), as they would act as a zero divisor; the u128 arithmetic keeps the shift from overflowing for 64-bit sizes.
if (scalar as u128 % (1u128 << clear_bit_size)) != 0 {
return scalar;
}
}
}
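// Usage sketch (added for illustration, not part of the diff): the selectors above can be
// passed as plain function values to a bench helper that needs a scalar generation strategy.
// `_scalar_selector_sketch` is a hypothetical name used only for this example.
fn _scalar_selector_sketch() {
    let mut rng = rand::thread_rng();
    let clear_bit_size = 64usize;
    let selectors: [fn(&mut rand::rngs::ThreadRng, usize) -> u64; 4] =
        [default_scalar, shift_scalar, mul_scalar, div_scalar];
    for select in selectors {
        // Each strategy targets a worst-case (or at least non-trivial) scalar for its operation:
        // shifts always use 1, multiplications avoid powers of two, divisions avoid zero divisors.
        let _scalar = select(&mut rng, clear_bit_size);
    }
}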
fn if_then_else_parallelized(c: &mut Criterion) {
let bench_name = "integer::if_then_else_parallelized";
let display_name = "if_then_else";
let mut bench_group = c.benchmark_group(bench_name);
bench_group
.sample_size(15)
.measurement_time(std::time::Duration::from_secs(60));
let mut rng = rand::thread_rng();
for (param, num_block, bit_size) in ParamsAndNumBlocksIter::default() {
let param_name = param.name();
let bench_id = format!("{bench_name}::{param_name}::{bit_size}_bits");
bench_group.bench_function(&bench_id, |b| {
let (cks, sks) = KEY_CACHE.get_from_params(param);
let encrypt_tree_values = || {
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_0 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_0 = cks.encrypt_radix(clear_0, num_block);
let clearlow = rng.gen::<u128>();
let clearhigh = rng.gen::<u128>();
let clear_1 = tfhe::integer::U256::from((clearlow, clearhigh));
let ct_1 = cks.encrypt_radix(clear_1, num_block);
let cond = sks.create_trivial_radix(rng.gen_bool(0.5) as u64, num_block);
(cond, ct_0, ct_1)
};
b.iter_batched(
encrypt_tree_values,
|(condition, true_ct, false_ct)| {
sks.if_then_else_parallelized(&condition, &true_ct, &false_ct)
},
criterion::BatchSize::SmallInput,
)
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
bit_size as u32,
vec![param.message_modulus().0.ilog2(); num_block],
);
}
bench_group.finish()
}
macro_rules! define_server_key_bench_unary_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_unary_function_dirty_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs| {
server_key.$server_key_method(lhs);
})
}
}
);
macro_rules! define_server_key_bench_unary_default_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_unary_function_clean_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs| {
server_key.$server_key_method(lhs);
})
}
}
);
macro_rules! define_server_key_bench_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_function_dirty_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
}
}
);
macro_rules! define_server_key_bench_default_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_function_clean_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
}
}
);
macro_rules! define_server_key_bench_scalar_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_scalar_function_dirty_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
}
}
);
macro_rules! define_server_key_bench_scalar_default_fn (
(method_name: $server_key_method:ident, display_name:$name:ident) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_scalar_function_clean_inputs(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
}
}
);
define_server_key_bench_fn!(method_name: smart_add, display_name: add);
define_server_key_bench_fn!(method_name: smart_sub, display_name: sub);
define_server_key_bench_fn!(method_name: smart_mul, display_name: mul);
define_server_key_bench_fn!(method_name: smart_bitand, display_name: bitand);
define_server_key_bench_fn!(method_name: smart_bitor, display_name: bitor);
define_server_key_bench_fn!(method_name: smart_bitxor, display_name: bitxor);
define_server_key_bench_fn!(method_name: smart_add_parallelized, display_name: add);
define_server_key_bench_fn!(method_name: smart_sub_parallelized, display_name: sub);
define_server_key_bench_fn!(method_name: smart_mul_parallelized, display_name: mul);
define_server_key_bench_fn!(method_name: smart_bitand_parallelized, display_name: bitand);
define_server_key_bench_fn!(method_name: smart_bitxor_parallelized, display_name: bitxor);
define_server_key_bench_fn!(method_name: smart_bitor_parallelized, display_name: bitor);
define_server_key_bench_default_fn!(method_name: add_parallelized, display_name: add);
define_server_key_bench_default_fn!(method_name: sub_parallelized, display_name: sub);
define_server_key_bench_default_fn!(method_name: mul_parallelized, display_name: mul);
define_server_key_bench_default_fn!(method_name: bitand_parallelized, display_name: bitand);
define_server_key_bench_default_fn!(method_name: bitxor_parallelized, display_name: bitxor);
define_server_key_bench_default_fn!(method_name: bitor_parallelized, display_name: bitor);
define_server_key_bench_unary_default_fn!(method_name: bitnot_parallelized, display_name: bitnot);
define_server_key_bench_fn!(method_name: unchecked_add, display_name: add);
define_server_key_bench_fn!(method_name: unchecked_sub, display_name: sub);
define_server_key_bench_fn!(method_name: unchecked_mul, display_name: mul);
define_server_key_bench_fn!(method_name: unchecked_bitand, display_name: bitand);
define_server_key_bench_fn!(method_name: unchecked_bitor, display_name: bitor);
define_server_key_bench_fn!(method_name: unchecked_bitxor, display_name: bitxor);
define_server_key_bench_fn!(method_name: unchecked_mul_parallelized, display_name: mul);
define_server_key_bench_fn!(
method_name: unchecked_bitand_parallelized,
display_name: bitand
);
define_server_key_bench_fn!(
method_name: unchecked_bitor_parallelized,
display_name: bitor
);
define_server_key_bench_fn!(
method_name: unchecked_bitxor_parallelized,
display_name: bitxor
);
define_server_key_bench_scalar_fn!(method_name: smart_scalar_add, display_name: add);
define_server_key_bench_scalar_fn!(method_name: smart_scalar_sub, display_name: sub);
define_server_key_bench_scalar_fn!(method_name: smart_scalar_mul, display_name: mul);
define_server_key_bench_scalar_fn!(
method_name: smart_scalar_add_parallelized,
display_name: add
);
define_server_key_bench_scalar_fn!(
method_name: smart_scalar_sub_parallelized,
display_name: sub
);
define_server_key_bench_scalar_fn!(
method_name: smart_scalar_mul_parallelized,
display_name: mul
);
define_server_key_bench_scalar_default_fn!(method_name: scalar_add_parallelized, display_name: add);
define_server_key_bench_scalar_default_fn!(method_name: scalar_sub_parallelized, display_name: sub);
define_server_key_bench_scalar_default_fn!(method_name: scalar_mul_parallelized, display_name: mul);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_left_shift_parallelized,
display_name: left_shift
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_right_shift_parallelized,
display_name: right_shift
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_eq_parallelized,
display_name: scalar_equal
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_ne_parallelized,
display_name: scalar_not_equal
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_le_parallelized,
display_name: scalar_less_or_equal
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_lt_parallelized,
display_name: scalar_less_than
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_ge_parallelized,
display_name: scalar_greater_or_equal
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_gt_parallelized,
display_name: scalar_greater_than
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_max_parallelized,
display_name: scalar_max
);
define_server_key_bench_scalar_default_fn!(
method_name: scalar_min_parallelized,
display_name: scalar_min
);
define_server_key_bench_scalar_fn!(method_name: unchecked_scalar_add, display_name: add);
define_server_key_bench_scalar_fn!(method_name: unchecked_scalar_sub, display_name: sub);
define_server_key_bench_scalar_fn!(method_name: unchecked_small_scalar_mul, display_name: mul);
define_server_key_bench_unary_fn!(method_name: smart_neg, display_name: negation);
define_server_key_bench_unary_fn!(method_name: smart_neg_parallelized, display_name: negation);
define_server_key_bench_unary_default_fn!(method_name: neg_parallelized, display_name: negation);
define_server_key_bench_unary_fn!(method_name: full_propagate, display_name: carry_propagation);
define_server_key_bench_unary_fn!(
method_name: full_propagate_parallelized,
display_name: carry_propagation
);
define_server_key_bench_fn!(method_name: unchecked_max, display_name: max);
define_server_key_bench_fn!(method_name: unchecked_min, display_name: min);
define_server_key_bench_fn!(method_name: unchecked_eq, display_name: equal);
define_server_key_bench_fn!(method_name: unchecked_lt, display_name: less_than);
define_server_key_bench_fn!(method_name: unchecked_le, display_name: less_or_equal);
define_server_key_bench_fn!(method_name: unchecked_gt, display_name: greater_than);
define_server_key_bench_fn!(method_name: unchecked_ge, display_name: greater_or_equal);
define_server_key_bench_fn!(method_name: unchecked_max_parallelized, display_name: max);
define_server_key_bench_fn!(method_name: unchecked_min_parallelized, display_name: min);
define_server_key_bench_fn!(method_name: unchecked_eq_parallelized, display_name: equal);
define_server_key_bench_fn!(
method_name: unchecked_lt_parallelized,
display_name: less_than
);
define_server_key_bench_fn!(
method_name: unchecked_le_parallelized,
display_name: less_or_equal
);
define_server_key_bench_fn!(
method_name: unchecked_gt_parallelized,
display_name: greater_than
);
define_server_key_bench_fn!(
method_name: unchecked_ge_parallelized,
display_name: greater_or_equal
);
define_server_key_bench_fn!(method_name: smart_max, display_name: max);
define_server_key_bench_fn!(method_name: smart_min, display_name: min);
define_server_key_bench_fn!(method_name: smart_eq, display_name: equal);
define_server_key_bench_fn!(method_name: smart_lt, display_name: less_than);
define_server_key_bench_fn!(method_name: smart_le, display_name: less_or_equal);
define_server_key_bench_fn!(method_name: smart_gt, display_name: greater_than);
define_server_key_bench_fn!(method_name: smart_ge, display_name: greater_or_equal);
define_server_key_bench_fn!(method_name: smart_max_parallelized, display_name: max);
define_server_key_bench_fn!(method_name: smart_min_parallelized, display_name: min);
define_server_key_bench_fn!(method_name: smart_eq_parallelized, display_name: equal);
define_server_key_bench_fn!(method_name: smart_lt_parallelized, display_name: less_than);
define_server_key_bench_fn!(
method_name: smart_le_parallelized,
display_name: less_or_equal
);
define_server_key_bench_fn!(
method_name: smart_gt_parallelized,
display_name: greater_than
);
define_server_key_bench_fn!(
method_name: smart_ge_parallelized,
display_name: greater_or_equal
);
define_server_key_bench_default_fn!(method_name: max_parallelized, display_name: max);
define_server_key_bench_default_fn!(method_name: min_parallelized, display_name: min);
define_server_key_bench_default_fn!(method_name: eq_parallelized, display_name: equal);
define_server_key_bench_default_fn!(method_name: ne_parallelized, display_name: not_equal);
define_server_key_bench_default_fn!(method_name: lt_parallelized, display_name: less_than);
define_server_key_bench_default_fn!(method_name: le_parallelized, display_name: less_or_equal);
define_server_key_bench_default_fn!(method_name: gt_parallelized, display_name: greater_than);
define_server_key_bench_default_fn!(method_name: ge_parallelized, display_name: greater_or_equal);
define_server_key_bench_default_fn!(
method_name: left_shift_parallelized,
display_name: left_shift
);
define_server_key_bench_default_fn!(
method_name: right_shift_parallelized,
display_name: right_shift
);
define_server_key_bench_default_fn!(
method_name: rotate_left_parallelized,
display_name: rotate_left
);
define_server_key_bench_default_fn!(
method_name: rotate_right_parallelized,
display_name: rotate_right
);
criterion_group!(
smart_ops,
smart_neg,
smart_add,
smart_mul,
smart_bitand,
smart_bitor,
smart_bitxor,
smart_max,
smart_min,
smart_eq,
smart_lt,
smart_le,
smart_gt,
smart_ge,
);
criterion_group!(
smart_parallelized_ops,
smart_add_parallelized,
smart_sub_parallelized,
smart_mul_parallelized,
smart_bitand_parallelized,
smart_bitor_parallelized,
smart_bitxor_parallelized,
smart_max_parallelized,
smart_min_parallelized,
smart_eq_parallelized,
smart_lt_parallelized,
smart_le_parallelized,
smart_gt_parallelized,
smart_ge_parallelized,
);
criterion_group!(
default_parallelized_ops,
add_parallelized,
sub_parallelized,
mul_parallelized,
neg_parallelized,
bitand_parallelized,
bitnot_parallelized,
bitor_parallelized,
bitxor_parallelized,
max_parallelized,
min_parallelized,
eq_parallelized,
ne_parallelized,
lt_parallelized,
le_parallelized,
gt_parallelized,
ge_parallelized,
left_shift_parallelized,
right_shift_parallelized,
rotate_left_parallelized,
rotate_right_parallelized,
if_then_else_parallelized,
);
criterion_group!(
smart_scalar_ops,
smart_scalar_add,
smart_scalar_sub,
smart_scalar_mul,
);
criterion_group!(
smart_scalar_parallelized_ops,
smart_scalar_add_parallelized,
smart_scalar_sub_parallelized,
smart_scalar_mul_parallelized,
);
criterion_group!(
default_scalar_parallelized_ops,
scalar_add_parallelized,
scalar_sub_parallelized,
scalar_mul_parallelized,
scalar_left_shift_parallelized,
scalar_right_shift_parallelized,
scalar_eq_parallelized,
scalar_ne_parallelized,
scalar_lt_parallelized,
scalar_le_parallelized,
scalar_gt_parallelized,
scalar_ge_parallelized,
scalar_min_parallelized,
scalar_max_parallelized,
);
criterion_group!(
unchecked_ops,
unchecked_add,
unchecked_sub,
unchecked_mul,
unchecked_bitand,
unchecked_bitor,
unchecked_bitxor,
unchecked_max,
unchecked_min,
unchecked_eq,
unchecked_lt,
unchecked_le,
unchecked_gt,
unchecked_ge,
);
criterion_group!(
unchecked_scalar_ops,
unchecked_scalar_add,
unchecked_scalar_sub,
unchecked_small_scalar_mul,
unchecked_max_parallelized,
unchecked_min_parallelized,
unchecked_eq_parallelized,
unchecked_lt_parallelized,
unchecked_le_parallelized,
unchecked_gt_parallelized,
unchecked_ge_parallelized,
unchecked_bitand_parallelized,
unchecked_bitor_parallelized,
unchecked_bitxor_parallelized,
);
criterion_group!(misc, full_propagate, full_propagate_parallelized);
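// Note added for clarity (not in the original diff): the benchmark flavor is chosen at run
// time through the __TFHE_RS_BENCH_OP_FLAVOR environment variable (for example "smart",
// "unchecked_scalar" or "misc"); when the variable is unset, only the default parallelized
// ciphertext and scalar groups are run.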
fn main() {
match env::var("__TFHE_RS_BENCH_OP_FLAVOR") {
Ok(val) => {
match val.to_lowercase().as_str() {
"default" => default_parallelized_ops(),
"default_scalar" => default_scalar_parallelized_ops(),
"smart" => smart_ops(),
"smart_scalar" => smart_scalar_ops(),
"smart_parallelized" => smart_parallelized_ops(),
"smart_scalar_parallelized" => smart_scalar_parallelized_ops(),
"unchecked" => unchecked_ops(),
"unchecked_scalar" => unchecked_scalar_ops(),
"misc" => misc(),
_ => panic!("unknown benchmark operations flavor"),
};
}
Err(_) => {
default_parallelized_ops();
default_scalar_parallelized_ops()
}
};
Criterion::default().configure_from_args().final_summary();
}


@@ -0,0 +1,45 @@
use concrete_csprng::seeders::Seeder;
use criterion::*;
use tfhe::core_crypto::commons::generators::DeterministicSeeder;
use tfhe::core_crypto::prelude::{
allocate_and_generate_new_binary_glwe_secret_key,
par_allocate_and_generate_new_lwe_bootstrap_key, ActivatedRandomGenerator, CiphertextModulus,
EncryptionRandomGenerator, SecretRandomGenerator,
};
use tfhe::core_crypto::seeders::new_seeder;
use tfhe::shortint::prelude::*;
fn criterion_bench(c: &mut Criterion) {
let parameters = PARAM_MESSAGE_2_CARRY_2_KS_PBS;
let mut seeder = new_seeder();
let mut deterministic_seeder =
DeterministicSeeder::<ActivatedRandomGenerator>::new(seeder.seed());
let mut secret_generator =
SecretRandomGenerator::<ActivatedRandomGenerator>::new(deterministic_seeder.seed());
let mut encryption_generator = EncryptionRandomGenerator::<ActivatedRandomGenerator>::new(
deterministic_seeder.seed(),
&mut deterministic_seeder,
);
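// Comment added for clarity: both the secret and encryption generators are seeded from the
// same DeterministicSeeder, so all key material in this benchmark derives from a single seed.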
let glwe_secret_key = allocate_and_generate_new_binary_glwe_secret_key::<u64, _>(
parameters.glwe_dimension,
parameters.polynomial_size,
&mut secret_generator,
);
let lwe_secret_key_after_ks = glwe_secret_key.clone().into_lwe_secret_key();
c.bench_function("keygen", |b| {
b.iter(|| {
let _ = par_allocate_and_generate_new_lwe_bootstrap_key(
&lwe_secret_key_after_ks,
&glwe_secret_key,
parameters.pbs_base_log,
parameters.pbs_level,
parameters.glwe_modular_std_dev,
CiphertextModulus::new_native(),
&mut encryption_generator,
);
});
});
}
criterion_group!(benches, criterion_bench);
criterion_main!(benches);


@@ -1,39 +1,109 @@
use criterion::{criterion_group, criterion_main, Criterion};
#[path = "../utilities.rs"]
mod utilities;
use crate::utilities::{write_to_json, OperatorType};
use std::env;
use criterion::{criterion_group, Criterion};
use tfhe::shortint::keycache::NamedParam;
use tfhe::shortint::parameters::*;
use tfhe::shortint::{Ciphertext, Parameters, ServerKey};
use tfhe::shortint::{Ciphertext, ClassicPBSParameters, ServerKey, ShortintParameterSet};
use rand::Rng;
use tfhe::shortint::keycache::KEY_CACHE;
use tfhe::shortint::keycache::KEY_CACHE_WOPBS;
use tfhe::shortint::parameters::parameters_wopbs::WOPBS_PARAM_MESSAGE_4_NORM2_6;
use tfhe::shortint::parameters::parameters_wopbs::WOPBS_PARAM_MESSAGE_4_NORM2_6_KS_PBS;
macro_rules! named_param {
($param:ident) => {
(stringify!($param), $param)
};
}
const SERVER_KEY_BENCH_PARAMS: [(&str, Parameters); 4] = [
named_param!(PARAM_MESSAGE_1_CARRY_1),
named_param!(PARAM_MESSAGE_2_CARRY_2),
named_param!(PARAM_MESSAGE_3_CARRY_3),
named_param!(PARAM_MESSAGE_4_CARRY_4),
const SERVER_KEY_BENCH_PARAMS: [ClassicPBSParameters; 4] = [
PARAM_MESSAGE_1_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_4_KS_PBS,
];
fn bench_server_key_binary_function<F>(c: &mut Criterion, bench_name: &str, binary_op: F)
where
F: Fn(&ServerKey, &mut Ciphertext, &mut Ciphertext),
const SERVER_KEY_BENCH_PARAMS_EXTENDED: [ClassicPBSParameters; 15] = [
PARAM_MESSAGE_1_CARRY_0_KS_PBS,
PARAM_MESSAGE_1_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_0_KS_PBS,
PARAM_MESSAGE_2_CARRY_1_KS_PBS,
PARAM_MESSAGE_2_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_0_KS_PBS,
PARAM_MESSAGE_3_CARRY_2_KS_PBS,
PARAM_MESSAGE_3_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_0_KS_PBS,
PARAM_MESSAGE_4_CARRY_3_KS_PBS,
PARAM_MESSAGE_4_CARRY_4_KS_PBS,
PARAM_MESSAGE_5_CARRY_0_KS_PBS,
PARAM_MESSAGE_6_CARRY_0_KS_PBS,
PARAM_MESSAGE_7_CARRY_0_KS_PBS,
PARAM_MESSAGE_8_CARRY_0_KS_PBS,
];
fn bench_server_key_unary_function<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
unary_op: F,
params: &[ClassicPBSParameters],
) where
F: Fn(&ServerKey, &mut Ciphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
for (param_name, param) in SERVER_KEY_BENCH_PARAMS {
for param in params.iter() {
let param: PBSParameters = (*param).into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = 1_u64 << cks.parameters.message_modulus.0;
let modulus = cks.parameters.message_modulus().0 as u64;
let clear_text = rng.gen::<u64>() % modulus;
let mut ct = cks.encrypt(clear_text);
let bench_id = format!("{bench_name}::{}", param.name());
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
unary_op(sks, &mut ct);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish()
}
fn bench_server_key_binary_function<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
params: &[ClassicPBSParameters],
) where
F: Fn(&ServerKey, &mut Ciphertext, &mut Ciphertext),
{
let mut bench_group = c.benchmark_group(bench_name);
for param in params.iter() {
let param: PBSParameters = (*param).into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = cks.parameters.message_modulus().0 as u64;
let clear_0 = rng.gen::<u64>() % modulus;
let clear_1 = rng.gen::<u64>() % modulus;
@@ -41,42 +111,118 @@ where
let mut ct_0 = cks.encrypt(clear_0);
let mut ct_1 = cks.encrypt(clear_1);
let bench_id = format!("{}::{}", bench_name, param_name);
let bench_id = format!("{bench_name}::{}", param.name());
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
binary_op(sks, &mut ct_0, &mut ct_1);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish()
}
fn bench_server_key_binary_scalar_function<F>(c: &mut Criterion, bench_name: &str, binary_op: F)
where
fn bench_server_key_binary_scalar_function<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
params: &[ClassicPBSParameters],
) where
F: Fn(&ServerKey, &mut Ciphertext, u8),
{
let mut bench_group = c.benchmark_group(bench_name);
for (param_name, param) in SERVER_KEY_BENCH_PARAMS {
for param in params {
let param: PBSParameters = (*param).into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = 1_u64 << cks.parameters.message_modulus.0;
let modulus = cks.parameters.message_modulus().0 as u64;
let clear_0 = rng.gen::<u64>() % modulus;
let clear_1 = rng.gen::<u64>() % modulus;
let mut ct_0 = cks.encrypt(clear_0);
let bench_id = format!("{}::{}", bench_name, param_name);
let bench_id = format!("{bench_name}::{}", param.name());
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
binary_op(sks, &mut ct_0, clear_1 as u8);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish()
}
fn bench_server_key_binary_scalar_division_function<F>(
c: &mut Criterion,
bench_name: &str,
display_name: &str,
binary_op: F,
params: &[ClassicPBSParameters],
) where
F: Fn(&ServerKey, &mut Ciphertext, u8),
{
let mut bench_group = c.benchmark_group(bench_name);
for param in params {
let param: PBSParameters = (*param).into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = cks.parameters.message_modulus().0 as u64;
assert_ne!(modulus, 1);
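// Comment added for clarity: with a message modulus of 1 the only representable clear value
// is 0, which cannot serve as a divisor, hence the assert above.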
let clear_0 = rng.gen::<u64>() % modulus;
let mut clear_1 = rng.gen::<u64>() % modulus;
while clear_1 == 0 {
clear_1 = rng.gen::<u64>() % modulus;
}
let mut ct_0 = cks.encrypt(clear_0);
let bench_id = format!("{bench_name}::{}", param.name());
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
binary_op(sks, &mut ct_0, clear_1 as u8);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
display_name,
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish()
@@ -85,24 +231,35 @@ where
fn carry_extract(c: &mut Criterion) {
let mut bench_group = c.benchmark_group("carry_extract");
for (param_name, param) in SERVER_KEY_BENCH_PARAMS {
for param in SERVER_KEY_BENCH_PARAMS {
let param: PBSParameters = param.into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = 1_u64 << cks.parameters.message_modulus.0;
let modulus = cks.parameters.message_modulus().0 as u64;
let clear_0 = rng.gen::<u64>() % modulus;
let ct_0 = cks.encrypt(clear_0);
let bench_id = format!("ServerKey::carry_extract::{}", param_name);
let bench_id = format!("ServerKey::carry_extract::{}", param.name());
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
sks.carry_extract(&ct_0);
let _ = sks.carry_extract(&ct_0);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
"carry_extract",
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish()
@@ -111,38 +268,52 @@ fn carry_extract(c: &mut Criterion) {
fn programmable_bootstrapping(c: &mut Criterion) {
let mut bench_group = c.benchmark_group("programmable_bootstrap");
for (param_name, param) in SERVER_KEY_BENCH_PARAMS {
for param in SERVER_KEY_BENCH_PARAMS {
let param: PBSParameters = param.into();
let keys = KEY_CACHE.get_from_param(param);
let (cks, sks) = (keys.client_key(), keys.server_key());
let mut rng = rand::thread_rng();
let modulus = cks.parameters.message_modulus.0 as u64;
let modulus = cks.parameters.message_modulus().0 as u64;
let acc = sks.generate_accumulator(|x| x);
let acc = sks.generate_lookup_table(|x| x);
let clear_0 = rng.gen::<u64>() % modulus;
let ctxt = cks.encrypt(clear_0);
let id = format!("ServerKey::programmable_bootstrap::{}", param_name);
let bench_id = format!("ServerKey::programmable_bootstrap::{}", param.name());
bench_group.bench_function(&id, |b| {
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
sks.keyswitch_programmable_bootstrap(&ctxt, &acc);
let _ = sks.apply_lookup_table(&ctxt, &acc);
})
});
write_to_json::<u64, _>(
&bench_id,
param,
param.name(),
"pbs",
&OperatorType::Atomic,
param.message_modulus().0.ilog2(),
vec![param.message_modulus().0.ilog2()],
);
}
bench_group.finish();
}
fn bench_wopbs_param_message_8_norm2_5(c: &mut Criterion) {
// TODO: remove?
fn _bench_wopbs_param_message_8_norm2_5(c: &mut Criterion) {
let mut bench_group = c.benchmark_group("programmable_bootstrap");
let param = WOPBS_PARAM_MESSAGE_4_NORM2_6;
let param = WOPBS_PARAM_MESSAGE_4_NORM2_6_KS_PBS;
let param_set: ShortintParameterSet = param.try_into().unwrap();
let pbs_params = param_set.pbs_parameters().unwrap();
let keys = KEY_CACHE_WOPBS.get_from_param((param, param));
let keys = KEY_CACHE_WOPBS.get_from_param((pbs_params, param));
let (cks, _, wopbs_key) = (keys.client_key(), keys.server_key(), keys.wopbs_key());
let mut rng = rand::thread_rng();
@@ -151,82 +322,423 @@ fn bench_wopbs_param_message_8_norm2_5(c: &mut Criterion) {
let mut ct = cks.encrypt_without_padding(clear as u64);
let vec_lut = wopbs_key.generate_lut_native_crt(&ct, |x| x);
let id = format!("Shortint WOPBS: {:?}", param);
let id = format!("Shortint WOPBS: {param:?}");
bench_group.bench_function(&id, |b| {
b.iter(|| {
wopbs_key.programmable_bootstrapping_native_crt(&mut ct, &vec_lut);
let _ = wopbs_key.programmable_bootstrapping_native_crt(&mut ct, &vec_lut);
})
});
bench_group.finish();
}
macro_rules! define_server_key_unary_bench_fn (
(method_name:$server_key_method:ident, display_name:$name:ident, $params:expr) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_unary_function(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs| {
let _ = server_key.$server_key_method(lhs);},
$params)
}
}
);
macro_rules! define_server_key_bench_fn (
($server_key_method:ident) => {
(method_name:$server_key_method:ident, display_name:$name:ident, $params:expr) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_function(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
let _ = server_key.$server_key_method(lhs, rhs);},
$params)
}
}
);
macro_rules! define_server_key_scalar_bench_fn (
($server_key_method:ident) => {
(method_name:$server_key_method:ident, display_name:$name:ident, $params:expr) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_scalar_function(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
server_key.$server_key_method(lhs, rhs);
})
let _ = server_key.$server_key_method(lhs, rhs);},
$params)
}
}
);
define_server_key_bench_fn!(unchecked_add);
define_server_key_bench_fn!(unchecked_sub);
define_server_key_bench_fn!(unchecked_mul_lsb);
define_server_key_bench_fn!(unchecked_mul_msb);
define_server_key_bench_fn!(smart_bitand);
define_server_key_bench_fn!(smart_bitor);
define_server_key_bench_fn!(smart_bitxor);
define_server_key_bench_fn!(smart_add);
define_server_key_bench_fn!(smart_sub);
define_server_key_bench_fn!(smart_mul_lsb);
macro_rules! define_server_key_scalar_div_bench_fn (
(method_name:$server_key_method:ident, display_name:$name:ident, $params:expr) => {
fn $server_key_method(c: &mut Criterion) {
bench_server_key_binary_scalar_division_function(
c,
concat!("ServerKey::", stringify!($server_key_method)),
stringify!($name),
|server_key, lhs, rhs| {
let _ = server_key.$server_key_method(lhs, rhs);},
$params)
}
}
);
define_server_key_scalar_bench_fn!(unchecked_scalar_add);
define_server_key_scalar_bench_fn!(unchecked_scalar_mul);
define_server_key_unary_bench_fn!(
method_name: unchecked_neg,
display_name: negation,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: unchecked_add,
display_name: add,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_bench_fn!(
method_name: unchecked_sub,
display_name: sub,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_bench_fn!(
method_name: unchecked_mul_lsb,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_bench_fn!(
method_name: unchecked_mul_msb,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: unchecked_div,
display_name: div,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_bench_fn!(
method_name: smart_bitand,
display_name: bitand,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: smart_bitor,
display_name: bitor,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: smart_bitxor,
display_name: bitxor,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: smart_add,
display_name: add,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: smart_sub,
display_name: sub,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: smart_mul_lsb,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: bitand,
display_name: bitand,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: bitor,
display_name: bitor,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: bitxor,
display_name: bitxor,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: add,
display_name: add,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: sub,
display_name: sub,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: mul,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: div,
display_name: div,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: greater,
display_name: greater,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: greater_or_equal,
display_name: greater_or_equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: less,
display_name: less,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: less_or_equal,
display_name: less_or_equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: equal,
display_name: equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: not_equal,
display_name: not_equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_unary_bench_fn!(
method_name: neg,
display_name: negation,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: unchecked_greater,
display_name: greater_than,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: unchecked_less,
display_name: less_than,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_bench_fn!(
method_name: unchecked_equal,
display_name: equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: unchecked_scalar_add,
display_name: add,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_scalar_bench_fn!(
method_name: unchecked_scalar_sub,
display_name: sub,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_scalar_bench_fn!(
method_name: unchecked_scalar_mul,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_scalar_bench_fn!(
method_name: unchecked_scalar_left_shift,
display_name: left_shift,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: unchecked_scalar_right_shift,
display_name: right_shift,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_div_bench_fn!(
method_name: unchecked_scalar_div,
display_name: div,
&SERVER_KEY_BENCH_PARAMS_EXTENDED
);
define_server_key_scalar_div_bench_fn!(
method_name: unchecked_scalar_mod,
display_name: modulo,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_add,
display_name: add,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_sub,
display_name: sub,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_mul,
display_name: mul,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_left_shift,
display_name: left_shift,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_right_shift,
display_name: right_shift,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_div_bench_fn!(
method_name: scalar_div,
display_name: div,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_div_bench_fn!(
method_name: scalar_mod,
display_name: modulo,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_greater,
display_name: greater,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_greater_or_equal,
display_name: greater_or_equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_less,
display_name: less,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_bench_fn!(
method_name: scalar_less_or_equal,
display_name: less_or_equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_div_bench_fn!(
method_name: scalar_equal,
display_name: equal,
&SERVER_KEY_BENCH_PARAMS
);
define_server_key_scalar_div_bench_fn!(
method_name: scalar_not_equal,
display_name: not_equal,
&SERVER_KEY_BENCH_PARAMS
);
criterion_group!(
arithmetic_operation,
unchecked_add,
unchecked_sub,
unchecked_mul_lsb,
unchecked_mul_msb,
smart_ops,
smart_bitand,
smart_bitor,
smart_bitxor,
smart_add,
smart_sub,
smart_mul_lsb,
);
criterion_group!(
unchecked_ops,
unchecked_neg,
unchecked_add,
unchecked_sub,
unchecked_mul_lsb,
unchecked_mul_msb,
unchecked_div,
unchecked_greater,
unchecked_less,
unchecked_equal,
carry_extract,
// programmable_bootstrapping,
// multivalue_programmable_bootstrapping
//bench_two_block_pbs
//wopbs_v0_norm2_2,
bench_wopbs_param_message_8_norm2_5,
programmable_bootstrapping
);
criterion_group!(
arithmetic_scalar_operation,
unchecked_scalar_ops,
unchecked_scalar_add,
unchecked_scalar_mul,
unchecked_scalar_sub,
unchecked_scalar_div,
unchecked_scalar_mod,
unchecked_scalar_left_shift,
unchecked_scalar_right_shift,
);
criterion_main!(arithmetic_operation,); // arithmetic_scalar_operation,);
criterion_group!(
default_ops,
neg,
bitand,
bitor,
bitxor,
add,
sub,
div,
mul,
greater,
greater_or_equal,
less,
less_or_equal,
equal,
not_equal
);
criterion_group!(
default_scalar_ops,
scalar_add,
scalar_sub,
scalar_div,
scalar_mul,
scalar_mod,
scalar_left_shift,
scalar_right_shift,
scalar_greater,
scalar_greater_or_equal,
scalar_less,
scalar_less_or_equal,
scalar_equal,
scalar_not_equal
);
mod casting;
criterion_group!(
casting,
casting::pack_cast_64,
casting::pack_cast,
casting::cast
);
fn main() {
fn default_bench() {
casting();
default_ops();
default_scalar_ops();
}
match env::var("__TFHE_RS_BENCH_OP_FLAVOR") {
Ok(val) => {
match val.to_lowercase().as_str() {
"default" => default_bench(),
"smart" => smart_ops(),
"unchecked" => {
unchecked_ops();
unchecked_scalar_ops();
}
_ => panic!("unknown benchmark operations flavor"),
};
}
Err(_) => default_bench(),
};
Criterion::default().configure_from_args().final_summary();
}


@@ -0,0 +1,137 @@
use crate::utilities::{write_to_json, OperatorType};
use tfhe::shortint::prelude::*;
use rayon::prelude::*;
use criterion::Criterion;
pub fn pack_cast_64(c: &mut Criterion) {
let bench_name = "pack_cast_64";
let mut bench_group = c.benchmark_group(bench_name);
let (client_key_1, server_key_1): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let (client_key_2, server_key_2): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_2_CARRY_2_KS_PBS);
let ks_param = PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS;
let ks_param_name = "PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS";
let ksk = KeySwitchingKey::new(
(&client_key_1, &server_key_1),
(&client_key_2, &server_key_2),
ks_param,
);
let vec_ct = vec![client_key_1.encrypt(1); 64];
let bench_id = format!("{bench_name}_{ks_param_name}");
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
let _ = (0..32)
.into_par_iter()
.map(|i| {
let byte_idx = 7 - i / 4;
let pair_idx = i % 4;
let b0 = &vec_ct[8 * byte_idx + 2 * pair_idx];
let b1 = &vec_ct[8 * byte_idx + 2 * pair_idx + 1];
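// Comment added for clarity: each output block packs two 1_1 bits of the same byte
// as b0 + 2 * b1, then key-switches the packed block to the 2_2 parameter set.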
ksk.cast(
&server_key_1.unchecked_add(b0, &server_key_1.unchecked_scalar_mul(b1, 2)),
)
})
.collect::<Vec<_>>();
});
});
write_to_json::<u64, _>(
&bench_id,
ks_param,
ks_param_name,
"pack_cast_64",
&OperatorType::Atomic,
0,
vec![],
);
}
pub fn pack_cast(c: &mut Criterion) {
let bench_name = "pack_cast";
let mut bench_group = c.benchmark_group(bench_name);
let (client_key_1, server_key_1): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let (client_key_2, server_key_2): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_2_CARRY_2_KS_PBS);
let ks_param = PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS;
let ks_param_name = "PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS";
let ksk = KeySwitchingKey::new(
(&client_key_1, &server_key_1),
(&client_key_2, &server_key_2),
ks_param,
);
let ct_1 = client_key_1.encrypt(1);
let ct_2 = client_key_1.encrypt(1);
let bench_id = format!("{bench_name}_{ks_param_name}");
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
let _ = ksk.cast(
&server_key_1.unchecked_add(&ct_1, &server_key_1.unchecked_scalar_mul(&ct_2, 2)),
);
});
});
write_to_json::<u64, _>(
&bench_id,
ks_param,
ks_param_name,
"pack_cast",
&OperatorType::Atomic,
0,
vec![],
);
}
pub fn cast(c: &mut Criterion) {
let bench_name = "cast";
let mut bench_group = c.benchmark_group(bench_name);
let (client_key_1, server_key_1): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_1_CARRY_1_KS_PBS);
let (client_key_2, server_key_2): (ClientKey, ServerKey) =
gen_keys(PARAM_MESSAGE_2_CARRY_2_KS_PBS);
let ks_param = PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS;
let ks_param_name = "PARAM_KEYSWITCH_1_1_KS_PBS_TO_2_2_KS_PBS";
let ksk = KeySwitchingKey::new(
(&client_key_1, &server_key_1),
(&client_key_2, &server_key_2),
ks_param,
);
let ct = client_key_1.encrypt(1);
let bench_id = format!("{bench_name}_{ks_param_name}");
bench_group.bench_function(&bench_id, |b| {
b.iter(|| {
let _ = ksk.cast(&ct);
});
});
write_to_json::<u64, _>(
&bench_id,
ks_param,
ks_param_name,
"cast",
&OperatorType::Atomic,
0,
vec![],
);
}

tfhe/benches/utilities.rs (new file, 231 lines added)

@@ -0,0 +1,231 @@
use serde::Serialize;
use std::fs;
use std::path::PathBuf;
#[cfg(feature = "boolean")]
use tfhe::boolean::parameters::BooleanParameters;
use tfhe::core_crypto::prelude::*;
#[cfg(feature = "shortint")]
use tfhe::shortint::parameters::ShortintKeySwitchingParameters;
#[cfg(feature = "shortint")]
use tfhe::shortint::PBSParameters;
#[derive(Clone, Copy, Default, Serialize)]
pub struct CryptoParametersRecord<Scalar: UnsignedInteger> {
pub lwe_dimension: Option<LweDimension>,
pub glwe_dimension: Option<GlweDimension>,
pub polynomial_size: Option<PolynomialSize>,
pub lwe_modular_std_dev: Option<StandardDev>,
pub glwe_modular_std_dev: Option<StandardDev>,
pub pbs_base_log: Option<DecompositionBaseLog>,
pub pbs_level: Option<DecompositionLevelCount>,
pub ks_base_log: Option<DecompositionBaseLog>,
pub ks_level: Option<DecompositionLevelCount>,
pub pfks_level: Option<DecompositionLevelCount>,
pub pfks_base_log: Option<DecompositionBaseLog>,
pub pfks_modular_std_dev: Option<StandardDev>,
pub cbs_level: Option<DecompositionLevelCount>,
pub cbs_base_log: Option<DecompositionBaseLog>,
pub message_modulus: Option<usize>,
pub carry_modulus: Option<usize>,
pub ciphertext_modulus: Option<CiphertextModulus<Scalar>>,
}
#[cfg(feature = "boolean")]
impl<Scalar: UnsignedInteger> From<BooleanParameters> for CryptoParametersRecord<Scalar> {
fn from(params: BooleanParameters) -> Self {
CryptoParametersRecord {
lwe_dimension: Some(params.lwe_dimension),
glwe_dimension: Some(params.glwe_dimension),
polynomial_size: Some(params.polynomial_size),
lwe_modular_std_dev: Some(params.lwe_modular_std_dev),
glwe_modular_std_dev: Some(params.glwe_modular_std_dev),
pbs_base_log: Some(params.pbs_base_log),
pbs_level: Some(params.pbs_level),
ks_base_log: Some(params.ks_base_log),
ks_level: Some(params.ks_level),
pfks_level: None,
pfks_base_log: None,
pfks_modular_std_dev: None,
cbs_level: None,
cbs_base_log: None,
message_modulus: None,
carry_modulus: None,
ciphertext_modulus: Some(CiphertextModulus::<Scalar>::new_native()),
}
}
}
#[cfg(feature = "shortint")]
impl<Scalar> From<PBSParameters> for CryptoParametersRecord<Scalar>
where
Scalar: UnsignedInteger + CastInto<u128>,
{
fn from(params: PBSParameters) -> Self {
CryptoParametersRecord {
lwe_dimension: Some(params.lwe_dimension()),
glwe_dimension: Some(params.glwe_dimension()),
polynomial_size: Some(params.polynomial_size()),
lwe_modular_std_dev: Some(params.lwe_modular_std_dev()),
glwe_modular_std_dev: Some(params.glwe_modular_std_dev()),
pbs_base_log: Some(params.pbs_base_log()),
pbs_level: Some(params.pbs_level()),
ks_base_log: Some(params.ks_base_log()),
ks_level: Some(params.ks_level()),
pfks_level: None,
pfks_base_log: None,
pfks_modular_std_dev: None,
cbs_level: None,
cbs_base_log: None,
message_modulus: Some(params.message_modulus().0),
carry_modulus: Some(params.carry_modulus().0),
ciphertext_modulus: Some(
params
.ciphertext_modulus()
.try_to()
.expect("failed to convert ciphertext modulus"),
),
}
}
}
#[cfg(feature = "shortint")]
impl<Scalar: UnsignedInteger> From<ShortintKeySwitchingParameters>
for CryptoParametersRecord<Scalar>
{
fn from(params: ShortintKeySwitchingParameters) -> Self {
CryptoParametersRecord {
lwe_dimension: None,
glwe_dimension: None,
polynomial_size: None,
lwe_modular_std_dev: None,
glwe_modular_std_dev: None,
pbs_base_log: None,
pbs_level: None,
ks_base_log: Some(params.ks_base_log),
ks_level: Some(params.ks_level),
pfks_level: None,
pfks_base_log: None,
pfks_modular_std_dev: None,
cbs_level: None,
cbs_base_log: None,
message_modulus: None,
carry_modulus: None,
ciphertext_modulus: None,
}
}
}
#[derive(Serialize)]
enum PolynomialMultiplication {
Fft,
// Ntt,
}
#[derive(Serialize)]
enum IntegerRepresentation {
Radix,
// Crt,
// Hybrid,
}
#[derive(Serialize)]
enum ExecutionType {
Sequential,
Parallel,
}
#[derive(Serialize)]
enum KeySetType {
Single,
// Multi,
}
#[derive(Serialize)]
enum OperandType {
CipherText,
PlainText,
}
#[derive(Clone, Serialize)]
pub enum OperatorType {
Atomic,
// AtomicPattern,
}
#[derive(Serialize)]
struct BenchmarkParametersRecord<Scalar: UnsignedInteger> {
display_name: String,
crypto_parameters_alias: String,
crypto_parameters: CryptoParametersRecord<Scalar>,
message_modulus: Option<usize>,
carry_modulus: Option<usize>,
ciphertext_modulus: usize,
bit_size: u32,
polynomial_multiplication: PolynomialMultiplication,
precision: u32,
error_probability: f64,
integer_representation: IntegerRepresentation,
decomposition_basis: Vec<u32>,
pbs_algorithm: Option<String>,
execution_type: ExecutionType,
key_set_type: KeySetType,
operand_type: OperandType,
operator_type: OperatorType,
}
/// Writes benchmarks parameters to disk in JSON format.
pub fn write_to_json<
Scalar: UnsignedInteger + Serialize,
T: Into<CryptoParametersRecord<Scalar>>,
>(
bench_id: &str,
params: T,
params_alias: impl Into<String>,
display_name: impl Into<String>,
operator_type: &OperatorType,
bit_size: u32,
decomposition_basis: Vec<u32>,
) {
let params = params.into();
let execution_type = match bench_id.contains("parallelized") {
true => ExecutionType::Parallel,
false => ExecutionType::Sequential,
};
let operand_type = match bench_id.contains("scalar") {
true => OperandType::PlainText,
false => OperandType::CipherText,
};
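// Comment added for clarity: execution and operand types are inferred from naming
// conventions in the bench id ("parallelized" => parallel execution, "scalar" => plaintext
// operand); renaming a benchmark therefore changes the recorded metadata.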
let record = BenchmarkParametersRecord {
display_name: display_name.into(),
crypto_parameters_alias: params_alias.into(),
crypto_parameters: params.to_owned(),
message_modulus: params.message_modulus,
carry_modulus: params.carry_modulus,
ciphertext_modulus: 64,
bit_size,
polynomial_multiplication: PolynomialMultiplication::Fft,
precision: (params.message_modulus.unwrap_or(2) as u32).ilog2(),
error_probability: 2f64.powf(-41.0),
integer_representation: IntegerRepresentation::Radix,
decomposition_basis,
pbs_algorithm: None, // To be added in future version
execution_type,
key_set_type: KeySetType::Single,
operand_type,
operator_type: operator_type.to_owned(),
};
let mut params_directory = ["benchmarks_parameters", bench_id]
.iter()
.collect::<PathBuf>();
fs::create_dir_all(&params_directory).unwrap();
params_directory.push("parameters.json");
fs::write(params_directory, serde_json::to_string(&record).unwrap()).unwrap();
}
// Empty main to please clippy.
#[allow(dead_code)]
pub fn main() {}


@@ -1,10 +1,24 @@
// tfhe/build.rs
#[cfg(feature = "__c_api")]
fn gen_c_api() {
use std::env;
use std::path::PathBuf;
if std::env::var("_CBINDGEN_IS_RUNNING").is_ok() {
return;
}
fn get_build_profile_name() -> String {
// The profile name is the 4th-to-last path component (what `nth_back(3)` returns),
// e.g. "cli" in /code/core/target/cli/build/my-build-info-9f91ba6f99d7a061/out
let out_dir = std::env::var("OUT_DIR")
.expect("OUT_DIR is not set, cannot determine build profile, aborting");
out_dir
.split(std::path::MAIN_SEPARATOR)
.nth_back(3)
.expect("Cannot determine build profile, aborting")
.to_string()
}
/// Find the location of the `target/` directory. Note that this may be
/// overridden by `cmake`, so we also need to check the `CARGO_TARGET_DIR`
/// variable.
@@ -12,7 +26,8 @@ fn gen_c_api() {
if let Ok(target) = env::var("CARGO_TARGET_DIR") {
PathBuf::from(target)
} else {
PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap()).join("../target/release")
PathBuf::from(env::var("CARGO_MANIFEST_DIR").unwrap())
.join(format!("../target/{}", get_build_profile_name()))
}
}
@@ -24,7 +39,35 @@ fn gen_c_api() {
.display()
.to_string();
cbindgen::generate(crate_dir)
let parse_expand_features_vec = vec![
#[cfg(feature = "__c_api")]
"__c_api",
#[cfg(feature = "boolean-c-api")]
"boolean-c-api",
#[cfg(feature = "shortint-c-api")]
"shortint-c-api",
#[cfg(feature = "high-level-c-api")]
"high-level-c-api",
#[cfg(feature = "boolean")]
"boolean",
#[cfg(feature = "shortint")]
"shortint",
#[cfg(feature = "integer")]
"integer",
];
let parse_expand_vec = if parse_expand_features_vec.is_empty() {
vec![]
} else {
vec![package_name.as_str()]
};
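// Comment added for clarity: `with_parse_expand` asks cbindgen to expand macros in the
// listed crates (via `cargo expand`) so that macro-generated C API items are discovered,
// and `with_parse_expand_features` forwards the feature set enabled above to that expansion.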
cbindgen::Builder::new()
.with_crate(crate_dir.clone())
.with_config(cbindgen::Config::from_root_or_default(crate_dir))
.with_parse_expand(&parse_expand_vec)
.with_parse_expand_features(&parse_expand_features_vec)
.generate()
.unwrap()
.write_to_file(output_file);
}


@@ -2,7 +2,10 @@ project(tfhe-c-api-tests)
cmake_minimum_required(VERSION 3.16)
set(TFHE_C_API_RELEASE "${CMAKE_CURRENT_SOURCE_DIR}/../../target/release/")
if(NOT CARGO_PROFILE)
set(CARGO_PROFILE release)
endif()
set(TFHE_C_API_RELEASE "${CMAKE_CURRENT_SOURCE_DIR}/../../target/${CARGO_PROFILE}")
include_directories(${TFHE_C_API_RELEASE})
add_library(Tfhe STATIC IMPORTED)


@@ -11,6 +11,9 @@ void test_default_keygen_w_serde(void) {
BooleanCiphertext *ct = NULL;
Buffer ct_ser_buffer = {.pointer = NULL, .length = 0};
BooleanCiphertext *deser_ct = NULL;
BooleanCompressedCiphertext *cct = NULL;
BooleanCompressedCiphertext *deser_cct = NULL;
BooleanCiphertext *decompressed_ct = NULL;
int gen_keys_ok = boolean_gen_keys_with_default_parameters(&cks, &sks);
assert(gen_keys_ok == 0);
@@ -37,10 +40,34 @@ void test_default_keygen_w_serde(void) {
assert(result == true);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
destroy_boolean_ciphertext(ct);
destroy_boolean_ciphertext(deser_ct);
int c_encrypt_ok = boolean_client_key_encrypt_compressed(cks, true, &cct);
assert(c_encrypt_ok == 0);
int c_ser_ok = boolean_serialize_compressed_ciphertext(cct, &ct_ser_buffer);
assert(c_ser_ok == 0);
deser_view.pointer = ct_ser_buffer.pointer;
deser_view.length = ct_ser_buffer.length;
int c_deser_ok = boolean_deserialize_compressed_ciphertext(deser_view, &deser_cct);
assert(c_deser_ok == 0);
int decomp_ok = boolean_decompress_ciphertext(cct, &decompressed_ct);
assert(decomp_ok == 0);
bool c_result = false;
int c_decrypt_ok = boolean_client_key_decrypt(cks, decompressed_ct, &c_result);
assert(c_decrypt_ok == 0);
assert(c_result == true);
boolean_destroy_client_key(cks);
boolean_destroy_server_key(sks);
boolean_destroy_ciphertext(ct);
boolean_destroy_ciphertext(deser_ct);
boolean_destroy_compressed_ciphertext(cct);
boolean_destroy_compressed_ciphertext(deser_cct);
boolean_destroy_ciphertext(decompressed_ct);
destroy_buffer(&ct_ser_buffer);
}
@@ -48,50 +75,52 @@ void test_predefined_keygen_w_serde(void) {
BooleanClientKey *cks = NULL;
BooleanServerKey *sks = NULL;
int gen_keys_ok = boolean_gen_keys_with_predefined_parameters_set(
BOOLEAN_PARAMETERS_SET_DEFAULT_PARAMETERS, &cks, &sks);
int gen_keys_ok =
boolean_gen_keys_with_parameters(BOOLEAN_PARAMETERS_SET_DEFAULT_PARAMETERS, &cks, &sks);
assert(gen_keys_ok == 0);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
boolean_destroy_client_key(cks);
boolean_destroy_server_key(sks);
gen_keys_ok = boolean_gen_keys_with_predefined_parameters_set(
BOOLEAN_PARAMETERS_SET_THFE_LIB_PARAMETERS, &cks, &sks);
gen_keys_ok =
boolean_gen_keys_with_parameters(BOOLEAN_PARAMETERS_SET_TFHE_LIB_PARAMETERS, &cks, &sks);
assert(gen_keys_ok == 0);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
boolean_destroy_client_key(cks);
boolean_destroy_server_key(sks);
}
void test_custom_keygen(void) {
BooleanClientKey *cks = NULL;
BooleanServerKey *sks = NULL;
BooleanParameters *params = NULL;
int params_ok = boolean_create_parameters(10, 1, 1024, 10e-100, 10e-100, 3, 1, 4, 2, &params);
assert(params_ok == 0);
BooleanParameters params = {
.lwe_dimension = 10,
.glwe_dimension = 1,
.polynomial_size = 1024,
.lwe_modular_std_dev = 10e-100,
.glwe_modular_std_dev = 10e-100,
.pbs_base_log = 3,
.pbs_level = 1,
.ks_base_log = 4,
.ks_level = 2,
};
int gen_keys_ok = boolean_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
destroy_boolean_parameters(params);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
boolean_destroy_client_key(cks);
boolean_destroy_server_key(sks);
}
void test_public_keygen(void) {
BooleanClientKey *cks = NULL;
BooleanPublicKey *pks = NULL;
BooleanParameters *params = NULL;
BooleanCiphertext *ct = NULL;
int get_params_ok = boolean_get_parameters(BOOLEAN_PARAMETERS_SET_DEFAULT_PARAMETERS, &params);
assert(get_params_ok == 0);
int gen_keys_ok = boolean_gen_client_key(params, &cks);
int gen_keys_ok = boolean_gen_client_key(BOOLEAN_PARAMETERS_SET_DEFAULT_PARAMETERS, &cks);
assert(gen_keys_ok == 0);
int gen_pks = boolean_gen_public_key(cks, &pks);
@@ -108,10 +137,9 @@ void test_public_keygen(void) {
assert(result == true);
destroy_boolean_parameters(params);
destroy_boolean_client_key(cks);
destroy_boolean_public_key(pks);
destroy_boolean_ciphertext(ct);
boolean_destroy_client_key(cks);
boolean_destroy_public_key(pks);
boolean_destroy_ciphertext(ct);
}
int main(void) {


@@ -51,9 +51,9 @@ void test_binary_boolean_function(BooleanClientKey *cks, BooleanServerKey *sks,
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_left);
destroy_boolean_ciphertext(ct_right);
destroy_boolean_ciphertext(ct_result);
boolean_destroy_ciphertext(ct_left);
boolean_destroy_ciphertext(ct_right);
boolean_destroy_ciphertext(ct_result);
}
}
}
@@ -103,8 +103,8 @@ void test_binary_boolean_function_assign(
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_left_and_result);
destroy_boolean_ciphertext(ct_right);
boolean_destroy_ciphertext(ct_left_and_result);
boolean_destroy_ciphertext(ct_right);
}
}
}
@@ -139,8 +139,8 @@ void test_binary_boolean_function_scalar(BooleanClientKey *cks, BooleanServerKey
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_left);
destroy_boolean_ciphertext(ct_result);
boolean_destroy_ciphertext(ct_left);
boolean_destroy_ciphertext(ct_result);
}
}
}
@@ -171,7 +171,7 @@ void test_binary_boolean_function_scalar_assign(BooleanClientKey *cks, BooleanSe
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_left_and_result);
boolean_destroy_ciphertext(ct_left_and_result);
}
}
}
@@ -205,8 +205,8 @@ void test_not(BooleanClientKey *cks, BooleanServerKey *sks) {
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_in);
destroy_boolean_ciphertext(ct_result);
boolean_destroy_ciphertext(ct_in);
boolean_destroy_ciphertext(ct_result);
}
}
}
@@ -239,7 +239,7 @@ void test_not_assign(BooleanClientKey *cks, BooleanServerKey *sks) {
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_in_and_result);
boolean_destroy_ciphertext(ct_in_and_result);
}
}
}
@@ -300,10 +300,10 @@ void test_mux(BooleanClientKey *cks, BooleanServerKey *sks) {
assert(decrypted_result == expected);
destroy_boolean_ciphertext(ct_cond);
destroy_boolean_ciphertext(ct_then);
destroy_boolean_ciphertext(ct_else);
destroy_boolean_ciphertext(ct_result);
boolean_destroy_ciphertext(ct_cond);
boolean_destroy_ciphertext(ct_then);
boolean_destroy_ciphertext(ct_else);
boolean_destroy_ciphertext(ct_result);
}
}
}
@@ -326,19 +326,37 @@ bool c_xnor(bool left, bool right) { return !c_xor(left, right); }
void test_server_key(void) {
BooleanClientKey *cks = NULL;
BooleanCompressedServerKey *csks = NULL;
BooleanServerKey *sks = NULL;
Buffer cks_ser_buffer = {.pointer = NULL, .length = 0};
BooleanClientKey *deser_cks = NULL;
Buffer csks_ser_buffer = {.pointer = NULL, .length = 0};
BooleanCompressedServerKey *deser_csks = NULL;
Buffer sks_ser_buffer = {.pointer = NULL, .length = 0};
BooleanServerKey *deser_sks = NULL;
int gen_keys_ok = boolean_gen_keys_with_default_parameters(&cks, &sks);
assert(gen_keys_ok == 0);
int gen_cks_ok = boolean_gen_client_key(BOOLEAN_PARAMETERS_SET_DEFAULT_PARAMETERS, &cks);
assert(gen_cks_ok == 0);
int gen_csks_ok = boolean_gen_compressed_server_key(cks, &csks);
assert(gen_csks_ok == 0);
int ser_csks_ok = boolean_serialize_compressed_server_key(csks, &csks_ser_buffer);
assert(ser_csks_ok == 0);
BufferView deser_view = {.pointer = csks_ser_buffer.pointer, .length = csks_ser_buffer.length};
int deser_csks_ok = boolean_deserialize_compressed_server_key(deser_view, &deser_csks);
assert(deser_csks_ok == 0);
int decompress_csks_ok = boolean_decompress_server_key(deser_csks, &sks);
assert(decompress_csks_ok == 0);
int ser_cks_ok = boolean_serialize_client_key(cks, &cks_ser_buffer);
assert(ser_cks_ok == 0);
BufferView deser_view = {.pointer = cks_ser_buffer.pointer, .length = cks_ser_buffer.length};
deser_view.pointer = cks_ser_buffer.pointer;
deser_view.length = cks_ser_buffer.length;
int deser_cks_ok = boolean_deserialize_client_key(deser_view, &deser_cks);
assert(deser_cks_ok == 0);
@@ -389,11 +407,14 @@ void test_server_key(void) {
test_binary_boolean_function_scalar_assign(deser_cks, deser_sks, c_xnor,
boolean_server_key_xnor_scalar_assign);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
destroy_boolean_client_key(deser_cks);
destroy_boolean_server_key(deser_sks);
boolean_destroy_client_key(cks);
boolean_destroy_compressed_server_key(csks);
boolean_destroy_server_key(sks);
boolean_destroy_client_key(deser_cks);
boolean_destroy_compressed_server_key(deser_csks);
boolean_destroy_server_key(deser_sks);
destroy_buffer(&cks_ser_buffer);
destroy_buffer(&csks_ser_buffer);
destroy_buffer(&sks_ser_buffer);
}


@@ -0,0 +1,123 @@
#include <tfhe.h>
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
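// High-level C API tests for FheUint128: each helper encrypts two 128-bit
// values given as U128 word pairs {w0, w1} (with the client key, trivially,
// or with the public key), runs a homomorphic sub or add, and checks both
// 64-bit words of the decrypted result against the clear computation.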
int uint128_client_key(const ClientKey *client_key) {
int ok;
FheUint128 *lhs = NULL;
FheUint128 *rhs = NULL;
FheUint128 *result = NULL;
U128 lhs_clear = {10, 20};
U128 rhs_clear = {1, 2};
U128 result_clear = {0};
ok = fhe_uint128_try_encrypt_with_client_key_u128(lhs_clear, client_key, &lhs);
assert(ok == 0);
ok = fhe_uint128_try_encrypt_with_client_key_u128(rhs_clear, client_key, &rhs);
assert(ok == 0);
ok = fhe_uint128_sub(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint128_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 9);
assert(result_clear.w1 == 18);
fhe_uint128_destroy(lhs);
fhe_uint128_destroy(rhs);
fhe_uint128_destroy(result);
return ok;
}
int uint128_encrypt_trivial(const ClientKey *client_key) {
int ok;
FheUint128 *lhs = NULL;
FheUint128 *rhs = NULL;
FheUint128 *result = NULL;
U128 lhs_clear = {10, 20};
U128 rhs_clear = {1, 2};
U128 result_clear = {0};
ok = fhe_uint128_try_encrypt_trivial_u128(lhs_clear, &lhs);
assert(ok == 0);
ok = fhe_uint128_try_encrypt_trivial_u128(rhs_clear, &rhs);
assert(ok == 0);
ok = fhe_uint128_sub(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint128_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 9);
assert(result_clear.w1 == 18);
fhe_uint128_destroy(lhs);
fhe_uint128_destroy(rhs);
fhe_uint128_destroy(result);
return ok;
}
int uint128_public_key(const ClientKey *client_key, const PublicKey *public_key) {
int ok;
FheUint128 *lhs = NULL;
FheUint128 *rhs = NULL;
FheUint128 *result = NULL;
U128 lhs_clear = {10, 20};
U128 rhs_clear = {1, 2};
U128 result_clear = {0};
ok = fhe_uint128_try_encrypt_with_public_key_u128(lhs_clear, public_key, &lhs);
assert(ok == 0);
ok = fhe_uint128_try_encrypt_with_public_key_u128(rhs_clear, public_key, &rhs);
assert(ok == 0);
ok = fhe_uint128_add(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint128_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 11);
assert(result_clear.w1 == 22);
fhe_uint128_destroy(lhs);
fhe_uint128_destroy(rhs);
fhe_uint128_destroy(result);
return ok;
}
int main(void) {
int ok = 0;
ConfigBuilder *builder;
Config *config;
config_builder_all_disabled(&builder);
config_builder_enable_default_integers_small(&builder);
config_builder_build(builder, &config);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
PublicKey *public_key = NULL;
generate_keys(config, &client_key, &server_key);
public_key_new(client_key, &public_key);
set_server_key(server_key);
uint128_client_key(client_key);
uint128_encrypt_trivial(client_key);
uint128_public_key(client_key, public_key);
client_key_destroy(client_key);
public_key_destroy(public_key);
server_key_destroy(server_key);
return ok;
}


@@ -0,0 +1,139 @@
#include <tfhe.h>
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
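// High-level C API tests for FheUint256: values are U256 word quadruples
// {w0, w1, w2, w3}. The client-key test additionally casts the 256-bit sum
// down to an FheUint64 and checks the decrypted low word.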
int uint256_client_key(const ClientKey *client_key) {
int ok;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
FheUint64 *cast_result = NULL;
U256 lhs_clear = {1, 2, 3, 4};
U256 rhs_clear = {5, 6, 7, 8};
U256 result_clear = {0};
ok = fhe_uint256_try_encrypt_with_client_key_u256(lhs_clear, client_key, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_with_client_key_u256(rhs_clear, client_key, &rhs);
assert(ok == 0);
ok = fhe_uint256_add(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 6);
assert(result_clear.w1 == 8);
assert(result_clear.w2 == 10);
assert(result_clear.w3 == 12);
// try some casting
ok = fhe_uint256_cast_into_fhe_uint64(result, &cast_result);
assert(ok == 0);
uint64_t u64_clear;
ok = fhe_uint64_decrypt(cast_result, client_key, &u64_clear);
assert(ok == 0);
assert(u64_clear == 6);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
fhe_uint64_destroy(cast_result);
return ok;
}
int uint256_encrypt_trivial(const ClientKey *client_key) {
int ok;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
U256 lhs_clear = {1, 2, 3, 4};
U256 rhs_clear = {5, 6, 7, 8};
U256 result_clear = {0};
ok = fhe_uint256_try_encrypt_trivial_u256(lhs_clear, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_trivial_u256(rhs_clear, &rhs);
assert(ok == 0);
ok = fhe_uint256_add(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 6);
assert(result_clear.w1 == 8);
assert(result_clear.w2 == 10);
assert(result_clear.w3 == 12);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
return ok;
}
int uint256_public_key(const ClientKey *client_key, const PublicKey *public_key) {
int ok;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
U256 lhs_clear = {5, 6, 7, 8};
U256 rhs_clear = {1, 2, 3, 4};
U256 result_clear = {0};
ok = fhe_uint256_try_encrypt_with_public_key_u256(lhs_clear, public_key, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_with_public_key_u256(rhs_clear, public_key, &rhs);
assert(ok == 0);
ok = fhe_uint256_sub(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 4);
assert(result_clear.w1 == 4);
assert(result_clear.w2 == 4);
assert(result_clear.w3 == 4);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
return ok;
}
int main(void) {
int ok = 0;
ConfigBuilder *builder;
Config *config;
config_builder_all_disabled(&builder);
config_builder_enable_default_integers_small(&builder);
config_builder_build(builder, &config);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
PublicKey *public_key = NULL;
generate_keys(config, &client_key, &server_key);
public_key_new(client_key, &public_key);
set_server_key(server_key);
uint256_client_key(client_key);
uint256_encrypt_trivial(client_key);
uint256_public_key(client_key, public_key);
client_key_destroy(client_key);
public_key_destroy(public_key);
server_key_destroy(server_key);
return ok;
}


@@ -0,0 +1,127 @@
#include <tfhe.h>
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
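// High-level C API tests for FheBool: encrypt two booleans with the client
// key, the public key, or trivially, AND them homomorphically, and compare
// the decryption against the clear AND.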
int client_key_test(const ClientKey *client_key) {
int ok;
FheBool *lhs = NULL;
FheBool *rhs = NULL;
FheBool *result = NULL;
bool lhs_clear = 0;
bool rhs_clear = 1;
ok = fhe_bool_try_encrypt_with_client_key_bool(lhs_clear, client_key, &lhs);
assert(ok == 0);
ok = fhe_bool_try_encrypt_with_client_key_bool(rhs_clear, client_key, &rhs);
assert(ok == 0);
ok = fhe_bool_bitand(lhs, rhs, &result);
assert(ok == 0);
bool clear;
ok = fhe_bool_decrypt(result, client_key, &clear);
assert(ok == 0);
assert(clear == (lhs_clear & rhs_clear));
fhe_bool_destroy(lhs);
fhe_bool_destroy(rhs);
fhe_bool_destroy(result);
return ok;
}
int public_key_test(const ClientKey *client_key, const PublicKey *public_key) {
int ok;
FheBool *lhs = NULL;
FheBool *rhs = NULL;
FheBool *result = NULL;
bool lhs_clear = 0;
bool rhs_clear = 1;
ok = fhe_bool_try_encrypt_with_public_key_bool(lhs_clear, public_key, &lhs);
assert(ok == 0);
ok = fhe_bool_try_encrypt_with_public_key_bool(rhs_clear, public_key, &rhs);
assert(ok == 0);
ok = fhe_bool_bitand(lhs, rhs, &result);
assert(ok == 0);
bool clear;
ok = fhe_bool_decrypt(result, client_key, &clear);
assert(ok == 0);
assert(clear == (lhs_clear & rhs_clear));
fhe_bool_destroy(lhs);
fhe_bool_destroy(rhs);
fhe_bool_destroy(result);
return ok;
}
int trivial_encrypt_test(const ClientKey *client_key) {
int ok;
FheBool *lhs = NULL;
FheBool *rhs = NULL;
FheBool *result = NULL;
bool lhs_clear = 0;
bool rhs_clear = 1;
ok = fhe_bool_try_encrypt_trivial_bool(lhs_clear, &lhs);
assert(ok == 0);
ok = fhe_bool_try_encrypt_trivial_bool(rhs_clear, &rhs);
assert(ok == 0);
ok = fhe_bool_bitand(lhs, rhs, &result);
assert(ok == 0);
bool clear;
ok = fhe_bool_decrypt(result, client_key, &clear);
assert(ok == 0);
assert(clear == (lhs_clear & rhs_clear));
fhe_bool_destroy(lhs);
fhe_bool_destroy(rhs);
fhe_bool_destroy(result);
return ok;
}
int main(void) {
ConfigBuilder *builder;
Config *config;
config_builder_all_disabled(&builder);
config_builder_enable_default_bool(&builder);
config_builder_build(builder, &config);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
PublicKey *public_key = NULL;
generate_keys(config, &client_key, &server_key);
public_key_new(client_key, &public_key);
set_server_key(server_key);
client_key_test(client_key);
public_key_test(client_key, public_key);
trivial_encrypt_test(client_key);
client_key_destroy(client_key);
public_key_destroy(public_key);
server_key_destroy(server_key);
return EXIT_SUCCESS;
}


@@ -0,0 +1,217 @@
#include <tfhe.h>
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
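// High-level C API tests for FheUint256 using a CompressedCompactPublicKey:
// the key is decompressed, then used either through a CompactFheUint256List
// (encrypt a list, expand it, compute on the expanded ciphertexts) or to
// encrypt single values directly. main() runs the tests with both the
// COMPACT_PK_KS_PBS and COMPACT_PK_PBS_KS parameter sets.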
int uint256_client_key(const ClientKey *client_key) {
int ok;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
FheUint64 *cast_result = NULL;
U256 lhs_clear = {1, 2, 3, 4};
U256 rhs_clear = {5, 6, 7, 8};
U256 result_clear = {0};
ok = fhe_uint256_try_encrypt_with_client_key_u256(lhs_clear, client_key, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_with_client_key_u256(rhs_clear, client_key, &rhs);
assert(ok == 0);
ok = fhe_uint256_add(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 6);
assert(result_clear.w1 == 8);
assert(result_clear.w2 == 10);
assert(result_clear.w3 == 12);
// try some casting
ok = fhe_uint256_cast_into_fhe_uint64(result, &cast_result);
assert(ok == 0);
uint64_t u64_clear;
ok = fhe_uint64_decrypt(cast_result, client_key, &u64_clear);
assert(ok == 0);
assert(u64_clear == 6);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
fhe_uint64_destroy(cast_result);
return ok;
}
int uint256_encrypt_trivial(const ClientKey *client_key) {
int ok;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
U256 lhs_clear = {1, 2, 3, 4};
U256 rhs_clear = {5, 6, 7, 8};
U256 result_clear = {0};
ok = fhe_uint256_try_encrypt_trivial_u256(lhs_clear, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_trivial_u256(rhs_clear, &rhs);
assert(ok == 0);
ok = fhe_uint256_add(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 6);
assert(result_clear.w1 == 8);
assert(result_clear.w2 == 10);
assert(result_clear.w3 == 12);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
return ok;
}
int uint256_public_key(const ClientKey *client_key,
const CompressedCompactPublicKey *compressed_public_key) {
int ok;
CompactPublicKey *public_key = NULL;
FheUint256 *lhs = NULL;
FheUint256 *rhs = NULL;
FheUint256 *result = NULL;
CompactFheUint256List *list = NULL;
U256 result_clear = {0};
U256 clears[2] = {{5, 6, 7, 8}, {1, 2, 3, 4}};
ok = compressed_compact_public_key_decompress(compressed_public_key, &public_key);
assert(ok == 0);
// Compact list example
{
ok = compact_fhe_uint256_list_try_encrypt_with_compact_public_key_u256(&clears[0], 2,
public_key, &list);
assert(ok == 0);
size_t len = 0;
ok = compact_fhe_uint256_list_len(list, &len);
assert(ok == 0);
assert(len == 2);
FheUint256 *expand_output[2] = {NULL};
ok = compact_fhe_uint256_list_expand(list, &expand_output[0], 2);
assert(ok == 0);
// transfer ownership
lhs = expand_output[0];
rhs = expand_output[1];
// We can destroy the compact list
// The expanded ciphertexts are independent from it
compact_fhe_uint256_list_destroy(list);
ok = fhe_uint256_sub(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 4);
assert(result_clear.w1 == 4);
assert(result_clear.w2 == 4);
assert(result_clear.w3 == 4);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
}
{
ok = fhe_uint256_try_encrypt_with_compact_public_key_u256(clears[0], public_key, &lhs);
assert(ok == 0);
ok = fhe_uint256_try_encrypt_with_compact_public_key_u256(clears[1], public_key, &rhs);
assert(ok == 0);
ok = fhe_uint256_sub(lhs, rhs, &result);
assert(ok == 0);
ok = fhe_uint256_decrypt(result, client_key, &result_clear);
assert(ok == 0);
assert(result_clear.w0 == 4);
assert(result_clear.w1 == 4);
assert(result_clear.w2 == 4);
assert(result_clear.w3 == 4);
fhe_uint256_destroy(lhs);
fhe_uint256_destroy(rhs);
fhe_uint256_destroy(result);
}
compact_public_key_destroy(public_key);
return ok;
}
int main(void) {
int ok = 0;
{
ConfigBuilder *builder;
Config *config;
config_builder_all_disabled(&builder);
config_builder_enable_custom_integers(&builder,
SHORTINT_PARAM_MESSAGE_2_CARRY_2_COMPACT_PK_KS_PBS);
config_builder_build(builder, &config);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
CompressedCompactPublicKey *compressed_public_key = NULL;
generate_keys(config, &client_key, &server_key);
compressed_compact_public_key_new(client_key, &compressed_public_key);
set_server_key(server_key);
uint256_client_key(client_key);
uint256_encrypt_trivial(client_key);
uint256_public_key(client_key, compressed_public_key);
client_key_destroy(client_key);
compressed_compact_public_key_destroy(compressed_public_key);
server_key_destroy(server_key);
}
{
ConfigBuilder *builder;
Config *config;
config_builder_all_disabled(&builder);
config_builder_enable_custom_integers(&builder,
SHORTINT_PARAM_MESSAGE_2_CARRY_2_COMPACT_PK_PBS_KS);
config_builder_build(builder, &config);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
CompressedCompactPublicKey *compressed_public_key = NULL;
generate_keys(config, &client_key, &server_key);
compressed_compact_public_key_new(client_key, &compressed_public_key);
set_server_key(server_key);
uint256_client_key(client_key);
uint256_encrypt_trivial(client_key);
uint256_public_key(client_key, compressed_public_key);
client_key_destroy(client_key);
compressed_compact_public_key_destroy(compressed_public_key);
server_key_destroy(server_key);
}
return ok;
}


@@ -0,0 +1,212 @@
#include <tfhe.h>
#include <assert.h>
#include <inttypes.h>
#include <stdio.h>
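// High-level C API tests for FheUint8: homomorphic add/sub with client and
// public keys, client-key and ciphertext serialization round-trips, and
// compressed-ciphertext decompression. main() first exercises the default
// integer configuration (including serialization and compression), then the
// "small" default integer configuration.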
int uint8_client_key(const ClientKey *client_key) {
int ok;
FheUint8 *lhs = NULL;
FheUint8 *rhs = NULL;
FheUint8 *result = NULL;
uint8_t lhs_clear = 123;
uint8_t rhs_clear = 14;
ok = fhe_uint8_try_encrypt_with_client_key_u8(lhs_clear, client_key, &lhs);
assert(ok == 0);
ok = fhe_uint8_try_encrypt_with_client_key_u8(rhs_clear, client_key, &rhs);
assert(ok == 0);
ok = fhe_uint8_add(lhs, rhs, &result);
assert(ok == 0);
uint8_t clear;
ok = fhe_uint8_decrypt(result, client_key, &clear);
assert(ok == 0);
assert(clear == (lhs_clear + rhs_clear));
fhe_uint8_destroy(lhs);
fhe_uint8_destroy(rhs);
fhe_uint8_destroy(result);
return ok;
}
int uint8_public_key(const ClientKey *client_key, const PublicKey *public_key) {
int ok;
FheUint8 *lhs = NULL;
FheUint8 *rhs = NULL;
FheUint8 *result = NULL;
uint8_t lhs_clear = 123;
uint8_t rhs_clear = 14;
ok = fhe_uint8_try_encrypt_with_public_key_u8(lhs_clear, public_key, &lhs);
assert(ok == 0);
ok = fhe_uint8_try_encrypt_with_public_key_u8(rhs_clear, public_key, &rhs);
assert(ok == 0);
ok = fhe_uint8_sub(lhs, rhs, &result);
assert(ok == 0);
uint8_t clear;
ok = fhe_uint8_decrypt(result, client_key, &clear);
assert(ok == 0);
assert(clear == (lhs_clear - rhs_clear));
fhe_uint8_destroy(lhs);
fhe_uint8_destroy(rhs);
fhe_uint8_destroy(result);
return ok;
}
int uint8_serialization(const ClientKey *client_key) {
int ok;
FheUint8 *lhs = NULL;
FheUint8 *deserialized_lhs = NULL;
FheUint8 *result = NULL;
Buffer value_buffer = {.pointer = NULL, .length = 0};
Buffer cks_buffer = {.pointer = NULL, .length = 0};
BufferView deser_view = {.pointer = NULL, .length = 0};
ClientKey *deserialized_client_key = NULL;
uint8_t lhs_clear = 123;
ok = client_key_serialize(client_key, &cks_buffer);
assert(ok == 0);
deser_view.pointer = cks_buffer.pointer;
deser_view.length = cks_buffer.length;
ok = client_key_deserialize(deser_view, &deserialized_client_key);
assert(ok == 0);
ok = fhe_uint8_try_encrypt_with_client_key_u8(lhs_clear, deserialized_client_key, &lhs);
assert(ok == 0);
ok = fhe_uint8_serialize(lhs, &value_buffer);
assert(ok == 0);
deser_view.pointer = value_buffer.pointer;
deser_view.length = value_buffer.length;
ok = fhe_uint8_deserialize(deser_view, &deserialized_lhs);
assert(ok == 0);
uint8_t clear;
ok = fhe_uint8_decrypt(deserialized_lhs, deserialized_client_key, &clear);
assert(ok == 0);
assert(clear == lhs_clear);
if (value_buffer.pointer != NULL) {
destroy_buffer(&value_buffer);
}
fhe_uint8_destroy(lhs);
fhe_uint8_destroy(deserialized_lhs);
fhe_uint8_destroy(result);
return ok;
}
int uint8_compressed(const ClientKey *client_key) {
int ok;
FheUint8 *lhs = NULL;
FheUint8 *result = NULL;
CompressedFheUint8 *clhs = NULL;
uint8_t lhs_clear = 123;
ok = compressed_fhe_uint8_try_encrypt_with_client_key_u8(lhs_clear, client_key, &clhs);
assert(ok == 0);
ok = compressed_fhe_uint8_decompress(clhs, &lhs);
assert(ok == 0);
uint8_t clear;
ok = fhe_uint8_decrypt(lhs, client_key, &clear);
assert(ok == 0);
assert(clear == lhs_clear);
fhe_uint8_destroy(lhs);
compressed_fhe_uint8_destroy(clhs);
fhe_uint8_destroy(result);
return ok;
}
int main(void) {
int ok = 0;
{
ConfigBuilder *builder;
Config *config;
ok = config_builder_all_disabled(&builder);
assert(ok == 0);
ok = config_builder_enable_default_integers(&builder);
assert(ok == 0);
ok = config_builder_build(builder, &config);
assert(ok == 0);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
PublicKey *public_key = NULL;
ok = generate_keys(config, &client_key, &server_key);
assert(ok == 0);
ok = public_key_new(client_key, &public_key);
assert(ok == 0);
ok = uint8_serialization(client_key);
assert(ok == 0);
ok = uint8_compressed(client_key);
assert(ok == 0);
ok = set_server_key(server_key);
assert(ok == 0);
ok = uint8_client_key(client_key);
assert(ok == 0);
ok = uint8_public_key(client_key, public_key);
assert(ok == 0);
client_key_destroy(client_key);
public_key_destroy(public_key);
server_key_destroy(server_key);
}
{
ConfigBuilder *builder;
Config *config;
ok = config_builder_all_disabled(&builder);
assert(ok == 0);
ok = config_builder_enable_default_integers_small(&builder);
assert(ok == 0);
ok = config_builder_build(builder, &config);
assert(ok == 0);
ClientKey *client_key = NULL;
ServerKey *server_key = NULL;
PublicKey *public_key = NULL;
ok = generate_keys(config, &client_key, &server_key);
assert(ok == 0);
ok = public_key_new(client_key, &public_key);
assert(ok == 0);
ok = set_server_key(server_key);
assert(ok == 0);
ok = uint8_client_key(client_key);
assert(ok == 0);
ok = uint8_public_key(client_key, public_key);
assert(ok == 0);
client_key_destroy(client_key);
public_key_destroy(public_key);
server_key_destroy(server_key);
}
return ok;
}


@@ -13,8 +13,8 @@ void micro_bench_and() {
// int gen_keys_ok = boolean_gen_keys_with_default_parameters(&cks, &sks);
// assert(gen_keys_ok == 0);
int gen_keys_ok = boolean_gen_keys_with_predefined_parameters_set(
BOOLEAN_PARAMETERS_SET_THFE_LIB_PARAMETERS, &cks, &sks);
int gen_keys_ok =
boolean_gen_keys_with_parameters(BOOLEAN_PARAMETERS_SET_TFHE_LIB_PARAMETERS, &cks, &sks);
assert(gen_keys_ok == 0);
int num_loops = 10000;
@@ -32,7 +32,7 @@ void micro_bench_and() {
for (int idx_loops = 0; idx_loops < num_loops; ++idx_loops) {
BooleanCiphertext *ct_result = NULL;
boolean_server_key_and(sks, ct_left, ct_right, &ct_result);
destroy_boolean_ciphertext(ct_result);
boolean_destroy_ciphertext(ct_result);
}
clock_t stop = clock();
@@ -41,8 +41,10 @@ void micro_bench_and() {
printf("%g ms, mean %g ms\n", elapsed_ms, mean_ms);
destroy_boolean_client_key(cks);
destroy_boolean_server_key(sks);
boolean_destroy_client_key(cks);
boolean_destroy_server_key(sks);
boolean_destroy_ciphertext(ct_left);
boolean_destroy_ciphertext(ct_right);
}
int main(void) {


@@ -8,13 +8,13 @@
void test_predefined_keygen_w_serde(void) {
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintParameters *params = NULL;
ShortintCiphertext *ct = NULL;
Buffer ct_ser_buffer = {.pointer = NULL, .length = 0};
ShortintCiphertext *deser_ct = NULL;
int get_params_ok = shortint_get_parameters(2, 2, &params);
assert(get_params_ok == 0);
ShortintCompressedCiphertext *cct = NULL;
ShortintCompressedCiphertext *deser_cct = NULL;
ShortintCiphertext *decompressed_ct = NULL;
ShortintPBSParameters params = SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS;
int gen_keys_ok = shortint_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
@@ -41,41 +41,93 @@ void test_predefined_keygen_w_serde(void) {
assert(result == 3);
destroy_shortint_client_key(cks);
destroy_shortint_server_key(sks);
destroy_shortint_parameters(params);
destroy_shortint_ciphertext(ct);
destroy_shortint_ciphertext(deser_ct);
int c_encrypt_ok = shortint_client_key_encrypt_compressed(cks, 3, &cct);
assert(c_encrypt_ok == 0);
int c_ser_ok = shortint_serialize_compressed_ciphertext(cct, &ct_ser_buffer);
assert(c_ser_ok == 0);
deser_view.pointer = ct_ser_buffer.pointer;
deser_view.length = ct_ser_buffer.length;
int c_deser_ok = shortint_deserialize_compressed_ciphertext(deser_view, &deser_cct);
assert(c_deser_ok == 0);
int decomp_ok = shortint_decompress_ciphertext(cct, &decompressed_ct);
assert(decomp_ok == 0);
uint64_t c_result = -1;
int c_decrypt_ok = shortint_client_key_decrypt(cks, decompressed_ct, &c_result);
assert(c_decrypt_ok == 0);
assert(c_result == 3);
shortint_destroy_client_key(cks);
shortint_destroy_server_key(sks);
shortint_destroy_ciphertext(ct);
shortint_destroy_ciphertext(deser_ct);
shortint_destroy_compressed_ciphertext(cct);
shortint_destroy_compressed_ciphertext(deser_cct);
shortint_destroy_ciphertext(decompressed_ct);
destroy_buffer(&ct_ser_buffer);
}
void test_server_key_trivial_encrypt(void) {
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintCiphertext *ct = NULL;
ShortintPBSParameters params = SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS;
int gen_keys_ok = shortint_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
int encrypt_ok = shortint_server_key_create_trivial(sks, 3, &ct);
assert(encrypt_ok == 0);
uint64_t result = -1;
int decrypt_ok = shortint_client_key_decrypt(cks, ct, &result);
assert(decrypt_ok == 0);
assert(result == 3);
shortint_destroy_client_key(cks);
shortint_destroy_server_key(sks);
shortint_destroy_ciphertext(ct);
}
void test_custom_keygen(void) {
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintParameters *params = NULL;
int params_ok = shortint_create_parameters(10, 1, 1024, 10e-100, 10e-100, 2, 3, 2, 3, 2, 3,
10e-100, 2, 3, 2, 2, &params);
assert(params_ok == 0);
ShortintPBSParameters params = {
.lwe_dimension = 10,
.glwe_dimension = 1,
.polynomial_size = 1024,
.lwe_modular_std_dev = 10e-100,
.glwe_modular_std_dev = 10e-100,
.pbs_base_log = 2,
.pbs_level = 3,
.ks_base_log = 2,
.ks_level = 3,
.message_modulus = 2,
.carry_modulus = 2,
.modulus_power_of_2_exponent = 64,
.encryption_key_choice = ShortintEncryptionKeyChoiceBig,
};
int gen_keys_ok = shortint_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
destroy_shortint_parameters(params);
destroy_shortint_client_key(cks);
destroy_shortint_server_key(sks);
shortint_destroy_client_key(cks);
shortint_destroy_server_key(sks);
}
void test_public_keygen(void) {
void test_public_keygen(ShortintPBSParameters params) {
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintPublicKey *pks = NULL;
ShortintParameters *params = NULL;
ShortintPublicKey *pks_deser = NULL;
ShortintCiphertext *ct = NULL;
int get_params_ok = shortint_get_parameters(2, 2, &params);
assert(get_params_ok == 0);
Buffer pks_ser_buff = {.pointer = NULL, .length = 0};
int gen_keys_ok = shortint_gen_client_key(params, &cks);
assert(gen_keys_ok == 0);
@@ -83,12 +135,16 @@ void test_public_keygen(void) {
int gen_pks = shortint_gen_public_key(cks, &pks);
assert(gen_pks == 0);
int gen_sks = shortint_gen_server_key(cks, &sks);
assert(gen_sks == 0);
int pks_ser = shortint_serialize_public_key(pks, &pks_ser_buff);
assert(pks_ser == 0);
BufferView pks_ser_buff_view = {.pointer = pks_ser_buff.pointer, .length = pks_ser_buff.length};
int pks_deser_ok = shortint_deserialize_public_key(pks_ser_buff_view, &pks_deser);
assert(pks_deser_ok == 0);
uint64_t msg = 2;
int encrypt_ok = shortint_public_key_encrypt(pks, sks, msg, &ct);
int encrypt_ok = shortint_public_key_encrypt(pks_deser, msg, &ct);
assert(encrypt_ok == 0);
uint64_t result = -1;
@@ -97,16 +153,61 @@ void test_public_keygen(void) {
assert(result == 2);
destroy_shortint_parameters(params);
destroy_shortint_client_key(cks);
destroy_shortint_server_key(sks);
destroy_shortint_public_key(pks);
destroy_shortint_ciphertext(ct);
shortint_destroy_client_key(cks);
shortint_destroy_public_key(pks);
shortint_destroy_public_key(pks_deser);
destroy_buffer(&pks_ser_buff);
shortint_destroy_ciphertext(ct);
}
void test_compressed_public_keygen(ShortintPBSParameters params) {
ShortintClientKey *cks = NULL;
ShortintCompressedPublicKey *cpks = NULL;
ShortintPublicKey *pks = NULL;
ShortintCiphertext *ct = NULL;
int gen_keys_ok = shortint_gen_client_key(params, &cks);
assert(gen_keys_ok == 0);
int gen_cpks = shortint_gen_compressed_public_key(cks, &cpks);
assert(gen_cpks == 0);
uint64_t msg = 2;
int encrypt_compressed_ok = shortint_compressed_public_key_encrypt(cpks, msg, &ct);
assert(encrypt_compressed_ok == 0);
uint64_t result_compressed = -1;
int decrypt_compressed_ok = shortint_client_key_decrypt(cks, ct, &result_compressed);
assert(decrypt_compressed_ok == 0);
assert(result_compressed == 2);
int decompress_ok = shortint_decompress_public_key(cpks, &pks);
assert(decompress_ok == 0);
int encrypt_ok = shortint_public_key_encrypt(pks, msg, &ct);
assert(encrypt_ok == 0);
uint64_t result = -1;
int decrypt_ok = shortint_client_key_decrypt(cks, ct, &result);
assert(decrypt_ok == 0);
assert(result == 2);
shortint_destroy_client_key(cks);
shortint_destroy_compressed_public_key(cpks);
shortint_destroy_public_key(pks);
shortint_destroy_ciphertext(ct);
}
int main(void) {
test_predefined_keygen_w_serde();
test_custom_keygen();
test_public_keygen();
test_public_keygen(SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS);
test_public_keygen(SHORTINT_PARAM_MESSAGE_2_CARRY_2_PBS_KS);
test_compressed_public_keygen(SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS);
test_compressed_public_keygen(SHORTINT_PARAM_MESSAGE_2_CARRY_2_PBS_KS);
test_server_key_trivial_encrypt();
return EXIT_SUCCESS;
}


@@ -5,31 +5,31 @@
#include <stdlib.h>
#include <tgmath.h>
uint64_t double_accumulator_2_bits_message(uint64_t in) { return (in * 2) % 4; }
uint64_t double_lookup_table_2_bits_message(uint64_t in) { return (in * 2) % 4; }
uint64_t get_max_value_of_accumulator_generator(uint64_t (*accumulator_func)(uint64_t),
size_t message_bits) {
uint64_t get_max_value_of_lookup_table_generator(uint64_t (*lookup_table_func)(uint64_t),
size_t message_bits) {
uint64_t max_value = 0;
for (size_t idx = 0; idx < (1 << message_bits); ++idx) {
uint64_t acc_value = accumulator_func((uint64_t)idx);
uint64_t acc_value = lookup_table_func((uint64_t)idx);
max_value = acc_value > max_value ? acc_value : max_value;
}
return max_value;
}
uint64_t product_accumulator_2_bits_encrypted_mul(uint64_t left, uint64_t right) {
uint64_t product_lookup_table_2_bits_encrypted_mul(uint64_t left, uint64_t right) {
return (left * right) % 4;
}
uint64_t get_max_value_of_bivariate_accumulator_generator(uint64_t (*accumulator_func)(uint64_t,
uint64_t),
size_t message_bits_left,
size_t message_bits_right) {
uint64_t get_max_value_of_bivariate_lookup_table_generator(uint64_t (*lookup_table_func)(uint64_t,
uint64_t),
size_t message_bits_left,
size_t message_bits_right) {
uint64_t max_value = 0;
for (size_t idx_left = 0; idx_left < (1 << message_bits_left); ++idx_left) {
for (size_t idx_right = 0; idx_right < (1 << message_bits_right); ++idx_right) {
uint64_t acc_value = accumulator_func((uint64_t)idx_left, (uint64_t)idx_right);
uint64_t acc_value = lookup_table_func((uint64_t)idx_left, (uint64_t)idx_right);
max_value = acc_value > max_value ? acc_value : max_value;
}
}
@@ -38,19 +38,16 @@ uint64_t get_max_value_of_bivariate_accumulator_generator(uint64_t (*accumulator
}
void test_shortint_pbs_2_bits_message(void) {
ShortintPBSAccumulator *accumulator = NULL;
ShortintPBSLookupTable *lookup_table = NULL;
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintParameters *params = NULL;
int get_params_ok = shortint_get_parameters(2, 2, &params);
assert(get_params_ok == 0);
ShortintPBSParameters params = SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS;
int gen_keys_ok = shortint_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
int gen_acc_ok = shortint_server_key_generate_pbs_accumulator(
sks, double_accumulator_2_bits_message, &accumulator);
int gen_acc_ok = shortint_server_key_generate_pbs_lookup_table(
sks, double_lookup_table_2_bits_message, &lookup_table);
assert(gen_acc_ok == 0);
for (int in_idx = 0; in_idx < 4; ++in_idx) {
@@ -68,11 +65,11 @@ void test_shortint_pbs_2_bits_message(void) {
assert(degree == 3);
int pbs_ok = shortint_server_key_programmable_bootstrap(sks, accumulator, ct, &ct_out);
int pbs_ok = shortint_server_key_programmable_bootstrap(sks, lookup_table, ct, &ct_out);
assert(pbs_ok == 0);
size_t degree_to_set =
(size_t)get_max_value_of_accumulator_generator(double_accumulator_2_bits_message, 2);
(size_t)get_max_value_of_lookup_table_generator(double_lookup_table_2_bits_message, 2);
int set_degree_ok = shortint_ciphertext_set_degree(ct_out, degree_to_set);
assert(set_degree_ok == 0);
@@ -87,13 +84,14 @@ void test_shortint_pbs_2_bits_message(void) {
int decrypt_non_assign_ok = shortint_client_key_decrypt(cks, ct_out, &result_non_assign);
assert(decrypt_non_assign_ok == 0);
assert(result_non_assign == double_accumulator_2_bits_message(in_val));
assert(result_non_assign == double_lookup_table_2_bits_message(in_val));
int pbs_assign_ok = shortint_server_key_programmable_bootstrap_assign(sks, accumulator, ct_out);
int pbs_assign_ok =
shortint_server_key_programmable_bootstrap_assign(sks, lookup_table, ct_out);
assert(pbs_assign_ok == 0);
degree_to_set =
(size_t)get_max_value_of_accumulator_generator(double_accumulator_2_bits_message, 2);
(size_t)get_max_value_of_lookup_table_generator(double_lookup_table_2_bits_message, 2);
set_degree_ok = shortint_ciphertext_set_degree(ct_out, degree_to_set);
assert(set_degree_ok == 0);
@@ -102,32 +100,28 @@ void test_shortint_pbs_2_bits_message(void) {
int decrypt_assign_ok = shortint_client_key_decrypt(cks, ct_out, &result_assign);
assert(decrypt_assign_ok == 0);
assert(result_assign == double_accumulator_2_bits_message(result_non_assign));
assert(result_assign == double_lookup_table_2_bits_message(result_non_assign));
destroy_shortint_ciphertext(ct);
destroy_shortint_ciphertext(ct_out);
shortint_destroy_ciphertext(ct);
shortint_destroy_ciphertext(ct_out);
}
destroy_shortint_pbs_accumulator(accumulator);
destroy_shortint_client_key(cks);
destroy_shortint_server_key(sks);
destroy_shortint_parameters(params);
shortint_destroy_pbs_lookup_table(lookup_table);
shortint_destroy_client_key(cks);
shortint_destroy_server_key(sks);
}
void test_shortint_bivariate_pbs_2_bits_message(void) {
ShortintBivariatePBSAccumulator *accumulator = NULL;
ShortintBivariatePBSLookupTable *lookup_table = NULL;
ShortintClientKey *cks = NULL;
ShortintServerKey *sks = NULL;
ShortintParameters *params = NULL;
int get_params_ok = shortint_get_parameters(2, 2, &params);
assert(get_params_ok == 0);
ShortintPBSParameters params = SHORTINT_PARAM_MESSAGE_2_CARRY_2_KS_PBS;
int gen_keys_ok = shortint_gen_keys_with_parameters(params, &cks, &sks);
assert(gen_keys_ok == 0);
int gen_acc_ok = shortint_server_key_generate_bivariate_pbs_accumulator(
sks, product_accumulator_2_bits_encrypted_mul, &accumulator);
int gen_acc_ok = shortint_server_key_generate_bivariate_pbs_lookup_table(
sks, product_lookup_table_2_bits_encrypted_mul, &lookup_table);
assert(gen_acc_ok == 0);
for (int left_idx = 0; left_idx < 4; ++left_idx) {
@@ -145,12 +139,12 @@ void test_shortint_bivariate_pbs_2_bits_message(void) {
int encrypt_right_ok = shortint_client_key_encrypt(cks, right_val, &ct_right);
assert(encrypt_right_ok == 0);
int pbs_ok = shortint_server_key_bivariate_programmable_bootstrap(sks, accumulator, ct_left,
int pbs_ok = shortint_server_key_bivariate_programmable_bootstrap(sks, lookup_table, ct_left,
ct_right, &ct_out);
assert(pbs_ok == 0);
size_t degree_to_set = (size_t)get_max_value_of_bivariate_accumulator_generator(
product_accumulator_2_bits_encrypted_mul, 2, 2);
size_t degree_to_set = (size_t)get_max_value_of_bivariate_lookup_table_generator(
product_lookup_table_2_bits_encrypted_mul, 2, 2);
int set_degree_ok = shortint_ciphertext_set_degree(ct_right, degree_to_set);
assert(set_degree_ok == 0);
@@ -159,14 +153,14 @@ void test_shortint_bivariate_pbs_2_bits_message(void) {
int decrypt_non_assign_ok = shortint_client_key_decrypt(cks, ct_out, &result_non_assign);
assert(decrypt_non_assign_ok == 0);
assert(result_non_assign == product_accumulator_2_bits_encrypted_mul(left_val, right_val));
assert(result_non_assign == product_lookup_table_2_bits_encrypted_mul(left_val, right_val));
int pbs_assign_ok = shortint_server_key_bivariate_programmable_bootstrap_assign(
sks, accumulator, ct_out, ct_right);
sks, lookup_table, ct_out, ct_right);
assert(pbs_assign_ok == 0);
degree_to_set =
(size_t)get_max_value_of_accumulator_generator(double_accumulator_2_bits_message, 2);
(size_t)get_max_value_of_lookup_table_generator(double_lookup_table_2_bits_message, 2);
set_degree_ok = shortint_ciphertext_set_degree(ct_out, degree_to_set);
assert(set_degree_ok == 0);
@@ -176,18 +170,17 @@ void test_shortint_bivariate_pbs_2_bits_message(void) {
assert(decrypt_assign_ok == 0);
assert(result_assign ==
product_accumulator_2_bits_encrypted_mul(result_non_assign, right_val));
product_lookup_table_2_bits_encrypted_mul(result_non_assign, right_val));
destroy_shortint_ciphertext(ct_left);
destroy_shortint_ciphertext(ct_right);
destroy_shortint_ciphertext(ct_out);
shortint_destroy_ciphertext(ct_left);
shortint_destroy_ciphertext(ct_right);
shortint_destroy_ciphertext(ct_out);
}
}
destroy_shortint_bivariate_pbs_accumulator(accumulator);
destroy_shortint_client_key(cks);
destroy_shortint_server_key(sks);
destroy_shortint_parameters(params);
shortint_destroy_bivariate_pbs_lookup_table(lookup_table);
shortint_destroy_client_key(cks);
shortint_destroy_server_key(sks);
}
int main(void) {

File diff suppressed because it is too large.


@@ -10,7 +10,7 @@ language = "C"
############## Options for Wrapping the Contents of the Header #################
header = "// Copyright © 2022 ZAMA.\n// All rights reserved."
header = "// Copyright © 2023 ZAMA.\n// All rights reserved."
# trailer = "/* Text to put at the end of the generated file */"
include_guard = "TFHE_RS_C_API_H"
# pragma_once = true
@@ -107,7 +107,6 @@ allow_static_const = true
allow_constexpr = false
sort_by = "Name"
[macro_expansion]
bitflags = false


@@ -1,54 +0,0 @@
# Cryptographic parameters
## Default parameters
The TFHE cryptographic scheme relies on a variant of the [Regev cryptosystem](https://cims.nyu.edu/~regev/papers/lwesurvey.pdf) and is based on a problem so hard to solve that it is even post-quantum resistant.
In practice, you need to tune some cryptographic parameters in order to ensure both the correctness of the result and the security of the computation.
To make this simpler, **we provide two sets of parameters**, which ensure correct computations with a certain probability under the standard 128-bit security level. There is an error probability due to the probabilistic nature of the encryption, which requires adding randomness (called noise) following a Gaussian distribution. If this noise is too large, the decryption will not give a correct result. There is a trade-off between efficiency and correctness: generally, using a less efficient parameter set (in terms of computation time) leads to a smaller risk of error during homomorphic evaluation.
The only difference between the two proposed parameter sets lies in this error probability.
The default parameter set ensures an error probability of at most $$2^{-40}$$ when computing a programmable bootstrapping (i.e., any gate but the `not`). The other set is closer to the error probability claimed in the original [TFHE paper](https://eprint.iacr.org/2018/421), namely $$2^{-165}$$, but is up to date regarding security requirements.
The following table summarizes this:
| Parameter set | Error probability |
|:-------------------:|:-----------------:|
| DEFAULT_PARAMETERS | $$ 2^{-40} $$ |
| TFHE_LIB_PARAMETERS | $$ 2^{-165} $$ |
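As an illustration, here is a minimal sketch of how either provided set can be used through the Boolean API. It assumes that `gen_keys` (which picks `DEFAULT_PARAMETERS`), `ClientKey::new`, `ServerKey::new`, and the `TFHE_LIB_PARAMETERS` constant are all exposed by `tfhe::boolean::prelude`, as in the user-defined parameters example below:

```rust
use tfhe::boolean::prelude::*;

fn main() {
    // DEFAULT_PARAMETERS: error probability of at most 2^-40.
    let (client_key, server_key) = gen_keys();

    // TFHE_LIB_PARAMETERS: error probability around 2^-165, at the cost of
    // slower gate evaluation.
    let client_key_low_error = ClientKey::new(&TFHE_LIB_PARAMETERS);
    let _server_key_low_error = ServerKey::new(&client_key_low_error);

    // Usage is identical whichever set was chosen.
    let ct_1 = client_key.encrypt(true);
    let ct_2 = client_key.encrypt(false);
    let ct_and = server_key.and(&ct_1, &ct_2);
    assert_eq!(client_key.decrypt(&ct_and), false);
}
```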
## User-defined parameters
Note that if you desire, you can also create your own set of parameters.
This is an `unsafe` operation, as failing to fix the parameters properly may result in an incorrect and/or insecure computation:
```rust
use tfhe::boolean::prelude::*;
fn main() {
// WARNING: might be insecure and/or incorrect
// You can create your own set of parameters
let parameters = unsafe {
BooleanParameters::new(
LweDimension(586),
GlweDimension(2),
PolynomialSize(512),
StandardDev(0.00008976167396834998),
StandardDev(0.00000002989040792967434),
DecompositionBaseLog(8),
DecompositionLevelCount(2),
DecompositionBaseLog(2),
DecompositionLevelCount(5),
)
};
}
```


@@ -1,23 +1,22 @@
# What is TFHE-rs?
<mark style="background-color:yellow;">⭐️</mark> [<mark style="background-color:yellow;">Star the repo on Github</mark>](https://github.com/zama-ai/tfhe-rs) <mark style="background-color:yellow;">| 🗣</mark> [<mark style="background-color:yellow;">Community support forum</mark> ](https://community.zama.ai)<mark style="background-color:yellow;">| 📁</mark> [<mark style="background-color:yellow;">Contribute to the project</mark>](https://docs.zama.ai/tfhe-rs/developers/contributing)<mark style="background-color:yellow;"></mark>
📁 [Github](https://github.com/zama-ai/tfhe-rs) | 💛 [Community support](https://zama.ai/community) | 🟨 [Zama Bounty Program](https://github.com/zama-ai/bounty-program)
![](_static/docs\_home.jpg)
![](\_static/tfhe-rs-doc-home.png)
TFHE-rs is a pure Rust implementation of TFHE for boolean and small integer arithmetics over encrypted data. It includes a Rust and C API, as well as a client-side WASM API.
TFHE-rs is a pure Rust implementation of TFHE for Boolean and integer arithmetics over encrypted data. It includes a Rust and C API, as well as a client-side WASM API.
TFHE-rs is meant for developers and researchers who want full control over what they can do with TFHE, while not having to worry about the low level implementation.
TFHE-rs is meant for developers and researchers who want full control over what they can do with TFHE, while not worrying about the low level implementation.
The goal is to have a stable, simple, high-performance, and production-ready library for all the advanced features of TFHE.
### Key Cryptographic concepts
## Key cryptographic concepts
TFHE-rs library implements Zamas variant of Fully Homomorphic Encryption over the Torus (TFHE). TFHE is based on Learning With Errors (LWE), a well studied cryptographic primitive believed to be secure even against quantum computers.
The TFHE-rs library implements Zama's variant of Fully Homomorphic Encryption over the Torus (TFHE). TFHE is based on Learning With Errors (LWE), a well-studied cryptographic primitive believed to be secure even against quantum computers.
In cryptography, a raw value is called a message (also sometimes called a cleartext), an encoded message is called a plaintext and an encrypted plaintext is called a ciphertext.
In cryptography, a raw value is called a message (also sometimes called a cleartext), while an encoded message is called a plaintext and an encrypted plaintext is called a ciphertext.
The idea of homomorphic encryption is that you can compute on ciphertexts while not knowing messages encrypted in them. A scheme is said to be _fully homomorphic_, meaning any program can be evaluated with it, if at least two of the following operations are supported \($$x$$is a plaintext and $$E[x]$$ is the
corresponding ciphertext\):
The idea of homomorphic encryption is that you can compute on ciphertexts while not knowing the messages encrypted within them. A scheme is said to be _fully homomorphic_, meaning any program can be evaluated with it, if at least two of the following operations are supported ($$x$$ is a plaintext and $$E[x]$$ is the corresponding ciphertext):
* homomorphic univariate function evaluation: $$f(E[x]) = E[f(x)]$$
* homomorphic addition: $$E[x] + E[y] = E[x + y]$$
@@ -28,9 +27,8 @@ Zama's variant of TFHE is fully homomorphic and deals with fixed-precision numbe
Using FHE in a Rust program with TFHE-rs consists of the following steps (a minimal sketch is given after this list):
* generating a client key and a server key using secure parameters:
* client key encrypts/decrypts data and must be kept secret
* server key is used to perform operations on encrypted data and could be
public (also called evaluation key)
* a client key encrypts/decrypts data and must be kept secret
* a server key is used to perform operations on encrypted data and could be public (also called an evaluation key)
* encrypting plaintexts using the client key to produce ciphertexts
* operating homomorphically on ciphertexts with the server key
* decrypting the resulting ciphertexts into plaintexts using the client key
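As a minimal sketch of these steps with the Rust high-level API, assuming the Rust counterparts of the C functions used in the tests above (`ConfigBuilder::all_disabled`, `enable_default_integers`, `generate_keys`, `set_server_key`, `FheUint8` encryption and decryption) are available in the `tfhe` crate:

```rust
use tfhe::prelude::*;
use tfhe::{generate_keys, set_server_key, ConfigBuilder, FheUint8};

fn main() {
    // Enable only the default integer types.
    let config = ConfigBuilder::all_disabled()
        .enable_default_integers()
        .build();

    // Client side: the client key stays secret, the server key can be shared.
    let (client_key, server_key) = generate_keys(config);

    // Server side: install the server key for this thread.
    set_server_key(server_key);

    // Encrypt plaintexts with the client key.
    let a = FheUint8::encrypt(27u8, &client_key);
    let b = FheUint8::encrypt(100u8, &client_key);

    // Operate homomorphically: E[27] + E[100] = E[127].
    let c = a + b;

    // Decrypt the result with the client key.
    let decrypted: u8 = c.decrypt(&client_key);
    assert_eq!(decrypted, 127u8);
}
```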


@@ -3,27 +3,52 @@
* [What is TFHE-rs?](README.md)
## Getting Started
* [Installation](getting_started/installation.md)
* [Quick Start](getting_started/quick_start.md)
* [Operations](getting_started/operations.md)
* [Benchmarks](getting_started/benchmarks.md)
* [Security and Cryptography](getting_started/security_and_cryptography.md)
* [Installation](getting\_started/installation.md)
* [Quick Start](getting\_started/quick\_start.md)
* [Supported Operations](getting\_started/operations.md)
* [Benchmarks](getting\_started/benchmarks.md)
* [Security and Cryptography](getting\_started/security\_and\_cryptography.md)
## Tutorials
* [Homomorphic Parity Bit](tutorials/parity_bit.md)
* [Homomorphic Case Changing on Latin String](tutorials/latin_fhe_string.md)
## Booleans
* [Tutorial](Booleans/tutorial.md)
* [Operations](Booleans/operations.md)
* [Cryptographic Parameters](Booleans/parameters.md)
* [Serialization/Deserialization](Booleans/serialization.md)
## How To
* [Configure Rust](how_to/rust_configuration.md)
* [Serialize/Deserialize](how_to/serialization.md)
* [Compress Ciphertexts/Keys](how_to/compress.md)
* [Use Public Key Encryption](how_to/public_key.md)
* [Use Trivial Ciphertext](how_to/trivial_ciphertext.md)
* [Use Parallelized PBS](how_to/parallelized_pbs.md)
* [Use the C API](how_to/c_api.md)
* [Use the JS on WASM API](how_to/js_on_wasm_api.md)
## Shortint
* [Tutorial](shortint/tutorial.md)
* [Operations](shortint/operations.md)
* [Cryptographic Parameters](shortint/parameters.md)
* [Serialization/Deserialization](shortint/serialization.md)
## Fine-grained APIs
* [Quick Start](fine_grained_api/quick_start.md)
* [Boolean](fine_grained_api/Boolean/tutorial.md)
* [Operations](fine_grained_api/Boolean/operations.md)
* [Cryptographic Parameters](fine_grained_api/Boolean/parameters.md)
* [Serialization/Deserialization](fine_grained_api/Boolean/serialization.md)
## C API
* [Tutorial](c_api/tutorial.md)
* [Shortint](fine_grained_api/shortint/tutorial.md)
* [Operations](fine_grained_api/shortint/operations.md)
* [Cryptographic Parameters](fine_grained_api/shortint/parameters.md)
* [Serialization/Deserialization](fine_grained_api/shortint/serialization.md)
* [Integer](fine_grained_api/integer/tutorial.md)
* [Operations](fine_grained_api/integer/operations.md)
* [Cryptographic Parameters](fine_grained_api/integer/parameters.md)
* [Serialization/Deserialization](fine_grained_api/integer/serialization.md)
## Application Tutorials
* [SHA256 with *Boolean API*](application_tutorials/sha256_bool.md)
* [Dark Market with *Integer API*](application_tutorials/dark_market.md)
* [Homomorphic Regular Expressions *Integer API*](application_tutorials/regex.md)
## Crypto Core API [Advanced users]
* [Quick Start](core_crypto/presentation.md)
* [Tutorial](core_crypto/tutorial.md)
## Developers
* [Contributing](dev/contributing.md)

Binary image file added (19 KiB; not shown).


@@ -1,16 +0,0 @@
(Deleted SVG figure, 4.8 KiB: ciphertext layout diagram showing the carry, message, and noise segments of a ciphertext, laid out from MSB to LSB.)

Some files were not shown because too many files have changed in this diff.