- The original code seemed to assume that the Barrett reduction would not
overflow if p <= 2^31; this is incorrect, though failures are rare.
- The correctness constraint actually implies a bound much smaller than 2^31.
Some primes larger than the derived threshold can still use the fast code,
provided a certain criterion is respected, corresponding to a "lucky" case
of the Barrett reduction; the new code now handles this.
- The maths are explained at https://blog.zksecurity.xyz/posts/barrett-tighter-bound/
and copiously in comments in the code.
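For reference, below is a minimal textbook-style sketch of a Barrett reduction, not the tfhe-rs implementation (the library's constants, shifts, and bounds differ, which is exactly where the tighter bound and the "lucky" cases come in). It shows where the overflow concern originates: the quotient approximation must stay close enough to the true quotient for the correction step to work.

```rust
/// Precompute m = floor(2^64 / p) once per prime (textbook variant,
/// assuming p < 2^32 so the quotient fits in a u64).
fn barrett_precompute(p: u64) -> u64 {
    assert!(p > 1 && p < (1 << 32));
    ((1u128 << 64) / p as u128) as u64
}

/// Computes x mod p for x < p^2. The approximation q = floor(x * m / 2^64)
/// underestimates floor(x / p) by at most 2, so at most two correcting
/// subtractions are needed; guaranteeing this bound is what the analysis
/// linked above is about.
fn barrett_reduce(x: u64, p: u64, m: u64) -> u64 {
    let q = ((x as u128 * m as u128) >> 64) as u64;
    let mut r = x - q * p; // q * p <= x, so no underflow
    while r >= p {
        r -= p;
    }
    r
}

fn main() {
    let p: u64 = 0x7fff_ffff; // 2^31 - 1
    let m = barrett_precompute(p);
    let x = (p - 1) * (p - 1); // largest input below p^2
    assert_eq!(barrett_reduce(x, p, m), x % p);
}
```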
Before, any issue comment or label event would trigger the verify-actor job, and the next job, prepare-benchmarks, would then check whether the rest of the workflow should run. Moving this check into verify-actor ensures that the whole workflow runs only when required.
Ciphertext sizes are measured at the HLAPI layer with several
parameter sets.
Key sizes are measured at the shortint level.
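As an illustration, an HLAPI-layer ciphertext size measurement can look like the sketch below. This is a minimal example using the public tfhe-rs API with default parameters and bincode as the serializer (an assumption for illustration); the actual benchmark iterates over several parameter sets and types.

```rust
use tfhe::prelude::*;
use tfhe::{generate_keys, ConfigBuilder, FheUint64};

fn main() {
    // Default parameters; the benchmark repeats this per parameter set.
    let config = ConfigBuilder::default().build();
    let (client_key, _server_key) = generate_keys(config);

    let ct = FheUint64::encrypt(0u64, &client_key);
    // The serialized size is used as a proxy for the ciphertext footprint.
    let bytes = bincode::serialize(&ct).unwrap();
    println!("FheUint64 ciphertext: {} bytes", bytes.len());
}
```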
This benchmark now has its own dedicated GitHub workflow that runs
at least on the 24th of each month.
Regression benchmarks are meant to be run in pull requests. They
can be launched in two ways:
* issue comment: using a command like "/bench --backend cpu"
* adding a label: `bench-perfs-cpu` or `bench-perfs-gpu`
Benchmark definitions are written in TOML and located at
ci/regression.toml.
While not exhaustive, this file can easily be modified with the help
of the embedded documentation.
"/bench" commands are parsed by a Python script located at
ci/perf_regression.py. This script produces output files that
contain cargo commands, as well as a shell script that generates custom
environment variables. The Python script and the generated files are
meant to be used only by the workflow
benchmark_perf_regression.yml.
`aggregate_one_hot_vector` was modified when the KVStore was
added, to support inputs where the information in the blocks was not packed.
To detect whether the blocks were packed, it relied on the degree value.
However, the inputs may come from LUTs that have a precise degree, which
could lead the function to believe the inputs were not packed.
To fix this, the function is split into two:
* aggregate_one_hot_vector
* aggregate_and_unpack_one_hot_vector
and the correct one is used when we know whether the inputs are packed
(see the toy sketch below).
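A toy illustration of the split, in the clear; the types and the aggregation logic are invented stand-ins, not the tfhe-rs internals. The point is that the caller now states whether the blocks are packed instead of the function guessing it from the degree.

```rust
/// Toy stand-in for an encrypted block: here, just its clear content.
#[derive(Clone, Copy)]
struct Block(u8);

/// Inputs are known to be unpacked: one one-hot bit per block.
fn aggregate_one_hot_vector(blocks: &[Block]) -> u8 {
    blocks.iter().fold(0, |acc, b| acc | b.0)
}

/// Inputs are known to be packed (here, two bits per block): unpack
/// first, then aggregate. No degree-based guessing is involved.
fn aggregate_and_unpack_one_hot_vector(blocks: &[Block]) -> u8 {
    let unpacked: Vec<Block> = blocks
        .iter()
        .flat_map(|b| [Block(b.0 & 1), Block((b.0 >> 1) & 1)])
        .collect();
    aggregate_one_hot_vector(&unpacked)
}
```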
The KVStore is a hash table with homomorphic capabilities.
The keys are meant to be clear integers; the values are meant to be
Radix/SignedRadix ciphertexts.
The ServerKey now has functions to perform operations that modify
an existing (key, value) pair using an encrypted key.
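A rough model of the concept in plain Rust, with invented names and no real encryption: because the lookup key is encrypted, the server cannot index the table directly; it visits every entry, and the selection of which one to modify happens homomorphically.

```rust
use std::collections::HashMap;

/// Toy model: clear u64 keys, values standing in for radix ciphertexts.
struct KvStore<V> {
    entries: HashMap<u64, V>,
}

impl<V> KvStore<V> {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn insert(&mut self, key: u64, value: V) {
        self.entries.insert(key, value);
    }

    /// Update driven by an "encrypted" key: every pair is visited, and
    /// the closure decides (homomorphically, in the real implementation)
    /// whether the current entry is the one to modify.
    fn update_with_encrypted_key(&mut self, mut update: impl FnMut(u64, &mut V)) {
        for (key, value) in self.entries.iter_mut() {
            update(*key, value);
        }
    }
}
```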
There are many different parameter types in tfhe-rs, related to
distinct but linked features. Thus, when some PBS parameters are
selected, compatible compression parameters must be selected from the
possible choices.
To make this easier, the MetaParameters struct is added; it stores
in one place parameters that can be used together.
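The idea, sketched with invented field names (the real MetaParameters holds more, and differently named, members):

```rust
struct PbsParameters { /* ... */ }
struct CompressionParameters { /* ... */ }

/// Groups parameter sets that are known to work together, so picking a
/// PBS parameter set automatically pins compatible compression parameters.
struct MetaParameters {
    pbs: PbsParameters,
    compression: Option<CompressionParameters>,
}
```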
Add the flip(condition: BooleanBlock, a: T, b: T) -> (T, T)
operation, which homomorphically flips/swaps the two values if the
given encrypted boolean encrypts true.
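In the clear, flip is the classic conditional-swap (cmux) pattern; the sketch below (plain Rust, not tfhe-rs code) shows why it is FHE-friendly: the same operations run whether the condition is true or false.

```rust
/// Clear-text model of flip: swaps a and b iff cond is true, using a mask
/// instead of branching on the data. Homomorphically, cond stays encrypted
/// and the mask is derived from it.
fn flip_clear(cond: bool, a: u64, b: u64) -> (u64, u64) {
    let mask = if cond { u64::MAX } else { 0 };
    let diff = (a ^ b) & mask;
    (a ^ diff, b ^ diff)
}

fn main() {
    assert_eq!(flip_clear(true, 1, 2), (2, 1));
    assert_eq!(flip_clear(false, 1, 2), (1, 2));
}
```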