Commit Graph

318 Commits

Author SHA1 Message Date
Quentin Bourgerie
50973a39bd refactor(compiler): Remove async offloading of BS/KS 2022-11-30 10:29:19 +01:00
Quentin Bourgerie
9e16f31b87 refactor(bconcrete): Separate bufferization and CAPI call generation 2022-11-30 10:29:19 +01:00
Quentin Bourgerie
5d89ad0f84 enhance(feedback): Add p_error and global_perror to the compiler feedback 2022-11-24 09:59:19 +01:00
Umut
722e4d2eba feat: give crt decomposition feedback 2022-11-24 09:59:19 +01:00
youben11
5661b758d7 feat(CAPI): add initial API to do round-tripping with CompilerEngine 2022-11-23 14:01:25 +01:00
youben11
824aaaeff5 refactor(rust): separate generated CAPI under ffi module 2022-11-23 14:01:25 +01:00
youben11
c0d007e396 refactor: separate python bindings wrapper from CAPI
The current CAPI of CompilerEngine isn't really a C API: it was
originally introduced so that the Python bindings could access the
CompilerEngine through a convenient interface. We now make a clear
separation between the CAPI and the Python wrappers: wrapper functions,
which can be implemented in C/C++ and are exposed to Python via
pybind11, and a CAPI (which still needs fixing, as it still contains
C++ code) that can be used as is or to build bindings for other
languages (such as Rust).
2022-11-23 14:01:25 +01:00
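To illustrate the separation described in the commit above, here is a
minimal sketch, assuming a hypothetical C entry point and module name
(`CompileStatus`, `compilerEngineCompile`, and `_compiler` are
illustrative, not the project's actual identifiers):

  // Sketch of the split: a C-compatible entry point that reports errors
  // via a struct, plus a thin C++ wrapper exposed to Python via pybind11.
  #include <pybind11/pybind11.h>
  #include <stdexcept>
  #include <string>

  extern "C" {
  // CAPI side: no C++ types cross the boundary; errors come back as data.
  struct CompileStatus {
    bool ok;
    const char *error; // null on success
  };
  CompileStatus compilerEngineCompile(const char *source);
  }

  // Wrapper side: free to use C++, turns C-style errors into exceptions.
  void compileOrThrow(const std::string &source) {
    CompileStatus status = compilerEngineCompile(source.c_str());
    if (!status.ok)
      throw std::runtime_error(status.error);
  }

  PYBIND11_MODULE(_compiler, m) {
    m.def("compile", &compileOrThrow, "Compile an MLIR source string");
  }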
Andi Drebes
46366eec41 feat(compiler): Add fallback implementations for batched keyswitch and bootstrap
Add default implementations for batched keyswitch and bootstrap, which
simply call the scalar versions of these operations in a loop.
2022-11-18 12:06:07 +01:00
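A minimal sketch of such a fallback, as described in the commit above
(names, parameters, and the flat ciphertext layout are assumptions for
illustration, not the actual runtime signatures):

  #include <cstddef>
  #include <cstdint>

  // Assumed scalar primitive operating on a single LWE ciphertext.
  void keyswitch_lwe(uint64_t *out, const uint64_t *in, size_t lweSize);

  // Default batched implementation: apply the scalar version element-wise.
  void batched_keyswitch_lwe(uint64_t *out, const uint64_t *in,
                             size_t numCiphertexts, size_t lweSize) {
    for (size_t i = 0; i < numCiphertexts; ++i)
      keyswitch_lwe(out + i * lweSize, in + i * lweSize, lweSize);
  }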
Andi Drebes
d46db1bf69 feat(compiler): BConcrete: Add batched keyswitch and bootstrap 2022-11-18 12:06:07 +01:00
Andi Drebes
c9bb6541e9 feat(compiler): Add option --batch-concrete-ops and action dump-concrete-with-loops
The new option `--batch-concrete-ops` invokes the batching pass after
lowering to the Concrete dialect and after lowering linalg operations
that contain operations from the Concrete dialect to loops.

The new action `dump-concrete-with-loops` dumps the IR right before
batching.
2022-11-18 12:06:07 +01:00
Andi Drebes
75b70054b2 feat(compiler): Make Concrete.bootstrap_lwe and Concrete.keyswitch_lwe batchable 2022-11-18 12:06:07 +01:00
Andi Drebes
c367a4b6fd feat(compiler): Add batching pass
This adds a new pass that is able to hoist operations implementing the
`BatchableOpInterface` out of a loop nest that applies the operation
to the elements of a tensor indexed by the loop induction variables.

Example:

  scf.for %i = %c0 to %cN step %c1 {
    scf.for %j = %c0 to %cM step %c1 {
      scf.for %k = %c0 to %cK step %c1 {
        %s = tensor.extract %T[%i, %j, %k]
        %res = batchable_op %s
        ...
      }
    }
  }

is replaced with:

  %batchedSlice = tensor.extract_slice
       %T[%c0, %c0, %c0] [%cN, %cM, %cK] [%c1, %c1, %c1]
  %flatSlice = tensor.collapse_shape %batchedSlice
  %resTFlat = batchedOp %flatSlice
  %resT = tensor.expand_shape %resTFlat

  scf.for %i = %c0 to %cN step %c1 {
    scf.for %j = %c0 to %cM step %c1 {
      scf.for %k = %c0 to %cK step %c1 {
        %res = tensor.extract %resT[%i, %j, %k]
        ...
      }
    }
  }

Every index of the tensor holding the input values may be a
quasi-affine expression in a single loop induction variable, as long as
the difference between the expression's results for any two consecutive
values of the referenced induction variable is constant. For example,
an index `2 * %i + 3` qualifies, since it changes by the constant 2
from one value of `%i` to the next, while `%i * %i` does not.
2022-11-18 12:06:07 +01:00
Andi Drebes
3ce7c96f3f feat(compiler): Add operation interface for batchable operations
This adds a new operation interface that allows an operation to specify
that a batched version of the operation exists, which applies it to the
elements of a flat tensor in parallel.
2022-11-18 12:06:07 +01:00
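As a rough sketch of what such an interface might expose (the actual
`BatchableOpInterface` is an MLIR op interface whose methods are not
spelled out in the commit message; these names are hypothetical):

  #include "mlir/IR/Builders.h"
  #include "mlir/IR/Value.h"

  // Hypothetical shape of an interface for batchable operations: the
  // batching pass needs to know which operand varies across loop
  // iterations and how to build the batched variant on a flat tensor.
  struct BatchableOpSketch {
    // Operand that, gathered across iterations, forms the flat input tensor.
    virtual mlir::OpOperand &getBatchableOperand() = 0;
    // Emit the batched version applied to a flat tensor of inputs.
    virtual mlir::Value createBatchedOperation(mlir::OpBuilder &builder,
                                               mlir::Value flatInputs) = 0;
    virtual ~BatchableOpSketch() = default;
  };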
youben11
5b46a74b7f feat(CAPI): API to get the width of an eint/esint 2022-11-17 09:54:51 +01:00
youben11
5576d1d176 feat(CAPI): expose encrypted signed int in CAPI 2022-11-17 09:54:51 +01:00
youben11
0a57af37af feat(rust): add API for FHEDialect's op creation 2022-11-17 09:54:51 +01:00
Samuel Tap
8a231974f7 chore(ffi): update to concrete-ffi without fftw 2022-11-16 11:44:46 +01:00
Mayeul@Zama
f26abad001 chore(compiler): fix return-type-c-linkage 2022-11-14 14:04:46 +01:00
Mayeul@Zama
28594db536 chore(compiler): fix c++11-narrowing 2022-11-14 14:04:46 +01:00
Mayeul@Zama
2a6d1958fd chore(compiler): remove unused loopParallelize 2022-11-14 14:04:46 +01:00
Mayeul@Zama
dd62896bc8 chore(compiler): fix pessimizing-move 2022-11-14 14:04:46 +01:00
youben11
eabd8b959d fix(CAPI): remove Cpp code from CAPI
This required a CAPI that, when asked for types, returns a structure
that can report whether an error occurred during type creation. Without
this, a failure at that stage in the compiler would lead to a segfault
(in the Python bindings, for example), and we want to be able to handle
this scenario gracefully.
2022-11-09 12:53:25 +01:00
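A minimal sketch of the pattern described above (the struct and
function names are illustrative, not the actual CAPI):

  extern "C" {
  // Result of a type-creation call: either an opaque handle or an error.
  typedef struct {
    void *type;        // opaque handle to the created type; null on failure
    const char *error; // human-readable message; null on success
  } TypeOrError;

  // Callers check `error` instead of risking a segfault on a bad handle.
  TypeOrError fheEncryptedIntegerTypeGet(unsigned width);
  }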
Antoniu Pop
94d68515d4 fix(runtime): remove logs for dataflow task execution. 2022-11-07 09:22:21 +00:00
rudy
018684fe2a chore: activate -Wall -Werror 2022-11-04 10:44:46 +01:00
rudy
0493030033 fix: typo DEFAULT_STRATEGY_V0 2022-11-02 09:33:37 +01:00
Quentin Bourgerie
d934553950 feat(compiler/gpu): Integrate gpu crypto optimization 2022-10-20 10:36:32 +01:00
youben11
48dee4a71b fix: create new op in generic type conversion
Converting the types of the original op seems to affect other
operations that use the result type, which would then have to check the
different cases (whether the type has been converted yet or not).
Creating a new op instead doesn't have this issue.
2022-10-20 10:36:32 +01:00
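A sketch of this approach using the MLIR rewriter API (a generic
pattern along these lines, assumed for illustration; not the project's
actual pass):

  #include "mlir/Transforms/DialectConversion.h"

  using namespace mlir;

  struct GenericTypeConversionSketch : public ConversionPattern {
    GenericTypeConversionSketch(TypeConverter &converter, MLIRContext *ctx)
        : ConversionPattern(converter, MatchAnyOpTypeTag(),
                            /*benefit=*/1, ctx) {}

    LogicalResult
    matchAndRewrite(Operation *op, ArrayRef<Value> operands,
                    ConversionPatternRewriter &rewriter) const override {
      SmallVector<Type> newResultTypes;
      if (failed(getTypeConverter()->convertTypes(op->getResultTypes(),
                                                  newResultTypes)))
        return failure();
      // Build a fresh op with converted result types instead of mutating
      // the original op, then route all uses to the new results.
      OperationState state(op->getLoc(), op->getName().getStringRef(),
                           operands, newResultTypes, op->getAttrs());
      Operation *newOp = rewriter.create(state);
      rewriter.replaceOp(op, newOp->getResults());
      return success();
    }
  };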
youben11
ef778ac75b refactor: replace some operands by attrs in bs/ks 2022-10-20 10:36:32 +01:00
youben11
7cd45d1514 test: add GPU end2end tests 2022-10-20 10:36:32 +01:00
youben11
d615ff47f2 feat: support GPU keyswitching 2022-10-20 10:36:32 +01:00
youben11
a7a65025ff refactor: redesign GPU support
- unify CPU and GPU bootstrapping operations
- remove operations to build GLWE from table: this is now done in
  wrapper functions
- remove GPU memory management operations: done in wrappers now, but we
  will have to think about how to deal with it later in MLIR
2022-10-20 10:36:32 +01:00
youben11
d169a27fc0 feat: support GPU (bootstrapping) 2022-10-20 10:36:32 +01:00
Umut
5f845bf9ff feat: add axes argument to transpose 2022-10-17 10:46:03 +03:00
Quentin Bourgerie
0bc2e5830b fix(optimization): Fix manp computation for addition with plaintext 2022-10-11 17:09:32 +02:00
Quentin Bourgerie
cf9a36c197 feat(compiler/runtime): Support the PBS for CRT encoding (enables apply_lookup_table up to 16 bits) 2022-10-07 09:16:19 +02:00
Andi Drebes
a7051c2c9c enhance(client/server): Add support for scalar results
This patch adds support for scalar results to the client/server
protocol and tests. In addition to `TensorData`, a new type
`ScalarData` is added. Previous representations of scalar values using
one-dimensional `TensorData` instances have been replaced with proper
instantiations of `ScalarData`.

The generic use of `TensorData` for scalar and tensor values has been
replaced with uses of a new variant `ScalarOrTensorData`, which can
either hold an instance of `TensorData` or `ScalarData`.
2022-10-04 14:40:40 +02:00
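The variant described above could be sketched as follows (the real
`ScalarData` and `TensorData` classes carry more than shown here; this
is only the shape of the idea):

  #include <cstdint>
  #include <variant>
  #include <vector>

  struct ScalarData { uint64_t value; };
  struct TensorData {
    std::vector<uint64_t> values;
    std::vector<int64_t> dimensions;
  };

  // Either a scalar or a tensor result, replacing the previous trick of
  // encoding scalars as one-dimensional tensors.
  using ScalarOrTensorData = std::variant<ScalarData, TensorData>;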
Andi Drebes
710dd7a88c enhance(compiler): Add function returning LambdaArgument type as string
This adds a new function `getLambdaArgumentTypeAsString(const
LambdaArgument&)` returning the name of a lambda argument type as a
string, e.g., `"uint8_t"` for an `IntLambdaArgument<uint8_t>` or
`"tensor<uint8_t>"` for a
`TensorLambdaArgument<IntLambdaArgument<uint8_t>>`.

Note that, due to the static inheritance scheme for Lambda Arguments
and explicit instantiation, this is only implemented for the common
backing integer types `uint8_t`, `int8_t`, `uint16_t`, `int16_t`,
`uint32_t`, `int32_t`, `uint64_t`, and `int64_t`.
2022-10-04 14:40:40 +02:00
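Hypothetical usage of the function added above (the signature comes
from the commit message; construction of the argument is illustrative):

  // Assuming an IntLambdaArgument<uint8_t> constructed elsewhere:
  IntLambdaArgument<uint8_t> arg(42);
  std::string typeName = getLambdaArgumentTypeAsString(arg); // "uint8_t"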
Andi Drebes
7a3cf64171 enhance(compiler): Add constructor for TensorLambdaArgument moving vector values 2022-10-04 14:40:40 +02:00
Andi Drebes
9e24eccc9d enhance(compiler): Add operators == and != for Integer / Tensor Lambda Arguments 2022-10-04 14:40:40 +02:00
Andi Drebes
8255d3e190 fix(compiler): Add support for clear result tensors with element width != 64 bits
Previously, returning tensors with elements whose width is not equal to
64 bits resulted in garbled data. This commit extends the `TensorData`
class used to represent tensors in JIT compilation with support for
signed and unsigned elements of 8, 16, 32, and 64 bits, so that all
cleartext tensors with elements of up to 64 bits are represented
accurately.
2022-10-04 14:40:40 +02:00
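One way to sketch width-aware storage (purely illustrative; the actual
`TensorData` layout is not shown in the commit message):

  #include <cstdint>
  #include <variant>
  #include <vector>

  // One alternative per supported element width and signedness, so values
  // are never forced through a single 64-bit representation.
  using ElementStorage =
      std::variant<std::vector<uint8_t>, std::vector<int8_t>,
                   std::vector<uint16_t>, std::vector<int16_t>,
                   std::vector<uint32_t>, std::vector<int32_t>,
                   std::vector<uint64_t>, std::vector<int64_t>>;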
Mayeul@Zama
f1833f06f2 feat(compiler): clarify input encryption noise variance 2022-10-03 14:00:16 +02:00
rudy
cb2c9ef6bf feat: accept no evaluation keys 2022-09-26 14:43:25 +02:00
Antoniu Pop
4fbb05e18c feat(compiler): add an asynchronous interface for bootstrap and keyswitch using std::promise/future. 2022-09-19 13:02:20 +01:00
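For the asynchronous interface described in the commit above, a minimal
sketch using std::promise/std::future might look like this (names and
the threading strategy are assumptions, not the actual runtime
plumbing):

  #include <cstdint>
  #include <future>
  #include <thread>
  #include <vector>

  // Assumed synchronous primitive.
  std::vector<uint64_t> keyswitch_lwe(const std::vector<uint64_t> &in);

  // Run the keyswitch in the background; callers block on the future only
  // when the result is actually needed.
  std::future<std::vector<uint64_t>>
  keyswitch_lwe_async(std::vector<uint64_t> in) {
    std::promise<std::vector<uint64_t>> promise;
    auto result = promise.get_future();
    std::thread([p = std::move(promise), in = std::move(in)]() mutable {
      p.set_value(keyswitch_lwe(in));
    }).detach();
    return result;
  }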
Antoniu Pop
2cf80e76eb feat(compiler): move the lowering of dataflow tasks to RT dialect before bufferization. 2022-09-15 11:55:37 +01:00
Quentin Bourgerie
dbfde466bc feat(python): Add compilation feedback to the python bindings 2022-09-14 10:03:25 +02:00
Quentin Bourgerie
f4673e8276 feat(compiler): First draft of compilation feedback 2022-09-14 10:03:25 +02:00
rudy
48bf6e2696 feat(optimizer): report or warn using global p-error 2022-09-12 17:22:38 +02:00
Umut
41c9f86803 feat: create encrypted signed integer type 2022-09-09 17:38:21 +03:00
youben11
661d33c2b6 feat: keep std bsk and conv to fourier when needed 2022-09-06 07:18:34 +01:00
Quentin Bourgerie
b0743a9924 Revert "fix(optimizer): Temporary fallback to the v0 strategy while the dag one is not fixed"
This reverts commit 98a799f807.
2022-09-01 14:13:38 +02:00