Commit Graph

120 Commits

Author SHA1 Message Date
rudy
4244ae3b41 fix(compiler): explicit global_p_error disables high error warning 2023-09-07 09:20:37 +02:00
youben11
41e1fdf1f8 refactor(frontend): change location format 2023-09-04 09:22:28 +01:00
youben11
530bacb2e3 refactor(compiler): clean statistic passes 2023-09-04 09:22:28 +01:00
youben11
4e8b9a199c feat(compiler): allow forcing encoding from python 2023-09-01 10:29:08 +01:00
youben11
cba3847c92 feat(compiler): setting v0 parameters from py bindings 2023-09-01 10:29:08 +01:00
youben11
9e8c44ed00 feat(compiler/python): expose memory usage in bindings 2023-08-29 15:47:25 +01:00
youben11
54089186ae refactor(compiler): reorganize passes and add memory usage pass 2023-08-29 15:47:25 +01:00
youben11
d88b2c87ac feat(compiler): compute memory usage per location 2023-08-29 15:47:25 +01:00
Ayoub Benaissa
c4686c3631 fix(compiler): lower fhe.zero to either scalar or tensor variant based on encoding
When using CRT encoding, some fhe.zero op results will be converted to tensors (CRT-encoded eints), so they should be converted to tfhe.zero_tensor operations instead of tfhe.zero.
2023-08-11 18:23:29 +01:00
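A minimal sketch of the intended lowering; the fhe.zero, tfhe.zero and tfhe.zero_tensor op names come from the commit message, while the precision, CRT width and GLWE type parameters below are illustrative placeholders:

  // Without CRT, a scalar FHE.zero lowers to the scalar TFHE variant.
  %z0 = "FHE.zero"() : () -> !FHE.eint<5>
  // lowers to:
  %t0 = "TFHE.zero"() : () -> !TFHE.glwe<...>

  // With CRT encoding, the eint is decomposed over several residues, so the
  // result is a tensor of ciphertexts and must use the tensor variant.
  %z1 = "FHE.zero"() : () -> !FHE.eint<5>
  // lowers to (illustrative CRT width of 5):
  %t1 = "TFHE.zero_tensor"() : () -> tensor<5x!TFHE.glwe<...>>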
Bourgerie Quentin
245836b8ba fix(compiler): Fix conv2d with bias equals to zero in multi parameters
The zero bias was folded and led to empty loops, i.e. loops containing only copies, which made the TFHE parametrization fail.
2023-08-08 11:01:29 +02:00
Umut
9a5b08938e feat(compiler): support multi precision encrypted multiplications 2023-08-04 13:17:14 +02:00
Bourgerie Quentin
bd4540102c fix(compiler/multi-parameters): Fixing encrypted dot and encrypted matmul with multi-parameters 2023-08-01 19:03:57 +02:00
Umut
ade83d5335 feat(compiler): add more detailed statistics 2023-08-01 18:40:08 +02:00
youben11
f6599a91c6 refactor(compiler): add func to populate RTOps type conversion 2023-07-31 16:57:53 +01:00
Andi Drebes
66d9c6eee4 fix(compiler): Do not print type mnemonic in custom printers for RT types
Printing the mnemonic in the custom printers for RT types leads to a
repetition, since the mnemonic is already printed by the
infrastructure invoking the custom printer (e.g., instead of
RT.future<...>, the printed type is `RT.futurefuture<...>`).

This commit removes the mnemonics from the custom printers and thus
causes them to emit the correct textual representation of types.
2023-07-31 14:09:33 +02:00
youben11
761bc5f62d feat(compiler/python): support enabling verbose/debug 2023-07-26 18:22:20 +01:00
Umut
79b38a72ec feat(compiler): provide circuit statistics 2023-07-26 11:08:15 +02:00
Umut
d1b004c87d fix(compiler): move simulation before batching 2023-07-26 11:08:15 +02:00
rudy
885d25424d fix(compiler): revert Workaround fallback to Strategy::V0 when solving with Strategy::DAG_MONO
This reverts commit caee0bae66.
2023-07-24 18:07:59 +02:00
youben11
545bda979d fix(compiler): use dyn sized tensors in CAPI func definitions 2023-07-21 16:53:32 +01:00
youben11
022b1879a1 feat(compiler): support compiling in-memory module 2023-07-21 14:14:55 +01:00
Ayoub Benaissa
67ca4e10b9 fix(compiler): add conversion of tensor.from_elements in simulation 2023-07-21 09:23:31 +01:00
Antoniu Pop
5082cea110 fix(compiler): disable dataflow parallelization when the optimiser strategy is dag-multi. Currently the two don't work well together because dataflow task outlining obfuscates the code early on in the compilation pipeline. 2023-07-20 14:39:28 +01:00
youben11
2f8d877de8 docs(compiler): calling from other lang (rust) 2023-07-13 14:33:54 +02:00
Umut
6fdcb78158 fix(compiler-bindings): use manual function pointer type to avoid compilation error on macOS 2023-06-27 18:36:22 +02:00
youben11
648e868ffe feat(compiler): support parallelization during simulation 2023-06-27 14:21:42 +01:00
youben11
27e1835f23 feat(compiler/python): expose simulation to python-bindings 2023-06-27 14:21:42 +01:00
youben11
eb116058e0 feat(compiler): support invoke on simulated circuits 2023-06-27 14:21:42 +01:00
youben11
32ad46f7c5 feat(compiler): disable runtimeCtx pass in simulation 2023-06-27 14:21:42 +01:00
youben11
5e848a6971 feat(compiler/clientlib): support simulation in enc-args 2023-06-27 14:21:42 +01:00
youben11
ad13602bf3 refactor(compiler/clientlib): add ValueExporter Interface 2023-06-27 14:21:42 +01:00
youben11
7b594c5ecd feat(compiler): add simulation runtime 2023-06-27 14:21:42 +01:00
youben11
b8e462c1cc feat(compiler): add option to compile in simulation mode 2023-06-27 14:21:42 +01:00
youben11
e58b46d86d feat(compiler): add a pass to simulate TFHE ops
Lowering is done to CAPI calls that implement the simulation, as well as
to standard MLIR ops.
2023-06-27 14:21:42 +01:00
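A rough sketch of the rewrite such a pass performs; the ciphertext op, the use of i64 as the simulated ciphertext type, and the @sim_keyswitch_lwe_u64 symbol are hypothetical stand-ins chosen for illustration, not confirmed names from the compiler:

  // Before: a TFHE-level ciphertext op (illustrative op and types).
  %r = "TFHE.keyswitch_glwe"(%ct) : (!TFHE.glwe<...>) -> !TFHE.glwe<...>

  // After: in simulation mode the ciphertext is modelled by a plain integer
  // and the op becomes a call into the simulation C API (hypothetical symbol):
  %r = func.call @sim_keyswitch_lwe_u64(%v) : (i64) -> i64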
Umut
c98b8f0241 fix(frontend-python): manually abort on ctrl+c 2023-06-27 15:11:21 +02:00
Umut
45e69798aa feat(frontend-python): support ctrl+c during compilation and key generation 2023-06-26 12:37:52 +02:00
Bourgerie Quentin
5147ac8418 feat(compiler): Add canonicalization of FHE/FHELinalg to_signed to_unsigned ops 2023-06-23 14:34:23 +02:00
Ayoub Benaissa
3bdebae1f6 fix(compiler): fix plaintext tensor shape when using crt
Client parameter generation was removing the last dimension of tensors when using CRT, including plaintext ones, while this should only be done for encrypted tensors.
2023-06-21 12:14:11 +01:00
Bourgerie Quentin
b6da228dd4 fix(compiler): Add extra conversion keyswitch just after bootstrap
Relies on strong assumptions of the optimization (see comment)

close https://github.com/zama-ai/concrete-internal/issues/352
2023-06-21 08:53:36 +02:00
Antoniu Pop
9363c40753 fix(compiler): fix key serialization for distributed computing in DFR.
Key serialization for transfers between nodes in clusters has been broken
since the changes that separated keys from key parameters and
introduced support for multi-key (ref
cacffadbd2).

This commit restores the functionality for distributing keys to
non-shared-memory nodes.
2023-06-19 09:54:21 +01:00
Mayeul@Zama
97b13e871c feat(optimizer): introduce fft precision 2023-06-15 10:48:07 +02:00
Mayeul@Zama
5659195dbc feat(optimizer): accept any ciphertext_modulus_log 2023-06-15 10:48:07 +02:00
Antoniu Pop
81eaaa7560 feat(compiler): add multi-GPU scheduler for batched ops. The scheduler splits op batches into chunks to fit GPU memory and balances load across GPUs. 2023-06-12 22:51:30 +01:00
Andi Drebes
38e14446d6 feat(compiler): Batching: Add pattern folding operations on tensors of constants
This adds a new pattern to the batching pass that folds operations on
tensors of constants into new tensors of constants. E.g.,

  %cst = arith.constant dense<...> : tensor<Nxi9>
  %res = scf.for %i = %c0 to %cN step %c1 {
    %cst_i9 = tensor.extract %cst[%i] : tensor<Nxi9>
    %cst_i64 = arith.extui %cst_i9 : i9 to i64
    ...
  }

becomes:

  %cst = arith.constant dense<...> : tensor<Nxi64>
  %res = scf.for %i = %c0 to %cN step %c1 {
    %cst_i64 = tensor.extract %cst[%i] : tensor<Nxi64>
    ...
  }

The pattern only works for static loops, for indexes that are quasi-affine
expressions of a single loop induction variable with a constant step
size across iterations, and for foldable operations that have a single
result.
2023-06-12 22:51:30 +01:00
Andi Drebes
3516ae7682 feat(compiler): Add option for maximum batch size to batching pass
This adds a new compilation option `maxBatchSize` and a command line
option `--max-batch-size` to `concretecompiler`.
2023-06-12 22:51:30 +01:00
Andi Drebes
38a5b5e928 feat(compiler): Add support for batching with multiple batchable operands
The current batching pass only supports batching of operations that
have a single batchable operand, that can only be batched in one way
and that operate on scalar values. However, this does not allow for
efficient batching of all arithmetic operations in TFHE, since these
are often applied to pairs of scalar values from tensors, to tensors
and scalars or to tensors that can be grouped in higher-order tensors.

This commit introduces three new features for batching:

  1. Support of multiple batchable operands

     The operation interface for batching now allows for the
     specification of multiple batchable operands. This set can be
     composed of any subset of an operation's operands, i.e., it is
     not limited to sets of operands with contiguous operand indexes.

  2. Support for multiple batching variants

     To account for multiple kinds of batching, the batching operation
     interface `BatchableOpInterface` now supports variants. The
     batching pass attempts to batch an operation by trying the
     batching variants expressed via the interface in order until it
     succeeds.

  3. Support for batching of tensor values

     Some operations that could be batched already operate on tensor
     values. The new batching pass detects those patterns and groups
     the batchable tensors' values into higher-dimensional tensors.
2023-06-12 22:51:30 +01:00
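A hedged sketch of what multiple batchable operands and tensor-valued batching make possible; the TFHE op names below are hypothetical placeholders and the shapes are illustrative:

  // Before batching: one ciphertext-by-plaintext multiplication per
  // iteration, with both operands extracted from tensors.
  %res = scf.for %i = %c0 to %cN step %c1 iter_args(%acc = %init)
           -> (tensor<Nx!TFHE.glwe<...>>) {
    %ct   = tensor.extract %cts[%i] : tensor<Nx!TFHE.glwe<...>>
    %pt   = tensor.extract %pts[%i] : tensor<Nxi64>
    %r    = "TFHE.mul_glwe_int"(%ct, %pt) : (!TFHE.glwe<...>, i64) -> !TFHE.glwe<...>
    %acc2 = tensor.insert %r into %acc[%i] : tensor<Nx!TFHE.glwe<...>>
    scf.yield %acc2 : tensor<Nx!TFHE.glwe<...>>
  }

  // After batching: both operands are declared batchable, so the loop
  // collapses into a single batched op over the whole tensors.
  %res = "TFHE.batched_mul_glwe_int"(%cts, %pts)
           : (tensor<Nx!TFHE.glwe<...>>, tensor<Nxi64>) -> tensor<Nx!TFHE.glwe<...>>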
Antoniu Pop
20394368bf feat(compiler): add lowering of batched mapped bootstrap operations to wrappers and SDFG, with support in the runtime. 2023-06-12 22:51:30 +01:00
Andi Drebes
5a6ed84076 fix(compiler): Batching: Do not attempt to extract static bounds for dynamic loops
The batching pass erroneously assumed that any expression solely
composed of an induction variable has static bounds. This commit adds
a test for the lower bound, upper bound and step, checking that they
are indeed static before attempting to determine their static values.
2023-06-12 22:51:30 +01:00
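An illustrative contrast between the two cases; loop bodies and bounds are placeholders in the style of the example above:

  // Static loop: lower bound, upper bound and step are constants, so their
  // values can be extracted.
  %r0 = scf.for %i = %c0 to %c128 step %c1 {
    ...
  }

  // Dynamic loop: %n is a function argument rather than a constant, so the
  // pass now detects this and leaves the loop alone instead of assuming a
  // static upper bound.
  %r1 = scf.for %i = %c0 to %n step %c1 {
    ...
  }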
Antoniu Pop
3a679a6f0a feat(compiler): add mapped version of batched bootstrap wrappers for CPU and GPU. 2023-06-12 22:51:30 +01:00
Antoniu Pop
3f9f228a23 feat(compiler): add runtime support for batched operations in SDFG/GPU. 2023-06-12 22:51:30 +01:00