The scheme currently used by the conversion patterns in
`lib/Conversion/Utils/Dialects` that reinstantiate operations with
blocks is to create a new operation with empty blocks, move the
operations over from the old blocks, and then replace any references to
block arguments. However, such in-place updates of the block argument
types leave conversion patterns for operations nested in the blocks
unable to determine the original types of values from before the
update.
This change uses proper signature conversion for block arguments, such
that the original types of block arguments with converted types are
preserved, while the new types are made available through the dialect
conversion infrastructure via the respective adaptors.
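A rough sketch of the new approach, assuming the standard MLIR
dialect-conversion API (the helper below is illustrative and not the
actual code in `lib/Conversion/Utils/Dialects`):

```cpp
#include "mlir/IR/Region.h"
#include "mlir/Transforms/DialectConversion.h"

// Move the blocks of the original operation into the freshly created one
// and convert their signatures through the type converter, instead of
// rewriting the argument types in place. The conversion driver then keeps
// the original argument types visible to patterns for the nested
// operations, while the converted values reach them through their adaptors.
static mlir::LogicalResult
moveAndConvertBody(mlir::Region &oldBody, mlir::Region &newBody,
                   const mlir::TypeConverter &converter,
                   mlir::ConversionPatternRewriter &rewriter) {
  rewriter.inlineRegionBefore(oldBody, newBody, newBody.end());
  if (mlir::failed(rewriter.convertRegionTypes(&newBody, converter)))
    return mlir::failure();
  return mlir::success();
}
```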
Some of the TFHE to Concrete conversion patterns implicitly assume
that operands are ciphertexts and thus that the converted types have a
higher number of dimensions than the original types. However, for
non-ciphertext types, the number of dimensions before and after the
conversion must be the same.
This commit adds a check to the respective conversion patterns that
triggers a simple type conversion preserving the number of dimensions
for non-ciphertext types.
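A minimal sketch of the added check; the `isCiphertext` predicate is
passed in as a stand-in for whatever the patterns actually use to
detect ciphertext element types:

```cpp
#include "llvm/ADT/STLExtras.h"
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

// Ciphertext tensors gain extra dimensions when lowered from TFHE to
// Concrete, so their full tensor type goes through the converter. Tensors
// of non-ciphertext types must keep their rank; only the element type may
// change.
static mlir::Type
convertTensorType(mlir::RankedTensorType type,
                  const mlir::TypeConverter &converter,
                  llvm::function_ref<bool(mlir::Type)> isCiphertext) {
  if (isCiphertext(type.getElementType()))
    return converter.convertType(type);
  return mlir::RankedTensorType::get(
      type.getShape(), converter.convertType(type.getElementType()));
}
```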
This adds support for the tiling of `linalg.generic` operations that
have only parallel iterators or only parallel iterators and a single
reduction dimension via the linalg tiling infrastructure (i.e.,
`mlir::linalg::tileToForallOpUsingTileSizes()` and
`mlir::linalg::tileReductionUsingForall()`).
This allows for the tiling of FHELinalg operations by first replacing
them with appropriate `linalg.generic` operations and then invoking
the tiling pass in the pipeline. In order for the tiling to take
place, tile sizes must be specified using the `tile-sizes` operation
attribute, either directly for `linalg.generic` operations or
indirectly for the FHELinalg operation, e.g.,
"FHELinalg.matmul_eint_int"(%a, %b) { "tile-sizes" = [0, 0, 7] } : ...
Tiling of operations with a reduction dimension is currently limited
to tiling of the reduction dimension, i.e., the tile sizes for the
parallel dimensions must be zero.
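As a rough sketch of how the all-parallel case can be driven from the
`tile-sizes` attribute (assuming the upstream linalg tiling entry
points named above; exact signatures vary across LLVM versions and the
helper itself is illustrative):

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "mlir/Dialect/Linalg/Transforms/Transforms.h"
#include "mlir/Interfaces/TilingInterface.h"

// Read the `tile-sizes` attribute off a linalg.generic with only parallel
// iterators and tile it to an scf.forall loop; the reduction case would go
// through tileReductionUsingForall instead.
static mlir::LogicalResult
tileParallelGeneric(mlir::RewriterBase &rewriter,
                    mlir::linalg::GenericOp genericOp) {
  auto tileSizesAttr = genericOp->getAttrOfType<mlir::ArrayAttr>("tile-sizes");
  if (!tileSizesAttr)
    return mlir::failure();

  llvm::SmallVector<mlir::OpFoldResult> tileSizes;
  for (mlir::Attribute size : tileSizesAttr)
    tileSizes.push_back(
        rewriter.getIndexAttr(mlir::cast<mlir::IntegerAttr>(size).getInt()));

  rewriter.setInsertionPoint(genericOp);
  mlir::FailureOr<mlir::linalg::ForallTilingResult> tiled =
      mlir::linalg::tileToForallOpUsingTileSizes(
          rewriter,
          mlir::cast<mlir::TilingInterface>(genericOp.getOperation()),
          tileSizes, /*mapping=*/std::nullopt);
  if (mlir::failed(tiled))
    return mlir::failure();
  rewriter.replaceOp(genericOp, tiled->tileOp->getResults());
  return mlir::success();
}
```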
The Concrete Optimizer is invoked on a representation of the program
in the high-level FHELinalg / FHE Dialects and yields a solution with
a one-to-one mapping of operations to keys. However, the abstractions
used by these dialects do not allow for references to keys and the
application of the solution is delayed until the pipeline reaches a
representation of the program in the lower-level TFHE dialect. Various
transformations applied by the pipeline along the way may break the
one-to-one mapping and add indirections into producer-consumer
relationships, resulting in ambiguous or partial mappings of TFHE
operations to the keys. In particular, explicit frontiers between
optimizer partitions may not be recovered.
This commit preserves explicit frontiers between optimizer partitions
as `optimizer.partition_frontier` operations and lowers these to
keyswitch operations before parametrization of TFHE operations.
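Purely for illustration, the lowering step could look roughly like the
following; the op classes are template parameters standing in for the
actual `optimizer.partition_frontier` and TFHE keyswitch op classes,
whose real names and builders are not taken from the codebase:

```cpp
#include "mlir/Transforms/DialectConversion.h"

// Illustrative only: FrontierOp and KeySwitchOp are assumed op classes.
template <typename FrontierOp, typename KeySwitchOp>
struct PartitionFrontierToKeyswitch
    : public mlir::OpConversionPattern<FrontierOp> {
  using mlir::OpConversionPattern<FrontierOp>::OpConversionPattern;
  using OpAdaptor =
      typename mlir::OpConversionPattern<FrontierOp>::OpAdaptor;

  mlir::LogicalResult
  matchAndRewrite(FrontierOp op, OpAdaptor adaptor,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    // An explicit frontier between two optimizer partitions becomes a
    // keyswitch, emitted before TFHE parametrization assigns actual keys.
    rewriter.replaceOpWithNewOp<KeySwitchOp>(
        op, this->getTypeConverter()->convertType(op.getType()),
        adaptor.getOperands().front());
    return mlir::success();
  }
};
```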
This commit:
+ Adds support for a protocol which enables interoperability between
Concrete, TFHE-rs and potentially other contributors to the FHE
ecosystem.
+ Gets rid of hand-made serialization in the compiler and the
client/server libs.
+ Refactors the client/server libs to allow more pre-/post-processing
of circuit inputs/outputs.
The protocol is specified in a Cap'n Proto (capnp) schema file, which
defines several types of objects, among them:
+ The ProgramInfo object, which is a precise description of a set of
FHE circuits coming from the same compilation (i.e., their function
type information) and of the associated key set.
+ The *Key objects, which represent the secret/public keys used to
encrypt/execute FHE circuits.
+ The Value object, which represents the values that can be transferred
between client and server to support calls to FHE circuits.
The hand-rolled serialization that was previously used is completely
dropped in favor of capnp in the whole codebase.
The client/server libs are refactored to introduce a modular design for
pre-/post-processing. Reading the ProgramInfo file associated with a
compilation, the client and server libs assemble a pipeline of
transformers (functions) for pre- and post-processing of the values
going into and coming out of a circuit. This design properly decouples
the various aspects of the processing and allows these capabilities to
be safely extended.
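A minimal sketch of the transformer idea (the types and names below are
illustrative, not the actual Common-library API): each processing step
is a plain function over values, and the client/server libs compose the
steps selected from the ProgramInfo into a single pipeline.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Illustrative stand-in for the capnp-backed Value message.
struct Value { /* opaque payload */ };

// A transformer is just a function from Value to Value.
using Transformer = std::function<Value(Value)>;

// Compose a sequence of transformers into one function, applied in order.
static Transformer composePipeline(std::vector<Transformer> steps) {
  return [steps = std::move(steps)](Value v) {
    for (const auto &step : steps)
      v = step(v);
    return v;
  };
}

// E.g. a client-side input pipeline could be assembled from the ProgramInfo
// as encode -> encrypt, and the output pipeline as decrypt -> decode, with
// each step unaware of the others.
```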
In practice this commit includes the following:
+ Defines the specification in a concreteprotocol package.
+ Integrates the compilation of this package as a compiler dependency
via CMake.
+ Modifies the compiler to use the Encodings objects defined in the
protocol.
+ Modifies the compiler to emit ProgramInfo files as compilation
artifacts, and gets rid of the bloated ClientParameters.
+ Introduces a new Common library containing the functionalities shared
between the compiler and the client/server libs.
+ Introduces a functional pre-/post-processing pipeline to this common
library.
+ Modifies the client/server libs to support loading ProgramInfo
objects and calling circuits using Value messages.
+ Drops support of JIT.
+ Drops support of the C API.
+ Drops support of the Rust bindings.
Co-authored-by: Nikita Frolov <nf@mkmks.org>
Instead of having one `getSQManp` implementation per op with a lot of
repetition, the noise calculation is now modular.
- Ops that implement the `UnaryEint`/`BinaryInt`/`BinaryEint` interfaces
share the operand noise presence check.
- For many scalar ops, no further calculation is needed; if that is not
the case, an op can override `sqMANP` (see the sketch after this list).
- Integer operand type lookups are abstracted into
`BinaryInt::operandIntType()`.
- Finding the largest operand value for a type is abstracted into
`BinaryInt::operandMaxConstant()`.
- The noise calculation for matmul ops is simplified and is now general
enough to work for `matmul_eint_int`, `matmul_int_eint` and
`dot_eint_int` at once.
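For illustration, an op-specific override might compute roughly the
following; the helper and method names come from the list above, but
the exact signatures are assumptions:

```cpp
#include "llvm/ADT/APInt.h"

// Illustrative sketch of the mul_eint_int noise rule: multiplying a
// ciphertext by a cleartext integer scales the noise by that integer, so
// the squared MANP is scaled by its square. The shared BinaryInt helpers
// would supply `maxInt` (the operand's constant value if known, otherwise
// the maximum value of its integer type).
static llvm::APInt mulEintIntSqMANP(const llvm::APInt &operandSqMANP,
                                    const llvm::APInt &maxInt) {
  // Widen before multiplying to avoid overflowing the APInt bit width.
  unsigned width = operandSqMANP.getBitWidth() + 2 * maxInt.getBitWidth();
  return operandSqMANP.zext(width) * maxInt.zext(width) * maxInt.zext(width);
}
```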
When using CRT encoding, some `fhe.zero` op results will be converted
to tensors (CRT-encoded eints), so they should be converted to
`tfhe.zero_tensor` operations instead of `tfhe.zero`.
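A rough sketch of the corresponding pattern logic; the zero op classes
are template parameters standing in for the actual TFHE op classes,
whose real names are not taken from the codebase:

```cpp
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/Transforms/DialectConversion.h"

// Illustrative helper: choose between a scalar and a tensor "zero" op based
// on the converted result type. ZeroOp / ZeroTensorOp are assumed classes.
template <typename ZeroOp, typename ZeroTensorOp, typename SourceOp>
static void rewriteZero(SourceOp op, const mlir::TypeConverter &converter,
                        mlir::ConversionPatternRewriter &rewriter) {
  mlir::Type newType = converter.convertType(op.getType());
  // With CRT encoding the converted fhe.zero result may be a tensor of
  // ciphertexts, which requires the tensor-producing zero operation.
  if (mlir::isa<mlir::RankedTensorType>(newType))
    rewriter.replaceOpWithNewOp<ZeroTensorOp>(op, newType);
  else
    rewriter.replaceOpWithNewOp<ZeroOp>(op, newType);
}
```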