The Concrete Optimizer is invoked on a representation of the program
in the high-level FHELinalg / FHE dialects and yields a solution with
a one-to-one mapping of operations to keys. However, the abstractions
used by these dialects do not allow for references to keys, so the
application of the solution is delayed until the pipeline reaches a
representation of the program in the lower-level TFHE dialect. Various
transformations applied by the pipeline along the way may break the
one-to-one mapping and add indirections into producer-consumer
relationships, resulting in ambiguous or partial mappings of TFHE
operations to the keys. In particular, explicit frontiers between
optimizer partitions may not be recovered.
This commit preserves explicit frontiers between optimizer partitions
as `optimizer.partition_frontier` operations and lowers these to
keyswitch operations before parametrization of TFHE operations.
This adds a new dialect called `Optimizer` with operations related to
the Concrete Optimizer. Currently, there is only one operation
`optimizer.partition_frontier` that can be inserted between a producer
and a consumer which belong to different partitions computed by the
optimizer. The purpose of this operation is to preserve explicit key
changes from the invocation of the optimizer on high-level dialects
(i.e., FHELinalg / FHE) until the IR is provided with actual
references to keys in low-level dialects (i.e., TFHE).
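To illustrate the later lowering step, a frontier operation can be
rewritten into a keyswitch with an ordinary MLIR rewrite pattern once
key references are available. In the sketch below, only the MLIR
pattern API is real; the op class names
(`Optimizer::PartitionFrontierOp`, `TFHE::KeySwitchGLWEOp`) and their
accessors stand in for the actual generated classes:

```cpp
#include "mlir/IR/PatternMatch.h"

// Hedged sketch: lower `optimizer.partition_frontier` to a TFHE
// keyswitch. Op class names and accessors are assumptions.
struct PartitionFrontierToKeySwitch
    : public mlir::OpRewritePattern<Optimizer::PartitionFrontierOp> {
  using mlir::OpRewritePattern<
      Optimizer::PartitionFrontierOp>::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(Optimizer::PartitionFrontierOp op,
                  mlir::PatternRewriter &rewriter) const override {
    // Replace the frontier marker with an actual key switch from the
    // producer partition's key to the consumer partition's key (key
    // attributes omitted here).
    rewriter.replaceOpWithNewOp<TFHE::KeySwitchGLWEOp>(
        op, op.getResult().getType(), op.getInput());
    return mlir::success();
  }
};
```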
The main debugging function is
`TypeInferenceUtils::dumpAllState(mlir::Operation* op)` which dumps
the entire state of type inference for the function containing `op`.
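For instance, a temporary hook in a pass (or an equivalent call from a
debugger) can dump the state for any operation of interest; this is a
minimal usage sketch:

```cpp
// Minimal usage sketch: dump the entire type inference state of the
// function containing `op`.
void debugTypeInference(mlir::Operation *op) {
  TypeInferenceUtils::dumpAllState(op);
}
```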
On macOS arm64, the C API backing the Python bindings does not propagate
exceptions properly to the concretelang Python module. This makes all
exceptions raised through `CompilerEngine.cpp` fall into the catch-all
case of the pybind11 exception handler.
Since there is no particular need for a public C API, we simply remove it
from the bindings and move all the content of `CompilerEngine.cpp`
directly into the `CompilerAPIModule.cpp` file.
The current pass that applies the parameters determined by the optimizer
to the IR propagates the parametrized TFHE types to operations not
directly tagged with an optimizer ID only under certain conditions. In
particular, it does not always properly propagate types into nested
regions (e.g., of `scf.for` loops).
This burdens the transformations applied between the invocation of the
optimizer and the parametrization pass with data-flow analysis and
bookkeeping in order to tag newly inserted operations with the right
optimizer IDs that ensure proper parametrization.
This commit replaces the current parametrization pass with a new pass
that propagates parametrized TFHE types up and down def-use chains
using type inference and a proper rewriter. The pass is limited to the
operations supported by `TFHEParametrizationTypeResolver::resolve`.
This adds the `TypeInferenceRewriter` class, which applies the results
of type inference obtained from the state of a `DataFlowSolver` and
from a final invocation of a type resolver to a module.
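Put together, the pass might drive the analyses and the rewriter
roughly as follows. Only the `DataFlowSolver` calls are the real MLIR
API; the constructor and method signatures of the resolver, the
analyses and `TypeInferenceRewriter` are assumptions based on this
description:

```cpp
// Hedged sketch: run forward/backward type inference, then apply the
// results to the module. Non-MLIR signatures are assumptions.
mlir::LogicalResult runParametrization(mlir::ModuleOp module) {
  TFHEParametrizationTypeResolver resolver; // holds the optimizer solution

  mlir::DataFlowSolver solver;
  // Sparse analyses typically require liveness information.
  solver.load<mlir::dataflow::DeadCodeAnalysis>();
  solver.load<ForwardTypeInferenceAnalysis>(resolver);
  solver.load<BackwardTypeInferenceAnalysis>(resolver);
  if (mlir::failed(solver.initializeAndRun(module)))
    return mlir::failure();

  // Apply the inferred types with a final resolver invocation.
  TypeInferenceRewriter rewriter(solver, resolver);
  return rewriter.rewrite(module);
}
```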
Type inference is implemented through the two classes
`ForwardTypeInferenceAnalysis` and `BackwardTypeInferenceAnalysis`,
which can be used as forward and backward dataflow analyses with the
MLIR sparse dataflow analysis framework.
Both classes rely on a type resolver: a class inheriting from
`TypeResolver` that specifies which types are to be considered
unresolved and that resolves the actual types for the values related
to an operation based on the previous state of type inference.
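A concrete resolver would thus look roughly like the following; the
two method names are assumptions derived from this description:

```cpp
// Hedged sketch of a TypeResolver subclass; method names are
// assumptions based on the description above.
class MyResolver : public TypeResolver {
public:
  // A type counts as unresolved if it still lacks concrete parameters.
  bool isUnresolvedType(mlir::Type t) const override;

  // Resolve the types of the values related to `op` from the previous
  // state of type inference.
  LocalInferenceState resolve(mlir::Operation *op,
                              const LocalInferenceState &prev) override;
};
```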
The type inference state for an operation is represented by an
instance of the class `LocalInferenceState`, which maps the values
related to an operation to instances of `InferredType` (either
indicating the inferred type as an `mlir::Type` or indicating that no
type has been inferred yet).
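In code, querying such a state might look like this minimal sketch
(the accessor names are assumptions):

```cpp
// Hedged sketch: reading an InferredType from a LocalInferenceState.
// Accessor names are assumptions based on the description above.
void inspect(const LocalInferenceState &state, mlir::Value v) {
  InferredType inferred = state.lookup(v);
  if (inferred.hasType()) {
    mlir::Type t = inferred.getType(); // the inferred mlir::Type
    (void)t;
  }
  // Otherwise, no type has been inferred for `v` yet.
}
```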
Local type inference by a type resolver can be implemented with
type constraints (instances of subclasses of `TypeConstraint`), which
can be combined into a `TypeConstraintSet`. The latter provides a
function that attempts to apply the constraints until the resulting
type inference state converges.
There are multiple predefined type constraint classes for common
constraints (e.g., requiring that two values have the same type or the
same element type). These exist both as static constraints and as
dynamic constraints. Some predefined type constraints depend on a class
that yields the pair of values to which the constraints shall be applied
(e.g., yielding two operands, or an operand and a result, etc.).
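A resolver's `resolve` implementation could then be assembled from
such constraints along these lines; the constraint class and method
names are assumptions for illustration:

```cpp
// Hedged sketch: a constraint set for an op whose operands and result
// must all have the same type. Class/method names are assumptions.
LocalInferenceState resolveSameTypeOp(mlir::Operation *op,
                                      const LocalInferenceState &prev) {
  TypeConstraintSet constraints;
  constraints.addConstraint<SameTypeConstraint<Operand0, Operand1>>();
  constraints.addConstraint<SameTypeConstraint<Operand0, Result0>>();

  // Apply the constraints repeatedly until the state converges.
  LocalInferenceState result = prev;
  constraints.converge(op, result);
  return result;
}
```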
The `TypeInference` dialect provides three operations.
The operation `TypeInference.propagate_downwards` represents a type
barrier, which is supposed to forward the type of its operand as its
result type during type inference.
The operation `TypeInference.propagate_upwards` also represents a
type barrier, but is supposed to forward the type of its result as the
type of its operand during type inference.
The operation `TypeInference.unresolved_conflict` can be used as a
marker when two different types have been inferred for a value (e.g.,
one type during forward dataflow analysis and the other during
backward dataflow analysis).
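For illustration, these markers could be created with the generated op
builders roughly as follows; the C++ op class names are assumptions for
the generated code:

```cpp
// Hedged sketch: materializing type inference markers with an
// OpBuilder. The generated op class names are assumptions.
void insertMarkers(mlir::Operation *op, mlir::Value value,
                   mlir::Type forwardType) {
  mlir::OpBuilder builder(op);

  // Barrier that forwards the operand type downwards during inference.
  builder.create<TypeInference::PropagateDownwardsOp>(
      op->getLoc(), value.getType(), value);

  // Marker for a value for which forward and backward analysis
  // inferred two different types.
  builder.create<TypeInference::UnresolvedConflictOp>(
      op->getLoc(), forwardType, value);
}
```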
- added the `--compress-input` compiler option, which forces the use of
seeded bootstrap keys and keyswitch keys
- replaced the concrete-cpu FHE implementation with tfhe-rs
Co-authored-by: Nikita Frolov <nf@mkmks.org>
This commit:
+ Adds support for a protocol which enables interoperability between
Concrete, TFHE-rs and potentially other contributors to the FHE
ecosystem.
+ Gets rid of hand-made serialization in the compiler and the
client/server libs.
+ Refactors the client/server libs to allow more pre-/post-processing
of circuit inputs/outputs.
The protocol is supported by a definition in the shape of a capnp file,
which defines different types of objects, among which:
+ The ProgramInfo object, which is a precise description of a set of
FHE circuits coming from the same compilation (i.e., their function
type information) and of the associated key set.
+ The *Key objects, which represent secret/public keys used to
encrypt/execute FHE circuits.
+ The Value object, which represents values that can be transferred
between client and server to support calls to FHE circuits.
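As a rough illustration of the exchange, a server might deserialize a
`ProgramInfo` message like this; only the capnp runtime calls are real,
while the generated header, namespace and accessors are assumptions:

```cpp
#include <capnp/message.h>
#include <capnp/serialize.h>

// Hedged sketch: the generated header and accessor names below are
// assumptions; only the capnp runtime calls are the real API.
#include "concreteprotocol.capnp.h"

void loadProgramInfo(int fd) {
  ::capnp::StreamFdMessageReader reader(fd);
  auto info = reader.getRoot<concreteprotocol::ProgramInfo>();

  // Walk the circuits described by this compilation to assemble the
  // input/output transformers.
  for (auto circuit : info.getCircuits()) {
    (void)circuit;
  }
}
```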
The hand-rolled serialization that was previously used is completely
dropped in favor of capnp throughout the codebase.
The client/server libs are refactored to introduce a modular design for
pre-/post-processing. Reading the ProgramInfo file associated with a
compilation, the client and server libs assemble a pipeline of
transformers (functions) for pre- and post-processing of values coming
into and out of a circuit. This design properly decouples the various
aspects of the processing and allows these capabilities to be safely
extended.
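The design can be pictured as a chain of value-to-value functions
composed from the ProgramInfo. The following self-contained sketch
illustrates the idea; the names are illustrative, not the actual
library API:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Illustrative sketch of the transformer-pipeline design: each pre- or
// post-processing step is a function from Value to Value, and the
// pipeline is their composition. `Value` is a stand-in type here.
struct Value { /* encrypted or clear payload */ };

using Transformer = std::function<Value(Value)>;

Value runPipeline(const std::vector<Transformer> &pipeline, Value v) {
  for (const auto &transform : pipeline)
    v = transform(std::move(v));
  return v;
}
```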
In practice, this commit includes the following:
+ Defines the specification in a concreteprotocol package.
+ Integrates the compilation of this package as a compiler dependency
via cmake.
+ Modifies the compiler to use the Encodings objects defined in the
protocol.
+ Modifies the compiler to emit ProgramInfo files as compilation
artifacts and gets rid of the bloated ClientParameters.
+ Introduces a new Common library containing the functionality shared
between the compiler and the client/server libs.
+ Introduces a functional pre-/post-processing pipeline in this common
library.
+ Modifies the client/server libs to support loading ProgramInfo
objects and calling circuits using Value messages.
+ Drops support for JIT.
+ Drops support for the C API.
+ Drops support for the Rust bindings.
Co-authored-by: Nikita Frolov <nf@mkmks.org>
Instead of having one `getSQManp` implementation per op, with a lot of
repetition, the noise calculation is now modular.
- Ops that implement the `UnaryEint`/`BinaryInt`/`BinaryEint` interfaces
share the operand noise presence check.
- For many scalar ops, no further calculation is needed. Where that is
not the case, an op can override `sqMANP` (see the sketch after this
list).
- Integer operand type lookups are abstracted into
`BinaryInt::operandIntType()`.
- Finding the largest operand value for a type is abstracted into
`BinaryInt::operandMaxConstant`.
- Noise calculation for matmul ops is simplified, and it is now general
enough to work for `matmul_eint_int`, `matmul_int_eint` and
`dot_eint_int` at once.
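For instance, under this structure an op that multiplies an encrypted
value by a constant could override the hook along these lines; the
method and helper names come from this description, their exact
signatures are assumptions, and `MulByConstOp` is a hypothetical op:

```cpp
// Hedged sketch: overriding the shared squared-MANP computation for a
// hypothetical op multiplying by a constant. Signatures are assumptions.
llvm::APInt MulByConstOp::sqMANP(llvm::APInt operandSqMANP) {
  // Multiplying by a constant scales the squared noise bound by the
  // square of the largest possible constant operand value.
  llvm::APInt maxCst = BinaryInt::operandMaxConstant(*this);
  return operandSqMANP * maxCst * maxCst;
}
```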