This adds support for tiling `linalg.generic` operations that have
either only parallel iterators, or parallel iterators and a single
reduction dimension via the linalg tiling infrastructure (i.e.,
`mlir::linalg::tileToForallOpUsingTileSizes()` and
`mlir::linalg::tileReductionUsingForall()`).
This allows for the tiling of FHELinalg operations by first replacing
them with appropriate `linalg.generic` operations and then invoking
the tiling pass in the pipeline. In order for the tiling to take
place, tile sizes must be specified using the `tile-sizes` operation
attribute, either directly for `linalg.generic` operations or
indirectly for the FHELinalg operation, e.g.,
"FHELinalg.matmul_eint_int"(%a, %b) { "tile-sizes" = [0, 0, 7] } : ...
Tiling of operations with a reduction dimension is currently limited
to tiling of the reduction dimension, i.e., the tile sizes for the
parallel dimensions must be zero.
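For illustration, the direct annotation of a `linalg.generic` could
look as follows (a sketch; shapes, element types and the body are made
up, and only the reduction dimension is tiled, as required above):

  %res = linalg.generic {
      indexing_maps = [affine_map<(d0, d1, d2) -> (d0, d2)>,
                       affine_map<(d0, d1, d2) -> (d2, d1)>,
                       affine_map<(d0, d1, d2) -> (d0, d1)>],
      iterator_types = ["parallel", "parallel", "reduction"]
    } ins(%a, %b : tensor<4x7xi64>, tensor<7x5xi64>)
      outs(%acc : tensor<4x5xi64>)
      attrs = { "tile-sizes" = [0, 0, 7] } {
    ^bb0(%lhs: i64, %rhs: i64, %out: i64):
      %p = arith.muli %lhs, %rhs : i64
      %s = arith.addi %out, %p : i64
      linalg.yield %s : i64
  } -> tensor<4x5xi64>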
This adds an invocation of the `SCFForallToSCFFor` pass to the
compilation pipeline before lowering to the LLVM dialect as a
sequential fallback path complementing future passes that exploit the
parallelism of `scf.forall` further up in the pipeline.
This adds a new pass that converts `scf.forall` loops into nested
`scf.for` operations. The conversion carries parallel output tensors
from the original loop as dependencies through the loop nest and
replaces any occurrence of `tensor.parallel_insert_slice` operations
in the `scf.forall.in_parallel` terminator with equivalent
`tensor.insert_slice` operations.
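A minimal sketch of the conversion (shapes illustrative, tile
computation elided):

  %c0 = arith.constant 0 : index
  %c1 = arith.constant 1 : index
  %c4 = arith.constant 4 : index

  // Before: parallel loop writing disjoint tiles of the shared output.
  %r = scf.forall (%i) in (%c4) shared_outs(%o = %init)
      -> (tensor<16xi64>) {
    %off = affine.apply affine_map<(d0) -> (d0 * 4)>(%i)
    // ... compute the tile %t : tensor<4xi64> ...
    scf.forall.in_parallel {
      tensor.parallel_insert_slice %t into %o[%off] [4] [1]
          : tensor<4xi64> into tensor<16xi64>
    }
  }

  // After: sequential loop carrying the output tensor as a dependency.
  %r2 = scf.for %i = %c0 to %c4 step %c1 iter_args(%o = %init)
      -> (tensor<16xi64>) {
    %off = affine.apply affine_map<(d0) -> (d0 * 4)>(%i)
    // ... compute the tile %t : tensor<4xi64> ...
    %u = tensor.insert_slice %t into %o[%off] [4] [1]
        : tensor<4xi64> into tensor<16xi64>
    scf.yield %u : tensor<16xi64>
  }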
This makes the trip counts of operations in the TFHE statistics pass,
as well as the per-location memory usage statistics in the memory
usage statistics pass, optional. These values are left unset if the
trip count cannot be determined statically.
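For example, a loop with a dynamic bound has no static trip count
(sketch):

  // The trip count depends on the runtime value %n, so the
  // corresponding statistics are left unset.
  scf.for %i = %c0 to %n step %c1 {
    // ...
  }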
The Concrete Optimizer is invoked on a representation of the program
in the high-level FHELinalg / FHE Dialects and yields a solution with
a one-to-one mapping of operations to keys. However, the abstractions
used by these dialects do not allow for references to keys and the
application of the solution is delayed until the pipeline reaches a
representation of the program in the lower-level TFHE dialect. Various
transformations applied by the pipeline along the way may break the
one-to-one mapping and add indirections into producer-consumer
relationships, resulting in ambiguous or partial mappings of TFHE
operations to the keys. In particular, explicit frontiers between
optimizer partitions may not be recovered.
This commit preserves explicit frontiers between optimizer partitions
as `optimizer.partition_frontier` operations and lowers these to
keyswitch operations before parametrization of TFHE operations.
This adds a new dialect called `Optimizer` with operations related to
the Concrete Optimizer. Currently, there is only one operation,
`optimizer.partition_frontier`, which can be inserted between a
producer and a consumer belonging to different partitions computed by
the
optimizer. The purpose of this operation is to preserve explicit key
changes from the invocation of the optimizer on high-level dialects
(i.e., FHELinalg / FHE) until the IR is provided with actual
references to keys in low-level dialects (i.e., TFHE).
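A sketch of the intended use (types and surrounding operations are
illustrative):

  // %0 is produced in one partition and consumed in another; the
  // frontier op preserves the explicit key change and is later lowered
  // to a keyswitch operation.
  %0 = "FHE.add_eint"(%a, %b)
       : (!FHE.eint<6>, !FHE.eint<6>) -> !FHE.eint<6>
  %1 = "optimizer.partition_frontier"(%0)
       : (!FHE.eint<6>) -> !FHE.eint<6>
  %2 = "FHE.add_eint"(%1, %c)
       : (!FHE.eint<6>, !FHE.eint<6>) -> !FHE.eint<6>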
The DAG pass establishing a mapping between operations in the IR and
the optimizer DAG currently omits assignment of the optimizer ID to
`FHE.reinterpret_precision` operations via the `TFHE.OId`
attribute. This prevents subsequent passes from determining to which
optimizer partition a `FHE.reinterpret_precision` operation belongs.
This commit removes the early exit in `FunctionToDag::addOperation`
for the handling of `FHE.reinterpret_precision` that prevented the
code assigning the optimizer ID from being executed.
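After the fix, the optimizer ID also lands on these operations, e.g.
(attribute value and types illustrative):

  %1 = "FHE.reinterpret_precision"(%0) { "TFHE.OId" = 2 : i32 }
       : (!FHE.eint<6>) -> !FHE.eint<4>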
On macOS ARM, the C API backing the Python bindings does not properly
propagate exceptions to the concretelang Python module. This makes all
exceptions raised through `CompilerEngine.cpp` fall into the catch-all
case of the pybind11 exception handler.
Since there is no particular need for a public C API, we simply remove
it from the bindings and move all the content of `CompilerEngine.cpp`
directly into the `CompilerAPIModule.cpp` file.
The current pass applying the parameters determined by the optimizer
to the IR propagates the parametrized TFHE types to operations not
directly tagged with an optimizer ID only under certain conditions. In
particular, it does not always properly propagate types into nested
regions (e.g., of `scf.for` loops).
This burdens preceding transformations applied between the invocation
of the optimizer and the parametrization pass with
data-flow analysis and book-keeping in order to tag newly inserted
operations with the right optimizer IDs that ensure proper
parametrization.
This commit replaces the current parametrization pass with a new pass
that propagates parametrized TFHE types up and down def-use chains
using type inference and a proper rewriter. The pass is limited to the
operations supported by `TFHEParametrizationTypeResolver::resolve`.
In order to prevent leftover TFHE operations from being lowered
further down the pipeline after parametrization, the canonicalizer is
run, which includes dead code elimination.
The `TypeInference` dialect provides three operations.
The operation `TypeInference.propagate_downwards` represents a type
barrier that forwards the type of its operand as its result type
during type inference.
The operation `TypeInference.propagate_upwards` also represents a type
barrier, but forwards the type of its result as the type of its
operand during type inference.
The operation `TypeInference.unresolved_conflict` can be used as a
marker when two different types have been inferred for a value (e.g.,
one type during forward dataflow analysis and the other during
backward dataflow analysis).
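A sketch of the three operations (the TFHE type syntax is
illustrative):

  // Forward the operand type to the result:
  %d = "TypeInference.propagate_downwards"(%a)
       : (!TFHE.glwe<sk<0,1,1024>>) -> !TFHE.glwe<sk?>
  // Forward the result type to the operand:
  %u = "TypeInference.propagate_upwards"(%b)
       : (!TFHE.glwe<sk?>) -> !TFHE.glwe<sk<0,1,1024>>
  // Marker for conflicting forward- and backward-inferred types:
  %c = "TypeInference.unresolved_conflict"(%v)
       : (!TFHE.glwe<sk<0,1,512>>) -> !TFHE.glwe<sk<0,1,1024>>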
In the optimizer, nodes without consumers are identified as outputs.
Since we can now return multiple values, this is inherently buggy:
a value can be both returned and consumed as an input to another node.
This commit fixes this by allowing the compiler to explicitly tag
nodes as outputs.
For `ExtractSliceOp` and `InsertSliceOp`, the code performing the
hoisting of indexed operations in the batching pass derives the
indexes of the hoisted operation from the indexes provided by
`ExtractSliceOp::getOffsets()` and
`InsertSliceOp::getOffsets()`. However, these methods return only the
dynamic indexes, so that operations with mixed static and dynamic
offsets are hoisted incorrectly.
This patch replaces the invocations of `ExtractSliceOp::getOffsets()`
and `InsertSliceOp::getOffsets()` with invocations of
`ExtractSliceOp::getMixedOffsets()` and
`InsertSliceOp::getMixedOffsets()`, respectively, in order to take
both static and dynamic indexes into account.
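For example (shapes illustrative):

  // Mixed offsets: the first offset is static (0), the second is
  // dynamic (%i). getOffsets() returns only %i, while
  // getMixedOffsets() returns one entry per dimension, covering both.
  %s = tensor.extract_slice %t[0, %i] [1, 4] [1, 1]
      : tensor<8x16xi64> to tensor<1x4xi64>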
The IR generated by the batching pattern introduces intermediate
tensor values omitting the batched dimensions for batched
operands. This happens unconditionally, leading to the generation of
`tensor.collapse_shape` operations with the same input and output
shape. However, verification of such operations fails, since the
verifier assumes that the rank of the resulting tensor is reduced by
at least one.
This commit modifies the check in `flattenTensor`, such that no
flattening operation is generated if the input and output shapes would
be identical.
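The kind of degenerate reshape that is no longer emitted (sketch):

  // An identity collapse fails verification, since the result rank is
  // not reduced; the pattern now skips emitting it altogether.
  %0 = tensor.collapse_shape %t [[0], [1]]
      : tensor<8x4xi64> into tensor<8x4xi64>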