On macOS arm64, the C API backing the Python bindings does not propagate
exceptions properly to the concretelang Python module. This makes all
exceptions raised through `CompilerEngine.cpp` fall into the catch-all
case of the pybind11 exception handler.
Since there is no particular need for a public C API, we remove it
from the bindings and move the content of `CompilerEngine.cpp`
directly into `CompilerAPIModule.cpp`.
The current pass applying the parameters determined by the optimizer
to the IR propagates the parametrized TFHE types to operations not
directly tagged with an optimizer ID only under certain conditions. In
particular, it does not always properly propagate types into nested
regions (e.g., of `scf.for` loops).
This burdens transformations applied between the invocation of the
optimizer and the parametrization pass with data-flow analysis and
bookkeeping needed to tag newly inserted operations with the optimizer
IDs that ensure proper parametrization.
This commit replaces the current parametrization pass with a new pass
that propagates parametrized TFHE types up and down def-use chains
using type inference and a proper rewriter. The pass is limited to the
operations supported by `TFHEParametrizationTypeResolver::resolve`.
To prevent leftover TFHE operations from being lowered further down
the pipeline after parametrization, the canonicalizer, which includes
dead code elimination, is run afterwards.
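As a schematic illustration (the TFHE types and the operation shown below are
placeholders, not the dialect's actual printed syntax), the new pass propagates
a type parametrized for a value defined outside an `scf.for` loop into the
loop's `iter_args`, its body and its results, even when none of the operations
in the nested region carries an optimizer ID:
```
// Placeholder notation: !TFHE.glwe<?> stands for an unparametrized TFHE type.
%r = scf.for %iv = %c0 to %cN step %c1 iter_args(%acc = %init) -> (!TFHE.glwe<?>) {
  // %x carries an optimizer ID and receives concrete parameters; type
  // inference pushes the parametrized type to %acc, %s, the yield and the
  // loop result.
  %s = "TFHE.add_glwe"(%acc, %x) : (!TFHE.glwe<?>, !TFHE.glwe<?>) -> !TFHE.glwe<?>
  scf.yield %s : !TFHE.glwe<?>
}
```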
This adds the `TypeInferenceRewriter` class, which applies to a module
the results of type inference obtained from the state of a
`DataFlowSolver` and from a final invocation of a type resolver.
Type inference is implemented through the two classes
`ForwardTypeInferenceAnalysis` and `BackwardTypeInferenceAnalysis`,
which can be used as forward and backward dataflow analyses with the
MLIR sparse dataflow analysis framework.
Both classes rely on a type resolver: a class inheriting from
`TypeResolver` that specifies which types are to be considered
unresolved and that resolves the actual types for the values related
to an operation based on the previous state of type inference.
The type inference state for an operation is represented by an
instance of the class `LocalInferenceState`, which maps the values
related to an operation to instances of `InferredType` (either
indicating the inferred type as an `mlir::Type` or indicating that no
type has been inferred yet).
The local type inference by a type resolver can be implemented with
type constraints (instances of sub-classes of `TypeConstraint`), which
can be combined into a `TypeConstraintSet`. The latter provides a
function that attempts to apply the constraints until the resulting
type inference state converges.
There are multiple predefined type constraint classes for common
constraints (e.g., that two values must have the same type or the same
element type). These exist both as static constraints and as dynamic
constraints. Some predefined type constraints depend on a class that
yields a pair of values to which the constraint shall be applied
(e.g., two operands, or an operand and a result).
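As an illustration of such constraints (phrased informally here, not naming the
actual constraint classes), consider a `tensor.insert` operation:
```
// A resolver could impose that the result and the destination tensor have the
// same type, and that the inserted scalar and the destination have the same
// element type; applying these constraints repeatedly propagates a type
// inferred for any one of the three values to the other two.
%r = tensor.insert %scalar into %dest[%i] : tensor<8xi64>
```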
The `TypeInference` dialect provides three operations.
The operation `TypeInference.propagate_downwards` represents a type
barrier that is supposed to forward the type of its operand as its
result type during type inference.
The operation `TypeInference.propagate_upwards` also represents a
type barrier, but is supposed to forward the type of its result as the
type of its operand during type inference.
The operation `TypeInference.unresolved_conflict` can be used as a
marker when two different types have been inferred for a value (e.g.,
one type during forward dataflow analysis and the other during
backward dataflow analysis).
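A schematic use of these operations in MLIR's generic operation syntax (the
tensor types are purely illustrative) could look as follows:
```
// Forward barrier: the operand's inferred type becomes the result type.
%a = "TypeInference.propagate_downwards"(%x) : (tensor<4xi64>) -> tensor<4xi64>
// Backward barrier: the result's inferred type becomes the operand type.
%b = "TypeInference.propagate_upwards"(%a) : (tensor<4xi64>) -> tensor<4xi64>
// Marker for a value whose forward and backward analyses inferred different types.
%c = "TypeInference.unresolved_conflict"(%b) : (tensor<4xi64>) -> tensor<4xi64>
```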
Some of the tests use lookup tables whose number of elements does not
match the polynomial size of the bootstrap operations they are passed
to. This commit replaces these lookup tables with tables of the right
size.
In the optimizer, nodes without consumers are identified as outputs.
Since multiple values can now be returned, this is inherently buggy:
a value can be both returned and consumed by other nodes.
This commit fixes this by allowing the compiler to tag nodes as
outputs.
For `ExtractSliceOp` and `InsertSliceOp`, the code performing the
hoisting of indexed operations in the batching pass derives the
indexes of the hoisted operation from the indexes provided by
`ExtractSliceOp::getOffsets()` and
`InsertSliceOp::getOffsets()`. However, these methods only return the
dynamic offsets, so operations with mixed static and dynamic offsets
are hoisted incorrectly.
This patch replaces the invocations of `ExtractSliceOp::getOffsets()`
and `InsertSliceOp::getOffsets()` with invocations of
`ExtractSliceOp::getMixedOffsets()` and
`InsertSliceOp::getMixedOffsets()`, respectively, in order to take
both static and dynamic offsets into account.
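For example, in the following operation the first offset is static and the
second one is dynamic (the element type is only illustrative); `getOffsets()`
returns only `%i`, whereas `getMixedOffsets()` yields both offsets:
```
%s = tensor.extract_slice %t[0, %i] [4, 4] [1, 1] : tensor<16x16xf32> to tensor<4x4xf32>
```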
The IR generated by the batching pattern introduces intermediate
tensor values omitting the batched dimensions for batched
operands. This happens unconditionally, leading to the generation of
`tensor.collapse_shape` operations with identical input and output
shapes. However, verification of such operations fails, since the
verifier assumes that the rank of the resulting tensor is reduced by
at least one.
This commit modifies the check in `flattenTensor` such that no
flattening operation is generated if the input and output shapes would
be identical.
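Schematically, the degenerate case described above corresponds to an operation
like the following, whose reassociation does not reduce the rank and which is
therefore rejected by the verifier:
```
%c = tensor.collapse_shape %t [[0], [1]] : tensor<4x5xf32> into tensor<4x5xf32>
```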
Currently, the cleanup pattern matches sequences of `tensor.extract` /
`tensor.extract_slice`, `tensor.insert` / `tensor.insert_slice` and
`scf.yield` operations that are embedded into perfect loop nests whose
IVs are referenced in quasi-affine expressions with constant
steps. The pattern ensures that the extraction and insertion
operations use the same IVs for indexing, but does not check whether
the IVs appear in the same order.
However, the order in which IVs are used for indexing is crucial when
replacing the operations with `tensor.extract_slice` and
`tensor.insert_slice` operations in order to preserve the shape of the
slices and the order of elements.
For example, when the cleanup pattern is applied to the following IR:
```
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c2 = arith.constant 2 : index
%c3 = arith.constant 3 : index
scf.for %i = %c0 to %c3 step %c1 ... {
  scf.for %j = %c0 to %c2 step %c1 ... {
    %v = tensor.extract ... [%i, %j]
    %t = tensor.insert ... [%j, %i]
    scf.yield %t
  }
  ...
}
```
The extracted slice has a shape of 3x2, while the insertion
corresponds to a slice of shape 2x3.
This commit adds an additional check to the cleanup pattern that
ensures that loop IVs are used for indexing in the same order and
appear the same number of times.
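For comparison, when both accesses use the IVs in the same order (e.g.,
`[%i, %j]` for both the extraction and the insertion), the accesses can
legitimately be collapsed into slice-level operations of a consistent 3x2
shape, schematically (offsets, sizes and strides derive from the loop bounds;
the tensor types are illustrative):
```
%s = tensor.extract_slice %src[0, 0] [3, 2] [1, 1] : tensor<8x8xi64> to tensor<3x2xi64>
%r = tensor.insert_slice %s into %dst[0, 0] [3, 2] [1, 1] : tensor<3x2xi64> into tensor<8x8xi64>
```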
Normalization of indexes in the batching pass currently emits
arithmetic operations to shift and scale indexes unconditionally,
regardless of whether these indexes are already normalized. This leads
to unnecessary subtractions of 0 and divisions by 1.
This commit introduces two additional checks in the index
normalization code that prevent arithmetic operations from being
emitted for indexes that do not need to be shifted or scaled.
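Schematically, for an index that is already normalized (lower bound 0, step 1),
the pass previously emitted operations like the following, which the new checks
now skip (the exact operations depend on the surrounding IR):
```
%shifted = arith.subi %i, %c0 : index
%scaled = arith.divsi %shifted, %c1 : index
```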