Use `OpConversionPattern` instead of `OpRewritePattern` for operation
conversion during dialect conversion. This makes explicit and in-place
type conversions unnecessary, since `OpConversionPattern` already
properly converts operand types and provides them to the rewrite rule
through an operation adaptor.
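For illustration, a minimal sketch of such a pattern, using a hypothetical `foo::AddOp` that is not part of this commit:

```cpp
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Minimal sketch, assuming a hypothetical type-polymorphic `foo::AddOp`.
// The adaptor hands the pattern the operands *after* type conversion, so the
// pattern never has to convert operand types explicitly or in place.
struct AddOpConversion : public OpConversionPattern<foo::AddOp> {
  using OpConversionPattern<foo::AddOp>::OpConversionPattern;

  LogicalResult
  matchAndRewrite(foo::AddOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    // Rebuild the op from the converted operands; for a type-polymorphic op
    // the result type simply follows the converted operand types.
    rewriter.replaceOpWithNewOp<foo::AddOp>(
        op, adaptor.getOperands()[0].getType(), adaptor.getOperands());
    return success();
  }
};
```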
The main contributions of this commit are the two class templates
`TypeConvertingReinstantiationPattern` and
`GenericOneToOneOpConversionPattern`.
The former allows for the definition of a simple replacement rule that
re-instantiates an operation after the types of its operands have been
converted. This is especially useful for type-polymorphic operations
during dialect conversion.
The latter allows for the definition of patterns in which one operation
needs to be replaced with a different operation after conversion of
its operands.
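As a rough, non-authoritative sketch of what both templates boil down to in the default case (names are illustrative; the actual interface of the class templates may differ):

```cpp
#include "mlir/Transforms/DialectConversion.h"

using namespace mlir;

// Sketch only: SourceOp and TargetOp are placeholders. With TargetOp ==
// SourceOp this corresponds to re-instantiation of the same operation; with
// a different TargetOp it is a one-to-one replacement. Both cases rely on
// TargetOp having a generic builder taking result types, operands and
// attributes.
template <typename SourceOp, typename TargetOp = SourceOp>
struct OneToOneConversionSketch : public OpConversionPattern<SourceOp> {
  using OpConversionPattern<SourceOp>::OpConversionPattern;
  using OpAdaptor = typename OpConversionPattern<SourceOp>::OpAdaptor;

  LogicalResult
  matchAndRewrite(SourceOp op, OpAdaptor adaptor,
                  ConversionPatternRewriter &rewriter) const override {
    // Convert the result types with the pattern's type converter.
    SmallVector<Type> resultTypes;
    if (failed(this->getTypeConverter()->convertTypes(op->getResultTypes(),
                                                      resultTypes)))
      return failure();

    // The adaptor already provides the operands with converted types, so the
    // operation can simply be rebuilt from them.
    rewriter.replaceOpWithNewOp<TargetOp>(op, resultTypes,
                                          adaptor.getOperands(),
                                          op->getAttrs());
    return success();
  }
};
```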
The default implementations for the class templates provide
conversion rules for operations that have a generic builder method
that takes the desired return type(s), the operands and (optionally) a
set of attributes. How attributes are discarded during a conversion
(either by omitting the builder argument or by passing an empty set of
attributes) can be defined through specialization of
`ReinstantiationAttributeDismissalStrategy`.
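For example, continuing the sketch above, the two ways of discarding attributes would look roughly like this:

```cpp
// Sketch: discard attributes by simply omitting the builder argument ...
rewriter.replaceOpWithNewOp<TargetOp>(op, resultTypes, adaptor.getOperands());

// ... or by explicitly passing an empty attribute list.
rewriter.replaceOpWithNewOp<TargetOp>(op, resultTypes, adaptor.getOperands(),
                                      ArrayRef<NamedAttribute>{});
```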
Custom replacement rules that deviate from the scheme above should be
implemented by specializing
`TypeConvertingReinstantiationPattern::matchAndRewrite()` and
`GenericOneToOneOpConversionPattern::matchAndRewrite()`.
This adds a new option `--unroll-loops-with-sdfg-convertible-ops`,
which causes loops containing SDFG-convertible operations to be fully
unrolled upon the extraction of SDFG operations using the
`--emit-sdfg-ops` switch. This avoids constant roundtrips between an
SDFG-capable accelerator and the host during execution of a loop.
The option is limited to `scf.for` loops with static bounds and a
static step size. Since full unrolling of loops with large bounds
results in a large number of operations, the option is disabled by
default.
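As a sketch of the eligibility check (hypothetical helper, not necessarily how the pass is implemented): a loop only qualifies if its bounds and step are constants, in which case the trip count, and hence the size of the fully unrolled code, is known at compile time.

```cpp
#include <optional>

#include "mlir/Dialect/SCF/IR/SCF.h" // "mlir/Dialect/SCF/SCF.h" on older MLIR
#include "mlir/IR/Matchers.h"

// Sketch only: returns the trip count of an scf.for loop if its bounds and
// step are compile-time constants, and std::nullopt otherwise. Loops without
// a static trip count are not unrolled by the option.
static std::optional<int64_t> staticTripCount(mlir::scf::ForOp forOp) {
  mlir::IntegerAttr lb, ub, step;
  if (!mlir::matchPattern(forOp.getLowerBound(), mlir::m_Constant(&lb)) ||
      !mlir::matchPattern(forOp.getUpperBound(), mlir::m_Constant(&ub)) ||
      !mlir::matchPattern(forOp.getStep(), mlir::m_Constant(&step)))
    return std::nullopt;

  // ceil((ub - lb) / step) iterations for a canonical scf.for loop.
  return (ub.getInt() - lb.getInt() + step.getInt() - 1) / step.getInt();
}
```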
This adds a new dialect called "SDFG" for data flow graphs. An SDFG
data flow graph is composed of a set of processes, connected through
data streams. Special streams allow for data to be injected into and
to be retrieved from the data flow graph.
The dialect is intended to be lowered to API calls that allow the
graph to be offloaded onto hardware accelerators.
The batching pass passes operands to the batched operation as a flat,
one-dimensional tensor produced through a `tensor.collapse_shape`
operation that collapses all dimensions of the original operand
tensor. Similarly, the result of the batched operation is afterwards
expanded back to the original shape using a `tensor.expand_shape`
operation.
The pass emits the `tensor.collapse_shape` and `tensor.expand_shape`
operations unconditionally, even for tensors that already have only
a single dimension. This causes the verifiers of these operations to
fail in some cases, aborting the entire compilation process.
This patch lets the batching pass emit `tensor.collapse_shape` and
`tensor.expand_shape` for batched operands and batched results only if
the rank of the corresponding tensors is greater than one.
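A sketch of the guard (illustrative, not the exact code of the pass), shown for the operand side; the result side is symmetric with `tensor.expand_shape`:

```cpp
#include "mlir/Dialect/Tensor/IR/Tensor.h"

// Sketch: wrap a batched operand in tensor.collapse_shape only if the
// original tensor has more than one dimension; rank-1 tensors are passed
// through unchanged, so the collapse_shape verifier is never triggered on a
// trivial reshape.
mlir::Value flattenForBatching(mlir::OpBuilder &builder, mlir::Location loc,
                               mlir::Value operand) {
  auto tensorTy = operand.getType().cast<mlir::RankedTensorType>();
  if (tensorTy.getRank() <= 1)
    return operand;

  // A single reassociation group containing all dimensions collapses the
  // tensor into a flat, one-dimensional tensor.
  mlir::ReassociationIndices allDims;
  for (int64_t i = 0; i < tensorTy.getRank(); ++i)
    allDims.push_back(i);

  return builder.create<mlir::tensor::CollapseShapeOp>(
      loc, operand, llvm::SmallVector<mlir::ReassociationIndices>{allDims});
}
```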
- unify CPU and GPU bootstrapping operations
- remove operations to build GLWE from table: this is now done in
wrapper functions
- remove GPU memory management operations: this is now done in the
wrappers, but we will have to think about how to handle this in MLIR later
For now, only levelled ops with user-provided parameters work (take a look at the tests).
Done:
- Add parameters to the FHE parameters to support CRT-based large integers
- Add command-line options and test options to allow the user to provide these new parameters
- Update the dialects and the pipeline to handle the new FHE parameters for CRT-based large integers
- Update the client parameters and the client library to handle the CRT-based large integers
Todo:
- Plug in the optimizer to compute the CRT-based large integer parameters
- Plug in the PBS for CRT-based large integers
Rebase to llvm-project at 3f81841474fe with a pending upstream patch
for arbitrary element types in linalg named operations.
Co-authored-by: Ayoub Benaissa <ayoub.benaissa@zama.ai>
This commit is introduced because Python bindings for `tensor.from_elements` are not generated automatically. Previously, we overcame this with string manipulation, but with the latest version of the compiler, that became a problem. This commit should be reverted eventually. See https://discourse.llvm.org/t/cannot-create-tensor-from-elements-operation-from-python-bindings/4768 for the discussion on the LLVM forums.