While the MANP analysis did not handle tensors, the dot operation restricted its operands to come from a block argument. Since tensors are now handled in the MANP pass, this restriction no longer has a purpose.
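As an illustration, IR like the following minimal sketch should now be accepted, since the encrypted operand of the dot is an operation result rather than a block argument (types and constant syntax are assumed from the other examples in this log and may differ):
// Hypothetical: encrypted operand produced by another op, not a block argument.
%t = "HLFHELinalg.zero"() : () -> tensor<4x!HLFHE.eint<4>>
%cst = constant dense<[1, 2, 3, 4]> : tensor<4xi32>
%r = "HLFHE.dot_eint_int"(%t, %cst) : (tensor<4x!HLFHE.eint<4>>, tensor<4xi32>) -> !HLFHE.eint<4>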
Add a new operation `HLFHELinalg.zero`, broadcasting an encrypted,
zero-valued integer into a tensor of encrypted integers with static
shape.
Example creating a one-dimensional tensor with five elements all
initialized to an encrypted zero:
%tensor = "HLFHELinalg.zero"() : () -> tensor<5x!HLFHE.eint<4>>
When determining the maximum MANP and precision, the `MaxMANPPass`
only takes into account results generated by an operation, but ignores
function parameters. However, encrypted function parameters are
assumed to have a MANP value of 1 and can have an arbitrary
precision. This patch takes function arguments into account by using
their default MANP values and by extracting their precision.
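A hypothetical example (function name and precision chosen purely for illustration) where the maximum precision stems from a function argument rather than from any operation result:
func @forward(%arg0: !HLFHE.eint<7>) -> !HLFHE.eint<7> {
  return %arg0 : !HLFHE.eint<7>
}
With this patch, the pass should report a maximum precision of 7 and a maximum MANP of 1 for this function, following the default assumptions described above.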
Add support for `tensor.extract` operations in the MANP pass. This
currently only supports extract operations on tensors of encrypted
integers that are passed as function arguments, e.g.:
func @extract_ith(%t: tensor<10x!HLFHE.eint<5>>, %i: index) -> !HLFHE.eint<5> {
%c = tensor.extract %t[%i] : tensor<10x!HLFHE.eint<5>>
return %c : !HLFHE.eint<5>
}
We made this choice because there are issues creating tensors of custom integer types in Python. Additionally, we do not perform the integer extension before converting to the Concrete C API, which requires i64.
This pass calculates the squared Minimal Arithmetic Noise Padding
(MANP) for each operation using the MANP pass and extracts the maximum
(non-squared) Minimal Arithmetic Noise Padding and the maximum
encrypted integer width from the analyzed operations.
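For example (purely illustrative values): if the largest sqMANP found is 14, the corresponding maximum non-squared MANP is √14 ≈ 3.74, and if the widest encrypted type encountered is !HLFHE.eint<7>, the reported maximum width is 7.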
This pass calculates the squared Minimal Arithmetic Noise Padding
(MANP) for each operation of a function and stores the result in an
integer attribute named "sqMANP". This metric is identical to the
squared 2-norm of the constant vector of an equivalent dot product
between a vector of encrypted integers resulting directly from an
encryption and a vector of plaintext constants.
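For example, a dot product whose plaintext constants are [1, 2, 3] has a squared 2-norm of 1² + 2² + 3² = 14. An annotated excerpt might look as follows (hypothetical; the constant form and the attribute's integer type are assumed and may differ):
%cst = constant dense<[1, 2, 3]> : tensor<3xi32>
%0 = "HLFHE.dot_eint_int"(%arg0, %cst) {sqMANP = 14 : i32} : (tensor<3x!HLFHE.eint<4>>, tensor<3xi32>) -> !HLFHE.eint<4>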
The pass supports the following operations:
- HLFHE.dot_eint_int
- HLFHE.zero
- HLFHE.add_eint_int
- HLFHE.add_eint
- HLFHE.sub_int_eint
- HLFHE.mul_eint_int
- HLFHE.apply_lookup_table
If any other operation is encountered, the pass conservatively
fails. The pass further makes the optimistic assumption that all
values passed to a function are either the direct result of an
encryption or of a noise-refreshing operation.
We currently use LD_PRELOAD with the Python extension so that JIT
execution finds the appropriate symbols. However, not linking against
some libraries caused other tools, such as make, to complain about
missing symbols from libpthread and others.
This changes the semantics of `HLFHE.dot_eint_int` from memref-based
reference semantics to tensor-based value semantics. The former:
"HLFHE.dot_eint_int"(%arg0, %arg1, %arg2) :
(memref<Nx!HLFHE.eint<0>>, memref<Nxi32>, memref<!HLFHE.eint<0>>) -> ()
becomes:
"HLFHE.dot_eint_int"(%arg0, %arg1) :
(tensor<Nx!HLFHE.eint<0>>, tensor<Nxi32>) -> !HLFHE.eint<0>
As a side effect, data-flow analyses become much easier. With the
previous memref type of the plaintext argument it is difficult to
check whether the plaintext values are statically defined constants or
originate from a memory region changed at execution time (e.g., for
analyses evaluating the impact on noise). Changing the plaintext type
from `memref` to `tensor` makes such analyses significantly easier.
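As a rough illustration of why: with value semantics, a constant plaintext operand is directly visible as the defining op of the SSA value (illustrative sketch; the constant syntax is assumed):
%cst = constant dense<[1, 2, 3, 4]> : tensor<4xi32>
%0 = "HLFHE.dot_eint_int"(%enc, %cst) : (tensor<4x!HLFHE.eint<0>>, tensor<4xi32>) -> !HLFHE.eint<0>
An analysis only has to walk from the operand to its defining constant op, whereas with memrefs it would have to reason about the stores that populated the buffer.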
* feat(compiler): low level fhe dialect
* feat(compiler): using generated printer/parser in LowLFHE
* feat(compiler): new types and ops for LowLFHE
* tests(compiler): LowLFHE types and ops
* feat(compiler): fill ops
* cleanup
* summary + description
* tests(compiler): use new CLI args
* formatting
- feat(compiler): python bindings
- build: update docker image for python bindings
- pin pybind11 to 2.6.2; 2.7 does not set include_dirs correctly (still
  an open question why)
- using generated parser/printer
Add a simple dot product between a vector of encrypted integers and a
vector of plaintext integers to the HLFHE dialect.
The operation takes two input operands and one output operand, all
modeled as memrefs. Example:
"HLFHE.dot_eint_int"(%lhs, %rhs, %out) :
(memref<?x!HLFHE.eint<0>>, memref<?xi32>, memref<!HLFHE.eint<0>>) -> ()