This introduces a new header file `zamalang/Support/Constants.h` for
constants. It is currently only populated with a single constant: the
default pattern rewriting benefit of 1.
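A minimal sketch of what such a header could look like (the include guard, namespace, and constant name are assumptions, not the actual contents):

// zamalang/Support/Constants.h (hypothetical sketch)
#ifndef ZAMALANG_SUPPORT_CONSTANTS_H
#define ZAMALANG_SUPPORT_CONSTANTS_H

namespace mlir {
namespace zamalang {

// Default benefit used when registering rewrite patterns.
constexpr unsigned DEFAULT_PATTERN_BENEFIT = 1;

} // namespace zamalang
} // namespace mlir

#endif // ZAMALANG_SUPPORT_CONSTANTS_H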
Upon invocation of a function with memref arguments, the strides for
all dimensions are currently set to 0. This causes dynamic offsets to
be calculated incorrectly in the function body.
This patch replaces the placeholder values with the actual strides for
each dimension and adds a test with parametric slice extraction from a
tensor that triggers dynamic indexing.
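For reference, a small sketch of the row-major stride computation involved (the helper name and its integration point are hypothetical): for a contiguously laid out memref, the stride of a dimension is the product of the sizes of all inner dimensions.

#include <cstdint>
#include <vector>

// Hypothetical helper: compute row-major strides from the static sizes of
// a memref argument.
static std::vector<int64_t> rowMajorStrides(const std::vector<int64_t> &sizes) {
  std::vector<int64_t> strides(sizes.size(), 1);
  for (int64_t i = static_cast<int64_t>(sizes.size()) - 2; i >= 0; --i)
    strides[i] = strides[i + 1] * sizes[i + 1];
  return strides;
}

// E.g. a 3x2 tensor argument now gets strides {2, 1} instead of {0, 0}.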
[----------] Global test environment tear-down
[==========] 7 tests from 1 test suite ran. (1513 ms total)
[ PASSED ] 7 tests.
YOU HAVE 2 DISABLED TESTS
[----------] Global test environment tear-down
[==========] 6 tests from 1 test suite ran. (1513 ms total)
[ PASSED ] 6 tests.
YOU HAVE 3 DISABLED TESTS
Compared to the previous commit, a fatal test is disabled.
[----------] Global test environment tear-down
[==========] 6 tests from 1 test suite ran. (1327 ms total)
[ PASSED ] 5 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] Lambda_check_param.scalar_tensor_to_tensor_good_number_param
1 FAILED TEST
YOU HAVE 3 DISABLED TESTS
While the MANP analysis did not handle tensors, the dot operation restricted its operands to values coming from a block argument. Since tensors are now handled in the MANP pass, this restriction no longer has any meaning.
Add a rewrite pattern that transforms an instance of
`HLFHELinalg.zero` into an instance of `tensor.generate` with an
appropriate region yielding a zero value.
Example:
%out = "HLFHELinalg.zero"() : () -> tensor<MxNx!HLFHE.eint<p>>
becomes:
%0 = tensor.generate {
^bb0(%arg2: index, %arg3: index):
  %zero = "HLFHE.zero"() : () -> !HLFHE.eint<p>
  tensor.yield %zero : !HLFHE.eint<p>
} : tensor<MxNx!HLFHE.eint<p>>
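A sketch of the corresponding rewrite pattern, assuming hypothetical C++ class names for the dialect ops (`HLFHELinalg::ZeroOp`, `HLFHE::ZeroOp`) and a recent MLIR API; exact casting and builder signatures differ between MLIR versions:

struct HLFHELinalgZeroToTensorGenerate
    : public mlir::OpRewritePattern<HLFHELinalg::ZeroOp> {
  using mlir::OpRewritePattern<HLFHELinalg::ZeroOp>::OpRewritePattern;

  mlir::LogicalResult
  matchAndRewrite(HLFHELinalg::ZeroOp op,
                  mlir::PatternRewriter &rewriter) const override {
    auto resultTy =
        op->getResult(0).getType().dyn_cast<mlir::RankedTensorType>();
    if (!resultTy || !resultTy.hasStaticShape())
      return mlir::failure();

    // Replace the op with a tensor.generate whose region yields a freshly
    // created encrypted zero for every element.
    rewriter.replaceOpWithNewOp<mlir::tensor::GenerateOp>(
        op, resultTy, /*dynamicExtents=*/mlir::ValueRange{},
        [&](mlir::OpBuilder &b, mlir::Location loc, mlir::ValueRange) {
          mlir::Value zero =
              b.create<HLFHE::ZeroOp>(loc, resultTy.getElementType())
                  .getResult();
          b.create<mlir::tensor::YieldOp>(loc, zero);
        });
    return mlir::success();
  }
};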
Add a new operation `HLFHELinalg.zero`, broadcasting an encrypted,
zero-valued integer into a tensor of encrypted integers with static
shape.
Example creating a one-dimensional tensor with five elements all
initialized to an encrypted zero:
%tensor = "HLFHELinalg.zero"() : () -> tensor<5x!HLFHE.eint<4>>
Operations with regions currently need explicit patterns for type
conversion throughout lowering. This change adds the required patterns
for `tensor.generate` to the lowering passes, so that the operation
can be used starting with the lowering from HLFHE to MidLFHE.
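A hedged sketch of the kind of registration this adds to a lowering pass (the helper function name is made up; `RegionOpTypeConverterPattern` refers to the generic pattern described further below, and a recent MLIR pattern API is assumed):

// Hypothetical excerpt from a lowering pass: register a type-converting
// pattern so tensor.generate survives the partial type conversion.
void populateTensorGenerateTypeConversion(mlir::TypeConverter &converter,
                                          mlir::RewritePatternSet &patterns) {
  patterns.add<RegionOpTypeConverterPattern<mlir::tensor::GenerateOp>>(
      converter, patterns.getContext());
}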
Resolves #288
Example:
Before:
Failed to lower to LLVM dialect
After:
Failed to lower to LLVM dialect
test.mlir:3:10: error: unexpected error: 'linalg.copy' op expected indexing_map #1 to have 2 dim(s) to match the number of loops
%0 = tensor.extract_slice %arg0[0, 0] [3, 1] [1, 1] : tensor<3x2x!HLFHE.eint<3>> to tensor<3x!HLFHE.eint<3>>
^
Replace `LinalgGenericTypeConverterPattern`, specialized for
`linalg.generic`, with a generic type converter pattern
`RegionOpTypeConverterPattern` that can be instantiated for any
operation that has one or more regions (a sketch follows the list
below).
Further enhancements:
- Supports multiple regions
- Uses more idiomatic instantiations of `llvm::for_each` instead of
manual iterations using for loops
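A minimal sketch of what such a generic pattern can look like, assuming a recent MLIR dialect-conversion API (signatures differ between MLIR versions, and the real implementation may be structured differently):

#include "mlir/Transforms/DialectConversion.h"

// Recreates an op of type OpTy with converted result types and converts the
// block argument types of all of its regions.
template <typename OpTy>
struct RegionOpTypeConverterPattern : public mlir::OpConversionPattern<OpTy> {
  using mlir::OpConversionPattern<OpTy>::OpConversionPattern;
  using OpAdaptor = typename mlir::OpConversionPattern<OpTy>::OpAdaptor;

  mlir::LogicalResult
  matchAndRewrite(OpTy op, OpAdaptor adaptor,
                  mlir::ConversionPatternRewriter &rewriter) const override {
    const mlir::TypeConverter *converter = this->getTypeConverter();

    // Convert the result types of the op.
    llvm::SmallVector<mlir::Type> resultTypes;
    if (mlir::failed(
            converter->convertTypes(op->getResultTypes(), resultTypes)))
      return mlir::failure();

    // Recreate the op with converted result types and empty regions.
    mlir::OperationState state(op->getLoc(), op->getName().getStringRef(),
                               adaptor.getOperands(), resultTypes,
                               op->getAttrs());
    for (unsigned i = 0, e = op->getNumRegions(); i < e; ++i)
      state.addRegion();
    mlir::Operation *newOp = rewriter.create(state);

    // Move each region over and convert its block argument types.
    for (unsigned i = 0, e = op->getNumRegions(); i < e; ++i) {
      rewriter.inlineRegionBefore(op->getRegion(i), newOp->getRegion(i),
                                  newOp->getRegion(i).end());
      if (mlir::failed(rewriter.convertRegionTypes(&newOp->getRegion(i),
                                                   *converter)))
        return mlir::failure();
    }

    rewriter.replaceOp(op, newOp->getResults());
    return mlir::success();
  }
};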
Try to find the runtime library automatically (this should only work on
a proper installation of the package), and fail silently by not passing
any RT lib if it cannot be found. The RT lib can also be specified
manually. It is used as a shared library by the JIT compiler.
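A sketch of the lookup strategy under stated assumptions (the candidate path and helper name are made up; only the "probe, silently fall back, manual override wins" behavior comes from the text above):

#include "llvm/Support/FileSystem.h"

#include <optional>
#include <string>

// Hypothetical helper: resolve the runtime library to pass to the JIT.
static std::optional<std::string>
resolveRuntimeLib(std::optional<std::string> userProvidedPath) {
  // A manually specified RT lib always takes precedence.
  if (userProvidedPath)
    return userProvidedPath;

  // Otherwise probe the location a proper installation of the package
  // would use (this path is an assumption for illustration).
  const char *candidate = "/usr/local/lib/libZamalangRuntime.so";
  if (llvm::sys::fs::exists(candidate))
    return std::string(candidate);

  // Fail silently: the JIT runs without an extra shared library.
  return std::nullopt;
}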
Add a new method `JITLambda::Arguments::getResultWidth` returning the
width of a scalar result, or of the element type of a tensor result, at
a given position.
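A hypothetical usage sketch (only the method name and the notion of a per-position result width come from the change; everything else is assumed): the width can be used to mask the raw 64-bit value read back from the JIT-compiled lambda.

#include <cstddef>
#include <cstdint>

// Hypothetical helper: keep only the bits that are meaningful for a result
// of the given width (in bits).
static uint64_t truncateToWidth(uint64_t raw, size_t width) {
  uint64_t mask = width >= 64 ? ~uint64_t(0) : ((uint64_t(1) << width) - 1);
  return raw & mask;
}

// Usage, assuming `arguments` is a JITLambda::Arguments instance:
//   uint64_t value = truncateToWidth(rawValue, arguments.getResultWidth(0));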