This adds a new option `--unroll-loops-with-sdfg-convertible-ops`,
which causes loops containing SDFG-convertible operations to be fully
unrolled when SDFG operations are extracted via the `--emit-sdfg-ops`
switch. This avoids constant round trips between an SDFG-capable
accelerator and the host during the execution of a loop.
The option is limited to `scf.for` loops with static bounds and a
static step size. Since fully unrolling loops with large bounds
results in a large number of operations, the option is disabled by
default.
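To illustrate, a loop of the following form qualifies for unrolling,
since its bounds and step are compile-time constants; the operation
`"sdfg_convertible.op"` is a hypothetical placeholder standing in for
any SDFG-convertible operation:

```mlir
// Hypothetical example: static lower bound, upper bound and step, so the
// loop can be fully unrolled before SDFG extraction. Without unrolling,
// each iteration would round-trip between the host and the accelerator.
%init = arith.constant dense<0> : tensor<1024xi64>
%lb = arith.constant 0 : index
%ub = arith.constant 4 : index
%step = arith.constant 1 : index
%res = scf.for %i = %lb to %ub step %step
    iter_args(%acc = %init) -> (tensor<1024xi64>) {
  // Placeholder for an SDFG-convertible operation.
  %next = "sdfg_convertible.op"(%acc) : (tensor<1024xi64>) -> tensor<1024xi64>
  scf.yield %next : tensor<1024xi64>
}
```

After full unrolling, the four iterations become a straight-line
sequence of four such operations, which the SDFG extraction can then
cover with a single offloaded graph.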
This adds a new dialect called "SDFG" for data flow graphs. An SDFG
data flow graph is composed of a set of processes connected through
data streams. Special streams allow data to be injected into and
retrieved from the data flow graph.
The dialect is intended to be lowered to API calls that allow the
graph to be offloaded to hardware accelerators.
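As a purely conceptual sketch, a graph built from the dialect could
look roughly as follows; the operation and type names used here
(`sdfg.init`, `sdfg.make_stream`, `sdfg.make_process`, `!sdfg.dfg`,
`!sdfg.stream`) are hypothetical and only illustrate the structure of
a graph with one process, a host-to-device input stream and a
device-to-host output stream:

```mlir
// Hypothetical op and type names, for illustration only.
%dfg = "sdfg.init"() : () -> !sdfg.dfg
%in  = "sdfg.make_stream"(%dfg) {kind = "host_to_device"} : (!sdfg.dfg) -> !sdfg.stream
%out = "sdfg.make_stream"(%dfg) {kind = "device_to_host"} : (!sdfg.dfg) -> !sdfg.stream
"sdfg.make_process"(%dfg, %in, %out) {name = "kernel"} : (!sdfg.dfg, !sdfg.stream, !sdfg.stream) -> ()
```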
When this type of benchmark is triggered, only the tests that
benefit from GPU acceleration are run, on a specific AWS EC2
instance. Note that this instance (p3.2xlarge) is not a bare-metal
one, so performance may vary due to the hypervisor controlling the
machine.
- Missing offset in woppbs routine
- Better error message for check of tensor result in end-to-end fixture
- Modify fixture generator for testing purposes
The batching pass hands operands to the batched operation as a flat,
one-dimensional vector produced through a `tensor.collapse_shape`
operation that collapses all dimensions of the original tensor of
operands. Similarly, the result vector of the batched operation is
expanded back to the original shape afterwards using a
`tensor.expand_shape` operation.
The pass emits the `tensor.collapse_shape` and `tensor.expand_shape`
operations unconditionally, even for tensors that already have only
a single dimension. This causes the verifiers of these operations to
fail in some cases, aborting the entire compilation process.
This patch lets the batching pass emit `tensor.collapse_shape` and
`tensor.expand_shape` for batched operands and batched results only if
the rank of the corresponding tensor is greater than one.
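For example, for a rank-2 operand tensor the pass emits IR of roughly
the following shape; the batched operation and its signature are
simplified, hypothetical placeholders:

```mlir
// Collapse the rank-2 operand tensor to a flat vector, apply the batched
// operation, then restore the original shape. With this patch, the
// collapse/expand pair is emitted only because the rank is greater than
// one; a rank-1 operand tensor would be passed through unchanged.
%flat = tensor.collapse_shape %ops [[0, 1]] : tensor<4x8xi64> into tensor<32xi64>
%bres = "concrete.batched_op"(%flat) : (tensor<32xi64>) -> tensor<32xi64>
%res  = tensor.expand_shape %bres [[0, 1]] : tensor<32xi64> into tensor<4x8xi64>
```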
This CI "feature" is meant to circumvent the 6 hours hard-limit
for a job in GitHub Action.
The benchmark is done using a matrix which is handled by Slab.
Here's the workflow:
1. ML benchmarks are started in a fire-and-forget fashion via
   `start_ml_benchmarks.yml`
2. Slab reads `ci/slab.toml` to get the AWS EC2 configuration
   and the matrix parameters
3. Slab launches at most `max_parallel_jobs` EC2 instances in
   parallel
4. Each job triggers `ml_benchmark_subset.yml`, which runs
   only one of the YAML files generated via `make generate-mlbench`,
   based on the value of the matrix item it was given
5. As soon as a job is completed, the next one in the matrix
   starts promptly
This is done until all the matrix items are exhausted.
This adds a new end-to-end test `apply_lookup_table_batched`, which
forces batching of Concrete operations when invoking the compiler
engine, indirectly causing the `concrete.bootstrap_lwe` and
`concrete.keyswitch_lwe` operations generated from the
`FHELinalg.apply_lookup_table` operation of the test to be batched
into `concrete.batched_bootstrap_lwe` and
`concrete.batched_keyswitch_lwe` operations. The batched operations
trigger the generation of calls to batching wrapper functions further
down the pipeline, effectively testing the lowering and implementation
of batched operations as a whole.
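Sketching the intended effect on the IR (operand lists and types are
heavily simplified and partly hypothetical), the per-element
keyswitch/bootstrap pair is replaced by a single batched pair
operating on the whole tensor of ciphertexts:

```mlir
// Simplified, partly hypothetical signatures: one batched
// keyswitch/bootstrap pair over the whole tensor of ciphertexts instead
// of one `concrete.keyswitch_lwe` / `concrete.bootstrap_lwe` pair per element.
%ks = "concrete.batched_keyswitch_lwe"(%cts)
        : (tensor<8x!concrete.lwe_ciphertext>) -> tensor<8x!concrete.lwe_ciphertext>
%bs = "concrete.batched_bootstrap_lwe"(%ks, %lut)
        : (tensor<8x!concrete.lwe_ciphertext>, tensor<4xi64>) -> tensor<8x!concrete.lwe_ciphertext>
```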
Bench just one compilation option for automatic benchmarks. Only the 'loop'
option is tested, to take advantage of hardware with a large number of
available CPUs. Running benchmarks with the 'default' option is suboptimal
for this kind of hardware, since it uses only one CPU.
This also removes the time-consuming MNIST test, as it should be in the ML
benchmarks. Moreover, the Makefile is fixed to use the provided Python
executable instead of relying on the system one to generate the MLIR YAML files.