This CI "feature" is meant to circumvent the 6 hours hard-limit
for a job in GitHub Action.
The benchmark is done using a matrix which is handled by Slab.
Here's the workflow:
1. ML benchmarks are started in a fire-and-forget fashion via
start_ml_benchmarks.yml
2. Slab will read ci/slab.toml to get the AWS EC2 configuration
and the matrix parameters
3. Slab will launch at most max_parallel_jobs EC2 instances in
parallel
4. Each job will trigger ml_benchmark_subset.yml, which will run
only one of the YAML files generated via make generate-mlbench,
based on the value of the matrix item it was given.
5. As soon as a job completes, the next one in the matrix
will start.
This continues until all the matrix items are exhausted.
Bench just one compilation option for automatic benchmarks. Only the 'loop'
option is tested, to take advantage of hardware with many available
CPUs. Running benchmarks with the 'default' option is suboptimal on this
kind of hardware since it uses only one CPU.
This also removes the time-consuming MNIST test, as it belongs in the ML benchmarks.
Moreover, the Makefile is fixed to use the provided Python executable instead of
relying on the system one to generate the MLIR YAML files.
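A minimal sketch of the kind of Makefile change this describes; the generator
script path and target details below are assumptions, not the actual rule:

```make
# Respect a caller-provided interpreter, e.g. `make PYTHON=/usr/bin/python3.9 generate-mlbench`,
# instead of relying on whatever `python` resolves to on the system.
PYTHON ?= python3

# Hypothetical rule: the real script path and options live in the project Makefile.
generate-mlbench:
	$(PYTHON) scripts/generate_mlir_benchmark_yaml.py
```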
The Rust bindings are intended to access both the LLVM/MLIR C API and
the concrete-compiler one. This initial commit provides the API for
LLVM/MLIR only. The tests should be used as an example of how to generate a
valid DAG of operations in MLIR.
This includes several fixes and adds some functionality:
* EC2 instance type can be selected when the workflow is triggered manually
* benchmarks will be run on each push to the main branch
* Docker is not used anymore due to build issues
* 10 repetitions are made during the benchmarks, then results are aggregated
* more tags are used to identify benchmark configurations
This adds four new targets `opt`, `mlir-cpu-runner`, `mlir-opt`, and
`mlir-translate` to the top-level Makefile of the compiler to
conveniently build the corresponding LLVM / MLIR utilities (e.g., for
debugging purposes).
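A hedged sketch of what such convenience targets can look like; the location of
the LLVM/MLIR build tree below is an assumption:

```make
# Location of the LLVM/MLIR build tree (assumed path, adjust to the actual layout).
LLVM_BUILD_DIR ?= build/llvm

# The four tools share one recipe; `$@` expands to the requested target,
# so e.g. `make mlir-opt` builds only that utility.
opt mlir-cpu-runner mlir-opt mlir-translate:
	cmake --build $(LLVM_BUILD_DIR) --target $@
```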
The target `run-mlbench` indirectly depends on the contents of
`tests.ml/bench.zip`, which are extracted by `generate-mlbench`. If
`generate-mlbench` has not been built before, `run-mlbench`
fails. This adds the missing dependency from `run-mlbench` to
`generate-mlbench`.
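A minimal sketch of the added dependency; the extraction recipe and the
benchmark command are illustrative placeholders, not the actual rules:

```make
# Extract the benchmark specifications shipped in the archive (illustrative recipe).
generate-mlbench:
	unzip -o tests.ml/bench.zip -d tests.ml

# Depending on generate-mlbench guarantees the YAML files exist before the benchmarks run.
run-mlbench: generate-mlbench
	@echo "running ML benchmarks on the extracted YAML files"  # placeholder for the real command
```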
This moves all tests from
`tests/end_to_end_tests/end_to_end_jit_clear_tensor.cc` to the test
specification in YAML format in
`tests/end_to_end_fixture/end_to_end_clear_tensor.yaml`. Parametric
tests and tests invoking lambdas in loops have been fully unrolled.
This adds a new variable `BUILD_TYPE` to the Makefile, controlling
whether the build should be a debug or a release build (values `Debug`
and `Release`, respectively). The default mode is `Release`. Depending
on the build type, the build directory is set to `build-Debug` or
`build-Release`. This allows debug and release builds to coexist and makes
it easy to switch between the two.
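A sketch of how such a variable typically wires into the build, following the
description above; the `build` target name and the exact CMake invocation are
assumptions:

```make
# Debug or Release; Release by default.
BUILD_TYPE ?= Release
BUILD_DIR  := build-$(BUILD_TYPE)

# Each build type gets its own directory, so debug and release builds coexist.
build:
	cmake -S . -B $(BUILD_DIR) -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
	cmake --build $(BUILD_DIR)
```

Switching between the two is then just `make build BUILD_TYPE=Debug` or
`make build BUILD_TYPE=Release`.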
This commit rebases the compiler onto commit f69328049e9e from
llvm-project.
Changes:
* Use of the one-shot bufferizer for improved memory management
* A new pass `OneShotBufferizeDPSWrapper` that converts functions
returning tensors to destination-passing-style as required by the
one-shot bufferizer
* A new pass `LinalgGenericOpWithTensorsToLoopsPass` that converts
`linalg.generic` operations with value semantics to loop nests
* Rebase onto a fork of llvm-project at f69328049e9e with local
modifications to enable bufferization of `linalg.generic` operations
with value semantics
* Workaround, via extra patterns in all dialect conversion passes, for the
  absence of type propagation after type conversion
* Printer, parser and verifier definitions moved from inline
declarations in ODS to the respective source files as required by
upstream changes
* New tests for functions with a large number of inputs
* Increase the number of allowed task inputs as required by new tests
* Use upstream function `mlir_configure_python_dev_packages()` to
locate Python development files for compatibility with various CMake
versions
Co-authored-by: Quentin Bourgerie <quentin.bourgerie@zama.ai>
Co-authored-by: Ayoub Benaissa <ayoub.benaissa@zama.ai>
Co-authored-by: Antoniu Pop <antoniu.pop@zama.ai>