Previously, we were sending parsed benchmark results to a
Prometheus instance. Due to its time-series nature, Prometheus
downsamples database content to avoid having too many data points
for a given time range. While this behavior is good for a
continuous stream of data, like monitoring CPU load, it is not
suited for benchmarks: benchmarks are discrete events that occur
once in a while (e.g., once a day), and downsampling would, at
some point, simply drop some of the benchmark results.
Using a regular SQL database like PostgreSQL solves this issue.
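As a minimal sketch of what storing each run in PostgreSQL could look
like (the table name, columns, and connection string below are
assumptions for illustration, not the actual schema), using the
`postgres` crate:

```rust
use postgres::{Client, NoTls};

fn store_result(bench: &str, value: f64) -> Result<(), postgres::Error> {
    // Connection parameters are placeholders.
    let mut client =
        Client::connect("host=localhost user=bench dbname=benchmarks", NoTls)?;
    // A plain SQL table keeps every discrete run as a row; nothing is
    // ever downsampled, no matter how sparse the events are.
    client.batch_execute(
        "CREATE TABLE IF NOT EXISTS results (
             id     SERIAL PRIMARY KEY,
             name   TEXT NOT NULL,
             value  DOUBLE PRECISION NOT NULL,
             run_at TIMESTAMPTZ NOT NULL DEFAULT now()
         )",
    )?;
    client.execute(
        "INSERT INTO results (name, value) VALUES ($1, $2)",
        &[&bench, &value],
    )?;
    Ok(())
}
```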
This requires a CAPI that, when asked for types, returns a
structure that can report whether an error occurred during type
creation. This is needed because a failure at that stage in the
compiler would otherwise lead to a segfault in the Python bindings,
for example, and we want to be able to handle this scenario
gracefully.
Instead of overriding the process stderr to get
the string representation from MLIR, we can
capture it directly into a string using MLIR's
printOperation.
Another problem with overriding stderr is that
each `#[test]` runs in a different thread, meaning that
as soon as we have two or more tests, they could panic
due to conflicts/races between the different overrides.
This also moves the expected string directly into the test
as a literal.
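A minimal sketch of the callback-based capture (the
`print_with_callback` helper below is a stand-in for the real binding,
not its actual signature):

```rust
// Stand-in for the real binding (an assumption): MLIR's printOperation
// hands the printed text to a user-supplied callback instead of writing
// it to stderr.
fn print_with_callback(textual_ir: &str, mut callback: impl FnMut(&str)) {
    callback(textual_ir);
}

#[test]
fn prints_expected_ir() {
    let mut captured = String::new();
    // Each test owns its own buffer, so concurrently running test threads
    // never race on shared process state the way stderr overrides do.
    print_with_callback("arith.constant 0 : i32", |chunk| {
        captured.push_str(chunk);
    });
    // The expected string lives in the test itself as a literal.
    assert_eq!(captured, "arith.constant 0 : i32");
}
```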
Bench just one compilation option for automatic benchmarks. Only the
'loop' option is tested, to take advantage of hardware with many
available CPUs. Running benchmarks with the 'default' option is
suboptimal for this kind of hardware since it uses only one CPU.
This also removes the time-consuming MNIST test, as it belongs in the
ML benchmarks. Moreover, the Makefile is fixed to use the provided
Python executable instead of relying on the system one to generate
the MLIR YAML files.
The Rust bindings are intended to access both the LLVM/MLIR CAPI and
the concrete-compiler one. This initial commit provides the API for
LLVM/MLIR only. The tests can be used as an example of how to generate
a valid DAG of operations in MLIR.
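To illustrate the shape of such a test (the `Op` type and `op` helper
below are hypothetical stand-ins, not the bindings' API): a valid DAG
is built by wiring each operation's results into the operands of
operations created later.

```rust
// Hypothetical stand-ins for illustration only.
struct Op {
    name: &'static str,
    operands: Vec<usize>, // indices of the producer ops in the DAG
}

fn op(name: &'static str, operands: Vec<usize>) -> Op {
    Op { name, operands }
}

#[test]
fn builds_a_valid_dag() {
    // Two constants feeding an add, i.e. %2 = arith.addi %0, %1.
    let dag = vec![
        op("arith.constant", vec![]),
        op("arith.constant", vec![]),
        op("arith.addi", vec![0, 1]),
    ];
    assert_eq!(dag[2].name, "arith.addi");
    // Validity check: every operand refers to an op defined earlier, so
    // the graph is acyclic by construction.
    assert!(dag
        .iter()
        .enumerate()
        .all(|(i, o)| o.operands.iter().all(|&p| p < i)));
}
```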
Converting the types of the original op seems to have an impact on
other operations using the result type, which would then have to
check the different cases (whether the type has been converted yet
or not). Creating a new op, however, doesn't have this issue.
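A schematic contrast of the two approaches (the types and functions
below are illustrative stand-ins, not the real rewriter API):

```rust
// Illustrative stand-ins for an op and its result type.
#[derive(Clone, Debug, PartialEq)]
enum Type {
    Original,
    Converted,
}

struct Op {
    result_type: Type,
}

// Mutating the original op in place: existing users of the result can
// observe either type depending on when they look, so each user must
// handle both cases.
fn convert_in_place(op: &mut Op) {
    op.result_type = Type::Converted;
}

// Building a replacement op instead: the new op carries the converted
// type from the start and users are switched over to it in one step,
// so they never see a half-converted state. The original op is left
// untouched until it is erased.
fn convert_by_replacement(_original: &Op) -> Op {
    Op { result_type: Type::Converted }
}
```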