tfhe-rs/ci/regression.toml
David Testé f8684d1f67 chore(ci): add regression benchmark workflow
Regression benchmarks are meant to be run in pull requests. They
can be launched in two ways:
 * issue comment: using a command like `/bench --backend cpu`
 * adding a label: `bench-perfs-cpu` or `bench-perfs-gpu`

Benchmark definitions are written in TOML and located at
ci/regression.toml.
While not exhaustive, they can easily be extended by following the
embedded documentation.

"/bench" commands are parsed by a Python script located at
ci/perf_regression.py. This script produces output files that
contains cargo commands and a shell script generating custom
environment variables. The Python script and generated files are
meant to be used only by the workflow
benchmark_perf_regression.yml.
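
A minimal sketch of how such a comment command might be parsed. The
function name `parse_bench_command`, the `--profile` option, and the
defaults are illustrative assumptions, not the actual interface of
ci/perf_regression.py; only the `/bench --backend cpu` form appears in
the description above.

```python
import argparse
import shlex

def parse_bench_command(comment: str) -> argparse.Namespace:
    """Parse an issue comment like '/bench --backend cpu' into options.

    Hypothetical sketch: the real ci/perf_regression.py may differ.
    """
    tokens = shlex.split(comment)
    if not tokens or tokens[0] != "/bench":
        raise ValueError("not a /bench command")
    parser = argparse.ArgumentParser(prog="/bench")
    # Backends mirror the tfhe-rs_backend values documented in ci/regression.toml.
    parser.add_argument("--backend", choices=["cpu", "gpu", "hpu"], default="cpu")
    # Assumed option: falls back to the mandatory `default` regression profile.
    parser.add_argument("--profile", default="default")
    return parser.parse_args(tokens[1:])

args = parse_bench_command("/bench --backend gpu --profile multi-h100")
print(args.backend, args.profile)  # prints: gpu multi-h100
```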
2025-09-16 13:33:49 +02:00


# Benchmark regression profile structure is defined as:
#
# [<tfhe-rs_backend>.<regression_profile_name>]
# target.<target_name> = ["<operation_name>", ]
# env.<variable_name> = "<variable_value>"
# slab.backend = "<provider_name>"
# slab.profile = "<slab_profile_name>"
#
# Each tfhe-rs_backend **must** have one regression_profile_name named `default`.
#
# Details:
# --------
#
# > tfhe-rs_backend: name of the backend on which the benchmarks are run
# Possible values are:
# * cpu
# * gpu
# * hpu
#
# > regression_profile_name: any string (with dash and underscore as the only allowed special characters)
# Each tfhe-rs backend **must** have a `default` profile.
#
# > target.<target_name>: list of operations to benchmark on the given tfhe-rs benchmark target
# A profile can have multiple targets.
# Possible values for target_name are listed in tfhe-benchmark/Cargo.toml file under `[[bench]]` section in the
# `name` field.
#
# > env.<variable_name>: environment variable used to alter the benchmark execution environment
# Possible values for variable_name are (case-insensitive):
# * FAST_BENCH
# * BENCH_OP_FLAVOR
# * BENCH_TYPE
# * BENCH_PARAM_TYPE
# * BENCH_PARAMS_SET
#
# > slab.backend: name of on-demand instance provider
# Possible values are:
# * aws
# * hyperstack
#
# > slab.profile: on-demand instance profile to use for the benchmark
# See the ci/slab.toml file for the list of all supported profiles.
[gpu.default]
target.integer-bench = ["mul", "div"]
target.hlapi-dex = ["dex_swap"]
slab.backend = "hyperstack"
slab.profile = "single-h100"
env.fast_bench = "TRUE"

[gpu.multi-h100]
target.integer-bench = ["mul", "div"]
target.hlapi-dex = ["dex_swap"]
slab.backend = "hyperstack"
slab.profile = "multi-h100"

[cpu.default]
target.integer-bench = ["add_parallelized", "mul_parallelized", "div_parallelized"]
slab.backend = "aws"
slab.profile = "bench"
env.fast_bench = "TRUE"