Commit Graph

190 Commits

Author SHA1 Message Date
rudy
717c8c815f fix: ci macos test 2023-02-13 09:20:25 +01:00
rudy
117e15cc05 fix: build python package during build 2023-02-06 09:36:29 +01:00
rudy
a01fab0a90 feat: Makefile, python-package target 2023-02-06 09:36:29 +01:00
rudy
c6a44e9091 fix(ci): remove now-useless step to free space 2023-02-02 14:28:54 +01:00
David Testé
bc58e25d2a chore(ci): trigger prepare release workflow on version tag push
The CI no longer waits on other builds to trigger the release
preparation workflow. It's up to the team to make sure that builds
are passing before pushing a new version tag on the default branch.
In addition, build workflows will run only when there is a push on
the default branch. Nothing will happen anymore when a version tag
is pushed.
2023-01-16 17:21:18 +01:00
David Testé
2fe402f55e chore(ci): build docker images on aws ec2 to speed up process 2023-01-13 15:31:15 +01:00
David Testé
be45125ef8 chore(ci): remove steps related to keysetcache in aws builds 2023-01-13 14:58:16 +01:00
youben11
f6edcd28e9 ci: release linux python wheels with cuda support 2023-01-13 11:43:05 +01:00
David Testé
fd2ce968ea chore(ci): move doc publishing to aws build for cpu
This is done to handle downloading of documentation artifacts.
Doing this between separate workflows is troublesome, especially
when you have to wait on several of them.
2023-01-13 10:14:58 +01:00
youben11
9c370e1cec ci: fix macos release tarball 2023-01-12 17:56:53 +01:00
David Testé
bf127e0846 chore(ci): move docker images publishing to its own workflow
This removes the old continuous-integration.yml file. Thus it
finalizes the splitting operation.
2023-01-12 14:43:08 +01:00
David Testé
af265206a9 chore(ci): move macos build and release jobs to their own workflow 2023-01-12 14:43:08 +01:00
David Testé
8a41b39f5e chore(ci): wait on all aws builds before publishing documentation 2023-01-12 13:24:15 +01:00
David Testé
2aab1439b2 chore(ci): fix aws build triggering on pull request event
When a PR was opened, and thus for the first push of commits, AWS
builds weren't triggered. The workflow was only executed when commits
were pushed once again (synchronize event).
2023-01-12 10:52:00 +01:00
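A likely shape of this fix, sketched as a hypothetical excerpt (the actual workflow file isn't shown in this log): once a `types:` filter is present on `pull_request`, every wanted activity type must be listed explicitly, including `opened`.

```yaml
# Hypothetical trigger excerpt for the AWS build workflow.
# Without a `types:` filter, pull_request fires on opened/synchronize/reopened
# by default; with a filter, omitting `opened` skips the PR's first push.
on:
  pull_request:
    types: [opened, synchronize, reopened]
```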
David Testé
140330f412 chore(ci): move doc publishing to its own workflow
Now this workflow is only triggered on the default branch once the
AWS build for CPU is completed. If the AWS workflow's conclusion is
a success, then the doc is published.
2023-01-12 10:52:00 +01:00
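The pattern described above can be sketched as a minimal `workflow_run` trigger; the workflow name and job contents here are assumptions, not the repository's actual files:

```yaml
# Hypothetical sketch: publish docs only after the AWS CPU build workflow
# completes on the default branch, gated on that run's conclusion.
on:
  workflow_run:
    workflows: ["AWS build (CPU)"]   # assumed workflow name
    types: [completed]
    branches: [main]

jobs:
  publish-doc:
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "publish documentation here"  # placeholder step
```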
David Testé
3ede5642d8 chore(ci): separate cpu and gpu aws builds
This is done to be able to wait on the result of their runs without
triggering the workflows waiting on them (via the workflow_run event)
twice.
2023-01-12 10:52:00 +01:00
David Testé
f2dd9879b4 chore(ci): move doc publishing to its own workflow
This workflow depends on the execution of aws_build.yml when
executed on the default branch.
2023-01-12 10:52:00 +01:00
David Testé
d307d438c5 chore(ci): move block pr merge to its own workflow 2023-01-12 10:52:00 +01:00
David Testé
a0abd67455 chore(ci): trigger aws builds on push on main 2023-01-12 10:52:00 +01:00
David Testé
03fb3ca49b chore(ci): move format and linting jobs to their own workflow
This is done to declutter the main workflow continuous-integration.yml.
2023-01-12 10:52:00 +01:00
David Testé
94df9ee21d chore(ci): remove legacy linux build and test job 2023-01-11 17:39:05 +01:00
Quentin Bourgerie
ca83b129b9 chore(ci): fix gpu build on aws using docker 2023-01-11 17:39:05 +01:00
David Testé
be2a377aaf chore(ci): automatically trigger aws build on commit push in pr 2023-01-11 17:39:05 +01:00
David Testé
ea5000aecb chore(ci): fix build on aws ec2
Use the right make rule target for testing the GPU and CPU.
2023-01-11 17:39:05 +01:00
rudy
5797d73683 fix(ci): docker build, use --ssh instead of -v 2023-01-06 17:00:17 +01:00
rudy
7bcb3377fa fix(ci): docker build, no ssh forward for hpx and cuda 2023-01-06 17:00:17 +01:00
rudy
3c8c1819a8 fix(ci): rebuild docker image when workflow file is updated 2023-01-06 17:00:17 +01:00
rudy
04ea0ab148 fix: ci, build optimizer in docker 2022-12-22 20:08:07 +01:00
rudy
36dc249712 fix: disabling keysetcache until test on aws 2022-12-15 12:40:43 +01:00
rudy
8641b52782 fix: keysetcache name typo 2022-12-14 12:14:53 +01:00
rudy
b742ac35ae fix(ci): force crt for 9, 10, 11 for table lookup 2022-12-13 13:21:30 +01:00
Quentin Bourgerie
37c2627feb chore(ci): Regenerate the keyset cache 2022-12-12 10:38:44 +01:00
Quentin Bourgerie
b08ccbbfb5 fix(ci): Select cpu by default for benchmark on main 2022-12-09 13:23:00 +01:00
Quentin Bourgerie
07aa334d8d chore(benchmarks): Refactor the benchmark tools 2022-12-09 09:51:23 +01:00
youben11
ff4a0076a1 ci: fix release tarball process
Add an install target in the Makefile to copy necessary libs, bins,
and includes to an installation directory. Use this install target to
package deps into a tarball with new installation instructions.
2022-12-08 07:45:55 +01:00
Quentin Bourgerie
71b24b1255 chore(ci/bench): Generate gpu benchmarks 2022-12-07 21:32:01 +01:00
David Testé
2c48bdd9fe test(benchmark): execute on-demand benchmarks with gpu backend
When this type of benchmark is triggered, only the tests that
benefit from GPU acceleration are run, on a specific AWS EC2
instance. Note that this instance (p3.2xlarge) is not a bare-metal
one, so performance may vary due to the hypervisor controlling the
machine.
2022-12-07 21:32:01 +01:00
tmontaigu
188642b153 chore(ci): add rustfmt check 2022-12-07 13:55:02 +01:00
David Testé
17479097b4 chore(ci): build compiler from pull request comment with slab
Now compiler builds on AWS can be requested in a Pull Request
comment using '@slab-ci <build_command>'. This sets up the
environment to be able to trigger both CPU and GPU builds on AWS EC2.
2022-12-06 14:35:17 +01:00
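A comment-driven trigger like the one described above is typically built on the `issue_comment` event; this is a hypothetical sketch (job name and final step are assumptions), not the repository's actual workflow:

```yaml
# Hypothetical sketch: react to '@slab-ci <build_command>' PR comments.
# issue_comment fires on both issues and PRs, so the job filters on the
# comment body and on the issue actually being a pull request.
on:
  issue_comment:
    types: [created]

jobs:
  dispatch-aws-build:
    if: >-
      github.event.issue.pull_request &&
      startsWith(github.event.comment.body, '@slab-ci')
    runs-on: ubuntu-latest
    steps:
      - run: echo "forward ${{ github.event.comment.body }} to Slab"  # placeholder
```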
youben11
788c4c863d ci: provide cuda tools in test docker image
Build the cuda image and copy the cuda tools into the test image to
reduce its final size.
2022-12-06 11:29:11 +01:00
Quentin Bourgerie
ccb83cc9be chore(ci): Fix benchmark generation in CI 2022-12-06 09:30:50 +01:00
Quentin Bourgerie
8a557368f1 chore(ci): Stay on Ubuntu 20.04 for GitHub runners to fix the release process 2022-12-01 23:18:34 +01:00
Quentin Bourgerie
b171bc3c48 chore(ci): Specify the python executable for running tests 2022-11-30 14:41:40 +01:00
Luis Montero
30be8cf4ae fix: fix MacOS test launching
* Update makefile to use appropriate arguments to find executables on
  macos systems.
* Add `set -e` to make sure that the tests crash if something goes wrong
  in the build or test of the macos job of the CI.

closes #783
2022-11-25 09:48:45 +01:00
David Testé
1e8c0df381 chore(ci): change benchmark parser input name
The use of "schema" was incorrect since it's meant to be used as the
database name when sending data to Slab.
2022-11-23 16:22:17 +01:00
youben11
2877281aa6 chore(ci): re-enable rust tests in the CI
they were mostly removed by a rebase
2022-11-23 14:01:25 +01:00
rudy
454edbb538 fix(ci): KeySetCache pruning was broken 2022-11-22 17:28:40 +01:00
David Testé
3c2a75186f chore(ci): run ml benchmarks in a matrix with slab
This CI "feature" is meant to circumvent the 6 hours hard-limit
for a job in GitHub Action.
The benchmark is done using a matrix which is handled by Slab.
Here's the workflow:

  1. ML benchmarks are started in a fire and forget fashion via
     start_ml_benchmarks.yml
  2. Slab will read ci/slab.toml to get the AWS EC2 configuration
     and the matrix parameters
  3. Slab will launch at most max_parallel_jobs EC2 instances in
     parallel
  4. Each job will trigger ml_benchmark_subset.yml which will run
     only one of the generated YAML file via make generate-mlbench,
     based on the value of the matrix item they were given.
  5. As soon as a job is completed, the next one in the matrix
     will start promptly.

This is done until all the matrix items are exhausted.
2022-11-21 11:25:40 +01:00
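The per-subset workflow in the steps above might look roughly like this hypothetical sketch of ml_benchmark_subset.yml; the input name, runner labels, and `run-mlbench` make target are assumptions for illustration:

```yaml
# Hypothetical sketch: each Slab-launched EC2 instance dispatches this
# workflow with the matrix item it was assigned, and the job runs only
# the corresponding generated benchmark subset.
on:
  workflow_dispatch:
    inputs:
      matrix_item:
        description: "Index of the generated ML benchmark subset to run"
        required: true

jobs:
  run-subset:
    runs-on: [self-hosted, aws]
    steps:
      - uses: actions/checkout@v3
      - run: make generate-mlbench
      - run: make run-mlbench SUBSET=${{ inputs.matrix_item }}  # assumed target
```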
Mayeul@Zama
fa3556e8cc feat(CI): add cmake-format CI job 2022-11-18 15:15:41 +01:00
David Testé
77729514e0 chore(ci): parse benchmark results to send to postgres instance
Previously, we were sending parsed benchmark results to a
Prometheus instance. Due to its time-series nature, Prometheus would
downsample database content to avoid having too many data points
for a given range of time. While this behavior is good for a
continuous stream of data, like monitoring CPU load, it's not suited
for benchmarks. Indeed, benchmarks are discrete events that occur
once in a while (i.e. once a day). Downsampling would, at
some point, simply omit some of the benchmark results.
Using a regular SQL database like PostgreSQL solves this issue.
2022-11-15 16:18:25 +01:00