jfrery | 65be1b0818 | 2021-12-23 11:24:02 +01:00 | feat: use signed weights in compile_torch_model
Umut | 4e6c57766a | 2021-12-20 12:19:06 +03:00 | refactor: improve isclose used in quantization tests
Umut | 5aad8c50ac | 2021-12-06 16:44:32 +03:00 | test: create check_array_equality fixture
Arthur Meyre | a0c26315ea | 2021-12-03 17:43:11 +01:00 | chore: make check_is_good_execution a fixture and fix flaky tests using it (closes #1061)
Arthur Meyre | c7255dfd66 | 2021-12-03 10:32:04 +01:00 | feat: update compile_torch_model to return compiled quantized module (closes #898)
jfrery | bf70396340 | 2021-12-03 09:24:00 +01:00 | fix: increase the input data range to fix error when only one weight per layer
jfrery | 54c0bc6e87 | 2021-12-01 10:48:41 +01:00 | test: disable flaky tests
jfrery | 1625475897 | 2021-11-30 11:30:13 +01:00 | feat: end-to-end compilation of a torch model
jfrery | dfc762c2e2 | 2021-11-26 16:55:52 +01:00 | feat: add QuantizedReLU6 as a supported activation function
Benoit Chevallier-Mames | ec396effb2 | 2021-11-24 09:47:37 +01:00 | chore: seed torch as much as possible (closes #877)
jfrery | aa60d8ace6 | 2021-11-18 19:23:27 +01:00 | fix: test_quantized_layer
jfrery | a5b1d6232e | 2021-11-18 17:21:28 +01:00 | feat: add signed integers quantization
Arthur Meyre | 507ccd05c5 | 2021-11-18 10:31:45 +01:00 | feat: static post training quantization and quantization module
jfrery | c978107124 | 2021-11-17 14:54:16 +01:00 | feat: remove transpose from layers
jfrery | c5952cd09f | 2021-11-10 18:25:31 +01:00 | feat: add quantization utilities