refactor: remove ml related functionality, small bug fixes

Umut
2022-01-19 16:24:37 +03:00
parent 442d411722
commit f88e0dfc89
53 changed files with 187 additions and 5791 deletions

@@ -53,22 +53,6 @@ Here is the visual representation of the pipeline:
![Frontend Flow](../../_static/compilation-pipeline/frontend_flow.svg)
## Overview of the torch compilation process
Compiling a torch Module is straightforward.
If all of its layers are supported, the torch Module is first converted to a Numpy equivalent we call `NumpyModule`.
The module is then quantized post-training to be compatible with our compiler, which only works on integers. Post-training quantization uses the provided dataset for calibration.
The dataset is then quantized so it can be used for compilation with the QuantizedModule.
The QuantizedModule is compiled, yielding an executable FHECircuit.
Here is the visual representation of the different steps:
![Torch compilation flow](../../_static/compilation-pipeline/torch_to_numpy_flow.svg)
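The calibration step above can be illustrated with a minimal sketch of range-based post-training quantization. This is not concrete-numpy's actual API; the helper names (`compute_qparams`, `quantize`, `dequantize`) are hypothetical, and it only shows the core idea of mapping floats onto the integer range the compiler operates on:

```python
import numpy as np

def compute_qparams(calibration_data, n_bits=7):
    """Derive a scale and zero-point from the calibration dataset's
    observed range, mapping floats onto [0, 2**n_bits - 1]."""
    vmin, vmax = float(calibration_data.min()), float(calibration_data.max())
    scale = (vmax - vmin) / (2**n_bits - 1)
    zero_point = int(round(-vmin / scale))
    return scale, zero_point

def quantize(values, scale, zero_point, n_bits=7):
    """Map float values to the integers the compiler works on."""
    q = np.round(np.asarray(values) / scale + zero_point)
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)

def dequantize(qvalues, scale, zero_point):
    """Recover approximate float values from the integers."""
    return (np.asarray(qvalues) - zero_point) * scale
```

The same scale and zero-point computed from the calibration set are then reused to quantize inputs at inference time, which is why the provided dataset should be representative of real data.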
## Tracing
Given a Python function `f` such as this one,

@@ -45,13 +45,3 @@ In this section, we will discuss the module structure of **concrete-numpy** brie
- np_inputset_helpers: utilities for inputsets
- np_mlir_converter: utilities for MLIR conversion
- tracing: tracing of numpy functions
- quantization: tools to quantize networks
- post_training: post training quantization
- quantized_activations: management of quantization in activations
- quantized_array: utilities for quantization
- quantized_layers: management of quantization of neural network layers
- quantized_module: main API for quantization
- torch: torch compilation and conversion
- compile: compilation of a torch module, including quantization
- numpy_module: conversion tools to turn a torch module into a numpy function
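The conversion performed by `numpy_module` can be pictured as re-expressing a torch layer's forward pass with plain numpy operations, using the weights extracted from the original module. A hypothetical sketch (class name invented for illustration; torch itself omitted):

```python
import numpy as np

class NumpyLinearReLU:
    """Hypothetical sketch of a converted layer: a torch Linear
    followed by a ReLU, re-implemented with plain numpy ops on
    weights extracted from the original torch Module."""

    def __init__(self, weight, bias):
        self.weight = np.asarray(weight, dtype=np.float64)
        self.bias = np.asarray(bias, dtype=np.float64)

    def __call__(self, x):
        # Same math as torch.nn.Linear: x @ W.T + b, then ReLU.
        return np.maximum(np.asarray(x) @ self.weight.T + self.bias, 0.0)
```

A module converted this way is a pure numpy function, which is what makes it amenable to tracing and quantization by the rest of the pipeline.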