mirror of
https://github.com/zama-ai/concrete.git
synced 2026-04-17 03:00:54 -04:00
refactor: remove ml related functionality, small bug fixes
@@ -53,22 +53,6 @@ Here is the visual representation of the pipeline:

## Overview of the torch compilation process
Compiling a torch Module is straightforward.
The torch Module is first converted to a Numpy equivalent, called `NumpyModule`, provided that all the layers in the torch Module are supported.
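To illustrate the idea, here is a minimal sketch of what such a numpy equivalent looks like for a `Linear` layer followed by a `ReLU`. The class name and structure are illustrative assumptions, not the library's actual `NumpyModule` implementation:

```python
import numpy as np

# Hypothetical sketch of a NumpyModule-like wrapper: the torch Module's
# parameters are extracted as numpy arrays and each supported layer is
# replaced by its numpy equivalent.
class TinyNumpyModule:
    def __init__(self, weight, bias):
        # Parameters extracted from the torch Module.
        self.weight = np.asarray(weight)
        self.bias = np.asarray(bias)

    def __call__(self, x):
        # torch.nn.Linear computes x @ W.T + b; ReLU is np.maximum(., 0).
        return np.maximum(x @ self.weight.T + self.bias, 0.0)

module = TinyNumpyModule(weight=[[1.0, -1.0]], bias=[0.5])
out = module(np.array([[2.0, 1.0]]))  # (2*1 + 1*(-1)) + 0.5 = 1.5
```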
Then the module is quantized post-training to be compatible with our compiler, which only works on integers. The post-training quantization uses the provided dataset for calibration.
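The core idea of calibration can be sketched as follows: the calibration data's observed range determines an affine quantizer (a scale and a zero point) that maps floats onto n-bit integers. This is only an illustration of the general scheme; the library's actual quantization code may differ:

```python
import numpy as np

# Derive an n-bit affine quantizer from the range seen in calibration data.
def compute_quantization_params(calibration_data, n_bits=7):
    vmin = float(np.min(calibration_data))
    vmax = float(np.max(calibration_data))
    # Map [vmin, vmax] onto the integer range [0, 2**n_bits - 1].
    scale = (vmax - vmin) / (2**n_bits - 1)
    zero_point = round(-vmin / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, n_bits=7):
    q = np.round(values / scale) + zero_point
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)

calib = np.array([-1.0, 0.0, 2.0])
scale, zp = compute_quantization_params(calib)
q = quantize(calib, scale, zp)  # integers in [0, 127]
```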
The dataset is then quantized so that it can be used for compilation with the `QuantizedModule`.
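Concretely, once the quantizer is calibrated, the same affine mapping is applied to every sample of the inputset so that compilation only ever sees integers. The helper below is an illustrative assumption, not the library's API:

```python
import numpy as np

# Apply a calibrated affine quantizer to each sample of the inputset.
def quantize_inputset(sample, scale, zero_point, n_bits=7):
    q = np.round(np.asarray(sample) / scale) + zero_point
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)

inputset = [np.array([0.0, 1.0]), np.array([2.0, 3.0])]
scale, zero_point = 3.0 / 127, 0  # e.g. calibrated on data in [0, 3]
q_inputset = [quantize_inputset(x, scale, zero_point) for x in inputset]
```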
The `QuantizedModule` is compiled, yielding an executable `FHECircuit`.
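The reason this works is that, after quantization, the whole forward pass is expressible over integers, which is what the compiler turns into an FHE circuit. The snippet below only simulates that integer computation in plain numpy; no actual FHE or compilation is involved:

```python
import numpy as np

# Simulate the integer-only computation that the compiled circuit runs:
# an integer matrix multiply between quantized input and quantized weight.
def integer_forward(q_x, q_w):
    return q_x @ q_w.T

q_x = np.array([[3, 1]], dtype=np.int64)  # quantized input
q_w = np.array([[2, 5]], dtype=np.int64)  # quantized weight
acc = integer_forward(q_x, q_w)           # 3*2 + 1*5 = 11
```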
Here is the visual representation of the different steps:

## Tracing
Given a Python function `f` such as this one,
@@ -45,13 +45,3 @@ In this section, we will discuss the module structure of **concrete-numpy** briefly
- np_inputset_helpers: utilities for inputsets
- np_mlir_converter: utilities for MLIR conversion
- tracing: tracing of numpy functions
- quantization: tools to quantize networks
- post_training: post-training quantization
- quantized_activations: management of quantization in activations
- quantized_array: utilities for quantization
- quantized_layers: management of quantization of neural network layers
- quantized_module: main API for quantization
- torch: torch compilation and conversion
- compile: compilation of a torch module, including quantization
- numpy_module: conversion tools to turn a torch module into a numpy function