feat: update/rename docs and advanced examples

jfrery authored on 2022-01-06 13:20:43 +01:00; committed by Jordan Fréry
parent d46e753b0c
commit 226feec93b
9 changed files with 12 additions and 12 deletions

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Fully Connected Neural Network on Iris Dataset\n",
"# Fully Connected Neural Network\n",
"\n",
"In this example, we show how one can train a neural network on a specific task (here, Iris Classification) and use Concrete Numpy to make the model work in FHE settings."
]
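For context, the notebook touched above trains a small network on Iris before compiling it with Concrete Numpy. A minimal sketch of that training step, assuming a standard PyTorch/scikit-learn setup rather than the notebook's exact code:

```python
# Hedged sketch: a tiny fully connected classifier for Iris; the FHE
# compilation step with Concrete Numpy is specific to the notebook and
# is not shown here.
import torch
import torch.nn as nn
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):  # short training loop, enough for this toy dataset
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```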

View File

@@ -5,7 +5,7 @@
"id": "b760a0f6",
"metadata": {},
"source": [
"# Quantized Linear Regression\n",
"# Linear Regression\n",
"\n",
"Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
]
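The 7-bit limitation mentioned in this notebook is worked around with affine quantization. A self-contained sketch of the idea in plain NumPy (illustrative, not the Concrete Numpy API):

```python
# Hedged sketch: map floats into the unsigned 7-bit range [0, 127] and back.
import numpy as np

def quantize(values, n_bits=7):
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / (2**n_bits - 1)  # step between integer levels
    q = np.round((values - vmin) / scale).astype(np.uint8)
    return q, scale, vmin

def dequantize(q, scale, vmin):
    return q * scale + vmin

weights = np.array([0.7, -1.3, 2.1])
q, scale, vmin = quantize(weights)
print(q, dequantize(q, scale, vmin))  # 7-bit integers, then floats close to the originals
```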

View File

@@ -5,7 +5,7 @@
"id": "9b835b74",
"metadata": {},
"source": [
"# Quantized Logistic Regression\n",
"# Logistic Regression\n",
"\n",
"Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a logistic regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
]
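For the logistic model, the non-linearity also has to run on quantized values; in FHE this is typically done with a table precomputed over all possible 7-bit inputs. A hedged sketch of that idea, with assumed quantization parameters (not the library's implementation):

```python
# Hedged sketch: evaluate a sigmoid on a 7-bit quantized accumulator via a
# precomputed lookup table; scale and zero_point are assumed values.
import numpy as np

n_bits = 7
levels = np.arange(2**n_bits)                 # every possible 7-bit value
scale, zero_point = 0.05, 64                  # assumed quantization parameters
real_values = (levels - zero_point) * scale   # dequantized accumulator values
sigmoid_table = 1 / (1 + np.exp(-real_values))

q_acc = 90                                    # some quantized accumulator value
print(sigmoid_table[q_acc])                   # sigmoid applied by table lookup
```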

View File

@@ -5,7 +5,7 @@
"id": "b760a0f6",
"metadata": {},
"source": [
"# Generalized Linear Model : Poisson Regression\n",
"# Poisson Regression\n",
"\n",
"This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using Concrete Numpy. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
]
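The "strong quantization" this notebook mentions is what keeps the dot product of the linear part inside the 7-bit accumulator. A small worked check, with assumed bit widths:

```python
# Hedged sketch: worst-case accumulator size for a linear layer with
# assumed 2-bit weights, 2-bit inputs and 4 features.
import numpy as np

n_bits_weights, n_bits_inputs, n_features = 2, 2, 4
worst_case = n_features * (2**n_bits_weights - 1) * (2**n_bits_inputs - 1)
bits_needed = int(np.ceil(np.log2(worst_case + 1)))
print(bits_needed, bits_needed <= 7)  # 6 bits needed -> fits the 7-bit accumulator
```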

View File

@@ -4,8 +4,8 @@ Advanced examples
 .. toctree::
   :maxdepth: 1

-  IrisFHE.ipynb
-  QuantizedLinearRegression.ipynb
-  QuantizedLogisticRegression.ipynb
-  QuantizedGeneralizedLinearModel.ipynb
-  FHEDecisionTreeClassifier.ipynb
+  FullyConnectedNeuralNetwork.ipynb
+  LinearRegression.ipynb
+  LogisticRegression.ipynb
+  PoissonRegression.ipynb
+  DecisionTreeClassifier.ipynb

View File

@@ -64,4 +64,4 @@ clear_output = quantized_numpy_module.dequantize_output(
 )
 ```

-If you want to see more compilation examples, you can check out the [IrisFHE notebook](../advanced_examples/IrisFHE.ipynb)
+If you want to see more compilation examples, you can check out the [Fully Connected Neural Network](../advanced_examples/FullyConnectedNeuralNetwork.ipynb)
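The snippet in the hunk above dequantizes the module's output. A self-contained mock of the full quantize → integer inference → dequantize round trip; the `quantize_input` and `forward` names are assumptions modeled on the `dequantize_output` call shown in the diff, not necessarily the real Concrete Numpy API:

```python
# Hedged mock of the round trip around dequantize_output; method names other
# than dequantize_output are assumptions, and the arithmetic is illustrative.
import numpy as np

class MockQuantizedModule:
    def __init__(self, scale_in=0.1, scale_out=0.02):
        self.scale_in, self.scale_out = scale_in, scale_out

    def quantize_input(self, x):            # floats -> integers
        return np.round(x / self.scale_in).astype(np.int64)

    def forward(self, q_x):                 # stand-in for integer-only layers
        return q_x.sum(axis=1)

    def dequantize_output(self, q_out):     # integers -> floats
        return q_out * self.scale_out

quantized_numpy_module = MockQuantizedModule()
q_out = quantized_numpy_module.forward(
    quantized_numpy_module.quantize_input(np.array([[0.5, 1.2]]))
)
clear_output = quantized_numpy_module.dequantize_output(q_out)
print(clear_output)
```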

View File

@@ -69,7 +69,7 @@ Binarizing here is an extreme case of quantization which is introduced [here](..
 Quantization and binarization increase inference speed, reduce model byte-size, and are required to run computation in FHE. However, quantization and, especially, binarization induce a loss in the accuracy of the model since its representation power is diminished. Choosing quantization parameters carefully can alleviate the accuracy loss while still allowing compilation to FHE.

-This is illustrated in both advanced examples [Quantized Linear Regression](../advanced_examples/QuantizedLinearRegression.ipynb) and [Quantized Logistic Regression](../advanced_examples/QuantizedLogisticRegression.ipynb).
+This is illustrated in both advanced examples [Linear Regression](../advanced_examples/LinearRegression.ipynb) and [Logistic Regression](../advanced_examples/LogisticRegression.ipynb).

 The end result has a granularity/imprecision linked to the data types used and, for the Quantized Logistic Regression, to the lattice used to evaluate the logistic model.
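The accuracy/precision trade-off described in this hunk can be made concrete by measuring quantization error at different bit widths. A hedged sketch with synthetic data:

```python
# Hedged sketch: mean quantization error shrinks as the bit width grows,
# which is the accuracy trade-off described above (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(size=1000)
vmin = values.min()

for n_bits in (1, 3, 5, 7):
    scale = (values.max() - vmin) / (2**n_bits - 1)
    restored = np.round((values - vmin) / scale) * scale + vmin
    print(n_bits, np.abs(values - restored).mean())  # error drops with more bits
```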

View File

@@ -148,7 +148,7 @@ It is now possible to compile the `quantized_numpy_module`. Details on how to co
 - QuantizedReLU6, the quantized version of `nn.ReLU6`

-A well detailed example is available for a [QuantizedLinearRegression](../advanced_examples/QuantizedLinearRegression.ipynb).
+A well detailed example is available for a [Linear Regression](../advanced_examples/LinearRegression.ipynb).

 ## Future releases
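Among the quantized layers listed in the hunk above, QuantizedReLU6 is easy to picture: dequantize, clamp to [0, 6], re-quantize. A hedged sketch with assumed parameters, not the library's implementation:

```python
# Hedged sketch of the QuantizedReLU6 behaviour: dequantize, apply the
# nn.ReLU6 clamp to [0, 6], then re-quantize; scale/zero_point are assumed.
import numpy as np

def quantized_relu6(q_x, scale, zero_point):
    x = (q_x.astype(np.float64) - zero_point) * scale          # dequantize
    x = np.clip(x, 0.0, 6.0)                                   # ReLU6 clamp
    return np.round(x / scale + zero_point).astype(np.uint8)   # re-quantize

q = np.array([0, 30, 64, 100, 127], dtype=np.uint8)
print(quantized_relu6(q, scale=0.1, zero_point=64))  # -> [64 64 64 100 124]
```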