diff --git a/docs/user/advanced_examples/FHEDecisionTreeClassifier.ipynb b/docs/user/advanced_examples/DecisionTreeClassifier.ipynb
similarity index 100%
rename from docs/user/advanced_examples/FHEDecisionTreeClassifier.ipynb
rename to docs/user/advanced_examples/DecisionTreeClassifier.ipynb
diff --git a/docs/user/advanced_examples/IrisFHE.ipynb b/docs/user/advanced_examples/FullyConnectedNeuralNetwork.ipynb
similarity index 99%
rename from docs/user/advanced_examples/IrisFHE.ipynb
rename to docs/user/advanced_examples/FullyConnectedNeuralNetwork.ipynb
index 646596080..10aff6eee 100644
--- a/docs/user/advanced_examples/IrisFHE.ipynb
+++ b/docs/user/advanced_examples/FullyConnectedNeuralNetwork.ipynb
@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Fully Connected Neural Network on Iris Dataset\n",
+    "# Fully Connected Neural Network\n",
     "\n",
     "In this example, we show how one can train a neural network on a specific task (here, Iris Classification) and use Concrete Numpy to make the model work in FHE settings."
    ]
diff --git a/docs/user/advanced_examples/QuantizedLinearRegression.ipynb b/docs/user/advanced_examples/LinearRegression.ipynb
similarity index 99%
rename from docs/user/advanced_examples/QuantizedLinearRegression.ipynb
rename to docs/user/advanced_examples/LinearRegression.ipynb
index 7e6488b67..7cc740521 100644
--- a/docs/user/advanced_examples/QuantizedLinearRegression.ipynb
+++ b/docs/user/advanced_examples/LinearRegression.ipynb
@@ -5,7 +5,7 @@
    "id": "b760a0f6",
    "metadata": {},
    "source": [
-    "# Quantized Linear Regression\n",
+    "# Linear Regression\n",
     "\n",
     "Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
    ]
diff --git a/docs/user/advanced_examples/QuantizedLogisticRegression.ipynb b/docs/user/advanced_examples/LogisticRegression.ipynb
similarity index 99%
rename from docs/user/advanced_examples/QuantizedLogisticRegression.ipynb
rename to docs/user/advanced_examples/LogisticRegression.ipynb
index b324b7e2d..2afd188d3 100644
--- a/docs/user/advanced_examples/QuantizedLogisticRegression.ipynb
+++ b/docs/user/advanced_examples/LogisticRegression.ipynb
@@ -5,7 +5,7 @@
    "id": "9b835b74",
    "metadata": {},
    "source": [
-    "# Quantized Logistic Regression\n",
+    "# Logistic Regression\n",
     "\n",
     "Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a logistic regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
    ]
diff --git a/docs/user/advanced_examples/QuantizedGeneralizedLinearModel.ipynb b/docs/user/advanced_examples/PoissonRegression.ipynb
similarity index 99%
rename from docs/user/advanced_examples/QuantizedGeneralizedLinearModel.ipynb
rename to docs/user/advanced_examples/PoissonRegression.ipynb
index ce9d7d914..3bcb14680 100644
--- a/docs/user/advanced_examples/QuantizedGeneralizedLinearModel.ipynb
+++ b/docs/user/advanced_examples/PoissonRegression.ipynb
@@ -5,7 +5,7 @@
    "id": "b760a0f6",
    "metadata": {},
    "source": [
-    "# Generalized Linear Model : Poisson Regression\n",
+    "# Poisson Regression\n",
     "\n",
     "This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using Concrete Numpy. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
    ]
diff --git a/docs/user/advanced_examples/index.rst b/docs/user/advanced_examples/index.rst
index 170b7e9cc..08db99740 100644
--- a/docs/user/advanced_examples/index.rst
+++ b/docs/user/advanced_examples/index.rst
@@ -4,8 +4,8 @@ Advanced examples
 .. toctree::
    :maxdepth: 1
 
-   IrisFHE.ipynb
-   QuantizedLinearRegression.ipynb
-   QuantizedLogisticRegression.ipynb
-   QuantizedGeneralizedLinearModel.ipynb
-   FHEDecisionTreeClassifier.ipynb
+   FullyConnectedNeuralNetwork.ipynb
+   LinearRegression.ipynb
+   LogisticRegression.ipynb
+   PoissonRegression.ipynb
+   DecisionTreeClassifier.ipynb
diff --git a/docs/user/howto/compiling_torch_model.md b/docs/user/howto/compiling_torch_model.md
index c48f04cfb..995c73843 100644
--- a/docs/user/howto/compiling_torch_model.md
+++ b/docs/user/howto/compiling_torch_model.md
@@ -64,4 +64,4 @@ clear_output = quantized_numpy_module.dequantize_output(
 )
 ```
 
-If you want to see more compilation examples, you can check out the [IrisFHE notebook](../advanced_examples/IrisFHE.ipynb)
+If you want to see more compilation examples, you can check out the [Fully Connected Neural Network](../advanced_examples/FullyConnectedNeuralNetwork.ipynb)
diff --git a/docs/user/howto/reduce_needed_precision.md b/docs/user/howto/reduce_needed_precision.md
index b0ba69ccf..1a9bce372 100644
--- a/docs/user/howto/reduce_needed_precision.md
+++ b/docs/user/howto/reduce_needed_precision.md
@@ -69,7 +69,7 @@ Binarizing here is an extreme case of quantization which is introduced [here](..
 
 Quantization and binarization increase inference speed, reduce model byte-size and are required to run computation in FHE. However, quantization and, especially, binarization, induce a loss in the accuracy of the model since it's representation power is diminished. Choosing quantization parameters carefully can alleviate the accuracy loss all the while allowing compilation to FHE.
 
-This is illustrated in both advanced examples [Quantized Linear Regression](../advanced_examples/QuantizedLinearRegression.ipynb) and [Quantized Logistic Regression](../advanced_examples/QuantizedLogisticRegression.ipynb).
+This is illustrated in both advanced examples [Linear Regression](../advanced_examples/LinearRegression.ipynb) and [Logistic Regression](../advanced_examples/LogisticRegression.ipynb).
 
 The end result has a granularity/imprecision linked to the data types used and for the Quantized Logistic Regression to the lattice used to evaluate the logistic model.
 
diff --git a/docs/user/howto/use_quantization.md b/docs/user/howto/use_quantization.md
index aa45f5744..6275cb23f 100644
--- a/docs/user/howto/use_quantization.md
+++ b/docs/user/howto/use_quantization.md
@@ -148,7 +148,7 @@ It is now possible to compile the `quantized_numpy_module`. Details on how to co
 
 - QuantizedReLU6, the quantized version of `nn.ReLU6`
 
-A well detailed example is available for a [QuantizedLinearRegression](../advanced_examples/QuantizedLinearRegression.ipynb).
+A well detailed example is available for a [Linear Regression](../advanced_examples/LinearRegression.ipynb).
 
 ## Future releases
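The notebooks and how-tos touched by this patch all lean on the same trick: affine quantization maps floating-point model values into the unsigned 7-bit integer range that Concrete supports. As background for reviewers, here is a minimal sketch of that idea in plain NumPy — it is not Concrete Numpy's actual API, and the helper names `quantize`/`dequantize` are hypothetical, chosen only for illustration:

```python
# Illustrative sketch only: plain NumPy, NOT Concrete Numpy's API.
# `quantize` and `dequantize` are hypothetical helper names.
import numpy as np

def quantize(values, n_bits=7):
    """Map floats onto unsigned n_bits integers (here: [0, 127])."""
    v_min, v_max = float(values.min()), float(values.max())
    scale = (v_max - v_min) / (2**n_bits - 1)
    quantized = np.round((values - v_min) / scale).astype(np.uint8)
    return quantized, scale, v_min

def dequantize(quantized, scale, v_min):
    """Recover an approximation of the original floats."""
    return quantized.astype(np.float64) * scale + v_min

weights = np.array([-1.5, 0.0, 0.25, 3.2])
q, scale, v_min = quantize(weights)
print(q)                            # integers in [0, 127], FHE-friendly
print(dequantize(q, scale, v_min))  # close to the original weights
```

The round trip loses a little precision — exactly the accuracy/precision trade-off the renamed LinearRegression and LogisticRegression notebooks explore.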