chore: update the doc to have clearer uses of Concrete Numpy etc

refs #1288
This commit is contained in:
Benoit Chevallier-Mames
2022-01-05 18:05:04 +01:00
committed by Benoit Chevallier
parent a835d25e15
commit 721bc06eb7
18 changed files with 47 additions and 53 deletions


@@ -11,11 +11,11 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Trees are a popular class of algorithm in Machine Learning. In this notebook we build a simple Decision Tree Classifier using `scikit-learn` to show that they can be executed homomorphically using the Concrete Numpy.\n",
+"Trees are a popular class of algorithm in Machine Learning. In this notebook we build a simple Decision Tree Classifier using `scikit-learn` to show that they can be executed homomorphically using Concrete Numpy.\n",
 "\n",
 "State of the art classifiers are generally a bit more complex than a single decision tree, here we wanted to demonstrate FHE decision trees so results may not compete with the best models out there!\n",
 "\n",
-"Converting a tree working over quantized data to its FHE equivalent takes only a few lines of code thanks to the Concrete Numpy.\n",
+"Converting a tree working over quantized data to its FHE equivalent takes only a few lines of code thanks to Concrete Numpy.\n",
 "\n",
 "Let's dive in!"
 ]


@@ -7,7 +7,7 @@
 "source": [
 "# Generalized Linear Model : Poisson Regression\n",
 "\n",
-"This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using the Concrete Numpy. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
+"This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using Concrete Numpy. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
 ]
 },
 {