chore: rename package

Arthur Meyre
2022-01-05 12:40:01 +01:00
parent c7b9380b4c
commit e2fc523596
50 changed files with 157 additions and 157 deletions

@@ -11,11 +11,11 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Trees are a popular class of algorithm in Machine Learning. In this notebook we build a simple Decision Tree Classifier using `scikit-learn` to show that they can be executed homomorphically using the Concrete Framework.\n",
+"Trees are a popular class of algorithm in Machine Learning. In this notebook we build a simple Decision Tree Classifier using `scikit-learn` to show that they can be executed homomorphically using the Concrete Numpy.\n",
 "\n",
 "State of the art classifiers are generally a bit more complex than a single decision tree, here we wanted to demonstrate FHE decision trees so results may not compete with the best models out there!\n",
 "\n",
-"Converting a tree working over quantized data to its FHE equivalent takes only a few lines of code thanks to the Concrete Framework.\n",
+"Converting a tree working over quantized data to its FHE equivalent takes only a few lines of code thanks to the Concrete Numpy.\n",
 "\n",
 "Let's dive in!"
 ]
@@ -138,7 +138,7 @@
 "source": [
 "We now quantize the features to train the tree directly on quantized data, this will make the trained tree FHE friendly by default which is a nice bonus, as well as allowing to see how both trees compare to each other.\n",
 "\n",
-"The choice here is to compute the quantization parameters over the training set. We use 6 bits for each feature individually as the Concrete Framework precision for PBSes is better for 6 bits of precision."
+"The choice here is to compute the quantization parameters over the training set. We use 6 bits for each feature individually as the Concrete Numpy precision for PBSes is better for 6 bits of precision."
 ]
 },
 {
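The per-feature 6-bit quantization described in this cell can be sketched in plain NumPy (function and variable names here are illustrative, not the notebook's own):

```python
import numpy as np

def compute_quantization_params(x_train, n_bits=6):
    # Parameters are computed over the training set only, so unseen
    # data is clipped into the same integer grid.
    x_min = x_train.min(axis=0)
    scale = (x_train.max(axis=0) - x_min) / (2**n_bits - 1)
    return x_min, scale

def quantize(x, x_min, scale, n_bits=6):
    # Each feature is quantized independently to integers in [0, 2**n_bits - 1].
    q = np.round((x - x_min) / scale)
    return np.clip(q, 0, 2**n_bits - 1).astype(np.int64)

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(100, 4))
x_min, scale = compute_quantization_params(x_train)
q_train = quantize(x_train, x_min, scale)
```

A tree trained on `q_train` then only ever sees integers in [0, 63], which is what makes it FHE friendly by default.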
@@ -252,7 +252,7 @@
 "source": [
 "Before we can do that we need to convert the tree to a form that is easy to run homomorphically.\n",
 "\n",
-"The Hummingbird paper from Microsoft (https://scnakandala.github.io/papers/TR_2020_Hummingbird.pdf and https://github.com/microsoft/hummingbird) gives a method to convert tree evaluation to tensor operations which we support in Concrete Framework.\n",
+"The Hummingbird paper from Microsoft (https://scnakandala.github.io/papers/TR_2020_Hummingbird.pdf and https://github.com/microsoft/hummingbird) gives a method to convert tree evaluation to tensor operations which we support in Concrete Numpy.\n",
 "\n",
 "The next few cells implement the functions necessary for the conversion. They are not optimized well so that they remain readable.\n"
 ]
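The Hummingbird GEMM strategy mentioned here can be illustrated on a tiny hand-built tree (two internal nodes, three leaves). The matrices A-E follow the paper's construction; the tree itself is made up for the example:

```python
import numpy as np

# Tree: node n0 tests x0 < 0.5; its left child is leaf L0 (class 0).
# Its right child n1 tests x1 < 0.5: left leaf L1 (class 1), right leaf L2 (class 0).
A = np.array([[1, 0],        # feature -> internal node incidence
              [0, 1]])
B = np.array([0.5, 0.5])     # per-node thresholds
C = np.array([[1, -1, -1],   # +1 if the leaf lies in the node's left subtree,
              [0,  1, -1]])  # -1 if in its right subtree, 0 otherwise
D = np.array([1, 1, 0])      # number of "left" decisions on the path to each leaf
E = np.array([[1, 0],        # leaf -> class map
              [0, 1],
              [1, 0]])

def predict(X):
    T = (X @ A < B).astype(np.int64)        # which node conditions hold
    leaves = (T @ C == D).astype(np.int64)  # one-hot vector of the reached leaf
    return np.argmax(leaves @ E, axis=1)

X = np.array([[0.3, 0.9], [0.9, 0.3], [0.9, 0.9]])
print(predict(X))  # -> [0 1 0]
```

Every step is a matrix multiplication or an elementwise comparison, which is exactly the form a tensor-based FHE compiler can consume.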
@@ -507,7 +507,7 @@
 "source": [
 "We now have a tensor equivalent of our `DecisionTreeClassifier`, pretty neat isn't it?\n",
 "\n",
-"Last step is compiling the tensor equivalent to FHE using the Concrete Framework and it's nearly as easy as 1, 2, 3.\n",
+"Last step is compiling the tensor equivalent to FHE using the Concrete Numpy and it's nearly as easy as 1, 2, 3.\n",
 "\n",
 "We use the training input data as well as some synthetic data to calibrate the circuit during compilation."
 ]
@@ -616,7 +616,7 @@
 "source": [
 "In this notebook we showed how to quantize a dataset to train a tree directly on integer data so that it's FHE friendly. We saw that despite quantization and its smaller depth the quantized tree classification capabilities were close to a tree trained on the original real-valued dataset.\n",
 "\n",
-"We then used the Hummingbird paper's algorithm to transform a tree evaluation to a few tensor operations which can be compiled by the Concrete Framework to an FHE circuit.\n",
+"We then used the Hummingbird paper's algorithm to transform a tree evaluation to a few tensor operations which can be compiled by the Concrete Numpy to an FHE circuit.\n",
 "\n",
 "Finally we ran the compiled circuit on a few samples (because inference times are a bit high) to show that clear and FHE computations were the same."
 ]

@@ -6,7 +6,7 @@
 "source": [
 "# Fully Connected Neural Network on Iris Dataset\n",
 "\n",
-"In this example, we show how one can train a neural network on a specific task (here, Iris Classification) and use Concrete Framework to make the model work in FHE settings."
+"In this example, we show how one can train a neural network on a specific task (here, Iris Classification) and use Concrete Numpy to make the model work in FHE settings."
 ]
 },
 {
@@ -382,7 +382,7 @@
 "\n",
 "In this notebook, we presented a few steps to have a model (torch neural network) inference in over homomorphically encrypted data: \n",
 "- We first trained a fully connected neural network yielding ~100% accuracy\n",
-"- Then, we quantized it using Concrete Framework. As we can see, the extreme post training quantization (only 3 bits of precision for weights, inputs and activations) made the neural network accuracy slightly drop (~97%).\n",
+"- Then, we quantized it using Concrete Numpy. As we can see, the extreme post training quantization (only 3 bits of precision for weights, inputs and activations) made the neural network accuracy slightly drop (~97%).\n",
 "- We then used the compiled inference into its FHE equivalent to get our FHE predictions over the test set\n",
 "\n",
 "The Homomorphic inference achieves a similar accuracy as the quantized model inference.\n",
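The extreme 3-bit post-training quantization mentioned in this summary can be sketched in NumPy. A symmetric per-tensor scheme is assumed here for illustration; the notebook's actual scheme may differ:

```python
import numpy as np

def quantize_symmetric(w, n_bits=3):
    # Signed symmetric quantization: integers in [-(2**(n-1)-1), 2**(n-1)-1],
    # i.e. [-3, 3] for 3 bits, with a single scale per tensor.
    max_q = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / max_q
    q = np.clip(np.round(w / scale), -max_q, max_q).astype(np.int64)
    return q, scale

rng = np.random.default_rng(42)
w = rng.normal(size=(4, 8))        # stand-in for a trained weight matrix
q, scale = quantize_symmetric(w)
w_hat = q * scale                  # dequantized weights; the gap to w is
                                   # what causes the small accuracy drop
```

The rounding error per weight is bounded by half a quantization step (`scale / 2`), which is why the accuracy only drops slightly rather than collapsing.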

@@ -7,7 +7,7 @@
 "source": [
 "# Generalized Linear Model : Poisson Regression\n",
 "\n",
-"This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using the Concrete Framework. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
+"This tutorial shows how to train several Generalized Linear Models (GLM) with scikit-learn, quantize them and run them in FHE using the Concrete Numpy. We make use of strong quantization to insure the accumulator of the linear part does not overflow when computing in FHE (7-bit accumulator). We show that conversion to FHE does not degrade performance with respect to the quantized model working on values in the clear."
 ]
 },
 {
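The 7-bit accumulator constraint mentioned here can be checked with back-of-the-envelope arithmetic: with unsigned q-bit inputs and weights, the worst-case dot product of the linear part must stay below 2**7. The function below is a made-up sketch of that check, not Concrete Numpy's actual analysis:

```python
import math

def accumulator_bits(n_features, input_bits, weight_bits):
    # Worst case for an all-positive dot product: every input and every
    # weight takes its maximum value for the given bit-width.
    acc_max = n_features * (2**input_bits - 1) * (2**weight_bits - 1)
    return math.ceil(math.log2(acc_max + 1))

# 2 features with 2-bit inputs and 2-bit weights: 2 * 3 * 3 = 18 -> 5 bits,
# which fits a 7-bit accumulator; 8 features at 3 bits each would need 9 bits
# and overflow, which is why such strong quantization is required here.
print(accumulator_bits(2, 2, 2))  # -> 5
```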
@@ -17,7 +17,7 @@
 "source": [
 "### Import libraries\n",
 "\n",
-"We import scikit-learn libraries and Concrete framework quantization tools:"
+"We import scikit-learn libraries and Concrete Numpy quantization tools:"
 ]
 },
 {
@@ -870,11 +870,11 @@
 "source": [
 "### Conclusion\n",
 "\n",
-"In this tutorial we have discussed how we can use the Concrete framework to convert a scikit-learn based Poisson regression model to FHE. \n",
+"In this tutorial we have discussed how we can use Concrete Numpy to convert a scikit-learn based Poisson regression model to FHE. \n",
 "\n",
 "First of all, we have shown that, with the proper choice of pipeline and parameters, we can do the conversion with little loss of precision. This decrease in the quality of prediction is due to quantization of model weights and input data, and some minor noise can appear due to FHE. This noise is visible on the single variable FHE trend line as minor deviations of the blue curve with respect to the red one. \n",
 "\n",
-"Finally, we have shown how conversion of a model to FHE can be done with a single line of code and how quantization is aided by the tools in the Concrete framework. \n"
+"Finally, we have shown how conversion of a model to FHE can be done with a single line of code and how quantization is aided by the tools in Concrete Numpy. \n"
 ]
 }
 ],