docs: English checking and improvement

This commit is contained in:
Jeremy Bradley-Silverio Donato
2021-12-17 16:52:49 +01:00
committed by Benoit Chevallier
parent 20c2fffd6d
commit f387eaedba
10 changed files with 86 additions and 86 deletions

View File

@@ -5,7 +5,7 @@
**concretefhe** is the python API of the **Concrete** framework for developing homomorphic applications.
One of its essential functionalities is to transform Python functions to their `MLIR` equivalent.
Unfortunately, not all python functions can be converted due to the limits of the current product (we are in the alpha stage), or sometimes due to inherent restrictions of FHE itself.
However, one can already build interesting and impressing use cases, and more will be available in further versions of the framework.
However, you can already build interesting and impressive use cases, and more will be available in further versions of the framework.
## How can I use it?
@@ -59,7 +59,7 @@ Compiling a torch Module is pretty straightforward.
The torch Module is first converted to a NumPy equivalent we call `NumpyModule` if all the layers in the torch Module are supported.
Then the module is quantized post training to be compatible with our compiler which only works on integers. The post training quantization uses the provided dataset for calibration.
Then the module is quantized post-training to be compatible with our compiler, which only works on integers. The post-training quantization uses the provided dataset for calibration.
The dataset is then quantized to be usable for compilation with the QuantizedModule.
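For intuition, here is a rough sketch of what post-training quantization with a calibration set computes. The helper names are hypothetical, not the actual concretefhe API:

```python
# Hypothetical sketch of post-training quantization with a calibration set
# (not the actual concretefhe API): the calibration data fixes a scale and
# zero point, which then map any float onto n-bit unsigned integers.
import numpy as np

def calibrate(calibration_data, n_bits=7):
    # The calibration set determines the range the quantizer must cover
    lo, hi = float(np.min(calibration_data)), float(np.max(calibration_data))
    scale = (hi - lo) / (2 ** n_bits - 1)
    zero_point = round(-lo / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, n_bits=7):
    # Map floats to the integer grid and clip to the representable range
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 2 ** n_bits - 1).astype(np.int64)

calibration_data = np.array([-1.0, 0.0, 0.5, 1.0])
scale, zero_point = calibrate(calibration_data)
quantized = quantize(calibration_data, scale, zero_point)
print(quantized)  # quantized values: 0, 64, 96, 127
```

A calibration set that does not cover the real data range would cause clipping at inference time, which is why the provided dataset matters.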
@@ -108,8 +108,8 @@ resulting_tracer = f(x, y)
`Tracer(computation=Add(self.computation, (2 * y).computation))` which is equal to:
`Tracer(computation=Add(Input("x"), Multiply(Constant(2), Input("y"))))`
In the end we will have output `Tracer`s that can be used to create the operation graph.
The implementation is a bit more complex than that but the idea is the same.
In the end, we will have output `Tracer`s that can be used to create the operation graph.
The implementation is a bit more complex than this, but the idea is the same.
Tracing is also responsible for indicating whether the values in the node would be encrypted or not, and the rule for that is if a node has an encrypted predecessor, it is encrypted as well.
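The mechanism above can be sketched with operator overloading. This is not the actual concretefhe implementation: real nodes are classes like `Add` and `Input`, while here each computation is a plain tuple for brevity.

```python
# Minimal sketch of tracing via operator overloading (NOT the actual
# concretefhe implementation; computations are tuples instead of node classes)
class Tracer:
    def __init__(self, computation):
        self.computation = computation

    def __add__(self, other):
        if not isinstance(other, Tracer):
            other = Tracer(("Constant", other))
        return Tracer(("Add", self.computation, other.computation))

    def __rmul__(self, constant):
        # `2 * y` hits this path: the plain int becomes a Constant node
        return Tracer(("Multiply", ("Constant", constant), self.computation))

def f(x, y):
    return x + 2 * y

# Calling f on Tracers records the operations instead of computing numbers
result = f(Tracer(("Input", "x")), Tracer(("Input", "y")))
print(result.computation)
# ('Add', ('Input', 'x'), ('Multiply', ('Constant', 2), ('Input', 'y')))
```

Because every overloaded operator returns a new `Tracer`, running the user's function once is enough to recover the full operation graph.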
@@ -117,14 +117,14 @@ Tracing is also responsible for indicating whether the values in the node would
The goal of topological transforms is to make more functions compilable.
With the current version of **Concrete** floating point inputs and floating point outputs are not supported.
With the current version of **Concrete**, floating point inputs and floating point outputs are not supported.
However, if the floating point operations are intermediate operations, they can sometimes be fused into a single table lookup from integer to integer, thanks to some specific transforms.
Let's take a closer look at the transforms we perform today.
Let's take a closer look at the transforms we can currently perform.
### Fusing floating point operations
We decided to allocate a whole new chapter to explain float fusing.
We have allocated a whole new chapter to explain float fusing.
You can find it [here](./float-fusing.md).
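As a hedged sketch of the idea behind float fusing (a hypothetical helper, not the actual transform): when an integer-to-integer chain only uses floats in the middle, it can be precomputed into a table over all possible integer inputs.

```python
# Hedged sketch of float fusing (hypothetical helper, not the actual
# concretefhe transform): an int -> float -> int chain of operations is
# replaced by a single precomputed table over all possible integer inputs.
import math

def intermediate_float_ops(i):
    # Floats only appear in the middle; input and output are integers
    return round(math.sqrt(i) * 2)

def fuse_to_table(ops, input_bits=3):
    # Enumerate every possible integer input once, at compile time
    return [ops(i) for i in range(2 ** input_bits)]

table = fuse_to_table(intermediate_float_ops)
# Evaluation in FHE then becomes a single table lookup, with no floats involved
assert table[4] == intermediate_float_ops(4)
```

This only works because the input is a bounded integer, so the table stays small.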
## Bounds measurement
@@ -138,7 +138,7 @@ If there were negative values in the range, we could have used `intX` instead of
Bounds measurement is necessary because FHE supports limited precision, and we don't want unexpected behaviour during evaluation of the compiled functions.
There are several ways to perform bounds measurement.
Let's take a closer look at the options we provide today.
Let's take a closer look at the options we provide.
### Inputset evaluation
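As a hedged sketch of inputset evaluation (not the actual concretefhe code), evaluating the function on every input in the inputset and recording the extreme values is enough to pick a `uintX` type:

```python
# Hedged sketch of inputset-based bounds measurement (not the actual
# concretefhe code): evaluate the function on the whole inputset, record the
# extremes, and derive the smallest uintX that can hold them.
import math

def f(x, y):
    return x + 2 * y

def measure_bounds(function, inputset):
    results = [function(*args) for args in inputset]
    return min(results), max(results)

def required_bit_width(lo, hi):
    # Only unsigned integers are supported for now; negative values
    # would need intX instead of uintX
    assert lo >= 0
    return max(1, math.ceil(math.log2(hi + 1)))

inputset = [(1, 2), (3, 4), (7, 7)]
lo, hi = measure_bounds(f, inputset)
print(lo, hi, required_bit_width(lo, hi))  # bounds 5..21 fit in uint5
```

An inputset that does not exercise the real extremes would lead to a bit width that is too small, hence unexpected behaviour at evaluation time.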

View File

@@ -8,13 +8,13 @@ The project targets Python 3.8 through 3.10 inclusive.
## Installing Python
**concretefhe** is a `Python` library. So `Python` should be installed to develop **concretefhe**. `v3.8`, `v3.9` and `v3.10` are the only supported versions.
**concretefhe** is a `Python` library, so `Python` should be installed to develop **concretefhe**. `v3.8`, `v3.9` and `v3.10` are the only supported versions.
You can follow [this](https://realpython.com/installing-python/) guide to install it (alternatively you can google `how to install python 3.8 (or 3.9, 3.10)`).
## Installing Poetry
`Poetry` is our package manager. It simplifies dependency and environment management by a lot.
`Poetry` is our package manager. It drastically simplifies dependency and environment management.
You can follow [this](https://python-poetry.org/docs/#installation) official guide to install it.
@@ -35,12 +35,12 @@ brew install make
which gmake
```
It is possible to install `gmake` as `make`, check this [StackOverflow post](https://stackoverflow.com/questions/38901894/how-can-i-install-a-newer-version-of-make-on-mac-os) for more infos.
It is possible to install `gmake` as `make`, check this [StackOverflow post](https://stackoverflow.com/questions/38901894/how-can-i-install-a-newer-version-of-make-on-mac-os) for more info.
On Windows check [this GitHub gist](https://gist.github.com/evanwill/0207876c3243bbb6863e65ec5dc3f058#make).
```{hint}
In the next sections, be sure to use the proper `make` tool for your system, `make`, `gmake` or other.
In the following sections, be sure to use the proper `make` tool for your system: `make`, `gmake`, or other.
```
## Cloning repository
@@ -86,7 +86,7 @@ The venv persists thanks to volumes. We also create a volume for ~/.cache to spe
docker volume ls
```
You can still run all `make` commands inside the docker (to update the venv for example). Be mindful of the current venv being used (the name in parentheses at the beginning of your command prompt).
You can still run all `make` commands inside the docker (to update the venv, for example). Be mindful of the current venv being used (the name in parentheses at the beginning of your command prompt).
```shell
# Here we have dev_venv sourced
@@ -95,7 +95,7 @@ You can still run all `make` commands inside the docker (to update the venv for
## Leaving the environment
After your work is done you can simply run the following command to leave the environment.
After your work is done, you can simply run the following command to leave the environment.
```shell
deactivate
@@ -103,7 +103,7 @@ deactivate
## Syncing environment with the latest changes
From time to time, new dependencies will be added to project or the old ones will be removed. The command below will make sure the project have proper environment. So run it regularly!
From time to time, new dependencies will be added to the project or old ones will be removed. The command below will make sure the project has the proper environment, so run it regularly!
```shell
make sync_env
@@ -113,7 +113,7 @@ make sync_env
### In your OS
If you are having issues consider starting using the dev docker exclusively (unless you are working on OS specific bug fixes or features).
If you are having issues, consider using the dev docker exclusively (unless you are working on OS specific bug fixes or features).
Here are the steps you can take on your OS to try and fix issues:
@@ -131,7 +131,7 @@ rm -rf .venv
make setup_env
```
At this point you should consider using docker as nobody will have the exact same setup as you, unless you need to develop on your OS directly, in which case you can ask for help but may not get a solution right away.
At this point, you should consider using docker, as nobody will have the exact same setup as you. If you need to develop on your OS directly, you can ask us for help, but you may not get a solution right away.
### In docker
@@ -160,4 +160,4 @@ make docker_rebuild
make docker_start
```
If the problem persists at this point, you should consider asking for help.
If the problem persists at this point, you should consider asking for help. We're here and ready to assist!

View File

@@ -6,7 +6,7 @@
"source": [
"# Fully Connected Neural Network on Iris Dataset\n",
"\n",
"In this example, we show how one can train a neural network on a specific task (here Iris Classification) and use Concrete Framework to make the model work in FHE settings."
"In this example, we show how one can train a neural network on a specific task (here, Iris Classification) and use Concrete Framework to make the model work in FHE settings."
]
},
{
@@ -50,12 +50,12 @@
"class FCIris(torch.nn.Module):\n",
" \"\"\"Neural network for Iris classification\n",
" \n",
" We define a fully connected network with 5 fully connected (fc) layers that \n",
" We define a fully connected network with five fully connected (fc) layers that \n",
" perform feature extraction and one (fc) layer to produce the final classification. \n",
" We will use 15 neurons on all the feature extractor layers to ensure that the FHE accumulators\n",
" do not overflow (we are only allowed a maximum of 7 bits-width).\n",
" do not overflow (we are currently only allowed a maximum width of 7 bits).\n",
"\n",
" Due to accumulator limits we have to design a deep network with few neurons on each layer. \n",
" Due to accumulator limits, we have to design a deep network with only a few neurons on each layer. \n",
" This is in contrast to a traditional approach where the number of neurons increases after \n",
" each layer or block.\n",
" \"\"\"\n",
@@ -314,10 +314,10 @@
"source": [
"## Summary\n",
"\n",
"In this notebook we presented a few steps to have a model (torch neural network) inference in over homomorphically encrypted data: \n",
"In this notebook, we presented a few steps to run a model (a torch neural network) inference over homomorphically encrypted data: \n",
"- We first trained a fully connected neural network yielding ~100% accuracy\n",
"- Then quantized it using Concrete Framework. As we can see, the extreme post training quantization (only 2 bits of precision for weights, inputs and activations) made the neural network accruracy slighlty drop (~73%).\n",
"- We then use the compiled inference into its FHE equivalent to get our FHE predictions over the test set\n",
"- Then, we quantized it using Concrete Framework. As we can see, the extreme post-training quantization (only 2 bits of precision for weights, inputs and activations) made the neural network accuracy drop slightly (~73%).\n",
"- We then compiled the quantized inference into its FHE equivalent to get our FHE predictions over the test set\n",
"\n",
"The homomorphic inference achieves an accuracy similar to that of the quantized model inference. "
]

View File

@@ -7,7 +7,7 @@
"source": [
"# Generalized Linear Model : Poisson Regression\n",
"\n",
"Currently, **Concrete** only supports unsigned integers up to 7-bits, for both parameters, inputs and intermediate values such as accumulators. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation!"
"Currently, **Concrete** only supports unsigned integers of up to 7 bits, for parameters, inputs, and intermediate values such as accumulators. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
]
},
{
@@ -147,7 +147,7 @@
"source": [
"### Now, we need a model, so let's define it\n",
"\n",
"### First let's split the data, keeping a part of the data to be used for calibration. The calibration set is not used in training nor for testing the model"
"### Let's split the data, keeping a part of it to be used for calibration. The calibration set is used neither for training nor for testing the model."
]
},
{
@@ -215,7 +215,7 @@
"id": "f28155cf",
"metadata": {},
"source": [
"### Let's visualize our predictions to see how our model performs. Note that the graph is on a Y log scale so the regression line looks linear"
"### Let's visualize our predictions to see how our model performs. Note that the graph is on a Y log scale so the regression line looks linear."
]
},
{
@@ -255,7 +255,7 @@
"source": [
"### FHE models need to be quantized, so let's define a **Quantized Poisson Regressor** (Generalized Linear Model with exponential link)\n",
"\n",
"We use the quantization primitives available in the Concrete library: QuantizedArray, QuantizedFunction and QuantizedLinear"
"We use the quantization primitives available in the Concrete library: QuantizedArray, QuantizedFunction, and QuantizedLinear."
]
},
{
@@ -325,7 +325,7 @@
"source": [
"### Let's quantize our model parameters\n",
"\n",
"First we get the calibration data, we then run it through the non quantized model to determine all possible intermediate values. After each operation these values are quantized and the quantized version of the operations are stored in the QuantizedGLM module"
"First, we get the calibration data, then run it through the non-quantized model to determine all possible intermediate values. After each operation, these values are quantized, and the quantized versions of the operations are stored in the QuantizedGLM module."
]
},
{
@@ -345,7 +345,7 @@
"id": "e2528092",
"metadata": {},
"source": [
"### And quantize our inputs and perform quantized inference "
"### And quantize our inputs and perform quantized inference. "
]
},
{
@@ -402,7 +402,7 @@
"id": "af6bc89e",
"metadata": {},
"source": [
"### Now it's time to make the inference homomorphic"
"### Now it's time to make the inference homomorphic."
]
},
{
@@ -432,7 +432,7 @@
"id": "01d67c28",
"metadata": {},
"source": [
"### Let's compile our quantized inference function to it's homomorphic equivalent"
"### Let's compile our quantized inference function to its homomorphic equivalent."
]
},
{
@@ -453,7 +453,7 @@
"id": "46753da7",
"metadata": {},
"source": [
"### Finally, let's make homomorphic inference"
"### Finally, let's do homomorphic inference."
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# Quantized Linear Regression\n",
"\n",
"Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation!"
"Currently, **Concrete** only supports unsigned integers of up to 7 bits. Nevertheless, we want to evaluate a linear regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
]
},
{
@@ -15,7 +15,7 @@
"id": "253288cf",
"metadata": {},
"source": [
"### Let's start by importing some libraries to develop our linear regression model"
"### Let's start by importing some libraries to develop our linear regression model."
]
},
{
@@ -44,7 +44,7 @@
"source": [
"\n",
"\n",
"### Now import Concrete quantization tools "
"### Now, import Concrete quantization tools. "
]
},
{
@@ -62,7 +62,7 @@
"id": "f43e2387",
"metadata": {},
"source": [
"### And some helpers for visualization"
"### And some helpers for visualization."
]
},
{
@@ -83,7 +83,7 @@
"id": "4a5ae7af",
"metadata": {},
"source": [
"### And, finally, the FHE compiler"
"### And, finally, the FHE compiler."
]
},
{
@@ -101,7 +101,7 @@
"id": "53e676b8",
"metadata": {},
"source": [
"### Let's define our Quantized Linear Regression module that quantizes a sklearn linear regression"
"### Let's define our Quantized Linear Regression module that quantizes a sklearn linear regression."
]
},
{
@@ -230,7 +230,7 @@
"id": "75f4fdb7",
"metadata": {},
"source": [
"### Train a linear regression on the training set and visualize predictions on the test set"
"### Train a linear regression on the training set and visualize predictions on the test set."
]
},
{
@@ -251,7 +251,7 @@
"id": "a0ba5509",
"metadata": {},
"source": [
"### Visualize the regression line and the data set"
"### Visualize the regression line and the data set."
]
},
{
@@ -308,7 +308,7 @@
"id": "cd74c5e7",
"metadata": {},
"source": [
"### Now, we can compile our model to FHE, taking as possible input set all of our dataset"
"### Now, we can compile our model to FHE, using our whole dataset as the possible input set."
]
},
{
@@ -328,7 +328,7 @@
"id": "084fb296",
"metadata": {},
"source": [
"### Time to make some predictions, first in the clear"
"### Time to make some predictions, first in the clear."
]
},
{
@@ -348,7 +348,7 @@
"id": "f28155cf",
"metadata": {},
"source": [
"### Now let's predict using the quantized FHE classifier"
"### Now let's predict using the quantized FHE classifier."
]
},
{
@@ -382,7 +382,7 @@
"id": "23852861",
"metadata": {},
"source": [
"### Evaluate all versions of the classifier"
"### Evaluate all versions of the classifier."
]
},
{
@@ -425,7 +425,7 @@
"id": "704b2f63",
"metadata": {},
"source": [
"### Plot the results of the original and FHE versions of the classifier"
"### Plot the results of both the original and FHE versions of the classifier."
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# Quantized Logistic Regression\n",
"\n",
"Currently, **Concrete** only supports unsigned integers up to 7-bits. Nevertheless, we want to evaluate a logistic regression model with it. Luckily, we can make use of **quantization** to overcome this limitation!"
"Currently, **Concrete** only supports unsigned integers of up to 7 bits. Nevertheless, we want to evaluate a logistic regression model with it. Luckily, we can make use of **quantization** to overcome this limitation."
]
},
{
@@ -15,7 +15,7 @@
"id": "7d46edc9",
"metadata": {},
"source": [
"### Let's start by importing some libraries to develop our logistic regression model"
"### Let's start by importing some libraries to develop our logistic regression model."
]
},
{
@@ -41,7 +41,7 @@
"id": "86b77c19",
"metadata": {},
"source": [
"### Now import Concrete quantization tools "
"### Now import Concrete quantization tools. "
]
},
{
@@ -64,7 +64,7 @@
"id": "ff9c1757",
"metadata": {},
"source": [
"### And some helpers for visualization"
"### And some helpers for visualization."
]
},
{
@@ -85,7 +85,7 @@
"id": "d4f43095",
"metadata": {},
"source": [
"### And, finally, the FHE compiler"
"### And, finally, the FHE compiler."
]
},
{
@@ -103,7 +103,7 @@
"id": "34959f0a",
"metadata": {},
"source": [
"### Define our Quantized Logistic Regression model"
"### Define our Quantized Logistic Regression model."
]
},
{
@@ -221,7 +221,7 @@
"id": "0df30d0e",
"metadata": {},
"source": [
"### We need a training set, a handcrafted one for simplicity. Let's also define a grid on which to test our classifier"
"### We need a training set, specifically a handcrafted one for simplicity. Let's also define a grid on which to test our classifier."
]
},
{
@@ -259,7 +259,7 @@
"id": "0b209247",
"metadata": {},
"source": [
"### Train a logistic regression with sklearn on the training set"
"### Train a logistic regression with sklearn on the training set."
]
},
{
@@ -289,7 +289,7 @@
"id": "5be6c7d5",
"metadata": {},
"source": [
"### Let's visualize our data set and initial classifier to get a grasp of it"
"### Let's visualize our data set and initial classifier to get a grasp on it."
]
},
{
@@ -355,7 +355,7 @@
"id": "cd74c5e7",
"metadata": {},
"source": [
"### Now, we can compile our model to FHE, taking as possible input set all of our dataset"
"### Now, we can compile our model to FHE, using our whole dataset as the possible input set."
]
},
{
@@ -375,7 +375,7 @@
"id": "b608faef",
"metadata": {},
"source": [
"### Time to make some predictions, first in the clear"
"### Time to make some predictions, first in the clear."
]
},
{
@@ -407,7 +407,7 @@
"id": "8fb62d52",
"metadata": {},
"source": [
"### Now let's predict using the quantized FHE classifier"
"### Now let's predict using the quantized FHE classifier."
]
},
{
@@ -458,7 +458,7 @@
"id": "f8c1d98a",
"metadata": {},
"source": [
"### Aggregate accuracies for all the versions of the classifier"
"### Aggregate accuracies for all the versions of the classifier."
]
},
{
@@ -495,7 +495,7 @@
"id": "4810fdaf",
"metadata": {},
"source": [
"### Plot the results of the original and FHE versions of the classifier, showing classification errors induced by quantization with a red circle"
"### Plot the results of both the original and FHE versions of the classifier, showing classification errors induced by quantization with a red circle."
]
},
{

View File

@@ -1,11 +1,11 @@
# Benchmarks
To track our progress over time, we have created a [progress tracker](https://progress.zama.ai) that have
- list of targets that we want to compile
- status on the compilation of these functions
- compilation and evaluation times on different hardware
- accuracy of the functions for which it makes sense
- loss of the functions for which it makes sense
To track our progress over time, we have created a [progress tracker](https://progress.zama.ai) that:
- lists targets that we want to compile
- updates the status on the compilation of these functions
- tracks compilation and evaluation times on different hardware
- displays accuracy of the functions for which it makes sense
- displays loss of the functions for which it makes sense
Note that we are not limited to these, and we'll certainly add more information (e.g., key generation time, encryption time, inference time, decryption time, etc.) once the explicit inference API is available.
@@ -13,9 +13,9 @@ Note that we are not limited to these, and we'll certainly add more information
FIXME(all): update the sentence above when the encrypt, decrypt, run_inference, keygen API's are available
```
Our public benchmarks can be used by competing frameworks or technologies for comparison with **Concrete Framework**. Notably, one can see:
Our public benchmarks can be used by competing frameworks or technologies for comparison with **Concrete Framework**. Notably, you can see:
- if the same functions can be compiled
- what are discrepancies in the exactness of the evaluations
- what are the discrepancies in the exactness of the evaluations
- how do evaluation times compare
If you want to see more functions in the progress tracker or if there is another metric you would like to track, don't hesitate to drop an email to <hello@zama.ai>.

View File

@@ -20,7 +20,7 @@ def f(x, y):
## Compiling the function
To compile the function, you need to provide what are the inputs that it's expecting. In the example function above, `x` and `y` could be scalars or tensors (though, for now, only dot between tensors are supported), they can be encrypted or clear, they can be signed or unsigned, they can have different bit-widths. So, we need to know what they are beforehand. We can do that like so:
To compile the function, you need to specify the inputs it is expecting. In the example function above, `x` and `y` could be scalars or tensors (though, for now, only dot products between tensors are supported); they can be encrypted or clear, signed or unsigned, and they can have different bit-widths. So, we need to know what they are beforehand. We can do that like so:
<!--python-test:cont-->
```python
@@ -55,7 +55,7 @@ compiler.eval_on_inputset(inputset)
for input_values in inputset:
compiler(*input_values)
# You can print the traced graph
# You can print the traced graph:
print(str(compiler))
# Outputs

View File

@@ -2,7 +2,7 @@
## Python package
To install **Concrete** from PyPi run the following:
To install **Concrete** from PyPi, run the following:
```shell
pip install concretefhe
@@ -53,7 +53,7 @@ docker run --rm -it -p 8888:8888 -v /host/path:/data zamafhe/concretefhe:v0.2.0
This will launch a **Concrete**-enabled jupyter server in the docker, which you can access from your browser.
Alternatively you can just open a shell in the docker with or without volumes:
Alternatively, you can just open a shell in the docker with or without volumes:
```shell
docker run --rm -it zamafhe/concretefhe:v0.2.0 /bin/bash

View File

@@ -1,33 +1,33 @@
# What is **Concrete**
# What is **Concrete**?
## Introduction
**Concrete Framework**, or **Concrete** for short, is an open-source framework which aims to simplify the use of so-called fully homomorphic encryption (FHE) for data scientists.
FHE is a new powerful cryptographic tool, which allows e.g. servers to perform computations directly on encrypted data, without needing to decrypt first. With FHE, privacy is at the center, and one can build services which ensure full privacy of the user and are the perfect equivalent of their unsecure counterpart.
FHE is a powerful cryptographic tool, which allows servers to perform computations directly on encrypted data without needing to decrypt it first. With FHE, privacy is at the center, and you can build services which ensure full privacy of the user and are the perfect equivalent of their insecure counterparts.
FHE is also a killer feature regarding data breaches: as anything done on the server is done over encrypted data, even if the server is compromised, there is in the end no leak of any kind of useful data.
FHE is also a killer feature regarding data breaches: as anything done on the server is done over encrypted data, even if the server is compromised, there is in the end no leak of useful data.
**Concrete** is made of several parts:
- a library, called concrete-lib, which contains the core cryptographic API's for computing with FHE
- a compiler, called concrete-compiler, which allows to turn an MLIR program into an FHE program, on the top of concrete-lib
- some frontends, which convert different langages to MLIR, to finally be compiled.
- a library, called concrete-lib, which contains the core cryptographic APIs for computing with FHE;
- a compiler, called concrete-compiler, which allows you to turn an MLIR program into an FHE program, on top of concrete-lib;
- and some frontends, which convert different languages to MLIR, to finally be compiled.
```{important}
In the first version of Concrete, there is a single frontend, called homomorphic numpy (or hnp), which is the equivalent of numpy. With our toolchain, a data scientist can convert a numpy program into an FHE program, without any a-priori knowledge on cryptography.
In the first version of Concrete, there is a single frontend, called homomorphic numpy (or hnp), which is the equivalent of numpy. With our toolchain, a data scientist can convert a numpy program into an FHE program, without any in-depth knowledge of cryptography.
```
```{note}
On top of the numpy frontend, we are adding an alpha-version of a torch compiler, which basically transforms a subset of torch modules into numpy, and then use numpy frontend and the compiler. This is an early version of a more stable torch compiler which will be released later in the year.
On top of the numpy frontend, we are adding an alpha version of a torch compiler, which basically transforms a subset of torch modules into numpy and then uses the numpy frontend and the compiler. This is an early version of a more stable torch compiler which will be released later in 2022.
```
## Organization of the documentation
Basically, we have divided our documentation into several parts:
- one about basic elements, notably description of the installation, that you are currently reading
- one dedicated to _users_ of **Concrete**, with tutorials, how-to's and deeper explanations
- one detailing the API's of the different functions of the frontend, directly done by parsing its source code
- one about basic elements, notably a description of the installation, that you are currently reading
- one dedicated to _users_ of **Concrete**, with tutorials, how-tos and deeper explanations
- one detailing the APIs of the different functions of the frontend, directly done by parsing its source code
- and finally, one dedicated to _developers_ of **Concrete**, who could be internal or external contributors to the framework
## A work in progress
@@ -37,10 +37,10 @@ Concrete is a work in progress, and is currently limited to a certain number of
```
The main _current_ limits are:
- **Concrete** is only supporting unsigned integers
- **Concrete** only supports unsigned integers
- **Concrete** needs the integers to be at most 7 bits wide
To overcome the above limitations, Concrete has a [popular quantization](../explanation/quantization.md) method built in the framework that allows to map floating point values to integers. We can [use this approach](../howto/use_quantization.md) to run models in FHE. Lastly, we give hints to the user on how to [reduce the precision](../howto/reduce_needed_precision.md) of a model to make it work in Concrete.
To overcome the above limitations, Concrete has a [popular quantization](../explanation/quantization.md) method built into the framework that allows mapping floating point values to integers. We can [use this approach](../howto/use_quantization.md) to run models in FHE. Lastly, we give hints to the user on how to [reduce the precision](../howto/reduce_needed_precision.md) of a model to make it work in Concrete.
```{warning}
FIXME(Jordan/Andrei): add an .md about the repository of FHE-friendly models (#1212)