docs: update compiling_a_torch_model.md

@@ -106,4 +106,4 @@ FIXME(benoit): explain the API to encrypt, run_inference, decrypt, keygen etc wh
 - [Working With Floating Points Tutorial](../tutorial/working_with_floating_points.md)
 - [Table Lookup Tutorial](../tutorial/table_lookup.md)
-- [Compiling a torch model](../tutorial/compiling_torch_model.md)
+- [Compiling a torch model](../howto/compiling_torch_model.md)

@@ -43,5 +43,5 @@ The main _current_ limits are:
 To overcome the above limitations, Concrete has a [popular quantization](../explanation/quantization.md) method built into the framework that allows floating point values to be mapped to integers. We can [use this approach](../howto/use_quantization.md) to run models in FHE. Lastly, we give the user hints on how to [reduce the precision](../howto/reduce_needed_precision.md) of a model to make it work in Concrete.
 
 ```{warning}
-FIXME(Jordan/Andrei): add an .md about the repository of FHE-friendly models, and ideally .ipynb's
+FIXME(Jordan/Andrei): add an .md about the repository of FHE-friendly models (#1212)
 ```

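For intuition, the float-to-integer mapping that such a quantization scheme performs can be sketched in a few lines of plain numpy. This is an illustrative sketch only; the uniform affine scheme and the `quantize` helper below are assumptions for exposition, not the exact method described in the linked pages:

```python
import numpy

def quantize(values: numpy.ndarray, n_bits: int) -> numpy.ndarray:
    """Uniformly map floats onto the integer grid [0, 2**n_bits - 1]."""
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / (2**n_bits - 1)
    return numpy.rint((values - vmin) / scale).astype(numpy.int64)

# 2-bit quantization maps every float to one of 4 integer levels.
print(quantize(numpy.array([-1.0, -0.1, 0.2, 1.0]), n_bits=2))  # [0 1 2 3]
```
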
67 docs/user/howto/compiling_torch_model.md Normal file
@@ -0,0 +1,67 @@

# Compiling a Torch Model

The Concrete Framework allows you to compile a torch model to its FHE counterpart.

A simple command can compile a torch model to its FHE counterpart. This process applies most of the concepts described in the documentation on [how to use quantization](use_quantization.md) and triggers the compilation needed to run the model over homomorphically encrypted data.

```python
import torch
from torch import nn


class LogisticRegression(nn.Module):
    """LogisticRegression with Torch"""

    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(in_features=14, out_features=1)
        self.sigmoid1 = nn.Sigmoid()

    def forward(self, x):
        """Forward pass."""
        out = self.fc1(x)
        out = self.sigmoid1(out)
        return out


torch_model = LogisticRegression()
```

```{warning}
Note that the architecture of the neural network to be compiled must respect some hard constraints imposed by FHE. Please read our [detailed documentation](../howto/reduce_needed_precision.md) on these limitations.
```

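The guide assumes the model has already been trained. Purely for completeness, a minimal training loop on synthetic data might look as follows (an illustrative sketch; the synthetic labels and hyperparameters are assumptions, not part of the original guide):

```python
# Minimal training sketch on synthetic data (illustrative only).
x_train = torch.randn(100, 14)
y_train = torch.randint(0, 2, (100, 1)).float()

optimizer = torch.optim.SGD(torch_model.parameters(), lr=0.1)
loss_fn = nn.BCELoss()  # model outputs sigmoid probabilities

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(torch_model(x_train), y_train)
    loss.backward()
    optimizer.step()
```
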
Once your model is trained, you can simply call the `compile_torch_model` function to execute the compilation.

<!--python-test:cont-->
```python
import numpy

from concrete.torch.compile import compile_torch_model

torch_input = torch.randn(100, 14)
quantized_numpy_module = compile_torch_model(
    torch_model,  # our model
    torch_input,  # a representative input set, used for both quantization and compilation
    n_bits=2,  # the bit width used for quantization
)
```

You can then call `quantized_numpy_module.forward_fhe.run()` to run the inference in FHE.

Your model is now ready for inference in the FHE setting!

<!--python-test:cont-->
```python
# An example input that is going to be encrypted and used for homomorphic
# inference. The FHE circuit expects quantized integer inputs; with n_bits=2,
# valid quantized values lie in [0, 3].
enc_x = numpy.random.randint(0, 4, size=(1, 14)).astype(numpy.uint8)
fhe_prediction = quantized_numpy_module.forward_fhe.run(enc_x)
```

`fhe_prediction` contains the quantized output in the clear. You can now dequantize this output to get the actual floating point prediction as follows:

<!--python-test:cont-->
```python
clear_output = quantized_numpy_module.dequantize_output(
    numpy.array(fhe_prediction, dtype=numpy.float32)
)
```

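As a quick sanity check, you can compare the dequantized FHE output against the clear floating point model. This comparison is a sketch rather than part of the original guide, and with `n_bits=2` quantization some deviation between the two predictions is expected:

```python
# Rough comparison against the clear torch model (deviation is expected
# because of the 2-bit quantization).
torch_prediction = torch_model(torch.tensor(enc_x, dtype=torch.float32)).detach().numpy()
print("FHE (dequantized):", clear_output)
print("torch (float):", torch_prediction)
```
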
If you want to see more compilation examples, you can check out the [IrisFHE notebook](../advanced_examples/IrisFHE.ipynb).

@@ -7,6 +7,7 @@ How To
 numpy_support.md
 printing_and_drawing.md
 use_quantization.md
+compiling_torch_model.md
 reduce_needed_precision.md
 debug_support_submit_issues.md
 faq.md

@@ -137,7 +137,7 @@ The current implementation of the framework parses the layers in the order of th
 Do not reuse a layer or an activation multiple times in the forward pass (e.g. the same self.sigmoid for every layer's activation), and always declare them in the __init__ function in their order of appearance in the forward function.
 ```
 
-It is now possible to compile the `quantized_numpy_module`. Details on how to compile the model are available in the [torch compilation documentation].
+It is now possible to compile the `quantized_numpy_module`. Details on how to compile the model are available in the [torch compilation documentation](compiling_torch_model.md).
 ## Building your own QuantizedModule
 
 Concrete Framework also offers the possibility to build your own models and use them in the FHE setting. The `QuantizedModule` is a very simple abstraction that allows you to create any model using the available operators:

@@ -1,4 +0,0 @@
-```{warning}
-FIXME(jordan): do this section, maybe from one .ipynb that you would do
-```
-# Compiling a Torch Model

@@ -4,7 +4,6 @@ Tutorial
 .. toctree::
 :maxdepth: 1
 
-compiling_torch_model.md
 table_lookup.md
 working_with_floating_points.md
 indexing.md