diff --git a/docs/user/basics/compiling_and_executing.md b/docs/user/basics/compiling_and_executing.md
index ca1624679..51c9c638b 100644
--- a/docs/user/basics/compiling_and_executing.md
+++ b/docs/user/basics/compiling_and_executing.md
@@ -106,4 +106,4 @@ FIXME(benoit): explain the API to encrypt, run_inference, decrypt, keygen etc wh
 
 - [Working With Floating Points Tutorial](../tutorial/working_with_floating_points.md)
 - [Table Lookup Tutorial](../tutorial/table_lookup.md)
-- [Compiling a torch model](../tutorial/compiling_torch_model.md)
+- [Compiling a torch model](../howto/compiling_torch_model.md)
diff --git a/docs/user/basics/intro.md b/docs/user/basics/intro.md
index d2bdeab97..b1ffcf6f7 100644
--- a/docs/user/basics/intro.md
+++ b/docs/user/basics/intro.md
@@ -43,5 +43,5 @@ The main _current_ limits are:
 To overcome the above limitations, Concrete has a [popular quantization](../explanation/quantization.md) method built in the framework that allows to map floating point values to integers. We can [use this approach](../howto/use_quantization.md) to run models in FHE. Lastly, we give hints to the user on how to [reduce the precision](../howto/reduce_needed_precision.md) of a model to make it work in Concrete.
 
 ```{warning}
-FIXME(Jordan/Andrei): add an .md about the repository of FHE-friendly models, and ideally .ipynb's
+FIXME(Jordan/Andrei): add an .md about the repository of FHE-friendly models (#1212)
 ```
diff --git a/docs/user/howto/compiling_torch_model.md b/docs/user/howto/compiling_torch_model.md
new file mode 100644
index 000000000..55c3690f7
--- /dev/null
+++ b/docs/user/howto/compiling_torch_model.md
@@ -0,0 +1,67 @@
+# Compiling a Torch Model
+
+Concrete Framework allows you to compile a torch model to its FHE counterpart with a single function call. This process applies most of the concepts described in the documentation on [how to use quantization](use_quantization.md) and triggers the compilation that makes it possible to run the model over homomorphically encrypted data.
+
+```python
+import torch
+from torch import nn
+
+
+class LogisticRegression(nn.Module):
+    """LogisticRegression with Torch"""
+
+    def __init__(self):
+        super().__init__()
+        self.fc1 = nn.Linear(in_features=14, out_features=1)
+        self.sigmoid1 = nn.Sigmoid()
+
+    def forward(self, x):
+        """Forward pass."""
+        out = self.fc1(x)
+        out = self.sigmoid1(out)
+        return out
+
+
+torch_model = LogisticRegression()
+```
+
+```{warning}
+Note that the architecture of the neural network to be compiled must respect some hard constraints imposed by FHE. Please read our [detailed documentation](reduce_needed_precision.md) on these limitations.
+```
+
+Once your model is trained, you can simply call the `compile_torch_model` function to execute the compilation.
+
+```python
+import numpy
+
+from concrete.torch.compile import compile_torch_model
+
+torch_input = torch.randn(100, 14)
+quantized_numpy_module = compile_torch_model(
+    torch_model,  # our model
+    torch_input,  # a representative inputset, used for both quantization and compilation
+    n_bits=2,
+)
+```
+
+You can then call `quantized_numpy_module.forward_fhe.run()` to run the FHE inference. Your model is now ready to work on homomorphically encrypted data!
+
+```python
+# An example input that is going to be encrypted and used for homomorphic inference.
+enc_x = numpy.array([numpy.random.randn(14)]).astype(numpy.uint8)
+fhe_prediction = quantized_numpy_module.forward_fhe.run(enc_x)
+```
+
+`fhe_prediction` contains the clear quantized output. The user can now dequantize this output to get the actual floating point prediction as follows:
+
+```python
+clear_output = quantized_numpy_module.dequantize_output(
+    numpy.array(fhe_prediction, dtype=numpy.float32)
+)
+```
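+
+Putting the pieces together, here is a minimal sketch of a convenience helper for single predictions. Note that `predict_in_fhe` is a hypothetical function written for this example, not part of the framework API; it simply chains the `forward_fhe.run` and `dequantize_output` calls shown above.
+
+```python
+def predict_in_fhe(quantized_module, x):
+    """Run one homomorphic inference and return the dequantized float prediction."""
+    # Homomorphic inference: returns the clear quantized output.
+    fhe_output = quantized_module.forward_fhe.run(x)
+    # Map the clear quantized output back to floating point values.
+    return quantized_module.dequantize_output(
+        numpy.array(fhe_output, dtype=numpy.float32)
+    )
+
+
+prediction = predict_in_fhe(quantized_numpy_module, enc_x)
+```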
+
+If you want to see more compilation examples, you can check out the [IrisFHE notebook](../advanced_examples/IrisFHE.ipynb).
diff --git a/docs/user/howto/index.rst b/docs/user/howto/index.rst
index 3d2939c43..a90cbb427 100644
--- a/docs/user/howto/index.rst
+++ b/docs/user/howto/index.rst
@@ -7,6 +7,7 @@ How To
    numpy_support.md
    printing_and_drawing.md
    use_quantization.md
+   compiling_torch_model.md
    reduce_needed_precision.md
    debug_support_submit_issues.md
    faq.md
diff --git a/docs/user/howto/use_quantization.md b/docs/user/howto/use_quantization.md
index b6f88c125..9566f1d38 100644
--- a/docs/user/howto/use_quantization.md
+++ b/docs/user/howto/use_quantization.md
@@ -137,7 +137,7 @@ The current implementation of the framework parses the layers in the order of th
 Do not reuse a layer or an activation multiple times in the forward (i.e. self.sigmoid for each layer activation) and always place them at the correct position (the order of appearance in the forward function) in the init function.
 ```
 
-It is now possible to compile the `quantized_numpy_module`. Details on how to compile the model are available in the [torch compilation documentation].
+It is now possible to compile the `quantized_numpy_module`. Details on how to compile the model are available in the [torch compilation documentation](compiling_torch_model.md).
 
 ## Building your own QuantizedModule
 Concrete Framework also offers the possibility to build your own models and use them in the FHE settings. The `QuantizedModule` is a very simple abstraction that allows to create any model using the available operators:
diff --git a/docs/user/tutorial/compiling_torch_model.md b/docs/user/tutorial/compiling_torch_model.md
deleted file mode 100644
index 20b7498b5..000000000
--- a/docs/user/tutorial/compiling_torch_model.md
+++ /dev/null
@@ -1,4 +0,0 @@
-```{warning}
-FIXME(jordan): do this section, maybe from one .ipynb that you would do
-```
-# Compiling a Torch Model
diff --git a/docs/user/tutorial/index.rst b/docs/user/tutorial/index.rst
index e645cf656..dd7978a9a 100644
--- a/docs/user/tutorial/index.rst
+++ b/docs/user/tutorial/index.rst
@@ -4,7 +4,6 @@ Tutorial
 .. toctree::
    :maxdepth: 1
 
-   compiling_torch_model.md
    table_lookup.md
    working_with_floating_points.md
    indexing.md