From ec411ab8e8e8d1003bd69c886669ebab22b48af2 Mon Sep 17 00:00:00 2001
From: Jeremy Bradley-Silverio Donato
Date: Thu, 6 Jan 2022 17:50:35 +0100
Subject: [PATCH] docs: Update compiling_torch_model.md

---
 docs/user/howto/compiling_torch_model.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/user/howto/compiling_torch_model.md b/docs/user/howto/compiling_torch_model.md
index 995c73843..c0d4f9ab3 100644
--- a/docs/user/howto/compiling_torch_model.md
+++ b/docs/user/howto/compiling_torch_model.md
@@ -1,6 +1,6 @@
 # Compiling a Torch Model

-**Concrete Numpy** allows to compile a torch model to its FHE counterpart.
+**Concrete Numpy** allows you to compile a torch model to its FHE counterpart.

 A simple command can compile a torch model to its FHE counterpart. This process executes most of the concepts described in the documentation on [how to use quantization](use_quantization.md) and triggers the compilation to be able to run the model over homomorphically encrypted data.

@@ -47,7 +47,7 @@ quantized_numpy_module = compile_torch_model(

 You can then call `quantized_numpy_module.forward_fhe.run()` to have the FHE inference.

-Now your model is ready to infer in FHE settings !
+Now your model is ready to infer in FHE settings.

 ```python