docs: start updating docs to reflect the rewrite

Umut
2022-04-04 13:36:19 +02:00
parent f17745130e
commit 79685ed7dc
14 changed files with 137 additions and 219 deletions

View File

@@ -45,16 +45,16 @@ You can find more detailed installation instructions in [installing.md](docs/use
## A simple example: numpy addition in FHE
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
@cnp.compiler({"x": "encrypted", "y": "encrypted"})
def add(x, y):
    return x + y
inputset = [(2, 3), (0, 0), (1, 6), (7, 7), (7, 1), (3, 2), (6, 1), (1, 7), (4, 5), (5, 4)]
compiler = hnp.NPFHECompiler(add, {"x": "encrypted", "y": "encrypted"})
print(f"Compiling...")
circuit = compiler.compile_on_inputset(inputset)
circuit = add.compile(inputset)
examples = [(3, 4), (1, 2), (7, 7), (0, 0)]
for example in examples:

View File

@@ -11,18 +11,18 @@ However, you can already build interesting and impressive use cases, and more wi
```python
# Import necessary Concrete components
import concrete.numpy as hnp
import concrete.numpy as cnp
# Define the function to homomorphize
def f(x, y):
    return (2 * x) + y
# Create a Numpy FHE Compiler
compiler = hnp.NPFHECompiler(f, {"x": "encrypted", "y": "encrypted"})
# Create a Compiler
compiler = cnp.Compiler(f, {"x": "encrypted", "y": "encrypted"})
# Compile an FHE Circuit using an inputset
# Compile to a Circuit using an inputset
inputset = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1)]
circuit = compiler.compile_on_inputset(inputset)
circuit = compiler.compile(inputset)
# Make homomorphic inference
circuit.encrypt_run_decrypt(1, 0)
@@ -31,20 +31,19 @@ circuit.encrypt_run_decrypt(1, 0)
## Overview of the numpy compilation process
The compilation journey begins with tracing, which gives us an easy-to-understand and easy-to-manipulate representation of the function.
We call this representation `Operation Graph` which is basically a Directed Acyclic Graph (DAG) containing nodes representing the computations done in the function.
We call this representation `Computation Graph`, which is basically a Directed Acyclic Graph (DAG) whose nodes represent the computations done in the function.
Working with graphs is good because they have been studied extensively over the years and there are a lot of algorithms to manipulate them.
Internally, we use [networkx](https://networkx.org) which is an excellent graph library for Python.
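To make this a bit more tangible, here is a tiny sketch of what the computation graph of `(2 * x) + y` could look like as a networkx DAG (illustrative only; the node names are placeholders, and the real nodes carry much more information):
```python
import networkx as nx

# Build a toy computation graph for f(x, y) = (2 * x) + y.
graph = nx.DiGraph()
graph.add_nodes_from(["x", "y", "2", "multiply", "add"])
graph.add_edge("2", "multiply")
graph.add_edge("x", "multiply")
graph.add_edge("multiply", "add")
graph.add_edge("y", "add")

# A topological sort gives a valid evaluation order for the DAG.
print(list(nx.topological_sort(graph)))
```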
The next step in the compilation is transforming the operation graph.
The next step in the compilation is transforming the computation graph.
There are many transformations we perform, and they will be discussed in their own sections.
In any case, the result of transformations is just another operation graph.
In any case, the result of transformations is just another computation graph.
After transformations are applied, we need to determine the bounds (i.e., the minimum and the maximum values) of each intermediate result.
After transformations are applied, we need to determine the bounds (i.e., the minimum and the maximum values) of each intermediate node.
This is required because FHE currently allows a limited precision for computations.
Bounds measurement is how we determine the precision needed for the function.
There are several approaches to compute bounds, and they will be discussed in their own sections.
The final step is to transform the operation graph to equivalent `MLIR` code.
The final step is to transform the computation graph to equivalent `MLIR` code.
How this is done will be explained in detail in its own chapter.
Once the MLIR is prepared, the rest of the stack, which you can learn more about [here](http://docs.zama.ai/), takes over and completes the compilation process.
@@ -62,7 +61,7 @@ def f(x):
return (2 * x) + 3
```
the goal of tracing is to create the following operation graph without needing any change from the user.
the goal of tracing is to create the following computation graph without needing any change from the user.
![](../../_static/compilation-pipeline/two_x_plus_three.png)
@@ -92,7 +91,7 @@ resulting_tracer = f(x, y)
`Tracer(computation=Add(self.computation, (2 * y).computation))` which is equal to:
`Tracer(computation=Add(Input("x"), Multiply(Constant(2), Input("y"))))`
In the end, we will have output Tracers that can be used to create the operation graph.
In the end, we will have output Tracers that can be used to create the computation graph.
The implementation is a bit more complex than this, but the idea is the same.
Tracing is also responsible for indicating whether the values in the node would be encrypted or not, and the rule for that is if a node has an encrypted predecessor, it is encrypted as well.
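The snippet below is a deliberately simplified sketch of this mechanism; the class names mirror the explanation above but are not the actual concrete-numpy classes:
```python
from dataclasses import dataclass


@dataclass
class Input:
    name: str


@dataclass
class Constant:
    value: int


@dataclass
class Add:
    lhs: object
    rhs: object


@dataclass
class Multiply:
    lhs: object
    rhs: object


def _lift(value):
    # Wrap plain constants so that they become Constant nodes.
    return value if isinstance(value, Tracer) else Tracer(Constant(value))


class Tracer:
    def __init__(self, computation):
        self.computation = computation

    def __add__(self, other):
        return Tracer(Add(self.computation, _lift(other).computation))

    def __rmul__(self, other):
        return Tracer(Multiply(_lift(other).computation, self.computation))


def f(x, y):
    return x + (2 * y)


# Tracing: call f with Tracers instead of concrete values.
resulting_tracer = f(Tracer(Input("x")), Tracer(Input("y")))
print(resulting_tracer.computation)
# Add(lhs=Input(name='x'), rhs=Multiply(lhs=Constant(value=2), rhs=Input(name='y')))
```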
@@ -113,7 +112,7 @@ You can find it [here](./float-fusing.md).
## Bounds measurement
Given an operation graph, the goal of the bound measurement step is to assign the minimal data type to each node in the graph.
Given a computation graph, the goal of the bound measurement step is to assign the minimal data type to each node in the graph.
Let's say we have an encrypted input that is always between `0` and `10`; we should assign the type `Encrypted<uint4>` to the node of this input, as `Encrypted<uint4>` is the minimal encrypted integer type that supports all values between `0` and `10`.
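As a quick illustration of how such a minimal type can be derived from a bound, here is a toy helper (not part of the library API):
```python
import math


def minimal_unsigned_bit_width(maximum: int) -> int:
    # 0 needs 1 bit; otherwise, the number of bits required to represent `maximum`.
    return 1 if maximum == 0 else int(math.floor(math.log2(maximum))) + 1


assert minimal_unsigned_bit_width(10) == 4  # values 0..10 fit in uint4
assert minimal_unsigned_bit_width(15) == 4  # values 0..15 fit in uint4
assert minimal_unsigned_bit_width(16) == 5  # 16 needs uint5
```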
@@ -121,21 +120,20 @@ If there were negative values in the range, we could have used `intX` instead of
Bounds measurement is necessary because FHE supports limited precision, and we don't want unexpected behaviour during evaluation of the compiled functions.
There are several ways to perform bounds measurement.
Let's take a closer look at the options we provide.
Let's take a closer look at how we perform bounds measurement.
### Inputset evaluation
This is the simplest approach, but it requires an inputset to be provided by the user.
This is a simple approach that requires an inputset to be provided by the user.
The inputset is not to be confused with a dataset in the classical ML sense, as it doesn't require labels.
Rather, it is a set of values which are typical inputs of the function.
The idea is to evaluate each input in the inputset and record the result of each operation in the operation graph.
The idea is to evaluate each input in the inputset and record the result of each operation in the computation graph.
Then we compare the evaluation results with the current minimum/maximum values of each node and update the minimum/maximum accordingly.
After the entire inputset is evaluated, we assign a data type to each node using the minimum and the maximum value it contains.
Here is an example, given this operation graph where `x` is encrypted:
Here is an example, given this computation graph where `x` is encrypted:
![](../../_static/compilation-pipeline/two_x_plus_three.png)
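For instance, evaluating the inputset `range(4)` on this graph would record bounds like the following (a toy sketch of the bookkeeping, not the library's internal code):
```python
inputset = range(4)  # assumed typical inputs: 0, 1, 2, 3

# Track the minimum and maximum value seen for every node of the graph.
bounds = {"x": (None, None), "2 * x": (None, None), "(2 * x) + 3": (None, None)}

def update(name, value):
    low, high = bounds[name]
    bounds[name] = (
        value if low is None else min(low, value),
        value if high is None else max(high, value),
    )

for x in inputset:
    update("x", x)
    update("2 * x", 2 * x)
    update("(2 * x) + 3", (2 * x) + 3)

print(bounds)
# {'x': (0, 3), '2 * x': (0, 6), '(2 * x) + 3': (3, 9)}
```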
@@ -196,7 +194,7 @@ Assigned Data Types:
## MLIR conversion
The actual compilation will be done by the **Concrete** compiler, which is expecting an MLIR input. The MLIR conversion goes from an operation graph to its MLIR equivalent. You can read more about it [here](./mlir.md)
The actual compilation will be done by the **Concrete** compiler, which expects an MLIR input. The MLIR conversion goes from a computation graph to its MLIR equivalent. You can read more about it [here](./mlir.md).
## Example walkthrough #1
@@ -213,7 +211,7 @@ def f(x):
x = "encrypted"
```
#### Corresponding operation graph
#### Corresponding computation graph
![](../../_static/compilation-pipeline/two_x_plus_three.png)
@@ -263,7 +261,7 @@ x = "encrypted"
y = "encrypted"
```
#### Corresponding operation graph
#### Corresponding computation graph
![](../../_static/compilation-pipeline/forty_two_minus_x_plus_y_times_two.png)

View File

@@ -21,7 +21,7 @@ def quantized_sin(x):
The function `quantized_sin` is not directly supported by the compiler as is, because it has floating point intermediate values. However, looking at the function globally, we can see that it has a single integer input and a single integer output. Since we know the input range, we can compute a table representing the whole computation for each input value, which can later be lowered to a PBS (programmable bootstrapping) in the FHE world.
Any computation where there is a single variable integer input and a single integer output can be replaced by an equivalent table look-up.
Any computation where there is a single variable integer input and a single integer output can be replaced by an equivalent table lookup.
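For illustration, assuming `quantized_sin` maps a 6-bit unsigned integer to an integer (the exact definition below is an assumption made for this sketch), the whole computation can be tabulated ahead of time:
```python
import numpy as np


def quantized_sin(x):
    # Assumed definition, for illustration purposes only.
    return np.fabs(50 * np.sin(x)).astype(np.uint32)


# Tabulate the whole computation over the known 6-bit input range...
table = [int(quantized_sin(x)) for x in range(2 ** 6)]

# ...so that evaluating the function becomes a single table lookup,
# which can be lowered to a PBS in FHE.
assert all(table[x] == quantized_sin(x) for x in range(2 ** 6))
```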
The `quantized_sin` graph of operations:

View File

@@ -1,29 +1,3 @@
# MLIR
MLIR is the intermediate representation used by the **Concrete** compiler, so we need to convert the operation graph to MLIR. For a graph computing the dot product of two input tensors, the MLIR will look something like the following.
```
func @main(%arg0: tensor<4xi7>, %arg1: tensor<4x!FHE.eint<6>>) -> !FHE.eint<6> {
%0 = "FHE.dot_eint_int"(%arg1, %arg0) : (tensor<4x!FHE.eint<6>>, tensor<4xi7>) -> !FHE.eint<6>
return %0 : !FHE.eint<6>
}
```
The different steps of the transformation are depicted in the figure below. We will explain each part separately later on.
![MLIR Conversion](../../_static/mlir/MLIR_conversion.png)
The conversion uses as input the operation graph to convert, as well as a dictionary of node converter functions.
## Define function signature
The first step is to define the function signature (excluding the return value at this point). We will convert the input nodes' types to MLIR (e.g. convert `EncryptedTensor(Integer(64, is_signed=False), shape=(4,))` to `tensor<4xi64>`) and map their values to the arguments of the function. So if we had an operation graph with one `EncryptedScalar(Integer(7, is_signed=False))`, we would get an MLIR function like `func @main(%arg0 : !FHE.eint<7>) -> (<ret-type>)`. Note that the return type will be detected automatically later on, when returning MLIR values.
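To illustrate the type mapping above, here is a toy helper producing such MLIR type strings (a sketch, not the library's actual conversion code):
```python
def mlir_type(bit_width, is_encrypted, shape=()):
    # Encrypted integers map to !FHE.eint<N>, clear integers to iN.
    element = f"!FHE.eint<{bit_width}>" if is_encrypted else f"i{bit_width}"
    if shape:
        dims = "x".join(str(dimension) for dimension in shape)
        return f"tensor<{dims}x{element}>"
    return element


assert mlir_type(7, is_encrypted=True) == "!FHE.eint<7>"
assert mlir_type(7, is_encrypted=False, shape=(4,)) == "tensor<4xi7>"
assert mlir_type(6, is_encrypted=True, shape=(4,)) == "tensor<4x!FHE.eint<6>>"
```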
## Convert nodes in the OpGraph
After that, we will iterate over the operation graph, node by node, and fetch the appropriate conversion function for each node to do the conversion. Converters should be stored in a dictionary mapping a node to its converter function. All converters need to have the same signature `converter(node: IntermediateNode, preds: List[IntermediateNode], ir_to_mlir_node: dict, context: mlir.Context)`.
- The `node` is just the node to convert; it is used to get information about inputs and outputs. Each conversion might require a different set of information, so each converter fetches it separately.
- `preds` holds the operands of the operation, as they are the inputs of the converted `node`.
- The `ir_to_mlir_node` is a mutable dict that we update as we traverse the graph. It maps nodes to their respective MLIR values. We need it when creating an MLIR operation out of a node: the node's inputs become operands of the operation, but we can't use them as is, we need their MLIR values. The first nodes to be added are the input nodes, which map to the arguments of the MLIR function. Every time we convert a node to its MLIR equivalent, we add the mapping between the node and the MLIR value, so that whenever this node is used as input to another one, we can retrieve its MLIR value. This is also useful for knowing which MLIR value(s) to return at the end: since we can already identify the output node(s), it is easy to retrieve their MLIR values from this data structure.
- The `context` should be loaded with the required dialects to be able to create MLIR operations and types for the compiler.
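Putting these pieces together, the conversion loop described above could be sketched as follows (keying converters by node type, and the node/converter objects themselves, are assumptions made for this sketch, not the real API):
```python
import networkx as nx


def convert_graph_to_mlir(op_graph, converters, context, input_arguments):
    # Maps already-converted nodes to their MLIR values; seeded with the
    # mapping from input nodes to the MLIR function arguments.
    ir_to_mlir_node = dict(input_arguments)

    # Visit nodes in dependency order and dispatch each one to its converter.
    for node in nx.topological_sort(op_graph):
        if node in ir_to_mlir_node:
            continue  # input nodes are already mapped to function arguments
        preds = list(op_graph.predecessors(node))
        converter = converters[type(node)]
        ir_to_mlir_node[node] = converter(node, preds, ir_to_mlir_node, context)

    return ir_to_mlir_node
```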
TODO

View File

@@ -5,16 +5,16 @@
In this section we will go over some terms that we use throughout the project.
- intermediate representation
- a data structure to represent a calculation
- basically a computation graph where nodes are either inputs or operations on other nodes
- a data structure to represent a computation
- basically a computation graph in which nodes are either inputs, constants, or operations on other nodes
- tracing
- it is our technique to directly take a plain numpy function from a user and deduce its intermediate representation in a painless way for the user
- it is the technique of taking a python function from the user and generating the corresponding intermediate representation, in a way that is painless for the user
- bounds
- before intermediate representation is sent to the compiler, we need to know which node will output which type (e.g., uint3 vs uint5)
- there are several ways to do this but the simplest one is to evaluate the intermediate representation with all combinations of inputs and remember the maximum and the minimum values for each node, which is what we call bounds, and bounds can be used to determine the appropriate type for each node
- fhe circuit
- before intermediate representation is converted to MLIR, we need to know which node will output which type (e.g., uint3 vs uint5)
- there are several ways to do this, but the simplest one is to evaluate the intermediate representation with some combinations of inputs and remember the minimum and the maximum value of each node; these values are what we call bounds, and they can be used to determine the appropriate type for each node
- circuit
- it is the result of compilation
- it contains the operation graph and the compiler engine in it
- it is made of the computation graph and the compiler engine
- it has methods for printing, visualizing, and evaluating
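For instance, using the API shown elsewhere in these docs, a compiled circuit can be printed and evaluated like this (a small sketch):
```python
import concrete.numpy as cnp


@cnp.compiler({"x": "encrypted"})
def f(x):
    return x + 42


circuit = f.compile(range(10))

print(circuit)                         # print the computation graph
print(circuit.encrypt_run_decrypt(3))  # homomorphic evaluation, expected 45
```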
## Module structure
@@ -22,26 +22,11 @@ In this section we will go over some terms that we use throughout the project.
In this section, we will discuss the module structure of **concrete-numpy** briefly. You are encouraged to check individual `.py` files to learn more!
- concrete
- common: types and utilities that can be used by multiple frontends (e.g., numpy, torch)
- bounds_measurement: utilities for determining bounds of intermediate representation
- common_helpers: various utilities
- compilation: type definitions related to compilation (e.g., compilation config, compilation artifacts)
- data_types: type definitions of typing information of intermediate representation
- debugging: utilities for printing/displaying intermediate representation
- extensions: utilities that provide special functionality to our users
- fhe_circuit: class to hold the result of the compilation
- helpers: various helpers
- mlir: MLIR conversion module
- operator_graph: code to wrap and make manipulating networkx graphs easier
- optimization: optimization and simplification
- representation: type definitions of intermediate representation
- tracing: utilities for generic function tracing used during intermediate representation creation
- values: define the different types we use, including tensors and scalars, encrypted or clear
- numpy: numpy frontend of the package
- compile: compilation of a numpy function
- np_dtypes_helpers: utilities about types
- np_fhe_compiler: main API for compilation of numpy functions
- np_indexing_helpers: utilities for indexing
- np_inputset_helpers: utilities for inputsets
- np_mlir_converter: utilities for MLIR conversion
- tracing: tracing of numpy functions
- numpy
- dtypes: data type specifications
- values: value specifications (i.e., data type + shape + encryption status)
- representation: representation of computation
- tracing: tracing of python functions
- extensions: custom functionality which is not available in numpy (e.g., conv2d)
- mlir: mlir conversion
- compilation: compilation from python functions to circuits

View File

@@ -5,7 +5,7 @@
Everything you need to compile and execute homomorphic functions is included in a single module. You can import it like so:
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
```
## Defining a function to compile
@@ -41,22 +41,11 @@ Finally, we can compile our function to its homomorphic equivalent.
<!--pytest-codeblocks:cont-->
```python
compiler = hnp.NPFHECompiler(
f, {"x": x, "y": y},
)
circuit = compiler.compile_on_inputset(inputset)
compiler = cnp.Compiler(f, {"x": x, "y": y})
circuit = compiler.compile(inputset)
# If you want, you can separate tracing and compilation steps like so:
# You can either evaluate in one go:
compiler.eval_on_inputset(inputset)
# Or progressively:
for input_values in inputset:
    compiler(*input_values)
# You can print the traced graph:
print(str(compiler))
# You can print the compiled circuit:
print(circuit)
# Outputs
@@ -66,29 +55,26 @@ print(str(compiler))
# return %2
# Or draw it
compiler.draw_graph(show=True)
circuit = compiler.get_compiled_fhe_circuit()
circuit.draw(show=True)
```
Here is the graph from the previous code block drawn with `draw_graph`:
Here is the graph from the previous code block drawn with `draw`:
![Drawn graph of previous code block](../../_static/howto/compiling_and_executing_example_graph.png)
## Performing homomorphic evaluation
You can use the `.encrypt_run_decrypt(...)` method of the `FHECircuit` returned by `hnp.compile_numpy_function(...)` to perform fully homomorphic evaluation. Here are some examples:
You can use the `.run(...)` method of `Circuit` to perform fully homomorphic evaluation. Here are some examples:
<!--pytest-codeblocks:cont-->
```python
circuit.encrypt_run_decrypt(3, 4)
circuit.run(3, 4)
# 7
circuit.encrypt_run_decrypt(1, 2)
circuit.run(1, 2)
# 3
circuit.encrypt_run_decrypt(7, 7)
circuit.run(7, 7)
# 14
circuit.encrypt_run_decrypt(0, 0)
circuit.run(0, 0)
# 0
```
@@ -97,20 +83,11 @@ Be careful about the inputs, though.
If you were to run with values outside the range of the inputset, the result might not be correct.
```
While `.encrypt_run_decrypt(...)` is a good start for prototyping, more advanced usage requires control over the different steps happening behind the scenes, namely key generation, encryption, execution, and decryption. These steps can of course be called separately, as in the example below:
<!--pytest-codeblocks:cont-->
```python
# generate keys required for encrypted computation
circuit.keygen()
# this will encrypt arguments that require encryption and pack all arguments
# as well as public materials (public keys)
public_args = circuit.encrypt(3, 4)
# this will run the encrypted computation using public materials and inputs provided
encrypted_result = circuit.run(public_args)
# the execution returns the encrypted result which can later be decrypted
decrypted_result = circuit.decrypt(encrypted_result)
```
Today, we cannot simulate a client / server API in python, but it is coming very soon. Then, we will have:
- a `keygen` API, which is used to generate both public and private keys
- an `encrypt` API, which happens on the user's device and uses private keys
- a `run_inference` API, which happens on the untrusted server and only uses public material
- a `decrypt` API, which happens on the user's device to get the final clear result, and uses private keys
## Further reading

View File

@@ -2,64 +2,38 @@
In this section, we list the operations currently supported in **Concrete Numpy**. Please have a look at the numpy [documentation](https://numpy.org/doc/stable/user/index.html) to learn what these operations do.
## Unary operations
<!--- gen_supported_ufuncs.py: inject supported operations [BEGIN] -->
<!--- do not edit, auto generated part by `python3 gen_supported_ufuncs.py` in docker -->
List of supported unary functions:
List of supported functions:
- absolute
- add
- arccos
- arccosh
- arcsin
- arcsinh
- arctan
- arctan2
- arctanh
- bitwise_and
- bitwise_or
- bitwise_xor
- cbrt
- ceil
- clip
- concatenate
- copysign
- cos
- cosh
- deg2rad
- degrees
- dot
- equal
- exp
- exp2
- expm1
- fabs
- floor
- isfinite
- isinf
- isnan
- log
- log10
- log1p
- log2
- logical_not
- negative
- positive
- rad2deg
- radians
- reciprocal
- rint
- sign
- signbit
- sin
- sinh
- spacing
- sqrt
- square
- tan
- tanh
- trunc
## Binary operations
List of supported binary functions if one of the two operators is a constant scalar:
- arctan2
- bitwise_and
- bitwise_or
- bitwise_xor
- copysign
- equal
- float_power
- floor
- floor_divide
- fmax
- fmin
@@ -69,24 +43,54 @@ List of supported binary functions if one of the two operators is a constant sca
- greater_equal
- heaviside
- hypot
- invert
- isfinite
- isinf
- isnan
- lcm
- ldexp
- left_shift
- less
- less_equal
- log
- log10
- log1p
- log2
- logaddexp
- logaddexp2
- logical_and
- logical_not
- logical_or
- logical_xor
- matmul
- maximum
- minimum
- multiply
- negative
- nextafter
- not_equal
- positive
- power
- rad2deg
- radians
- reciprocal
- remainder
- reshape
- right_shift
- rint
- sign
- signbit
- sin
- sinh
- spacing
- sqrt
- square
- subtract
- sum
- tan
- tanh
- true_divide
- trunc
<!--- gen_supported_ufuncs.py: inject supported operations [END] -->
# Shapes

View File

@@ -9,13 +9,13 @@ You get a compilation error. Here is an example:
<!--pytest-codeblocks:skip-->
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
@cnp.compiler({"x": "encrypted"})
def f(x):
    return 42 * x
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(range(2 ** 3))
circuit = f.compile(range(2 ** 3))
```
results in

View File

@@ -115,17 +115,17 @@ return %1
Manual exports are mostly used for visualization. Nonetheless, they can be very useful for demonstrations. Here is how to do it:
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
import numpy as np
import pathlib
artifacts = cnp.CompilationArtifacts("/tmp/custom/export/path")
@cnp.compiler({"x": "encrypted"}, artifacts=artifacts)
def f(x):
    return 127 - (50 * (np.sin(x) + 1)).astype(np.uint32)
artifacts = hnp.CompilationArtifacts(pathlib.Path("/tmp/custom/export/path"))
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"}, compilation_artifacts=artifacts)
compiler.compile_on_inputset(range(2 ** 3))
f.compile(range(2 ** 3))
artifacts.export()
```

View File

@@ -9,16 +9,15 @@ Here are some examples of constant indexing:
### Extracting a single element
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
import numpy as np
@cnp.compiler({"x": "encrypted"})
def f(x):
    return x[1]
inputset = [np.random.randint(0, 2 ** 3, size=(3,), dtype=np.uint8) for _ in range(10)]
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(inputset)
circuit = f.compile(inputset)
test_input = np.array([4, 2, 6], dtype=np.uint8)
expected_output = 2
@@ -29,16 +28,15 @@ assert np.array_equal(circuit.encrypt_run_decrypt(test_input), expected_output)
You can use negative indexing.
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
import numpy as np
@cnp.compiler({"x": "encrypted"})
def f(x):
    return x[-1]
inputset = [np.random.randint(0, 2 ** 3, size=(3,), dtype=np.uint8) for _ in range(10)]
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(inputset)
circuit = f.compile(inputset)
test_input = np.array([4, 2, 6], dtype=np.uint8)
expected_output = 6
@@ -49,16 +47,15 @@ assert np.array_equal(circuit.encrypt_run_decrypt(test_input), expected_output)
You can use multidimensional indexing as well.
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
import numpy as np
@cnp.compiler({"x": "encrypted"})
def f(x):
    return x[-1, 1]
inputset = [np.random.randint(0, 2 ** 3, size=(3, 2), dtype=np.uint8) for _ in range(10)]
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(inputset)
circuit = f.compile(inputset)
test_input = np.array([[4, 2], [1, 5], [7, 6]], dtype=np.uint8)
expected_output = 6
@@ -69,16 +66,15 @@ assert np.array_equal(circuit.encrypt_run_decrypt(test_input), expected_output)
### Extracting a slice
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
import numpy as np
@cnp.compiler({"x": "encrypted"})
def f(x):
    return x[1:4]
inputset = [np.random.randint(0, 2 ** 3, size=(5,), dtype=np.uint8) for _ in range(10)]
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(inputset)
circuit = f.compile(inputset)
test_input = np.array([4, 2, 6, 1, 7], dtype=np.uint8)
expected_output = np.array([2, 6, 1], dtype=np.uint8)

View File

@@ -7,9 +7,9 @@ In this tutorial, we are going to go over the ways to perform direct table looku
**Concrete Numpy** provides a special class to allow direct table lookups. Here is how to use it:
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
table = hnp.LookupTable([2, 1, 3, 0])
table = cnp.LookupTable([2, 1, 3, 0])
def f(x):
    return table[x]
@@ -47,12 +47,12 @@ Sometimes you may want to apply a different lookup table to each value in a tens
<!--pytest-codeblocks:skip-->
```python
import concrete.numpy as hnp
import concrete.numpy as cnp
squared = hnp.LookupTable([i ** 2 for i in range(4)])
cubed = hnp.LookupTable([i ** 3 for i in range(4)])
squared = cnp.LookupTable([i ** 2 for i in range(4)])
cubed = cnp.LookupTable([i ** 3 for i in range(4)])
table = hnp.MultiLookupTable([
table = cnp.MultiLookupTable([
    [squared, cubed],
    [squared, cubed],
    [squared, cubed],
@@ -118,7 +118,7 @@ Internally, it uses the following lookup table
<!--pytest-codeblocks:skip-->
```python
table = hnp.LookupTable([50, 92, 95, 57, 12, 2, 36, 82])
table = cnp.LookupTable([50, 92, 95, 57, 12, 2, 36, 82])
```
which is calculated by:
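Assuming the function used earlier in this tutorial is `(50 * (np.sin(x) + 1)).astype(np.uint32)` over 3-bit inputs (an assumption made for this sketch), the entries above can be reproduced like so:
```python
import numpy as np


# Assumed function for this sketch, not necessarily the tutorial's exact code.
def f(x):
    return (50 * (np.sin(x) + 1)).astype(np.uint32)


table_entries = [int(f(x)) for x in range(2 ** 3)]
print(table_entries)
# [50, 92, 95, 57, 12, 2, 36, 82]
```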

View File

@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"import concrete.numpy as hnp\n",
"import concrete.numpy as cnp\n",
"import inspect\n",
"import numpy as np"
]
@@ -535,8 +535,8 @@
],
"source": [
"for operation in supported_operations:\n",
" compiler = hnp.NPFHECompiler(operation, {\"x\": \"encrypted\"})\n",
" circuit = compiler.compile_on_inputset(inputset)\n",
" compiler = cnp.Compiler(operation, {\"x\": \"encrypted\"})\n",
" circuit = compiler.compile(inputset)\n",
" \n",
" # We setup an example tensor that will be encrypted and passed on to the current operation\n",
" sample = np.random.randint(3, 11, size=(3, 2), dtype=np.uint8)\n",

View File

@@ -3,17 +3,16 @@
## An example
```python
import concrete.numpy as cnp
import numpy as np
import concrete.numpy as hnp
# Function using floating point values, converted back to integers at the end
@cnp.compiler({"x": "encrypted"})
def f(x):
    return np.fabs(50 * (2 * np.sin(x) * np.cos(x))).astype(np.uint32)
# astype is to go back to the integer world
# Compiling with x encrypted
compiler = hnp.NPFHECompiler(f, {"x": "encrypted"})
circuit = compiler.compile_on_inputset(range(64))
circuit = f.compile(range(64))
print(circuit.encrypt_run_decrypt(3) == f(3))
print(circuit.encrypt_run_decrypt(0) == f(0))

View File

@@ -1,17 +1,13 @@
"""Update list of supported functions in the doc."""
import argparse
from concrete.numpy import tracing
from concrete.numpy.tracing import Tracer
def main(file_to_update):
"""Update list of supported functions in file_to_update"""
supported_unary_ufunc = sorted(
f.__name__ for f in tracing.NPTracer.LIST_OF_SUPPORTED_UFUNC if f.nin == 1
)
supported_binary_ufunc = sorted(
f.__name__ for f in tracing.NPTracer.LIST_OF_SUPPORTED_UFUNC if f.nin == 2
)
supported_func = sorted(f.__name__ for f in Tracer.SUPPORTED_NUMPY_OPERATORS)
with open(file_to_update, "r", encoding="utf-8") as file:
lines = file.readlines()
@@ -40,20 +36,9 @@ def main(file_to_update):
keep_line = True
# Inject the supported functions
newlines.append("List of supported unary functions:\n")
newlines.append("List of supported functions:\n")
newlines.extend(f"- {f}\n" for f in supported_unary_ufunc)
newlines.append("\n")
newlines.append("## Binary operations\n")
newlines.append("\n")
newlines.append(
"List of supported binary functions if one of the "
"two operators is a constant scalar:\n"
)
newlines.extend(f"- {f}\n" for f in supported_binary_ufunc)
newlines.extend(f"- {f}\n" for f in supported_func)
newlines.append(line)
else: