mirror of
https://github.com/zama-ai/concrete.git
synced 2026-04-17 03:00:54 -04:00
docs(frontend): include docs changes from #593
@@ -99,7 +99,7 @@ Additional kwargs to `compile` functions take higher precedence. So if you set t
 * **compiler_debug_mode**: bool = False,
 * Enable/disable debug mode of the compiler. This can show a lot of information, including passes and pattern rewrites.
 * **compiler_verbose_mode**: bool = False,
-* Enable/disable verbose mode of the compiler. This mainly show logs from the compiler, and is less verbose than the debug mode.
+* Enable/disable verbose mode of the compiler. This mainly shows logs from the compiler, and is less verbose than the debug mode.
 * **comparison_strategy_preference**: Optional[Union[ComparisonStrategy, str, List[Union[ComparisonStrategy, str]]]] = None
 * Specify preference for comparison strategies, can be a single strategy or an ordered list of strategies. See [Comparisons](../tutorial/comparisons.md) to learn more.
 * **bitwise_strategy_preference**: Optional[Union[BitwiseStrategy, str, List[Union[BitwiseStrategy, str]]]] = None
@@ -90,7 +90,7 @@ return %1

 #### traceback.txt

-This file contains information about the error you received.
+This file contains information about the error that was received.

 ```
 Traceback (most recent call last):
@@ -204,7 +204,7 @@ Subgraphs:

 #### mlir.txt

-This file contains information about the MLIR of the function you compiled using the inputset you provided.
+This file contains information about the MLIR of the function which was compiled using the provided inputset.

 ```
 module {
@@ -300,7 +300,7 @@ You can seek help with your issue by asking a question directly in the [communit

 ## Submitting an issue

-If you cannot find a solution in the community forum, or you found a bug in the library, you could create an issue in our GitHub repository.
+If you cannot find a solution in the community forum, or if you have found a bug in the library, you could create an issue in our GitHub repository.

 In case of a bug, try to:

@@ -32,7 +32,7 @@ result = np.sum(result_chunks)
 ### Notes

 - Signed bitwise operations are not supported.
-- Optimal chunk size is selected automatically to reduce the number of table lookups.
+- The optimal chunk size is selected automatically to reduce the number of table lookups.
 - Chunked bitwise operations result in at least 4 and at most 9 table lookups.
 - It is used if no other implementation can be used.

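The chunked decomposition that this hunk's notes refer to (ending in `result = np.sum(result_chunks)`) can be sketched in cleartext Python. This is only an illustrative sketch under assumed names and parameters, not Concrete's API; in the encrypted implementation, each packed chunk pair is processed with a table lookup rather than native integer operations:

```python
import numpy as np

def chunked_and(x, y, bit_width=6, chunk_size=2):
    # Split both operands into chunk_size-bit chunks and combine the
    # per-chunk results; in FHE each chunk pair would be packed and
    # fed through a single table lookup.
    mask = (1 << chunk_size) - 1
    result_chunks = []
    for offset in reversed(range(0, bit_width, chunk_size)):
        x_chunk = (x >> offset) & mask
        y_chunk = (y >> offset) & mask
        # pack the two chunks into one value, then "look up" their AND
        packed = (x_chunk << chunk_size) | y_chunk
        and_of_chunks = (packed >> chunk_size) & (packed & mask)
        result_chunks.append(and_of_chunks << offset)
    return int(np.sum(result_chunks))
```

One lookup per chunk pair plus the final sum is why the number of table lookups grows with the number of chunks.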
@@ -78,10 +78,10 @@ module {
 %cst_0 = arith.constant dense<[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]> : tensor<16xi64>
 %1 = "FHE.apply_lookup_table"(%arg1, %cst_0) : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>

-// packing first chunks
+// packing the first chunks
 %2 = "FHE.add_eint"(%0, %1) : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>

-// applying the bitwise operation to first chunks, adjusted for addition in the end
+// applying the bitwise operation to the first chunks, adjusted for addition in the end
 %cst_1 = arith.constant dense<[0, 0, 0, 0, 0, 4, 0, 4, 0, 0, 8, 8, 0, 4, 8, 12]> : tensor<16xi64>
 %3 = "FHE.apply_lookup_table"(%2, %cst_1) : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>

@@ -93,7 +93,7 @@ module {
 %cst_3 = arith.constant dense<[0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]> : tensor<16xi64>
 %5 = "FHE.apply_lookup_table"(%arg1, %cst_3) : (!FHE.eint<4>, tensor<16xi64>) -> !FHE.eint<4>

-// packing second chunks
+// packing the second chunks
 %6 = "FHE.add_eint"(%4, %5) : (!FHE.eint<4>, !FHE.eint<4>) -> !FHE.eint<4>

 // applying the bitwise operation to second chunks
@@ -114,7 +114,7 @@ module {

 This implementation uses the fact that we can combine two values into a single value and apply a single table lookup to this combined value!

-There are two major problems with this implementation though:
+There are two major problems with this implementation:
 1) packing requires the same bit-width across operands.
 2) packing requires the bit-width of at least `x.bit_width + y.bit_width` and that bit-width cannot exceed maximum TLU bit-width, which is `16` at the moment.

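The packing trick described in this hunk can be illustrated in cleartext Python (function names are mine, not Concrete's; on encrypted data the packing is an addition of scaled ciphertexts and the lookup is a TLU):

```python
def pack(x, y, y_bit_width):
    # Combine two values into one; the result needs
    # x_bit_width + y_bit_width bits, which is problem (2) above.
    return (x << y_bit_width) | y

def bitwise_lut(op, x_bit_width, y_bit_width):
    # One table entry per possible packed value, so the whole bitwise
    # operation becomes a single lookup on pack(x, y).
    y_mask = (1 << y_bit_width) - 1
    return [
        op(p >> y_bit_width, p & y_mask)
        for p in range(1 << (x_bit_width + y_bit_width))
    ]

and_lut = bitwise_lut(lambda a, b: a & b, 3, 3)
```

For two 3-bit operands the table has 64 entries; at 8 bits per operand it would already need a 16-bit TLU, which is the current maximum.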
@@ -191,7 +191,7 @@ module {

 ### 2. fhe.BitwiseStrategy.THREE_TLU_CASTED

-This strategy will not put any constraint in bit-widths during bit-width assignment, instead operands are cast to a bit-width that can store `pack(x, y)` during runtime using table lookups. The idea is:
+This strategy will not put any constraint on bit-widths during bit-width assignment, instead operands are cast to a bit-width that can store `pack(x, y)` during runtime using table lookups. The idea is:

 ```python
 uint3_to_uint9_lut = fhe.LookupTable([...])
@@ -271,7 +271,7 @@ module {

 ### 3. fhe.BitwiseStrategy.TWO_TLU_BIGGER_PROMOTED_SMALLER_CASTED

-This strategy is like the middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store `pack(x, y)`, and the smaller operand will be cast to that bit-width during runtime. The idea is:
+This strategy can be viewed as a middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store `pack(x, y)`, and the smaller operand will be cast to that bit-width during runtime. The idea is:

 ```python
 uint3_to_uint9_lut = fhe.LookupTable([...])
@@ -287,7 +287,7 @@ result = comparison_lut[x_cast_to_uint9 - y_promoted_to_uint9]

 #### Pros

-- It will only put constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
+- It will only put a constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
 - It will result in at most 2 table lookups, which is great.

 #### Cons
@@ -445,7 +445,7 @@ The same configuration option is used to modify the behavior of encrypted shift

 ### With promotion

-In this way, shifted operand and shift result is assigned the same bit-width during bit-width assignment, which avoids an additional TLU on the shifted operand, but it might increase the bit-width of the result or the shifted operand, and if they're used in other costly operations, it could result in significant slowdowns. This is the default behavior.
+Here, the shifted operand and shift result are assigned the same bit-width during bit-width assignment, which avoids an additional TLU on the shifted operand. On the other hand, it might increase the bit-width of the result or the shifted operand, and if they're used in other costly operations, it could result in significant slowdowns. This is the default behavior.

 ```python
 import numpy as np
@@ -514,7 +514,7 @@ module {

 ### With casting

-Approach described above could be suboptimal for some circuits, so it is advised to check the complexity with it disabled before production. Here is how the implementation changes with it disabled.
+The approach described above could be suboptimal for some circuits, so it is advised to check the complexity with it disabled before production. Here is how the implementation changes with it disabled.

 ```python
 import numpy as np
@@ -35,11 +35,11 @@ for chunk_comparison in chunk_comparisons[1:]:

 ### Notes

-- Signed comparisons are a bit more complex to explain, but they are supported!
-- Optimal chunk size is selected automatically to reduce the number of table lookups.
+- Signed comparisons are more complex to explain, but they are supported!
+- The optimal chunk size is selected automatically to reduce the number of table lookups.
 - Chunked comparisons result in at least 5 and at most 13 table lookups.
 - It is used if no other implementation can be used.
-- `==` and `!=` is using a different chunk comparison and reduction strategy with less table lookups.
+- `==` and `!=` are using a different chunk comparison and reduction strategy with less table lookups.

 ### Pros

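The chunked comparison that the `for chunk_comparison in chunk_comparisons[1:]:` loop in this hunk's header reduces over can be sketched in cleartext Python. This is an assumed, illustrative version for unsigned operands only; the encrypted implementation replaces the data-dependent branch with per-chunk table lookups followed by a reduction:

```python
def chunked_less_than(x, y, bit_width=6, chunk_size=3):
    # Walk the chunks from most significant to least significant;
    # the first pair of chunks that differ decides the comparison.
    mask = (1 << chunk_size) - 1
    for offset in reversed(range(0, bit_width, chunk_size)):
        x_chunk = (x >> offset) & mask
        y_chunk = (y >> offset) & mask
        if x_chunk != y_chunk:
            return x_chunk < y_chunk
    return False
```

Each chunk comparison costs table lookups, which is why the chunked strategy needs at least 5 and up to 13 of them.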
@@ -120,7 +120,7 @@ module {

 This implementation uses the fact that `x [<,<=,==,!=,>=,>] y` is equal to `x - y [<,<=,==,!=,>=,>] 0`, which is just a subtraction and a table lookup!

-There are two major problems with this implementation though:
+There are two major problems with this implementation:
 1) subtraction before the TLU requires up to 2 additional bits to avoid overflows (it is 1 in most cases).
 2) subtraction requires the same bit-width across operands.

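The subtraction trick from this hunk can be illustrated in cleartext Python for the common case of two unsigned operands of the same bit-width, where one extra bit suffices (names and the sign-table construction are mine, not Concrete's):

```python
def less_than_via_subtraction(x, y, bit_width):
    # x < y  <=>  x - y < 0. The difference needs one extra sign bit,
    # and the sign is then read off with a single table lookup.
    diff_bit_width = bit_width + 1
    sign_lut = [
        1 if d >= (1 << bit_width) else 0  # negative in two's complement
        for d in range(1 << diff_bit_width)
    ]
    diff = (x - y) % (1 << diff_bit_width)  # wrap-around subtraction
    return bool(sign_lut[diff])
```

Problem (1) shows up in `diff_bit_width`, and problem (2) is why mixed-width operands must first be promoted or cast.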
@@ -134,7 +134,7 @@ What this means is if we are comparing `uint3` and `uint6`, we need to convert b

 ### 1. fhe.ComparisonStrategy.ONE_TLU_PROMOTED

-This strategy makes sure that during bit-width assignment, both operands are assigned the same bit-width, and that bit-width contains at least the amount of bits required to store `x - y`. The idea is:
+This strategy makes sure that during bit-width assignment, both operands are assigned the same bit-width, and that bit-width contains at least the number of bits required to store `x - y`. The idea is:

 ```python
 comparison_lut = fhe.LookupTable([...])
@@ -196,7 +196,7 @@ module {

 ### 2. fhe.ComparisonStrategy.THREE_TLU_CASTED

-This strategy will not put any constraint in bit-widths during bit-width assignment, instead operands are cast to a bit-width that can store `x - y` during runtime using table lookups. The idea is:
+This strategy will not put any constraint on bit-widths during bit-width assignment, instead operands are cast to a bit-width that can store `x - y` during runtime using table lookups. The idea is:

 ```python
 uint3_to_uint7_lut = fhe.LookupTable([...])
@@ -211,12 +211,12 @@ result = comparison_lut[x_cast_to_uint7 - y_cast_to_uint7]

 #### Notes

-- It can result in a single table lookup as well, if x and y are assigned (because of other operations) the same bit-width, and that bit-width can store `x - y`.
-- Or in two table lookups if only one of the operands is assigned a bit-width bigger than or equal to the bit width that can store `x - y`.
+- It can result in a single table lookup, if x and y are assigned (because of other operations) the same bit-width and that bit-width can store `x - y`.
+- Alternatively, two table lookups can be used if only one of the operands is assigned a bit-width bigger than or equal to the bit width that can store `x - y`.

 #### Pros

-- It will not put any constraints on bit-widths of the operands, which is amazing if they are used in other costly operations.
+- It will not put any constraints on the bit-widths of the operands, which is amazing if they are used in other costly operations.
 - It will result in at most 3 table lookups, which is still good.

 #### Cons
@@ -274,7 +274,7 @@ module {

 ### 3. fhe.ComparisonStrategy.TWO_TLU_BIGGER_PROMOTED_SMALLER_CASTED

-This strategy is like the middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store `x - y`, and the smaller operand will be cast to that bit-width during runtime. The idea is:
+This strategy can be seen as a middle ground between the two strategies described above. With this strategy, only the bigger operand will be constrained to have at least the required bit-width to store `x - y`, and the smaller operand will be cast to that bit-width during runtime. The idea is:

 ```python
 uint3_to_uint7_lut = fhe.LookupTable([...])
@@ -286,16 +286,16 @@ result = comparison_lut[x_cast_to_uint7 - y_promoted_to_uint7]

 #### Notes

-- It can result in a single table lookup as well, if the smaller operand is assigned (because of other operations) the same bit-width as the bigger operand.
+- It can result in a single table lookup, if the smaller operand is assigned (because of other operations) the same bit-width as the bigger operand.

 #### Pros

-- It will only put constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
+- It will only put a constraint on the bigger operand, which is great if the smaller operand is used in other costly operations.
 - It will result in at most 2 table lookups, which is great.

 #### Cons

-- It will increase the bit-width of the bigger operand which can result in significant slowdowns if the bigger operand is used in other costly operations.
+- It will increase the bit-width of the bigger operand, which can result in significant slowdowns if the bigger operand is used in other costly operations.
 - If you are not doing anything else with the smaller operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to `fhe.ComparisonStrategy.THREE_TLU_CASTED`.

 #### Example
@@ -349,7 +349,7 @@ module {

 ### 4. fhe.ComparisonStrategy.TWO_TLU_BIGGER_CASTED_SMALLER_PROMOTED

-This strategy is like the exact opposite of the strategy above. With this, only the smaller operand will be constrained to have at least the required bit-width, and the bigger operand will be cast during runtime. The idea is:
+This strategy can be seen as the exact opposite of the strategy above. With this, only the smaller operand will be constrained to have at least the required bit-width, and the bigger operand will be cast during runtime. The idea is:

 ```python
 uint6_to_uint7_lut = fhe.LookupTable([...])
@@ -361,16 +361,16 @@ result = comparison_lut[x_promoted_to_uint7 - y_cast_to_uint7]

 #### Notes

-- It can result in a single table lookup as well, if the bigger operand is assigned (because of other operations) the same bit-width as the smaller operand.
+- It can result in a single table lookup, if the bigger operand is assigned (because of other operations) the same bit-width as the smaller operand.

 #### Pros

-- It will only put constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
+- It will only put a constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
 - It will result in at most 2 table lookups, which is great.

 #### Cons

-- It will increase the bit-width of the smaller operand which can result in significant slowdowns if the smaller operand is used in other costly operations.
+- It will increase the bit-width of the smaller operand, which can result in significant slowdowns if the smaller operand is used in other costly operations.
 - If you are not doing anything else with the bigger operand, or doing less costly operations compared to comparison, it could introduce an unnecessary table lookup and slow down execution compared to `fhe.ComparisonStrategy.THREE_TLU_CASTED`.

 #### Example
@@ -424,10 +424,10 @@ module {

 ## Clipping Trick

-This implementation uses the fact that the subtraction trick is not optimal in terms of the required intermediate bit width. Comparison result does not change if we `compare(3, 40)` or `compare(3, 4)`, so why not clipping the bigger operand and then doing the subtraction to use less bits!
+This implementation uses the fact that the subtraction trick is not optimal in terms of the required intermediate bit width. The comparison result does not change if we `compare(3, 40)` or `compare(3, 4)`, so why not clipping the bigger operand and then doing the subtraction to use less bits!

-There are two major problems with this implementation as well though:
-1) it can not be used when bit-widths are the same (for some cases even when they differ by only one bit)
+There are two major problems with this implementation:
+1) it can not be used when the bit-widths are the same (for some cases even when they differ by only one bit)
 2) subtraction still requires the same bit-width across operands.

 What this means is if we are comparing `uint3` and `uint6`, we need to convert both of them to `uint4` in some way to do the subtraction and proceed with the TLU in 7-bits. There are 2 ways to achieve this behavior.
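The clipping idea from this hunk can be sketched in cleartext Python for the `uint3` vs `uint6` example above (an illustrative sketch under assumed names, not Concrete's implementation; the real version performs the clip, the cast, and the final comparison with table lookups):

```python
def less_than_with_clipping(x, y, x_bit_width=3):
    # x is the smaller-bit-width operand (uint3 here), y the bigger one.
    # Whenever y >= 2**x_bit_width, x < y holds regardless of y's exact
    # value, so y can be clipped without changing the result.
    y_clipped = min(y, 1 << x_bit_width)  # fits in x_bit_width + 1 bits
    # The subtraction trick now only needs the clipped bit-width
    # instead of the full bit-width of y.
    return x < y_clipped
```

Clipping `compare(3, 40)` down to `compare(3, 8)` is exactly why fewer intermediate bits suffice.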
@@ -456,7 +456,7 @@ What this means is if we are comparing `uint3` and `uint6`, we need to convert b

 ### 1. fhe.ComparisonStrategy.THREE_TLU_BIGGER_CLIPPED_SMALLER_CASTED

-This strategy will not put any constraint in bit-widths during bit-width assignment, instead the smaller operand is cast to a bit-width that can store `clipped(bigger) - smaller` or `smaller - clipped(bigger)` during runtime using table lookups. The idea is:
+This strategy will not put any constraint on bit-widths during bit-width assignment, instead the smaller operand is cast to a bit-width that can store `clipped(bigger) - smaller` or `smaller - clipped(bigger)` during runtime using table lookups. The idea is:

 ```python
 uint3_to_uint4_lut = fhe.LookupTable([...])
@@ -474,14 +474,14 @@ result = another_comparison_lut[y_clipped - x_cast_to_uint4]

 #### Notes

-- This is a fallback implementation, so if there is a difference of 1-bit (or in some cases 2-bits) and subtraction trick cannot be used optimally, this implementation will be used instead of `fhe.ComparisonStrategy.CHUNKED`.
+- This is a fallback implementation, so if there is a difference of 1-bit (or in some cases 2-bits) and the subtraction trick cannot be used optimally, this implementation will be used instead of `fhe.ComparisonStrategy.CHUNKED`.
 - It can result in two table lookups if the smaller operand is assigned a bit-width bigger than or equal to the bit width that can store `clipped(bigger) - smaller` or `smaller - clipped(bigger)`.

 #### Pros

-- It will not put any constraints on bit-widths of the operands, which is amazing if they are used in other costly operations.
+- It will not put any constraints on the bit-widths of the operands, which is amazing if they are used in other costly operations.
 - It will result in at most 3 table lookups, which is still good.
-- And those table lookups will be on smaller bit-widths, which is great.
+- These table lookups will be on smaller bit-widths, which is great.

 #### Cons

@@ -556,12 +556,12 @@ result = another_comparison_lut[y_clipped - x_promoted_to_uint4]

 #### Pros

-- It will only put constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
+- It will only put a constraint on the smaller operand, which is great if the bigger operand is used in other costly operations.
 - It will result in exactly 2 table lookups, which is great.

 #### Cons

-- It will increase the bit-width of the bigger operand which can result in significant slowdowns if the bigger operand is used in other costly operations.
+- It will increase the bit-width of the bigger operand, which can result in significant slowdowns if the bigger operand is used in other costly operations.

 #### Example

@@ -293,9 +293,9 @@ You'd expect all of `a`, `b`, and `c` to be 8-bits, but because inputset is very
 return %5
 ```

-The first solution in these cases should be to use a bigger inputset, but it can still be tricky to solve with the inputset. That's where `hint` extension comes into play. Hints are a way to provide extra information to compilation process:
+The first solution in these cases should be to use a bigger inputset, but it can still be tricky to solve with the inputset. That's where the `hint` extension comes into play. Hints are a way to provide extra information to compilation process:

-- Bit-width hints are for constraining the minimum number of bits in the encoded the value. If you hint a value to be 8-bits, it means it should be at least `uint8` or `int8`.
+- Bit-width hints are for constraining the minimum number of bits in the encoded value. If you hint a value to be 8-bits, it means it should be at least `uint8` or `int8`.

 To fix `f` using hints, you can do:

@@ -1,6 +1,6 @@
 # Multi Parameters

-Integers in Concrete are encrypted and processed according to a set of cryptographic parameters. By default, multiple of such parameters are selected by Concrete Optimizer. This might not be the best approach for every use case and there is the option to use mono parameters.
+Integers in Concrete are encrypted and processed according to a set of cryptographic parameters. By default, multiple sets of such parameters are selected by the Concrete Optimizer. This might not be the best approach for every use case, and there is the option to use mono parameters instead.

 When multi parameters are enabled, a different set of parameters are selected for each bit-width in the circuit, which results in:
 - Faster execution (generally).
@@ -10,7 +10,7 @@ Concrete analyzes all compiled circuits and calculates some statistics. These st
 - **packing key switch:** building block for table lookups
 - **programmable bootstrapping:** building block for table lookups

-You can print all statistics using `show_statistics` configuration option:
+You can print all statistics using the `show_statistics` configuration option:

 ```python
 from concrete import fhe
@@ -69,7 +69,7 @@ Each of these properties can be directly accessed on the circuit (e.g., `circuit

 Circuit analysis also considers [tags](../tutorial/tagging.md)!

-Imagine you have a neural network with 10 layers, each of them tagged. You can easily see the amount of additions and multiplications required for matrix multiplications per layer:
+Imagine you have a neural network with 10 layers, each of them tagged. You can easily see the number of additions and multiplications required for matrix multiplications per layer:

 ```
 Statistics