Mirror of https://github.com/0xPARC/plonkathon.git (synced 2026-01-08 21:28:02 -05:00)

Commit: Plonkathon repo setup
README.md | 251
# PlonKathon

**PlonKathon** is part of the program for [MIT IAP 2023's *Modern Zero Knowledge Cryptography*](https://zkiap.com/). Over the course of this weekend, we will get into the weeds of the PlonK protocol, as described in https://eprint.iacr.org/2019/953.pdf (see also https://vitalik.ca/general/2019/09/22/plonk.html), through a series of exercises and extensions. This repository contains a simple Python implementation of PlonK adapted from [py_plonk](https://github.com/ethereum/research/tree/master/py_plonk), targeted to be close to compatible with the implementation at https://zkrepl.dev. It includes:

* A simple programming language for describing circuits, which it can compile into the forms needed for a PLONK proof (the QL, QR, QM, QO, QC, S1, S2, S3 polynomials)
* A prover that can generate proofs for this language, given a list of variable assignments
* A verifier that can verify these proofs

Full compatibility is achieved in some cases: for simple programs, this implementation is capable of outputting verification keys that _exactly_ match the https://zkrepl.dev output. See the tests in test.py for some examples.

### Exercises

#### Prover

1. Implement Round 1 of the PlonK prover
2. Implement Round 2 of the PlonK prover
3. Implement Round 3 of the PlonK prover
4. Implement Round 4 of the PlonK prover
5. Implement Round 5 of the PlonK prover

### Extensions

1. Add support for custom gates.

   [TurboPlonK](https://docs.zkproof.org/pages/standards/accepted-workshop3/proposal-turbo_plonk.pdf) introduced support for custom constraints, beyond the addition and multiplication gates supported here. Try to generalise this implementation to allow circuit writers to define custom constraints.

2. Add zero-knowledge.

   The parts of PlonK that are responsible for ensuring strong privacy are left out of this implementation. See if you can identify them in the [original paper](https://eprint.iacr.org/2019/953.pdf) and add them here.

3. Add support for lookups.

   A lookup argument allows us to prove that a certain element can be found in a public lookup table. [PlonKup](https://eprint.iacr.org/2022/086.pdf) introduces lookup arguments to PlonK. Try to understand the construction in the paper and implement it here.

## Getting started

This implementation is intended for educational use, and to help reproduce and verify verification keys that are generated by other software. **IT HAS NOT BEEN AUDITED AND PROBABLY HAS BUGS. DO NOT USE IT BY ITSELF IN PRODUCTION.**

To get started, you'll need to have a Python version >= 3.8 and [`poetry`](https://python-poetry.org) installed: `curl -sSL https://install.python-poetry.org | python3 -`.

Then, run `poetry install` in the root of the repository. This will install all the dependencies in a virtualenv.

Then, to see the proof system in action, run `poetry run python test.py` from the root of the repository. This will take you through the workflow of setup, proof generation, and verification for several example programs.

### Compiler

#### Program

We specify our program logic in a high-level language involving constraints and variable assignments. Here is a program that lets you prove that you know two small numbers that multiply to a given number (in our example we'll use 91) without revealing what those numbers are:

```
n public

...

q <== qb012 + 8 * qb3
n <== p * q
```

Examples of valid program constraints:

- `a === 9`
- `b <== a * c`
- `d <== a * c - 45 * a + 987`

Examples of invalid program constraints:

- `7 === 7` (can't assign to non-variable)
- `a <== b * * c` (two multiplications in a row)
- `e <== a + b * c * d` (multiplicative degree > 2)
Given a `Program`, we can derive the `CommonPreprocessedInput`, which comprises the polynomials representing the fixed constraints of the program. The prover later uses these polynomials to construct the quotient polynomial, and to compute their evaluations at a given challenge point.

```python
@dataclass
class CommonPreprocessedInput:
    """Common preprocessed input"""

    group_order: int
    # q_M(X) multiplication selector polynomial
    QM: list[Scalar]
    # q_L(X) left selector polynomial
    QL: list[Scalar]
    # q_R(X) right selector polynomial
    QR: list[Scalar]
    # q_O(X) output selector polynomial
    QO: list[Scalar]
    # q_C(X) constants selector polynomial
    QC: list[Scalar]
    # S_σ1(X) first permutation polynomial
    S1: list[Scalar]
    # S_σ2(X) second permutation polynomial
    S2: list[Scalar]
    # S_σ3(X) third permutation polynomial
    S3: list[Scalar]
```

A setup file is included in this repo. The group order should be a power of two, at least 8, and at least the number of lines of code in the program.
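As a sketch of how these pieces fit together (`Program.from_str` and `common_preprocessed_input` are defined in `compiler/program.py` later in this commit; the three-line program here is just an illustration, and this assumes you run from the repo root):

```python
from compiler.program import Program

# Compile a tiny illustrative program at group order 8, then derive the
# selector and permutation polynomials that the prover will later use.
program = Program.from_str("c <== a * b\nd <== c + 4\ne <== d * d", 8)
pk = program.common_preprocessed_input()
assert pk.group_order == 8
```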
#### Assembly

Our "assembly" language consists of `AssemblyEqn`s:

```python
class AssemblyEqn:
    """Assembly equation mapping wires to coefficients."""

    wires: GateWires
    coeffs: dict[Optional[str], int]
```

where:

```python
@dataclass
class GateWires:
    """Variable names for Left, Right, and Output wires."""

    L: Optional[str]
    R: Optional[str]
    O: Optional[str]
```
Examples of valid program constraints, and corresponding assembly:

| program constraint         | assembly                                         |
| -------------------------- | ------------------------------------------------ |
| a === 9                    | ([None, None, 'a'], {'': 9})                     |
| b <== a * c                | (['a', 'c', 'b'], {'a*c': 1})                    |
| d <== a * c - 45 * a + 987 | (['a', 'c', 'd'], {'a*c': 1, 'a': -45, '': 987}) |
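A minimal sketch of producing this assembly yourself, using `eq_to_assembly` from `compiler/assembly.py` (defined later in this commit):

```python
from compiler.assembly import eq_to_assembly

# Each source line becomes an AssemblyEqn: wire names plus a term -> coefficient map.
eqn = eq_to_assembly("b <== a * c")
print(eqn.wires)   # GateWires(L='a', R='c', O='b')
print(eqn.coeffs)  # {'a*c': 1}
```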
### Setup

Let $\mathbb{G}_1$ and $\mathbb{G}_2$ be two elliptic curve groups with a pairing $e : \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T$. Let $p$ be the order of $\mathbb{G}_1$ and $\mathbb{G}_2$, and let $G$ and $H$ be generators of $\mathbb{G}_1$ and $\mathbb{G}_2$. We will use the shorthand notation

$$[x]_1 = xG \in \mathbb{G}_1 \text{ and } [x]_2 = xH \in \mathbb{G}_2$$

for any $x \in \mathbb{F}_p$.

The trusted setup is a preprocessing step that produces a structured reference string:

$$\mathsf{srs} = ([1]_1, [x]_1, \cdots, [x^{d-1}]_1, [x]_2),$$

where:

- $x \in \mathbb{F}_p$ is a randomly chosen, **secret** evaluation point; and
- $d$ is the size of the trusted setup, corresponding to the maximum degree polynomial that it can support.
```python
@dataclass
class Setup(object):
    # ([1]₁, [x]₁, ..., [x^{d-1}]₁)
    # = ( G, xG, ..., x^{d-1}·G ), where G is a generator of G_1
    powers_of_x: list[G1Point]
    # [x]₂ = xH, where H is a generator of G_2
    X2: G2Point
```
In this repository, we are using the pairing-friendly [BN254 curve](https://hackmd.io/@jpw/bn254), where:

- the base field is $\mathbb{F}_q$ with `q = 21888242871839275222246405745257275088696311157297823662689037894645226208583`;
- $\mathbb{G}_1$ is the curve $y^2 = x^3 + 3$ over $\mathbb{F}_q$;
- $\mathbb{G}_2$ is the twisted curve $y^2 = x^3 + 3/(9+u)$ over $\mathbb{F}_{q^2}$; and
- $\mathbb{G}_T = {\mu}_p \subset \mathbb{F}_{q^{12}}^{\times}$.

We are using an existing setup for $d = 2^{11}$, from this [ceremony](https://github.com/iden3/snarkjs/blob/master/README.md). You can find out more about trusted setup ceremonies [here](https://github.com/weijiekoh/perpetualpowersoftau).
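To commit to a polynomial $f(X) = \sum_i c_i X^i$ with $\deg f < d$, one computes $[f(x)]_1 = \sum_i c_i \, [x^i]_1$ as a linear combination of SRS points, so the secret $x$ is never needed. A minimal sketch, assuming the `ec_lincomb` helper from `utils.py` (shown later in this commit); `commit_to_coeffs` is a hypothetical name, not part of the repo:

```python
from utils import Scalar, ec_lincomb

def commit_to_coeffs(setup: Setup, coeffs: list[Scalar]):
    # [f(x)]₁ = Σ coeffs[i] · [x^i]₁: pair each coefficient with the
    # matching SRS point and take the elliptic-curve linear combination.
    assert len(coeffs) <= len(setup.powers_of_x)
    return ec_lincomb(list(zip(setup.powers_of_x, coeffs)))
```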
### Prover

The prover creates a proof of knowledge of some satisfying witness to a program.

```python
@dataclass
class Prover:
    group_order: int
    setup: Setup
    program: Program
    pk: CommonPreprocessedInput
```

The prover progresses in five rounds, and produces a message at the end of each. After each round, the message is hashed into the `Transcript`.

The `Proof` consists of all the round messages (`Message1`, `Message2`, `Message3`, `Message4`, `Message5`).
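The `Transcript` is what makes the protocol non-interactive: every challenge is derived by hashing the messages sent so far, Fiat-Shamir style. A minimal sketch of the pattern (the names `absorb` and `challenge` are illustrative, not the repo's actual `Transcript` API):

```python
import hashlib
from utils import Scalar

class SketchTranscript:
    def __init__(self):
        self.state = b""

    def absorb(self, message_bytes: bytes) -> None:
        # Fold each prover message into the running hash state.
        self.state = hashlib.sha256(self.state + message_bytes).digest()

    def challenge(self) -> Scalar:
        # Read the current state as a field element; this stands in for a
        # verifier's random challenge (e.g. beta and gamma after Message1).
        return Scalar(int.from_bytes(self.state, "big"))
```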
#### Round 1

```python
def round_1(
    self,
    witness: dict[Optional[str], int],
) -> Message1

@dataclass
class Message1:
    # [a(x)]₁ (commitment to left wire polynomial)
    a_1: G1Point
    # [b(x)]₁ (commitment to right wire polynomial)
    b_1: G1Point
    # [c(x)]₁ (commitment to output wire polynomial)
    c_1: G1Point
```

#### Round 2

```python
def round_2(self) -> Message2

@dataclass
class Message2:
    # [z(x)]₁ (commitment to permutation polynomial)
    z_1: G1Point
```

#### Round 3

```python
def round_3(self) -> Message3

@dataclass
class Message3:
    # [t_lo(x)]₁ (commitment to t_lo(X), the low chunk of the quotient polynomial t(X))
    t_lo_1: G1Point
    # [t_mid(x)]₁ (commitment to t_mid(X), the middle chunk of the quotient polynomial t(X))
    t_mid_1: G1Point
    # [t_hi(x)]₁ (commitment to t_hi(X), the high chunk of the quotient polynomial t(X))
    t_hi_1: G1Point
```
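The quotient polynomial $t(X)$ has degree roughly $3n$, larger than what the SRS can commit to directly, so it is split degree-wise into three chunks of degree below $n$ (as in the PlonK paper):

$$t(X) = t_{lo}(X) + X^n \, t_{mid}(X) + X^{2n} \, t_{hi}(X)$$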
#### Round 4

```python
def round_4(self) -> Message4

@dataclass
class Message4:
    # Evaluation of a(X) at evaluation challenge ζ
    a_eval: Scalar
    # Evaluation of b(X) at evaluation challenge ζ
    b_eval: Scalar
    # Evaluation of c(X) at evaluation challenge ζ
    c_eval: Scalar
    # Evaluation of the first permutation polynomial S_σ1(X) at evaluation challenge ζ
    s1_eval: Scalar
    # Evaluation of the second permutation polynomial S_σ2(X) at evaluation challenge ζ
    s2_eval: Scalar
    # Evaluation of the permutation polynomial z(X) at the shifted evaluation challenge ζω
    z_shifted_eval: Scalar
```

#### Round 5

```python
def round_5(self) -> Message5

@dataclass
class Message5:
    # [W_ζ(X)]₁ (commitment to the opening proof polynomial for the evaluations at ζ)
    W_z_1: G1Point
    # [W_ζω(X)]₁ (commitment to the opening proof polynomial for the evaluation at ζω)
    W_zw_1: G1Point
```
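These opening proofs are standard KZG witnesses. To show $f(\zeta) = v$ against a commitment $[f(x)]_1$, the prover sends $[W(x)]_1$ for $W(X) = \frac{f(X) - v}{X - \zeta}$, and the verifier checks a pairing equation of the following shape (this is a sketch of the single-opening case; the actual verifier batches all the openings at $\zeta$ into one check):

$$e\left([W(x)]_1, [x]_2 - [\zeta]_2\right) = e\left([f(x)]_1 - [v]_1, H\right)$$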
### Verifier

Given a `Setup` and a `Program`, we can generate a verification key for the program:

```python
def verification_key(self, pk: CommonPreprocessedInput) -> VerificationKey
```

The `VerificationKey` contains:

| verification key element | remark                                                            |
| ------------------------ | ----------------------------------------------------------------- |
| $[q_M(x)]_1$             | commitment to multiplication selector polynomial                   |
| $[q_L(x)]_1$             | commitment to left selector polynomial                             |
| $[q_R(x)]_1$             | commitment to right selector polynomial                            |
| $[q_O(x)]_1$             | commitment to output selector polynomial                           |
| $[q_C(x)]_1$             | commitment to constants selector polynomial                        |
| $[S_{\sigma1}(x)]_1$     | commitment to the first permutation polynomial $S_{\sigma1}(X)$    |
| $[S_{\sigma2}(x)]_1$     | commitment to the second permutation polynomial $S_{\sigma2}(X)$   |
| $[S_{\sigma3}(x)]_1$     | commitment to the third permutation polynomial $S_{\sigma3}(X)$    |
| $[x]_2 = xH$             | (from the $\mathsf{srs}$)                                          |
| $\omega$                 | an $n$-th root of unity, where $n$ is the program's group order    |
__init__.py | 0 (new file)

compiler.py | 265 (deleted file)
```python
# A verification key generator for a simple zk language, reverse-engineered
# to match https://zkrepl.dev/ output

from utils import *


# Outputs the label (an inner-field element) representing a given
# (section, index) pair. Expects section = 1 for left, 2 for right, 3 for output
def S_position_to_f_inner(group_order, index, section):
    assert section in (1, 2, 3) and index < group_order
    return get_roots_of_unity(group_order)[index] * section


# Expects input in the form: [['a', 'b', 'c'], ...]
def make_s_polynomials(group_order, wires):
    if len(wires) > group_order:
        raise Exception("Group order too small")
    S = {
        1: [None] * group_order,
        2: [None] * group_order,
        3: [None] * group_order,
    }
    # For each variable, extract the list of (section, index) positions
    # where that variable is used
    variable_uses = {None: set()}
    for i, wire in enumerate(wires):
        for section, value in zip((1, 2, 3), wire):
            if value not in variable_uses:
                variable_uses[value] = set()
            variable_uses[value].add((i, section))
    for i in range(len(wires), group_order):
        for section in (1, 2, 3):
            variable_uses[None].add((i, section))
    # For each list of positions, rotate by one. For example, if some
    # variable is used in positions (1, 4), (1, 7) and (3, 2), then
    # we store:
    #
    # at S[1][7] the field element representing (1, 4)
    # at S[3][2] the field element representing (1, 7)
    # at S[1][4] the field element representing (3, 2)
    for _, uses in variable_uses.items():
        uses = sorted(uses)
        for i in range(len(uses)):
            next_i = (i + 1) % len(uses)
            S[uses[next_i][1]][uses[next_i][0]] = S_position_to_f_inner(
                group_order, uses[i][0], uses[i][1]
            )
    return (S[1], S[2], S[3])


def is_valid_variable_name(name):
    return len(name) > 0 and name.isalnum() and name[0] not in '0123456789'


# Gets the key to use in the coeffs dictionary for the term for key1*key2,
# where key1 and key2 can be constant (''), a variable, or product keys.
# Note that degrees higher than 2 are disallowed in the compiler, but we
# still allow them in the parser in case we find a way to compile them later
def get_product_key(key1, key2):
    members = sorted((key1 or '').split('*') + (key2 or '').split('*'))
    return '*'.join([x for x in members if x])


# Converts an arithmetic expression containing numbers, variables and {+, -, *}
# into a mapping of term to coefficient
#
# For example:
# ['a', '+', 'b', '*', 'c', '*', '5'] becomes {'a': 1, 'b*c': 5}
#
# Note that this is a recursive algo, so the input can be a mix of tokens and
# mapping expressions
def simplify(exprs, first_is_negative=False):
    # Splits by + and - first, then *, to follow order of operations
    # The first_is_negative flag helps us correctly interpret expressions
    # like 6000 - 700 - 80 + 9 (that's 5229)
    if '+' in exprs:
        L = simplify(exprs[:exprs.index('+')], first_is_negative)
        R = simplify(exprs[exprs.index('+')+1:], False)
        return {
            x: L.get(x, 0) + R.get(x, 0) for x in set(L.keys()).union(R.keys())
        }
    elif '-' in exprs:
        L = simplify(exprs[:exprs.index('-')], first_is_negative)
        R = simplify(exprs[exprs.index('-')+1:], True)
        return {
            x: L.get(x, 0) + R.get(x, 0) for x in set(L.keys()).union(R.keys())
        }
    elif '*' in exprs:
        L = simplify(exprs[:exprs.index('*')], first_is_negative)
        # Only the first factor inherits the leading sign; otherwise a term
        # like `- 45 * a` would be double-negated
        R = simplify(exprs[exprs.index('*')+1:], False)
        o = {}
        for k1 in L.keys():
            for k2 in R.keys():
                o[get_product_key(k1, k2)] = L[k1] * R[k2]
        return o
    elif len(exprs) > 1:
        raise Exception("No ops, expected sub-expr to be a unit: {}"
                        .format(exprs))
    elif exprs[0][0] == '-':
        return simplify([exprs[0][1:]], not first_is_negative)
    elif exprs[0].isnumeric():
        return {'': int(exprs[0]) * (-1 if first_is_negative else 1)}
    elif is_valid_variable_name(exprs[0]):
        return {exprs[0]: -1 if first_is_negative else 1}
    else:
        raise Exception("ok wtf is {}".format(exprs[0]))


# Converts an equation to a mapping of term to coefficient, and verifies that
# the operations in the equation are valid.
#
# Also outputs a triple containing the L and R input variables and the output
# variable
#
# Think of the list of (variable triples, coeffs) pairs as this language's
# version of "assembly"
#
# Example valid equations, and output:
# a === 9                      ([None, None, 'a'], {'': 9})
# b <== a * c                  (['a', 'c', 'b'], {'a*c': 1})
# d <== a * c - 45 * a + 987   (['a', 'c', 'd'], {'a*c': 1, 'a': -45, '': 987})
#
# Example invalid equations:
# 7 === 7                      # Can't assign to non-variable
# a <== b * * c                # Two times signs in a row
# e <== a + b * c * d          # Multiplicative degree > 2
def eq_to_coeffs(eq):
    tokens = eq.rstrip('\n').split(' ')
    if tokens[1] in ('<==', '==='):
        # First token is the output variable
        out = tokens[0]
        # Convert the expression to coefficient map form
        coeffs = simplify(tokens[2:])
        # Handle the "-x === a * b" case
        if out[0] == '-':
            out = out[1:]
            coeffs['$output_coeff'] = -1
        # Check out variable name validity
        if not is_valid_variable_name(out):
            raise Exception("Invalid out variable name: {}".format(out))
        # Gather list of variables used in the expression
        variables = []
        for t in tokens[2:]:
            var = t.lstrip('-')
            if is_valid_variable_name(var) and var not in variables:
                variables.append(var)
        # Construct the list of allowed coefficients
        allowed_coeffs = variables + ['', '$output_coeff']
        if len(variables) == 0:
            pass
        elif len(variables) == 1:
            variables.append(variables[0])
            allowed_coeffs.append(get_product_key(*variables))
        elif len(variables) == 2:
            allowed_coeffs.append(get_product_key(*variables))
        else:
            raise Exception("Max 2 variables, found {}".format(variables))
        # Check that only allowed coefficients are in the coefficient map
        for key in coeffs.keys():
            if key not in allowed_coeffs:
                raise Exception("Disallowed multiplication: {}".format(key))
        # Return output
        return variables + [None] * (2 - len(variables)) + [out], coeffs
    elif tokens[1] == 'public':
        return (
            [tokens[0], None, None],
            {tokens[0]: -1, '$output_coeff': 0, '$public': True}
        )
    else:
        raise Exception("Unsupported op: {}".format(tokens[1]))


# Wrapper that compiles to [(vars, coeffs), ...] assembly, for three kinds
# of input:
# 1. Assembly itself
# 2. An array of lines, each containing one equation
# 3. A string, where each line contains an equation
def to_assembly(inp):
    if isinstance(inp, str):
        lines = [line.strip() for line in inp.split('\n')]
        return [eq_to_coeffs(line) for line in lines if line]
    elif isinstance(inp, list):
        return [eq_to_coeffs(eq) if isinstance(eq, str) else eq for eq in inp]
    else:
        raise Exception("Unexpected input: {}".format(inp))


# Generate the gate polynomials from a list of 2-item tuples:
# Left: variable names, [in_L, in_R, out]
# Right: coeffs, {'': constant term, in_L: L term, in_R: R term,
#                 in_L*in_R: product term,
#                 '$output_coeff': coeff on output, 1 by default}
def make_gate_polynomials(group_order, eqs):
    L = [f_inner(0) for _ in range(group_order)]
    R = [f_inner(0) for _ in range(group_order)]
    M = [f_inner(0) for _ in range(group_order)]
    O = [f_inner(0) for _ in range(group_order)]
    C = [f_inner(0) for _ in range(group_order)]
    for i, (variables, coeffs) in enumerate(eqs):
        L[i] = f_inner(-coeffs.get(variables[0], 0))
        if variables[1] != variables[0]:
            R[i] = f_inner(-coeffs.get(variables[1], 0))
        C[i] = f_inner(-coeffs.get('', 0))
        O[i] = f_inner(coeffs.get('$output_coeff', 1))
        if None not in variables:
            M[i] = f_inner(-coeffs.get(get_product_key(*variables[:2]), 0))
    return L, R, M, O, C


# Get the list of public variable assignments, in order
def get_public_assignments(coeffs):
    o = []
    no_more_allowed = False
    for coeff in coeffs:
        if coeff.get('$public', False) is True:
            if no_more_allowed:
                raise Exception("Public var declarations must be at the top")
            var_name = [x for x in list(coeff.keys()) if '$' not in x][0]
            if coeff != {'$public': True, '$output_coeff': 0, var_name: -1}:
                raise Exception("Malformatted coeffs: {}".format(coeffs))
            o.append(var_name)
        else:
            no_more_allowed = True
    return o


# Generate the verification key with the given setup, group order and equations
def make_verification_key(setup, group_order, code):
    eqs = to_assembly(code)
    if len(eqs) > group_order:
        raise Exception("Group order too small")
    L, R, M, O, C = make_gate_polynomials(group_order, eqs)
    S1, S2, S3 = make_s_polynomials(group_order, [v for (v, c) in eqs])
    return {
        "Qm": evaluations_to_point(setup, group_order, M),
        "Ql": evaluations_to_point(setup, group_order, L),
        "Qr": evaluations_to_point(setup, group_order, R),
        "Qo": evaluations_to_point(setup, group_order, O),
        "Qc": evaluations_to_point(setup, group_order, C),
        "S1": evaluations_to_point(setup, group_order, S1),
        "S2": evaluations_to_point(setup, group_order, S2),
        "S3": evaluations_to_point(setup, group_order, S3),
        "X_2": setup.X2,
        "w": get_root_of_unity(group_order)
    }


# Attempts to "run" the program to fill in any intermediate variable
# assignments, starting from the given assignments. Eg. if
# `starting_assignments` contains {'a': 3, 'b': 5}, and the first line
# says `c <== a * b`, then it fills in `c: 15`.
def fill_variable_assignments(code, starting_assignments):
    out = {k: f_inner(v) for k, v in starting_assignments.items()}
    out[None] = f_inner(0)
    eqs = to_assembly(code)
    for variables, coeffs in eqs:
        in_L, in_R, output = variables
        out_coeff = coeffs.get('$output_coeff', 1)
        product_key = get_product_key(in_L, in_R)
        if output is not None and out_coeff in (-1, 1):
            new_value = f_inner(
                coeffs.get('', 0) +
                out[in_L] * coeffs.get(in_L, 0) +
                out[in_R] * coeffs.get(in_R, 0) * (1 if in_R != in_L else 0) +
                out[in_L] * out[in_R] * coeffs.get(product_key, 0)
            ) * out_coeff  # should be / but equivalent for (1, -1)
            if output in out:
                if out[output] != new_value:
                    raise Exception("Failed assertion: {} = {}"
                                    .format(out[output], new_value))
            else:
                out[output] = new_value
                # print('filled in:', output, out[output])
    return {k: v.n for k, v in out.items()}
```
compiler/__init__.py | 0 (new file)

compiler/assembly.py | 166 (new file)
```python
from utils import *
from .utils import *
from typing import Optional
from dataclasses import dataclass


@dataclass
class GateWires:
    """Variable names for Left, Right, and Output wires."""

    L: Optional[str]
    R: Optional[str]
    O: Optional[str]

    def as_list(self) -> list[Optional[str]]:
        return [self.L, self.R, self.O]


@dataclass
class Gate:
    """Gate polynomial"""

    L: Scalar
    R: Scalar
    M: Scalar
    O: Scalar
    C: Scalar


@dataclass
class AssemblyEqn:
    """Assembly equation mapping wires to coefficients."""

    wires: GateWires
    coeffs: dict[Optional[str], int]

    def L(self) -> Scalar:
        return Scalar(-self.coeffs.get(self.wires.L, 0))

    def R(self) -> Scalar:
        if self.wires.R != self.wires.L:
            return Scalar(-self.coeffs.get(self.wires.R, 0))
        return Scalar(0)

    def C(self) -> Scalar:
        return Scalar(-self.coeffs.get("", 0))

    def O(self) -> Scalar:
        return Scalar(self.coeffs.get("$output_coeff", 1))

    def M(self) -> Scalar:
        if None not in self.wires.as_list():
            return Scalar(
                -self.coeffs.get(get_product_key(self.wires.L, self.wires.R), 0)
            )
        return Scalar(0)

    def gate(self) -> Gate:
        return Gate(self.L(), self.R(), self.M(), self.O(), self.C())
```
```python
# Converts an arithmetic expression containing numbers, variables and {+, -, *}
# into a mapping of term to coefficient
#
# For example:
# ['a', '+', 'b', '*', 'c', '*', '5'] becomes {'a': 1, 'b*c': 5}
#
# Note that this is a recursive algo, so the input can be a mix of tokens and
# mapping expressions
def evaluate(exprs: list[str], first_is_negative=False) -> dict[Optional[str], int]:
    # Splits by + and - first, then *, to follow order of operations
    # The first_is_negative flag helps us correctly interpret expressions
    # like 6000 - 700 - 80 + 9 (that's 5229)
    if "+" in exprs:
        L = evaluate(exprs[: exprs.index("+")], first_is_negative)
        R = evaluate(exprs[exprs.index("+") + 1 :], False)
        return {x: L.get(x, 0) + R.get(x, 0) for x in set(L.keys()).union(R.keys())}
    elif "-" in exprs:
        L = evaluate(exprs[: exprs.index("-")], first_is_negative)
        R = evaluate(exprs[exprs.index("-") + 1 :], True)
        return {x: L.get(x, 0) + R.get(x, 0) for x in set(L.keys()).union(R.keys())}
    elif "*" in exprs:
        L = evaluate(exprs[: exprs.index("*")], first_is_negative)
        # Only the first factor inherits the leading sign; otherwise a term
        # like `- 45 * a` would be double-negated
        R = evaluate(exprs[exprs.index("*") + 1 :], False)
        o = {}
        for k1 in L.keys():
            for k2 in R.keys():
                o[get_product_key(k1, k2)] = L[k1] * R[k2]
        return o
    elif len(exprs) > 1:
        raise Exception("No ops, expected sub-expr to be a unit: {}".format(exprs))
    elif exprs[0][0] == "-":
        return evaluate([exprs[0][1:]], not first_is_negative)
    elif exprs[0].isnumeric():
        return {"": int(exprs[0]) * (-1 if first_is_negative else 1)}
    elif is_valid_variable_name(exprs[0]):
        return {exprs[0]: -1 if first_is_negative else 1}
    else:
        raise Exception("ok wtf is {}".format(exprs[0]))
```
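Two illustrative calls (the expected outputs follow directly from the rules above):

```python
# evaluate() turns a token list into a term -> coefficient map.
assert evaluate(["a", "+", "b", "*", "c", "*", "5"]) == {"a": 1, "b*c": 5}
# Subtraction negates only the term that follows it: 10 - 3 = 7.
assert evaluate(["10", "-", "3"]) == {"": 7}
```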
```python
# Converts an equation to a mapping of term to coefficient, and verifies that
# the operations in the equation are valid.
#
# Also outputs a triple containing the L and R input variables and the output
# variable
#
# Think of the list of (variable triples, coeffs) pairs as this language's
# version of "assembly"
#
# Example valid equations, and output:
# a === 9                      ([None, None, 'a'], {'': 9})
# b <== a * c                  (['a', 'c', 'b'], {'a*c': 1})
# d <== a * c - 45 * a + 987   (['a', 'c', 'd'], {'a*c': 1, 'a': -45, '': 987})
#
# Example invalid equations:
# 7 === 7                      # Can't assign to non-variable
# a <== b * * c                # Two times signs in a row
# e <== a + b * c * d          # Multiplicative degree > 2
def eq_to_assembly(eq: str) -> AssemblyEqn:
    tokens = eq.rstrip("\n").split(" ")
    if tokens[1] in ("<==", "==="):
        # First token is the output variable
        out = tokens[0]
        # Convert the expression to coefficient map form
        coeffs = evaluate(tokens[2:])
        # Handle the "-x === a * b" case
        if out[0] == "-":
            out = out[1:]
            coeffs["$output_coeff"] = -1
        # Check out variable name validity
        if not is_valid_variable_name(out):
            raise Exception("Invalid out variable name: {}".format(out))
        # Gather list of variables used in the expression
        variables = []
        for t in tokens[2:]:
            var = t.lstrip("-")
            if is_valid_variable_name(var) and var not in variables:
                variables.append(var)
        # Construct the list of allowed coefficients
        allowed_coeffs = variables + ["", "$output_coeff"]
        if len(variables) == 0:
            pass
        elif len(variables) == 1:
            variables.append(variables[0])
            allowed_coeffs.append(get_product_key(*variables))
        elif len(variables) == 2:
            allowed_coeffs.append(get_product_key(*variables))
        else:
            raise Exception("Max 2 variables, found {}".format(variables))
        # Check that only allowed coefficients are in the coefficient map
        for key in coeffs.keys():
            if key not in allowed_coeffs:
                raise Exception("Disallowed multiplication: {}".format(key))
        # Return output
        wires = variables + [None] * (2 - len(variables)) + [out]
        return AssemblyEqn(GateWires(wires[0], wires[1], wires[2]), coeffs)
    elif tokens[1] == "public":
        return AssemblyEqn(
            GateWires(tokens[0], None, None),
            {tokens[0]: -1, "$output_coeff": 0, "$public": True},
        )
    else:
        raise Exception("Unsupported op: {}".format(tokens[1]))
```
compiler/program.py | 192 (new file)
```python
# A simple zk language, reverse-engineered to match https://zkrepl.dev/ output

from utils import *
from .assembly import *
from .utils import *
from typing import Optional, Set
from poly import Polynomial, Basis


@dataclass
class CommonPreprocessedInput:
    """Common preprocessed input"""

    group_order: int
    # q_M(X) multiplication selector polynomial
    QM: Polynomial
    # q_L(X) left selector polynomial
    QL: Polynomial
    # q_R(X) right selector polynomial
    QR: Polynomial
    # q_O(X) output selector polynomial
    QO: Polynomial
    # q_C(X) constants selector polynomial
    QC: Polynomial
    # S_σ1(X) first permutation polynomial
    S1: Polynomial
    # S_σ2(X) second permutation polynomial
    S2: Polynomial
    # S_σ3(X) third permutation polynomial
    S3: Polynomial


class Program:
    constraints: list[AssemblyEqn]
    group_order: int

    def __init__(self, constraints: list[str], group_order: int):
        if len(constraints) > group_order:
            raise Exception("Group order too small")
        assembly = [eq_to_assembly(constraint) for constraint in constraints]
        self.constraints = assembly
        self.group_order = group_order

    def common_preprocessed_input(self) -> CommonPreprocessedInput:
        L, R, M, O, C = self.make_gate_polynomials()
        S = self.make_s_polynomials()
        return CommonPreprocessedInput(
            self.group_order,
            M,
            L,
            R,
            O,
            C,
            S[Column.LEFT],
            S[Column.RIGHT],
            S[Column.OUTPUT],
        )

    @classmethod
    def from_str(cls, constraints: str, group_order: int):
        lines = [line.strip() for line in constraints.split("\n")]
        return cls(lines, group_order)

    def coeffs(self) -> list[dict[Optional[str], int]]:
        return [constraint.coeffs for constraint in self.constraints]

    def wires(self) -> list[GateWires]:
        return [constraint.wires for constraint in self.constraints]

    def make_s_polynomials(self) -> dict[Column, Polynomial]:
        # For each variable, extract the list of (column, row) positions
        # where that variable is used
        variable_uses: dict[Optional[str], Set[Cell]] = {None: set()}
        for row, constraint in enumerate(self.constraints):
            for column, value in zip(Column.variants(), constraint.wires.as_list()):
                if value not in variable_uses:
                    variable_uses[value] = set()
                variable_uses[value].add(Cell(column, row))

        # Mark unused cells
        for row in range(len(self.constraints), self.group_order):
            for column in Column.variants():
                variable_uses[None].add(Cell(column, row))

        # For each list of positions, rotate by one.
        #
        # For example, if some variable is used in positions
        # (LEFT, 4), (LEFT, 7) and (OUTPUT, 2), then we store:
        #
        # at S[LEFT][7] the field element representing (LEFT, 4)
        # at S[OUTPUT][2] the field element representing (LEFT, 7)
        # at S[LEFT][4] the field element representing (OUTPUT, 2)

        S_values = {
            Column.LEFT: [Scalar(0)] * self.group_order,
            Column.RIGHT: [Scalar(0)] * self.group_order,
            Column.OUTPUT: [Scalar(0)] * self.group_order,
        }

        for _, uses in variable_uses.items():
            sorted_uses = sorted(uses)
            for i, cell in enumerate(sorted_uses):
                next_i = (i + 1) % len(sorted_uses)
                next_column = sorted_uses[next_i].column
                next_row = sorted_uses[next_i].row
                S_values[next_column][next_row] = cell.label(self.group_order)

        S = {}
        S[Column.LEFT] = Polynomial(S_values[Column.LEFT], Basis.LAGRANGE)
        S[Column.RIGHT] = Polynomial(S_values[Column.RIGHT], Basis.LAGRANGE)
        S[Column.OUTPUT] = Polynomial(S_values[Column.OUTPUT], Basis.LAGRANGE)

        return S
```
```python
    # Get the list of public variable assignments, in order
    def get_public_assignments(self) -> list[Optional[str]]:
        coeffs = self.coeffs()
        o = []
        no_more_allowed = False
        for coeff in coeffs:
            if coeff.get("$public", False) is True:
                if no_more_allowed:
                    raise Exception("Public var declarations must be at the top")
                var_name = [x for x in list(coeff.keys()) if "$" not in str(x)][0]
                if coeff != {"$public": True, "$output_coeff": 0, var_name: -1}:
                    raise Exception("Malformatted coeffs: {}".format(coeffs))
                o.append(var_name)
            else:
                no_more_allowed = True
        return o

    # Generate the gate polynomials L, R, M, O, C, each in Lagrange form
    # over `group_order` evaluation points
    def make_gate_polynomials(
        self,
    ) -> tuple[Polynomial, Polynomial, Polynomial, Polynomial, Polynomial]:
        L = [Scalar(0) for _ in range(self.group_order)]
        R = [Scalar(0) for _ in range(self.group_order)]
        M = [Scalar(0) for _ in range(self.group_order)]
        O = [Scalar(0) for _ in range(self.group_order)]
        C = [Scalar(0) for _ in range(self.group_order)]
        for i, constraint in enumerate(self.constraints):
            gate = constraint.gate()
            L[i] = gate.L
            R[i] = gate.R
            M[i] = gate.M
            O[i] = gate.O
            C[i] = gate.C
        return (
            Polynomial(L, Basis.LAGRANGE),
            Polynomial(R, Basis.LAGRANGE),
            Polynomial(M, Basis.LAGRANGE),
            Polynomial(O, Basis.LAGRANGE),
            Polynomial(C, Basis.LAGRANGE),
        )

    # Attempts to "run" the program to fill in any intermediate variable
    # assignments, starting from the given assignments. Eg. if
    # `starting_assignments` contains {'a': 3, 'b': 5}, and the first line
    # says `c <== a * b`, then it fills in `c: 15`.
    def fill_variable_assignments(
        self, starting_assignments: dict[Optional[str], int]
    ) -> dict[Optional[str], int]:
        out = {k: Scalar(v) for k, v in starting_assignments.items()}
        out[None] = Scalar(0)
        for constraint in self.constraints:
            wires = constraint.wires
            coeffs = constraint.coeffs
            in_L = wires.L
            in_R = wires.R
            output = wires.O
            out_coeff = coeffs.get("$output_coeff", 1)
            product_key = get_product_key(in_L, in_R)
            if output is not None and out_coeff in (-1, 1):
                new_value = (
                    Scalar(
                        coeffs.get("", 0)
                        + out[in_L] * coeffs.get(in_L, 0)
                        + out[in_R] * coeffs.get(in_R, 0) * (1 if in_R != in_L else 0)
                        + out[in_L] * out[in_R] * coeffs.get(product_key, 0)
                    )
                    * out_coeff
                )  # should be / but equivalent for (1, -1)
                if output in out:
                    if out[output] != new_value:
                        raise Exception(
                            "Failed assertion: {} = {}".format(out[output], new_value)
                        )
                else:
                    out[output] = new_value
                    # print('filled in:', output, out[output])
        return {k: v.n for k, v in out.items()}
```
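For the factoring program in the README, you supply only the bits of the two factors (13 and 7 here, following the original example); `fill_variable_assignments` executes the constraints to derive every intermediate value, including `n = 91`. A sketch, where `code` stands for the README's program string:

```python
program = Program.from_str(code, 16)
assignments = program.fill_variable_assignments({
    "pb3": 1, "pb2": 1, "pb1": 0, "pb0": 1,  # p = 13
    "qb3": 0, "qb2": 1, "qb1": 1, "qb0": 1,  # q = 7
})
assert assignments["n"] == 91
```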
compiler/utils.py | 60 (new file)
```python
from utils import *
from enum import Enum
from dataclasses import dataclass


class Column(Enum):
    LEFT = 1
    RIGHT = 2
    OUTPUT = 3

    def __lt__(self, other):
        if self.__class__ is other.__class__:
            return self.value < other.value
        return NotImplemented

    @staticmethod
    def variants():
        return [Column.LEFT, Column.RIGHT, Column.OUTPUT]


@dataclass
class Cell:
    column: Column
    row: int

    def __key(self):
        return (self.row, self.column.value)

    def __hash__(self):
        return hash(self.__key())

    def __lt__(self, other):
        if self.__class__ is other.__class__:
            return self.__key() < other.__key()
        return NotImplemented

    def __repr__(self) -> str:
        return "(" + str(self.row) + ", " + str(self.column.value) + ")"

    def __str__(self) -> str:
        return "(" + str(self.row) + ", " + str(self.column.value) + ")"

    # Outputs the label (an inner-field element) representing a given
    # (column, row) pair. Uses column value 1 for left, 2 for right, 3 for output
    def label(self, group_order: int) -> Scalar:
        assert self.row < group_order
        return Scalar.roots_of_unity(group_order)[self.row] * self.column.value


# Gets the key to use in the coeffs dictionary for the term for key1*key2,
# where key1 and key2 can be constant (''), a variable, or product keys.
# Note that degrees higher than 2 are disallowed in the compiler, but we
# still allow them in the parser in case we find a way to compile them later
def get_product_key(key1, key2):
    members = sorted((key1 or "").split("*") + (key2 or "").split("*"))
    return "*".join([x for x in members if x])


def is_valid_variable_name(name: str) -> bool:
    return len(name) > 0 and name.isalnum() and name[0] not in "0123456789"
```
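Two quick examples of these helpers (the values follow from the definitions above):

```python
from compiler.utils import Cell, Column, get_product_key
from utils import Scalar

# A cell's label is ω^row scaled by the column number (1, 2, or 3), giving
# every (column, row) position a distinct field element.
cell = Cell(Column.RIGHT, 3)
assert cell.label(8) == Scalar.roots_of_unity(8)[3] * 2

# Product keys are canonical: factor order does not matter.
assert get_product_key("b", "a") == "a*b"
assert get_product_key("a*b", "c") == "a*b*c"
```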
utils.py | modified
@@ -1,6 +1,62 @@
```python
import random, time, sys, math
from py_ecc.fields.field_elements import FQ as Field
import py_ecc.bn128 as b
from typing import NewType

primitive_root = 5

G1Point = NewType("G1Point", tuple[b.FQ, b.FQ])
G2Point = NewType("G2Point", tuple[b.FQ2, b.FQ2])


class Scalar(Field):
    field_modulus = b.curve_order

    # Gets the first root of unity of a given group order
    @classmethod
    def root_of_unity(cls, group_order: int):
        return Scalar(5) ** ((cls.field_modulus - 1) // group_order)

    # Gets the full list of roots of unity of a given group order
    @classmethod
    def roots_of_unity(cls, group_order: int):
        o = [Scalar(1), cls.root_of_unity(group_order)]
        while len(o) < group_order:
            o.append(o[-1] * o[1])
        return o


Base = NewType("Base", b.FQ)


def ec_mul(pt, coeff):
    if hasattr(coeff, "n"):
        coeff = coeff.n
    return b.multiply(pt, coeff % b.curve_order)


# Elliptic curve linear combination. A truly optimized implementation
# would replace this with a fast lin-comb algo, see https://ethresear.ch/t/7238
def ec_lincomb(pairs):
    return lincomb(
        [pt for (pt, _) in pairs],
        [int(n) % b.curve_order for (_, n) in pairs],
        b.add,
        b.Z1,
    )
    # Equivalent to:
    # o = b.Z1
    # for pt, coeff in pairs:
    #     o = b.add(o, ec_mul(pt, coeff))
    # return o
```
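A small sanity-check sketch of these helpers (any point would do; `b.G1` is the generator exported by `py_ecc.bn128`):

```python
# ω generates a multiplicative subgroup of size group_order, so ω^8 = 1.
omega = Scalar.root_of_unity(8)
assert omega**8 == Scalar(1)

# ec_lincomb computes Σ coeff · point over the curve: 3·G + 4·G = 7·G.
assert ec_lincomb([(b.G1, 3), (b.G1, 4)]) == ec_mul(b.G1, 7)
```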
```python
################################################################
# multicombs
################################################################

import random, sys, math


def multisubset(numbers, subsets, adder=lambda x, y: x + y, zero=0):
    # Split up the numbers into partitions
    partition_size = 1 + int(math.log(len(subsets) + 1))
    # Align number count to partition size (for simplicity)
```
@@ -11,7 +67,7 @@ def multisubset(numbers, subsets, adder=lambda x, y: x + y, zero=0):
```python
    power_sets = []
    for i in range(0, len(numbers), partition_size):
        new_power_set = [zero]
        for dimension, value in enumerate(numbers[i : i + partition_size]):
            new_power_set += [adder(n, value) for n in new_power_set]
        power_sets.append(new_power_set)
    # Compute subset sums, using elements from power set for each range of values
```

@@ -24,19 +80,23 @@ def multisubset(numbers, subsets, adder=lambda x, y: x + y, zero=0):
```python
            index_in_power_set = 0
            for j in range(partition_size):
                if i * partition_size + j in subset:
                    index_in_power_set += 2**j
            o = adder(o, power_sets[i][index_in_power_set])
        subset_sums.append(o)
    return subset_sums


# Reduces a linear combination `numbers[0] * factors[0] + numbers[1] * factors[1] + ...`
# into a multi-subset problem, and computes the result efficiently
def lincomb(numbers, factors, adder=lambda x, y: x + y, zero=0):
    # Maximum bit length of a number; how many subsets we need to make
    maxbitlen = max(len(bin(f)) - 2 for f in factors)
    # Compute the subsets: the ith subset contains the numbers whose corresponding factor
    # has a 1 at the ith bit
    subsets = [
        {i for i in range(len(numbers)) if factors[i] & (1 << j)}
        for j in range(maxbitlen + 1)
    ]
    subset_sums = multisubset(numbers, subsets, adder=adder, zero=zero)
    # For example, suppose a value V has factor 6 (011 in increasing-order binary). Subset 0
    # will not have V, subset 1 will, and subset 2 will. So if we multiply the output of adding
```

@@ -46,37 +106,48 @@ def lincomb(numbers, factors, adder=lambda x, y: x + y, zero=0):
```python
    # Here, we compute this as `((subset_2_sum * 2) + subset_1_sum) * 2 + subset_0_sum` for
    # efficiency: an extra `maxbitlen * 2` group operations.
    o = zero
    for i in range(len(subsets) - 1, -1, -1):
        o = adder(adder(o, o), subset_sums[i])
    return o


# Tests go here
def make_mock_adder():
    counter = [0]

    def adder(x, y):
        if x and y:
            counter[0] += 1
        return x + y

    return adder, counter


def test_multisubset(numcount, setcount):
    numbers = [random.randrange(10**20) for _ in range(numcount)]
    subsets = [
        {i for i in range(numcount) if random.randrange(2)} for i in range(setcount)
    ]
    adder, counter = make_mock_adder()
    o = multisubset(numbers, subsets, adder=adder)
    for output, subset in zip(o, subsets):
        assert output == sum([numbers[x] for x in subset])


def test_lincomb(numcount, bitlength=256):
    numbers = [random.randrange(10**20) for _ in range(numcount)]
    factors = [random.randrange(2**bitlength) for _ in range(numcount)]
    adder, counter = make_mock_adder()
    o = lincomb(numbers, factors, adder=adder)
    assert o == sum([n * f for n, f in zip(numbers, factors)])
    total_ones = sum(bin(f).count("1") for f in factors)
    print("Naive operation count: %d" % (bitlength * numcount + total_ones))
    print("Optimized operation count: %d" % (bitlength * 2 + counter[0]))
    print(
        "Optimization factor: %.2f"
        % ((bitlength * numcount + total_ones) / (bitlength * 2 + counter[0]))
    )


if __name__ == "__main__":
    test_lincomb(int(sys.argv[1]) if len(sys.argv) >= 2 else 80)
```
poetry.lock | 423 (generated new file)

# This file is automatically @generated by Poetry and should not be changed by hand.
# It pins the project's dependencies, including black 22.12.0 (dev), cached-property 1.5.2,
# click 8.1.3 (dev), colorama 0.4.6 (dev), and cytoolz 0.12.1, together with per-wheel
# sha256 hashes (listing truncated).
|
||||
{file = "cytoolz-0.12.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:42c9e5cd2a48a257b1f2402334b48122501f249b8dcf77082f569f2680f185eb"},
|
||||
{file = "cytoolz-0.12.1-cp37-cp37m-win32.whl", hash = "sha256:35fae4eaa0eaf9072a5fe2d244a79e65baae4e5ddbe9cc629c5037af800213a2"},
|
||||
{file = "cytoolz-0.12.1-cp37-cp37m-win_amd64.whl", hash = "sha256:5af43ca7026ead3dd08b261e4f7163cd2cf3ceaa74fa5a81f7b7ea5d445e41d6"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:fcc378fa97f02fbcef090b3611305425d72bd1c0afdd13ef4a82dc67d40638b6"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:cc3645cf6b9246cb8e179db2803e4f0d148211d2a2cf22d5c9b5219111cd91a0"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2b245b824f4705aef0e4a03fafef3ad6cb59ef43cc564cdbf683ee28dfc11ad5"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c1964dcb5f250fd13fac210944b20810d61ef4094a17fbbe502ab7a7eaeeace7"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f7194a22a4a24f3561cb6ad1cca9c9b2f2cf34cc8d4bce6d6a24c80960323fa8"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1c5434db53f3a94a37ad8aedb231901e001995d899af6ed1165f3d27fa04a6a"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b30cd083ef8af4ba66d9fe5cc75c653ede3f2655f97a032db1a14cc8a006719c"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:bef934bd3e024d512c6c0ad1c66eb173f61d9ccb4dbca8d75f727a5604f7c2f6"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:37320669c364f7d370392af33cc1034b4563da66c22cd3261e3530f4d30dbe4b"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:3cb95d23defb2322cddf70efb4af6dac191d95edaa343e8c1f58f1afa4f92ecd"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:ac5895d5f78dbd8646fe37266655ba4995f9cfec38a86595282fee69e41787da"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:499af2aff04f65b4c23de1df08e1d1484a93b23ddaaa0163e44b5070b68356eb"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-win32.whl", hash = "sha256:aa61e3da751a2dfe95aeca603f3ef510071a136ba9905f61ae6cb5d0696271ad"},
|
||||
{file = "cytoolz-0.12.1-cp38-cp38-win_amd64.whl", hash = "sha256:f5b43ce952a5a31441556c55f5f5f5a8e62c28581a0ff2a2c31c04ef992d73bd"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b8b8f88251b84b3877254cdd59c86a1dc6b2b39a03c6c9c067d344ef879562e0"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:d72415b0110f7958dd3a5ee98a70166f47bd42ede85e3535669c794d06f57406"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f8101ab6de5aa0b26a2b5032bc488d430010c91863e701812d65836b03a12f61"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2eed428b5e68c28abf2c71195e799850e040d67a27c05f7785319c611665b86a"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:59641eb1f41cb688b3cb2f98c9003c493a5024325f76b5c02333d08dd972127c"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a48940ff0449ffcf690310bf9228bb57885f7571406ed2fe05c98e299987195"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9bae431a5985cdb2014be09d37206c288e0d063940cf9539e9769bd2ec26b220"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:cb8b10405960a8e6801a4702af98ea640130ec6ecfc1208195762de3f5503ba9"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:3c9a16a5b4f54d5c0a131f56b0ca65998a9a74958b5b36840c280edba4f8b907"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:49911cb533c96d275e31e7eaeb0742ac3f7afe386a1d8c40937814d75039a0f7"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:dbae37d48ef5a0ab90cfaf2b9312d96f034b1c828208a9cbe25377a1b19ba129"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c34e69be4429633fc614febe3127fa03aa418a1abb9252f29d9ba5b3394573a5"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-win32.whl", hash = "sha256:0d474dacbafbdbb44c7de986bbf71ff56ae62df0d52ab3b6fa966784dc88737a"},
|
||||
{file = "cytoolz-0.12.1-cp39-cp39-win_amd64.whl", hash = "sha256:3d6d0b0075731832343eb88229cea4bf39e96f3fc7acbc449aadbdfec2842703"},
|
||||
{file = "cytoolz-0.12.1-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:8506d1863f30d26f577c4ed59d2cfd03d2f39569f9cbaa02a764a9de73d312d5"},
|
||||
{file = "cytoolz-0.12.1-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a1eae39656a1685e8b3f433eecfd72015ce5c1d7519e9c8f9402153c68331bb"},
|
||||
{file = "cytoolz-0.12.1-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4a0055943074c6c85b77fcc3f42f7c54010a3478daa2ed9d6243d0411c84a4d3"},
|
||||
{file = "cytoolz-0.12.1-pp37-pypy37_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8a7a325b8fe885a6dd91093616c703134f2dacbd869bc519970df3849c2a15b"},
|
||||
{file = "cytoolz-0.12.1-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:7b60caf0fa5f1b49f1062f7dc0f66c7b23e2736bad50fa8296bfb845995e3051"},
|
||||
{file = "cytoolz-0.12.1-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:980e7eb7205e01816a92f3290cfc80507957e64656b9271a0dfebb85fe3718c0"},
|
||||
{file = "cytoolz-0.12.1-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:06d38a40fe153f23cda0e823413fe9d9ebee89dd461827285316eff929fb121e"},
|
||||
{file = "cytoolz-0.12.1-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d540e9c34a61b53b6a374ea108794a48388178f7889d772e364cdbd6df37774c"},
|
||||
{file = "cytoolz-0.12.1-pp38-pypy38_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:117871f036926e42d3abcee587eafa9dc7383f1064ac53a806d33e76604de311"},
|
||||
{file = "cytoolz-0.12.1-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:31131b54a0c72efc0eb432dc66df546c6a54f2a7d396c9a34cf65ac1c26b1df8"},
|
||||
{file = "cytoolz-0.12.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:4534cbfad73cdb1a6dad495530d4186d57d73089c01e9cb0558caab50e46cb3b"},
|
||||
{file = "cytoolz-0.12.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50db41e875e36aec11881b8b12bc69c6f4836b7dd9e88a9e5bbf26c2cb3ba6cd"},
|
||||
{file = "cytoolz-0.12.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6716855f9c669c9e25a185d88e0f169839bf8553d16496796325acd114607c11"},
|
||||
{file = "cytoolz-0.12.1-pp39-pypy39_pp73-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2f32452e833f0605b871626e6c61b71b0cba24233aad0e04accc3240497d4995"},
|
||||
{file = "cytoolz-0.12.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:ba74c239fc6cb6e962eabc420967c7565f3f363b776c89b3df5234caecf1f463"},
|
||||
{file = "cytoolz-0.12.1.tar.gz", hash = "sha256:fc33909397481c90de3cec831bfb88d97e220dc91939d996920202f184b4648e"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
toolz = ">=0.8.0"
|
||||
|
||||
[package.extras]
|
||||
cython = ["cython"]
|
||||
|
||||
[[package]]
|
||||
name = "eth-hash"
|
||||
version = "0.5.1"
|
||||
description = "eth-hash: The Ethereum hashing function, keccak256, sometimes (erroneously) called sha3"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.7, <4"
|
||||
files = [
|
||||
{file = "eth-hash-0.5.1.tar.gz", hash = "sha256:9805075f653e114a31a99678e93b257fb4082337696f4eff7b4371fe65158409"},
|
||||
{file = "eth_hash-0.5.1-py3-none-any.whl", hash = "sha256:4d992e885f3ae3901abbe98bd776ba62d0f6335f98c6e9fc60a39b9d114dfb5a"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
dev = ["Sphinx (>=5.0.0,<6)", "black (>=22.0,<23)", "bumpversion (>=0.5.3,<1)", "flake8 (==3.7.9)", "ipython", "isort (>=4.2.15,<5)", "jinja2 (>=3.0.0,<3.1.0)", "mypy (==0.961)", "pydocstyle (>=5.0.0,<6)", "pytest (>=6.2.5,<7)", "pytest-watch (>=4.1.0,<5)", "pytest-xdist (>=2.4.0,<3)", "sphinx-rtd-theme (>=0.1.9,<1)", "towncrier (>=21,<22)", "tox (>=3.14.6,<4)", "twine", "wheel"]
|
||||
doc = ["Sphinx (>=5.0.0,<6)", "jinja2 (>=3.0.0,<3.1.0)", "sphinx-rtd-theme (>=0.1.9,<1)", "towncrier (>=21,<22)"]
|
||||
lint = ["black (>=22.0,<23)", "flake8 (==3.7.9)", "isort (>=4.2.15,<5)", "mypy (==0.961)", "pydocstyle (>=5.0.0,<6)"]
|
||||
pycryptodome = ["pycryptodome (>=3.6.6,<4)"]
|
||||
pysha3 = ["pysha3 (>=1.0.0,<2.0.0)", "safe-pysha3 (>=1.0.0)"]
|
||||
test = ["pytest (>=6.2.5,<7)", "pytest-xdist (>=2.4.0,<3)", "tox (>=3.14.6,<4)"]
|
||||
|
||||
[[package]]
|
||||
name = "eth-typing"
|
||||
version = "3.2.0"
|
||||
description = "eth-typing: Common type annotations for ethereum python packages"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.6, <4"
|
||||
files = [
|
||||
{file = "eth-typing-3.2.0.tar.gz", hash = "sha256:177e2070da9bf557fe0fd46ee467a7be2d0b6476aa4dc18680603e7da1fc5690"},
|
||||
{file = "eth_typing-3.2.0-py3-none-any.whl", hash = "sha256:2d7540c1c65c0e686c1dc357b8376a53caf4e1693724a90191ad133be568841d"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
dev = ["bumpversion (>=0.5.3,<1)", "flake8 (==3.8.3)", "ipython", "isort (>=4.2.15,<5)", "mypy (==0.782)", "pydocstyle (>=3.0.0,<4)", "pytest (>=6.2.5,<7)", "pytest-watch (>=4.1.0,<5)", "pytest-xdist", "sphinx (>=4.2.0,<5)", "sphinx-rtd-theme (>=0.1.9)", "towncrier (>=21,<22)", "tox (>=2.9.1,<3)", "twine", "wheel"]
|
||||
doc = ["sphinx (>=4.2.0,<5)", "sphinx-rtd-theme (>=0.1.9)", "towncrier (>=21,<22)"]
|
||||
lint = ["flake8 (==3.8.3)", "isort (>=4.2.15,<5)", "mypy (==0.782)", "pydocstyle (>=3.0.0,<4)"]
|
||||
test = ["pytest (>=6.2.5,<7)", "pytest-xdist", "tox (>=2.9.1,<3)"]
|
||||
|
||||
[[package]]
|
||||
name = "eth-utils"
|
||||
version = "2.1.0"
|
||||
description = "eth-utils: Common utility functions for python code that interacts with Ethereum"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.7,<4"
|
||||
files = [
|
||||
{file = "eth-utils-2.1.0.tar.gz", hash = "sha256:fcb4c3c1b32947ba92970963f9aaf40da73b04ea1034964ff8c0e70595127138"},
|
||||
{file = "eth_utils-2.1.0-py3-none-any.whl", hash = "sha256:63901e54ec9e4ac16ae0a0d28e1dc48b968c20184d22f2727e5f3ca24b6250bc"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
cytoolz = {version = ">=0.10.1", markers = "implementation_name == \"cpython\""}
|
||||
eth-hash = ">=0.3.1"
|
||||
eth-typing = ">=3.0.0"
|
||||
toolz = {version = ">0.8.2", markers = "implementation_name == \"pypy\""}
|
||||
|
||||
[package.extras]
|
||||
dev = ["Sphinx (>=1.6.5,<2)", "black (>=22)", "bumpversion (>=0.5.3,<1)", "flake8 (==3.7.9)", "hypothesis (>=4.43.0,<5.0.0)", "ipython", "isort (>=4.2.15,<5)", "jinja2 (>=3.0.0,<3.0.1)", "mypy (==0.910)", "pydocstyle (>=5.0.0,<6)", "pytest (>=6.2.5,<7)", "pytest-watch (>=4.1.0,<5)", "pytest-xdist", "sphinx-rtd-theme (>=0.1.9,<2)", "towncrier (>=21,<22)", "tox (==3.14.6)", "twine (>=1.13,<2)", "types-setuptools", "wheel (>=0.30.0,<1.0.0)"]
|
||||
doc = ["Sphinx (>=1.6.5,<2)", "jinja2 (>=3.0.0,<3.0.1)", "sphinx-rtd-theme (>=0.1.9,<2)", "towncrier (>=21,<22)"]
|
||||
lint = ["black (>=22)", "flake8 (==3.7.9)", "isort (>=4.2.15,<5)", "mypy (==0.910)", "pydocstyle (>=5.0.0,<6)", "pytest (>=6.2.5,<7)", "types-setuptools"]
|
||||
test = ["hypothesis (>=4.43.0,<5.0.0)", "pytest (>=6.2.5,<7)", "pytest-xdist", "tox (==3.14.6)", "types-setuptools"]
|
||||
|
||||
[[package]]
|
||||
name = "merlin"
|
||||
version = "0.1.0"
|
||||
description = "merlin is a Python implementation of Merlin transcripts (https://merlin.cool)"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "^3.9"
|
||||
files = []
|
||||
develop = false
|
||||
|
||||
[package.source]
|
||||
type = "git"
|
||||
url = "https://github.com/nalinbhardwaj/curdleproofs.pie"
|
||||
reference = "master"
|
||||
resolved_reference = "805d06785b6ff35fde7148762277dd1ae678beeb"
|
||||
subdirectory = "merlin"
|
||||
[[package]]
|
||||
name = "mypy"
|
||||
version = "0.991"
|
||||
description = "Optional static typing for Python"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "mypy-0.991-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7d17e0a9707d0772f4a7b878f04b4fd11f6f5bcb9b3813975a9b13c9332153ab"},
|
||||
{file = "mypy-0.991-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0714258640194d75677e86c786e80ccf294972cc76885d3ebbb560f11db0003d"},
|
||||
{file = "mypy-0.991-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0c8f3be99e8a8bd403caa8c03be619544bc2c77a7093685dcf308c6b109426c6"},
|
||||
{file = "mypy-0.991-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc9ec663ed6c8f15f4ae9d3c04c989b744436c16d26580eaa760ae9dd5d662eb"},
|
||||
{file = "mypy-0.991-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4307270436fd7694b41f913eb09210faff27ea4979ecbcd849e57d2da2f65305"},
|
||||
{file = "mypy-0.991-cp310-cp310-win_amd64.whl", hash = "sha256:901c2c269c616e6cb0998b33d4adbb4a6af0ac4ce5cd078afd7bc95830e62c1c"},
|
||||
{file = "mypy-0.991-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:d13674f3fb73805ba0c45eb6c0c3053d218aa1f7abead6e446d474529aafc372"},
|
||||
{file = "mypy-0.991-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1c8cd4fb70e8584ca1ed5805cbc7c017a3d1a29fb450621089ffed3e99d1857f"},
|
||||
{file = "mypy-0.991-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:209ee89fbb0deed518605edddd234af80506aec932ad28d73c08f1400ef80a33"},
|
||||
{file = "mypy-0.991-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:37bd02ebf9d10e05b00d71302d2c2e6ca333e6c2a8584a98c00e038db8121f05"},
|
||||
{file = "mypy-0.991-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:26efb2fcc6b67e4d5a55561f39176821d2adf88f2745ddc72751b7890f3194ad"},
|
||||
{file = "mypy-0.991-cp311-cp311-win_amd64.whl", hash = "sha256:3a700330b567114b673cf8ee7388e949f843b356a73b5ab22dd7cff4742a5297"},
|
||||
{file = "mypy-0.991-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:1f7d1a520373e2272b10796c3ff721ea1a0712288cafaa95931e66aa15798813"},
|
||||
{file = "mypy-0.991-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:641411733b127c3e0dab94c45af15fea99e4468f99ac88b39efb1ad677da5711"},
|
||||
{file = "mypy-0.991-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:3d80e36b7d7a9259b740be6d8d906221789b0d836201af4234093cae89ced0cd"},
|
||||
{file = "mypy-0.991-cp37-cp37m-win_amd64.whl", hash = "sha256:e62ebaad93be3ad1a828a11e90f0e76f15449371ffeecca4a0a0b9adc99abcef"},
|
||||
{file = "mypy-0.991-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:b86ce2c1866a748c0f6faca5232059f881cda6dda2a893b9a8373353cfe3715a"},
|
||||
{file = "mypy-0.991-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ac6e503823143464538efda0e8e356d871557ef60ccd38f8824a4257acc18d93"},
|
||||
{file = "mypy-0.991-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:0cca5adf694af539aeaa6ac633a7afe9bbd760df9d31be55ab780b77ab5ae8bf"},
|
||||
{file = "mypy-0.991-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a12c56bf73cdab116df96e4ff39610b92a348cc99a1307e1da3c3768bbb5b135"},
|
||||
{file = "mypy-0.991-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:652b651d42f155033a1967739788c436491b577b6a44e4c39fb340d0ee7f0d70"},
|
||||
{file = "mypy-0.991-cp38-cp38-win_amd64.whl", hash = "sha256:4175593dc25d9da12f7de8de873a33f9b2b8bdb4e827a7cae952e5b1a342e243"},
|
||||
{file = "mypy-0.991-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:98e781cd35c0acf33eb0295e8b9c55cdbef64fcb35f6d3aa2186f289bed6e80d"},
|
||||
{file = "mypy-0.991-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:6d7464bac72a85cb3491c7e92b5b62f3dcccb8af26826257760a552a5e244aa5"},
|
||||
{file = "mypy-0.991-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c9166b3f81a10cdf9b49f2d594b21b31adadb3d5e9db9b834866c3258b695be3"},
|
||||
{file = "mypy-0.991-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8472f736a5bfb159a5e36740847808f6f5b659960115ff29c7cecec1741c648"},
|
||||
{file = "mypy-0.991-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5e80e758243b97b618cdf22004beb09e8a2de1af481382e4d84bc52152d1c476"},
|
||||
{file = "mypy-0.991-cp39-cp39-win_amd64.whl", hash = "sha256:74e259b5c19f70d35fcc1ad3d56499065c601dfe94ff67ae48b85596b9ec1461"},
|
||||
{file = "mypy-0.991-py3-none-any.whl", hash = "sha256:de32edc9b0a7e67c2775e574cb061a537660e51210fbf6006b0b36ea695ae9bb"},
|
||||
{file = "mypy-0.991.tar.gz", hash = "sha256:3c0165ba8f354a6d9881809ef29f1a9318a236a6d81c690094c5df32107bde06"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
mypy-extensions = ">=0.4.3"
|
||||
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
|
||||
typing-extensions = ">=3.10"
|
||||
|
||||
[package.extras]
|
||||
dmypy = ["psutil (>=4.0)"]
|
||||
install-types = ["pip"]
|
||||
python2 = ["typed-ast (>=1.4.0,<2)"]
|
||||
reports = ["lxml"]
|
||||
|
||||
[[package]]
|
||||
name = "mypy-extensions"
|
||||
version = "0.4.3"
|
||||
description = "Experimental type system extensions for programs checked with the mypy typechecker."
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
|
||||
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "pathspec"
|
||||
version = "0.10.3"
|
||||
description = "Utility library for gitignore style pattern matching of file paths."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "pathspec-0.10.3-py3-none-any.whl", hash = "sha256:3c95343af8b756205e2aba76e843ba9520a24dd84f68c22b9f93251507509dd6"},
|
||||
{file = "pathspec-0.10.3.tar.gz", hash = "sha256:56200de4077d9d0791465aa9095a01d421861e405b5096955051deefd697d6f6"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "platformdirs"
|
||||
version = "2.6.2"
|
||||
description = "A small Python package for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "platformdirs-2.6.2-py3-none-any.whl", hash = "sha256:83c8f6d04389165de7c9b6f0c682439697887bca0aa2f1c87ef1826be3584490"},
|
||||
{file = "platformdirs-2.6.2.tar.gz", hash = "sha256:e1fea1fe471b9ff8332e229df3cb7de4f53eeea4998d3b6bfff542115e998bd2"},
|
||||
]
|
||||
|
||||
[package.extras]
|
||||
docs = ["furo (>=2022.12.7)", "proselint (>=0.13)", "sphinx (>=5.3)", "sphinx-autodoc-typehints (>=1.19.5)"]
|
||||
test = ["appdirs (==1.4.4)", "covdefaults (>=2.2.2)", "pytest (>=7.2)", "pytest-cov (>=4)", "pytest-mock (>=3.10)"]
|
||||
|
||||
[[package]]
|
||||
name = "py-ecc"
|
||||
version = "6.0.0"
|
||||
description = "Elliptic curve crypto in python including secp256k1 and alt_bn128"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.6, <4"
|
||||
files = [
|
||||
{file = "py_ecc-6.0.0-py3-none-any.whl", hash = "sha256:54e8aa4c30374fa62d582c599a99f352c153f2971352171318bd6910a643be0b"},
|
||||
{file = "py_ecc-6.0.0.tar.gz", hash = "sha256:3fc8a79e38975e05dc443d25783fd69212a1ca854cc0efef071301a8f7d6ce1d"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
cached-property = ">=1.5.1,<2"
|
||||
eth-typing = ">=3.0.0,<4"
|
||||
eth-utils = ">=2.0.0,<3"
|
||||
mypy-extensions = ">=0.4.1"
|
||||
|
||||
[package.extras]
|
||||
dev = ["bumpversion (>=0.5.3,<1)", "flake8 (==3.5.0)", "mypy (==0.641)", "mypy-extensions (>=0.4.1)", "pytest (==6.2.5)", "pytest-xdist (==1.26.0)", "twine"]
|
||||
lint = ["flake8 (==3.5.0)", "mypy (==0.641)", "mypy-extensions (>=0.4.1)"]
|
||||
test = ["pytest (==6.2.5)", "pytest-xdist (==1.26.0)"]
|
||||
|
||||
[[package]]
|
||||
name = "tomli"
|
||||
version = "2.0.1"
|
||||
description = "A lil' TOML parser"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "tomli-2.0.1-py3-none-any.whl", hash = "sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc"},
|
||||
{file = "tomli-2.0.1.tar.gz", hash = "sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "toolz"
|
||||
version = "0.12.0"
|
||||
description = "List processing tools and functional utilities"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.5"
|
||||
files = [
|
||||
{file = "toolz-0.12.0-py3-none-any.whl", hash = "sha256:2059bd4148deb1884bb0eb770a3cde70e7f954cfbbdc2285f1f2de01fd21eb6f"},
|
||||
{file = "toolz-0.12.0.tar.gz", hash = "sha256:88c570861c440ee3f2f6037c4654613228ff40c93a6c25e0eba70d17282c6194"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "typing-extensions"
|
||||
version = "4.4.0"
|
||||
description = "Backported and Experimental Type Hints for Python 3.7+"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "typing_extensions-4.4.0-py3-none-any.whl", hash = "sha256:16fa4864408f655d35ec496218b85f79b3437c829e93320c7c9215ccfd92489e"},
|
||||
{file = "typing_extensions-4.4.0.tar.gz", hash = "sha256:1511434bb92bf8dd198c12b1cc812e800d4181cfcb867674e0f8279cc93087aa"},
|
||||
]
|
||||
|
||||
[metadata]
|
||||
lock-version = "2.0"
|
||||
python-versions = "^3.9"
|
||||
content-hash = "a24d88f775e9672a0722f50a7b24562c7c804451e871c17c4193104f105002e6"
|
||||
182
poly.py
Normal file
@@ -0,0 +1,182 @@
from curve import Scalar
from enum import Enum


class Basis(Enum):
    LAGRANGE = 1
    MONOMIAL = 2


class Polynomial:
    values: list[Scalar]
    basis: Basis

    def __init__(self, values: list[Scalar], basis: Basis):
        assert all(isinstance(x, Scalar) for x in values)
        assert isinstance(basis, Basis)
        self.values = values
        self.basis = basis

    def __eq__(self, other):
        return (self.basis == other.basis) and (self.values == other.values)

    def __add__(self, other):
        if isinstance(other, Polynomial):
            assert len(self.values) == len(other.values)
            assert self.basis == other.basis

            return Polynomial(
                [x + y for x, y in zip(self.values, other.values)],
                self.basis,
            )
        else:
            assert isinstance(other, Scalar)
            return Polynomial(
                [x + other for x in self.values],
                self.basis,
            )

    def __sub__(self, other):
        if isinstance(other, Polynomial):
            assert len(self.values) == len(other.values)
            assert self.basis == other.basis

            return Polynomial(
                [x - y for x, y in zip(self.values, other.values)],
                self.basis,
            )
        else:
            assert isinstance(other, Scalar)
            return Polynomial(
                [x - other for x in self.values],
                self.basis,
            )

    def __mul__(self, other):
        if isinstance(other, Polynomial):
            assert self.basis == Basis.LAGRANGE
            assert self.basis == other.basis
            assert len(self.values) == len(other.values)

            return Polynomial(
                [x * y for x, y in zip(self.values, other.values)],
                self.basis,
            )
        else:
            assert isinstance(other, Scalar)
            return Polynomial(
                [x * other for x in self.values],
                self.basis,
            )
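    # Note: in the LAGRANGE basis this multiplies evaluations pointwise, which
    # agrees with the true product polynomial only modulo X^n - 1; products
    # that can reach degree n or higher are therefore computed in the 4x
    # coset extended Lagrange basis (see to_coset_extended_lagrange below).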

    def __truediv__(self, other):
        if isinstance(other, Polynomial):
            assert self.basis == Basis.LAGRANGE
            assert self.basis == other.basis
            assert len(self.values) == len(other.values)

            return Polynomial(
                [x / y for x, y in zip(self.values, other.values)],
                self.basis,
            )
        else:
            assert isinstance(other, Scalar)
            return Polynomial(
                [x / other for x in self.values],
                self.basis,
            )

    def shift(self, shift: int):
        assert self.basis == Basis.LAGRANGE
        assert shift < len(self.values)

        return Polynomial(
            self.values[shift:] + self.values[:shift],
            self.basis,
        )
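    # Note: rotating the evaluation list by `shift` turns the evaluations of
    # P(x) at [1, w, w**2, ...] into those of P(w**shift * x); shift(1) is how
    # the shifted accumulator Z(w * x) can be obtained from Z(x).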

    # Convenience method to do FFTs specifically over the subgroup over which
    # all of the proofs are operating
    def fft(self, inv=False):
        # Fast Fourier transform, used to convert between polynomial coefficients
        # and a list of evaluations at the roots of unity
        # See https://vitalik.ca/general/2019/05/12/fft.html
        def _fft(vals, modulus, roots_of_unity):
            if len(vals) == 1:
                return vals
            L = _fft(vals[::2], modulus, roots_of_unity[::2])
            R = _fft(vals[1::2], modulus, roots_of_unity[::2])
            o = [0] * len(vals)
            for i, (x, y) in enumerate(zip(L, R)):
                y_times_root = y * roots_of_unity[i]
                o[i] = (x + y_times_root) % modulus
                o[i + len(L)] = (x - y_times_root) % modulus
            return o

        roots = [x.n for x in Scalar.roots_of_unity(len(self.values))]
        o, nvals = Scalar.field_modulus, [x.n for x in self.values]
        if inv:
            assert self.basis == Basis.LAGRANGE
            # Inverse FFT
            invlen = Scalar(1) / len(self.values)
            reversed_roots = [roots[0]] + roots[1:][::-1]
            return Polynomial(
                [Scalar(x) * invlen for x in _fft(nvals, o, reversed_roots)],
                Basis.MONOMIAL,
            )
        else:
            assert self.basis == Basis.MONOMIAL
            # Regular FFT
            return Polynomial(
                [Scalar(x) for x in _fft(nvals, o, roots)], Basis.LAGRANGE
            )

    def ifft(self):
        return self.fft(True)

    # Converts a list of evaluations at [1, w, w**2... w**(n-1)] to
    # a list of evaluations at
    # [offset, offset * q, offset * q**2 ... offset * q**(4n-1)] where q = w**(1/4)
    # This lets us work with higher-degree polynomials, and the offset lets us
    # avoid the 0/0 problem when computing a division (as long as the offset is
    # chosen randomly)
    def to_coset_extended_lagrange(self, offset):
        assert self.basis == Basis.LAGRANGE
        group_order = len(self.values)
        x_powers = self.ifft().values
        x_powers = [(offset**i * x) for i, x in enumerate(x_powers)] + [Scalar(0)] * (
            group_order * 3
        )
        return Polynomial(x_powers, Basis.MONOMIAL).fft()
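    # A minimal usage sketch (hypothetical values; assumes Scalar from
    # curve.py). The coset extension and its partial inverse below round-trip
    # on the first n coefficients:
    #
    #   f = Polynomial([Scalar(i) for i in [1, 2, 3, 4]], Basis.LAGRANGE)
    #   f4 = f.to_coset_extended_lagrange(Scalar(5))  # 16 evals on the coset
    #   coeffs = f4.coset_extended_lagrange_to_coeffs(Scalar(5))
    #   assert coeffs.values[:4] == f.ifft().values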

    # Convert from offset form into coefficients
    # Note that we can't make a full inverse function of to_coset_extended_lagrange
    # because the output of this might be a deg >= n polynomial, which cannot
    # be expressed via evaluations at n roots of unity
    def coset_extended_lagrange_to_coeffs(self, offset):
        assert self.basis == Basis.LAGRANGE

        shifted_coeffs = self.ifft().values
        inv_offset = 1 / offset
        return Polynomial(
            [v * inv_offset**i for (i, v) in enumerate(shifted_coeffs)],
            Basis.MONOMIAL,
        )

    # Given a polynomial expressed as a list of evaluations at roots of unity,
    # evaluate it at x directly, without using an FFT to convert to coeffs first
    def barycentric_eval(self, x: Scalar):
        assert self.basis == Basis.LAGRANGE

        order = len(self.values)
        roots_of_unity = Scalar.roots_of_unity(order)
        return (
            (Scalar(x) ** order - 1)
            / order
            * sum(
                [
                    value * root / (x - root)
                    for value, root in zip(self.values, roots_of_unity)
                ]
            )
        )
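For reference, `barycentric_eval` above implements the standard barycentric identity for a polynomial given by its evaluations $P(\omega^i)$ at the $n$-th roots of unity (a restatement of the code, not an addition to it):

$$P(x) \;=\; \frac{x^n - 1}{n} \sum_{i=0}^{n-1} \frac{P(\omega^i)\,\omega^i}{x - \omega^i}$$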
500
prover.py
@@ -1,283 +1,315 @@
from compiler import to_assembly, get_public_assignments, \
make_s_polynomials, make_gate_polynomials
from compiler.program import Program, CommonPreprocessedInput
from utils import *
from setup import *
from typing import Optional
from dataclasses import dataclass
from transcript import Transcript, Message1, Message2, Message3, Message4, Message5
from poly import Polynomial, Basis

def prove_from_witness(setup, group_order, code, var_assignments):
eqs = to_assembly(code)

if None not in var_assignments:
var_assignments[None] = 0
@dataclass
class Proof:
msg_1: Message1
msg_2: Message2
msg_3: Message3
msg_4: Message4
msg_5: Message5

variables = [v for (v, c) in eqs]
coeffs = [c for (v, c) in eqs]
def flatten(self):
proof = {}
proof["a_1"] = self.msg_1.a_1
proof["b_1"] = self.msg_1.b_1
proof["c_1"] = self.msg_1.c_1
proof["z_1"] = self.msg_2.z_1
proof["t_lo_1"] = self.msg_3.t_lo_1
proof["t_mid_1"] = self.msg_3.t_mid_1
proof["t_hi_1"] = self.msg_3.t_hi_1
proof["a_eval"] = self.msg_4.a_eval
proof["b_eval"] = self.msg_4.b_eval
proof["c_eval"] = self.msg_4.c_eval
proof["s1_eval"] = self.msg_4.s1_eval
proof["s2_eval"] = self.msg_4.s2_eval
proof["z_shifted_eval"] = self.msg_4.z_shifted_eval
proof["W_z_1"] = self.msg_5.W_z_1
proof["W_zw_1"] = self.msg_5.W_zw_1
return proof

# Compute wire assignments
A = [f_inner(0) for _ in range(group_order)]
B = [f_inner(0) for _ in range(group_order)]
C = [f_inner(0) for _ in range(group_order)]
for i, (in_L, in_R, out) in enumerate(variables):
A[i] = f_inner(var_assignments[in_L])
B[i] = f_inner(var_assignments[in_R])
C[i] = f_inner(var_assignments[out])
A_pt = evaluations_to_point(setup, group_order, A)
B_pt = evaluations_to_point(setup, group_order, B)
C_pt = evaluations_to_point(setup, group_order, C)

public_vars = get_public_assignments(coeffs)
PI = (
[f_inner(-var_assignments[v]) for v in public_vars] +
[f_inner(0) for _ in range(group_order - len(public_vars))]
)
@dataclass
class Prover:
group_order: int
setup: Setup
program: Program
pk: CommonPreprocessedInput

buf = serialize_point(A_pt) + serialize_point(B_pt) + serialize_point(C_pt)
def __init__(self, setup: Setup, program: Program):
self.group_order = program.group_order
self.setup = setup
self.program = program
self.pk = program.common_preprocessed_input()

# The first two Fiat-Shamir challenges
beta = binhash_to_f_inner(keccak256(buf))
gamma = binhash_to_f_inner(keccak256(keccak256(buf)))
def prove(self, witness: dict[Optional[str], int]) -> Proof:
# Initialise Fiat-Shamir transcript
transcript = Transcript(b"plonk")

# Compute the accumulator polynomial for the permutation arguments
S1, S2, S3 = make_s_polynomials(group_order, variables)
Z = [f_inner(1)]
roots_of_unity = get_roots_of_unity(group_order)
for i in range(group_order):
Z.append(
Z[-1] *
(A[i] + beta * roots_of_unity[i] + gamma) *
(B[i] + beta * 2 * roots_of_unity[i] + gamma) *
(C[i] + beta * 3 * roots_of_unity[i] + gamma) /
(A[i] + beta * S1[i] + gamma) /
(B[i] + beta * S2[i] + gamma) /
(C[i] + beta * S3[i] + gamma)
# Collect fixed and public information
# FIXME: Hash pk and PI into transcript
public_vars = self.program.get_public_assignments()
PI = Polynomial(
[Scalar(-witness[v]) for v in public_vars]
+ [Scalar(0) for _ in range(self.group_order - len(public_vars))],
Basis.LAGRANGE,
)
assert Z.pop() == 1
Z_pt = evaluations_to_point(setup, group_order, Z)
alpha = binhash_to_f_inner(keccak256(serialize_point(Z_pt)))
print("Permutation accumulator polynomial successfully generated")
self.PI = PI

# Compute the quotient polynomial
# Round 1
msg_1 = self.round_1(witness)
self.beta, self.gamma = transcript.round_1(msg_1)

# List of roots of unity at 4x fineness
quarter_roots = get_roots_of_unity(group_order * 4)
# Round 2
msg_2 = self.round_2()
self.alpha, self.fft_cofactor = transcript.round_2(msg_2)

# This value could be anything, it just needs to be unpredictable. Lets us
# have evaluation forms at cosets to avoid zero evaluations, so we can
# divide polys without the 0/0 issue
fft_offset = binhash_to_f_inner(
keccak256(keccak256(serialize_point(Z_pt)))
)
# Round 3
msg_3 = self.round_3()
self.zeta = transcript.round_3(msg_3)

fft_expand = lambda x: fft_expand_with_offset(x, fft_offset)
expanded_evals_to_coeffs = lambda x: offset_evals_to_coeffs(x, fft_offset)
# Round 4
msg_4 = self.round_4()
self.v = transcript.round_4(msg_4)

A_big = fft_expand(A)
B_big = fft_expand(B)
C_big = fft_expand(C)
# Z_H = X^N - 1, also in evaluation form in the coset
ZH_big = [
((f_inner(r) * fft_offset) ** group_order - 1)
for r in quarter_roots
]
# Round 5
msg_5 = self.round_5()

QL, QR, QM, QO, QC = make_gate_polynomials(group_order, eqs)
return Proof(msg_1, msg_2, msg_3, msg_4, msg_5)

QL_big, QR_big, QM_big, QO_big, QC_big, PI_big = \
(fft_expand(x) for x in (QL, QR, QM, QO, QC, PI))
def round_1(
self,
witness: dict[Optional[str], int],
) -> Message1:
program = self.program
setup = self.setup
group_order = self.group_order

Z_big = fft_expand(Z)
Z_shifted_big = Z_big[4:] + Z_big[:4]
S1_big = fft_expand(S1)
S2_big = fft_expand(S2)
S3_big = fft_expand(S3)
if None not in witness:
witness[None] = 0

# Equals 1 at x=1 and 0 at other roots of unity
L1_big = fft_expand([f_inner(1)] + [f_inner(0)] * (group_order - 1))
# Compute wire assignments for A, B, C, corresponding:
# - A_values: witness[program.wires()[i].L]
# - B_values: witness[program.wires()[i].R]
# - C_values: witness[program.wires()[i].O]

# Some sanity checks to make sure everything is ok up to here
for i in range(group_order):
# print('a', A[i], 'b', B[i], 'c', C[i])
# print('ql', QL[i], 'qr', QR[i], 'qm', QM[i], 'qo', QO[i], 'qc', QC[i])
# Construct A, B, C Lagrange interpolation polynomials for
# A_values, B_values, C_values

# Compute a_1, b_1, c_1 commitments to A, B, C polynomials

# Sanity check that witness fulfils gate constraints
assert (
A[i] * QL[i] + B[i] * QR[i] + A[i] * B[i] * QM[i] +
C[i] * QO[i] + PI[i] + QC[i] == 0
self.A * self.pk.QL
+ self.B * self.pk.QR
+ self.A * self.B * self.pk.QM
+ self.C * self.pk.QO
+ self.PI
+ self.pk.QC
== Polynomial([Scalar(0)] * group_order, Basis.LAGRANGE)
)

# Return a_1, b_1, c_1
return Message1(a_1, b_1, c_1)

def round_2(self) -> Message2:
group_order = self.group_order
setup = self.setup

# Using A, B, C values, and pk.S1, pk.S2, pk.S3, compute
# Z_values for the permutation grand product polynomial Z
#
# Note the convenience function:
#   self.rlc(val1, val2) = val1 + self.beta * val2 + self.gamma

# Check that the last term Z_n = 1
assert Z_values.pop() == 1

# Sanity-check that Z was computed correctly
for i in range(group_order):
assert (
self.rlc(self.A.values[i], roots_of_unity[i])
* self.rlc(self.B.values[i], 2 * roots_of_unity[i])
* self.rlc(self.C.values[i], 3 * roots_of_unity[i])
) * Z_values[i] - (
self.rlc(self.A.values[i], self.pk.S1.values[i])
* self.rlc(self.B.values[i], self.pk.S2.values[i])
* self.rlc(self.C.values[i], self.pk.S3.values[i])
) * Z_values[
(i + 1) % group_order
] == 0
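# In other words (a restatement of the sanity check above, with w an n-th
# root of unity and rlc(a, b) = a + beta * b + gamma):
#   Z_0 = 1
#   Z_{i+1} = Z_i * rlc(A_i, w^i) * rlc(B_i, 2 * w^i) * rlc(C_i, 3 * w^i)
#                 / (rlc(A_i, S1_i) * rlc(B_i, S2_i) * rlc(C_i, S3_i))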

# Construct Z, Lagrange interpolation polynomial for Z_values
# Compute z_1 commitment to Z polynomial

# Return z_1
return Message2(z_1)

def round_3(self) -> Message3:
group_order = self.group_order
setup = self.setup

# Compute the quotient polynomial

# List of roots of unity at 4x fineness, i.e. the powers of µ
# where µ^(4n) = 1

# Using self.fft_expand, move A, B, C into coset extended Lagrange basis

# Expand public inputs polynomial PI into coset extended Lagrange

# Expand selector polynomials pk.QL, pk.QR, pk.QM, pk.QO, pk.QC
# into the coset extended Lagrange basis

# Expand permutation grand product polynomial Z into coset extended
# Lagrange basis

# Expand shifted Z(ω) into coset extended Lagrange basis

# Expand permutation polynomials pk.S1, pk.S2, pk.S3 into coset
# extended Lagrange basis

# Compute Z_H = X^N - 1, also in evaluation form in the coset

# Compute L0, the Lagrange basis polynomial that evaluates to 1 at x = 1 = ω^0
# and 0 at other roots of unity

# Expand L0 into the coset extended Lagrange basis
L0_big = self.fft_expand(
Polynomial([Scalar(1)] + [Scalar(0)] * (group_order - 1), Basis.LAGRANGE)
)

for i in range(group_order):
# Compute the quotient polynomial (called T(x) in the paper)
# It is only possible to construct this polynomial if the following
# equations are true at all roots of unity {1, w ... w^(n-1)}:
# 1. All gates are correct:
#    A * QL + B * QR + A * B * QM + C * QO + PI + QC = 0
#
# 2. The permutation accumulator is valid:
#    Z(wx) = Z(x) * (rlc of A, X, 1) * (rlc of B, 2X, 1) *
#            (rlc of C, 3X, 1) / (rlc of A, S1, 1) /
#            (rlc of B, S2, 1) / (rlc of C, S3, 1)
#    rlc = random linear combination: term_1 + beta * term2 + gamma * term3
#
# 3. The permutation accumulator equals 1 at the start point
#    (Z - 1) * L0 = 0
#    L0 = Lagrange polynomial, equal to 1 at x = 1 and 0 at the other roots of unity

# Sanity check: QUOT has degree < 3n
assert (
(A[i] + beta * roots_of_unity[i] + gamma) *
(B[i] + beta * 2 * roots_of_unity[i] + gamma) *
(C[i] + beta * 3 * roots_of_unity[i] + gamma)
) * Z[i] - (
(A[i] + beta * S1[i] + gamma) *
(B[i] + beta * S2[i] + gamma) *
(C[i] + beta * S3[i] + gamma)
) * Z[(i+1) % group_order] == 0
self.expanded_evals_to_coeffs(QUOT_big).values[-group_order:]
== [0] * group_order
)
print("Generated the quotient polynomial")

# Compute the quotient polynomial (called T(x) in the paper)
# It is only possible to construct this polynomial if the following
# equations are true at all roots of unity {1, w ... w^(n-1)}:
#
# 1. All gates are correct:
#    A * QL + B * QR + A * B * QM + C * QO + PI + QC = 0
# 2. The permutation accumulator is valid:
#    Z(wx) = Z(x) * (rlc of A, X, 1) * (rlc of B, 2X, 1) *
#            (rlc of C, 3X, 1) / (rlc of A, S1, 1) /
#            (rlc of B, S2, 1) / (rlc of C, S3, 1)
#    rlc = random linear combination: term_1 + beta * term2 + gamma * term3
# 3. The permutation accumulator equals 1 at the start point
#    (Z - 1) * L1 = 0
#    L1 = Lagrange polynomial, equal to 1 at x = 1 and 0 at the other roots of unity
# Split up T into T1, T2 and T3 (needed because T has degree 3n, so is
# too big for the trusted setup)

QUOT_big = [((
A_big[i] * QL_big[i] +
B_big[i] * QR_big[i] +
A_big[i] * B_big[i] * QM_big[i] +
C_big[i] * QO_big[i] +
PI_big[i] +
QC_big[i]
) + (
(A_big[i] + beta * fft_offset * quarter_roots[i] + gamma) *
(B_big[i] + beta * 2 * fft_offset * quarter_roots[i] + gamma) *
(C_big[i] + beta * 3 * fft_offset * quarter_roots[i] + gamma)
) * alpha * Z_big[i] - (
(A_big[i] + beta * S1_big[i] + gamma) *
(B_big[i] + beta * S2_big[i] + gamma) *
(C_big[i] + beta * S3_big[i] + gamma)
) * alpha * Z_shifted_big[i] + (
(Z_big[i] - 1) * L1_big[i] * alpha**2
)) / ZH_big[i] for i in range(group_order * 4)]
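For orientation, the `QUOT_big` expression above evaluates, at each of the $4n$ coset points, the quotient relation (a transcription of the code, with $(k_1, k_2, k_3) = (1, 2, 3)$ and $w_1, w_2, w_3 = A, B, C$):

$$T(x)\,(x^n - 1) \;=\; \big(A\,Q_L + B\,Q_R + AB\,Q_M + C\,Q_O + PI + Q_C\big)(x) \;+\; \alpha \Big( Z(x) \prod_{j=1}^{3} \big(w_j(x) + \beta k_j x + \gamma\big) \;-\; Z(\omega x) \prod_{j=1}^{3} \big(w_j(x) + \beta S_j(x) + \gamma\big) \Big) \;+\; \alpha^2 \big(Z(x) - 1\big) L_1(x)$$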
# Sanity check that we've computed T1, T2, T3 correctly
assert (
T1.barycentric_eval(fft_cofactor)
+ T2.barycentric_eval(fft_cofactor) * fft_cofactor**group_order
+ T3.barycentric_eval(fft_cofactor) * fft_cofactor ** (group_order * 2)
) == QUOT_big.values[0]

all_coeffs = expanded_evals_to_coeffs(QUOT_big)
print("Generated T1, T2, T3 polynomials")

# Sanity check: QUOT has degree < 3n
assert (
expanded_evals_to_coeffs(QUOT_big)[-group_order:] ==
[0] * group_order
)
print("Generated the quotient polynomial")
# Compute commitments t_lo_1, t_mid_1, t_hi_1 to T1, T2, T3 polynomials

# Split up T into T1, T2 and T3 (needed because T has degree 3n, so is
# too big for the trusted setup)
T1 = f_inner_fft(all_coeffs[:group_order])
T2 = f_inner_fft(all_coeffs[group_order: group_order * 2])
T3 = f_inner_fft(all_coeffs[group_order * 2: group_order * 3])
# Return t_lo_1, t_mid_1, t_hi_1
return Message3(t_lo_1, t_mid_1, t_hi_1)

T1_pt = evaluations_to_point(setup, group_order, T1)
T2_pt = evaluations_to_point(setup, group_order, T2)
T3_pt = evaluations_to_point(setup, group_order, T3)
print("Generated T1, T2, T3 polynomials")
def round_4(self) -> Message4:
# Compute evaluations to be used in constructing the linearization polynomial.

# Compute a_eval = A(zeta)
# Compute b_eval = B(zeta)
# Compute c_eval = C(zeta)
# Compute s1_eval = pk.S1(zeta)
# Compute s2_eval = pk.S2(zeta)
# Compute z_shifted_eval = Z(zeta * ω)

buf2 = serialize_point(T1_pt)+serialize_point(T2_pt)+serialize_point(T3_pt)
zed = binhash_to_f_inner(keccak256(buf2))
# Return a_eval, b_eval, c_eval, s1_eval, s2_eval, z_shifted_eval
return Message4(a_eval, b_eval, c_eval, s1_eval, s2_eval, z_shifted_eval)

# Sanity check that we've computed T1, T2, T3 correctly
assert (
barycentric_eval_at_point(T1, fft_offset) +
barycentric_eval_at_point(T2, fft_offset) * fft_offset**group_order +
barycentric_eval_at_point(T3, fft_offset) * fft_offset**(group_order*2)
) == QUOT_big[0]
def round_5(self) -> Message5:
# Evaluate the Lagrange basis polynomial L0 at zeta
# Evaluate the vanishing polynomial Z_H(X) = X^n - 1 at zeta

# Compute the "linearization polynomial" R. This is a clever way to avoid
# needing to provide evaluations of _all_ the polynomials that we are
# checking an equation between: instead, we can "skip" the first
# multiplicand in each term. The idea is that we construct a
# polynomial which is constructed to equal 0 at Z only if the equations
# that we are checking are correct, and which the verifier can reconstruct
# the KZG commitment to, and we provide proofs to verify that it actually
# equals 0 at Z
#
# In order for the verifier to be able to reconstruct the commitment to R,
# it has to be "linear" in the proof items, hence why we can only use each
# proof item once; any further multiplicands in each term need to be
# replaced with their evaluations at Z, which do still need to be provided
# Move T1, T2, T3 into the coset extended Lagrange basis
# Move pk.QL, pk.QR, pk.QM, pk.QO, pk.QC into the coset extended Lagrange basis
# Move Z into the coset extended Lagrange basis
# Move pk.S3 into the coset extended Lagrange basis

A_ev = barycentric_eval_at_point(A, zed)
B_ev = barycentric_eval_at_point(B, zed)
C_ev = barycentric_eval_at_point(C, zed)
S1_ev = barycentric_eval_at_point(S1, zed)
S2_ev = barycentric_eval_at_point(S2, zed)
Z_shifted_ev = barycentric_eval_at_point(Z, zed * roots_of_unity[1])
# Compute the "linearization polynomial" R. This is a clever way to avoid
# needing to provide evaluations of _all_ the polynomials that we are
# checking an equation between: instead, we can "skip" the first
# multiplicand in each term. The idea is that we construct a
# polynomial which is constructed to equal 0 at Z only if the equations
# that we are checking are correct, and which the verifier can reconstruct
# the KZG commitment to, and we provide proofs to verify that it actually
# equals 0 at Z
#
# In order for the verifier to be able to reconstruct the commitment to R,
# it has to be "linear" in the proof items, hence why we can only use each
# proof item once; any further multiplicands in each term need to be
# replaced with their evaluations at Z, which do still need to be provided

L1_ev = barycentric_eval_at_point([1] + [0] * (group_order - 1), zed)
ZH_ev = zed ** group_order - 1
PI_ev = barycentric_eval_at_point(PI, zed)
# Commit to R

T1_big = fft_expand(T1)
T2_big = fft_expand(T2)
T3_big = fft_expand(T3)
# Sanity-check R
assert R.barycentric_eval(zeta) == 0

R_big = [(
A_ev * QL_big[i] +
B_ev * QR_big[i] +
A_ev * B_ev * QM_big[i] +
C_ev * QO_big[i] +
PI_ev +
QC_big[i]
) + (
(A_ev + beta * zed + gamma) *
(B_ev + beta * 2 * zed + gamma) *
(C_ev + beta * 3 * zed + gamma)
) * alpha * Z_big[i] - (
(A_ev + beta * S1_ev + gamma) *
(B_ev + beta * S2_ev + gamma) *
(C_ev + beta * S3_big[i] + gamma)
) * alpha * Z_shifted_ev + (
(Z_big[i] - 1) * L1_ev
) * alpha**2 - (
T1_big[i] +
zed ** group_order * T2_big[i] +
zed ** (group_order * 2) * T3_big[i]
) * ZH_ev for i in range(4 * group_order)]
print("Generated linearization polynomial R")

R_coeffs = expanded_evals_to_coeffs(R_big)
assert R_coeffs[group_order:] == [0] * (group_order * 3)
R = f_inner_fft(R_coeffs[:group_order])
# Generate proof that W(z) = 0 and that the provided evaluations of
# A, B, C, S1, S2 are correct

print('R_pt', evaluations_to_point(setup, group_order, R))
# Move A, B, C into the coset extended Lagrange basis
# Move pk.S1, pk.S2 into the coset extended Lagrange basis

assert barycentric_eval_at_point(R, zed) == 0
# In the COSET EXTENDED LAGRANGE BASIS,
# Construct W_Z = (
#     R
#   + v * (A - a_eval)
#   + v**2 * (B - b_eval)
#   + v**3 * (C - c_eval)
#   + v**4 * (S1 - s1_eval)
#   + v**5 * (S2 - s2_eval)
# ) / (X - zeta)

print("Generated linearization polynomial R")
# Check that degree of W_z is not greater than n
assert W_z_coeffs[group_order:] == [0] * (group_order * 3)

buf3 = b''.join([
serialize_int(x) for x in
(A_ev, B_ev, C_ev, S1_ev, S2_ev, Z_shifted_ev)
])
v = binhash_to_f_inner(keccak256(buf3))
# Compute W_z_1 commitment to W_z

# Generate proof that W(z) = 0 and that the provided evaluations of
# A, B, C, S1, S2 are correct
# Generate proof that the provided evaluation of Z(z*w) is correct. This
# awkwardly different term is needed because the permutation accumulator
# polynomial Z is the one place where we have to check between adjacent
# coordinates, and not just within one coordinate.
# In other words: Compute W_zw = (Z - z_shifted_eval) / (X - zeta * ω)

W_z_big = [(
R_big[i] +
v * (A_big[i] - A_ev) +
v**2 * (B_big[i] - B_ev) +
v**3 * (C_big[i] - C_ev) +
v**4 * (S1_big[i] - S1_ev) +
v**5 * (S2_big[i] - S2_ev)
) / (fft_offset * quarter_roots[i] - zed) for i in range(group_order * 4)]
# Check that degree of W_z is not greater than n
assert W_zw_coeffs[group_order:] == [0] * (group_order * 3)

W_z_coeffs = expanded_evals_to_coeffs(W_z_big)
assert W_z_coeffs[group_order:] == [0] * (group_order * 3)
W_z = f_inner_fft(W_z_coeffs[:group_order])
W_z_pt = evaluations_to_point(setup, group_order, W_z)
# Compute W_z_1 commitment to W_z

# Generate proof that the provided evaluation of Z(z*w) is correct. This
# awkwardly different term is needed because the permutation accumulator
# polynomial Z is the one place where we have to check between adjacent
# coordinates, and not just within one coordinate.
print("Generated final quotient witness polynomials")

W_zw_big = [
(Z_big[i] - Z_shifted_ev) /
(fft_offset * quarter_roots[i] - zed * roots_of_unity[1])
for i in range(group_order * 4)]
# Return W_z_1, W_zw_1
return Message5(W_z_1, W_zw_1)

W_zw_coeffs = expanded_evals_to_coeffs(W_zw_big)
assert W_zw_coeffs[group_order:] == [0] * (group_order * 3)
W_zw = f_inner_fft(W_zw_coeffs[:group_order])
W_zw_pt = evaluations_to_point(setup, group_order, W_zw)
def fft_expand(self, x: Polynomial):
return x.to_coset_extended_lagrange(self.fft_cofactor)

print("Generated final quotient witness polynomials")
return (
A_pt, B_pt, C_pt, Z_pt, T1_pt, T2_pt, T3_pt, W_z_pt, W_zw_pt,
A_ev, B_ev, C_ev, S1_ev, S2_ev, Z_shifted_ev
)
def expanded_evals_to_coeffs(self, x: Polynomial):
return x.coset_extended_lagrange_to_coeffs(self.fft_cofactor)

def rlc(self, term_1, term_2):
return term_1 + term_2 * self.beta + self.gamma
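# For example, self.rlc(self.A.values[i], roots_of_unity[i]) produces the
# term A_i + beta * w^i + gamma that appears in the permutation grand product.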
23
pyproject.toml
Normal file
@@ -0,0 +1,23 @@
[tool.poetry]
name = "plonkathon"
version = "0.1.0"
description = "A simple Python implementation of PLONK adapted from py_plonk"
authors = ["0xPARC / Vitalik Buterin"]
license = "MIT"
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.9"
py-ecc = "^6.0.0"
merlin = {git = "https://github.com/nalinbhardwaj/curdleproofs.pie", rev = "master", subdirectory = "merlin"}

[tool.poetry.group.dev.dependencies]
mypy = "^0.991"
black = "^22.12.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[tool.mypy]
explicit_package_bases = true
89
setup.py
Normal file
@@ -0,0 +1,89 @@
from utils import *
import py_ecc.bn128 as b
from curve import ec_lincomb, G1Point, G2Point
from compiler.program import CommonPreprocessedInput
from verifier import VerificationKey
from dataclasses import dataclass
from poly import Polynomial, Basis

# Recover the trusted setup from a file in the format used in
# https://github.com/iden3/snarkjs#7-prepare-phase-2
SETUP_FILE_G1_STARTPOS = 80
SETUP_FILE_POWERS_POS = 60


@dataclass
class Setup(object):
    #   ([1]₁, [x]₁, ..., [x^{d-1}]₁)
    # = ( G,    xG,  ...,  x^{d-1}G ), where G is a generator of G_1
    powers_of_x: list[G1Point]
    # [x]₂ = xH, where H is a generator of G_2
    X2: G2Point

    @classmethod
    def from_file(cls, filename):
        contents = open(filename, "rb").read()
        # Byte 60 gives you the base-2 log of how many powers there are
        powers = 2 ** contents[SETUP_FILE_POWERS_POS]
        # Extract G1 points, which start at byte 80
        values = [
            int.from_bytes(contents[i : i + 32], "little")
            for i in range(
                SETUP_FILE_G1_STARTPOS, SETUP_FILE_G1_STARTPOS + 32 * powers * 2, 32
            )
        ]
        assert max(values) < b.field_modulus
        # The points are encoded in a weird encoding, where all x and y points
        # are multiplied by a factor (for montgomery optimization?). We can
        # extract the factor because we know the first point is the generator.
        factor = b.FQ(values[0]) / b.G1[0]
values = [b.FQ(x) / factor for x in values]
|
||||
powers_of_x = [(values[i * 2], values[i * 2 + 1]) for i in range(powers)]
|
||||
print("Extracted G1 side, X^1 point: {}".format(powers_of_x[1]))
|
||||
# Search for start of G2 points. We again know that the first point is
|
||||
# the generator.
|
||||
pos = SETUP_FILE_G1_STARTPOS + 32 * powers * 2
|
||||
target = (factor * b.G2[0].coeffs[0]).n
|
||||
while pos < len(contents):
|
||||
v = int.from_bytes(contents[pos : pos + 32], "little")
|
||||
if v == target:
|
||||
break
|
||||
pos += 1
|
||||
print("Detected start of G2 side at byte {}".format(pos))
|
||||
X2_encoding = contents[pos + 32 * 4 : pos + 32 * 8]
|
||||
X2_values = [
|
||||
b.FQ(int.from_bytes(X2_encoding[i : i + 32], "little")) / factor
|
||||
for i in range(0, 128, 32)
|
||||
]
|
||||
X2 = (b.FQ2(X2_values[:2]), b.FQ2(X2_values[2:]))
|
||||
assert b.is_on_curve(X2, b.b2)
|
||||
print("Extracted G2 side, X^1 point: {}".format(X2))
|
||||
# assert b.pairing(b.G2, powers_of_x[1]) == b.pairing(X2, b.G1)
|
||||
# print("X^1 points checked consistent")
|
||||
return cls(powers_of_x, X2)
|
||||
|
||||
# Encodes the KZG commitment that evaluates to the given values in the group
|
||||
def commit(self, values: Polynomial) -> G1Point:
|
||||
assert values.basis == Basis.LAGRANGE
|
||||
|
||||
# inverse FFT from Lagrange basis to monomial basis
|
||||
coeffs = values.ifft().values
|
||||
if len(coeffs) > len(self.powers_of_x):
|
||||
raise Exception("Not enough powers in setup")
|
||||
return ec_lincomb([(s, x) for s, x in zip(self.powers_of_x, coeffs)])
|
||||
|
||||
# Generate the verification key for this program with the given setup
|
||||
def verification_key(self, pk: CommonPreprocessedInput) -> VerificationKey:
|
||||
return VerificationKey(
|
||||
pk.group_order,
|
||||
self.commit(pk.QM),
|
||||
self.commit(pk.QL),
|
||||
self.commit(pk.QR),
|
||||
self.commit(pk.QO),
|
||||
self.commit(pk.QC),
|
||||
self.commit(pk.S1),
|
||||
self.commit(pk.S2),
|
||||
self.commit(pk.S3),
|
||||
self.X2,
|
||||
Scalar.root_of_unity(pk.group_order),
|
||||
)
|
||||
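The commit method is just [f(x)]₁ = Σᵢ cᵢ·[xⁱ]₁: a linear combination of the setup's G1 points weighted by the polynomial's monomial coefficients. A toy illustration with a known (and therefore insecure) secret, using py_ecc's plain bn128 API:

```python
import py_ecc.bn128 as b

secret = 7  # stand-in for the trusted-setup secret; never known in a real ceremony
powers_of_x = [b.multiply(b.G1, pow(secret, i, b.curve_order)) for i in range(4)]

coeffs = [3, 1, 0, 2]  # f(X) = 2X^3 + X + 3
commitment = b.Z1      # point at infinity
for point, coeff in zip(powers_of_x, coeffs):
    commitment = b.add(commitment, b.multiply(point, coeff))

# A KZG commitment is f evaluated "in the exponent" at the secret point:
assert commitment == b.multiply(b.G1, (2 * 7**3 + 7 + 3) % b.curve_order)
```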
test.py (180 lines changed)
@@ -1,23 +1,32 @@
import compiler as c
import prover as p
import verifier as v
from compiler.program import Program
from setup import Setup
from prover import Prover
from verifier import VerificationKey
import json
from mini_poseidon import rc, mds, poseidon_hash
from test.mini_poseidon import rc, mds, poseidon_hash
from utils import *


def basic_test():
    setup = c.Setup.from_file('powersOfTau28_hez_final_11.ptau')
    # Extract 2^28 powers of tau
    setup = Setup.from_file("test/powersOfTau28_hez_final_11.ptau")
    print("Extracted setup")
    vk = c.make_verification_key(setup, 8, ['c <== a * b'])
    program = Program(["c <== a * b"], 8)
    vk = setup.verification_key(program.common_preprocessed_input())
    print("Generated verification key")
    their_output = json.load(open('main.plonk.vkey.json'))
    for key in ('Qm', 'Ql', 'Qr', 'Qo', 'Qc', 'S1', 'S2', 'S3', 'X_2'):
        if c.interpret_json_point(their_output[key]) != vk[key]:
            raise Exception("Mismatch {}: ours {} theirs {}"
                            .format(key, vk[key], their_output[key]))
    assert vk['w'] == int(their_output['w'])
    their_output = json.load(open("test/main.plonk.vkey.json"))
    for key in ("Qm", "Ql", "Qr", "Qo", "Qc", "S1", "S2", "S3", "X_2"):
        if interpret_json_point(their_output[key]) != getattr(vk, key):
            raise Exception(
                "Mismatch {}: ours {} theirs {}".format(
                    key, getattr(vk, key), their_output[key]
                )
            )
    assert getattr(vk, "w") == int(their_output["w"])
    print("Basic test success")
    return setup


# Equivalent to this zkrepl code:
#
# template Example () {
@@ -27,47 +36,61 @@ def basic_test():
#     c <== a * b + a;
# }
def ab_plus_a_test(setup):
    vk = c.make_verification_key(setup, 8, ['ab === a - c', '-ab === a * b'])
    program = Program(["ab === a - c", "-ab === a * b"], 8)
    vk = setup.verification_key(program.common_preprocessed_input())
    print("Generated verification key")
    their_output = json.load(open('main.plonk.vkey-58.json'))
    for key in ('Qm', 'Ql', 'Qr', 'Qo', 'Qc', 'S1', 'S2', 'S3', 'X_2'):
        if c.interpret_json_point(their_output[key]) != vk[key]:
            raise Exception("Mismatch {}: ours {} theirs {}"
                            .format(key, vk[key], their_output[key]))
    assert vk['w'] == int(their_output['w'])
    their_output = json.load(open("test/main.plonk.vkey-58.json"))
    for key in ("Qm", "Ql", "Qr", "Qo", "Qc", "S1", "S2", "S3", "X_2"):
        if interpret_json_point(their_output[key]) != getattr(vk, key):
            raise Exception(
                "Mismatch {}: ours {} theirs {}".format(
                    key, getattr(vk, key), their_output[key]
                )
            )
    assert getattr(vk, "w") == int(their_output["w"])
    print("ab+a test success")


def one_public_input_test(setup):
    vk = c.make_verification_key(setup, 8, ['c public', 'c === a * b'])
    program = Program(["c public", "c === a * b"], 8)
    vk = setup.verification_key(program.common_preprocessed_input())
    print("Generated verification key")
    their_output = json.load(open('main.plonk.vkey-59.json'))
    for key in ('Qm', 'Ql', 'Qr', 'Qo', 'Qc', 'S1', 'S2', 'S3', 'X_2'):
        if c.interpret_json_point(their_output[key]) != vk[key]:
            raise Exception("Mismatch {}: ours {} theirs {}"
                            .format(key, vk[key], their_output[key]))
    assert vk['w'] == int(their_output['w'])
    their_output = json.load(open("test/main.plonk.vkey-59.json"))
    for key in ("Qm", "Ql", "Qr", "Qo", "Qc", "S1", "S2", "S3", "X_2"):
        if interpret_json_point(their_output[key]) != getattr(vk, key):
            raise Exception(
                "Mismatch {}: ours {} theirs {}".format(
                    key, getattr(vk, key), their_output[key]
                )
            )
    assert getattr(vk, "w") == int(their_output["w"])
    print("One public input test success")


def prover_test(setup):
    print("Beginning prover test")
    eqs = ['e public', 'c <== a * b', 'e <== c * d']
    assignments = {'a': 3, 'b': 4, 'c': 12, 'd': 5, 'e': 60}
    return p.prove_from_witness(setup, 8, eqs, assignments)
    program = Program(["e public", "c <== a * b", "e <== c * d"], 8)
    assignments = {"a": 3, "b": 4, "c": 12, "d": 5, "e": 60}
    prover = Prover(setup, program)
    proof = prover.prove(assignments)
    print("Prover test success")
    return proof


def verifier_test(setup, proof):
    print("Beginning verifier test")
    eqs = ['e public', 'c <== a * b', 'e <== c * d']
    program = Program(["e public", "c <== a * b", "e <== c * d"], 8)
    public = [60]
    vk = c.make_verification_key(setup, 8, eqs)
    assert v.verify_proof(setup, 8, vk, proof, public, optimized=False)
    assert v.verify_proof(setup, 8, vk, proof, public, optimized=True)
    vk = setup.verification_key(program.common_preprocessed_input())
    assert vk.verify_proof(8, proof, public)
    assert vk.verify_proof_unoptimized(8, proof, public)
    print("Verifier test success")


def factorization_test(setup):
    print("Beginning test: prove you know small integers that multiply to 91")
    eqs = """
n public
    program = Program.from_str(
        """n public
pb0 === pb0 * pb0
pb1 === pb1 * pb1
pb2 === pb2 * pb2
@@ -82,44 +105,56 @@ def factorization_test(setup):
qb01 <== qb0 + 2 * qb1
qb012 <== qb01 + 4 * qb2
q <== qb012 + 8 * qb3
n <== p * q
"""
    n <== p * q""",
        16,
    )
    public = [91]
    vk = c.make_verification_key(setup, 16, eqs)
    vk = setup.verification_key(program.common_preprocessed_input())
    print("Generated verification key")
    assignments = c.fill_variable_assignments(eqs, {
        'pb3': 1, 'pb2': 1, 'pb1': 0, 'pb0': 1,
        'qb3': 0, 'qb2': 1, 'qb1': 1, 'qb0': 1,
    })
    proof = p.prove_from_witness(setup, 16, eqs, assignments)
    assignments = program.fill_variable_assignments(
        {
            "pb3": 1,
            "pb2": 1,
            "pb1": 0,
            "pb0": 1,
            "qb3": 0,
            "qb2": 1,
            "qb1": 1,
            "qb0": 1,
        }
    )
    prover = Prover(setup, program)
    proof = prover.prove(assignments)
    print("Generated proof")
    assert v.verify_proof(setup, 16, vk, proof, public, optimized=True)
    assert vk.verify_proof(16, proof, public)
    print("Factorization test success!")


def output_proof_lang():
def output_proof_lang() -> str:
    o = []
    o.append('L0 public')
    o.append('M0 public')
    o.append('M64 public')
    o.append('R0 <== 0')
    o.append("L0 public")
    o.append("M0 public")
    o.append("M64 public")
    o.append("R0 <== 0")
    for i in range(64):
        for j, pos in enumerate(('L', 'M', 'R')):
            f = {'x': i, 'r': rc[i][j], 'p': pos}
            if i < 4 or i >= 60 or pos == 'L':
                o.append('{p}adj{x} <== {p}{x} + {r}'.format(**f))
                o.append('{p}sq{x} <== {p}adj{x} * {p}adj{x}'.format(**f))
                o.append('{p}qd{x} <== {p}sq{x} * {p}sq{x}'.format(**f))
                o.append('{p}qn{x} <== {p}qd{x} * {p}adj{x}'.format(**f))
        for j, pos in enumerate(("L", "M", "R")):
            f = {"x": i, "r": rc[i][j], "p": pos}
            if i < 4 or i >= 60 or pos == "L":
                o.append("{p}adj{x} <== {p}{x} + {r}".format(**f))
                o.append("{p}sq{x} <== {p}adj{x} * {p}adj{x}".format(**f))
                o.append("{p}qd{x} <== {p}sq{x} * {p}sq{x}".format(**f))
                o.append("{p}qn{x} <== {p}qd{x} * {p}adj{x}".format(**f))
            else:
                o.append('{p}qn{x} <== {p}{x} + {r}'.format(**f))
        for j, pos in enumerate(('L', 'M', 'R')):
            f = {'x': i, 'p': pos, 'm': mds[j]}
            o.append('{p}suma{x} <== Lqn{x} * {m}'.format(**f))
            f = {'x': i, 'p': pos, 'm': mds[j+1]}
            o.append('{p}sumb{x} <== {p}suma{x} + Mqn{x} * {m}'.format(**f))
            f = {'x': i, 'xp1': i+1, 'p': pos, 'm': mds[j+2]}
            o.append('{p}{xp1} <== {p}sumb{x} + Rqn{x} * {m}'.format(**f))
    return '\n'.join(o)
                o.append("{p}qn{x} <== {p}{x} + {r}".format(**f))
        for j, pos in enumerate(("L", "M", "R")):
            f = {"x": i, "p": pos, "m": mds[j]}
            o.append("{p}suma{x} <== Lqn{x} * {m}".format(**f))
            f = {"x": i, "p": pos, "m": mds[j + 1]}
            o.append("{p}sumb{x} <== {p}suma{x} + Mqn{x} * {m}".format(**f))
            f = {"x": i, "xp1": i + 1, "p": pos, "m": mds[j + 2]}
            o.append("{p}{xp1} <== {p}sumb{x} + Rqn{x} * {m}".format(**f))
    return "\n".join(o)


def poseidon_test(setup):
    # PLONK-prove the correctness of a Poseidon execution. Note that this is
@@ -127,16 +162,19 @@ def poseidon_test(setup):
    # a custom PLONK gate to do a round in a single gate
    expected_value = poseidon_hash(1, 2)
    # Generate code for proof
    code = output_proof_lang()
    program = Program.from_str(output_proof_lang(), 1024)
    print("Generated code for Poseidon test")
    assignments = c.fill_variable_assignments(code, {'L0': 1, 'M0': 2})
    vk = c.make_verification_key(setup, 1024, code)
    assignments = program.fill_variable_assignments({"L0": 1, "M0": 2})
    vk = setup.verification_key(program.common_preprocessed_input())
    print("Generated verification key")
    proof = p.prove_from_witness(setup, 1024, code, assignments)
    prover = Prover(setup, program)
    proof = prover.prove(assignments)
    print("Generated proof")
    assert v.verify_proof(setup, 1024, vk, proof, [1, 2, expected_value])
    assert vk.verify_proof(1024, proof, [1, 2, expected_value])
    print("Verified proof!")


if __name__ == '__main__':
if __name__ == "__main__":
    setup = basic_test()
    ab_plus_a_test(setup)
    one_public_input_test(setup)
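A quick sanity check of the factorization witness used above: the `x === x * x` constraints force each bit variable to 0 or 1, and the weighted sums recompose p and q (the p-side lines mirror the q-side lines shown in the hunk), so that n = p·q = 91:

```python
pb = {"pb0": 1, "pb1": 0, "pb2": 1, "pb3": 1}
qb = {"qb0": 1, "qb1": 1, "qb2": 1, "qb3": 0}
# x === x * x only holds for x in {0, 1}
assert all(x * x == x for x in (*pb.values(), *qb.values()))
p = pb["pb0"] + 2 * pb["pb1"] + 4 * pb["pb2"] + 8 * pb["pb3"]  # = 13
q = qb["qb0"] + 2 * qb["qb1"] + 4 * qb["qb2"] + 8 * qb["qb3"]  # = 7
assert p * q == 91  # the public input the verifier sees
```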
test/__init__.py (new empty file)

test/mini_poseidon.py
@@ -1,9 +1,7 @@
from py_ecc.fields.field_elements import FQ as Field
from py_ecc import bn128 as b
import json

class f_inner(Field):
    field_modulus = b.curve_order
from curve import Scalar

# Mimics the Poseidon hash for params:
#
@@ -19,21 +17,22 @@ class f_inner(Field):
# https://github.com/ingonyama-zk/poseidon-hash

rc = [
    [f_inner(a), f_inner(b), f_inner(c)]
    for (a,b,c) in json.load(open('rc.json'))
    [Scalar(a), Scalar(b), Scalar(c)]
    for (a, b, c) in json.load(open("test/poseidon_rc.json"))
]

mds = [f_inner(1) / i for i in range(3, 8)]
mds = [Scalar(1) / i for i in range(3, 8)]


def poseidon_hash(in1, in2):
    L, M, R = f_inner(in1), f_inner(in2), f_inner(0)
    L, M, R = Scalar(in1), Scalar(in2), Scalar(0)
    for i in range(64):
        L = (L + rc[i][0]) ** 5
        M += rc[i][1]
        R += rc[i][2]
        if i < 4 or i >= 60:
            M = M ** 5
            R = R ** 5
            M = M**5
            R = R**5

        (L, M, R) = (
            (L * mds[0] + M * mds[1] + R * mds[2]),
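poseidon_test consumes this module by computing the hash natively and pinning the result as the circuit's third public input; a sketch of that flow:

```python
from test.mini_poseidon import poseidon_hash

expected_value = poseidon_hash(1, 2)
# The proof is then checked against [in1, in2, hash] as public inputs,
# matching vk.verify_proof(1024, proof, [1, 2, expected_value]) in test.py.
public_inputs = [1, 2, expected_value]
```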
transcript.py (new file, 123 lines)
@@ -0,0 +1,123 @@
from utils import Scalar
from curve import G1Point
from merlin import MerlinTranscript
from py_ecc.secp256k1.secp256k1 import bytes_to_int
from dataclasses import dataclass


@dataclass
class Message1:
    # [a(x)]₁ (commitment to left wire polynomial)
    a_1: G1Point
    # [b(x)]₁ (commitment to right wire polynomial)
    b_1: G1Point
    # [c(x)]₁ (commitment to output wire polynomial)
    c_1: G1Point


@dataclass
class Message2:
    # [z(x)]₁ (commitment to permutation polynomial)
    z_1: G1Point


@dataclass
class Message3:
    # [t_lo(x)]₁ (commitment to t_lo(X), the low chunk of the quotient polynomial t(X))
    t_lo_1: G1Point
    # [t_mid(x)]₁ (commitment to t_mid(X), the middle chunk of the quotient polynomial t(X))
    t_mid_1: G1Point
    # [t_hi(x)]₁ (commitment to t_hi(X), the high chunk of the quotient polynomial t(X))
    t_hi_1: G1Point


@dataclass
class Message4:
    # Evaluation of a(X) at evaluation challenge ζ
    a_eval: Scalar
    # Evaluation of b(X) at evaluation challenge ζ
    b_eval: Scalar
    # Evaluation of c(X) at evaluation challenge ζ
    c_eval: Scalar
    # Evaluation of the first permutation polynomial S_σ1(X) at evaluation challenge ζ
    s1_eval: Scalar
    # Evaluation of the second permutation polynomial S_σ2(X) at evaluation challenge ζ
    s2_eval: Scalar
    # Evaluation of the permutation polynomial z(X) at the shifted evaluation challenge ζω
    z_shifted_eval: Scalar


@dataclass
class Message5:
    # [W_ζ(X)]₁ (commitment to the opening proof polynomial)
    W_z_1: G1Point
    # [W_ζω(X)]₁ (commitment to the opening proof polynomial)
    W_zw_1: G1Point


class Transcript(MerlinTranscript):
    def append(self, label: bytes, item: bytes) -> None:
        self.append_message(label, item)

    def append_scalar(self, label: bytes, item: Scalar):
        self.append_message(label, item.n.to_bytes(32, "big"))

    def append_point(self, label: bytes, item: G1Point):
        self.append_message(label, item[0].n.to_bytes(32, "big"))
        self.append_message(label, item[1].n.to_bytes(32, "big"))

    def get_and_append_challenge(self, label: bytes) -> Scalar:
        while True:
            challenge_bytes = self.challenge_bytes(label, 255)
            f = Scalar(bytes_to_int(challenge_bytes))
            if f != Scalar.zero():  # Enforce challenge != 0
                self.append(label, challenge_bytes)
                return f

    def round_1(self, message: Message1) -> tuple[Scalar, Scalar]:
        self.append_point(b"a_1", message.a_1)
        self.append_point(b"b_1", message.b_1)
        self.append_point(b"c_1", message.c_1)

        # The first two Fiat-Shamir challenges
        beta = self.get_and_append_challenge(b"beta")
        gamma = self.get_and_append_challenge(b"gamma")

        return beta, gamma

    def round_2(self, message: Message2) -> tuple[Scalar, Scalar]:
        self.append_point(b"z_1", message.z_1)

        alpha = self.get_and_append_challenge(b"alpha")
        # This value could be anything, it just needs to be unpredictable. Lets us
        # have evaluation forms at cosets to avoid zero evaluations, so we can
        # divide polys without the 0/0 issue
        fft_cofactor = self.get_and_append_challenge(b"fft_cofactor")

        return alpha, fft_cofactor

    def round_3(self, message: Message3) -> Scalar:
        self.append_point(b"t_lo_1", message.t_lo_1)
        self.append_point(b"t_mid_1", message.t_mid_1)
        self.append_point(b"t_hi_1", message.t_hi_1)

        zeta = self.get_and_append_challenge(b"zeta")
        return zeta

    def round_4(self, message: Message4) -> Scalar:
        self.append_scalar(b"a_eval", message.a_eval)
        self.append_scalar(b"b_eval", message.b_eval)
        self.append_scalar(b"c_eval", message.c_eval)
        self.append_scalar(b"s1_eval", message.s1_eval)
        self.append_scalar(b"s2_eval", message.s2_eval)
        self.append_scalar(b"z_shifted_eval", message.z_shifted_eval)

        v = self.get_and_append_challenge(b"v")
        return v

    def round_5(self, message: Message5) -> Scalar:
        self.append_point(b"W_z_1", message.W_z_1)
        self.append_point(b"W_zw_1", message.W_zw_1)

        u = self.get_and_append_challenge(b"u")
        return u
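Because every challenge is squeezed from the same transcript of appended messages, the prover and a verifier that replays those messages derive identical challenges. A minimal sketch (the commitment values a_1, b_1, c_1 are assumed to be in scope):

```python
from transcript import Transcript, Message1

msg1 = Message1(a_1, b_1, c_1)  # round-1 wire commitments, assumed in scope

prover_transcript = Transcript(b"plonk")
verifier_transcript = Transcript(b"plonk")

# Same domain separator + same messages => same (beta, gamma) on both sides
assert prover_transcript.round_1(msg1) == verifier_transcript.round_1(msg1)
```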
utils.py (183 lines changed)
@@ -1,127 +1,11 @@
import py_ecc.bn128 as b
from py_ecc.fields.field_elements import FQ as Field
from functools import cache
from Crypto.Hash import keccak
import py_ecc.bn128 as b
from py_ecc.fields.field_elements import FQ as Field
from multicombs import lincomb
from curve import Scalar

f = b.FQ
f2 = b.FQ2

class f_inner(Field):
    field_modulus = b.curve_order

primitive_root = 5

# Gets the first root of unity of a given group order
@cache
def get_root_of_unity(group_order):
    return f_inner(5) ** ((b.curve_order - 1) // group_order)

# Gets the full list of roots of unity of a given group order
@cache
def get_roots_of_unity(group_order):
    o = [f_inner(1), get_root_of_unity(group_order)]
    while len(o) < group_order:
        o.append(o[-1] * o[1])
    return o

def keccak256(x):
    return keccak.new(digest_bits=256).update(x).digest()

def serialize_int(x):
    return x.n.to_bytes(32, 'big')

def serialize_point(pt):
    return pt[0].n.to_bytes(32, 'big') + pt[1].n.to_bytes(32, 'big')

# Converts a hash to a f_inner element
def binhash_to_f_inner(h):
    return f_inner(int.from_bytes(h, 'big'))

def ec_mul(pt, coeff):
    if hasattr(coeff, 'n'):
        coeff = coeff.n
    return b.multiply(pt, coeff % b.curve_order)

# Elliptic curve linear combination. A truly optimized implementation
# would replace this with a fast lin-comb algo, see https://ethresear.ch/t/7238
def ec_lincomb(pairs):
    return lincomb(
        [pt for (pt, n) in pairs],
        [int(n) % b.curve_order for (pt, n) in pairs],
        b.add,
        b.Z1
    )
    # Equivalent to:
    # o = b.Z1
    # for pt, coeff in pairs:
    #     o = b.add(o, ec_mul(pt, coeff))
    # return o

# Encodes the KZG commitment to the given polynomial coeffs
def coeffs_to_point(setup, coeffs):
    if len(coeffs) > len(setup.G1_side):
        raise Exception("Not enough powers in setup")
    return ec_lincomb([(s, x) for s, x in zip(setup.G1_side, coeffs)])

# Encodes the KZG commitment that evaluates to the given values in the group
def evaluations_to_point(setup, group_order, evals):
    return coeffs_to_point(setup, f_inner_fft(evals, inv=True))

# Recover the trusted setup from a file in the format used in
# https://github.com/iden3/snarkjs#7-prepare-phase-2
SETUP_FILE_G1_STARTPOS = 80
SETUP_FILE_POWERS_POS = 60

class Setup(object):

    def __init__(self, G1_side, X2):
        self.G1_side = G1_side
        self.X2 = X2

    @classmethod
    def from_file(cls, filename):
        contents = open(filename, 'rb').read()
        # Byte 60 gives you the base-2 log of how many powers there are
        powers = 2**contents[SETUP_FILE_POWERS_POS]
        # Extract G1 points, which start at byte 80
        values = [
            int.from_bytes(contents[i: i+32], 'little')
            for i in range(SETUP_FILE_G1_STARTPOS,
                           SETUP_FILE_G1_STARTPOS + 32 * powers * 2, 32)
        ]
        assert max(values) < b.field_modulus
        # The points are encoded in a weird encoding, where all x and y points
        # are multiplied by a factor (for montgomery optimization?). We can
        # extract the factor because we know the first point is the generator.
        factor = f(values[0]) / b.G1[0]
        values = [f(x) / factor for x in values]
        G1_side = [(values[i*2], values[i*2+1]) for i in range(powers)]
        print("Extracted G1 side, X^1 point: {}".format(G1_side[1]))
        # Search for start of G2 points. We again know that the first point is
        # the generator.
        pos = SETUP_FILE_G1_STARTPOS + 32 * powers * 2
        target = (factor * b.G2[0].coeffs[0]).n
        while pos < len(contents):
            v = int.from_bytes(contents[pos: pos+32], 'little')
            if v == target:
                break
            pos += 1
        print("Detected start of G2 side at byte {}".format(pos))
        X2_encoding = contents[pos + 32 * 4: pos + 32 * 8]
        X2_values = [
            f(int.from_bytes(X2_encoding[i: i + 32], 'little')) / factor
            for i in range(0, 128, 32)
        ]
        X2 = (f2(X2_values[:2]), f2(X2_values[2:]))
        assert b.is_on_curve(X2, b.b2)
        print("Extracted G2 side, X^1 point: {}".format(X2))
        # assert b.pairing(b.G2, G1_side[1]) == b.pairing(X2, b.G1)
        # print("X^1 points checked consistent")
        return cls(G1_side, X2)

# Extracts a point from JSON in zkrepl's format
def interpret_json_point(p):
    if len(p) == 3 and isinstance(p[0], str) and p[2] == "1":
@@ -136,68 +20,3 @@ def interpret_json_point(p):
    elif len(p) == 3 and p == [["0", "0"], ["1", "0"], ["0", "0"]]:
        return b.Z2
    raise Exception("cannot interpret that point: {}".format(p))

# Fast Fourier transform, used to convert between polynomial coefficients
# and a list of evaluations at the roots of unity
# See https://vitalik.ca/general/2019/05/12/fft.html
def _fft(vals, modulus, roots_of_unity):
    if len(vals) == 1:
        return vals
    L = _fft(vals[::2], modulus, roots_of_unity[::2])
    R = _fft(vals[1::2], modulus, roots_of_unity[::2])
    o = [0 for i in vals]
    for i, (x, y) in enumerate(zip(L, R)):
        y_times_root = y*roots_of_unity[i]
        o[i] = (x+y_times_root) % modulus
        o[i+len(L)] = (x-y_times_root) % modulus
    return o

# Convenience method to do FFTs specifically over the subgroup over which
# all of the proofs are operating
def f_inner_fft(vals, inv=False):
    roots = [x.n for x in get_roots_of_unity(len(vals))]
    o, nvals = b.curve_order, [x.n for x in vals]
    if inv:
        # Inverse FFT
        invlen = f_inner(1) / len(vals)
        reversed_roots = [roots[0]] + roots[1:][::-1]
        return [f_inner(x) * invlen for x in _fft(nvals, o, reversed_roots)]
    else:
        # Regular FFT
        return [f_inner(x) for x in _fft(nvals, o, roots)]

# Converts a list of evaluations at [1, w, w**2... w**(n-1)] to
# a list of evaluations at
# [offset, offset * q, offset * q**2 ... offset * q**(4n-1)] where q = w**(1/4)
# This lets us work with higher-degree polynomials, and the offset lets us
# avoid the 0/0 problem when computing a division (as long as the offset is
# chosen randomly)
def fft_expand_with_offset(vals, offset):
    group_order = len(vals)
    x_powers = f_inner_fft(vals, inv=True)
    x_powers = [
        (offset**i * x) for i, x in enumerate(x_powers)
    ] + [f_inner(0)] * (group_order * 3)
    return f_inner_fft(x_powers)

# Convert from offset form into coefficients
# Note that we can't make a full inverse function of fft_expand_with_offset
# because the output of this might be a deg >= n polynomial, which cannot
# be expressed via evaluations at n roots of unity
def offset_evals_to_coeffs(evals, offset):
    shifted_coeffs = f_inner_fft(evals, inv=True)
    inv_offset = (1 / offset)
    return [v * inv_offset ** i for (i, v) in enumerate(shifted_coeffs)]
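In symbols, the coset trick used by fft_expand_with_offset and offset_evals_to_coeffs above maps the n evaluations of P on the subgroup ⟨ω⟩ to its 4n evaluations on a shifted coset:

$$\{P(h\,q^i)\}_{i=0}^{4n-1}, \qquad q^4 = \omega,\quad h = \text{offset}.$$

For a randomly chosen offset h, the points h·qⁱ avoid the subgroup where the vanishing polynomial Z_H(x) = xⁿ − 1 is zero, so quotients like t(X) = (gate and permutation constraints)/Z_H(X) can be computed pointwise in this form without hitting 0/0.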
# Given a polynomial expressed as a list of evaluations at roots of unity,
# evaluate it at x directly, without using an FFT to convert to coeffs first
def barycentric_eval_at_point(values, x):
    order = len(values)
    roots_of_unity = get_roots_of_unity(order)
    return (
        (f_inner(x)**order - 1) / order *
        sum([
            value * root / (x - root)
            for value, root in zip(values, roots_of_unity)
        ])
    )
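The formula implemented by barycentric_eval_at_point follows from Lagrange interpolation over the roots of unity, where each base polynomial has the closed form Lᵢ(x) = ωⁱ(xⁿ − 1)/(n(x − ωⁱ)):

$$P(x) \;=\; \frac{x^n - 1}{n}\,\sum_{i=0}^{n-1}\frac{v_i\,\omega^i}{x - \omega^i}.$$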
verifier.py (420 lines changed)
@@ -1,185 +1,135 @@
import py_ecc.bn128 as b
from utils import *
from dataclasses import dataclass
from curve import *
from transcript import Transcript
from poly import Polynomial, Basis

def verify_proof(setup, group_order, vk, proof, public=[], optimized=True):
    (
        A_pt, B_pt, C_pt, Z_pt, T1_pt, T2_pt, T3_pt, W_z_pt, W_zw_pt,
        A_ev, B_ev, C_ev, S1_ev, S2_ev, Z_shifted_ev
    ) = proof

    Ql_pt, Qr_pt, Qm_pt, Qo_pt, Qc_pt, S1_pt, S2_pt, S3_pt, X2 = (
        vk["Ql"], vk["Qr"], vk["Qm"], vk["Qo"], vk["Qc"],
        vk["S1"], vk["S2"], vk["S3"], vk["X_2"]
    )

@dataclass
class VerificationKey:
    """Verification key"""

    # Compute challenges (should be same as those computed by prover)
    group_order: int
    # [q_M(x)]₁ (commitment to multiplication selector polynomial)
    Qm: G1Point
    # [q_L(x)]₁ (commitment to left selector polynomial)
    Ql: G1Point
    # [q_R(x)]₁ (commitment to right selector polynomial)
    Qr: G1Point
    # [q_O(x)]₁ (commitment to output selector polynomial)
    Qo: G1Point
    # [q_C(x)]₁ (commitment to constants selector polynomial)
    Qc: G1Point
    # [S_σ1(x)]₁ (commitment to the first permutation polynomial S_σ1(X))
    S1: G1Point
    # [S_σ2(x)]₁ (commitment to the second permutation polynomial S_σ2(X))
    S2: G1Point
    # [S_σ3(x)]₁ (commitment to the third permutation polynomial S_σ3(X))
    S3: G1Point
    # [x]₂ = xH, where H is a generator of G_2
    X_2: G2Point
    # nth root of unity, where n is the program's group order.
    w: Scalar

    buf = serialize_point(A_pt) + serialize_point(B_pt) + serialize_point(C_pt)

    # More optimized version that tries hard to minimize pairings and
    # elliptic curve multiplications, but at the cost of being harder
    # to understand and mixing together a lot of the computations to
    # efficiently batch them
    def verify_proof(self, group_order: int, pf, public=[]) -> bool:
        # 4. Compute challenges
        beta, gamma, alpha, zeta, v, u = self.compute_challenges(pf)
        proof = pf.flatten()

    beta = binhash_to_f_inner(keccak256(buf))
    gamma = binhash_to_f_inner(keccak256(keccak256(buf)))

        # 5. Compute zero polynomial evaluation Z_H(ζ) = ζ^n - 1
        root_of_unity = Scalar.root_of_unity(group_order)
        ZH_ev = zeta**group_order - 1

    alpha = binhash_to_f_inner(keccak256(serialize_point(Z_pt)))

        # 6. Compute Lagrange polynomial evaluation L_0(ζ)
        L0_ev = ZH_ev / (group_order * (zeta - 1))

    buf2 = serialize_point(T1_pt)+serialize_point(T2_pt)+serialize_point(T3_pt)
    zed = binhash_to_f_inner(keccak256(buf))

    buf3 = b''.join([
        serialize_int(x) for x in
        (A_ev, B_ev, C_ev, S1_ev, S2_ev, Z_shifted_ev)
    ])
    v = binhash_to_f_inner(keccak256(buf3))

    # Does not need to be standardized, only needs to be unpredictable
    u = binhash_to_f_inner(keccak256(buf + buf2 + buf3))

    ZH_ev = zed ** group_order - 1

    root_of_unity = get_root_of_unity(group_order)

    L1_ev = (
        (zed ** group_order - 1) /
        (group_order * (zed - 1))
    )
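Both L0_ev and the older L1_ev compute the same quantity: the Lagrange base at ω⁰ = 1, evaluated at the challenge point. Specializing the barycentric formula above to the value vector (1, 0, ..., 0):

$$L_0(\zeta) \;=\; \frac{\zeta^n - 1}{n\,(\zeta - 1)}.$$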
    PI_ev = barycentric_eval_at_point(
        [f_inner(-x) for x in public] +
        [f_inner(0) for _ in range(group_order - len(public))],
        zed
    )

    if not optimized:
        # Basic, easier-to-understand version of what's going on

        # Recover the commitment to the linearization polynomial R,
        # exactly the same as what was created by the prover
        R_pt = ec_lincomb([
            (Qm_pt, A_ev * B_ev),
            (Ql_pt, A_ev),
            (Qr_pt, B_ev),
            (Qo_pt, C_ev),
            (b.G1, PI_ev),
            (Qc_pt, 1),
            (Z_pt, (
                (A_ev + beta * zed + gamma) *
                (B_ev + beta * 2 * zed + gamma) *
                (C_ev + beta * 3 * zed + gamma) *
                alpha
            )),
            (S3_pt, (
                -(A_ev + beta * S1_ev + gamma) *
                (B_ev + beta * S2_ev + gamma) *
                beta *
                alpha * Z_shifted_ev
            )),
            (b.G1, (
                -(A_ev + beta * S1_ev + gamma) *
                (B_ev + beta * S2_ev + gamma) *
                (C_ev + gamma) *
                alpha * Z_shifted_ev
            )),
            (Z_pt, L1_ev * alpha ** 2),
            (b.G1, -L1_ev * alpha ** 2),
            (T1_pt, -ZH_ev),
            (T2_pt, -ZH_ev * zed**group_order),
            (T3_pt, -ZH_ev * zed**(group_order*2)),
        ])

        print('verifier R_pt', R_pt)

        # Verify that R(z) = 0 and the prover-provided evaluations
        # A(z), B(z), C(z), S1(z), S2(z) are all correct
        assert b.pairing(
            b.G2,
            ec_lincomb([
                (R_pt, 1),
                (A_pt, v),
                (b.G1, -v * A_ev),
                (B_pt, v**2),
                (b.G1, -v**2 * B_ev),
                (C_pt, v**3),
                (b.G1, -v**3 * C_ev),
                (S1_pt, v**4),
                (b.G1, -v**4 * S1_ev),
                (S2_pt, v**5),
                (b.G1, -v**5 * S2_ev),
            ])
        ) == b.pairing(
            b.add(X2, ec_mul(b.G2, -zed)),
            W_z_pt
        )

        # 7. Compute public input polynomial evaluation PI(ζ).
        PI = Polynomial(
            [Scalar(-x) for x in public]
            + [Scalar(0) for _ in range(group_order - len(public))],
            Basis.LAGRANGE,
        )

        print("done check 1")

        # Verify that the provided value of Z(zed*w) is correct
        assert b.pairing(
            b.G2,
            ec_lincomb([
                (Z_pt, 1),
                (b.G1, -Z_shifted_ev)
            ])
        ) == b.pairing(
            b.add(X2, ec_mul(b.G2, -zed * root_of_unity)),
            W_zw_pt
        )
        print("done check 2")
        return True

    else:
        # More optimized version that tries hard to minimize pairings and
        # elliptic curve multiplications, but at the cost of being harder
        # to understand and mixing together a lot of the computations to
        # efficiently batch them

        PI_ev = PI.barycentric_eval(zeta)

        # Compute the constant term of R. This is not literally the degree-0
        # term of the R polynomial; rather, it's the portion of R that can
        # be computed directly, without resorting to elliptic curve commitments
        r0 = (
            PI_ev - L1_ev * alpha ** 2 - (
                alpha *
                (A_ev + beta * S1_ev + gamma) *
                (B_ev + beta * S2_ev + gamma) *
                (C_ev + gamma) *
                Z_shifted_ev
            PI_ev
            - L0_ev * alpha**2
            - (
                alpha
                * (proof["a_eval"] + beta * proof["s1_eval"] + gamma)
                * (proof["b_eval"] + beta * proof["s2_eval"] + gamma)
                * (proof["c_eval"] + gamma)
                * proof["z_shifted_eval"]
            )
        )

        # D = (R - r0) + u * Z
        D_pt = ec_lincomb([
            (Qm_pt, A_ev * B_ev),
            (Ql_pt, A_ev),
            (Qr_pt, B_ev),
            (Qo_pt, C_ev),
            (Qc_pt, 1),
            (Z_pt, (
                (A_ev + beta * zed + gamma) *
                (B_ev + beta * 2 * zed + gamma) *
                (C_ev + beta * 3 * zed + gamma) * alpha +
                L1_ev * alpha ** 2 +
                u
            )),
            (S3_pt, (
                -(A_ev + beta * S1_ev + gamma) *
                (B_ev + beta * S2_ev + gamma) *
                alpha * beta * Z_shifted_ev
            )),
            (T1_pt, -ZH_ev),
            (T2_pt, -ZH_ev * zed**group_order),
            (T3_pt, -ZH_ev * zed**(group_order*2)),
        ])

        F_pt = ec_lincomb([
            (D_pt, 1),
            (A_pt, v),
            (B_pt, v**2),
            (C_pt, v**3),
            (S1_pt, v**4),
            (S2_pt, v**5),
        ])

        E_pt = ec_mul(b.G1, (
            -r0 + v * A_ev + v**2 * B_ev + v**3 * C_ev +
            v**4 * S1_ev + v**5 * S2_ev + u * Z_shifted_ev
        ))

        # D = (R - r0) + u * Z
        D_pt = ec_lincomb(
            [
                (self.Qm, proof["a_eval"] * proof["b_eval"]),
                (self.Ql, proof["a_eval"]),
                (self.Qr, proof["b_eval"]),
                (self.Qo, proof["c_eval"]),
                (self.Qc, 1),
                (
                    proof["z_1"],
                    (
                        (proof["a_eval"] + beta * zeta + gamma)
                        * (proof["b_eval"] + beta * 2 * zeta + gamma)
                        * (proof["c_eval"] + beta * 3 * zeta + gamma)
                        * alpha
                        + L0_ev * alpha**2
                        + u
                    ),
                ),
                (
                    self.S3,
                    (
                        -(proof["a_eval"] + beta * proof["s1_eval"] + gamma)
                        * (proof["b_eval"] + beta * proof["s2_eval"] + gamma)
                        * alpha
                        * beta
                        * proof["z_shifted_eval"]
                    ),
                ),
                (proof["t_lo_1"], -ZH_ev),
                (proof["t_mid_1"], -ZH_ev * zeta**group_order),
                (proof["t_hi_1"], -ZH_ev * zeta ** (group_order * 2)),
            ]
        )

        F_pt = ec_lincomb(
            [
                (D_pt, 1),
                (proof["a_1"], v),
                (proof["b_1"], v**2),
                (proof["c_1"], v**3),
                (self.S1, v**4),
                (self.S2, v**5),
            ]
        )

        E_pt = ec_mul(
            b.G1,
            (
                -r0
                + v * proof["a_eval"]
                + v**2 * proof["b_eval"]
                + v**3 * proof["c_eval"]
                + v**4 * proof["s1_eval"]
                + v**5 * proof["s2_eval"]
                + u * proof["z_shifted_eval"]
            ),
        )

        # What's going on here is a clever re-arrangement of terms to check
        # the same equations that are being checked in the basic version,
        # but in a way that minimizes the number of EC muls and even
@@ -195,15 +145,133 @@ def verify_proof(setup, group_order, vk, proof, public=[], optimized=True):
        #
        # so at this point we can take a random linear combination of the two
        # checks, and verify it with only one pairing.
        assert b.pairing(X2, ec_lincomb([
            (W_z_pt, 1),
            (W_zw_pt, u)
        ])) == b.pairing(b.G2, ec_lincomb([
            (W_z_pt, zed),
            (W_zw_pt, u * zed * root_of_unity),
            (F_pt, 1),
            (E_pt, -1)
        ]))

        assert b.pairing(
            self.X_2, ec_lincomb([(proof["W_z_1"], 1), (proof["W_zw_1"], u)])
        ) == b.pairing(
            b.G2,
            ec_lincomb(
                [
                    (proof["W_z_1"], zeta),
                    (proof["W_zw_1"], u * zeta * root_of_unity),
                    (F_pt, 1),
                    (E_pt, -1),
                ]
            ),
        )

        print("done combined check")
        return True
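In symbols, the combined assertion above verifies both openings with a single pairing equation, where W_ζ and W_ζω are the two opening-proof commitments and F, E are the batched commitment and evaluation terms built above:

$$e\!\left([W_\zeta]_1 + u\,[W_{\zeta\omega}]_1,\; [x]_2\right) \;=\; e\!\left(\zeta\,[W_\zeta]_1 + u\,\zeta\omega\,[W_{\zeta\omega}]_1 + [F]_1 - [E]_1,\; [1]_2\right).$$

Since u is an unpredictable challenge, the random linear combination of the two checks passes only if each check passes individually (except with negligible probability).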
    # Basic, easier-to-understand version of what's going on
    def verify_proof_unoptimized(self, group_order: int, pf, public=[]) -> bool:
        # 4. Compute challenges
        beta, gamma, alpha, zeta, v, _ = self.compute_challenges(pf)
        proof = pf.flatten()

        # 5. Compute zero polynomial evaluation Z_H(ζ) = ζ^n - 1
        root_of_unity = Scalar.root_of_unity(group_order)
        ZH_ev = zeta**group_order - 1

        # 6. Compute Lagrange polynomial evaluation L_0(ζ)
        L0_ev = ZH_ev / (group_order * (zeta - 1))

        # 7. Compute public input polynomial evaluation PI(ζ).
        PI = Polynomial(
            [Scalar(-x) for x in public]
            + [Scalar(0) for _ in range(group_order - len(public))],
            Basis.LAGRANGE,
        )
        PI_ev = PI.barycentric_eval(zeta)

        # Recover the commitment to the linearization polynomial R,
        # exactly the same as what was created by the prover
        R_pt = ec_lincomb(
            [
                (self.Qm, proof["a_eval"] * proof["b_eval"]),
                (self.Ql, proof["a_eval"]),
                (self.Qr, proof["b_eval"]),
                (self.Qo, proof["c_eval"]),
                (b.G1, PI_ev),
                (self.Qc, 1),
                (
                    proof["z_1"],
                    (
                        (proof["a_eval"] + beta * zeta + gamma)
                        * (proof["b_eval"] + beta * 2 * zeta + gamma)
                        * (proof["c_eval"] + beta * 3 * zeta + gamma)
                        * alpha
                    ),
                ),
                (
                    self.S3,
                    (
                        -(proof["a_eval"] + beta * proof["s1_eval"] + gamma)
                        * (proof["b_eval"] + beta * proof["s2_eval"] + gamma)
                        * beta
                        * alpha
                        * proof["z_shifted_eval"]
                    ),
                ),
                (
                    b.G1,
                    (
                        -(proof["a_eval"] + beta * proof["s1_eval"] + gamma)
                        * (proof["b_eval"] + beta * proof["s2_eval"] + gamma)
                        * (proof["c_eval"] + gamma)
                        * alpha
                        * proof["z_shifted_eval"]
                    ),
                ),
                (proof["z_1"], L0_ev * alpha**2),
                (b.G1, -L0_ev * alpha**2),
                (proof["t_lo_1"], -ZH_ev),
                (proof["t_mid_1"], -ZH_ev * zeta**group_order),
                (proof["t_hi_1"], -ZH_ev * zeta ** (group_order * 2)),
            ]
        )

        print("verifier R_pt", R_pt)

        # Verify that R(z) = 0 and the prover-provided evaluations
        # A(z), B(z), C(z), S1(z), S2(z) are all correct
        assert b.pairing(
            b.G2,
            ec_lincomb(
                [
                    (R_pt, 1),
                    (proof["a_1"], v),
                    (b.G1, -v * proof["a_eval"]),
                    (proof["b_1"], v**2),
                    (b.G1, -(v**2) * proof["b_eval"]),
                    (proof["c_1"], v**3),
                    (b.G1, -(v**3) * proof["c_eval"]),
                    (self.S1, v**4),
                    (b.G1, -(v**4) * proof["s1_eval"]),
                    (self.S2, v**5),
                    (b.G1, -(v**5) * proof["s2_eval"]),
                ]
            ),
        ) == b.pairing(b.add(self.X_2, ec_mul(b.G2, -zeta)), proof["W_z_1"])
        print("done check 1")

        # Verify that the provided value of Z(zeta*w) is correct
        assert b.pairing(
            b.G2, ec_lincomb([(proof["z_1"], 1), (b.G1, -proof["z_shifted_eval"])])
        ) == b.pairing(
            b.add(self.X_2, ec_mul(b.G2, -zeta * root_of_unity)), proof["W_zw_1"]
        )
        print("done check 2")
        return True
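Each of the two asserts above is an instance of the standard KZG opening check: for a commitment [P]₁, a claimed evaluation y at point z, and a witness W = (P − y)/(X − z), the verifier tests

$$e\!\left([P]_1 - y\,[1]_1,\; [1]_2\right) \;=\; e\!\left([W]_1,\; [x]_2 - z\,[1]_2\right),$$

which holds exactly when P(x) − y = W(x)·(x − z) at the setup's secret point. The first assert batches six openings at ζ (the linearization R, whose claimed value is 0, plus a, b, c, S1, S2) using powers of v; the second opens z(X) at the shifted point ζω.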
    # Compute challenges (should be same as those computed by prover)
    def compute_challenges(
        self, proof
    ) -> tuple[Scalar, Scalar, Scalar, Scalar, Scalar, Scalar]:
        transcript = Transcript(b"plonk")
        beta, gamma = transcript.round_1(proof.msg_1)
        alpha, _fft_cofactor = transcript.round_2(proof.msg_2)
        zeta = transcript.round_3(proof.msg_3)
        v = transcript.round_4(proof.msg_4)
        u = transcript.round_5(proof.msg_5)

        return beta, gamma, alpha, zeta, v, u