More conversions + added TODO notes

This commit is contained in:
Hendrik Eeckhaut
2025-05-14 09:37:59 +02:00
parent 8232d0da96
commit feb2ff2756
18 changed files with 740 additions and 9 deletions

18
research/README.md Normal file

@@ -0,0 +1,18 @@
# TLSNotary Research
This folder contains write-ups from research.
## Typst
The research write-ups are written in the [typst](https://github.com/typst/typst) markup language.
You can install `typst` with:
```sh
cargo install --git https://github.com/typst/typst typst-cli
```
or `brew install typst` on macOS.
To compile the `typ`-files into `pdf`-files, run:
```sh
typst compile file.typ
```

BIN
research/a2m_and_m2a.pdf Normal file

Binary file not shown.

91
research/a2m_and_m2a.typ Normal file

@@ -0,0 +1,91 @@
#set page(paper: "a4")
#set par(justify: true)
= M2A
OT sender and receiver want to get an *additive* sharing $(y, overline(b))$ from a
*multiplicative* sharing $(x, a)$. So the receiver starts with $x$ and ends up with
$y$, and the sender starts with $a$ and ends up with $overline(b)$.
They compute $y=a x + b arrow.l.r.double y - b = a x arrow.l.r.double y +
overline(b) = a x$ in $n$ OTs, where $n$ is given by the bitsize of the field
elements $y, x, a, b$. The receiver's input for the $i$-th OT is the bit $x_i$ and
his output is $y_i$. The sender's inputs are linear combinations of $a$ and
$b_i$, and he outputs $overline(b)$.
== Compute $y = a x + b$
#align(center)[
#box[$P_R$ #v(4em)]
#box[$x_i$ #line(length: 2cm) #v(3em) $y_i$ #line(length: 2cm) #v(1em)]
#box[#square(size: 8em)[#v(3em) $"OT"_i$]]
#box[$k_i = (b_i, a+b_i)$ #line(length: 2.5cm) #v(3em) $b_i$ #line(length: 2.5cm) #v(1em)]
#box[$P_S$ #v(4em)]
]
=== *The OT sender $P_S$:*
+ Sample $n$ random field elements $b_i arrow.l \$$ so that $b = sum_(i = 0)^n 2^i b_i$
+ In each $i$-th OT: Send $k_i = (b_i, a + b_i)$ to $P_R$
+ Compute and output $overline(b) = - b = - sum_(i = 0)^n 2^i b_i$
=== *The OT receiver $P_R$:*
+ Bit-decomposes $x = sum_(i = 0)^n 2^i x_i$
+ In each $i$-th OT: Depending on the bit $x_i$ he receives $y_i =
k_i^(x_i)$, which is
- $k_i^0 = b_i$ if $x_i = 0$
- $k_i^1 = a + b_i$ if $x_i = 1$
+ Compute and output $y = sum_(i = 0)^n 2^i y_i = a x + b$
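
To make the construction concrete, the following is a minimal Python sketch of this M2A conversion over a toy prime field (the write-up leaves the field abstract), with the $n$ 1-out-of-2 OTs simulated locally; all names are illustrative.

```python
import secrets

P = 2**127 - 1        # toy prime field standing in for the abstract field
N = P.bit_length()    # n = bitsize of field elements = number of OTs

def m2a(x, a):
    """Turn the multiplicative sharing x * a into an additive sharing y + b_bar."""
    # OT sender P_S: holds a, samples b_i with b = sum_i 2^i * b_i.
    b_i = [secrets.randbelow(P) for _ in range(N)]
    b = sum(bi << i for i, bi in enumerate(b_i)) % P
    # In OT_i the sender offers the pair k_i = (b_i, a + b_i).
    k = [(bi, (a + bi) % P) for bi in b_i]

    # OT receiver P_R: holds x and chooses with the bits x_i (simulated OTs).
    y_i = [k[i][(x >> i) & 1] for i in range(N)]
    y = sum(yi << i for i, yi in enumerate(y_i)) % P

    b_bar = (-b) % P                      # sender's output
    return y, b_bar                       # y + b_bar = a * x

x, a = secrets.randbelow(P), secrets.randbelow(P)
y, b_bar = m2a(x, a)
assert (y + b_bar) % P == a * x % P
```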
== Correctness Check
+ Repeat the whole M2A protocol for the same $x$ but with random $a_2, b_2$. We
now label $a_1 := a$ and $b_1 := b$, which are the original values from the
previously executed M2A protocol
+ $P_R$ sends $2$ random field elements $chi_1, chi_2$ to $P_S$
+ $P_S$ computes $a^* = chi_1 a_1 + chi_2 a_2$ and $b^* = chi_1 b_1 + chi_2 b_2$
and sends them to $P_R$.
+ $P_R$ checks that $ chi_1 y_1 + chi_2 y_2 = a^* x + b^*$
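
A small sketch of the check over the same toy prime field, with the two M2A outputs $y_1, y_2$ written directly via their defining equations instead of re-running the OTs:

```python
import secrets

P = 2**127 - 1                                         # toy prime field stand-in

x = secrets.randbelow(P)
a1, b1 = secrets.randbelow(P), secrets.randbelow(P)    # first M2A run
a2, b2 = secrets.randbelow(P), secrets.randbelow(P)    # repeated run, same x
y1, y2 = (a1 * x + b1) % P, (a2 * x + b2) % P          # P_R's two outputs

# P_R sends the random challenges chi_1, chi_2; P_S answers with a*, b*.
chi1, chi2 = secrets.randbelow(P), secrets.randbelow(P)
a_star = (chi1 * a1 + chi2 * a2) % P
b_star = (chi1 * b1 + chi2 * b2) % P

# P_R's check: chi_1 y_1 + chi_2 y_2 = a* x + b*.
assert (chi1 * y1 + chi2 * y2) % P == (a_star * x + b_star) % P
```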
= A2M
OT sender and receiver want to get a *multiplicative* sharing $(y, overline(b))$
from an *additive* sharing $(x, a)$. So the receiver starts with $x$ and ends up
with $y$, and the sender starts with $a$ and ends up with $overline(b)$.
They compute $y=(a + x) b arrow.l.r.double y b^(-1) = a + x arrow.l.r.double y
overline(b) = a + x$ in $n$ OTs, where $n$ is given by the bitsize of the field
elements $y, x, a, b$. The receiver's input for the $i$-th OT is the bit $x_i$ and
his output is $y_i$. The sender's inputs are linear combinations of $a_i$ and
$b$ plus a mask $m_i$, and he outputs $overline(b)$.
== Compute $y = (a + x) b$
#align(center)[
#box[$P_R$ #v(4em)]
#box[$x_i$ #line(length: 2cm) #v(3em) $y_i$ #line(length: 2cm) #v(1em)]
#box[#square(size: 8em)[#v(3em) *OT*]]
#box[$k_i = (a_i b + m_i, (a_i + 1) b + m_i)$ #line(length: 5cm) #v(3em) $b$ #line(length: 5cm) #v(1em)]
#box[$P_S$ #v(4em)]
]
=== *The OT sender $P_S$:*
+ Sample a random field element $b arrow.l \$$
+ Sample $n$ random field elements $m_i arrow.l \$$, with $sum_(i = 0)^n 2^i m_i = 0$
+ Bit-decomposes $a = sum_(i = 0)^n 2^i a_i$
+ In each $i$-th OT: Send $k_i = (a_i b + m_i, (a_i + 1) b + m_i)$ to $P_R$
+ Compute and output $overline(b) = b^(-1)$
=== *The OT receiver $P_R$:*
+ Bit-decomposes $x = sum_(i = 0)^n 2^i x_i$
+ In each $i$-th OT: Depending on the bit $x_i$ he receives $y_i =
k_i^(x_i)$, which is
- $k_i^0 = a_i b + m_i$ if $x_i = 0$
- $k_i^1 = (a_i + 1) b + m_i$ if $x_i = 1$
+ Compute and output $y = sum_(i = 0)^n 2^i y_i = (a + x) b$
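
As for M2A, here is a minimal Python sketch of the A2M conversion over a toy prime field, with the OTs simulated locally and the masks $m_i$ forced to sum to zero; all names are illustrative.

```python
import secrets

P = 2**127 - 1        # toy prime field standing in for the abstract field
N = P.bit_length()    # n = bitsize of field elements = number of OTs

def a2m(x, a):
    """Turn the additive sharing a + x into a multiplicative sharing y * b_bar."""
    # OT sender P_S: holds a, samples b and masks m_i with sum_i 2^i * m_i = 0.
    b = 1 + secrets.randbelow(P - 1)
    m = [secrets.randbelow(P) for _ in range(N)]
    m[0] = (m[0] - sum(mi << i for i, mi in enumerate(m))) % P   # force masked sum to 0
    a_i = [(a >> i) & 1 for i in range(N)]
    # In OT_i the sender offers k_i = (a_i*b + m_i, (a_i + 1)*b + m_i).
    k = [((ai * b + mi) % P, ((ai + 1) * b + mi) % P) for ai, mi in zip(a_i, m)]

    # OT receiver P_R: holds x and chooses with the bits x_i (simulated OTs).
    y_i = [k[i][(x >> i) & 1] for i in range(N)]
    y = sum(yi << i for i, yi in enumerate(y_i)) % P

    b_bar = pow(b, -1, P)                 # sender outputs b^{-1}
    return y, b_bar                       # y * b_bar = a + x

x, a = secrets.randbelow(P), secrets.randbelow(P)
y, b_bar = a2m(x, a)
assert y * b_bar % P == (a + x) % P
```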
== Correctness Check
+ Repeat the whole A2M protocol for the same $x$ but with random $a_2, b_2$. We
now label $a_1 := a$ and $b_1 := b$, which are the original values from the
previously executed A2M protocol
+ $P_R$ sends $2$ random field elements $chi_1, chi_2$ to $P_S$
+ $P_S$ computes $z^* = chi_1 a_1 b_1 + chi_2 a_2 b_2$ and $b^* = chi_1 b_1 + chi_2 b_2$
and sends them to $P_R$.
+ $P_R$ checks that $ chi_1 y_1 + chi_2 y_2 = b^* x + z^*$
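
A small sketch of this check, analogous to the M2A check and again over a toy prime field, with the two A2M outputs written via their defining equations:

```python
import secrets

P = 2**127 - 1   # toy prime field stand-in

x = secrets.randbelow(P)
a1, b1 = secrets.randbelow(P), 1 + secrets.randbelow(P - 1)   # first A2M run
a2, b2 = secrets.randbelow(P), 1 + secrets.randbelow(P - 1)   # repeated run, same x
y1, y2 = (a1 + x) * b1 % P, (a2 + x) * b2 % P                 # P_R's two outputs

# P_R sends the random challenges; P_S answers with z* and b*.
chi1, chi2 = secrets.randbelow(P), secrets.randbelow(P)
z_star = (chi1 * a1 * b1 + chi2 * a2 * b2) % P
b_star = (chi1 * b1 + chi2 * b2) % P

# P_R's check: chi_1 y_1 + chi_2 y_2 = b* x + z*.
assert (chi1 * y1 + chi2 * y2) % P == (b_star * x + z_star) % P
```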

203
research/ghash.typ Normal file

@@ -0,0 +1,203 @@
#set page(paper: "a4")
#set par(justify: true)
#set text(size: 12pt)
#show link: underline
= GHASH
We want to compute GHASH MAC in 2PC which is of the form $sum_(k=1)^l H^k dot
b_k$, where $H^k, b_k in "GF"(2^128)$. $H$ is split into additive shares for
parties $P_A$ and $P_B$, such that $P_A$ knows $H_1$ and $P_B$ knows $H_2$ and
$H = H_1 + H_2$. We now need to compute additive shares of powers of $H$.
== Functionality $cal(F)_(H^l)$
On input $H_1$ from $P_A$ and $H_2$ from $P_B$, the functionality returns
all the $H_(1,k)$ to $P_A$ and $H_(2,k)$ to $P_B$ for $k = 2...l$, such that
$H_(1,k) + H_(2,k) = (H_1 + H_2)^k$.
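
As a reference point, here is a minimal Python sketch of a trusted-dealer version of $cal(F)_(H^l)$, using a plain polynomial-basis representation of $"GF"(2^128)$ with the GCM reduction polynomial (the bit-reflected convention of the actual GCM specification is ignored for readability); all helper names are illustrative.

```python
import secrets

# Polynomial-basis GF(2^128) with the GCM reduction polynomial.
MOD = (1 << 128) | 0x87                    # x^128 + x^7 + x^2 + x + 1

def gf_mul(a, b):
    """Carry-less multiplication followed by reduction modulo MOD."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    for i in range(res.bit_length() - 1, 127, -1):
        if (res >> i) & 1:
            res ^= MOD << (i - 128)
    return res

def f_h_l(h1, h2, l):
    """Trusted-dealer version of F_{H^l}: additive shares of H^2 .. H^l."""
    h, power = h1 ^ h2, h1 ^ h2            # addition in GF(2^128) is XOR
    shares_a, shares_b = {}, {}
    for k in range(2, l + 1):
        power = gf_mul(power, h)           # power = H^k
        s = secrets.randbits(128)
        shares_a[k], shares_b[k] = s, power ^ s
    return shares_a, shares_b

h1, h2 = secrets.randbits(128), secrets.randbits(128)
sa, sb = f_h_l(h1, h2, l=5)
h = h1 ^ h2
assert sa[3] ^ sb[3] == gf_mul(gf_mul(h, h), h)   # shares recombine to H^3
```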
== Protocols
The following protocols all implement the functionality $cal(F)_(H^l)$. All
protocols guarantee privacy for $H_1$ and $H_2$, i.e. there is no leakage to the
other party. All protocols are implementations with unpredictable errors, which
means correctness is *not* guaranteed in the presence of a malicious adversary
deviating from the protocol. This is tolerable in the context of TLSNotary.
We will assume that $l$, which determines the highest power $H^l$ both parties want
to compute, is a compile-time constant, so that it does not complicate the protocol
and performance analysis.
When computing bandwidths of protocols, we assume that both parties have access
to a sufficient number of pre-distributed random OTs. In order to simplify the
computation of rounds, we assume that there is a sufficient number of
pre-distributed ROLEs available. This means we ignore rounds for setting up
ROLEs, because this can be batched and is needed for every protocol discussed
here.
The following table gives an overview of the different protocols:
#align(center)[
#table(
columns: (auto, auto, auto, auto),
inset: 10pt,
align: horizon + center,
[*Protocol*], [*0 Issue*], [*Rounds*], [*Bandwidth*],
$Pi_"A2M"$,
"yes",
[
Off: 0\
On: 1.5\
],
[
Off: 0\
On: 2.1 MB\
],
$Pi_"ROLE + OLE"$,
"yes",
[
Off: 0.5\
On: 0.5\
],
[
Off: 2.1 MB\
On: 128 bit\
],
$Pi_"ROLE + OLE + Zero"$,
"no",
[
Off: 0.5\
On: 0.5\
],
[
Off: 6.3 MB\
On: 256 bit\
],
$Pi_"Beaver"$,
"no",
[
Off: 2\
On: 0.5\
],
[
Off: 4.2 MB\
On: 128 bit\
],
)
]
=== A2M Protocol
This protocol converts the additive shares $H_"1/2"$ into multiplicative shares
$H_"1/2"^*$. Then both parties can locally computer higher powers
$H_(1"/"2)^k^*$. Afterwards they convert these higher powers back into additive
shares $H_("1/2", k)$.
==== Protocol $Pi_"A2M"^l$
+ $P_A$ samples a random field element $r arrow.l "GF"(2^128)$.
+ Both parties call $cal(F)_"OLE" (r, H_2) -> (x, y)$. So $P_A$ knows $(r,
x)$ and $P_B$ knows $(H_2, y)$ and it holds that $r dot H_2 = x + y$.
+ $P_A$ defines $m = r dot H_1 + x$ and sends $m$ to $P_B$.
+ $P_A$ defines $H_1^* = r^(-1)$ and $P_B$ defines $H_2^* = m + y$.
+ Both parties locally compute $(H_"1/2"^*)^k$ for $k = 2...l$.
+ Both parties call $cal(F)_"OLE" ((H_1^*)^k, (H_2^*)^k) arrow.r (H_(1,k),
H_(2,k))$ for $k = 2...l$.
+ $P_A$ outputs $H_(1,k)$ and $P_B$ outputs $H_(2,k)$.
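
Below is a minimal sketch of $Pi_"A2M"^l$ with $cal(F)_"OLE"$ simulated by a local helper, over a toy prime field standing in for $"GF"(2^128)$. The simulated OLE is written as $x + y = a dot b$, which matches the write-up's $y = a dot b + x$ in characteristic 2; all names are illustrative.

```python
import secrets

# Toy prime field standing in for GF(2^128).
P = 2**127 - 1

def f_ole(a, b):
    """Simulated F_OLE, with the two outputs summing to the product a*b."""
    x = secrets.randbelow(P)
    return x, (a * b - x) % P

def pi_a2m_l(h1, h2, l):
    # Steps 1-2: P_A samples r; F_OLE(r, H_2) gives r*H_2 = x + y.
    r = 1 + secrets.randbelow(P - 1)
    x, y = f_ole(r, h2)
    # Steps 3-4: P_A sends m = r*H_1 + x; then H_1* = r^-1 and H_2* = m + y = r*H.
    m = (r * h1 + x) % P
    h1_star, h2_star = pow(r, -1, P), (m + y) % P
    # Steps 5-7: local powers, then F_OLE converts each product back to additive shares.
    return {k: f_ole(pow(h1_star, k, P), pow(h2_star, k, P)) for k in range(2, l + 1)}

h1, h2 = secrets.randbelow(P), secrets.randbelow(P)
shares = pi_a2m_l(h1, h2, l=4)                 # shares[k] = (H_{1,k}, H_{2,k})
h = (h1 + h2) % P
assert all((s1 + s2) % P == pow(h, k, P) for k, (s1, s2) in shares.items())
```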
==== Performance Analysis
The protocol has no offline communication; all the communication takes place
online in 1.5 rounds (steps 2, 3, 6). The bandwidth of the protocol is $1026
dot (128 + 128^2) + 1026 dot 128 + 128 approx 2.1 "MB"$.
=== ROLE + OLE Protocol
This protocol is nearly identical to the original GHASH construction from
#link("https://eprint.iacr.org/2023/964")[XYWY23]. It only addresses the leakage
of $H_(1"/"2)$ in the presence of a malicious adversary using $0$ as an input
for $cal(F)_"OLE"$. Instead of using $cal(F)_"OLE"$ for all powers $k = 1...l$,
we replace the first invocation of $cal(F)_"OLE"$ with $cal(F)_"ROLE"$ and then
only use $cal(F)_"OLE"$ for $k = 2...l$. The 0 issue is still present for higher
powers of $H$, but it can be fixed with the zero check.
==== Protocol $Pi_"ROLE + OLE"^l$
+ Both parties call $cal(F)_"ROLE"$, so that $P_A$ gets $(a_1, x_1)$ and $P_B$
gets $(b_1, y_1)$.
+ $P_A$ defines $(r_A, r_1) := (a_1, x_1)$ and $P_B$ defines
$(r_B, r_2) := (b_1, y_1)$.
+ $P_A$ locally computes $r_A^k$ and $P_B$ locally computes $r_B^k$, for
$k=2...l$.
+ Both parties call $cal(F)_"OLE" (r_A^k, r_B^k) arrow.r (r_(1,k), r_(2,k))$, so
that $P_A$ gets $r_(1,k)$ and $P_B$ gets $r_(2,k)$ for $k = 2...l$.
+ $P_A$ opens $d_1 = H_1 - r_1$ and $P_B$ opens $d_2 = H_2 - r_2$, so that both
parties know $d = d_1 + d_2 = (H_1 + H_2) - (r_1 +r_2)$.
+ Define the polynomials $f_k$ over $"GF"(2^128)$, with
$f_k (x) := (d + x)^k = sum_(j=0)^k f_(j,k) dot x^j$. Since $H^k = f_k (r_1 + r_2)$
and $r_(1,j) + r_(2,j) = (r_1 + r_2)^j$ (with $r_(1,1) := r_1$, $r_(2,1) := r_2$),
$P_A$ locally computes and outputs $H_(1,k) = f_(0,k) + sum_(j=1)^k f_(j,k) dot r_(1,j)$
and $P_B$ locally computes and outputs $H_(2,k) = sum_(j=1)^k f_(j,k) dot r_(2,j)$
for $k = 1...l$.
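
The local evaluation in the last step is the subtle part; the following sketch spells it out over a toy prime field (where the signs that cancel in $"GF"(2^128)$ have to be tracked explicitly), with $cal(F)_"ROLE"$ and $cal(F)_"OLE"$ simulated locally. All names are illustrative.

```python
import secrets
from math import comb

# Toy prime field standing in for GF(2^128); in char 2 the explicit signs
# below would all collapse to '+'.
P = 2**127 - 1
L = 5            # highest power of H

def f_role():
    """Simulated F_ROLE: P_A gets (a, x), P_B gets (b, y) with y = a*b + x."""
    a, b, x = (secrets.randbelow(P) for _ in range(3))
    return (a, x), (b, (a * b + x) % P)

def f_ole(a, b):
    """Simulated F_OLE, with the two outputs summing to the product a*b."""
    x = secrets.randbelow(P)
    return x, (a * b - x) % P

# Steps 1-2: one ROLE; signs arranged so that r_1 + r_2 = r_A * r_B.
(r_a, x1), (r_b, y1) = f_role()
r1, r2 = (-x1) % P, y1

# Steps 3-4: OLEs on the local powers give shares with r_{1,k} + r_{2,k} = (r_1+r_2)^k.
r1_sh, r2_sh = {1: r1}, {1: r2}
for k in range(2, L + 1):
    r1_sh[k], r2_sh[k] = f_ole(pow(r_a, k, P), pow(r_b, k, P))

# Step 5: open d = (H_1 + H_2) - (r_1 + r_2).
h1, h2 = secrets.randbelow(P), secrets.randbelow(P)
d = (h1 + h2 - r1 - r2) % P

# Step 6: H^k = sum_j f_{j,k} (r_1+r_2)^j with f_{j,k} = C(k,j) d^{k-j}, so each
# party takes the same linear combination of its shares (P_A adds the public d^k).
def combine(shares, k, add_constant):
    acc = pow(d, k, P) if add_constant else 0
    for j in range(1, k + 1):
        acc += comb(k, j) * pow(d, k - j, P) * shares[j]
    return acc % P

h = (h1 + h2) % P
for k in range(1, L + 1):
    h1k, h2k = combine(r1_sh, k, True), combine(r2_sh, k, False)
    assert (h1k + h2k) % P == pow(h, k, P)
```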
==== Analysis of 0 issue
The OLEs of step 4 are still vulnerable to the 0 issue. This allows a malicious
$P_A$ to learn all the $r_(2,k), k = 2...l$, and thereby also all the $H_(2,k)$.
$P_A$ can then output arbitrary $s_k in bb(F)$ in step 6, which allows him to
completely set all the $H^k$ for $k = 2...l$.
However, he will not be able to set $r_(2,1)$, which means he cannot set $H^1$. He
is also not able to remove the $H^1$ term from $"MAC" = sum_(k=1)^l H^k dot b_k$, for
example when some $b_k = b_(k')$, because he would need to know $r_(2,1)$ for that. In
other words, if $"MAC" = "MAC"_1 + "MAC"_2$, then $"MAC"_2$ always contains a private,
uncontrollable mask $H_2^1 dot b_1$, which prevents $P_A$ from completely
controlling the $"MAC"$. Thus, fixing the 0 issue is optional.
==== Performance Analysis
- The protocol only needs 0.5 offline round (step 4) and 0.5 online round
(step 5). This holds even if the zero-check is applied.
- The protocol has an upload/download size of
- *Offline*:
- *Without zero-check*: $1026 dot (128 + 128^2) + 1025 dot 128 approx 2.1 "MB"$
- *With zero-check*: Approximately 2-times overhead, so $approx 6.3 "MB"$
- *Online*:
- *Without zero-check*: $128 "bit"$
- *With zero-check*: $256 "bit"$
=== Beaver Protocol
This protocol is nearly identical to the original GHASH construction from
#link("https://eprint.iacr.org/2023/964")[XYWY23]. It only addresses the leakage
of $H_(1"/"2)$ in the presence of a malicious adversary using $0$ as an input
for $cal(F)_"OLE"$. Instead of using $cal(F)_"OLE"$ , we sample $r = r_1 + r_2$
randomly and compute the higher powers of additive shares with
$cal(F)_"Beaver"$. This protocol does not suffer from the 0 issue.
==== Protocol $Pi_"Beaver"^l$
+ Both parties sample a random field element. $P_A$ samples $r_1 arrow.l
"GF"(2^128)$ and $P_B$ samples $r_2 arrow.l "GF"(2^128)$.
+ Both parties repeatedly call $cal(F)_"Beaver" (r_(1,k - 1), r_1, r_(2,k - 1),
r_2) -> (r_(1, k), r_(2, k))$ for $k = 2...l$.
+ $P_A$ opens $d_1 = H_1 - r_1$ and $P_B$ opens $d_2 = H_2 - r_2$, so that both
parties know $d = d_1 + d_2 = (H_1 + H_2) - (r_1 +r_2)$.
+ Define the polynomials $f_k$ over $"GF"(2^128)$, with
$f_k (x) := (d + x)^k = sum_(j=0)^k f_(j,k) dot x^j$. Since $H^k = f_k (r_1 + r_2)$
and $r_(1,j) + r_(2,j) = (r_1 + r_2)^j$ (with $r_(1,1) := r_1$, $r_(2,1) := r_2$),
$P_A$ locally computes and outputs $H_(1,k) = f_(0,k) + sum_(j=1)^k f_(j,k) dot r_(1,j)$
and $P_B$ locally computes and outputs $H_(2,k) = sum_(j=1)^k f_(j,k) dot r_(2,j)$
for $k = 1...l$.
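
Below is a small sketch of how $cal(F)_"Beaver"$ could be realized from trusted-dealer multiplication triples and used for step 2, over a toy prime field; all names are illustrative.

```python
import secrets

P = 2**127 - 1   # toy prime field stand-in

def additive_share(v):
    s = secrets.randbelow(P)
    return s, (v - s) % P

def f_beaver(u_shares, v_shares):
    """Multiply two additively shared values using one trusted-dealer Beaver triple."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a1, a2 = additive_share(a)
    b1, b2 = additive_share(b)
    c1, c2 = additive_share(a * b % P)
    (u1, u2), (v1, v2) = u_shares, v_shares
    e = (u1 + u2 - a1 - a2) % P            # both parties open e = u - a
    f = (v1 + v2 - b1 - b2) % P            # and f = v - b
    # u*v = c + e*b + f*a + e*f; the public e*f term is added by one party only.
    w1 = (c1 + e * b1 + f * a1 + e * f) % P
    w2 = (c2 + e * b2 + f * a2) % P
    return w1, w2

# Step 2 of the protocol: shares of the powers of r = r_1 + r_2.
r1, r2 = secrets.randbelow(P), secrets.randbelow(P)
powers = {1: (r1, r2)}
for k in range(2, 6):
    powers[k] = f_beaver(powers[k - 1], powers[1])
r = (r1 + r2) % P
assert all((s1 + s2) % P == pow(r, k, P) for k, (s1, s2) in powers.items())
```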
==== Performance Analysis
- By using free-squaring in $"GF"(2^128)$ and batching calls to $cal(F)_"Beaver"$,
the protocol needs 2 offline rounds (step 2, repeated) and 0.5 online round
(step 3).
- The protocol has an upload/download size of
- *Offline*: $1025 dot (128 + 128^2) + 1025 dot 128 approx 2.1 "MB"$
- *Online*: $128 "bit"$

BIN
research/ole-flavors.pdf Normal file

Binary file not shown.

132
research/ole-flavors.typ Normal file

@@ -0,0 +1,132 @@
#set page(paper: "a4")
#set par(justify: true)
#set text(size: 12pt)
= Oblivious Linear Evaluation (OLE) flavors from random OT
Here we sum up different OLE flavors, where some of them are needed for
subprotocols of TLSNotary. All mentioned OLE protocol flavors are
implementations with errors, i.e. a malicious adversary can introduce additive
errors into the result. This means correctness is not guaranteed, but privacy is.
== Functionality $cal(F)_"ROT"$
*Note*: In the literature there are different flavors of random OT, depending on
whether the receiver can choose his input or not. Here we assume that the receiver
has a choice.
Define the functionality $cal(F)_"ROT"$:
- The sender $P_A$ receives the random tuple $(t_0, t_1)$, where $t_0, t_1$ are
$kappa$-bit messages.
- The receiver $P_B$ inputs a bit $x$ and receives the corresponding $t_x$.
== Random OLE
=== Functionality $cal(F)_"ROLE"$
Define the functionality $cal(F)_"ROLE"$ where
- $P_A$ receives $(a, x)$
- $P_B$ receives $(b, y)$
such that $ y = a dot b + x$
=== Protocol $Pi_"ROLE"$
+ $P_B$ randomly samples $d arrow.l bb(F)$ and $f arrow.l bb(F)$.
+ $P_A$ randomly samples $c arrow.l bb(F)$ and $e arrow.l bb(F)$.
+ For each $i = 1, ... , l$ where $l = |f|$: Both parties call
$cal(F)_"ROT" (f)$, so $P_A$ knows $t_(i,0), t_(i,1)$ and $P_B$ knows $t_(i,f_i)$.
+ $P_A$ sends $e$ and $u_i = t_(i,0) - t_(i,1) + c$ to $P_B$.
+ $P_B$ defines $b = e + f$ and sends $d$ to $P_A$.
+ $P_A$ defines $a = c + d$ and outputs
$x = sum 2^i t_(i,0) - a dot e$
+ $P_B$ computes $ y_i
&= f_i (u_i + d) + t_(i,f_i) \
&= f_i (t_(i,0) - t_(i,1) + c + d) + t_(i,f_i) \
&= f_i dot a + t_(i,0) $
and outputs $y = sum 2^i y_i$
+ Now it holds that $y = a dot b + x$.
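
A runnable sketch of $Pi_"ROLE"$ over a toy prime field, with $cal(F)_"ROT"$ simulated locally; the names are illustrative.

```python
import secrets

P = 2**127 - 1          # toy prime field stand-in
N = P.bit_length()      # l = bitsize of field elements

def f_rot(choice_bit):
    """Simulated F_ROT: sender gets two random messages, receiver gets the chosen one."""
    t0, t1 = secrets.randbelow(P), secrets.randbelow(P)
    return (t0, t1), (t1 if choice_bit else t0)

# Steps 1-2: both parties sample their randomness.
d, f = secrets.randbelow(P), secrets.randbelow(P)   # P_B
c, e = secrets.randbelow(P), secrets.randbelow(P)   # P_A

# Step 3: one random OT per bit f_i of f.
sender, receiver = zip(*(f_rot((f >> i) & 1) for i in range(N)))

# Step 4: P_A sends e and u_i = t_{i,0} - t_{i,1} + c.
u = [(t0 - t1 + c) % P for t0, t1 in sender]

# Step 5: P_B defines b = e + f and sends d.
b = (e + f) % P

# Step 6: P_A defines a = c + d and outputs x = sum_i 2^i t_{i,0} - a*e.
a = (c + d) % P
x = (sum(t0 << i for i, (t0, _) in enumerate(sender)) - a * e) % P

# Step 7: P_B computes y_i = f_i*(u_i + d) + t_{i,f_i} and outputs y = sum_i 2^i y_i.
y = sum((((f >> i) & 1) * (u[i] + d) + receiver[i]) << i for i in range(N)) % P

assert y == (a * b + x) % P     # y = a*b + x
```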
== Vector OLE
=== Functionality $cal(F)_"VOLE"$
Define the functionality $cal(F)_"VOLE"$ which maintains a counter $k$ and
which allows to call an $"Extend"_k$ command multiple times.
- When calling $"Initialize"$, $P_B$ inputs a field element $b$. This sets up the
functionality for subsequent calls to $"Extend"_k$.
- When calling $"Extend"_k$: $P_A$ receives $(a_k, x_k)$ and $P_B$ receives
$y_k$.
such that $ y_k = a_k dot b + x_k$
=== Protocol $Pi_"VOLE"$
*Note*: This is the $Pi_"COPEe"$ construction from KOS16.
+ Initialization:
- $P_B$ chooses some field element $b$.
- Both parties call $cal(F)_"ROT" (b)$, so $P_A$ knows
$t_0^i, t_1^i$ and $P_B$ knows $t_(b_i)$.
- With some PRF define: $s_(i,0)^k := "PRF"(t^i_0, k)$, $s_(i,1)^k :=
"PRF"(t^i_1, k)$
+ $"Extend"_k$: This can be batched or/and repeated several times.
- $P_A$ chooses some field element $a_k$ and sends
$u_i^k = s_(i,0)^k - s_(i,1)^k + a_k$ to $P_B$.
- $P_A$ outputs $x_k = sum 2^i s_(i,0)^k$
- $P_B$ computes $ y^k_i
&= b_i dot u^k_i + s_(i,b_i)^k \
&= b_i (s_(i,0)^k - s_(i,1)^k + a_k) + s_(i,b_i)^k \
&= b_i dot a_k + s_(i,0)^k $
and outputs $y_k = sum 2^i y^k_i$
+ Now it holds that $y_k = a_k dot b + x_k$.
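
A sketch of the $"Extend"_k$ mechanism over a toy prime field, with the random OTs of the initialization replaced by locally sampled seeds and SHA-256 used as a stand-in PRF; all names are illustrative.

```python
import hashlib
import secrets

P = 2**127 - 1
N = P.bit_length()

def prf(seed, k):
    """Toy PRF: hash seed and counter down to a field element (stand-in only)."""
    data = seed.to_bytes(16, "big") + k.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

# Initialization: P_B inputs b; one random OT per bit of b fixes the seeds.
b = secrets.randbelow(P)
seeds = [(secrets.randbelow(P), secrets.randbelow(P)) for _ in range(N)]   # P_A: (t_i^0, t_i^1)
chosen = [seeds[i][(b >> i) & 1] for i in range(N)]                        # P_B: t_{b_i}

def extend(k, a_k):
    """One Extend_k call: P_A inputs a_k and gets x_k; P_B gets y_k = a_k*b + x_k."""
    s0 = [prf(t0, k) for t0, _ in seeds]
    s1 = [prf(t1, k) for _, t1 in seeds]
    u = [(s0[i] - s1[i] + a_k) % P for i in range(N)]                      # P_A -> P_B
    x_k = sum(s0[i] << i for i in range(N)) % P                            # P_A's output
    y_k = sum((((b >> i) & 1) * u[i] + prf(chosen[i], k)) << i for i in range(N)) % P
    return x_k, y_k

for k in range(3):
    a_k = secrets.randbelow(P)
    x_k, y_k = extend(k, a_k)
    assert y_k == (a_k * b + x_k) % P
```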
== Random Vector OLE
=== Functionality $cal(F)_"RVOLE"$
Define the functionality $cal(F)_"RVOLE"$ which maintains a counter $k$ and
which allows to call an $"Extend"_k$ command multiple times.
- When calling $"Initialize"$, $P_B$ receives a field element $b$. This sets up
the functionality for subsequent calls to $"Extend"_k$.
- When calling $"Extend"_k$: $P_A$ receives $(a_k, x_k)$ and $P_B$ receives
$y_k$.
such that $ y_k = a_k dot b + x_k$
=== Protocol $Pi_"RVOLE"$
+ Initialization:
- $P_B$ chooses some field element $f$.
- Both parties call $cal(F)_"ROT" (f)$, so $P_A$ knows
$t_0^i, t_1^i$ and $P_B$ knows $t_(f_i)$.
- $P_A$ samples $e arrow.l bb(F)$, sends it to $P_B$, and $P_B$ defines $b = e + f$.
- With some PRF define: $s_(i,0)^k := "PRF"(t^i_0, k)$, $s_(i,1)^k :=
"PRF"(t^i_1, k)$
+ $"Extend"_k$: This can be batched or/and repeated several times.
- $P_A$ samples randomly $c_k arrow.l bb(F)$ and
$P_B$ samples randomly $d_k arrow.l bb(F)$.
- $P_A$ sends $u_i^k = s_(i,0)^k - s_(i,1)^k + c_k$ to $P_B$.
- $P_B$ sends $d_k$ to $P_A$.
- $P_A$ defines $a_k = c_k + d_k$ and outputs
$x_k = sum 2^i s_(i,0)^k - a_k dot e$
- $P_B$ computes $ y^k_i
&= f_i (u^k_i + d_k) + s_(i,f_i)^k \
&= f_i (s_(i,0)^k - s_(i,1)^k + c_k + d_k) + s_(i,f_i)^k \
&= f_i dot a_k + s_(i,0)^k $
and outputs $y_k = sum 2^i y^k_i$
+ Now it holds that $y_k = a_k dot b + x_k$.
== OLE from random OLE
=== Functionality $cal(F)_"OLE"$
Define the functionality $cal(F)_"OLE"$. After getting input $a$ from $P_A$ and $b$
from $P_B$ return $x$ to $P_A$ and $y$ to $P_B$ such that $y = a dot b + x$.
=== Protocol $Pi_"OLE"$
Both parties have access to the functionality $cal(F)_"ROLE"$ and call it once per
index $k$, so $P_A$ receives $(a'_k, x'_k)$ and $P_B$ receives $(b'_k, y'_k)$.
Then they perform the following derandomization:
- $P_A$ sends $u_k = a_k + a'_k$ to $P_B$.
- $P_B$ sends $v_k = b_k + b'_k$ to $P_A$.
- $P_A$ outputs $x_k = x'_k + a'_k dot v_k$.
- $P_B$ outputs $y_k = y'_k + b_k dot u_k$.
Now it holds that $ y_k - x_k
&= (y'_k + b_k dot u_k) - (x'_k + a'_k dot v_k) \
&= (y'_k + b_k dot (a_k + a'_k)) - (x'_k + a'_k dot (b_k + b'_k)) \
&= a_k dot b_k
$
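
A minimal sketch of this derandomization over a toy prime field, with the random OLE simulated locally; names are illustrative.

```python
import secrets

P = 2**127 - 1   # toy prime field stand-in

def f_role():
    """Simulated random OLE: P_A gets (a', x'), P_B gets (b', y') with y' = a'*b' + x'."""
    a, b, x = (secrets.randbelow(P) for _ in range(3))
    return (a, x), (b, (a * b + x) % P)

def derandomize(a_k, b_k):
    """Turn one random OLE into an OLE on the chosen inputs a_k and b_k."""
    (ar, xr), (br, yr) = f_role()
    u_k = (a_k + ar) % P          # P_A -> P_B
    v_k = (b_k + br) % P          # P_B -> P_A
    x_k = (xr + ar * v_k) % P     # P_A's output
    y_k = (yr + b_k * u_k) % P    # P_B's output
    return x_k, y_k

a_k, b_k = secrets.randbelow(P), secrets.randbelow(P)
x_k, y_k = derandomize(a_k, b_k)
assert (y_k - x_k) % P == a_k * b_k % P
```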


@@ -0,0 +1,99 @@
#set page(paper: "a4")
#set par(justify: true)
#set text(size: 12pt)
= Zero Check for Oblivious Linear Evaluation
In many subprotocols of TLSNotary we use OLEs to convert shares of a product into
shares of a sum. We recall the definition of the functionality:
=== Functionality $cal(F)_"OLE"$
Define the functionality $cal(F)_"OLE"$. After getting input $a$ from $P_A$ and $b$
from $P_B$ return $x$ to $P_A$ and $y$ to $P_B$ such that $y = a dot b + x$.
Problems arise in the presence of a malicious adversary, who inputs $0$ into the
protocol, because now $y = x$, which means that privacy for the honest party's
output is no longer guaranteed.
To address this shortcoming, both parties can run the following protocol, which
allows an honest party to detect an input of $0$. Without loss of generality,
let's assume $P_B$ is honest and wants to check if $P_A$ used a 0 input. The
protocol can be repeated with roles swapped, to also check that $P_B$ was honest.
Note that these executions can be batched.
=== Protocol $Pi_"OLE-Zero"$
+ $P_A$ chooses some OLE input $a_k$ and $P_B$ chooses $b_k$ for $k = 1...l$.
These are the base OLEs we want to check for 0 input of $P_A$.
+ Both parties call $cal(F)_"ROLE" -> (a_0, x_0), (b_0, y_0)$, so $P_A$ gets
$(a_0, x_0)$ and $P_B$ gets $(b_0, y_0)$. This is needed as a mask later.
+ $P_A$ sets $a_(k + 1) := a_(k - l)^(-1)$ and $P_B$ sets $b_(k + 1) := y_(k - l)$ for
$k = l...2l$.
+ Both parties call $cal(F)_"OLE" (a_k, b_k) -> (x_k, y_k)$ for
$k = 1...(2l + 1)$.
+ $P_A$ computes $ s = - sum_(k = 0)^l x_k dot a_k^(-1) - sum_(k = l + 1)^(2l +
1) x_k$ and sends $s$ to $P_B$.
+ $P_B$ checks that $sum_(k = 0)^l b_k = s + sum_(k = l + 1)^(2l + 1) y_k $. If
this does not hold $P_B$ aborts.
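
A runnable sketch of $Pi_"OLE-Zero"$ over a toy prime field with $cal(F)_"OLE"$ simulated locally; here $P_A$ is honest so the check passes, while an $a_k = 0$ input would leave $P_A$ with no inverse to use in step 3 and make the check fail. Names are illustrative.

```python
import secrets

P = 2**127 - 1    # toy prime field stand-in
L = 4             # number of base OLEs to check

def f_ole(a, b):
    """Simulated F_OLE: P_A gets x, P_B gets y = a*b + x."""
    x = secrets.randbelow(P)
    return x, (a * b + x) % P

# Step 1: the base OLE inputs (index 1..L), non-zero because P_A is honest here.
a = {k: 1 + secrets.randbelow(P - 1) for k in range(1, L + 1)}
b = {k: secrets.randbelow(P) for k in range(1, L + 1)}
# Step 2: one ROLE (index 0) as a mask, simulated with random chosen inputs.
a[0], b[0] = 1 + secrets.randbelow(P - 1), secrets.randbelow(P)

# Step 4 (first half): run the base OLEs and the ROLE.
x, y = {}, {}
for k in range(0, L + 1):
    x[k], y[k] = f_ole(a[k], b[k])

# Step 3: inputs for the "inverse" OLEs, index L+1 .. 2L+1.
for k in range(0, L + 1):
    a[L + 1 + k] = pow(a[k], -1, P)
    b[L + 1 + k] = y[k]
# Step 4 (second half): run the inverse OLEs.
for k in range(L + 1, 2 * L + 2):
    x[k], y[k] = f_ole(a[k], b[k])

# Step 5: P_A sends the correction term s.
s = (-sum(x[k] * pow(a[k], -1, P) for k in range(0, L + 1))
     - sum(x[k] for k in range(L + 1, 2 * L + 2))) % P

# Step 6: P_B's check.
lhs = sum(b[k] for k in range(0, L + 1)) % P
rhs = (s + sum(y[k] for k in range(L + 1, 2 * L + 2))) % P
assert lhs == rhs
```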
=== Intuition
+ We notice that a linear function has an inverse: $ y(x) &= m x + n \
y^(-1)(x) &= 1/m x - 1/m n $ such that $y^(-1)(y(x)) = x$. So if we have some
value $x_0$ and plug it into $y(x)$ we get $y(x_0) = y_0$. Then the inverse
function gives us back $y^(-1)(y_0) = x_0$.
+ A function $y(x) = n$ is constant. It does not
have an inverse, because we cannot solve for $x$. So there is no way to get
back from $y_0$ to $x_0$. But this is exactly what happens when
one of the parties $P_(A"/"B)$ inputs 0 into the OLE. Maybe we can exploit
this to construct some check.
+ As a simple example, we now look at a single OLE, but the check also works for an
arbitrary number. So we want to come up with a check for a single
$cal(F)_"OLE" (a_0, b_0) -> (x_0, y_0)$. We now assume $P_B$ wants to check whether $P_A$
input $a_0 = 0$ into this OLE. So $P_A$ inputs $a_0$ and gets $x_0$, and $P_B$
inputs $b_0$ and gets $y_0$, such that $y_0 = a_0 b_0 + x_0$. Notice the
analogy to the linear function: we now have $y(b)$, with $P_A$ being the function
provider, defining the function with slope $a_0$ and offset $x_0$, and $P_B$ being the
evaluator, plugging $b_0$ into $y(b)$ and getting $y_0$.
+ $P_A$ now has to pass a check. He is required to invert the function, which
gives him $ y^(-1) (b) = 1/a_0 b - 1/a_0 x_0 $
The problem is that, when turning this inverse function into an OLE,
$P_A$ can choose the inverted slope $a_1 = a_0^(-1)$ but not the offset $x_1$, since that
is the output of the OLE for him. But what he can do is calculate a correction term
that accounts for the wrong offset and send it to $P_B$. $P_B$ will input his output
from the previous OLE and expect to get back his original input (up to the correction
term), since they evaluate the inverse function.
So both parties will now call the inverse OLE. $P_A$ will input
$a_1 = a_0^(-1)$ and $P_B$ will input $b_1 = y_0$, such that
$ cal(F)_"OLE" (a_1, b_1) arrow.r (x_1, y_1) =
cal(F)_"OLE" (a_0^(-1), y_0) arrow.r (x_1, y_1)$. So the equation will be
$ y_1 = 1/a_0 y_0 + x_1 $ Using that $y_0 = a_0 b_0 + x_0$, we get
$ y_1 = b_0 + x_0 / a_0 + x_1 $
In other words $P_B$ will get $y_1$ as an output. $P_A$ will send him the
correction term $s = -x_0/a_0 - x_1$ and $P_B$ will check that $y_1 + s =
b_0$.
+ The last thing we have to ensure is that $P_B$ cannot abuse this check
to get information about the inputs and outputs of $P_A$. For example,
if $P_B$ plugs in some $b_1 eq.not y_0$, he learns another point on the
inverse function, one not belonging to the original point. He can easily do the
math and arrive at
$
x_0 &= (b_0 b_1 -y_0 (y_1 + s)) / (b_0 - y_1 - s) \
a_0 &= (y_0 - x_0) / b_0 \
a_1 &= 1 / a_0 \
x_1 &= - x_0 / a_0 - s
$
Since $P_B$ knows $y_0, y_1, b_0, b_1, s$ this would totally leak the inputs
and outputs of $P_A$.
+ We address this by introducing a ROLE which works as a mask for the correction
term. $P_A$ and $P_B$ call $cal(F)_"ROLE" arrow.r (a_2, x_2), (b_2, y_2)$ and
then the inverse OLE for this ROLE (note this has to be an OLE because its
input is chosen): $ cal(F)_"OLE" (a_3, b_3) arrow.r (x_3, y_3) = cal(F)_"OLE"
(a_2^(-1), y_2) arrow.r (x_3, y_3)$. Then instead of sending $s = -x_0 / a_0 -
x_1$, $P_A$ will send $s = -x_0 / a_0 - x_1 - x_2 / a_2 - x_3$ and $P_B$ will
check that $y_1 + y_3 + s = b_0 + b_2$. Note that a single ROLE is enough to
mask the correction term for an arbitrary number of OLEs, not just one as in
this example.