Compare commits


40 Commits

Author SHA1 Message Date
Jimmy Debe
500f69a0ec Update marketplace.md 2024-06-20 10:02:23 -04:00
Jimmy Debe
c0449f6c4b Update marketplace.md 2024-06-19 17:45:52 -04:00
Jimmy Debe
0c2fb0c40e Update marketplace.md 2024-05-30 22:41:37 -04:00
Jimmy Debe
6a82c7e4d4 Update marketplace.md 2024-05-30 22:36:38 -04:00
Dmitriy Ryajov
9f0020dbca Merge branch 'main' into codex-marketplace 2024-05-30 11:46:44 -06:00
Jimmy Debe
5acdaba06f Update marketplace.md 2024-05-30 13:29:40 -04:00
Jimmy Debe
58e1317940 Update marketplace.md 2024-05-30 10:30:21 -04:00
Jimmy Debe
822f1a61ef Add diagram 2024-05-29 22:28:58 -04:00
Jimmy Debe
137ee43542 Update and rename codex/marketplace.md to codex/raw/marketplace.md 2024-05-29 22:24:27 -04:00
Jimmy Debe
41debdc659 Add request diagram 2024-05-29 22:01:00 -04:00
Jimmy Debe
88c24503d8 Create request-contract.md 2024-05-29 21:58:40 -04:00
Jimmy Debe
b48e7c8d10 Update marketplace.md 2024-05-28 22:31:25 -04:00
Jimmy Debe
7b443c1aab 17/WAKU2-RLN-RELAY: Update (#32)
Move 17/WAKU2-RLN-RELAY to stable open discussion. 
Implementation :
- [ nim ](https://github.com/waku-org/nwaku)
- [ go ](https://github.com/waku-org/go-waku)
2024-05-28 22:27:45 -04:00
Jimmy Debe
99be3b9745 Move Raw Specs (#37)
Move Vac raw specs into raw subdirectory.
2024-05-27 07:57:18 -04:00
ramsesfv
7e3a625812 ETH-SECPM-DEC (#28)
Co-authored-by: Jimmy Debe <91767824+jimstir@users.noreply.github.com>
Co-authored-by: Ekaterina Broslavskaya <seemenkina@gmail.com>
Co-authored-by: seugu <99656002+seugu@users.noreply.github.com>
2024-05-27 12:15:46 +02:00
Jimmy Debe
023b8f69e6 Update marketplace.md 2024-05-23 17:57:11 -04:00
Jimmy Debe
464205e8bf Update marketplace.md 2024-05-23 11:35:43 -04:00
Jimmy Debe
d7d813fb04 Update marketplace.md 2024-05-23 11:24:00 -04:00
ramsesfv
e234e9d5a3 Update eth-secpm.md (#35)
Added flow diagrams

---------

Co-authored-by: Jimmy Debe <91767824+jimstir@users.noreply.github.com>
2024-05-21 11:21:34 +02:00
Jimmy Debe
21394feaef Update marketplace.md 2024-05-15 20:13:20 -04:00
Jimmy Debe
5497f4de51 Update marketplace.md 2024-05-15 20:06:16 -04:00
Jimmy Debe
b8df748e47 Update marketplace.md 2024-05-14 20:30:19 -04:00
Jimmy Debe
e5b859abfb Update WAKU2-NETWORK: Move to draft (#5) 2024-05-10 16:41:48 +02:00
Jimmy Debe
fe90d98f89 Update marketplace.md 2024-05-09 21:49:17 -04:00
Jimmy Debe
8cf4866bb7 Update marketplace.md 2024-05-02 06:47:36 -04:00
Jimmy Debe
115d3fffc2 Update marketplace.md 2024-05-02 06:38:33 -04:00
Jimmy Debe
e13acacf54 Update codex/marketplace.md
Co-authored-by: Eric <5089238+emizzle@users.noreply.github.com>
2024-05-02 06:07:30 -04:00
Jimmy Debe
b40b61b525 Update marketplace.md 2024-05-01 20:28:15 -04:00
Jimmy Debe
d84c62e358 Update marketplace.md 2024-05-01 12:57:12 -04:00
Jimmy Debe
5c3d69f484 Update marketplace.md 2024-04-30 22:35:23 -04:00
Jimmy Debe
1f7cfda4c6 Update marketplace.md 2024-04-29 21:26:55 -04:00
Jimmy Debe
3a7a30bc96 Update marketplace.md 2024-04-29 20:25:40 -04:00
Jimmy Debe
9aa7f04b1b Update marketplace.md 2024-04-29 14:24:48 -04:00
Jimmy Debe
3710944981 Update marketplace.md 2024-04-25 22:22:51 -04:00
Jimmy Debe
c714e0711c Create marketplace.md 2024-04-25 20:36:05 -04:00
Filip Pajic
69f2853407 fix: Syntax fix for index documents inside Waku foldersFix syntax (#34)
# What does this PR resolve? 🚀
- Changes title inside Waku/README.md from h2 to h1
- Changes title inside Waku/Deprecated/README.md from h2 to h1

# Details 📝
The syntax for the title of the markdown seems to not be proper one
comparing to other README documents.
It's important to define titles with h1(#) to be able to parse it
properly later on by the website
2024-04-23 14:17:17 -04:00
Hanno Cornelius
8f94e97cf2 docs: deprecate swap protocol (#31)
Deprecates swap protocol.
2024-04-18 13:38:26 -04:00
Jimmy Debe
d82eaccdc0 Update WAKU2-METADATA: Move to draft (#6)
Move 66/WAKU2-METADATA to draft.
2024-04-17 15:24:44 -04:00
LordGhostX
8b552ba2e0 chore: mark 16/WAKU2-RPC as deprecated (#30) 2024-04-16 15:43:27 +02:00
Jimmy Debe
0b0e00f510 feat(rln-stealth-commitments): add initial tech writeup (#23)
By: rymnc
Reference pull request: https://github.com/vacp2p/rfc/pull/658

Initial writeup on viability of stealth commitments for status
communities

---------

Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
2024-04-15 17:34:56 +05:30
23 changed files with 1665 additions and 105 deletions

New file (1 line): ![image](https://github.com/vacp2p/rfc-index/assets/91767824/75db34e5-0cc7-44a9-8b31-f7492e652d4b)

New binary image file (530 KiB), not shown.

codex/raw/marketplace.md (new file, 276 lines):
---
title: CODEX-MARKETPLACE
name: Codex Storage Marketplace
status: raw
tags: codex
editor: Dmitriy <dryajov@status.im>
contributors:
- Mark <mark@codex.storage>
- Adam <adam.u@status.im>
- Eric <ericmastro@status.im>
- Jimmy Debe <jimmy@status.im>
---
## Abstract
Codex Marketplace and its interactions are defined by a smart contract deployed on an EVM-compatible blockchain.
This specification describes these interactions for all the different roles in the network.
The specification is meant for a Codex client implementor.
The goal is to create a storage marketplace that promotes durability.
## Motivation
The Codex network aims to create a peer-to-peer storage engine with strong data durability,
data persistence guarantees and node storage incentives.
Support for resource restricted devices, like mobile devices should also be embraced.
The protocol should remove complexity to allow for a simple implementation and
simplify incentive mechanisms.
## Semantics
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
### Definitions
| Terminology | Description |
| --------------- | --------- |
| storage providers | A Codex node that provides storage services to the marketplace. |
| validator nodes | A Codex node that checks for missing storage proofs and triggers for a reward. |
| client nodes | The most common Codex node that interacts with other nodes to store, locate and retrieve data. |
| slots | Created by client nodes when a new dataset is requested to be stored. Discussed further in the [slots section](#slots). |
### Storage Request
Client nodes can create storage requests on the Codex network via the Codex marketplace.
The marketplace handles storage requests, the storage slot state,
storage provider rewards, storage provider collaterals, and storage proof state.
To create a request to store a dataset on the Codex network,
client nodes MUST split the dataset into data chunks, $(c_1, c_2, c_3, \ldots, c_{n})$.
Using an erasure coding technique,
the data chunks are encoded and placed into separate slots.
The erasure coding technique SHOULD be the [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA).
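The split-and-encode step can be sketched as follows; this toy uses a single XOR parity chunk as a stand-in for Reed-Solomon, and the chunk size and function names are illustrative, not part of the spec:

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a dataset into fixed-size chunks c_1..c_n, zero-padding the last."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    chunks[-1] = chunks[-1].ljust(chunk_size, b"\x00")
    return chunks


def add_parity(chunks: list[bytes]) -> list[bytes]:
    """Append one XOR parity chunk: any single lost chunk can be rebuilt
    by XOR-ing the remaining chunks (Reed-Solomon generalizes this to
    tolerating multiple losses)."""
    parity = bytes(len(chunks[0]))  # all-zero accumulator
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]


# each resulting chunk would then be placed into its own slot
slots = add_parity(split_into_chunks(b"example dataset bytes", 8))
```

Recovering a missing chunk is then the XOR of all the others, which is the redundancy property the storage proofs are meant to protect.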
When the client node is prompted by the user to create a storage request,
it MUST submit a transaction with the desired request parameters.
Once a request is created via the transaction,
all slots MUST be filled by storage providers before the request is officially started.
If the request does not attract enough storage providers before the time defined by `expiry` runs out,
the request is `cancelled`.
If cancelled, the storage provider SHOULD initiate a transaction call in order to receive its `collateral` along with a portion of the `reward`.
The remaining `reward` is returned to the requester.
The requester MAY create a new request with different values to restart the process.
In order to submit the new storage request with the transaction,
the following parameters MUST be specified in the transaction call:
```solidity
// the Codex node requesting storage
address client;
// content identifier
string cid;
// merkle root of the dataset, used to verify storage proofs
bytes32 merkleRoot;
// amount of token from the requester to reward storage providers
uint256 reward;
// amount of tokens required for collateral by storage providers
uint256 collateral;
// frequency that proofs are checked by validator nodes
uint256 proofProbability;
// amount of desired time for storage request
uint256 duration;
// the number of requested slots
uint64 slots;
// amount of storage per slot
uint256 slotSize;
// Amount of time before request expires
uint256 expiry;
// random value to differentiate from other requests
bytes32 nonce;
```
`cid`
An identifier used to locate the dataset.
- MUST be a [CIDv1](https://github.com/multiformats/cid#cidv1) with a SHA-256-based [multihash](https://github.com/multiformats/multihash)
- MUST be generated by the client node
`reward`
- a REQUIRED amount that MUST be included in the transaction for a storage request.
- SHOULD be the amount of tokens offered per byte per second.
- MUST be a token known to the network.
After tokens are received by the Codex Marketplace,
they MUST be released to storage providers who successfully fill slots until the storage request is complete.
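Reading `reward` as tokens per byte per second, the maximum escrow a request can pay out could be estimated as below; this is a hypothetical back-of-envelope helper, not the contract's actual accounting:

```python
def max_payout(reward_per_byte_second: int, slot_size: int,
               num_slots: int, duration: int) -> int:
    """Tokens released if every slot stays filled for the whole duration."""
    return reward_per_byte_second * slot_size * num_slots * duration


# e.g. 4 slots of 1 GiB each, for 30 days, at 1 token per byte per second
tokens = max_payout(1, 2 ** 30, 4, 30 * 24 * 3600)
```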
`collateral`
All storage providers MUST provide token collateral before being able to fill a storage slot.
The following applies to a storage provider who has offered `collateral`.
If a storage provider filling a slot
fails to provide enough proofs of storage, the `collateral` MUST be forfeited.
This MAY be managed by updating a smart contract object that tracks the number of missed proofs,
the percentage of `collateral` already slashed, and the number of slashes required for the slot to be freed.
The storage provider MAY be able to fill the same failed slot,
but MUST replace any `collateral` that was already forfeited.
A portion of the `collateral` MUST be offered as a reward to validator nodes,
and a portion SHOULD be offered as a reward to other storage providers that repair freed [slots](#slots).
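The missed-proof bookkeeping described above can be modelled as follows; the slash percentage and the maximum-slash threshold are illustrative values, not constants defined by this spec:

```python
class SlotCollateral:
    """Hypothetical per-slot tracker: missed-proof count, collateral
    slashed so far, and whether the slot should be freed."""

    def __init__(self, collateral: int, slash_percent: int = 10,
                 max_num_of_slash: int = 5):
        self.collateral = collateral
        self.slash_percent = slash_percent        # slashed per missed proof
        self.max_num_of_slash = max_num_of_slash  # misses before slot is freed
        self.missed = 0

    def miss_proof(self) -> bool:
        """Record one missed proof; True means the slot is now freed and
        the remaining collateral is forfeited."""
        self.missed += 1
        self.collateral -= self.collateral * self.slash_percent // 100
        return self.missed >= self.max_num_of_slash
```

A repairing storage provider would then post fresh collateral to refill the freed slot.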
`proofProbability`
Determines the inverse probability that a proof is required in a period.
The probability MUST be:
$\frac{1}{proofProbability}$
- Storage providers are REQUIRED to provide proofs of storage per period that are submitted to the marketplace smart contract and verified by validator nodes.
- The requester SHOULD provide the value for the frequency of proofs provided by storage providers.
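Concretely, a `proofProbability` of 5 means a proof is demanded in roughly one period out of five. A small illustrative helper (the proof period length is assumed here for the example; it is not defined in this section):

```python
def expected_proofs(proof_probability: int, duration: int,
                    period: int) -> tuple[float, float]:
    """Per-period probability 1/proofProbability, and the expected number
    of proofs over the whole request duration."""
    p = 1 / proof_probability
    num_periods = duration // period
    return p, p * num_periods


# 30-day request, 1-hour proof periods, proofProbability = 5
per_period, total = expected_proofs(5, 30 * 24 * 3600, 3600)
```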
`duration`
- SHOULD be expressed in seconds.
- Once the `reward` has been depleted by periodic storage provider payments,
the storage request SHOULD end.
The requester MAY renew the storage request by creating a new request with the same `cid` value.
Different storage providers MAY fulfill the request.
- Data MAY be considered lost during contract `duration` when no other storage providers decide to fill empty slots.
### Fulfilling Requests
In order for a storage request to start,
storage providers MUST enter a storage contract with the requester via the marketplace smart contract.
When storage providers are selected to fill a slot for the request,
storage providers MUST NOT abandon the slot, unless the slot state is `cancelled`, `complete` or `failed`.
If too many slots are abandoned, the request state SHOULD be changed to `failed`.
Below is the smart contract lifecycle for a storage request:
![image](../images/request-contract.png)
### Slots
Slots are the mechanism used by the Codex network to distribute data chunks amongst storage providers.
Data chunks, created by client nodes, MUST be distributed across slots for data resiliency.
- Client nodes SHOULD decide how many nodes should fill the slots of a storage contract.
- Storage providers MUST be selected before filling a slot.
Each slot represents a chunk of a dataset provided during the storage request.
The first state of a slot is `free`, meaning that the slot is waiting to be reserved by a storage provider.
The Codex marketplace uses a slot dispersal mechanism to decide which storage providers can reserve a slot,
see the [dispersal section below](#slot-dispersal).
After a slot reservation is secured, the storage provider MUST:
- provide token collateral and proof of storage to fill the slot
- provide proofs of storage periodically
Once filled, the slot state SHOULD be changed from `reserved` to `filled`.
The `reward` payout SHOULD be calculated as periodic payments until the request `duration` is complete.
Once complete, the slot state SHOULD be changed to `finished` and payout occurs.
A slot MUST become empty after the storage provider fails to provide proofs of storage to the marketplace.
The state of the slot SHOULD change from `filled` to `free` when validator nodes see the slot is missing proofs.
The storage provider assigned to that slot MUST forfeit its `collateral`.
Other storage providers can earn a small portion of the forfeited `collateral` by providing a new proof of storage and `collateral`,
this is referred to as repairing the empty slot.
The slot lifecycle of a storage provider that has filled a slot is demonstrated below:
```
    proof &                                  proof &
  collateral  reserved   proof    missed    collateral    missed
      |          |         |        |           |           |
      v          v         v        v           v           v
 -------------------------------------------------------------------------
 slot: |///////////////////////////////|       |///////////////////////|
 -------------------------------------------------------------------------
              |                                       |
              v                                       v
           Update                           Check maxNumOfSlash
       slashCriterion                    is reached - Lost Collateral
 (number of proofs missed)

 ---------------- time ---------------->
```
#### Slot Dispersal
Storage providers compete with one another to store data from storage requests.
Before a storage provider can download the data, they MUST obtain a reservation for a slot.
The Codex network uses an expanding window based on the Kademlia distance function to select storage providers that are allowed to reserve a slot.
This starts with a random source address, constructed with a hash function as:
```
hash(blockHash, requestId, slotIndex, reservationIndex)
```
- `blockHash`: unique identifier for a specific EVM-compatible block
- `requestId`: unique identifier for the storage request
- `slotIndex`: index of the current empty slot
- `reservationIndex`: index of the current slot reservation
The unique source address, along with the storage provider's blockchain address,
is used to calculate the expanding window.
The distance between the two addresses can be defined by:
$$ XOR(A,A_0) $$
The allowed distance at time $t_1$ can be defined as $2^{256} \cdot F(t_1)$.
When the storage provider's distance is less than the allowed distance,
the storage provider SHOULD be eligible to obtain a slot reservation.
- Note: after becoming eligible, the storage provider MUST provide `collateral` and
storage proofs to change the slot state from `reserved` to `filled`.
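The eligibility check can be sketched end to end; SHA-256 is an assumption here (this section does not pin the hash function), and `f_t` stands for the expansion function $F(t)$:

```python
import hashlib

ADDRESS_SPACE = 2 ** 256


def source_address(block_hash: bytes, request_id: bytes,
                   slot_index: int, reservation_index: int) -> int:
    """hash(blockHash, requestId, slotIndex, reservationIndex) as a 256-bit int."""
    h = hashlib.sha256(block_hash + request_id
                       + slot_index.to_bytes(8, "big")
                       + reservation_index.to_bytes(8, "big"))
    return int.from_bytes(h.digest(), "big")


def is_eligible(provider_addr: int, src_addr: int, f_t: float) -> bool:
    """XOR(A, A0) < 2^256 * F(t): as F(t) grows toward 1 the window
    expands until every provider in the address space is admitted."""
    return (provider_addr ^ src_addr) < ADDRESS_SPACE * f_t
```

A provider whose address lands close to the source address (in XOR terms) becomes eligible first.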
### Filling the Slot
As the allowed distance increases over time,
more storage providers SHOULD become eligible to participate in reserving a slot.
A storage provider is allowed to fill a slot once its Kademlia distance is calculated to be less than the allowed distance.
The condition storage providers MUST satisfy can be defined as:
$$ XOR(A,A_0) < 2^{256} \cdot F(t_1) $$
- $XOR(A,A_0)$ represents the Kademlia distance function
- $2^{256}$ represents the total number of 256-bit addresses in the address space
- $F(t_1)$ represents the expansion function over time
Eligible storage providers are represented below:
```
                          start point
                               |           Kademlia distance
           t=3    t=2    t=1   v
  <------(------(------(------·------)------)------)------>
                    ^                         ^
                    |                         |
           this provider is           this provider is
            allowed at t=2             allowed at t=3
```
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
1. [Reed-Solomon algorithm](https://hackmd.io/FB58eZQoTNm-dnhu0Y1XnA)
2. [CIDv1](https://github.com/multiformats/cid#cidv1)
3. [multihash](https://github.com/multiformats/multihash)
4. [Proof-of-Data-Possession](https://hackmd.io/2uRBltuIT7yX0CyczJevYg?view)
5. [Codex market implementation](https://github.com/codex-storage/nim-codex/blob/master/codex/market.nim)

New file (732 lines):
---
title: VAC-DECENTRALIZED-MESSAGING-ETHEREUM
name: Decentralized Key and Session Setup for Secure Messaging over Ethereum
status: raw
category: informational
editor: Ramses Fernandez-Valencia <ramses@status.im>
contributors:
---
## Abstract
This document introduces a decentralized group messaging protocol using Ethereum addresses as identifiers.
It is based on the proposal [DCGKA](https://eprint.iacr.org/2020/1281) by Weidner et al.
It also includes approaches to overcome limitations related to using a PKI and to the multi-device setting.
## Motivation
The need for secure communications has become paramount.
Traditional centralized messaging protocols are susceptible to various security threats,
including unauthorized access, data breaches, and single points of failure.
Therefore a decentralized approach to secure communication becomes increasingly relevant,
offering a robust solution to address these challenges.
Secure messaging protocols used should have the following key features:
1. **Asynchronous Messaging:** Users can send messages even if the recipients are not online at the moment.
2. **Resilience to Compromise:** If a user's security is compromised,
the protocol ensures that previous messages remain secure through forward secrecy (FS).
This means that messages sent before the compromise cannot be decrypted by adversaries.
Additionally, the protocol maintains post-compromise security (PCS) by regularly updating keys,
making it difficult for adversaries to decrypt future communication.
3. **Dynamic Group Management:** Users can easily add or remove group members at any time,
reflecting the flexible nature of communication within the app.
In this field, there exists a *trilemma*, similar to what one observes in blockchain,
involving three key aspects:
1. security,
2. scalability, and
3. decentralization.
For instance, protocols like [MLS](https://messaginglayersecurity.rocks) perform well in terms of scalability and security.
However, they fall short in decentralization.
Newer studies such as [CoCoa](https://eprint.iacr.org/2022/251) improve features related to security and scalability,
but they still rely on servers, which are necessary even though they may not be fully trusted.
On the other hand,
older studies like [Causal TreeKEM](https://mattweidner.com/assets/pdf/acs-dissertation.pdf) exhibit decent scalability (logarithmic)
but lack forward secrecy and have weak post-compromise security (PCS).
The creators of [DCGKA](https://eprint.iacr.org/2020/1281) introduce a decentralized,
asynchronous secure group messaging protocol that supports dynamic groups.
This protocol operates effectively on various underlying networks without strict requirements on message ordering or latency.
It can be implemented in peer-to-peer or anonymity networks,
accommodating network partitions, high latency links, and disconnected operation seamlessly.
Notably, the protocol doesn't rely on servers or
a consensus protocol for its functionality.
This proposal provides end-to-end encryption with forward secrecy and post-compromise security,
even when multiple users concurrently modify the group state.
## Theory
### Protocol overview
This protocol makes use of ratchets to provide FS by encrypting each message with a different key.
In the figure one can see the ratchet for encrypting a sequence of messages.
The sender requires an initial update secret `I_1`, which is introduced in a PRG.
The PRG will produce two outputs, namely a symmetric key for AEAD encryption, and
a seed for the next ratchet state.
The associated data needed in the AEAD encryption includes the message index `i`.
The ciphertext `c_i` associated to message `m_i` is then broadcasted to all group members.
The next step requires deleting `I_1`, `k_i` and any old ratchet state.
After a period of time the sender may replace the ratchet state with new update secrets `I_2`, `I_3`, and so on.
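A minimal model of this ratchet, with HMAC-SHA256 standing in for the PRG (the protocol's actual KDF and AEAD scheme are not fixed in this overview):

```python
import hmac
import hashlib


def ratchet_step(state: bytes) -> tuple[bytes, bytes]:
    """One PRG invocation: derive the key k_i for message m_i and the
    next ratchet state, after which the caller deletes the old state."""
    k_i = hmac.new(state, b"message-key", hashlib.sha256).digest()
    next_state = hmac.new(state, b"next-state", hashlib.sha256).digest()
    return k_i, next_state


# sender side: key a sequence of messages from the update secret I_1
state = hashlib.sha256(b"I_1").digest()
keys = []
for i in range(3):
    k_i, state = ratchet_step(state)  # AEAD-encrypt m_i under k_i with AD = i
    keys.append(k_i)
```

Deleting each consumed state is what yields forward secrecy: compromising the current state reveals nothing about earlier message keys.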
To start a post-compromise security update,
a user creates a new random value known as a seed secret and
shares it with every other group member through a secure two-party channel.
Upon receiving the seed secret,
each group member uses it to calculate an update secret for both the sender's ratchet and their own.
Additionally, the recipient sends an unencrypted acknowledgment to the group confirming the update.
Every member who receives the acknowledgment updates not only the ratchet for the original sender but
also the ratchet for the sender of the acknowledgment.
Consequently, after sharing the seed secret through `n - 1` two-party messages and
confirming it with `n - 1` broadcast acknowledgments,
every group member has derived an update secret and updated their ratchet accordingly.
When removing a group member,
the user who initiates the removal conducts a post-compromise security update
by sending the update secret to all group members except the one being removed.
To add a new group member,
each existing group member shares the necessary state with the new user,
enabling them to derive their future update secrets.
Since group members may receive messages in various orders,
it's important to ensure that each sender's ratchet is updated consistently
with the same sequence of update secrets at each group member.
The network protocol used in this scheme ensures that messages from the same sender are processed in the order they were sent.
### Components of the protocol
This protocol relies in 3 components:
authenticated causal broadcast (ACB),
decentralized group membership (DGM) and
2-party secure messaging (2SM).
#### Authenticated causal broadcast
A causal order is a partial order relation `<` on messages.
Two messages `m_1` and `m_2` are causally ordered, or
`m_1` causally precedes `m_2`
(denoted by `m_1 < m_2`), if one of the following conditions holds:
1. `m_1` and `m_2` were sent by the same group member, and `m_1` was sent before `m_2`.
2. `m_2` was sent by a group member U, and `m_1` was received and
processed by `U` before sending `m_2`.
3. There exists `m_3` such that `m_1 < m_3` and `m_3 < m_2`.
Causal broadcast requires that before processing `m`,
a group member must process all preceding messages `{m' | m' < m}`.
The causal broadcast module used in this protocol authenticates the sender of each message,
as well as its causal ordering metadata, using a digital signature under the sender's identity key.
This prevents a passive adversary from impersonating users or affecting causally ordered delivery.
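Causally ordered delivery is commonly enforced with vector clocks; below is a sketch of the standard deliverability check (the module's actual metadata encoding is not specified here):

```python
def deliverable(msg_clock: dict[str, int], sender: str,
                local_clock: dict[str, int]) -> bool:
    """A message can be processed when it is the next message from its
    sender and every message it causally depends on was already seen."""
    for member in set(msg_clock) | set(local_clock):
        seen = local_clock.get(member, 0)
        sent = msg_clock.get(member, 0)
        if member == sender and sent != seen + 1:
            return False   # gap in the sender's own sequence
        if member != sender and sent > seen:
            return False   # depends on a message not yet processed
    return True
```

Messages that fail the check are buffered and retried once their causal predecessors arrive.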
#### Decentralized group membership
This protocol assumes the existence of a decentralized group membership function (denoted as DGM)
that takes a set of membership change messages and their causal order relationships,
and returns the current set of group members IDs.
It needs to be deterministic and depend only on the causal order of messages, not on their exact order of arrival.
#### 2-party secure messaging (2SM)
This protocol makes use of bidirectional 2-party secure messaging schemes,
which consist of 3 algorithms: `2SM-Init`, `2SM-Send` and `2SM-Receive`.
##### 2SM-Init
This function takes two IDs as inputs:
`ID1` representing the local user and `ID2` representing the other party.
It returns an initial protocol state `sigma`.
The 2SM protocol relies on a Public Key Infrastructure (PKI) or
a key server to map these IDs to their corresponding public keys.
In practice, the PKI should incorporate ephemeral prekeys.
This allows users to send messages to a new group member,
even if that member is currently offline.
##### 2SM-Send
This function takes a state `sigma` and a plaintext `m` as inputs, and
returns a new state `sigma` and a ciphertext `c`.
##### 2SM-Receive
This function takes a state `sigma` and a ciphertext `c`, and
returns a new state `sigma` and a plaintext `m`.
#### 2SM Syntax
The variable `sigma` denotes the state consisting in the variables below:
```
sigma.mySks[0] = sk
sigma.nextIndex = 1
sigma.receivedSk = empty_string
sigma.otherPk = pk
sigma.otherPkSender = "other"
sigma.otherPkIndex = 0
```
#### 2SM-Init
On input a key pair `(sk, pk)`, this function outputs a state `sigma`.
#### 2SM-Send
This function encrypts the message `m` using `sigma.otherPk`,
which represents the other party's current public key.
This key is determined based on the last public key generated for the other party or
the last public key received from the other party,
whichever is more recent.
`sigma.otherPkSender` is set to `me` in the former case and `other` in the latter case.
Metadata including `otherPkSender` and
`otherPkIndex` are included in the message to indicate which of the recipient's public keys is being utilized.
Additionally, this function generates a new key pair for the local user,
storing the secret key in `sigma.mySks` and sending the public key.
Similarly, it generates a new key pair for the other party,
sending the secret key (encrypted) and storing the public key in `sigma.otherPk`.
```
(sigma.mySks[sigma.nextIndex], myNewPk) = PKE-Gen()
(otherNewSk, otherNewPk) = PKE-Gen()
plaintext = (m, otherNewSk, sigma.nextIndex, myNewPk)
msg = (PKE-Enc(sigma.otherPk, plaintext), sigma.otherPkSender, sigma.otherPkIndex)
sigma.nextIndex++
(sigma.otherPk, sigma.otherPkSender, sigma.otherPkIndex) = (otherNewPk, "me", empty_string)
return (sigma, msg)
```
#### 2SM-Receive
This function utilizes the metadata of the message `c` to determine which secret key to utilize for decryption,
assigning it to `sk`.
If the secret key corresponds to one generated by ourselves,
that secret key along with all keys with lower index are deleted.
This deletion is indicated by `sigma.mySks[≤ keyIndex] = empty_string`.
Subsequently, the new public and secret keys contained in the message are stored.
```
(ciphertext, keySender, keyIndex) = c
if keySender = "other" then
sk = sigma.mySks[keyIndex]
sigma.mySks[≤ keyIndex] = empty_string
else sk = sigma.receivedSk
(m, sigma.receivedSk, sigma.otherPkIndex, sigma.otherPk) = PKE-Dec(sk, ciphertext)
sigma.otherPkSender = "other"
return (sigma, m)
```
### PKE Syntax
The required PKE that MUST be used is ElGamal with a 2048-bit modulus `p`.
#### Parameters
The following parameters MUST be used:
```
p = 308920927247127345254346920820166145569
g = 2
```
#### PKE-KGen
Each user `u` MUST do the following:
```
PKE-KGen():
a = randint(2, p-2)
pk = (p, g, g^a)
sk = a
return (pk, sk)
```
#### PKE-Enc
A user `v` encrypting a message `m` for `u` MUST follow these steps:
```
PKE-Enc(pk, m):
k = randint(2, p-2)
eta = g^k % p
delta = m * (g^a)^k % p
return (eta, delta)
```
#### PKE-Dec
The user `u` recovers a message `m` from a ciphertext `c` by performing the following operations:
```
PKE-Dec(sk, c):
(eta, delta) = c
mu = eta^(p-1-sk) % p
return (mu * delta) % p
```
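The three routines above can be combined into a runnable sketch. A well-known Mersenne prime is used as the demo modulus so the Fermat step in `PKE-Dec` is easy to check; in practice the spec's own `p` and `g = 2` would be substituted:

```python
import random

p = 2 ** 127 - 1  # demo prime standing in for the spec's modulus
g = 2


def pke_kgen() -> tuple[tuple[int, int, int], int]:
    a = random.randint(2, p - 2)
    return (p, g, pow(g, a, p)), a        # (pk, sk)


def pke_enc(pk: tuple[int, int, int], m: int) -> tuple[int, int]:
    _, _, g_a = pk
    k = random.randint(2, p - 2)
    return pow(g, k, p), (m * pow(g_a, k, p)) % p   # (eta, delta)


def pke_dec(sk: int, c: tuple[int, int]) -> int:
    eta, delta = c
    mu = pow(eta, p - 1 - sk, p)          # eta^(p-1-a) = g^(-ka) mod p
    return (mu * delta) % p
```

Correctness follows from Fermat's little theorem: `delta * eta^(p-1-a) = m * g^(k(p-1)) = m (mod p)`.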
### DCGKA Syntax
#### Auxiliary functions
There exist 6 functions that are auxiliary to the rest of the components of the protocol, namely:
#### init
This function takes an `ID` as input and returns its associated initial state, denoted by `gamma`:
```
gamma.myId = ID
gamma.mySeq = 0
gamma.history = empty
gamma.nextSeed = empty_string
gamma.2sm[·] = empty_string
gamma.memberSecret[·, ·, ·] = empty_string
gamma.ratchet[·] = empty_string
return (gamma)
```
#### encrypt-to
Upon reception of the recipients `ID` and a plaintext,
it encrypts a direct message for another group member.
Should it be the first message for a particular `ID`,
then the `2SM` protocol state is initialized and stored in `gamma.2sm[recipient.ID]`.
One then uses `2SM_Send` to encrypt the message and store the updated protocol in `gamma`.
```
if gamma.2sm[recipient_ID] = empty_string then
gamma.2sm[recipient_ID] = 2SM_Init(gamma.myID, recipient_ID)
(gamma.2sm[recipient_ID], ciphertext) = 2SM_Send(gamma.2sm[recipient_ID], plaintext)
return (gamma, ciphertext)
```
#### decrypt-from
After receiving the sender's `ID` and a ciphertext,
it behaves as the reverse function of `encrypt-to` and has a similar initialization:
```
if gamma.2sm[sender_ID] = empty_string then
gamma.2sm[sender_ID] = 2SM_Init(gamma.myID, sender_ID)
(gamma.2sm[sender_ID], plaintext) = 2SM_Receive(gamma.2sm[sender_ID], ciphertext)
return (gamma, plaintext)
```
#### update-ratchet
This function generates the next update secret `I_update` for the group member `ID`.
The ratchet state is stored in `gamma.ratchet[ID]`.
It is required to use a HMAC-based key derivation function HKDF to combine the ratchet state with an input,
returning an update secret and a new ratchet state.
```
(updateSecret, gamma.ratchet[ID]) = HKDF(gamma.ratchet[ID], input)
return (gamma, updateSecret)
```
#### member-view
This function calculates the set of group members based on the most recent control message sent by the specified user `ID`.
It filters the group membership operations to include only those observed by the specified `ID`, and
then invokes the DGM function to generate the group membership.
```
ops = {m in gamma.history s.t. m was sent or acknowledged by ID}
return DGM(ops)
```
#### generate-seed
This function generates a random bit string and
sends it encrypted to each member of the group using the `2SM` mechanism.
It returns the updated protocol state and
the set of direct messages (denoted as `dmsgs`) to send.
```
gamma.nextSeed = random.randbytes()
dmsgs = empty
for each ID in recipients:
(gamma, msg) = encrypt-to(gamma, ID, gamma.nextSeed)
dmsgs = dmsgs + (ID, msg)
return (gamma, dmsgs)
```
### Creation of a group
A group is generated in a 3 steps procedure:
1. A user calls the `create` function and broadcasts a control message of type *create*.
2. Each receiver of the message processes the message and broadcasts an *ack* control message.
3. Each member processes the *ack* message received.
#### create
This function generates a *create* control message and
calls `generate-seed` to define the set of direct messages that need to be sent.
Then it calls `process-create` to process the control message for this user.
The function `process-create` returns a tuple including an updated state gamma and
an update secret `I`.
```
control = ("create", gamma.mySeq, IDs)
(gamma, dmsgs) = generate-seed(gamma, IDs)
(gamma, _, _, I, _) = process-create(gamma, gamma.myId, gamma.mySeq, IDs, empty_string)
return (gamma, control, dmsgs, I)
```
#### process-seed
This function initially employs `member-view` to identify the users who were part of the group when the control message was dispatched.
Then, it attempts to acquire the seed secret through the following steps:
1. If the control message was dispatched by the local user,
it uses the seed secret that the most recent invocation of `generate-seed` stored in `gamma.nextSeed`.
2. If the `control` message was dispatched by another user, and
the local user is among its recipients,
the function utilizes `decrypt-from` to decrypt the direct message that includes the seed secret.
3. Otherwise, it returns an `ack` message without deriving an update secret.
Afterwards, `process-seed` generates separate member secrets for each group member from the seed secret by combining the seed secret and
each user ID using HKDF.
The secret for the sender of the message is stored in `senderSecret`,
while those for the other group members are stored in `gamma.memberSecret`.
The sender's member secret is immediately utilized to update their KDF ratchet and
compute their update secret `I_sender` using `update-ratchet`.
If the local user is the sender of the control message,
the process is completed, and the update secret is returned.
However, if the seed secret is received from another user,
an `ack` control message is constructed for broadcast,
including the sender ID and sequence number of the message being acknowledged.
The final step computes an update secret `I_me` for the local user invoking the `process-ack` function.
```
recipients = member-view(gamma, sender) - {sender}
if sender = gamma.myId then seed = gamma.nextSeed; gamma.nextSeed = empty_string
else if gamma.myId in recipients then (gamma, seed) = decrypt-from(gamma, sender, dmsg)
else
return (gamma, (ack, ++gamma.mySeq, (sender, seq)), empty_string , empty_string , empty_string)
for ID in recipients do gamma.memberSecret[sender, seq, ID] = HKDF(seed, ID)
senderSecret = HKDF(seed, sender)
(gamma, I_sender) = update-ratchet(gamma, sender, senderSecret)
if sender = gamma.myId then return (gamma, empty_string , empty_string , I_sender, empty_string)
control = (ack, ++gamma.mySeq, (sender, seq))
members = member-view(gamma, gamma.myId)
forward = empty
for ID in {members - (recipients + {sender})}
s = gamma.memberSecret[sender, seq, gamma.myId]
(gamma, msg) = encrypt-to(gamma, ID, s)
forward = forward + {(ID, msg)}
(gamma, _, _, I_me, _) = process-ack(gamma, gamma.myId, gamma.mySeq, (sender, seq), empty_string)
return (gamma, control, forward, I_sender, I_me)
```
#### process-create
This function is called by the sender and each of the receivers of the `create` control message.
First, it records the information from the `create` message in `gamma.history`,
which is used to track group membership changes. Then, it proceeds to call `process-seed`.
```
op = ("create", sender, seq, IDs)
gamma.history = gamma.history + {op}
return (process-seed(gamma, sender, seq, dmsg))
```
#### process-ack
This function is called by each group member once they receive an `ack` message.
In `process-ack`, `ackID` and `ackSeq` are the sender and
sequence number of the acknowledged message.
Firstly, if the acknowledged message is a group membership operation,
it records the acknowledgement in `gamma.history`.
Following this, the function retrieves the relevant member secret from `gamma.memberSecret`,
which was previously obtained from the seed secret contained in the acknowledged message.
Finally, it updates the ratchet for the sender of the `ack` and
returns the resulting update secret.
```
if (ackID, ackSeq) was a create / add / remove then
op = ("ack", sender, seq, ackID, ackSeq)
gamma.history = gamma.history + {op}
s = gamma.memberSecret[ackID, ackSeq, sender]
gamma.memberSecret[ackID, ackSeq, sender] = empty_string
if (s = empty_string) & (dmsg = empty_string) then return (gamma, empty_string, empty_string, empty_string, empty_string)
if (s = empty_string) then (gamma, s) = decrypt-from(gamma, sender, dmsg)
(gamma, I) = update-ratchet(gamma, sender, s)
return (gamma, empty_string, empty_string, I, empty_string)
```
The HKDF function MUST follow RFC 5869 using the hash function SHA256.
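As an illustration, below is a minimal Python sketch of HKDF-SHA256 per RFC 5869 together with the member-secret derivation described above. Treating each member's ID as the HKDF `info` parameter is an assumption made for this sketch; the document only states that the seed secret and the ID are combined.

```python
import hmac
import hashlib

def hkdf_sha256(ikm: bytes, salt: bytes = b"", info: bytes = b"", length: int = 32) -> bytes:
    """HKDF per RFC 5869 with SHA-256: extract, then expand."""
    if not salt:
        salt = b"\x00" * hashlib.sha256().digest_size
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # HKDF-Extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # HKDF-Expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def derive_member_secrets(seed: bytes, member_ids: list) -> dict:
    # One member secret per group member, bound to that member's ID.
    return {member_id: hkdf_sha256(seed, info=member_id.encode()) for member_id in member_ids}
```

For example, `derive_member_secrets(seed, ["alice", "bob"])` yields a distinct 32-byte secret per member from one seed.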
### Post-compromise security updates and group member removal
The functions `update` and `remove` share similarities with `create`:
they both call the function `generate-seed` to encrypt a new seed secret for each group member.
The distinction lies in the determination of the group members using `member-view`.
In the case of `remove`, the user being removed is excluded from the recipients of the seed secret.
Additionally, the control message they construct is designated with type `update` or `remove` respectively.
Likewise, `process-update` and `process-remove` are akin to `process-create`.
The function `process-update` skips the update of `gamma.history`,
whereas `process-remove` includes a removal operation in the history.
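The `update-ratchet` function used throughout these procedures is defined elsewhere in this specification; the following Python sketch only illustrates the general shape of a per-member KDF ratchet, assuming HKDF-style chaining. All function names and labels here are illustrative, not the document's definitions.

```python
import hmac
import hashlib

def expand(key: bytes, label: bytes, length: int = 32) -> bytes:
    # Single-block HKDF-Expand (RFC 5869) with SHA-256; sufficient for 32-byte outputs.
    return hmac.new(key, label + b"\x01", hashlib.sha256).digest()[:length]

def update_ratchet(ratchet: dict, member_id: str, ratchet_input: bytes) -> bytes:
    """Advance member_id's KDF ratchet chain and derive an update secret."""
    state = ratchet.get(member_id, b"")          # empty string before initialization, as in init
    new_state = expand(state + ratchet_input, b"ratchet-state")
    update_secret = expand(state + ratchet_input, b"update-secret")
    ratchet[member_id] = new_state               # old state is overwritten (forward secrecy)
    return update_secret
```

Feeding the same input twice yields different secrets, because the stored chain state advances on every call.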
#### update
```
control = ("update", ++gamma.mySeq, empty_string)
recipients = member-view(gamma, gamma.myId) - {gamma.myId}
(gamma, dmsgs) = generate-seed(gamma, recipients)
(gamma, _, _, I , _) = process-update(gamma, gamma.myId, gamma.mySeq, empty_string, empty_string)
return (gamma, control, dmsgs, I)
```
#### remove
```
control = ("remove", ++gamma.mySeq, empty)
recipients = member-view(gamma, gamma.myId) - {ID, gamma.myId}
(gamma, dmsgs) = generate-seed(gamma, recipients)
(gamma, _, _, I , _) = process-update(gamma, gamma.myId, gamma.mySeq, ID, empty_string)
return (gamma, control, dmsgs, I)
```
#### process-update
```
return process-seed(gamma, sender, seq, dmsg)
```
#### process-remove
```
op = ("remove", sender, seq, removed)
gamma.history = gamma.history + {op}
return process-seed(gamma, sender, seq, dmsg)
```
### Group member addition
#### add
When adding a new group member,
an existing member initiates the process by invoking the `add` function and
providing the ID of the user to be added.
This function prepares a control message marked as `add` for broadcast to the group.
Simultaneously, it creates a welcome message intended for the new member as a direct message.
This `welcome` message includes the current state of the sender's KDF ratchet,
encrypted using `2SM`, along with the history of group membership operations conducted so far.
```
control = ("add", ++gamma.mySeq, ID)
(gamma, c) = encrypt-to(gamma, ID, gamma.ratchet[gamma.myId])
op = ("add", gamma.myId, gamma.mySeq, ID)
welcome = (gamma.history + {op}, c)
(gamma, _, _, I, _) = process-add(gamma, gamma.myId, gamma.mySeq, ID, empty_string)
return (gamma, control, (ID, welcome), I)
```
#### process-add
This function is invoked by both the sender and
each recipient of an `add` message, which includes the new group member.
If the local user is the newly added member,
the function proceeds to call `process-welcome` and then exits.
Otherwise, it extends `gamma.history` with the `add` operation.
Line 4 determines whether the local user was already a group member at the time the `add` message was sent;
this condition is typically true but may be false if multiple users were added concurrently.
On lines 5 to 7, the ratchet for the sender of the `add` message is updated twice.
In both calls to `update-ratchet`,
a constant string is used as the ratchet input instead of a random seed secret.
The value returned by the first ratchet update is stored in `gamma.memberSecret` as the added user's initial member secret.
The result of the second ratchet update becomes `I_sender`,
the update secret for the sender of the `add` message.
On line 9, if the local user is the sender, the update secret is returned.
If the local user is not the sender, an acknowledgment for the `add` message is required.
Therefore, on line 10, a control message of type `add-ack` is constructed for broadcast.
Subsequently, on line 11 the current ratchet state is encrypted using `2SM` to generate a direct message intended for the added user,
allowing them to decrypt subsequent messages sent by the sender.
Finally, on lines 12 and 13, `process-add-ack` is called to compute the local user's update secret (`I_me`),
which is then returned along with `I_sender`.
```
if added = gamma.myId then return process-welcome(gamma, sender, seq, dmsg)
op = ("add", sender, seq, added)
gamma.history = gamma.history + {op}
if gamma.myId in member-view(gamma, sender) then
(gamma, s) = update-ratchet(gamma, sender, "welcome")
gamma.memberSecret[sender, seq, added] = s
(gamma, I_sender) = update-ratchet(gamma, sender, "add")
else I_sender = empty_string
if sender = gamma.myId then return (gamma, empty_string, empty_string, I_sender, empty_string)
control = ("add-ack", ++gamma.mySeq, (sender, seq))
(gamma, c) = encrypt-to(gamma, added, gamma.ratchet[gamma.myId])
(gamma, _, _, I_me, _) = process-add-ack(gamma, gamma.myId, gamma.mySeq, (sender, seq), empty_string)
return (gamma, control, {(added, c)}, I_sender, I_me)
```
#### process-add-ack
This function is invoked by both the sender and each recipient of an `add-ack` message,
including the new group member.
On lines 1 and 2, the acknowledgment is added to `gamma.history`,
mirroring the process in `process-ack`.
If the local user is the new group member,
the `add-ack` message includes the direct message constructed in `process-add`;
this direct message contains the encrypted ratchet state of the sender of the `add-ack`
and is decrypted on lines 3 to 5.
On line 6, a check determines whether the local user was already a group member at the time the `add-ack` was sent.
If so, a new update secret `I` for the sender of the `add-ack` is computed on line 7 by invoking `update-ratchet` with the constant string `add`.
In the case of the new member,
the ratchet state was just initialized on line 5.
This ratchet update allows all group members, including the new member,
to derive the sender's update secret without the new member needing any update secret from before their inclusion.
```
op = ("ack", sender, seq, ackID, ackSeq)
gamma.history = gamma.history + {op}
if dmsg != empty_string then
(gamma, s) = decrypt-from(gamma, sender, dmsg)
gamma.ratchet[sender] = s
if gamma.myId in member-view(gamma, sender) then
(gamma, I) = update-ratchet(gamma, sender, "add")
return (gamma, empty_string, empty_string, I, empty_string)
else return (gamma, empty_string, empty_string, empty_string, empty_string)
```
#### process-welcome
This function serves as the second step called by a newly added group member.
In this context, `adderHistory` represents the adding user's copy of `gamma.history` sent in their welcome message,
which is utilized to initialize the added user's history.
Here, `c` denotes the ciphertext of the adding user's ratchet state,
which is decrypted on line 2 using `decrypt-from`.
Once `gamma.ratchet[sender]` is initialized,
`update-ratchet` is invoked twice on lines 3 to 5 with the constant strings `welcome` and `add` respectively.
These operations mirror the ratchet operations performed by every other group member in `process-add`.
The outcome of the first `update-ratchet` call becomes the first member secret for the added user,
while the second call returns `I_sender`, the update secret for the sender of the add operation.
Subsequently, the new group member constructs an `ack` control message to broadcast on line 6 and
calls `process-ack` to compute their initial update secret `I_me`.
The function `process-ack` reads from `gamma.memberSecret` and
passes it to `update-ratchet`.
The previous ratchet state for the new member is the empty string `empty`, as established by `init`,
thereby initializing the new member's ratchet.
Upon receiving the new members `ack`,
every other group member initializes their copy of the new member's ratchet in a similar manner.
By the conclusion of `process-welcome`,
the new group member has acquired update secrets for themselves and the user who added them.
The ratchets for other group members are initialized by `process-add-ack`.
```
gamma.history = adderHistory
(gamma, gamma.ratchet[sender]) = decrypt-from(gamma, sender, c)
(gamma, s) = update-ratchet(gamma, sender, "welcome")
gamma.memberSecret[sender, seq, gamma.myId] = s
(gamma, I_sender) = update-ratchet(gamma, sender, "add")
control = ("ack", ++gamma.mySeq, (sender, seq))
(gamma, _, _, I_me, _) = process-ack(gamma, gamma.myId, gamma.mySeq, (sender, seq), empty_string)
return (gamma, control, empty_string , I_sender, I_me)
```
## Privacy Considerations
### Dependency on PKI
The [DCGKA](https://eprint.iacr.org/2020/1281) proposal presents some limitations highlighted by the authors.
Among these limitations is the requirement of a PKI (or a key server) mapping IDs to public keys.
One method to overcome this limitation is adapting the SIWE (Sign In With Ethereum) protocol so that
a user `u_1` who wants to start a communication with a user `u_2` can interact with the latter's wallet to request a public key, using an Ethereum address as the `ID`.
#### SIWE
The [SIWE](https://docs.login.xyz/general-information/siwe-overview) (Sign In With Ethereum) proposal was a suggested standard for leveraging Ethereum to authenticate and authorize users on web3 applications.
Its goal is to establish a standardized method for users to sign in to web3 applications using their Ethereum address and private key,
mirroring the process by which users currently sign in to web2 applications using their email and password.
Below follows the required steps:
1. A server generates a unique Nonce for each user intending to sign in.
2. A user initiates a request to connect to a website using their wallet.
3. The user is presented with a distinctive message that includes the Nonce and details about the website.
4. The user authenticates their identity by signing in with their wallet.
5. Upon successful authentication, the user's identity is confirmed or approved.
6. The website grants access to data specific to the authenticated user.
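The steps above can be illustrated with a toy Python sketch of the nonce flow. The HMAC "signature" below is only a stand-in for the wallet's real signature over the SIWE message, and all class and function names are hypothetical:

```python
import hmac
import hashlib
import secrets

class ToyServer:
    """Steps 1, 3 and 5: issue a one-time nonce, present a message, verify the signature."""
    def __init__(self):
        self.pending = {}  # address -> outstanding nonce

    def issue_nonce(self, address: str) -> str:
        nonce = secrets.token_urlsafe(16)
        self.pending[address] = nonce
        return nonce

    def build_message(self, address: str, domain: str, nonce: str) -> str:
        return (f"{domain} wants you to sign in with your Ethereum account:\n"
                f"{address}\nNonce: {nonce}")

    def verify(self, address: str, message: str, signature: bytes, wallet_key: bytes) -> bool:
        nonce = self.pending.pop(address, None)  # one-time use prevents replay
        if nonce is None or f"Nonce: {nonce}" not in message:
            return False
        expected = hmac.new(wallet_key, message.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

def wallet_sign(message: str, wallet_key: bytes) -> bytes:
    # Stand-in for the wallet's ECDSA signature (step 4).
    return hmac.new(wallet_key, message.encode(), hashlib.sha256).digest()
```

Because the nonce is removed on first verification, replaying the same signed message fails, which is the point of step 1.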
#### Our approach
The idea in the [DCGKA](https://eprint.iacr.org/2020/1281) setting closely resembles the procedure outlined in SIWE. Here:
1. The server corresponds to user D1,
who initiates a request (instead of generating a nonce) to obtain the public key of user D2.
2. Upon receiving the request, the wallet of D2 forwards it to the user.
3. User D2 receives the request from the wallet and decides whether to accept or reject it.
4. In case of acceptance by D2, the wallet responds with a message containing the requested public key.
This message may be signed, allowing D1 to verify that the owner of the received public key is indeed D2.
### Multi-device setting
One may see the set of devices as a group and create a group key for internal communications.
One may use treeKEM for instance,
since it provides interesting properties like forward secrecy and post-compromise security.
All devices share the same `ID`,
which is held by one of them; from other users' point of view, they look like a single user.
Using servers, like in the paper [Multi-Device for Signal](https://eprint.iacr.org/2019/1363), should be avoided;
but this would imply using a particular device as receiver and broadcaster within the group.
There is an obvious drawback: a single device working as a “server”.
Should this device be compromised or lose connectivity, there should be a mechanism for its revocation and replacement.
Another approach for communications between devices could be using the keypair of each device.
This could open the door to use UPKE, since keypairs should be regenerated frequently.
Each time a device sends a message, either an internal message or an external message,
it needs to replicate and broadcast it to all devices in the group.
The mechanism for the substitution of misbehaving leader devices follows:
1. Each device within a group knows the details of other leader devices.
This information may come from metadata in received messages, and is replicated by the leader device.
2. To replace a leader, the user should select any other device within its group and
use it to send a signed message to all other users.
3. To gain the ability to sign messages,
this new leader should request from the wallet the keypair associated with the `ID`.
4. Once the leader has been changed,
it revokes the former leader's access using the DCGKA protocol.
5. The new leader starts a key update in DCGKA.
Not all devices in a group should be able to send messages to other users.
Only the leader device should be in charge of sending and receiving messages.
To prevent other devices from sending messages outside their group, each message should be required to be signed.
The keys associated to the `ID` should only be in control of the leader device.
The leader device is in charge of setting the keys involved in the DCGKA.
This information must be replicated within the group to make sure it is updated.
To detect missing messages or potential misbehavior, messages must include a counter.
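A minimal sketch (all names hypothetical) of the per-device counter check mentioned above, flagging gaps or replays in the replicated message stream:

```python
class CounterTracker:
    """Tracks the last counter seen per device to detect missing or replayed messages."""
    def __init__(self):
        self.last_seen = {}  # device_id -> last counter observed

    def check(self, device_id: str, counter: int) -> str:
        last = self.last_seen.get(device_id, 0)
        if counter <= last:
            return "replay-or-duplicate"   # possible misbehavior: counter did not advance
        if counter > last + 1:
            self.last_seen[device_id] = counter
            return "gap-detected"          # messages are missing: request retransmission
        self.last_seen[device_id] = counter
        return "ok"
```

In practice the reaction to `gap-detected` (retransmission request, leader replacement) is up to the application; the tracker only surfaces the anomaly.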
### Using UPKE
Managing the group of devices of a user can be done either using a group key protocol such as treeKEM or
using the keypair of each device.
Setting a common key for a group of devices under the control of the same actor might be excessive;
furthermore, it may introduce some of the problems found in the usual setting of a group of different users,
for example: one of the devices may not participate in the required updating processes, representing a threat to the group.
The other approach to managing the group of devices is using each device's keypair,
but it would require each device to update this material frequently, something that may not happen.
[UPKE](https://eprint.iacr.org/2022/068) is a form of asymmetric cryptography
where any user can update any other user's key pair by running an update algorithm with (high-entropy) private coins.
Any sender can initiate a *key update* by sending a special update ciphertext.
This ciphertext updates the receiver's public key and also, once processed by the receiver, updates their secret key.
To the best of our knowledge,
there exist efficient constructions of both [UPKE from ElGamal](https://eprint.iacr.org/2019/1189) (based on the DH assumption) and
[UPKE from Lattices](https://eprint.iacr.org/2023/1400) (based on lattice assumptions).
None of them have been implemented in a secure messaging protocol, and this opens the door to some novel research.
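The key-update mechanic of the ElGamal-based construction can be sketched in Python over a toy multiplicative group. A real implementation would use an elliptic-curve group and transport `delta` inside an update ciphertext; the constants and names here are illustrative only, not secure parameters:

```python
import secrets

P = 2**61 - 1   # toy prime modulus (a Mersenne prime); NOT a secure parameter
G = 3           # toy generator

def keygen():
    sk = secrets.randbelow(P - 1)
    return sk, pow(G, sk, P)

def update_public(pk: int, delta: int) -> int:
    # Anyone holding pk can rerandomize it: pk' = pk * g^delta (mod p).
    return (pk * pow(G, delta, P)) % P

def update_secret(sk: int, delta: int) -> int:
    # The receiver applies the same delta once decrypted: sk' = sk + delta (mod p-1).
    return (sk + delta) % (P - 1)
```

After an update the pair still matches, i.e. `pow(G, sk2, P) == pk2`, while the old secret key no longer corresponds to the new public key.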
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [DCGKA](https://eprint.iacr.org/2020/1281)
- [MLS](https://messaginglayersecurity.rocks)
- [CoCoa](https://eprint.iacr.org/2022/251)
- [Causal TreeKEM](https://mattweidner.com/assets/pdf/acs-dissertation.pdf)
- [SIWE](https://docs.login.xyz/general-information/siwe-overview)
- [Multi-device for Signal](https://eprint.iacr.org/2019/1363)
- [UPKE](https://eprint.iacr.org/2022/068)
- [UPKE from ElGamal](https://eprint.iacr.org/2019/1189)
- [UPKE from Lattices](https://eprint.iacr.org/2023/1400)


@@ -1,6 +1,5 @@
---
slug: 70
title: 70/ETH-SECPM
title: ETH-SECPM
name: Secure channel setup using Ethereum accounts
status: raw
category: Standards Track
@@ -283,7 +282,11 @@ These identifiers MUST be computed according to Section 5.2 of [RFC9420](https:/
Each member of a group presents a credential that provides one or more identities for the member and associates them with the member's signing key.
The identities and signing key are verified by the Authentication Service in use for a group.
Credentials MUST follow the specifications of section 5.3 of [RFC9420](https://datatracker.ietf.org/doc/rfc9420/).
Credentials MUST follow the specifications of section 5.3 of [RFC9420](https://datatracker.ietf.org/doc/rfc9420/).
Below follows the flow diagram for the generation of credentials.
Users MUST generate key pairs by themselves.
![figure1](./images/eth-secpm_credential.png)
### Message framing
Handshake and application messages use a common framing structure providing encryption to ensure confidentiality within the group, and signing to authenticate the sender.
@@ -499,6 +502,11 @@ ProposalType proposal_types<V>;
CredentialType credential_types<V>;
}
```
The flow diagram shows the procedure to fetch key material from other users:
![figure2](./images/eth-secpm_fetching.png)
Below follows the flow diagram for the creation of a group:
![figure3](./images/eth-secpm_creation.png)
### Group evolution
Group membership can change, and existing members can change their keys in order to achieve post-compromise security.
@@ -543,6 +551,18 @@ The validation MUST be done according to one of the procedures described in Sect
When creating or processing a Commit, a client applies a list of proposals to the ratchet tree and `GroupContext`.
The client MUST apply the proposals in the list in the order described in Section 12.3 of [RFC9420](https://datatracker.ietf.org/doc/rfc9420/).
Below follows the flow diagram for the addition of a member to a group:
![figure4](./images/eth-secpm_add.png)
The diagram below shows the procedure to remove a group member:
<br>
![figure5](./images/eth-secpm_remove.png)
The flow diagram below shows an update procedure:
<br>
![figure6](./images/eth-secpm_update.png)
### Commit messages
Commit messages initiate new group epochs.
It informs group members to update their representation of the state of the group by applying the proposals and advancing the key schedule.
@@ -790,6 +810,20 @@ After successfully parsing the message into ABNF terms, translation MAY happen a
- The curve curve448 MUST be chosen due to its higher security level: 224-bit security instead of the 128-bit security provided by X25519.
- It is important that Bob MUST NOT reuse `SPK`.
## Considerations related to the use of Ethereum addresses
### With respect to the Authentication Service
- If users used their Ethereum addresses as identifiers, they MUST generate their own credentials.
These credentials MUST use the digital signature key pair associated to the Ethereum address.
- Other users can verify credentials.
- With this approach, there is no need to have a dedicated Authentication Service responsible for the issuance and verification of credentials.
- The interaction diagram showing the generation of credentials becomes obsolete.
### With respect to the Delivery Service
- Users MUST generate their own KeyPackage.
- Other users can verify KeyPackages when required.
- A Delivery Service storage system MUST verify KeyPackages before storing them.
- Interaction diagrams involving the DS do not change.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).


@@ -1,6 +1,5 @@
---
slug: 46
title: 46/GOSSIPSUB-TOR-PUSH
title: GOSSIPSUB-TOR-PUSH
name: Gossipsub Tor Push
status: raw
category: Standards Track

Binary files not shown (six images added: 58 KiB, 64 KiB, 13 KiB, 29 KiB, 31 KiB, 36 KiB).

1
vac/raw/images/test.txt Normal file

@@ -0,0 +1 @@


@@ -1,6 +1,5 @@
---
slug: 48
title: 48/RLN-INTEREP-SPEC
title: RLN-INTEREP-SPEC
name: Interep as group management for RLN
status: raw
category:


@@ -0,0 +1,105 @@
---
title: RLN-STEALTH-COMMITMENTS
name: RLN Stealth Commitment Usage
category: Standards Track
editor: Aaryamann Challani <aaryamann@status.im>
contributors:
- Jimmy Debe <jimmy@status.im>
---
## Abstract
This specification describes the usage of stealth commitments to add prospective users to a network-governed [32/RLN-V1](./32/rln-v1.md) membership set.
## Motivation
When [32/RLN-V1](./32/rln-v1.md) is enforced in [10/Waku2](../waku/standards/core/10/waku2.md),
all users are required to register to a membership set.
The membership set will store user identities allowing the secure interaction within an application.
Forcing a user to do an on-chain transaction to join a membership set is an onboarding friction,
and some projects may be opposed to this method.
To improve the user experience,
stealth commitments can be used by a counterparty to register identities on the user's behalf,
while maintaining the user's anonymity.
This document specifies a privacy-preserving mechanism,
allowing a counterparty to utilize [32/RLN-V1](./32/rln-v1.md) to register an `identityCommitment` on-chain.
Counterparties will be able to register members to a RLN membership set without exposing the user's private keys.
## Background
The [32/RLN-V1](./32/rln-v1.md) protocol
consists of a smart contract that stores an `identityCommitment` in a membership set.
In order for a user to join the membership set,
the user is required to make a transaction on the blockchain.
A set of public keys is used to compute a stealth commitment for a user,
as described in [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564).
This specification is an implementation of the [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564) scheme,
tailored to the curve that is used in the [32/RLN-V1](./32/rln-v1.md) protocol.
This can be used in a couple of ways in applications:
1. Applications can add users to the [32/RLN-V1](./32/rln-v1.md) membership set in a batch.
2. Users of the application can register other users to the [32/RLN-V1](./32/rln-v1.md) membership set.
This is useful when the prospective user does not have access to funds on the network that [32/RLN-V1](./32/rln-v1.md) is deployed on.
## Wire Format Specification
The two parties, the requester and the receiver, MUST exchange the following information:
```protobuf
message Request {
// The spending public key of the requester
bytes spending_public_key = 1;
// The viewing public key of the requester
bytes viewing_public_key = 2;
}
```
### Generate Stealth Commitment
The application or user SHOULD generate a `stealth_commitment` after a request to do so is received.
This commitment MAY be inserted into the corresponding application membership set.
Once the membership set is updated, the receiver SHOULD exchange the following as a response to the request:
```protobuf
message Response {
// The view tag, used to check if the stealth_commitment belongs to the requester
bytes view_tag = 2;
// The stealth commitment for the requester
bytes stealth_commitment = 3;
// The ephemeral public key used to generate the commitment
bytes ephemeral_public_key = 4;
}
```
The receiver MUST generate an `ephemeral_public_key`, `view_tag` and `stealth_commitment`.
This will be used to check the stealth commitment used to register to the membership set,
and the user MUST be able to check ownership with their `viewing_public_key`.
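A toy Python sketch of the ERC-5564-style derivation behind this exchange, using a multiplicative DH group instead of the BN254 curve used by [32/RLN-V1](./32/rln-v1.md); all parameters and helper names are illustrative:

```python
import hashlib
import secrets

P = 2**61 - 1   # toy prime; the real scheme works over the BN254 curve
G = 3

def h_int(x: int) -> int:
    # Hash a group element to a scalar tweak.
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (P - 1)

def receiver_derive(spending_pub: int, viewing_pub: int):
    """Receiver side: derive a stealth commitment for the requester."""
    r = secrets.randbelow(P - 1)
    ephemeral_pub = pow(G, r, P)
    shared = pow(viewing_pub, r, P)            # DH against the requester's viewing key
    tweak = h_int(shared)
    stealth_commitment = (spending_pub * pow(G, tweak, P)) % P
    view_tag = hashlib.sha256(str(shared).encode()).digest()[0]
    return ephemeral_pub, view_tag, stealth_commitment

def requester_check(spending_priv, viewing_priv, ephemeral_pub, view_tag, stealth_commitment):
    """Requester side: cheap view-tag filter first, then full ownership check."""
    shared = pow(ephemeral_pub, viewing_priv, P)
    if hashlib.sha256(str(shared).encode()).digest()[0] != view_tag:
        return None
    stealth_priv = (spending_priv + h_int(shared)) % (P - 1)
    if pow(G, stealth_priv, P) != stealth_commitment:
        return None
    return stealth_priv
```

The view tag lets the requester discard non-matching commitments after one DH operation and one hash, before attempting the full key derivation.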
## Implementation Suggestions
An implementation of the Stealth Address scheme is available in the [erc-5564-bn254](https://github.com/rymnc/erc-5564-bn254) repository,
which also includes a test to generate a stealth commitment for a given user.
## Security/Privacy Considerations
This specification inherits the security and privacy considerations of the [Stealth Address](https://eips.ethereum.org/EIPS/eip-5564) scheme.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [10/Waku2](../waku/standards/core/10/waku2.md)
- [32/RLN-V1](./32/rln-v1.md)
- [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564)


@@ -1,6 +1,5 @@
---
slug: 58
title: 58/RLN-V2
title: RLN-V2
name: Rate Limit Nullifier V2
status: raw
editor: Rasul Ibragimov <curryrasul@gmail.com>


@@ -1,4 +1,4 @@
## Waku RFCs
# Waku RFCs
Waku builds a family of privacy-preserving, censorship-resistant communication protocols for web3 applications.


@@ -2,7 +2,7 @@
slug: 16
title: 16/WAKU2-RPC
name: Waku v2 RPC API
status: draft
status: deprecated
tags: waku-core
editor: Hanno Cornelius <hanno@status.im>
---


@@ -2,7 +2,7 @@
slug: 18
title: 18/WAKU2-SWAP
name: Waku SWAP Accounting
status: draft
status: deprecated
editor: Oskar Thorén <oskarth@titanproxy.com>
contributor: Ebube Ud <ebube@status.im>
---


@@ -1,4 +1,4 @@
## Deprecated RFCs
# Deprecated RFCs
Deprecated specifications are no longer used in Waku products.
This subdirectory is for archive purposes and


@@ -12,113 +12,148 @@ contributors:
- Hanno Cornelius <hanno@status.im>
---
The `17/WAKU2-RLN-RELAY` protocol is an extension of `11/WAKU2-RELAY` which additionally provides spam protection using [Rate Limiting Nullifiers (RLN)](../../../../vac/32/rln-v1.md).
## Abstract
This specification describes the `17/WAKU2-RLN-RELAY` protocol,
which is an extension of [`11/WAKU2-RELAY`](../11/relay.md) to provide spam protection using [Rate Limiting Nullifiers (RLN)](../../../../vac/32/rln-v1.md).
The security objective is to contain spam activity in a GossipSub network by enforcing a global messaging rate to all the peers.
Peers that violate the messaging rate are considered spammers and their message is considered spam.
The security objective is to contain spam activity in the [64/WAKU-NETWORK]() by enforcing a global messaging rate to all the peers.
Peers that violate the messaging rate are considered spammers and
their message is considered spam.
Spammers are also financially punished and removed from the system.
<!-- **Protocol identifier***: `/vac/waku/waku-rln-relay/2.0.0-alpha1` -->
## Motivation
In open and anonymous p2p messaging networks, one big problem is spam resistance.
Existing solutions, such as Whisper's proof of work are computationally expensive hence not suitable for resource-limited nodes.
Other reputation-based approaches might not be desirable, due to issues around arbitrary exclusion and privacy.
In open and anonymous p2p messaging networks,
one big problem is spam resistance.
Existing solutions, such as Whisper's proof of work,
are computationally expensive hence not suitable for resource-limited nodes.
Other reputation-based approaches might not be desirable,
due to issues around arbitrary exclusion and privacy.
We augment the [`11/WAKU2-RELAY`](../11/relay.md) protocol with a novel construct of [RLN](../../../../vac/32/rln-v1.md) to enable an efficient economic spam prevention mechanism that can be run in resource-constrained environments.
## Specification
## Flow
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [2119](https://www.ietf.org/rfc/rfc2119.txt).
### Flow
The messaging rate is defined by the `period` which indicates how many messages can be sent in a given period.
We define an `epoch` as $\lceil$ `unix_time` / `period` $\rceil$. For example, if `unix_time` is `1644810116` and we set `period` to `30`, then `epoch` is $\lceil$`(unix_time/period)`$\rceil$ `= 54827003`.
Note that `epoch` refers to epoch in RLN and not Unix epoch. This means a message can only be sent every period, where period is up to the application.
See see section [Recommended System Parameters](#recommended-system-parameters) for some recommended ways to set a sensible `period` value depending on the application.
We define an `epoch` as $\lceil$ `unix_time` / `period` $\rceil$.
For example, if `unix_time` is `1644810116` and we set `period` to `30`,
then `epoch` is $\lceil$ `(unix_time/period)` $\rceil$ `= 54827004`.
> **NOTE:** The `epoch` refers to the epoch in RLN and not Unix epoch.
This means a message can only be sent every period, where the `period` is up to the application.
See section [Recommended System Parameters](#recommended-system-parameters) for the RECOMMENDED method to set a sensible `period` value depending on the application.
Peers subscribed to a spam-protected `pubsubTopic` are only allowed to send one message per `epoch`.
The higher-level layers adopting `17/WAKU2-RLN-RELAY` MAY choose to enforce the messaging rate for `WakuMessages` with a specific `contentTopic` published on a `pubsubTopic`.
The higher-level layers adopting `17/WAKU2-RLN-RELAY` MAY choose to enforce the messaging rate for `WakuMessages` with a specific `contentTopic` published on a `pubsubTopic`.
#### Setup and Registration
A `pubsubTopic` that is spam-protected requires subscribed peers to form a [RLN group](../../../../vac/32/rln-v1.md).
- Peers MUST be registered to the RLN group to be able to publish messages.
- Registration MAY be moderated through a smart contract deployed on the Ethereum blockchain.
### Setup and Registration
Peers subscribed to a specific `pubsubTopic` form a [RLN group](../../../../vac/32/rln-v1.md).
<!-- link to the RLN group definition in the RLN RFC -->
Peers MUST be registered to the RLN group to be able to publish messages.
Registration is moderated through a smart contract deployed on the Ethereum blockchain.
Each peer has an [RLN key pair](../../../../vac/32/rln-v1.md) denoted by `sk` and `pk`.
The secret key `sk` is secret data and MUST be persisted securely by the peer.
The state of the membership contract contains the list of registered members' public identity keys i.e., `pk`s.
For the registration, a peer creates a transaction that invokes the registration function of the contract via which registers its `pk` in the group.
The transaction also transfers some amount of ether to the contract to be staked.
- The secret key `sk` is secret data and MUST be persisted securely by the peer.
- The state of the membership contract SHOULD contain a list of all registered members' public identity keys i.e.,
`pk`s.
For registration, a peer MUST create a transaction to invoke the registration function on the contract,
which registers its `pk` in the RLN group.
- The transaction MUST transfer additional tokens to the contract to be staked.
This amount is denoted by `staked_fund` and is a system parameter.
The peer who has the secret key `sk` associated with a registered `pk` would be able to withdraw a portion `reward_portion` of the staked fund by providing valid proof.
`reward_portion` is also a system parameter.
> **NOTE:** Initially `sk` is only known to its owning peer;
however, it may be exposed to other peers if the owner attempts to spam the system, i.e.,
by sending more than one message per `epoch`.
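The registration and slashing economics described above can be modeled minimally. This is an illustrative in-memory sketch, not the actual contract interface; `MembershipContract` and the `STAKED_FUND` and `REWARD_PORTION` values are hypothetical:

```python
STAKED_FUND = 1000        # system parameter: stake required at registration (illustrative)
REWARD_PORTION = 0.5      # system parameter: fraction paid to a slasher (illustrative)

class MembershipContract:
    """Toy model of the membership contract: register with stake, slash on sk exposure."""

    def __init__(self):
        self.members = {}  # pk -> staked amount

    def register(self, pk: bytes, deposit: int) -> None:
        """Register a public identity key `pk`, staking `deposit`."""
        if deposit < STAKED_FUND:
            raise ValueError("insufficient stake")
        if pk in self.members:
            raise ValueError("already registered")
        self.members[pk] = deposit

    def slash(self, pk: bytes) -> int:
        """Remove a member whose `sk` was exposed; pay out reward_portion of the stake."""
        stake = self.members.pop(pk)
        return int(stake * REWARD_PORTION)

contract = MembershipContract()
contract.register(b"pk-alice", STAKED_FUND)
reward = contract.slash(b"pk-alice")  # after sk exposure, the spammer is removed
```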
An overview of registration is illustrated in Figure 1.
![Figure 1: Registration.](./images/rln-relay.png)
#### Publishing
To publish at a given `epoch`,
the publishing peer proceeds based on the regular [`11/WAKU2-RELAY`](../11/relay.md) protocol.
However, to protect against spamming, each `WakuMessage`
(which is wrapped inside the `data` field of a PubSub message)
MUST carry a [`RateLimitProof`](#ratelimitproof) with the following fields.
Section [Payload](#payloads) covers the details about the type and encoding of these fields.
- The `merkle_root` contains the root of the Merkle tree.
- The `epoch` represents the current epoch.
- The `nullifier` is an internal nullifier acting as a fingerprint that allows specifying whether two messages are published by the same peer during the same `epoch`.
- The `nullifier` is a deterministic value derived from `sk` and
`epoch`; therefore, any two messages issued by the same peer
(i.e., using the same `sk`) for the same `epoch` are guaranteed to have identical `nullifier`s.
- The `share_x` and `share_y` can be seen as partial disclosure of the peer's `sk` for the intended `epoch`.
They are derived deterministically from the peer's `sk` and
the current `epoch` using the [Shamir secret sharing scheme](../../../../vac/32/rln-v1.md).
If a peer discloses more than one such pair (`share_x`, `share_y`) for the same `epoch`,
it would allow full disclosure of its `sk` and
hence get access to its staked fund in the membership contract.
- The `proof` field is a zero-knowledge proof signifying that:
1. The message owner is the current member of the group i.e.,
the peer's identity commitment key, `pk`,
is part of the membership group Merkle tree with the root `merkle_root`.
2. `share_x` and `share_y` are correctly computed.
3. The `nullifier` is constructed correctly.
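The relationship between `sk`, `epoch`, `share_x`, `share_y`, and `nullifier` described in the field list above can be sketched as follows. This is an illustrative model only: `sha256` stands in for the Poseidon hash used by the real circuit, and the modulus `P` is a stand-in for the circuit's field:

```python
import hashlib

P = 2**255 - 19  # stand-in prime modulus (the real circuit uses a different field)

def H(*parts: bytes) -> int:
    """Hash to a field element; sha256 stands in for Poseidon."""
    h = hashlib.sha256()
    for part in parts:
        h.update(part)
    return int.from_bytes(h.digest(), "big") % P

def rln_signal(sk: int, epoch: int, message: bytes):
    """Derive (share_x, share_y, nullifier) for one message in one epoch."""
    a1 = H(sk.to_bytes(32, "big"), epoch.to_bytes(8, "big"))  # slope of A(x) = sk + a1*x
    share_x = H(message)                   # x-coordinate derived from the message
    share_y = (sk + a1 * share_x) % P      # point on the line A(x)
    nullifier = H(a1.to_bytes(32, "big"))  # same for all messages of this sk in this epoch
    return share_x, share_y, nullifier

# Two messages in the same epoch share a nullifier but reveal two distinct
# points on the same line -- enough to recover sk (see Spam detection).
x1, y1, n1 = rln_signal(42, 1000, b"hello")
x2, y2, n2 = rln_signal(42, 1000, b"world")
assert n1 == n2 and (x1, y1) != (x2, y2)
```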
For more details about the proof generation, check [RLN](../../../../vac/32/rln-v1.md).
The proof generation relies on the knowledge of two pieces of private information i.e., `sk` and `authPath`.
The `authPath` is a subset of Merkle tree nodes by which a peer can prove the inclusion of its `pk` in the group. <!-- TODO refer to RLN RFC for authPath def -->
The proof generation also requires a set of public inputs which are:
the Merkle tree root `merkle_root`, the current `epoch`, and
the message for which the proof is going to be generated.
In `17/WAKU2-RLN-RELAY`, the message is the concatenation of the `WakuMessage`'s `payload` field and
its `contentTopic`, i.e., `payload||contentTopic`.
#### Group Synchronization
Proof generation relies on the knowledge of Merkle tree root `merkle_root` and `authPath` which both require access to the membership Merkle tree.
Getting access to the Merkle tree can be done in various ways:
1. Peers construct the tree locally.
This can be done by listening to the registration and
deletion events emitted by the membership contract.
Peers MUST update the local Merkle tree on a per-block basis.
This is discussed further in the [Merkle Root Validation](#merkle-root-validation) section.
2. For synchronizing the state of slashed `pk`s,
disseminate such information through a `pubsubTopic` to which all peers are subscribed.
A deletion transaction SHOULD occur on the membership contract.
The benefit of an off-chain slashing is that it allows real-time removal of spammers as opposed to on-chain slashing in which peers get informed with a delay,
where the delay is due to mining the slashing transaction.
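The first approach above, local tree construction driven by per-block contract events, can be sketched as follows. This toy example uses `sha256` and a fixed-depth tree instead of the incremental Poseidon tree used in practice, and removes deleted members outright rather than zeroing their leaves:

```python
import hashlib

DEPTH = 3  # toy tree with 2**3 leaves

def _h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def merkle_root(members: list[bytes]) -> bytes:
    """Root of a fixed-depth binary tree over member pks, padded with empty leaves."""
    leaves = [hashlib.sha256(m).digest() for m in members]
    leaves += [b"\x00" * 32] * (2**DEPTH - len(leaves))
    while len(leaves) > 1:
        leaves = [_h(leaves[i], leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

members: list[bytes] = []
# Hypothetical event stream, grouped per block.
for block_events in [[("register", b"pk1")],
                     [("register", b"pk2"), ("delete", b"pk1")]]:
    for kind, pk in block_events:  # apply all events in the block...
        if kind == "register":
            members.append(pk)
        else:
            members.remove(pk)
    root = merkle_root(members)    # ...then take exactly one root per block
```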
For the group synchronization,
one important security consideration is that peers MUST make sure they always use the most recent Merkle tree root in their proof generation.
The reason is that using an old root can allow inference about the index of the user's `pk` in the membership tree hence compromising user privacy and breaking message unlinkability.
#### Routing
Upon the receipt of a PubSub message via [`11/WAKU2-RELAY`](../11/relay.md) protocol,
the routing peer parses the `data` field as a `WakuMessage` and gets access to the `RateLimitProof` field.
The peer then validates the `RateLimitProof` as explained next.
##### Epoch Validation
If the `epoch` attached to the `WakuMessage` is more than `max_epoch_gap`
apart from the routing peer's current `epoch`,
then the `WakuMessage` MUST be discarded and considered invalid.
This is to prevent a newly registered peer from spamming the system by messaging for all the past epochs.
`max_epoch_gap` is a system parameter for which we provide some recommendations in section [Recommended System Parameters](#recommended-system-parameters).
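The epoch check above can be sketched as follows, assuming `epoch` is derived as `unix_time // period`; the parameter values shown are illustrative:

```python
import time

PERIOD = 1          # epoch length in seconds (illustrative)
MAX_EPOCH_GAP = 20  # illustrative; matches the value the Waku Network chooses

def current_epoch(now=None):
    """Epoch index of a Unix timestamp (defaults to the local clock)."""
    return int((time.time() if now is None else now) // PERIOD)

def epoch_is_valid(message_epoch, now=None):
    """Discard messages whose epoch is further than MAX_EPOCH_GAP from ours."""
    return abs(message_epoch - current_epoch(now)) <= MAX_EPOCH_GAP

# A slightly old message passes; one from the distant past is discarded.
assert epoch_is_valid(current_epoch(1_000_000.0) - 5, now=1_000_000.0)
assert not epoch_is_valid(current_epoch(1_000_000.0) - 100, now=1_000_000.0)
```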
##### Merkle Root Validation
The routing peers MUST check whether the provided Merkle root in the `RateLimitProof` is valid.
It can do so by maintaining a local set of valid Merkle roots,
which consists of `acceptable_root_window_size` past roots.
These roots refer to the final state of the Merkle tree after a whole block consisting of group changes is processed.
The Merkle roots are updated on a per-block basis instead of a per-event basis.
This is done because if Merkle roots are updated on a per-event basis, some peers could send messages with a root that refers to a Merkle tree state that might get invalidated while the message is still propagating in the network, due to many registrations happening during this time frame.
This also allows peers which are not well connected to the network to be able to publish messages using a slightly older root.
This network delay is related to the nature of asynchronous network conditions, which means that peers see membership changes asynchronously, and therefore may have differing local Merkle trees.
See [Recommended System Parameters](#recommended-system-parameters) on choosing an appropriate `acceptable_root_window_size`.
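A minimal sketch of the rolling root window described above, using a fixed-size queue so the oldest per-block roots age out automatically (the window size shown is illustrative):

```python
from collections import deque

ACCEPTABLE_ROOT_WINDOW_SIZE = 5  # illustrative system parameter

# Oldest roots fall out automatically once maxlen is reached.
valid_roots: deque = deque(maxlen=ACCEPTABLE_ROOT_WINDOW_SIZE)

def on_new_block(root: bytes) -> None:
    """Record the final Merkle root after processing a whole block of changes."""
    valid_roots.append(root)

def root_is_valid(root: bytes) -> bool:
    """A RateLimitProof's merkle_root must be one of the recent per-block roots."""
    return root in valid_roots

for n in range(8):
    on_new_block(n.to_bytes(32, "big"))
assert root_is_valid((7).to_bytes(32, "big"))      # recent root accepted
assert not root_is_valid((0).to_bytes(32, "big"))  # aged-out root rejected
```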
##### Proof Verification
The routing peers MUST check whether the zero-knowledge proof `proof` is valid.
It does so by running the zk verification algorithm as explained in [RLN](../../../../vac/32/rln-v1.md).
If `proof` is invalid then the message MUST be discarded.
##### Spam detection
To enable local spam detection and slashing,
routing peers MUST record the `nullifier`, `share_x`, and `share_y`
of incoming messages which are not discarded, i.e., not found to be spam nor to have an invalid proof or epoch.
To spot spam messages, the peer checks whether a message with an identical `nullifier` has already been relayed.
1. If such a message exists and its `share_x` and `share_y`
components are different from the incoming message, then slashing takes place.
That is, the peer uses the `share_x` and `share_y`
of the new message and the `share'_x` and `share'_y`
of the old record to reconstruct the `sk` of the message owner.
The `sk` then MAY be used to delete the spammer from the group and
withdraw a portion `reward_portion` of its staked funds.
2. If the `share_x` and `share_y` fields of the previously relayed message are identical to the incoming message,
then the message is a duplicate and MUST be discarded.
3. If none is found, then the message gets relayed.
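The slashing step in case 1 above can be sketched as follows: two distinct points on the line `A(x) = sk + a1*x` for the same `epoch` determine the line, so `sk = A(0)` can be recovered. The modulus `P` is a stand-in for the circuit's field:

```python
P = 2**255 - 19  # stand-in prime modulus

def recover_sk(x1: int, y1: int, x2: int, y2: int) -> int:
    """Recover sk from two distinct Shamir shares of the same epoch."""
    a1 = (y2 - y1) * pow(x2 - x1, -1, P) % P  # slope of the shared line
    return (y1 - a1 * x1) % P                 # intercept A(0) is sk

# Example: a spammer with sk=42 and slope a1=7 emitted two shares in one epoch.
sk, a1 = 42, 7
pt1 = (11, (sk + a1 * 11) % P)
pt2 = (23, (sk + a1 * 23) % P)
assert recover_sk(*pt1, *pt2) == 42
```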
An overview of the routing procedure and slashing is provided in Figure 2.
<!-- TODO: may consider [validator functions](https://github.com/libp2p/specs/tree/master/pubsub#topic-validation) or [extended validators](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#extended-validators) for the spam detection -->
![Figure 2: Publishing, Routing and Slashing workflow.](./images/rln-message-verification.png)
-------
### Payloads
Payloads are protobuf messages implemented using [protocol buffers v3](https://developers.google.com/protocol-buffers/).
Nodes MAY extend the [14/WAKU2-MESSAGE](../14/message.md) with a `rate_limit_proof` field to indicate that their message is not spam.
```protobuf
message WakuMessage {
  // ... other fields as defined in 14/WAKU2-MESSAGE ...
  optional uint32 version = 3;
  optional sint64 timestamp = 10;
  optional bool ephemeral = 31;
  RateLimitProof rate_limit_proof = 21;
}
```
#### WakuMessage
`rate_limit_proof` holds the information required to prove that the message owner has not exceeded the message rate limit.
#### RateLimitProof
Below is the description of the fields of `RateLimitProof` and their types.
| Parameter | Type | Description |
| ----: | ----------- | ----------- |
| `proof` | array of 256 bytes | the zkSNARK proof as explained in the [Publishing process](#publishing) |
| `merkle_root` | array of 32 bytes in little-endian order | the root of membership group Merkle tree at the time of publishing the message |
| `share_x` and `share_y`| array of 32 bytes each | Shamir secret shares of the user's secret identity key `sk`. `share_x` is the Poseidon hash of the `WakuMessage`'s `payload` concatenated with its `contentTopic`. `share_y` is calculated using the [Shamir secret sharing scheme](../../../../vac/32/rln-v1.md) |
| `nullifier` | array of 32 bytes | internal nullifier derived from `epoch` and peer's `sk` as explained in [RLN construct](../../../../vac/32/rln-v1.md)|
### Recommended System Parameters
The system parameters are summarized in the following table, and the RECOMMENDED values for a subset of them are presented next.
| Parameter | Description |
| ----: |----------- |
| `period` | the length of `epoch` in seconds |
| `staked_fund` | the amount of funds to be staked by peers at the registration |
| `reward_portion` | the percentage of `staked_fund` to be rewarded to the slashers |
| `max_epoch_gap` | the maximum allowed gap between the `epoch` of a routing peer and the incoming message |
| `acceptable_root_window_size` | The maximum number of past Merkle roots to store |
#### Epoch Length
A sensible value for the `period` depends on the application for which the spam protection is going to be used.
For example, while a `period` of `1` second, i.e.,
a messaging rate of `1` message per second,
might be acceptable for a chat application, it might be too low for communication among Ethereum network validators.
One should look at the desired throughput of the application to decide on a proper `period` value.
In the proof of concept implementation of `17/WAKU2-RLN-RELAY` protocol which is available in [nim-waku](https://github.com/status-im/nim-waku), the `period` is set to `1` second.
Nevertheless, this value is also subject to change depending on user experience.
#### Maximum Epoch Gap
We discussed in the [Routing](#routing) section that the gap between the epoch observed by the routing peer and
the one attached to the incoming message should not exceed a threshold denoted by `max_epoch_gap`.
The value of `max_epoch_gap` can be measured based on the following factors.
- Network transmission delay `Network_Delay`: the maximum time that it takes for a message to be fully disseminated in the GossipSub network.
- Clock asynchrony `Clock_Asynchrony`: The maximum difference between the Unix epoch clocks perceived by network peers which can be due to clock drifts.
With a reasonable approximation of the preceding values, one can set `max_epoch_gap` as
`max_epoch_gap` $= \lceil \frac{\text{Network Delay} + \text{Clock Asynchrony}}{\text{Epoch Length}}\rceil$ where `period` is the length of the `epoch` in seconds.
`Network_Delay` and `Clock_Asynchrony` MUST have the same resolution as `period`.
By this formulation, `max_epoch_gap` indeed measures the maximum number of `epoch`s that can elapse since a message gets routed from its origin to all the other peers in the network.
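A worked instance of the formula above, with purely illustrative delay values:

```python
import math

period = 1            # epoch length in seconds
network_delay = 6     # seconds for full dissemination (assumed)
clock_asynchrony = 4  # max clock drift between peers, in seconds (assumed)

# max_epoch_gap = ceil((Network_Delay + Clock_Asynchrony) / Epoch_Length)
max_epoch_gap = math.ceil((network_delay + clock_asynchrony) / period)
assert max_epoch_gap == 10
```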
By this formulation, `acceptable_root_window_size` provides a lower bound on the number of acceptable roots:
it should indicate how many blocks may have been mined during the time it takes for a peer to receive a message.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
1. [`11/WAKU2-RELAY`](../11/relay.md)
2. [RLN](../../../../vac/32/rln-v1.md)
3. [14/WAKU2-MESSAGE](../14/message.md)
4. [RLN documentation](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view)
5. [Public inputs to the RLN circuit](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view#Public-Inputs)
6. [Shamir secret sharing scheme used in RLN](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view#Linear-Equation-amp-SSS)
7. [RLN internal nullifier](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view#Nullifiers)

---
slug: 64
title: 64/WAKU2-NETWORK
name: Waku v2 Network
status: draft
category: Best Current Practice
tags: waku/application
editor: Hanno Cornelius <hanno@status.im>
contributors:
---
## Abstract
This specification describes an opinionated deployment of [10/WAKU2](../10/waku2.md) protocols to form a coherent and
shared decentralized messaging network that is open-access,
useful for generalized messaging, privacy-preserving, scalable and
accessible even to resource-restricted devices.
We'll refer to this opinionated deployment simply as
_the public Waku Network_, _the Waku Network_ or, if the context is clear, _the network_
in the rest of this document.
## Theory / Semantics
### Routing protocol
The Waku Network is built on the [17/WAKU2-RLN-RELAY](../17/rln-relay.md) routing protocol,
which in turn is an extension of [11/WAKU2-RELAY](../11/relay.md) with spam protection measures.
### Network shards
Traffic in the Waku Network is sharded into eight [17/WAKU2-RLN-RELAY](../17/rln-relay.md) pubsub topics.
Each pubsub topic is named according to the static shard naming format
defined in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md)
with:
* `<cluster_id>` set to `1`
* `<shard_number>` occupying the range `0` to `7`.
In other words, the Waku Network is a [17/WAKU2-RLN-RELAY](../17/rln-relay.md) network
routed on the combination of the eight pubsub topics:
```
/waku/2/rs/1/0
/waku/2/rs/1/1
...
/waku/2/rs/1/7
```
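Since the shard names follow the static format `/waku/2/rs/<cluster_id>/<shard_number>`, the full topic list above can be generated mechanically:

```python
CLUSTER_ID = 1
SHARDS = range(8)  # shard numbers 0 through 7

# Static shard naming format: /waku/2/rs/<cluster_id>/<shard_number>
topics = [f"/waku/2/rs/{CLUSTER_ID}/{shard}" for shard in SHARDS]
assert topics[0] == "/waku/2/rs/1/0" and topics[-1] == "/waku/2/rs/1/7"
```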
A node MUST use the [WAKU-METADATA](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) protocol to identify the `<cluster_id>` supported by every
inbound/outbound peer that attempts to connect. In any of the following cases, the node MUST trigger a disconnection:
* [WAKU-METADATA](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) dial fails.
* [WAKU-METADATA](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) reports an empty `<cluster_id>`.
* [WAKU-METADATA](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) reports a `<cluster_id>` different than `1`.
## Roles
There are two distinct roles evident in the network, those of:
1) nodes, and
2) applications.
### Nodes
Nodes are the individual software units
using [10/WAKU2](../10/waku2.md) protocols to form a p2p messaging network.
Nodes, in turn, can participate in a shard as full relayers, i.e. _relay nodes_,
or by running a combination of protocols suitable for resource-restricted environments,
i.e. _non-relay nodes_.
Nodes can also provide various services to the network,
such as storing historical messages or protecting the network against spam.
See the section on [default services](#default-services) for more.
#### Relay nodes
Relay nodes MUST follow [17/WAKU2-RLN-RELAY](../17/rln-relay.md)
to route messages to other nodes in the network
for any of the pubsub topics [defined as the Waku Network shards](#network-shards).
Relay nodes MAY choose to subscribe to any of these shards,
but MUST be subscribed to at least one defined shard.
Each relay node SHOULD be subscribed to as many shards as it has resources to support.
If a relay node supports an encapsulating application,
it SHOULD be subscribed to all the shards servicing that application.
If resource restrictions prevent a relay node from servicing all shards used by the encapsulating application,
it MAY choose to support some shards as a non-relay node.
#### Bootstrapping and discovery
Nodes MAY use any method to bootstrap connection to the network,
but it is RECOMMENDED that each node retrieves a list of bootstrap peers to connect to using [EIP-1459 DNS-based discovery](https://eips.ethereum.org/EIPS/eip-1459).
Relay nodes SHOULD use [33/WAKU2-DISCV5](../33/discv5.md) to continually discover other peers in the network.
Each relay node MUST encode its supported shards into its discoverable ENR,
as described in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md/#discovery).
The ENR MUST be updated if the set of supported shards changes.
A node MAY choose to ignore discovered peers that do not support any of the shards in its own subscribed set.
#### Transports
Relay nodes MUST follow [10/WAKU2](../10/waku2.md) specifications with regards to supporting different transports.
If TCP transport is available, each relay node MUST support it as transport for both dialing and listening.
In addition, a relay node SHOULD support secure websockets for bidirectional communication streams,
for example to allow connections from and to web browser-based clients.
A relay node MAY support insecure websockets if required by the application or running environment.
#### Default services
For each supported shard,
each relay node SHOULD enable and support the following protocols as a service node:
1. [12/WAKU2-FILTER](../12/filter.md) to allow resource-restricted peers to subscribe to messages matching a specific content filter.
2. [13/WAKU2-STORE](../13/store.md) to allow other peers to request historical messages from this node.
3. [19/WAKU2-LIGHTPUSH](../19/lightpush.md) to allow resource-restricted peers to request publishing a message to the network on their behalf.
4. [WAKU2-PEER-EXCHANGE](https://github.com/waku-org/specs/blob/master/standards/core/peer-exchange.md) to allow resource-restricted peers to discover more peers in a resource efficient way.
#### Store service nodes
Each relay node SHOULD support [13/WAKU2-STORE](../13/store.md) as a store service node,
for each supported shard.
The store SHOULD be configured to retain at least `12` hours of messages per supported shard.
Store service nodes SHOULD only store messages with a valid [`rate_limit_proof`](#message-attributes) attribute.
#### Non-relay nodes
Nodes MAY opt out of relay functionality on any network shard
and instead request services from relay nodes as clients
using any of the defined service protocols:
1. [12/WAKU2-FILTER](../12/filter.md) to subscribe to messages matching a specific content filter.
2. [13/WAKU2-STORE](../13/store.md) to request historical messages matching a specific content filter.
3. [19/WAKU2-LIGHTPUSH](../19/lightpush.md) to request publishing a message to the network.
4. [WAKU2-PEER-EXCHANGE](https://github.com/waku-org/specs/blob/master/standards/core/peer-exchange.md) to discover more peers in a resource efficient way.
#### Store client nodes
Nodes MAY request historical messages from [13/WAKU2-STORE](../13/store.md) service nodes as store clients.
A store client SHOULD discard any messages retrieved from a store service node that do not contain a valid [`rate_limit_proof`](#message-attributes) attribute.
The client MAY consider service nodes returning messages without a valid [`rate_limit_proof`](#message-attributes) attribute as untrustworthy.
The mechanism by which this may happen is currently underdefined.
### Applications
Applications are the higher-layer projects or platforms that make use of the generalized messaging capability of the network.
In other words, an application defines a payload used in the various [10/WAKU2](../10/waku2.md) protocols.
Any participant in an application SHOULD make use of an underlying node in order to communicate on the network.
Applications SHOULD make use of an [autosharding](#autosharding) API
to allow the underlying node to automatically select the target shard on the Waku Network.
See the section on [autosharding](#autosharding) for more.
## RLN rate-limiting
The [17/WAKU2-RLN-RELAY](../17/rln-relay.md) protocol uses [32/RLN-V1](../../../../vac/32/rln-v1.md) proofs
to ensure that a pre-agreed rate limit is not exceeded by any publisher.
While the network is under capacity,
individual relayers MAY choose to freely route messages without RLN proofs
up to a discretionary bandwidth limit,
after which messages without proofs MUST be discarded by relay nodes.
This bandwidth limit SHOULD be enforced using a [bandwidth validation mechanism](#free-bandwidth-exceeded) separate from RLN rate-limiting.
This implies that quality of service and reliability are significantly lower for messages without proofs,
and at times of high network utilization these messages may not be relayed at all.
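One possible shape for the discretionary bandwidth check on proof-less messages is a sliding-window meter. This sketch is illustrative, not a prescribed mechanism; it uses the `1` Mbps figure given later under the "Free bandwidth exceeded" rule:

```python
from collections import deque

FREE_BANDWIDTH_LIMIT = 1_000_000 // 8  # 1 Mbps expressed in bytes per second

class BandwidthMeter:
    """Track proof-less message bytes over a sliding one-second window."""

    def __init__(self):
        self.window: deque = deque()  # (timestamp, size_in_bytes) entries

    def allow_proofless(self, size: int, now: float) -> bool:
        while self.window and now - self.window[0][0] > 1.0:
            self.window.popleft()               # drop entries older than 1 second
        used = sum(s for _, s in self.window)
        if used + size > FREE_BANDWIDTH_LIMIT:
            return False                        # ignore: free bandwidth is spent
        self.window.append((now, size))
        return True

meter = BandwidthMeter()
assert meter.allow_proofless(100_000, now=0.0)
assert not meter.allow_proofless(100_000, now=0.1)  # window holds 100 kB; limit is 125 kB
```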
### RLN Parameters
For the Waku Network,
the `epoch` is set to `1` second
and the maximum number of messages published per `epoch` is limited to `1` per publisher.
The `max_epoch_gap` is set to `20` seconds,
meaning that validators (relay nodes)
MUST _reject_ messages with an `epoch` more than 20 seconds into the past or
future compared to the validator's own clock.
All nodes, validators and publishers,
SHOULD use Network Time Protocol (NTP) to synchronize their own clocks,
thereby ensuring valid timestamps for proof generation and validation.
### Memberships
Each publisher to the Waku Network SHOULD register an RLN membership
with one of the RLN storage contracts
moderated in the Sepolia registry contract with address [0xF1935b338321013f11068abCafC548A7B0db732C](https://sepolia.etherscan.io/address/0xF1935b338321013f11068abCafC548A7B0db732C#code).
Initial memberships are registered in the Sepolia RLN storage contract with address [0x58322513A35a8f747AF5A385bA14C2AbE602AA59](https://sepolia.etherscan.io/address/0x58322513A35a8f747AF5A385bA14C2AbE602AA59#code).
RLN membership setup and registration MUST follow [17/WAKU2-RLN-RELAY](../17/rln-relay.md/#setup-and-registration),
with the `staked_fund` set to `0`.
In other words, the Waku Network does not use RLN staking.
### RLN Proofs
Each RLN member MUST generate and attach an RLN proof to every published message
as described in [17/WAKU2-RLN-RELAY](../17/rln-relay.md/#publishing).
Slashing is not implemented for the Waku Network.
Instead, validators will penalise peers forwarding messages exceeding the rate limit
as specified for [the rate-limiting validation mechanism](#rate-limit-exceeded).
This incentivizes all relay nodes to validate RLN proofs
and reject messages violating rate limits
in order to continue participating in the network.
## Network traffic
All payload on the Waku Network MUST be encapsulated in a [14/WAKU2-MESSAGE](../14/message.md)
with rate limit proof extensions defined for [17/WAKU2-RLN-RELAY](../17/rln-relay.md/#payloads).
Each message on the Waku Network SHOULD be validated by each relayer,
according to the rules discussed under [message validation](#message-validation).
### Message Attributes
- The mandatory `payload` attribute MUST contain the message data payload as crafted by the application.
- The mandatory `content_topic` attribute MUST specify a string identifier that can be used for content-based filtering.
This is also crafted by the application.
See [Autosharding](#autosharding) for more on the content topic format.
- The optional `meta` attribute MAY be omitted.
If present, it will form part of the message uniqueness vector described in [14/WAKU2-MESSAGE](../14/message.md).
- The optional `version` attribute SHOULD be set to `0`. It MUST be interpreted as `0` if not present.
- The mandatory `timestamp` attribute MUST contain the Unix epoch time at which the message was generated by the application.
The value MUST be in nanoseconds.
It MAY contain a fudge factor of up to 1 second in either direction to improve resistance to timing attacks.
- The optional `ephemeral` attribute MUST be set to `true` if the message should not be persisted by the Waku Network.
- The optional `rate_limit_proof` attribute SHOULD be populated with the RLN proof as set out in [RLN Proofs](#rln-proofs).
Messages with this field unpopulated MAY be discarded from the network by relayers.
This field MUST be populated if the message should be persisted by the Waku Network.
### Message Size
Any [14/WAKU2-MESSAGE](../14/message.md) published to the network MUST NOT exceed an absolute maximum size of `150` kilobytes.
This limit applies to the entire message after protobuf serialization, including attributes.
It is RECOMMENDED not to exceed an average size of `4` kilobytes for [14/WAKU2-MESSAGE](../14/message.md) published to the network.
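The size limits above can be checked directly on the serialized message; this sketch assumes a kilobyte means 1024 bytes here:

```python
MAX_MESSAGE_SIZE = 150 * 1024    # absolute cap after protobuf serialization
RECOMMENDED_AVG_SIZE = 4 * 1024  # soft target for average message size

def size_ok(serialized: bytes) -> bool:
    """Enforce the absolute size cap on a serialized WakuMessage."""
    return len(serialized) <= MAX_MESSAGE_SIZE

assert size_ok(b"x" * RECOMMENDED_AVG_SIZE)
assert not size_ok(b"x" * (MAX_MESSAGE_SIZE + 1))
```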
### Message Validation
Relay nodes MUST apply [gossipsub v1.1 validation](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#extended-validators) to each relayed message and
SHOULD apply all of the rules set out in the section below to determine the validity of a message.
Validation has one of three outcomes,
repeated here from the [gossipsub specification](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#extended-validators) for ease of reference:
1. Accept - the message is considered valid and it MUST be delivered and forwarded to the network.
2. Reject - the message is considered invalid, MUST be rejected and SHOULD trigger a gossipsub scoring penalty against the transmitting peer.
3. Ignore - the message SHOULD NOT be delivered and forwarded to the network, but this MUST NOT trigger a gossipsub scoring penalty against the transmitting peer.
The following validation rules are defined:
#### Decoding failure
If a message fails to decode as a valid [14/WAKU2-MESSAGE](../14/message.md),
the relay node MUST _reject_ the message.
This SHOULD trigger a penalty against the transmitting peer.
#### Invalid timestamp
If a message has a timestamp deviating by more than `20` seconds
either into the past or the future
when compared to the relay node's internal clock,
the relay node MUST _reject_ the message.
This allows for some deviation between internal clocks,
network routing latency and
an optional [fudge factor when timestamping new messages](#message-attributes).
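The timestamp rule reduces to a symmetric tolerance check against the local clock, with both values in nanoseconds. A sketch (function name is illustrative):

```python
MAX_TIMESTAMP_DEVIATION_NS = 20 * 1_000_000_000  # 20 seconds, in nanoseconds

def timestamp_is_valid(msg_timestamp_ns: int, now_ns: int) -> bool:
    # Reject messages deviating more than 20 s from the relay node's
    # internal clock, in either direction (past or future).
    return abs(msg_timestamp_ns - now_ns) <= MAX_TIMESTAMP_DEVIATION_NS
```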
#### Free bandwidth exceeded
If a message contains no RLN proof
and the current bandwidth utilization on the shard the message was published to
equals or exceeds `1` Mbps,
the relay node SHOULD _ignore_ the message.
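One way to apply this rule is to account bandwidth per shard over a rolling one-second window. The accounting scheme below is illustrative; real nodes may smooth utilization differently:

```python
from collections import deque

FREE_BANDWIDTH_BPS = 1_000_000  # 1 Mbps free tier per shard, in bits per second

class ShardBandwidthMeter:
    """Rolling one-second window of traffic relayed on a single shard."""

    def __init__(self):
        self.window = deque()  # entries of (timestamp_seconds, size_bits)

    def record(self, now_s: float, size_bytes: int) -> None:
        self.window.append((now_s, size_bytes * 8))

    def utilization_bps(self, now_s: float) -> float:
        # Drop entries older than one second, then sum the remainder.
        while self.window and self.window[0][0] <= now_s - 1.0:
            self.window.popleft()
        return sum(bits for _, bits in self.window)

    def should_ignore_proofless(self, now_s: float) -> bool:
        # Ignore proofless messages once the shard meets or exceeds 1 Mbps.
        return self.utilization_bps(now_s) >= FREE_BANDWIDTH_BPS
```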
#### Invalid RLN epoch
If a message contains an RLN proof
and the `epoch` attached to the proof deviates by more than `max_epoch_gap` seconds
from the relay node's own `epoch`,
the relay node MUST _reject_ the message.
`max_epoch_gap` is [set to `20` seconds](#rln-parameters) for the Waku Network.
#### Invalid RLN proof
If a message contains an RLN proof
and the zero-knowledge proof is invalid
according to the verification process described in [32/RLN-V1](../../../../vac/32/rln-v1.md),
the relay node MUST _ignore_ the message.
#### Rate limit exceeded
If a message contains an RLN proof
and the relay node detects double signaling
according to the verification process described in [32/RLN-V1](../../../../vac/32/rln-v1.md),
the relay node MUST _reject_ the message
for violating the agreed rate limit of `1` message every `1` second.
This SHOULD trigger a penalty against the transmitting peer.
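Double signaling can be detected by remembering, per epoch, the nullifiers attached to valid proofs: two proofs carrying the same nullifier in one epoch breach the 1 message per 1 second limit. This sketch only covers detection; 32/RLN-V1 additionally allows recovering the violator's secret from the two proof shares:

```python
class DoubleSignalDetector:
    """Tracks RLN nullifiers seen per epoch (illustrative bookkeeping only)."""

    def __init__(self):
        self.seen = {}  # epoch -> set of nullifiers observed in that epoch

    def is_double_signal(self, epoch: int, nullifier: bytes) -> bool:
        nullifiers = self.seen.setdefault(epoch, set())
        if nullifier in nullifiers:
            return True  # second proof this epoch: rate limit violated
        nullifiers.add(nullifier)
        return False
```

A production node would also evict entries for epochs older than `max_epoch_gap`, since proofs outside that window are rejected anyway.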
## Autosharding
Nodes in the Waku Network SHOULD allow encapsulating applications to use autosharding,
as defined in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md/#automatic-sharding)
by automatically determining the appropriate pubsub topic
from the list [of defined Waku Network shards](#network-shards).
This allows the application to omit the target pubsub topic
when invoking any Waku protocol function.
Applications using autosharding MUST use content topics in the format
defined in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md/#content-topics-format-for-autosharding)
and SHOULD use the short length format:
```
/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}
```
When an encapsulating application makes use of autosharding
the underlying node MUST determine the target pubsub topic(s)
from the content topics provided by the application
using the hashing mechanism defined in [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md/#automatic-sharding).
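The flow above can be sketched end to end: parse the application name and version out of the content topic, hash them, and reduce to a shard index. The exact byte layout of the hash input, the shard count, and the cluster ID below are assumptions for illustration; the normative construction is defined in WAKU2-RELAY-SHARDING:

```python
import hashlib

NETWORK_SHARDS = 8  # assumed shard count; see the Network Shards section
CLUSTER_ID = 1      # assumed cluster ID of the Waku Network

def shard_for(application_name: str, version: str) -> int:
    # Hash the application-name and version parts of the content topic,
    # then reduce to a shard index. Byte layout here is an assumption.
    digest = hashlib.sha256(
        application_name.encode("utf-8") + version.encode("utf-8")
    ).digest()
    return int.from_bytes(digest[-8:], "big") % NETWORK_SHARDS

def pubsub_topic_for(application_name: str, version: str) -> str:
    # Map the derived shard index onto a static-sharding pubsub topic.
    return f"/waku/2/rs/{CLUSTER_ID}/{shard_for(application_name, version)}"
```

The key property is determinism: every node derives the same pubsub topic from the same content topic, so publishers and subscribers converge on a shard without coordination.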
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
* [10/WAKU2](../10/waku2.md)
* [17/WAKU2-RLN-RELAY](../17/rln-relay.md)
* [11/WAKU2-RELAY](../11/relay.md)
* [WAKU2-RELAY-SHARDING](../../core/relay-sharding.md)
* [WAKU-METADATA](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md)
* [EIP-1459 DNS-based discovery](https://eips.ethereum.org/EIPS/eip-1459)
* [33/WAKU2-DISCV5](../33/discv5.md)
* [12/WAKU2-FILTER](../12/filter.md)
* [13/WAKU2-STORE](../13/store.md)
* [19/WAKU2-LIGHTPUSH](../19/lightpush.md)
* [34/WAKU2-PEER-EXCHANGE](../../core/peer-exchange.md)
* [32/RLN-V1](../../../../vac/32/rln-v1.md)
* [14/WAKU2-MESSAGE](../14/message.md)
* [gossipsub v1.1 validation](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#extended-validators)
* [WAKU2-RELAY-SHARDING](https://github.com/waku-org/specs/blob/master/standards/core/relay-sharding.md/)

---
slug: 66
title: 66/WAKU2-METADATA
name: Waku Metadata Protocol
status: draft
editor: Alvaro Revuelta <alrevuelta@status.im>
contributors:
---
## Abstract
This specification describes the metadata that can be associated with a [10/WAKU2](../10/waku2.md) node.
## Metadata Protocol
Waku specifies a req/resp protocol that provides information about the node's metadata.
Such metadata is meant to help the node decide whether a peer is worth connecting to.
The node that makes the request includes its own metadata so that the receiver is aware of it,
without requiring an extra interaction.
The parameters are the following:
* `clusterId`: Unique identifier of the cluster that the node is running in.
* `shards`: Shard indexes that the node is subscribed to.
***Protocol Identifier***
`/vac/waku/metadata/1.0.0`
### Request
```proto
message WakuMetadataRequest {
optional uint32 cluster_id = 1;
repeated uint32 shards = 2;
}
```
### Response
```proto
message WakuMetadataResponse {
optional uint32 cluster_id = 1;
repeated uint32 shards = 2;
}
```
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [10/WAKU2](../10/waku2.md)