Compare commits

...

29 Commits

Author SHA1 Message Date
Jimmy Debe
d2fb0bab9c Update and rename nomos/data-availability-layer.md to nomos/raw/data-availability.md 2024-09-06 11:56:02 -04:00
Jimmy Debe
b220cd053d Update data-availability-layer.md 2024-08-13 18:10:58 -04:00
Jimmy Debe
a7bba29820 Update data-availability-layer.md 2024-07-19 08:05:46 -04:00
Jimmy Debe
ee8acd3a47 Update data-availability-layer.md 2024-06-27 22:47:18 -04:00
Jimmy Debe
72aafc5e2c Update data-availability-layer.md 2024-06-27 17:57:30 -04:00
Jimmy Debe
caf83eb61e Update data-availability-layer.md 2024-06-27 13:53:16 -04:00
Jimmy Debe
9553437901 Update data-availability-layer.md 2024-06-27 00:01:48 -04:00
Jimmy Debe
9fa153cf85 Update data-availability-layer.md 2024-06-26 22:22:58 -04:00
Jimmy Debe
e942152ca8 Update data-availability-layer.md 2024-06-25 22:46:02 -04:00
Jimmy Debe
1f3a767b77 Update data-availability-layer.md 2024-06-20 22:56:45 -04:00
Jimmy Debe
9c7b761991 Update data-availability-layer.md 2024-06-20 01:21:14 -04:00
Jimmy Debe
a62f9480e5 Update data-availability-layer.md 2024-06-18 23:07:57 -04:00
Jimmy Debe
5e7ffd35a9 Update data-availability-layer.md 2024-06-13 21:58:32 -04:00
Jimmy Debe
b2f1d7c10a Update data-availability-layer.md 2024-06-06 19:06:31 -04:00
Jimmy Debe
1042e73141 Update data-availability-layer.md 2024-05-30 23:02:07 -04:00
Jimmy Debe
235ff23c10 Update and rename data-availability.md to data-availability-layer.md 2024-05-28 15:26:23 -04:00
Jimmy Debe
1f96afcc11 Update data-availability.md 2024-05-27 22:32:57 -04:00
Jimmy Debe
1f94cd38b6 Update data-availability.md 2024-05-24 10:37:26 -04:00
Jimmy Debe
11ab920d60 Update data-availability.md 2024-05-16 18:59:44 -04:00
Jimmy Debe
866ad0b08d Create data-availability.md 2024-05-08 15:56:02 -04:00
Filip Pajic
69f2853407 fix: Syntax fix for index documents inside Waku foldersFix syntax (#34)
# What does this PR resolve? 🚀
- Changes title inside Waku/README.md from h2 to h1
- Changes title inside Waku/Deprecated/README.md from h2 to h1

# Details 📝
The syntax for the title of the markdown seems to not be proper one
comparing to other README documents.
It's important to define titles with h1(#) to be able to parse it
properly later on by the website
2024-04-23 14:17:17 -04:00
Hanno Cornelius
8f94e97cf2 docs: deprecate swap protocol (#31)
Deprecates swap protocol.
2024-04-18 13:38:26 -04:00
Jimmy Debe
d82eaccdc0 Update WAKU2-METADATA: Move to draft (#6)
Move 66/WAKU2-METADATA to draft.
2024-04-17 15:24:44 -04:00
LordGhostX
8b552ba2e0 chore: mark 16/WAKU2-RPC as deprecated (#30) 2024-04-16 15:43:27 +02:00
Jimmy Debe
0b0e00f510 feat(rln-stealth-commitments): add initial tech writeup (#23)
By: rymnc
Reference pull request: https://github.com/vacp2p/rfc/pull/658

Initial writeup on viability of stealth commitments for status
communities

---------

Co-authored-by: fryorcraken <110212804+fryorcraken@users.noreply.github.com>
2024-04-15 17:34:56 +05:30
Jimmy Debe
43f4989bb1 Fix Markdown Lint (#25)
The linter was not checking any files.
2024-03-26 17:42:18 +01:00
Jimmy Debe
7698e60d58 RFC Website Workflow Sync (#27)
A workflow to sync this repository with the rfc website.
2024-03-26 17:41:58 +01:00
Jimmy Debe
2eaa7949c4 Broken Links + Change Editors (#26)
Fix to broken links, changed links, and added new editors to spec, 10,
12, 14, 17, 19.
2024-03-21 10:08:40 -04:00
Jimmy Debe
92d8cf339b Add Markdown Linting (#24) 2024-03-08 16:41:51 +01:00
32 changed files with 804 additions and 127 deletions

32
.github/workflows/markdown-lint.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: markdown-linting
on:
  push:
    branches:
      - '**'
  pull_request:
    branches:
      - '**'
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Get changed files
        continue-on-error: true
        run: |
          echo "CHANGED_FILES<<EOF" >> $GITHUB_ENV
          gh pr diff ${{ github.event.number }} --name-only | sed -e 's|$|,|' | xargs -i echo "{}" >> $GITHUB_ENV
          echo "EOF" >> $GITHUB_ENV
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Markdown Linter
        uses: DavidAnson/markdownlint-cli2-action@v15
        with:
          globs: ${{ env.CHANGED_FILES }}

41
.github/workflows/website-sync.yml vendored Normal file
View File

@@ -0,0 +1,41 @@
name: Website Sync
on:
  pull_request:
    types: [closed]
    branches:
      - main
jobs:
  sync:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Clone Website Repo
        run: |
          git clone git@github.com:vacp2p/rfc-website.git
          cd rfc-website
          git config --local user.email "actions@github.com"
          git config --local user.name "GitHub Actions"
      - name: List of changed files
        id: changed_files
        run: |
          echo "::set-output name=files::$(git diff --name-only ${{ github.event.before }} ${{ github.sha }})"
      - name: Copy changed files to Website Repo
        run: |
          for file in ${{ steps.changed_files.outputs.files }}; do
            cp --parents "$file" rfc-website/
          done
      - name: Push changes to Website Repo
        run: |
          cd rfc-website
          git add .
          git commit -m "Sync website"
          git push origin main

View File

@@ -0,0 +1,441 @@
---
title: NOMOS-DATA-AVAILABILITY-PROTOCOL
name: Nomos Data Availability Protocol
status: raw
tags: nomos
editor:
contributors:
---
## Abstract
This specification describes the data availability protocol for the Nomos network.
Nomos provides several services for network states to create efficient ecosystems.
Data availability is an important problem that network states need to solve.
## Background
Nomos is a cluster of blockchains known as zones.
Zones are layer 2 blockchains that utilize Nomos to maintain sovereignty.
They are initialized by the Nomos network and can utilize Nomos services, but
provide resources on their own.
They can define their own state as they are sovereign networks.
Nomos provides tools at the global layer that allow zones to define arbitrary configurations.
Nomos has two global layers offering services to zones.
The base layer provides data availability guarantees to zones that choose to utilize it.
The second layer is the coordination layer, which enables state transition verification through zero-knowledge validity proofs.
The base layer allows users with resource-limited devices, also known as light clients,
to obtain all block data and process it locally.
Light clients should be able to access blockchain data similarly to a full node.
To achieve this,
the Nomos data availability protocol provides guarantees that transaction data within Nomos zones is valid.
## Motivation and Goal
Decentralized blockchains require full nodes to verify network transactions by downloading all the data of the network.
This becomes a problem as the blockchain data grows: full nodes need more resources to download and
store the data while maintaining a connection to the network.
Light nodes, on the other hand, do not download the entire network data because of their resource-limited nature.
This restricts the network from scaling, as the network is reliant on full nodes to process transactions,
and requires light nodes to rely on centralized parties.
A blockchain should allow light nodes to prove the validity of transaction data
without requiring them to download all the blockchain data.
The data availability service on the Nomos base layer is used by zones for data availability guarantees.
It allows participants of a zone to access blockchain data in the event that nodes within a zone do not make the data available.
The service includes data encoding, verification, a data availability sampling mechanism,
and a data retrieval API to solve the data availability problem.
### Definitions
| Terminology | Description |
| --------------- | --------- |
| provider nodes | A Nomos base-layer node that stores dispersed data chunks and provides data availability. |
| dispersal nodes | A Nomos node that encodes a zone's block data and disperses it to provider nodes. |
| Nomos Zone | A sovereign layer 2 blockchain initialized by the Nomos network. |
| light clients | A low-resource node that verifies data availability by sampling rather than downloading all block data. |
## Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in [RFC 2119](https://www.ietf.org/rfc/rfc2119.txt).
The data availability service of the Nomos base layer consists of different node roles: dispersal clients,
data availability sampling nodes, and data availability provider nodes.
All network participants do data sampling and verification.
Nodes MAY decide to provide resources to the data availability service of the Nomos base layer,
join a zone as a dispersal node, be a light client, or
take on a combination of these roles.
Limited-resource roles,
like a dispersal client or data availability sampling node,
utilize Nomos zones to create or retrieve blockchain transactions.
A light client SHOULD NOT download large amounts of block data owned by zones:
- it MAY selectively validate zero-knowledge proofs from the [Nomos Coordination Layer](#TODO),
- it MAY verify data availability of the base layer for the zones it prefers.
Data availability on the base layer is only a temporary guarantee.
The data can only be verified for a predetermined time, based on the Nomos network.
The base layer MUST NOT provide long-term incentives or
allocate resources to repair missing data.
It is the responsibility of zones to make blockchain data available.
In the event that light clients cannot access data,
they MAY utilize the data availability of the Nomos base layer.
### Base Layer Nodes
Base layer nodes offering data availability MUST NOT process or validate block data:
- they MUST store proof commitments,
- they MUST store data chunks of zone block data,
- they provide data availability for a limited amount of time.
The role of a provider node is to store polynomial commitments for Nomos zones.
Provider nodes MUST join a membership-based list using libp2p
to announce participation in a subnet,
which is a group of Nomos data availability provider nodes.
Nodes MUST register during a proof-of-validator stage where public keys are verified and
a node enters a subnet.
The RECOMMENDED number of provider nodes within a subnet is 4096.
Nodes registered within a subnet are connected with each other for data passing.
The list MUST be used by light nodes and
zones to find a node within a subnet to send data chunks to.
The data stored by provider nodes MUST NOT be interpreted or accessed,
except when sending data for [data availability sampling](#data-sampling) or
block reconstruction by light clients.
#### Message Passing
Nodes that participate in a Nomos zone are considered Nomos base-layer nodes.
The Nomos base layer utilizes a libp2p publish/subscribe implementation to handle message passing between nodes in the network.
All base-layer nodes MUST be assigned to a data availability `pubsub-topic`.
Node configurations SHOULD define a `pubsub-topic` that is shared by all data availability nodes:
```rs
pubsub-topic = 'DA_TOPIC';
```
#### Sending Data
Zones are responsible for creating the data chunks that need to be stored on the blockchain.
The data SHOULD be sent to provider nodes.
#### Encoding and Verification
The Nomos protocol allows nodes within a zone to encode data chunks using Reed-Solomon encoding and KZG commitments.
Block data is divided into chunks of finite field elements and
organized into a two-dimensional array (a matrix) of rows and columns.
For example, the block data divided into chunks is represented as a matrix $Data$,
where each chunk is represented as ${ \Large c_{jk} }$:
$${ \Large Data = \begin{bmatrix} c_{11} & c_{12} & c_{13} & c_{...} & c_{1k} \cr c_{21} & c_{22} & c_{23} & c_{...} & c_{2k} \cr c_{31} & c_{32} & c_{33} & c_{...} & c_{3k} \cr c_{...} & c_{...} & c_{...} & c_{...} & c_{...} \cr c_{j1} & c_{j2} & c_{j3} & c_{...} & c_{jk} \end{bmatrix}}$$
Each row is a chunk of data and each column is considered a piece.
So there are ${ \Large k }$ data pieces (columns) and ${ \Large j }$ data chunks (rows).
- Each chunk SHOULD be limited to a fixed byte size of data.
For every row ${ \Large i }$,
there is a unique polynomial ${ \Large f_{i} }$ such that ${ \Large c_{ig} = f_{i}(w^{(g-1)}) }$,
for ${ \Large i = 1,\dots,j }$ and ${ \Large g = 1,\dots,k }$.
Each row polynomial is defined by its chunks:
$${ \Large f_{i} = (c_{i1}, c_{i2}, c_{i3},\dots, c_{ik}) }$$ and its KZG commitment is computed as ${ \Large r_{i} = com(f_{i}) }$.
##### Reed-Solomon Encoding
The Nomos protocol REQUIRES data to be encoded using Reed-Solomon encoding after the data blob is divided into chunks,
placed into a matrix of rows and columns, and
KZG commitments are computed for each data piece.
Encoding allows zones to ensure the security and integrity of their blockchain data.
Using Reed-Solomon encoding, the matrix from the previous step is expanded along its rows for redundancy.
Each row polynomial ${ \Large f_{i} }$ is evaluated at new points ${ \Large w^{(g-1)} }$ where ${ \Large g = k+1, k+2, \dots, n }$.
The extended data can be represented as:
$${ \Large Extended Data = \begin{bmatrix} c_{11} & c_{12} & c_{...} & c_{1k} & c_{1(k+1)} & c_{1(k+2)} & c_{...} & c_{1(2k)} \cr c_{21} & c_{22} & c_{...} & c_{2k} & c_{...} & c_{...} & c_{...} & c_{...} \cr c_{31} & c_{32} & c_{...} & c_{3k} & c_{...} & c_{...} & c_{...} & c_{...} \cr c_{...} & c_{...} & c_{...} & c_{...} & c_{...} & c_{...} & c_{...} & c_{...} \cr c_{j1} & c_{j2} & c_{...} & c_{jk} & c_{j(k+1)} & c_{j(k+2)} & c_{...} & c_{j(2k)} \end{bmatrix}}$$
- The code rate is 1/2 (an expansion factor of 2), so ${ \Large n = 2k }$
- Calculate each extended row chunk: ${ \Large eval(f_{i}, w^{(g-1)}) \rightarrow c_{ig}, \pi_{c_{ig}} }$, as sketched below.
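The row-extension step can be illustrated with a minimal sketch. It assumes a toy prime field and the consecutive evaluation domain $1,\dots,n$ in place of the roots-of-unity domain $w^{(g-1)}$ used above; the real protocol operates over the field of the KZG commitment scheme.
```python
# Minimal sketch: each matrix row of k chunks is treated as k evaluations of a degree-(k-1)
# polynomial over a toy prime field and extended to n = 2k evaluations (rate 1/2).
P = 2**61 - 1  # toy prime modulus, an illustrative assumption

def lagrange_eval(xs, ys, x):
    """Evaluate the unique polynomial through the points (xs, ys) at x, over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse of den
    return total

def extend_row(row):
    """Reed-Solomon-extend one row of k chunks to n = 2k chunks."""
    k = len(row)
    xs = list(range(1, k + 1))               # original evaluation points (assumed domain)
    new_xs = list(range(k + 1, 2 * k + 1))   # extension points
    return row + [lagrange_eval(xs, row, x) for x in new_xs]

# Example: a 2x3 chunk matrix is extended row-wise to 2x6.
matrix = [[1, 2, 3], [4, 5, 6]]
extended = [extend_row(r) for r in matrix]
```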
##### Hash and Commitment Value of Columns
Next, a dispersal client calculates the commitment for each column using KZG commitments.
Assume ${ \Large j = 1,\dots,2k }$:
Each column ${ \Large j }$ contains ${ \Large \ell }$ data chunks, one per row.
Using Lagrange interpolation, we can calculate the unique polynomial defined by these chunks.
Let us denote this polynomial as $\theta_j$.
The commitment values for each column are calculated as follows:
${\Large \theta_j=\text{Interpolate}(data_1^j,data_2^j,\dots,data_\ell^j)}$
${ \Large C_j=com(\theta_j)}$
- In this protocol, we use an elliptic curve as the group,
thus the commitments $C_j$ are elliptic curve points.
Let us represent the $x$-coordinate of $C_j$ as $C_j^x$ and the $y$-coordinate of $C_j$ as $C_j^y$.
Given just $C_j^x$ and one bit of $C_j^y$, you can reconstruct $C_j$.
Therefore, there is no need to use both coordinates of $C_j$.
However, for simplicity of representation, we use only the value $C_j$ for now.
- We also calculate the hash of the column data such that:
$H_j=Hash(01\,data_1^j||02\,data_2^j||\dots||0\ell\, data_\ell^j)$
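A minimal sketch of this column hash, assuming the chunks are byte strings, the index prefixes ($01, 02, \dots$) are encoded as two-byte big-endian integers, and Blake2b is the hash function (matching the attestation section below); the exact serialization is not fixed here.
```python
from hashlib import blake2b

def column_hash(column_chunks: list[bytes]) -> bytes:
    """Compute H_j = Hash(01 || data_1^j || 02 || data_2^j || ... ) for one column."""
    h = blake2b(digest_size=32)
    for index, chunk in enumerate(column_chunks, start=1):
        h.update(index.to_bytes(2, "big"))  # assumed encoding of the 01, 02, ... prefixes
        h.update(chunk)
    return h.digest()

# Example usage with dummy column data:
# h_j = column_hash([b"chunk-1", b"chunk-2", b"chunk-3"])
```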
##### Aggregate Column Commitment
The positional integrity of each column within the full data can be provided by a new aggregate column commitment.
To link the columns to one another, we calculate a new commitment value.
Each $\{H_j, C_j\}$ can be considered the new vector and assume they are in evaluation form.
In this case, calculate a new polynomial $\Phi$ and vector commitment value $C_{agg}$ as follows:
$\Phi=\text{Interpolate}(H_1, C_1,H_2, C_2,\dots,H_n, C_n)$
$C_{agg}=com(\Phi)$
Also calculate the proof value $\pi_{H_j,C_j}$ for each column.
Data chunks are sent with the aggregate commitment, the list of row commitments for the entire data blob, and
the column commitment for the specific data chunk.
##### Dispersal
Once encoded,
the data is dispersed to the Nomos data availability provider nodes that have joined a subnet on the base layer.
It is RECOMMENDED that the dispersal client sends a column to 4096 provider nodes for better bandwidth optimization.
A dispersal client sends the following:
```python
class EncodedData:
    column_data: List[Chunk]            # data_1^j ... data_l^j for one column (type assumed)
    extended_matrix: ChunkMatrix
    row_commitments: List[Commitment]
    row_proofs: List[List[Proof]]
    column_commitment: List[Commitment]
    aggregated_column_commitment: Commitment
    aggregated_column_proofs: List[Proof]
```
These values are represented as:
- `extended_matrix` : ${ \Large data_i^j }$
- `row_commitments` : ${ \Large \{r_1,r_2,\dots,r_{\ell}\} }$
- `row_proofs` : ${ \Large \{\pi^j_{r_1},\pi^j_{r_2}, \dots,\pi^j_{r_\ell}\} }$
- `column_data` : ${ \Large \{data_1^j,data_2^j,\dots,data_\ell^j\} }$
- `column_commitment` : ${ \Large C_{j} }$
- `aggregated_column_commitment` : ${ \Large C_{agg} }$
- `aggregated_column_proofs` : ${ \Large \pi_{H_j,C_j} }$
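As a reading aid, here is a sketch of how a dispersal client could assemble the payload for provider node $j$ from the fields listed above; the zero-based indexing and the per-column slicing are illustrative assumptions rather than part of the specification.
```python
def column_payload(encoded, j):
    """Select the data and proofs that the provider node assigned column j needs."""
    return {
        "column_data": [row[j] for row in encoded.extended_matrix],    # data_1^j ... data_l^j
        "row_commitments": encoded.row_commitments,                    # r_1 ... r_l (whole blob)
        "row_proofs": [proofs[j] for proofs in encoded.row_proofs],    # pi^j_{r_1} ... pi^j_{r_l}
        "column_commitment": encoded.column_commitment[j],             # C_j
        "aggregated_column_commitment": encoded.aggregated_column_commitment,  # C_agg
        "aggregated_column_proof": encoded.aggregated_column_proofs[j],        # pi_{H_j, C_j}
    }
```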
###### Verification Process
When a provider node receives data chunks from dispersal nodes,
the data chunks are stored in the provider node's memory.
The following steps SHOULD occur once data is received by a provider node:
1. Checks the `aggregated_column_proofs` and verifies the proofs.
The zone calculates the $eval$ value and sends it to $node_j$.
${ \Large eval(\Phi,w^{j-1})\to H_j, C_j }$, ${ \Large \pi_{H_j,C_j} }$
2. Recomputes the `column_commitment` from the column data.
${ \Large \theta'_j=\text{Interpolate}(data_1^j,data_2^j,\dots,data_\ell^j) }$
This value SHOULD be equal to ${ \Large C_j }$ : ${ \Large C_j\stackrel{?}{=}com(\theta'_j) }$
3. Calculates the hash of the `column_data`:
${ \Large H_j=Hash(01\,data_1^j||02\,data_2^j||\dots||0\ell\, data_\ell^j)}$
Then verifies the proof:
${ \Large verify(r_i, data_i^j, \pi_{r_i}^j)\to true/false }$
4. For each `row_commitment`, verifies the proof of every chunk against its corresponding row commitment:
${ \Large verify(r_i, data_i^j, \pi_{r_i}^j)\to true/false }$
If all verification steps are true, this proves that the data has been encoded correctly.
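The four steps can be summarized in a structural sketch. Because the concrete KZG primitives are not fixed by this document, `kzg_verify` and `kzg_commit` are passed in as callables, `column_hash` is the Blake2-based hash sketched earlier, and the payload is assumed to be the per-column dictionary sketched in the Dispersal section.
```python
def verify_received_column(payload, kzg_verify, kzg_commit, column_hash) -> bool:
    # Step 3 is computed up front because step 1 needs H_j.
    h_j = column_hash(payload["column_data"])
    # Step 1: verify the opening of the aggregated column commitment at this column.
    if not kzg_verify(payload["aggregated_column_commitment"],
                      (h_j, payload["column_commitment"]),
                      payload["aggregated_column_proof"]):
        return False
    # Step 2: recompute the column commitment from the column data.
    if kzg_commit(payload["column_data"]) != payload["column_commitment"]:
        return False
    # Step 4: verify every chunk proof against its corresponding row commitment.
    return all(
        kzg_verify(r_i, chunk, proof)
        for r_i, chunk, proof in zip(payload["row_commitments"],
                                     payload["column_data"],
                                     payload["row_proofs"])
    )
```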
### VID Certificate
A verifiable information dispersal certificate, or VID certificate,
is a list of attestations from data availability nodes.
It is used to verify that the data chunks have been dispersed properly amongst nodes in the base layer.
The provider node signs an attestation that contains the hash of the `row_commitment` values and
of the `aggregated_column_commitment`.
Signatures are verified by dispersal clients, and
valid signatures SHOULD be added to the VID certificate.
For every provider node $j$, assuming $sk_j$ is its private key, a signature is generated as follows:
${ \Large \sigma_j=Sign(sk_j, hash(C_{agg}, r_1,r_2,\dots,r_{\ell})) }$
The provider node sends the signed attestation back to the zone's dispersal clients, confirming the data has been received and
verified.
Once a dispersal client verifies that the data chunks have been hashed and signed by the base layer,
the VID certificate SHOULD be created.
The attestation is created with the following values:
```rs
// The provider node SHOULD hash using the Blake2 algorithm.
// blob_hash: hash of the row_commitment and column_commitment values.
fn send_attestation() {
    let attestation_hash = hash(blob_hash, da_node);
}
```
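A minimal sketch of the attestation signature $\sigma_j$ described above, assuming Blake2b for the hash (per the comment in the snippet) and using Ed25519 from the `cryptography` package purely as a stand-in for whatever signature scheme provider nodes actually use.
```python
from hashlib import blake2b
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_attestation(sk: Ed25519PrivateKey,
                     aggregated_column_commitment: bytes,
                     row_commitments: list[bytes]) -> bytes:
    # sigma_j = Sign(sk_j, hash(C_agg, r_1, ..., r_l)); Ed25519 is an assumed stand-in scheme.
    h = blake2b(digest_size=32)
    h.update(aggregated_column_commitment)
    for r in row_commitments:
        h.update(r)
    return sk.sign(h.digest())

# Example with hypothetical commitment bytes:
# sk = Ed25519PrivateKey.generate()
# attestation = sign_attestation(sk, c_agg_bytes, [r1_bytes, r2_bytes])
```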
The VID certificate is then sent to a block builder to be accepted through consensus,
as described in [Cryptarchia](#).
### Data Availability Sampling
Light nodes MAY choose to be a data availability sampling node.
This node can participate in any other Nomos service while providing verification of the data dispersal service.
For example, a dispersal client can send data to be made available through the base layer and
decide to perform data availability sampling to gain greater assurance that the data is available.
This reduces the potential threat of malicious or
faulty nodes not replicating data in their subnets.
The following steps are REQUIRED for a data availability sampling node to verify data dispersal:
1. Choose a random column value and row value from the base layer provider nodes.
The light node requests an opening of $C_t$ and $r_{t'}$.
2. Assuming provider node $node_t$, it calculates the $eval$ value for the `column_commitment`.
Also calculates the `row_commitment` value $r_{t'}$ and the proof of it.
Then sends these values to the sampling node.
${ \Large eval(\Phi,w^{t-1})\to C_t,\pi_{C_t} }$
3. The sampling node verifies the `row_commitment` and the `column_commitment` as follows:
${ \Large verify(C_{agg},C_{t},\pi_{C_t}) \to true/false }$
4. If this proof is true, the light node then requests an opening of the column commitment.
$node_t$ calculates the $eval$ value and sends it to the light node to be verified.
${ \Large eval(\theta_t,w^{t'-1})\to data_{t'}^{t},\pi_{data_{t'}^{t}} }$
${ \Large verify(C_t, data_{t'}^t, \pi_{data_{t'}^t})\to true/false }$
If this is true, it proves that the data chunk has been encoded correctly.
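A small sketch of how a sampling node might pick the random positions $(t', t)$ to query; the sample count and the use of Python's `secrets` module are illustrative choices, not protocol parameters.
```python
import secrets

def pick_samples(num_rows: int, num_columns: int, samples: int = 20):
    """Return random (row t', column t) pairs, 1-based, to request openings for."""
    return [
        (secrets.randbelow(num_rows) + 1, secrets.randbelow(num_columns) + 1)
        for _ in range(samples)
    ]

# Example: sample 20 positions from an l x 2k extended matrix (illustrative sizes).
positions = pick_samples(num_rows=1024, num_columns=8192)
```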
### Blockchain Data
The block data is stored by nodes within zones and can be retrieved using the [read API](#).
A block producer, which is also a base-layer provider node,
MUST choose the certificates to be added to a new block from the base-layer mempool in the order they were received.
A block contains a list of VID certificates.
Once a new block for a zone is created,
it MUST be sent to the base layer to be persisted for a short period of time.
A zone MAY choose to use alternative methods to persist block data, like decentralized storage solutions.
A provider node verifies the signatures within the block and checks that the corresponding data is also stored in the node's memory.
If the node has the same data,
the block SHOULD be persisted.
If the node does not have the data,
the block SHOULD be skipped.
Light nodes are not REQUIRED to download all the blockchain data belonging to a zone.
To fulfill this requirement,
zone participants MAY utilize the data availability of the base layer to retrieve block data and
pay for this resource with the native token.
Other nodes within the zones are REQUIRED to download block data for all preferred zones.
After the block producer verifies the VID certificates,
the following data is included in the hash for the next block in the zone and stored on the blockchain:
- CertificateID: A hash of the VID Certificate (including the C_agg and signatures from DA Nodes)
- AppID: The application identifier for the specific application (zone) the data chunk belongs to
- Index: A number for a particular sequence or position of the data chunk within the context of its AppID
Block producers receive certificates from zones along with metadata, `AppId` and
`Index`.
The metadata values are also stored in the block.
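The per-certificate record could look like the following sketch; only the three fields come from the list above, while the field types and the Python representation are assumptions.
```python
from dataclasses import dataclass

@dataclass
class CertificateRecord:
    certificate_id: bytes  # CertificateID: hash of the VID certificate (C_agg and DA node signatures)
    app_id: bytes          # AppID: identifies the zone the data chunk belongs to
    index: int             # Index: position of the data chunk within its AppID sequence
```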
### Data Availability Core API
Data availability nodes utilize `read` and `write` API functions.
The `read` function allows a node to query for information, and
the `write` function is used for communication between multiple services.
Data chunks are encoded as described above in [Encoding and Verification](#) and
delivered using the message-passing protocol described above in [Message Passing](#).
The API functions are detailed below:
```python
class Chunk:
    def __init__(self, data, app_id, index):
        self.data = data
        self.app_id = app_id
        self.index = index

class Metadata:
    def __init__(self, app_id, index):
        self.app_id = app_id
        self.index = index

class Certificate:
    def __init__(self, proof, chunks_info):
        self.proof = proof
        self.chunks_info = chunks_info

class Block:
    def __init__(self, certificates):
        self.certificates = certificates

def receive_chunk():
    # Receives new chunks from the network to be processed.
    # Returns a tuple of (Chunk, Metadata).
    chunk = Chunk(data="chunk_data", app_id="app_id", index="index")
    metadata = Metadata(app_id="app_id", index="index")
    return chunk, metadata

def receive_block():
    # Reads the latest blocks added to the blockchain.
    # Returns a Block.
    certificate = Certificate(proof="proof", chunks_info="chunks_info")
    block = Block(certificates=[certificate])
    return block

def write_to_cache(chunk, metadata):
    # Logic to write the chunk {metadata.index} to cache.
    pass

def write_to_storage(certificate):
    # Logic to write data to storage based on the certificate.proof.
    pass

def da_node():
    while True:
        # Receive a chunk and its metadata.
        chunk, metadata = receive_chunk()
        write_to_cache(chunk, metadata)
        # Receive a block.
        block = receive_block()
        for certificate in block.certificates:
            write_to_storage(certificate)
```
- `receive_chunk` - Receives new chunks to be processed
- `receive_block` - Receives latest blocks added to the blockchain
- `write_to_cache` - Function to store a newly received chunk in the cache
- `write_to_storage` - Used when a certificate for Zone's data is observed in a blockchain
### Security Considerations
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References

View File

@@ -40,7 +40,7 @@ This document describes how 2 peers communicate with each other to send messages
This protocol MAY use any key-exchange mechanism previously discussed -
1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
This protocol can provide end-to-end encryption to give peers a strong degree of privacy and security.
Public chat messages are publicly readable by anyone since there's no permission model for who is participating in a public chat.
@@ -67,7 +67,7 @@ It is handled by the key-exchange protocol used. For example,
1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md), the session management is described in [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise.md), the session management is described in [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise-sessions/noise-sessions.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md), the session management is described in [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
## Negotiation of a 1:1 chat amongst multiple participants (group chat)
@@ -203,7 +203,7 @@ To change the display image of the group chat, group admins MUST use an `IMAGE_C
## Security Considerations
1. Inherits the security considerations of the key-exchange mechanism used, e.g., [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) or [WAKU2-NOISE](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise.md)
1. Inherits the security considerations of the key-exchange mechanism used, e.g., [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md) or [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
## Copyright
@@ -212,10 +212,10 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## References
1. [53/WAKU2-X3DH](../../waku/standards/application/53/x3dh.md)
2. [35/WAKU2-NOISE](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise.md)
2. [WAKU2-NOISE](https://github.com/waku-org/specs/blob/master/standards/application/noise.md)
3. [65/STATUS-ACCOUNT](../65/account-address.md)
4. [54/WAKU2-X3DH-SESSIONS](../../waku/standards/application/54/x3dh-sessions.md)
5. [37/WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise-sessions/noise-sessions.md)
5. [WAKU2-NOISE-SESSIONS](https://github.com/waku-org/specs/blob/master/standards/application/noise-sessions.md)
6. [56/STATUS-COMMUNITIES](../56/communities.md)
7. [chat_message.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/chat_message.proto#L1)
8. [emoji_reaction.proto](https://github.com/status-im/status-go/blob/5fd9e93e9c298ed087e6716d857a3951dbfb3c1e/protocol/protobuf/emoji_reaction.proto)

View File

@@ -321,7 +321,7 @@ There are two scenarios in which member nodes can receive such a magnet link mes
2. The member node requests messages for a time range of up to 30 days from store nodes (this is the case when a new community member joins a community)
### Downloading message archives
When member nodes receive a message with a `CommunityMessageHistoryArchive` ([62/STATUS-PAYLOAD](../62/payload.md)) from the aforementioned channnel, they MUST extract the `magnet_uri` and pass it to their underlying BitTorrent client so they can fetch the latest message history archive index, which is the `index` file of the torrent (see [Creating message archive torrents](#creating-message-archive-torrents)).
When member nodes receive a message with a `CommunityMessageHistoryArchive` ([62/STATUS-PAYLOADS](../62/payloads.md)) from the aforementioned channnel, they MUST extract the `magnet_uri` and pass it to their underlying BitTorrent client so they can fetch the latest message history archive index, which is the `index` file of the torrent (see [Creating message archive torrents](#creating-message-archive-torrents)).
Due to the nature of distributed systems, there's no guarantee that a received message is the "last" message. This is especially true when member nodes request historical messages from store nodes.
@@ -389,4 +389,4 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
* [Extensions for Peers to Send Metadata Files](https://www.bittorrent.org/beps/bep_0009.html)
* [org channels spec](../56/communities.md)
* [14/WAKU2-MESSAGE](../../waku/standards/core/14/message.md)
* [62/STATUS-PAYLOAD](../62/payload.md)
* [62/STATUS-PAYLOADS](../62/payloads.md)

View File

@@ -9,7 +9,7 @@ contributors:
`25/LIBP2P-DNS-DISCOVERY` specifies a scheme to implement [`libp2p`](https://libp2p.io/) peer discovery via DNS for Waku v2.
The generalised purpose is to retrieve an arbitrarily long, authenticated, updateable list of [`libp2p` peers](https://docs.libp2p.io/concepts/peer-id/) to bootstrap connection to a `libp2p` network.
Since [`10/WAKU2`](https://rfc.vac.dev/spec/10/) currently specifies use of [`libp2p` peer identities](https://docs.libp2p.io/concepts/peer-id/),
Since [`10/WAKU2`](../../waku/standards/core/10/waku2.md) currently specifies use of [`libp2p` peer identities](https://docs.libp2p.io/concepts/peer-id/),
this method is suitable for a new Waku v2 node to discover other Waku v2 nodes to connect to.
This specification is largely based on [EIP-1459](https://eips.ethereum.org/EIPS/eip-1459),
@@ -126,7 +126,7 @@ Copyright and related rights waived via
# References
1. [`10/WAKU2`](https://rfc.vac.dev/spec/10/)
1. [`10/WAKU2`](../../waku/standards/core/10/waku2.md)
1. [EIP-1459: Client Protocol](https://eips.ethereum.org/EIPS/eip-1459#client-protocol)
1. [EIP-1459: Node Discovery via DNS ](https://eips.ethereum.org/EIPS/eip-1459)
1. [`libp2p`](https://libp2p.io/)

View File

@@ -52,7 +52,7 @@ Depending on the application requirements, the registration can be implemented i
- centralized registrations, by using a central server
- decentralized registrations, by using a smart contract
What is important is that the users' identity commitments (explained in section [User Indetity](#user-identity)) are stored in a Merkle tree,
What is important is that the users' identity commitments (explained in section [User Identity](#user-identity)) are stored in a Merkle tree,
and the users can obtain a Merkle proof proving that they are part of the group.
Also depending on the application requirements,

View File

@@ -17,7 +17,7 @@ Therefore a decentralized approach to secure communication becomes increasingly
offering a robust solution to address these challenges.
This specification outlines a private messaging service using the Ethereum blockchain as authentication service.
Rooted in the existing [model](https://rfc.vac.dev/spec/20/),
Rooted in the existing [model](../../waku/standards/application/20/toy-eth-pm.md),
this proposal addresses the deficiencies related to forward privacy and authentication inherent in the current framework.
The specification is divided into 3 sections:

View File

@@ -0,0 +1,105 @@
---
title: RLN-STEALTH-COMMITMENTS
name: RLN Stealth Commitment Usage
category: Standards Track
editor: Aaryamann Challani <aaryamann@status.im>
contributors:
- Jimmy Debe <jimmy@status.im>
---
## Abstract
This specification describes the usage of stealth commitments to add prospective users to a network-governed [32/RLN-V1](./32/rln-v1.md) membership set.
## Motivation
When [32/RLN-V1](./32/rln-v1.md) is enforced in [10/Waku2](../waku/standards/core/10/waku2.md),
all users are required to register to a membership set.
The membership set will store user identities allowing the secure interaction within an application.
Forcing a user to do an on-chain transaction to join a membership set is an onboarding friction,
and some projects may be opposed to this method.
To improve the user experience,
stealth commitments can be used by a counterparty to register identities on the user's behalf,
while maintaining the user's anonymity.
This document specifies a privacy-preserving mechanism,
allowing a counterparty to utilize [32/RLN-V1](./32/rln-v1.md) to register an `identityCommitment` on-chain.
Counterparties will be able to register members to a RLN membership set without exposing the user's private keys.
## Background
The [32/RLN-V1](./32/rln-v1.md) protocol
consists of a smart contract that stores an `identityCommitment` in a membership set.
In order for a user to join the membership set,
the user is required to make a transaction on the blockchain.
A set of public keys is used to compute a stealth commitment for a user,
as described in [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564).
This specification is an implementation of the [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564) scheme,
tailored to the curve that is used in the [32/RLN-V1](./32/rln-v1.md) protocol.
This can be used in a couple of ways in applications:
1. Applications can add users to the [32/RLN-V1](./32/rln-v1.md) membership set in a batch.
2. Users of the application can register other users to the [32/RLN-V1](./32/rln-v1.md) membership set.
This is useful when the prospective user does not have access to funds on the network that [32/RLN-V1](./32/rln-v1.md) is deployed on.
## Wire Format Specification
The two parties, the requester and the receiver, MUST exchange the following information:
```protobuf
message Request {
// The spending public key of the requester
bytes spending_public_key = 1;
// The viewing public key of the requester
bytes viewing_public_key = 2;
}
```
### Generate Stealth Commitment
The application or user SHOULD generate a `stealth_commitment` after a request to do so is received.
This commitment MAY be inserted into the corresponding application membership set.
Once the membership set is updated, the receiver SHOULD exchange the following as a response to the request:
```protobuf
message Response {
// The view tag used to check if the stealth_commitment belongs to the requester
bytes view_tag = 2;
// The stealth commitment for the requester
bytes stealth_commitment = 3;
// The ephemeral public key used to generate the commitment
bytes ephemeral_public_key = 4;
}
```
The receiver MUST generate an `ephemeral_public_key`, `view_tag` and `stealth_commitment`.
This will be used to check the stealth commitment used to register to the membership set,
and the user MUST be able to check ownership with their `viewing_public_key`.
## Implementation Suggestions
An implementation of the Stealth Address scheme is available in the [erc-5564-bn254](https://github.com/rymnc/erc-5564-bn254) repository,
which also includes a test to generate a stealth commitment for a given user.
## Security/Privacy Considerations
This specification inherits the security and privacy considerations of the [Stealth Address](https://eips.ethereum.org/EIPS/eip-5564) scheme.
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [10/Waku2](../waku/standards/core/10/waku2.md)
- [32/RLN-V1](./32/rln-v1.md)
- [ERC-5564](https://eips.ethereum.org/EIPS/eip-5564)

View File

@@ -1,4 +1,4 @@
## Waku RFCs
# Waku RFCs
Waku builds a family of privacy-preserving, censorship-resistant communication protocols for web3 applications.

View File

@@ -2,7 +2,7 @@
slug: 16
title: 16/WAKU2-RPC
name: Waku v2 RPC API
status: draft
status: deprecated
tags: waku-core
editor: Hanno Cornelius <hanno@status.im>
---
@@ -176,7 +176,7 @@ The `get_waku_v2_relay_v1_messages` method returns a list of messages that were
## Relay Private API
The Private API provides functionality to encrypt/decrypt `WakuMessage` payloads using either symmetric or asymmetric cryptography. This allows backwards compatibility with [Waku v1 nodes](../6/waku1.md).
The Private API provides functionality to encrypt/decrypt `WakuMessage` payloads using either symmetric or asymmetric cryptography. This allows backwards compatibility with [Waku v1 nodes](../../legacy/6/waku1.md).
It is the API client's responsibility to keep track of the keys used for encrypted communication. Since keys must be cached by the client and provided to the node to encrypt/decrypt payloads, a Private API SHOULD NOT be exposed on non-local or untrusted nodes.
### Types

View File

@@ -2,7 +2,7 @@
slug: 18
title: 18/WAKU2-SWAP
name: Waku SWAP Accounting
status: draft
status: deprecated
editor: Oskar Thorén <oskarth@titanproxy.com>
contributor: Ebube Ud <ebube@status.im>
---
@@ -141,7 +141,7 @@ In the soft phase only accounting is performed, without consequence for the
peers. No disconnect or sending of cheques is performed at this tage.
SWAP protocol is performed in conjunction with another request-reply protocol to account for its usage.
It SHOULD be done for [13/WAKU2-STORE](../../standards/core/13/store.md)
It SHOULD be done for [13/WAKU2-STORE](../../core/13/store.md)
and it MAY be done for other request/reply protocols.
A client SHOULD log accounting state per peer
@@ -173,7 +173,7 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
1. [Prisoner's Dilemma](https://en.wikipedia.org/wiki/Prisoner%27s_dilemma)
2. [Axelrod - Evolution of Cooperation (book)](https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation)
3. [Book of Swarm](https://web.archive.org/web/20210126130038/https://gateway.ethswarm.org/bzz/latest.bookofswarm.eth)
4. [13/WAKU2-STORE](../../standards/core/13/store.md)
4. [13/WAKU2-STORE](../../core/13/store.md)
<!--

View File

@@ -1,4 +1,4 @@
## Deprecated RFCs
# Deprecated RFCs
Deprecated specifications are no longer used in Waku products.
This subdirectory is for achrive purpose and

View File

@@ -89,7 +89,7 @@ This is used for content based filtering.
See [14/WAKU2-MESSAGE spec](../../standards/core/14/message.md) for where this is specified.
Note that this doesn't impact routing of messages between relaying nodes,
but it does impact how request/reply protocols such as
[12/WAKU2-FILTER](../../standards/core/14/filter.md) and [13/WAKU2-STORE](../../standards/core/13/store.md) are used.
[12/WAKU2-FILTER](../../standards/core/12/filter.md) and [13/WAKU2-STORE](../../standards/core/13/store.md) are used.
This is especially useful for nodes that have limited bandwidth,
and only want to pull down messages that match this given content topic.
@@ -163,7 +163,7 @@ Copyright and related rights waived via
* [RELAY-SHARDING](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/relay-sharding.md)
* [Ethereum 2 P2P spec](https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/p2p-interface.md#topics-and-messages)
* [14/WAKU2-MESSAGE](../../standards/core/14/message.md)
* [12/WAKU2-FILTER](../../standards/core/14/filter.md)
* [12/WAKU2-FILTER](../../standards/core/12/filter.md)
* [13/WAKU2-STORE](../../standards/core/13/store.md)
* [6/WAKU1](../../deprecated/5/waku0.md)
* [15/WAKU-BRIDGE](../../standards/core/15/bridge.md)

View File

@@ -74,7 +74,7 @@ This requires keeping track of the [last time each peer was disconnected](#track
A Waku v2 client MAY choose to implement a keep-alive mechanism to certain peers.
If a client chooses to implement keep-alive on a connection,
it SHOULD do so by sending periodic [libp2p pings](https://docs.libp2p.io/concepts/protocols/#ping) as per `10/WAKU2` [client recommendations](../../standards/core/10/WAKU2.md/#recommendations-for-clients).
it SHOULD do so by sending periodic [libp2p pings](https://docs.libp2p.io/concepts/protocols/#ping) as per `10/WAKU2` [client recommendations](../../standards/core/10/waku2.md/#recommendations-for-clients).
The recommended period between pings SHOULD be _at most_ 50% of the shortest idle connection timeout for the specific client and transport.
For example, idle TCP connections often times out after 10 to 15 minutes.
@@ -96,4 +96,4 @@ Copyright and related rights waived via
- [`18/WAKU2-SWAP`](../../standards/application/18/swap.md)
- [backing off period](https://github.com/libp2p/specs/blob/master/pubsub/gossipsub/gossipsub-v1.1.md#prune-backoff-and-peer-exchange)
- [libp2p pings](https://docs.libp2p.io/concepts/protocols/#ping)
- [`10/WAKU2` client recommendations](https://rfc.vac.dev/spec/10/#recommendations-for-clients)
- [`10/WAKU2` client recommendations](../../standards/core/10/waku2.md/#recommendations-for-clients)

View File

@@ -46,10 +46,10 @@ The proposed protocol MUST adhere to the following design requirements:
2. Bob is willing to participate to Eth-PM, and publishes `B'`,
3. Bob's ownership of `B'` MUST be verifiable,
4. Alice wants to send message `M` to Bob,
5. Bob SHOULD be able to get `M` using [10/WAKU2 spec](../../standards/core/10/waku2.md),
5. Bob SHOULD be able to get `M` using [10/WAKU2 spec](../../core/10/waku2.md),
6. Participants only have access to their Ethereum Wallet via the Web3 API,
7. Carole MUST NOT be able to read `M`'s content even if she is storing it or relaying it,
8. [Waku Message Version 1](../../standards/application/26/payload.md) Asymmetric Encryption is used for encryption purposes.
8. [Waku Message Version 1](../26/payload.md) Asymmetric Encryption is used for encryption purposes.
## Limitations
@@ -155,7 +155,7 @@ it is not enough in itself to deduce Bob's Public Key.
This is why the protocol dictates that Bob MUST send his Public Key first,
and Alice MUST receive it before she can send him a message.
Moreover, nim-waku, the reference implementation of [13/WAKU2-STORE](../../standards/core/13/store.md)),
Moreover, nim-waku, the reference implementation of [13/WAKU2-STORE](../../core/13/store.md),
stores messages for a maximum period of 30 days.
This means that Bob would need to broadcast his public key at least every 30 days to be reachable.
@@ -202,10 +202,10 @@ Alice MAY monitor the Waku v2 to collect Ethereum Address and Encryption Public
Alice SHOULD verify that the `signature`s of `PublicKeyMessage`s she receives are valid as per EIP-712.
She SHOULD drop any message without a signature or with an invalid signature.
Using Bob's Encryption Public Key, retrieved via [10/WAKU2](../../standards/core/10/waku2.md), Alice MAY now send an encrypted message to Bob.
Using Bob's Encryption Public Key, retrieved via [10/WAKU2 spec](../../core/10/waku2.md), Alice MAY now send an encrypted message to Bob.
If she wishes to do so, Alice MUST encrypt her message `M` using Bob's Encryption Public Key `B'`,
as per [26/WAKU-PAYLOAD Asymmetric Encryption specs](../../standards/application/26/payload.md/#asymmetric).
as per [26/WAKU-PAYLOAD Asymmetric Encryption specs](../26/payload.md/#asymmetric).
Alice SHOULD now publish this message on the Private Message content topic.
@@ -214,12 +214,12 @@ Alice SHOULD now publish this message on the Private Message content topic.
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [10/WAKU2 spec](../../standards/core/10/waku2.md)
- [Waku Message Version 1](../../standards/application/26/payload.md)
- [10/WAKU2 spec](../../core/10/waku2.md)
- [Waku Message Version 1](../26/payload.md)
- [X3DH](https://www.signal.org/docs/specifications/x3dh/)
- [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Status secure transport spec](https://specs.status.im/spec/5)
- [EIP-712](https://eips.ethereum.org/EIPS/eip-712)
- [13/WAKU2-STORE](../../standards/core/13/store.md))
- [13/WAKU2-STORE](../../core/13/store.md)
- [The Graph](https://thegraph.com/)

View File

@@ -7,14 +7,14 @@ editor: Sanaz Taheri <sanaz@status.im>
contributors:
---
The reliability of [13/WAKU2-STORE](../../standards/core/13/store.md) protocol heavily relies on the fact that full nodes i.e., those who persist messages have high availability and uptime and do not miss any messages.
The reliability of [13/WAKU2-STORE](../../core/13/store.md) protocol heavily relies on the fact that full nodes i.e., those who persist messages have high availability and uptime and do not miss any messages.
If a node goes offline, then it will risk missing all the messages transmitted in the network during that time.
In this specification, we provide a method that makes the store protocol resilient in presence of faulty nodes.
Relying on this method, nodes that have been offline for a time window will be able to fix the gap in their message history when getting back online.
Moreover, nodes with lower availability and uptime can leverage this method to reliably provide the store protocol services as a full node.
## Method description
As the first step towards making the [13/WAKU2-STORE](../../standards/core/13/store.md) protocol fault-tolerant, we introduce a new type of time-based query through which nodes fetch message history from each other based on their desired time window.
As the first step towards making the [13/WAKU2-STORE](../../core/13/store.md) protocol fault-tolerant, we introduce a new type of time-based query through which nodes fetch message history from each other based on their desired time window.
This method operates based on the assumption that the querying node knows some other nodes in the store protocol which have been online for that targeted time window.
## Security Consideration
@@ -23,7 +23,7 @@ The main security consideration to take into account while using this method is
This will gradually result in the extraction of the node's activity pattern which can lead to inference attacks.
## Wire Specification
We extend the [HistoryQuery](../../standards/core/13/store.md/#payloads) protobuf message with two fields of `start_time` and `end_time` to signify the time range to be queried.
We extend the [HistoryQuery](../../core/13/store.md/#payloads) protobuf message with two fields of `start_time` and `end_time` to signify the time range to be queried.
### Payloads
@@ -46,10 +46,10 @@ message HistoryQuery {
RPC call to query historical messages.
- `start_time`: this field MAY be filled out to signify the starting point of the queried time window.
This field holds the Unix epoch time in nanoseconds.
The `messages` field of the corresponding [`HistoryResponse`](../../standards/core/13/store.md/#HistoryResponse) MUST contain historical waku messages whose [`timestamp`](../../standards/core/14/message.md/#Payloads) is larger than or equal to the `start_time`.
The `messages` field of the corresponding [`HistoryResponse`](../../core/13/store.md/#HistoryResponse) MUST contain historical waku messages whose [`timestamp`](../../core/14/message.md/#Payloads) is larger than or equal to the `start_time`.
- `end_time` this field MAY be filled out to signify the ending point of the queried time window.
This field holds the Unix epoch time in nanoseconds.
The `messages` field of the corresponding [`HistoryResponse`](../../standards/core/13/store.md/#HistoryResponse) MUST contain historical waku messages whose [`timestamp`](../../standards/core/14/message.md/#Payloads) is less than or equal to the `end_time`.
The `messages` field of the corresponding [`HistoryResponse`](../../core/13/store.md/#HistoryResponse) MUST contain historical waku messages whose [`timestamp`](../../core/14/message.md/#Payloads) is less than or equal to the `end_time`.
A time-based query is considered valid if its `end_time` is larger than or equal to the `start_time`.
Queries that do not adhere to this condition will not get through e.g. an open-end time query in which the `start_time` is given but no `end_time` is supplied is not valid.
@@ -61,7 +61,7 @@ In order to account for nodes asynchrony, and assuming that nodes may be out of
That is if the original window is [`l`,`r`] then the history query SHOULD be made for `[start_time: l - 20s, end_time: r + 20s]`.
Note that `HistoryQuery` preserves `AND` operation among the queried attributes.
As such, The `messages` field of the corresponding [`HistoryResponse`](../../standards/core/13/store.md/#HistoryResponse) MUST contain historical waku messages that satisfy the indicated `pubsubtopic` AND `contentFilters` AND the time range [`start_time`, `end_time`].
As such, The `messages` field of the corresponding [`HistoryResponse`](../../core/13/store.md/#HistoryResponse) MUST contain historical waku messages that satisfy the indicated `pubsubtopic` AND `contentFilters` AND the time range [`start_time`, `end_time`].
## Copyright
@@ -70,5 +70,5 @@ Copyright and related rights waived via
## References
- [13/WAKU2-STORE](../../standards/core/13/store.md)
- [13/WAKU2-STORE](../../core/13/store.md)
- [`timestamp`](../../standards/core/14/message.md/#Payloads)

View File

@@ -8,9 +8,9 @@ contributors:
---
This specification describes how Waku provides confidentiality, authenticity, and integrity, as well as some form of unlinkability.
Specifically, it describes how encryption, decryption and signing works in [6/WAKU1](../../standards/core/6/waku1.md) and in [10/WAKU2 spec](../../standards/core/10/waku2.md) with [14/WAKU-MESSAGE version 1](../../standards/core/14/message.md/#version1).
Specifically, it describes how encryption, decryption and signing works in [6/WAKU1](../../legacy/6/waku1.md) and in [10/WAKU2 spec](../../core/10/waku2.md) with [14/WAKU-MESSAGE version 1](../../core/14/message.md/#version1).
This specification effectively replaces [7/WAKU-DATA](../../standards/application/7/DATA.md) as well as [6/WAKU1 Payload encryption](../../standards/core/6/waku1.md/#payload-encryption) but written in a way that is agnostic and self-contained for Waku v1 and Waku v2.
This specification effectively replaces [7/WAKU-DATA](../../legacy/7/data.md) as well as [6/WAKU1 Payload encryption](../../legacy/6/waku1.md/#payload-encryption) but written in a way that is agnostic and self-contained for Waku v1 and Waku v2.
Large sections of the specification originate from [EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627) as well from [RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption) with some modifications.
@@ -42,9 +42,9 @@ ECIES is using the following cryptosystem:
## Specification
For 6/WAKU1, the `data` field is used in the `waku envelope`, and the field MAY contain the encrypted payload.
For [6/WAKU1](../../legacy/6/waku1.md), the `data` field is used in the `waku envelope`, and the field MAY contain the encrypted payload.
For 10/WAKU2, the `payload` field is used in `WakuMessage` and MAY contain the encrypted payload.
For [10/WAKU2 spec](../../core/10/waku2.md), the `payload` field is used in `WakuMessage` and MAY contain the encrypted payload.
The fields that are concatenated and encrypted as part of the `data` (Waku v1) / `payload` (Waku v2) field are:
- flags
@@ -142,10 +142,10 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## References
1. [6/WAKU1](../../standards/core/6/waku1.md)
2. [10/WAKU2 spec](../../standards/core/10/waku2.md)
3. [14/WAKU-MESSAGE version 1](../../standards/core/14/message.md/#version1)
4. [7/WAKU-DATA](../../standards/application/7/DATA.md)
1. [6/WAKU1](../../legacy/6/waku1.md)
2. [10/WAKU2 spec](../../core/10/waku2.md)
3. [14/WAKU-MESSAGE version 1](../../core/14/message.md/#version1)
4. [7/WAKU-DATA](../../legacy/7/data.md)
5. [EIP-627: Whisper spec](https://eips.ethereum.org/EIPS/eip-627)
6. [RLPx Transport Protocol spec (ECIES encryption)](https://github.com/ethereum/devp2p/blob/master/rlpx.md#ecies-encryption)
7. [Status 5/SECURE-TRANSPORT](https://specs.status.im/spec/5)

View File

@@ -54,7 +54,7 @@ Types used in this specification are defined using the [Protobuf](https://develo
End-to-end encryption (E2EE) takes place between two clients.
The main cryptographic protocol is a Double Ratchet protocol, which is derived from the [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html), using a different ratchet.
[The Waku v2 protocol](../../standards/core/10/waku2.md) subsequently encrypts the message payload, using symmetric key encryption.
[The Waku v2 protocol](../../core/10/waku2.md) subsequently encrypts the message payload, using symmetric key encryption.
Furthermore, the concept of prekeys (through the use of [X3DH](https://signal.org/docs/specifications/x3dh/)) is used to allow the protocol to operate in an asynchronous environment.
It is not necessary for two parties to be online at the same time to initiate an encrypted conversation.
@@ -230,7 +230,7 @@ The message key MUST be used to encrypt the next message to be sent.
1. Inherits the security considerations of [X3DH](https://signal.org/docs/specifications/x3dh/#security-considerations) and [Double Ratchet](https://signal.org/docs/specifications/doubleratchet/#security-considerations).
2. Inherits the security considerations of the [Waku v2 protocol](../../standards/core/10/waku2.md).
2. Inherits the security considerations of the [Waku v2 protocol](../../core/10/waku2.md).
3. The protocol is designed to be used in a decentralized manner, however, it is possible to use a centralized server to serve prekey bundles. In this case, the server is trusted.
@@ -249,7 +249,7 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
- [Signal's Double Ratchet](https://signal.org/docs/specifications/doubleratchet/)
- [Protobuf](https://developers.google.com/protocol-buffers/)
- [Off-the-Record protocol](https://otr.cypherpunks.ca/Protocol-v3-4.1.1.html)
- [The Waku v2 protocol](../../standards/core/10/waku2.md)
- [The Waku v2 protocol](../../core/10/waku2.md)
- [HKDF](https://www.rfc-editor.org/rfc/rfc5869)
- [2/ACCOUNT](https://specs.status.im/spec/2#x3dh-prekey-bundles)
- [reference wire format](https://github.com/status-im/status-go/blob/a904d9325e76f18f54d59efc099b63293d3dcad3/services/shhext/chat/encryption.proto#L12)

View File

@@ -19,7 +19,7 @@ contributors:
This document specifies how to manage sessions based on an X3DH key exchange.
This includes how to establish new sessions, how to re-establish them, how to maintain them, and how to close them.
[53/WAKU2-X3DH](../../standards/application/53/X3DH.md) specifies the Waku `X3DH` protocol for end-to-end encryption.
[53/WAKU2-X3DH](../53/x3dh.md) specifies the Waku `X3DH` protocol for end-to-end encryption.
Once two peers complete an X3DH handshake, they SHOULD establish an X3DH session.
## Session Establishment
@@ -146,7 +146,7 @@ In this case an empty message containing bundle information MUST be sent back, w
## Security Considerations
1. Inherits all security considerations from [53/WAKU2-X3DH](../../standards/application/53/X3DH.md).
1. Inherits all security considerations from [53/WAKU2-X3DH](../53/x3dh.md).
### Recommendations
@@ -159,6 +159,6 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## References
1. [53/WAKU2-X3DH](../../standards/application/53/X3DH.md)
1. [53/WAKU2-X3DH](../53/x3dh.md)
2. [Signal's Sesame Algorithm](https://signal.org/docs/specifications/sesame/)

View File

@@ -3,12 +3,13 @@ slug: 10
title: 10/WAKU2
name: Waku v2
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Sanaz Taheri <sanaz@status.im>
- Hanno Cornelius <hanno@status.im>
- Reeshav Khan <reeshav@status.im>
- Daniel Kaiser <danielkaiser@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---
## Abstract
@@ -23,7 +24,7 @@ These capabilities are things such as:
This makes Waku ideal for running a p2p protocol on mobile and in similarly restricted environments.
Historically, it has its roots in [6/WAKU1](../6/waku1.md),
Historically, it has its roots in [6/WAKU1](../../legacy/6/waku1.md),
which stems from [Whisper](https://eips.ethereum.org/EIPS/eip-627), originally part of the Ethereum stack.
However, Waku v2 acts more as a thin wrapper for PubSub and has a different API.
It is implemented in an iterative manner where initial focus is on porting essential functionality to libp2p.
@@ -211,7 +212,7 @@ This is used to fetch historical messages for mostly offline devices.
See [13/WAKU2-STORE spec](../13/store.md) spec for more details.
There is also an experimental fault-tolerant addition to the store protocol that relaxes the high availability requirement.
See [21/WAKU2-FT-STORE](../../application/21/ft-store.md)
See [21/WAKU2-FT-STORE](../../application/21/fault-tolerant-store.md)
#### Content filtering
@@ -244,9 +245,9 @@ The PubSub topics `pubtopic1` and `pubtopic2` is used for routing and indicates
Ditto for [13/WAKU2-STORE](../13/store.md) where it indicates that these messages are persisted on that node.
1. Node A creates a WakuMessage `msg1` with a ContentTopic `contentTopic1`.
See [14/WAKU2-MESSAGE](../core/14/message.md) for more details.
If WakuMessage version is set to 1, we use the [6/WAKU1](../6/waku1.md) compatible `data` field with encryption.
See [7/WAKU-DATA](../../application/7/data.md) for more details.
See [14/WAKU2-MESSAGE](../14/message.md) for more details.
If WakuMessage version is set to 1, we use the [6/WAKU1](../../legacy/6/waku1.md) compatible `data` field with encryption.
See [7/WAKU-DATA](../../legacy/7/data.md) for more details.
2. Node F requests to get messages filtered by PubSub topic `pubtopic1` and ContentTopic `contentTopic1`.
Node D subscribes F to this filter and will in the future forward messages that match that filter.
@@ -362,10 +363,10 @@ This includes Waku v1 specs, as they are used for bridging between the two netwo
| Spec | nim-waku (Nim) | go-waku (Go) | js-waku (Node JS) | js-waku (Browser JS) |
| ---- | -------------- | ------------ | ----------------- | -------------------- |
|[6/WAKU1](../6/waku1.md)|✔|||
|[7/WAKU-DATA](../7/data.md)|✔|✔||
|[8/WAKU-MAIL](../../application/8/mail.md)|✔|||
|[9/WAKU-RPC](../9/waku2-rpc.md)|✔|||
|[6/WAKU1](../../legacy/6/waku1.md)|✔|||
|[7/WAKU-DATA](../../legacy/7/data.md)|✔|✔||
|[8/WAKU-MAIL](../../legacy/8/mail.md)|✔|||
|[9/WAKU-RPC](../../legacy/9/rpc.md)|✔|||
|[10/WAKU2](../10/waku2.md)|✔|🚧|🚧|🚧|
|[11/WAKU2-RELAY](../11/relay.md)|✔|✔|✔|✔|
|[12/WAKU2-FILTER](../12/filter.md)|✔|✔||
@@ -393,7 +394,7 @@ To implement a minimal Waku v2 client, we recommend implementing the following s
To get compatibility with Waku v1:
- [7/WAKU-DATA](../7/data.md)
- [7/WAKU-DATA](../../legacy/7/data.md)
- [14/WAKU2-MESSAGE](../14/message.md) - version 1 (encrypted with `7/WAKU-DATA`)
For an interoperable keep-alive mechanism:
@@ -429,7 +430,7 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
1. [libp2p specs](https://github.com/libp2p/specs)
2. [6/WAKU1](../6/waku1.md)
2. [6/WAKU1](../../legacy/6/waku1.md)
3. [Whisper spec (EIP627)](https://eips.ethereum.org/EIPS/eip-627)
@@ -473,7 +474,7 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
23. [19/WAKU2-LIGHTPUSH](../19/lightpush.md)
24. [7/WAKU-DATA](../../application/7/data.md)
24. [7/WAKU-DATA](../../legacy/7/data.md)
25. [15/WAKU-BRIDGE](../15/bridge.md)
@@ -487,9 +488,9 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
30. [js-waku (NodeJS and Browser)](https://github.com/status-im/js-waku/)
31. [8/WAKU-MAIL](../../application/8/mail.md)
31. [8/WAKU-MAIL](../../legacy/8/mail.md)
32. [9/WAKU-RPC](../9/waku2-rpc.md)
32. [9/WAKU-RPC](../../legacy/9/rpc.md)
33. [16/WAKU2-RPC](../16/rpc.md)

View File

@@ -4,11 +4,13 @@ title: 13/WAKU2-STORE
name: Waku v2 Store
status: draft
tags: waku-core
editor: Sanaz Taheri <sanaz@status.im>
editor: Simon-Pierre Vivier <simvivier@status.im>
contributors:
- Dean Eigenmann <dean@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
- Aaryamann Challani <aaryamann@status.im>
- Sanaz Taheri <sanaz@status.im>
- Hanno Cornelius <hanno@status.im>
---
## Abstract

View File

@@ -5,12 +5,13 @@ name: Waku v2 Message
status: draft
category: Standards Track
tags: waku/core-protocol
editor: Oskar Thorén <oskarth@titanproxy.com>
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Sanaz Taheri <sanaz@status.im>
- Aaryamann Challani <aaryamann@status.im>
- Lorenzo Delgado <lorenzo@status.im>
- Abhimanyu Rawat <abhi@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---
## Abstract
@@ -28,7 +29,7 @@ When sending messages over Waku, there are multiple requirements:
- One may have a separate encryption layer as part of the application.
- One may want to provide efficient routing for resource-restricted devices.
- One may want to provide compatibility with [Waku v1 envelopes](../6/waku1.md).
- One may want to provide compatibility with [Waku v1 envelopes](../../legacy/6/waku1.md).
- One may want encrypted payloads by default.
- One may want to provide unlinkability to get metadata protection.
@@ -195,7 +196,7 @@ Therefore, because message timestamps arent independently verified, this attr
It should not solely be relied upon for operations such as message ordering.
For example, a malicious actor can arbitrarily set the `timestamp` of a Waku message to a high value so that it always shows up as the most recent message in a chat application.
Applications using the Waku message `timestamp` attribute are recommended to use additional methods for more robust message ordering.
An example of how to deal with message ordering against adversarial message timestamps can be found in the Status protocol, see [6/PAYLOADS](../6/waku1.md/#clock-vs-timestamp-and-message-ordering).
An example of how to deal with message ordering against adversarial message timestamps can be found in the Status protocol, see [62/STATUS-PAYLOADS](../../../../status/62/payloads.md/#clock-vs-timestamp-and-message-ordering).
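As an illustration of such an additional method, the referenced Status payloads approach combines wall-clock time with a Lamport-style logical clock; a minimal sketch of that idea (not normative for Waku, with illustrative names) is shown below.

```go
package main

import (
	"fmt"
	"time"
)

// nextClock sketches a Lamport-style clock in the spirit of the Status
// payloads approach referenced above: the new clock value is strictly larger
// than any clock observed so far, and at least the current wall-clock time.
func nextClock(lastObserved uint64, now time.Time) uint64 {
	wall := uint64(now.UnixMilli())
	if lastObserved+1 > wall {
		return lastObserved + 1
	}
	return wall
}

func main() {
	last := uint64(0)
	for i := 0; i < 3; i++ {
		last = nextClock(last, time.Now())
		fmt.Println(last) // monotonically increasing, roughly wall-clock milliseconds
	}
}
```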
### Reliability of the `ephemeral` attribute
@@ -210,8 +211,8 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## References
- [6/WAKU1](/spec/6/)
- [6/WAKU1](../../legacy/6/waku1.md)
- [Google Protocol buffers v3](https://developers.google.com/protocol-buffers/)
- [26/WAKU-PAYLOAD](../../application/26/payload.md)
- [35/WAKU2-NOISE](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/noise.md)
- [6/PAYLOADS](https://specs.status.im/spec/6#clock-vs-timestamp-and-message-ordering)
- [62/STATUS-PAYLOADS](../../../../status/62/payloads.md/#clock-vs-timestamp-and-message-ordering)

View File

@@ -4,10 +4,12 @@ title: 17/WAKU2-RLN-RELAY
name: Waku v2 RLN Relay
status: draft
tags: waku-core
editor: Sanaz Taheri <sanaz@status.im>
editor: Alvaro Revuelta <alvaro@status.im>
contributors:
- Oskar Thorén <oskarth@titanproxy.com>
- Aaryamann Challani <aaryamann@status.im>
- Sanaz Taheri <sanaz@status.im>
- Hanno Cornelius <hanno@status.im>
---
The `17/WAKU2-RLN-RELAY` protocol is an extension of `11/WAKU2-RELAY` which additionally provides spam protection using [Rate Limiting Nullifiers (RLN)](../../../../vac/32/rln-v1.md).
@@ -19,16 +21,16 @@ Spammers are also financially punished and removed from the system.
<!-- **Protocol identifier***: `/vac/waku/waku-rln-relay/2.0.0-alpha1` -->
# Motivation
## Motivation
In open and anonymous p2p messaging networks, one big problem is spam resistance.
Existing solutions, such as Whisper's proof of work, are computationally expensive and hence not suitable for resource-limited nodes.
Other reputation-based approaches might not be desirable, due to issues around arbitrary exclusion and privacy.
We augment the [`11/WAKU2-RELAY`](/spec/11) protocol with a novel construct of [RLN](/spec/32) to enable an efficient economic spam prevention mechanism that can be run in resource-constrained environments.
We augment the [`11/WAKU2-RELAY`](../11/relay.md) protocol with a novel construct of [RLN](../../../../vac/32/rln-v1.md) to enable an efficient economic spam prevention mechanism that can be run in resource-constrained environments.
# Flow
## Flow
The messaging rate is defined by the `period` which indicates how many messages can be sent in a given period.
@@ -40,12 +42,12 @@ The higher-level layers adopting `17/WAKU2-RLN-RELAY` MAY choose to enforce the
## Setup and Registration
Peers subscribed to a specific `pubsubTopic` form a [RLN group](/spec/32).
### Setup and Registration
Peers subscribed to a specific `pubsubTopic` form a [RLN group](../../../../vac/32/rln-v1.md).
<!-- link to the RLN group definition in the RLN RFC -->
Peers MUST be registered to the RLN group to be able to publish messages.
Registration is moderated through a smart contract deployed on the Ethereum blockchain.
Each peer has an [RLN key pair](/spec/32) denoted by `sk` and `pk`.
Each peer has an [RLN key pair](../../../../vac/32/rln-v1.md) denoted by `sk` and `pk`.
The secret key `sk` is secret data and MUST be persisted securely by the peer.
The state of the membership contract contains the list of registered members' public identity keys i.e., `pk`s.
For the registration, a peer creates a transaction that invokes the registration function of the contract via which registers its `pk` in the group.
@@ -60,9 +62,9 @@ An overview of registration is illustrated in Figure 1.
![Figure 1: Registration.](./images/rln-relay.png)
## Publishing
### Publishing
To publish at a given `epoch`, the publishing peer proceeds based on the regular [`11/WAKU2-RELAY`](/spec/11) protocol.
To publish at a given `epoch`, the publishing peer proceeds based on the regular [`11/WAKU2-RELAY`](../11/relay.md) protocol.
However, to protect against spamming, each `WakuMessage` (which is wrapped inside the `data` field of a PubSub message) MUST carry a [`RateLimitProof`](#ratelimitproof) with the following fields.
Section [Payload](#payloads) covers the details about the type and encoding of these fields.
@@ -74,7 +76,7 @@ The `nullifier` is an internal nullifier acting as a fingerprint that allows spe
The `nullifier` is a deterministic value derived from `sk` and `epoch`; therefore, any two messages issued by the same peer (i.e., using the same `sk`) for the same `epoch` are guaranteed to have identical `nullifier`s.
The `share_x` and `share_y` can be seen as a partial disclosure of the peer's `sk` for the intended `epoch`.
They are derived deterministically from peer's `sk` and current `epoch` using [Shamir secret sharing scheme](/spec/32).
They are derived deterministically from peer's `sk` and current `epoch` using [Shamir secret sharing scheme](../../../../vac/32/rln-v1.md).
If a peer discloses more than one such pair (`share_x`, `share_y`) for the same `epoch`, it allows full disclosure of its `sk` and hence access to its staked funds in the membership contract.
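To make the slashing property concrete, the sketch below (illustrative only, not part of the specification) shows how `sk` could be recovered once two distinct share pairs for the same `epoch` are known, assuming the RLN construct evaluates a degree-one Shamir polynomial `A(x) = sk + a1·x` over a prime field; the toy modulus and helper names are made up for the example.

```go
package main

import (
	"fmt"
	"math/big"
)

// recoverSK interpolates the degree-one polynomial A(x) = sk + a1*x at x = 0,
// given two distinct shares (x1, y1) and (x2, y2) modulo the prime p.
// This mirrors the slashing idea: two shares for the same epoch reveal sk.
func recoverSK(x1, y1, x2, y2, p *big.Int) *big.Int {
	num := new(big.Int).Sub(y2, y1) // y2 - y1
	den := new(big.Int).Sub(x2, x1) // x2 - x1
	den.Mod(den, p)
	den.ModInverse(den, p)           // (x2 - x1)^-1 mod p
	a1 := new(big.Int).Mul(num, den) // slope a1
	a1.Mod(a1, p)
	sk := new(big.Int).Mul(a1, x1) // a1 * x1
	sk.Sub(y1, sk)                 // y1 - a1*x1
	return sk.Mod(sk, p)
}

func main() {
	p := big.NewInt(7919) // toy prime; the real scheme uses the zkSNARK scalar field
	sk, a1 := big.NewInt(1234), big.NewInt(42)
	share := func(x *big.Int) *big.Int { // A(x) = sk + a1*x mod p
		y := new(big.Int).Mul(a1, x)
		y.Add(y, sk)
		return y.Mod(y, p)
	}
	x1, x2 := big.NewInt(3), big.NewInt(11)
	fmt.Println(recoverSK(x1, share(x1), x2, share(x2), p)) // prints 1234
}
```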
@@ -82,13 +84,13 @@ The `proof` field is a zero-knowledge proof signifying that:
1. The message owner is the current member of the group i.e., her/his identity commitment key `pk` is part of the membership group Merkle tree with the root `merkle_root`.
2. `share_x` and `share_y` are correctly computed.
3. The `nullifier` is constructed correctly.
For more details about the proof generation check [RLN](/spec/32)
For more details about the proof generation check [RLN](../../../../vac/32/rln-v1.md)
The proof generation relies on the knowledge of two pieces of private information i.e., `sk` and `authPath`.
The `authPath` is a subset of Merkle tree nodes by which a peer can prove the inclusion of its `pk` in the group. <!-- TODO refer to RLN RFC for authPath def -->
The proof generation also requires a set of public inputs which are: the Merkle tree root `merkle_root`, the current `epoch`, and the message for which the proof is going to be generated.
In `17/WAKU2-RLN-RELAY`, the message is the concatenation of the `WakuMessage`'s `payload` field and its `contentTopic`, i.e., `payload||contentTopic`.
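For illustration, a minimal sketch of assembling that proof input from a message is shown below; the `WakuMessage` type here is a simplified stand-in for the protobuf message, not the actual implementation.

```go
package rlnrelaysketch

// WakuMessage is a simplified stand-in for the protobuf message; only the
// two fields used for proof generation are shown.
type WakuMessage struct {
	Payload      []byte
	ContentTopic string
}

// buildSignal returns the byte string over which the RLN proof is generated,
// i.e. payload||contentTopic, as described above.
func buildSignal(msg WakuMessage) []byte {
	signal := append([]byte{}, msg.Payload...)
	return append(signal, []byte(msg.ContentTopic)...)
}
```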
## Group Synchronization
### Group Synchronization
Proof generation relies on the knowledge of Merkle tree root `merkle_root` and `authPath` which both require access to the membership Merkle tree.
Getting access to the Merkle tree can be done in various ways.
@@ -104,17 +106,17 @@ where the delay is due to mining the slashing transaction.
For the group synchronization, one important security consideration is that peers MUST make sure they always use the most recent Merkle tree root in their proof generation.
The reason is that using an old root can allow inference about the index of the user's `pk` in the membership tree hence compromising user privacy and breaking message unlinkability.
## Routing
### Routing
Upon the receipt of a PubSub message via [`11/WAKU2-RELAY`](/spec/11) protocol, the routing peer parses the `data` field as a `WakuMessage` and gets access to the `RateLimitProof` field.
Upon the receipt of a PubSub message via [`11/WAKU2-RELAY`](../11/relay.md) protocol, the routing peer parses the `data` field as a `WakuMessage` and gets access to the `RateLimitProof` field.
The peer then validates the `RateLimitProof` as explained next.
### Epoch Validation
#### Epoch Validation
If the `epoch` attached to the message is more than `max_epoch_gap` apart from the routing peer's current `epoch` then the message is discarded and considered invalid.
This is to prevent a newly registered peer from spamming the system by messaging for all the past epochs.
`max_epoch_gap` is a system parameter for which we provide some recommendations in section [Recommended System Parameters](#recommended-system-parameters).
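A minimal sketch of this check is shown below; it assumes, for illustration only, that the epoch index is derived as Unix seconds divided by the `period`, and all names are made up for the example.

```go
package main

import (
	"fmt"
	"time"
)

// currentEpoch derives an epoch index from wall-clock time, assuming the
// epoch is simply Unix seconds divided by the period length (an assumption
// for illustration; the exact derivation is defined by the deployment).
func currentEpoch(now time.Time, period time.Duration) uint64 {
	return uint64(now.Unix()) / uint64(period.Seconds())
}

// validEpoch implements the check described above: the message is only
// accepted if its epoch is within maxEpochGap of the local epoch.
func validEpoch(msgEpoch, localEpoch, maxEpochGap uint64) bool {
	var gap uint64
	if msgEpoch > localEpoch {
		gap = msgEpoch - localEpoch
	} else {
		gap = localEpoch - msgEpoch
	}
	return gap <= maxEpochGap
}

func main() {
	local := currentEpoch(time.Now(), time.Second)
	fmt.Println(validEpoch(local+3, local, 20))  // true: within the allowed gap
	fmt.Println(validEpoch(local-50, local, 20)) // false: too far in the past
}
```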
### Merkle Root Validation
#### Merkle Root Validation
The routing peers MUST check whether the provided Merkle root in the `RateLimitProof` is valid.
It can do so by maintaining a local set of valid Merkle roots, which consists of `acceptable_root_window_size` past roots.
These roots refer to the final state of the Merkle tree after a whole block consisting of group changes is processed.
@@ -128,12 +130,12 @@ This also allows peers which are not well connected to the network to be able to
This network delay is related to the nature of asynchronous network conditions, which means that peers see membership changes asynchronously, and therefore may have differing local Merkle trees.
See [Recommended System Parameters](#recommended-system-parameters) on choosing an appropriate `acceptable_root_window_size`.
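A minimal sketch of such a rolling window of acceptable roots is shown below; the structure and names are illustrative only.

```go
package main

import "fmt"

// rootWindow keeps the last `size` Merkle roots, oldest evicted first,
// as a sketch of the acceptable_root_window_size window described above.
type rootWindow struct {
	size  int
	roots [][32]byte
}

// push records a new root produced after processing a block of group changes.
func (w *rootWindow) push(root [32]byte) {
	w.roots = append(w.roots, root)
	if len(w.roots) > w.size {
		w.roots = w.roots[1:]
	}
}

// contains reports whether a root received in a RateLimitProof is acceptable.
func (w *rootWindow) contains(root [32]byte) bool {
	for _, r := range w.roots {
		if r == root {
			return true
		}
	}
	return false
}

func main() {
	w := &rootWindow{size: 3}
	var a, b [32]byte
	a[0], b[0] = 1, 2
	w.push(a)
	fmt.Println(w.contains(a), w.contains(b)) // true false
}
```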
### Proof Verification
#### Proof Verification
The routing peers MUST check whether the zero-knowledge proof `proof` is valid.
It does so by running the zk verification algorithm as explained in [RLN](/spec/32).
It does so by running the zk verification algorithm as explained in [RLN](../../../../vac/32/rln-v1.md).
If `proof` is invalid then the message is discarded.
### Spam detection
#### Spam detection
To enable local spam detection and slashing, routing peers MUST record the `nullifier`, `share_x`, and `share_y` of incoming messages which are not discarded, i.e., messages that are not found to be spam and do not carry an invalid proof or epoch.
To spot spam messages, the peer checks whether a message with an identical `nullifier` has already been relayed.
1. If such a message exists and its `share_x` and `share_y` components are different from the incoming message, then slashing takes place.
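A minimal sketch of this bookkeeping is shown below; it is illustrative only, and it assumes that a repeated message with identical shares is treated as a harmless duplicate rather than spam.

```go
package main

import "fmt"

// shareRecord stores the secret-share components seen for one nullifier.
type shareRecord struct {
	shareX, shareY [32]byte
}

// nullifierLog sketches the record routing peers could keep (typically scoped
// per epoch) in order to detect spam, keyed by the message nullifier.
type nullifierLog map[[32]byte]shareRecord

// check classifies an incoming, otherwise-valid message.
// It returns "new" for a first occurrence, "spam" when the same nullifier
// reappears with different shares (triggering slashing), and "seen" when an
// identical message was already relayed (assumed here to be a duplicate).
func (log nullifierLog) check(nullifier [32]byte, rec shareRecord) string {
	prev, ok := log[nullifier]
	if !ok {
		log[nullifier] = rec
		return "new"
	}
	if prev != rec {
		return "spam" // two distinct shares for one epoch: sk can be recovered
	}
	return "seen"
}

func main() {
	log := nullifierLog{}
	var n [32]byte
	r1 := shareRecord{}
	r2 := shareRecord{}
	r2.shareX[0] = 1
	fmt.Println(log.check(n, r1)) // new
	fmt.Println(log.check(n, r2)) // spam
}
```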
@@ -151,10 +153,10 @@ An overview of the routing procedure and slashing is provided in Figure 2.
-------
# Payloads
## Payloads
Payloads are protobuf messages implemented using [protocol buffers v3](https://developers.google.com/protocol-buffers/).
Nodes MAY extend the [14/WAKU2-MESSAGE](/spec/14) with a `rate_limit_proof` field to indicate that their message is not spam.
Nodes MAY extend the [14/WAKU2-MESSAGE](../14/message.md) with a `rate_limit_proof` field to indicate that their message is not spam.
```diff
@@ -179,22 +181,22 @@ message WakuMessage {
}
```
## WakuMessage
### WakuMessage
`rate_limit_proof` holds the information required to prove that the message owner has not exceeded the message rate limit.
## RateLimitProof
### RateLimitProof
Below is the description of the fields of `RateLimitProof` and their types.
| Parameter | Type | Description |
| ----: | ----------- | ----------- |
| `proof` | array of 256 bytes | the zkSNARK proof as explained in the [Publishing process](#publishing) |
| `merkle_root` | array of 32 bytes in little-endian order | the root of membership group Merkle tree at the time of publishing the message |
| `share_x` and `share_y`| array of 32 bytes each | Shamir secret shares of the user's secret identity key `sk` . `share_x` is the Poseidon hash of the `WakuMessage`'s `payload` concatenated with its `contentTopic` . `share_y` is calculated using [Shamir secret sharing scheme](/spec/32) | <!-- todo specify the poseidon hash setting -->
| `nullifier` | array of 32 bytes | internal nullifier derived from `epoch` and peer's `sk` as explained in [RLN construct](/spec/32)|
| `share_x` and `share_y`| array of 32 bytes each | Shamir secret shares of the user's secret identity key `sk` . `share_x` is the Poseidon hash of the `WakuMessage`'s `payload` concatenated with its `contentTopic` . `share_y` is calculated using [Shamir secret sharing scheme](../../../../vac/32/rln-v1.md) | <!-- todo specify the poseidon hash setting -->
| `nullifier` | array of 32 bytes | internal nullifier derived from `epoch` and peer's `sk` as explained in [RLN construct](../../../../vac/32/rln-v1.md)|
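For orientation, the field sizes in the table map onto a structure like the following sketch; the normative wire format remains the protobuf extension of [14/WAKU2-MESSAGE](../14/message.md) shown above, and fields not listed in the table (such as the `epoch`) are omitted here.

```go
package rlnrelaysketch

// rateLimitProof mirrors the byte-array sizes listed in the table above
// (a sketch only, not the wire format).
type rateLimitProof struct {
	proof      [256]byte // zkSNARK proof
	merkleRoot [32]byte  // little-endian root of the membership Merkle tree
	shareX     [32]byte  // Poseidon hash of payload||contentTopic
	shareY     [32]byte  // Shamir share derived from sk and the epoch
	nullifier  [32]byte  // internal nullifier derived from sk and the epoch
}
```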
# Recommended System Parameters
## Recommended System Parameters
The system parameters are summarized in the following table, and the recommended values for a subset of them are presented next.
| Parameter | Description |
@@ -205,14 +207,14 @@ The system parameters are summarized in the following table, and the recommended
| `max_epoch_gap` | the maximum allowed gap between the `epoch` of a routing peer and the incoming message |
| `acceptable_root_window_size` | The maximum number of past Merkle roots to store |
## Epoch Length
### Epoch Length
A sensible value for the `period` depends on the application for which the spam protection is going to be used.
For example, while a `period` of `1` second, i.e., a messaging rate of `1` message per second, might be acceptable for a chat application, it might be too low for communication among Ethereum network validators.
One should look at the desired throughput of the application to decide on a proper `period` value.
In the proof of concept implementation of `17/WAKU2-RLN-RELAY` protocol which is available in [nim-waku](https://github.com/status-im/nim-waku), the `period` is set to `1` second.
Nevertheless, this value is also subject to change depending on user experience.
## Maximum Epoch Gap
### Maximum Epoch Gap
We discussed in the [Routing](#routing) section that the gap between the epoch observed by the routing peer and the one attached to the incoming message should not exceed a threshold denoted by `max_epoch_gap`.
The value of `max_epoch_gap` can be measured based on the following factors.
- Network transmission delay `Network_Delay`: the maximum time that it takes for a message to be fully disseminated in the GossipSub network.
@@ -235,11 +237,11 @@ The `acceptable_root_window_size` should indicate how many blocks may have been
This formula represents a lower bound of the number of acceptable roots.
# Copyright
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
# References
## References
1. [RLN documentation](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view)
2. [Public inputs to the RLN circuit](https://hackmd.io/tMTLMYmTR5eynw2lwK9n1w?view#Public-Inputs)

View File

@@ -3,9 +3,10 @@ slug: 19
title: 19/WAKU2-LIGHTPUSH
name: Waku v2 Light Push
status: draft
editor: Oskar Thorén <oskarth@titanproxy.com>
editor: Hanno Cornelius <hanno@status.im>
contributors:
- Daniel Kaiser <danielkaiser@status.im>
- Oskar Thorén <oskarth@titanproxy.com>
---
**Protocol identifier**: `/vac/waku/lightpush/2.0.0-beta1`
@@ -45,13 +46,13 @@ message PushRPC {
Nodes that respond to `PushRequests` MUST either
relay the encapsulated message via [11/WAKU2-RELAY](../11/relay.md) protocol on the specified `pubsub_topic`,
or forward the `PushRequest` via 19/LIGHTPUSH on a [44/WAKU2-DANDELION](https://github.com/waku-org/specs/blob/waku-RFC/standards/application/dandelion.md) stem.
or forward the `PushRequest` via 19/LIGHTPUSH on a [WAKU2-DANDELION](https://github.com/waku-org/specs/blob/waku-RFC/standards/application/dandelion.md) stem.
If they are unable to do so for some reason, they SHOULD return an error code in `PushResponse`.
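A minimal sketch of this service-node behaviour is shown below; the types are hypothetical stand-ins for the protobuf messages and the relay interface, and forwarding on a WAKU2-DANDELION stem is omitted for brevity.

```go
package lightpushsketch

// PushRequest and PushResponse are hypothetical stand-ins for the protobuf
// messages of this specification; field names follow the prose.
type PushRequest struct {
	PubsubTopic string
	Message     []byte
}

type PushResponse struct {
	IsSuccess bool
	Info      string
}

// Relayer abstracts publishing via 11/WAKU2-RELAY (an assumed interface).
type Relayer interface {
	Publish(pubsubTopic string, message []byte) error
}

// handlePushRequest sketches the behaviour described above: try to relay the
// encapsulated message on the requested pubsub topic, otherwise report an error.
func handlePushRequest(r Relayer, req PushRequest) PushResponse {
	if err := r.Publish(req.PubsubTopic, req.Message); err != nil {
		return PushResponse{IsSuccess: false, Info: err.Error()}
	}
	return PushResponse{IsSuccess: true}
}
```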
## Security Considerations
Since this can introduce an amplification factor, it is RECOMMENDED for the node relaying to the rest of the network to take extra precautions.
This can be done by rate limiting via [17/WAKU2-RLN-RELAY](https://rfc.vac.dev/spec/17/).
This can be done by rate limiting via [17/WAKU2-RLN-RELAY](../17/rln-relay.md).
Note that the above is currently not fully implemented.
@@ -62,5 +63,5 @@ Copyright and related rights waived via [CC0](https://creativecommons.org/public
## References
* [11/WAKU2-RELAY](../11/relay.md)
* [44/WAKU2-DANDELION](https://github.com/waku-org/specs/blob/waku-RFC/standards/application/dandelion.md)
* [WAKU2-DANDELION](https://github.com/waku-org/specs/blob/waku-RFC/standards/application/dandelion.md)
* [17/WAKU2-RLN-RELAY](../17/rln-relay.md)

View File

@@ -52,7 +52,7 @@ This also increases decentralization.
`33/WAKU2-DISCV5` spans a discovery network isolated from the Ethereum Discovery v5 network.
Another simple solution would be taking part in the Ethereum Discovery network, and filtering Waku nodes based on whether they support [31/WAKU2-ENR](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/enr.md).
Another simple solution would be taking part in the Ethereum Discovery network, and filtering Waku nodes based on whether they support [WAKU2-ENR](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/enr.md).
This solution is more resilient towards eclipse attacks.
However, this discovery method is very inefficient for small percentages of Waku nodes (see [estimation](https://forum.vac.dev/t/waku-v2-discv5-roadmap-discussion/121/8)).
It boils down to random walk discovery and does not offer a O(log(n)) hop bound.
@@ -157,7 +157,7 @@ Properly protecting against eclipse attacks is challenging and raises research q
1. [10/WAKU2](../10/waku2.md)
1. [11/WAKU2-RELAY](../11/relay.md)
1. [`31/WAKU2-ENR`](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/enr.md)
1. [`WAKU2-ENR`](https://github.com/waku-org/specs/blob/waku-RFC/standards/core/enr.md)
1. [Node Discovery Protocol v5 (`discv5`)](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
1. [`discv5` semantics](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-theory.md).
1. [`discv5` wire protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md)

View File

@@ -697,7 +697,7 @@ This list has this format:
### `extern int waku_content_topic(char* applicationName, unsigned int applicationVersion, char* contentTopicName, char* encoding, WakuCallBack onOkCb)`
Create a content topic string according to [RFC 23](https://rfc.vac.dev/spec/23/).
Create a content topic string according to [RFC 23](../../../informational/23/topics.md).
**Parameters**
@@ -710,14 +710,14 @@ Create a content topic string according to [RFC 23](https://rfc.vac.dev/spec/23/
**Returns**
`int` with a status code. Possible values:
- 0 - The operation was completed successfully. `onOkCb` will receive the content topic formatted according to [RFC 23](https://rfc.vac.dev/spec/23/): `/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}`
- 0 - The operation was completed successfully. `onOkCb` will receive the content topic formatted according to [RFC 23](../../../informational/23/topics.md): `/{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}`
- 1 - The operation failed for any reason.
- 2 - The function is missing the `onOkCb` callback
### `extern int waku_pubsub_topic(char* name, char* encoding, WakuCallBack onOkCb)`
Create a pubsub topic string according to [RFC 23](https://rfc.vac.dev/spec/23/).
Create a pubsub topic string according to [RFC 23](../../../informational/23/topics.md).
**Parameters**
@@ -728,13 +728,13 @@ Create a pubsub topic string according to [RFC 23](https://rfc.vac.dev/spec/23/)
**Returns**
`int` with a status code. Possible values:
- 0 - The operation was completed successfully. `onOkCb` will get populated with a pubsub topic formatted according to [RFC 23](https://rfc.vac.dev/spec/23/): `/waku/2/{topic-name}/{encoding}`
- 0 - The operation was completed successfully. `onOkCb` will get populated with a pubsub topic formatted according to [RFC 23](../../../informational/23/topics.md): `/waku/2/{topic-name}/{encoding}`
- 1 - The operation failed for any reason.
- 2 - The function is missing the `onOkCb` callback
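The two formatting rules above can be restated as a short sketch (illustrative only; it does not call the C API, and the example names are made up):

```go
package main

import "fmt"

// contentTopic formats a content topic as described above:
// /{application-name}/{version-of-the-application}/{content-topic-name}/{encoding}
func contentTopic(applicationName string, applicationVersion uint, contentTopicName, encoding string) string {
	return fmt.Sprintf("/%s/%d/%s/%s", applicationName, applicationVersion, contentTopicName, encoding)
}

// pubsubTopic formats a pubsub topic as described above: /waku/2/{topic-name}/{encoding}
func pubsubTopic(name, encoding string) string {
	return fmt.Sprintf("/waku/2/%s/%s", name, encoding)
}

func main() {
	fmt.Println(contentTopic("my-app", 2, "chat", "proto")) // /my-app/2/chat/proto
	fmt.Println(pubsubTopic("my-topic", "proto"))           // /waku/2/my-topic/proto
}
```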
### `extern int waku_default_pubsub_topic(WakuCallBack onOkCb)`
Returns the default pubsub topic used for exchanging waku messages defined in [RFC 10](https://rfc.vac.dev/spec/10/).
Returns the default pubsub topic used for exchanging waku messages defined in [RFC 10](../10/waku2.md).
**Parameters**
1. `WakuCallBack onOkCb`: callback to be executed if the function is successful
@@ -752,7 +752,7 @@ Publish a message using Waku Relay.
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* pubsubTopic`: pubsub topic on which to publish the message.
If `NULL`, it uses the default pubsub topic.
3. `int timeoutMs`: Timeout value in milliseconds to execute the call.
@@ -1089,7 +1089,7 @@ Publish a message using Waku Lightpush.
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* pubsubTopic`: pubsub topic on which to publish the message.
If `NULL`, it uses the default pubsub topic.
3. `char* peerID`: Peer ID supporting the lightpush protocol.
@@ -1177,7 +1177,7 @@ Encrypt a message using symmetric encryption and optionally sign the message
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* symmetricKey`: hex encoded secret key to be used for encryption.
3. `char* optionalSigningKey`: hex encoded private key to be used to sign the message.
4. `WakuCallBack onOkCb`: callback to be executed if the function is successful
@@ -1198,7 +1198,7 @@ Encrypt a message using asymmetric encryption and optionally sign the message
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* publicKey`: hex encoded public key to be used for encryption.
3. `char* optionalSigningKey`: hex encoded private key to be used to sign the message.
4. `WakuCallBack onOkCb`: callback to be executed if the function is successful
@@ -1221,7 +1221,7 @@ Decrypt a message using a symmetric key
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* symmetricKey`: 32 byte symmetric key hex encoded.
3. `WakuCallBack onOkCb`: callback to be executed if the function is successful
4. `WakuCallBack onErrCb`: callback to be executed if the function fails
@@ -1249,7 +1249,7 @@ Decrypt a message using a secp256k1 private key
**Parameters**
1. `char* messageJson`: JSON string containing the [Waku Message](https://rfc.vac.dev/spec/14/) as [`JsonMessage`](#jsonmessage-type).
1. `char* messageJson`: JSON string containing the [Waku Message](../14/message.md) as [`JsonMessage`](#jsonmessage-type).
2. `char* privateKey`: secp256k1 private key hex encoded.
3. `WakuCallBack onOkCb`: callback to be executed if the function is successful
4. `WakuCallBack onErrCb`: callback to be executed if the function fails

View File

@@ -0,0 +1,51 @@
---
slug: 66
title: 66/WAKU2-METADATA
name: Waku Metadata Protocol
status: draft
editor: Alvaro Revuelta <alrevuelta@status.im>
contributors:
---
## Abstract
This specification describes the metadata that can be associated with a [10/WAKU2](../10/waku2.md) node.
## Metadata Protocol
Waku specifies a req/resp protocol that provides information about the node's metadata.
Such metadata is meant to be used by a node to decide whether a peer is worth connecting to or not.
The node that makes the request includes its own metadata so that the receiver is aware of it,
without requiring an extra interaction.
The parameters are the following:
* `clusterId`: Unique identifier of the cluster that the node is running in.
* `shards`: Shard indexes that the node is subscribed to.
***Protocol Identifier***
/vac/waku/metadata/1.0.0
### Request
```proto
message WakuMetadataRequest {
optional uint32 cluster_id = 1;
repeated uint32 shards = 2;
}
```
### Response
```proto
message WakuMetadataResponse {
optional uint32 cluster_id = 1;
repeated uint32 shards = 2;
}
```
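As an illustration of how a node might act on this metadata, the sketch below shows one plausible peer-selection policy; it is not mandated by this specification, and the names are made up for the example.

```go
package main

import "fmt"

// metadata mirrors the fields exchanged by the protocol above.
type metadata struct {
	clusterID uint32
	shards    []uint32
}

// worthConnecting sketches one plausible policy (not mandated by this
// specification): only keep peers in the same cluster that share at least
// one shard with the local node.
func worthConnecting(local, remote metadata) bool {
	if local.clusterID != remote.clusterID {
		return false
	}
	shared := map[uint32]bool{}
	for _, s := range local.shards {
		shared[s] = true
	}
	for _, s := range remote.shards {
		if shared[s] {
			return true
		}
	}
	return false
}

func main() {
	local := metadata{clusterID: 1, shards: []uint32{0, 2}}
	fmt.Println(worthConnecting(local, metadata{clusterID: 1, shards: []uint32{2}})) // true
	fmt.Println(worthConnecting(local, metadata{clusterID: 3, shards: []uint32{2}})) // false
}
```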
## Copyright
Copyright and related rights waived via [CC0](https://creativecommons.org/publicdomain/zero/1.0/).
## References
- [10/WAKU2](../10/waku2.md)

View File

@@ -36,7 +36,7 @@ This protocol needs to advertise the `waku/1` [capability](https://ethereum.gitb
### Gossip based routing
In Whisper, envelopes are gossiped between peers. Whisper is a form of rumor-mongering protocol that works by flooding to its connected peers based on some factors. Envelopes are eligible for retransmission until their TTL expires. A node SHOULD relay envelopes to all connected nodes if an envelope matches their PoW and bloom filter settings. If a node works in light mode, it MAY choose not to forward envelopes. A node MUST NOT send expired envelopes, unless the envelopes are sent as a [8/WAKU-MAIL](../../application/8/mail.md) response. A node SHOULD NOT send an envelope to a peer that it has already sent before.
In Whisper, envelopes are gossiped between peers. Whisper is a form of rumor-mongering protocol that works by flooding to its connected peers based on some factors. Envelopes are eligible for retransmission until their TTL expires. A node SHOULD relay envelopes to all connected nodes if an envelope matches their PoW and bloom filter settings. If a node works in light mode, it MAY choose not to forward envelopes. A node MUST NOT send expired envelopes, unless the envelopes are sent as a [8/WAKU-MAIL](../8/mail.md) response. A node SHOULD NOT send an envelope to a peer that it has already sent before.
### Maximum Packet Size
@@ -343,7 +343,7 @@ The drawback of sending message confirmations is that it increases the noise in
#### P2P Request
This packet is used for sending Dapp-level peer-to-peer requests, e.g. Waku Mail Client requesting historic (expired) envelopes from the [Waku Mail Server](../../application/8/mail.md).
This packet is used for sending Dapp-level peer-to-peer requests, e.g. Waku Mail Client requesting historic (expired) envelopes from the [Waku Mail Server](../8/mail.md).
#### P2P Message
@@ -353,7 +353,7 @@ This packet is used for sending the peer-to-peer envelopes, which are not suppos
This packet is used to indicate that all envelopes, requested earlier with a P2P Request packet (`0x7E`), have been sent via one or more P2P Message packets (`0x7F`).
The content of the packet is explained in the [Waku Mail Server](../../application/8/mail.md) specification.
The content of the packet is explained in the [Waku Mail Server](../8/mail.md) specification.
### Payload Encryption
@@ -373,7 +373,7 @@ Packet codes `0x7E` and `0x7F` may be used to implement Waku Mail Server and Cli
Waku supports multiple capabilities. These include light node, rate limiting and bridging of traffic. Here we list these capabilities, how they are identified, what properties they have and what invariants they must maintain.
Additionally, there is the capability of a mailserver, which is documented in its own [specification](../../application/8/mail.md).
Additionally, there is the capability of a mailserver, which is documented in its own [specification](../8/mail.md).
### Light node
@@ -452,7 +452,7 @@ It is desirable to have a strategy for maintaining forward compatibility between
## Appendix A: Security considerations
There are several security considerations to take into account when running Waku. Chief among them are: scalability, DDoS-resistance and privacy. These also vary depending on what capabilities are used. The security considerations for extra capabilities such as [mailservers](../../application/8/mail.md#security-considerations) can be found in their respective specifications.
There are several security considerations to take into account when running Waku. Chief among them are: scalability, DDoS-resistance and privacy. These also vary depending on what capabilities are used. The security considerations for extra capabilities such as [mailservers](../8/mail.md#security-considerations) can be found in their respective specifications.
### Scalability and UX

View File

@@ -9,7 +9,7 @@ contributors:
- Kim De Mey <kimdemey@status.im>
---
This specification describes the encryption, decryption and signing of the content in the [data field used in Waku](../../standards/core/6/waku1.md/#abnf-specification).
This specification describes the encryption, decryption and signing of the content in the [data field used in Waku](../6/waku1.md/#abnf-specification).
## Specification

View File

@@ -100,7 +100,7 @@ A mailserver client fetches archival envelopes from a mailserver through a direc
In this direct connection, the client discloses its IP/ID as well as the topics/ bloom filter it is interested in to the mailserver.
The collection of such information allows the mailserver to link clients' IP/IDs to their topic interests and build a profile for each client over time.
As such, the mailserver client has to trust the mailserver with this level of information.
A similar concern exists for the light nodes and their direct peers which is discussed in the security considerations of [6/WAKU1](/spec/7).
A similar concern exists for the light nodes and their direct peers which is discussed in the security considerations of [6/WAKU1](../6/waku1.md).
**Mailserver trusted connection:**

View File

@@ -46,7 +46,7 @@ In this section you will find objects used throughout the JSON RPC API.
#### Message
The message object represents a Waku message. Below you will find the description of the attributes contained in the message object. A message is the decrypted payload and padding of an [envelope](/spec/7) along with all of its metadata and other extra information such as the hash.
The message object represents a Waku message. Below you will find the description of the attributes contained in the message object. A message is the decrypted payload and padding of an [envelope](../7/data.md) along with all of its metadata and other extra information such as the hash.
| Field | Type | Description |
| ----: | :--: | ----------- |